AI Hiring Tech Being Used Despite Lack of Substantive Support

The algorithm examines facial expressions, word choice and tone to score traits such as “willingness to learn” and “personal stability.” But AI researchers say “regulators should ban the use of affect recognition” due to the potential for discrimination.

(TNS) — Companies are leaping into emotion-recognition technology, also called “affect recognition,” despite a lack of substantive evidence that it actually works.

More than 100 employers already have used it to evaluate more than a million video job interviews, the Washington Post reported in November. The hiring algorithm evaluates an applicant’s facial expressions, voice and word selection to come up with scores in categories such as “willingness to learn” and even “personal stability.”

Affect-recognition technology is also reportedly being used by law enforcement to analyze suspects’ stress and by casinos to identify employees and customers feeling anger, fear or sadness. Prisons are starting to use it to pick up early signs of aggression.

The AI Now Institute at New York University notes that the technology is being touted for uses such as determining “the price of insurance, patient pain assessment [and] student performance in school.”

The problem: the technology at this point appears to be hogwash.

NYU’s AI institute, which researches the social and economic impact of artificial intelligence, insists there’s growing evidence that affect-recognition technology is often wrong and harmful, especially to the poor and people of color. It notes that one recent study, using photos of NBA players, determined that emotion-recognition programs consistently gave negative emotional scores to black players, “no matter how much they smiled.”

The institute says in its December 2019 report that “regulators should ban the use of affect recognition in important decisions that impact people’s lives and access to opportunities.”

The AI Now Institute was founded in 2017 by Kate Crawford, an NYU professor and principal researcher at Microsoft Research, and Meredith Whittaker, who serves as a lead AI researcher for Google.

The institute’s report pulls no punches about this emerging technology that is already a multibillion-dollar industry. It concludes:

“There remains little to no evidence that these new affect-recognition products have any scientific validity.”

©2019 The Oregonian (Portland, Ore.). Distributed by Tribune Content Agency, LLC.
