
Time to regulate AI that interprets human emotions



During the pandemic, technology companies have been pitching their emotion-recognition software for monitoring workers and even children remotely. Take, for example, a system named 4 Little Trees. Developed in Hong Kong, the program claims to assess children's emotions while they do classwork. It maps facial features to assign each pupil's emotional state to a category such as happiness, sadness, anger, disgust, surprise or fear. It also gauges 'motivation' and forecasts grades. Similar tools have been marketed to monitor remote workers. By one estimate, the emotion-recognition industry will grow to US$37 billion by 2026.

There is deep scientific disagreement about whether AI can detect emotions. A 2019 review found no reliable evidence for it. "Tech companies may well be asking a question that is fundamentally wrong," the study concluded (L. F. Barrett et al. Psychol. Sci. Public Interest 20, 1–68; 2019).

And there is growing scientific concern about the use and misuse of these technologies. Last year, Rosalind Picard, co-founder of Affectiva, an artificial-intelligence (AI) start-up in Boston, and head of the Affective Computing Research Group at the Massachusetts Institute of Technology in Cambridge, said she supports regulation. Researchers have called for mandatory, rigorous auditing of all AI technologies used in hiring, along with public disclosure of the findings. In March, a citizens' panel convened by the Ada Lovelace Institute in London said that an independent legal body should oversee the development and implementation of biometric technologies (see go.nature.com/3cejmtk). Such oversight is essential to defend against systems driven by what I call the phrenological impulse: drawing faulty assumptions about internal states and capabilities from external appearances, with the aim of extracting more about a person than they choose to reveal.

Countries around the world have rules to enforce scientific rigour in developing drugs that treat the body. Tools that make claims about our minds should have at least the same protection. For years, researchers have called on federal entities to regulate robotics and facial recognition; that should extend to emotion recognition as well. It is time for national regulators to guard against untested applications, especially those targeting children and other vulnerable populations.

Lessons from clinical trials show why regulation matters. Federal requirements and subsequent advocacy have made many more clinical-trial data available to the public and subject to rigorous scrutiny. This has become a basis for better policy-making and public trust. Regulatory oversight of affective technologies would bring similar benefits and accountability. It could also help to establish norms to counter overreach by companies and governments.

The polygraph is a useful parallel. This 'lie detector' test was invented in the 1920s and used by the FBI and the US military for decades, with inconsistent results that harmed thousands of people, until its use was largely prohibited by federal law. It was not until 1998 that the US Supreme Court concluded that "there is simply no consensus that polygraph evidence is reliable".

A formative figure behind the claim that there are universal facial expressions of emotion is psychologist Paul Ekman. In the 1960s, he traveled the highlands of Papua New Guinea to test his controversial hypothesis that all human beings exhibit a small number of ‘universal’ emotions that are innate, cross-cultural, and consistent. Early on, anthropologist Margaret Mead disputed this idea, saying it discounted context, culture, and social factors.

But the six emotions that Ekman described fitted perfectly into the model of the emerging field of computer vision. As I write in my 2021 book Atlas of AI, his theory was adopted because it suited what the tools could do. Six consistent emotions could be standardized and automated at scale, as long as the more complex issues were ignored. Ekman sold his system to the US Transportation Security Administration after the terrorist attacks of September 11, 2001, to assess which air passengers showed fear or stress and might therefore be terrorists. It was heavily criticized for lacking credibility and for being racially biased. Yet many of today's tools, such as 4 Little Trees, are based on Ekman's classification of six emotions. (Ekman maintains that faces do convey universal emotions, but says he has seen no evidence that automated technologies work.)

Yet companies continue to sell software that shapes people's opportunities without clearly documented, independently audited evidence that it works. Job applicants are being judged unfairly because their facial expressions or vocal tones do not match those of existing employees; students are being flagged at school because their faces seem angry. Researchers have also shown that facial-recognition software interprets Black faces as having more negative emotions than white faces.

We can no longer allow emotion-recognition technologies to go unregulated. It is time for legislative protection against the unproven use of these tools in all domains, including education, health care, employment and criminal justice. These safeguards will renew rigorous science and dispel the mythology that internal states are just another data set that can be scraped from our faces.
