“Artificial intelligence-based facial recognition and emotion recognition are 21st century phrenology.”
Kate Crawford, a research professor at the University of Southern California (USC) and co-founder of AI Now, a research institute established in 2017 to explore the social impact of artificial intelligence, made this claim in a book published in the United States last month. She criticized AI-based facial recognition and emotion recognition as akin to the phrenology of the 19th century.
The unlock functions of the latest smartphones and financial-transaction apps automatically verify identity through facial recognition. In May 2018, The New York Times identified the wedding guests of Britain’s Prince Harry and Meghan Markle using Amazon’s facial recognition tool ‘Rekognition’. Chinese public security uses facial recognition to arrest wanted persons at concert venues holding tens of thousands of people and to catch pedestrians who ignore traffic lights. The ‘4 Little Trees’ program developed in Hong Kong uses AI facial recognition to read students’ emotional states in class, and is used as a tool to measure learning motivation and predict grades. American AI recruitment programs such as Pymetrics and HireVue are billed as ‘high-efficiency interviewers without prejudice’ and are used by multinational companies such as McDonald’s and Kraft Heinz. Emotion recognition technology is also spreading into remote monitoring, detecting fatigue in tasks such as driving, and shopping and marketing.
As services built on facial recognition and emotion recognition, powered by AI’s highly accurate image recognition, multiply, criticism from industry and academia is growing as well.
In ‘The Age of Technology’, the Korean edition of his book published in March, Microsoft President and Chief Legal Officer (CLO) Brad Smith wrote, “If facial recognition technology is used, collective surveillance on an unprecedented scale can occur,” and insisted that not only the technology itself but also the companies that develop and use it should be controlled by law. In July 2018, Smith had already publicly called for regulation, saying, “Facial recognition technology is too dangerous to use without government supervision.” In April 2019, a researcher at Microsoft Research argued that “facial recognition is a technology with insurmountable flaws in that it schematizes and classifies people by race and gender,” calling it “the plutonium of artificial intelligence.” The argument was that because facial recognition is inherently harmful, it is a ‘dangerous technology’ whose safety the government must study and manage, just like automobiles and pharmaceuticals.
Phrenology here refers to the theory of the 19th-century American physician Samuel Morton, who claimed that humans can be divided into five types based on skull and facial shape, including Africans, Native Americans, Caucasians, Malays, and Mongols, and that personality and character can be identified from those shapes. Professor Crawford argued that AI facial recognition and emotion recognition, like phrenology, are being used to classify humans in unscientific and unfair ways.
AI emotion recognition technology has its roots in the basic emotion theory of psychologist Paul Ekman. In the 1960s, through extensive experimental research sponsored by the U.S. Department of Defense, Ekman argued for a theory of ‘universal basic emotions’ holding that everyone’s facial expressions can be classified into six categories: joy, fear, disgust, anger, surprise, and sadness. Although anthropologist Margaret Mead criticized Ekman’s six-emotion classification as an approach that ignores context, culture, and social factors, the model was easy to apply to computers and machine learning, and its applications spread to many fields. After the 9/11 attacks, the U.S. Transportation Security Administration used the technology to measure fear and stress on passengers’ faces in screening for potential terrorists, sparking controversy over its reliability and racism. Meanwhile, according to the April 26 issue of the American technology magazine Wired, a review of more than 1,000 research papers found no evidence that human emotions can be reliably inferred from faces.
Crawford pointed out that the current approach to AI problems, not only facial recognition and emotion recognition, is too narrow and too focused on technical improvement. In an interview with MIT Technology Review, she said, “The real trap of the tech sector over the past decade is that every problem has been framed as having a technical solution,” adding that the discussion has only recently expanded to include the role of regulators and policymakers. She emphasized that AI should be approached from a ‘perspective of power’: the question of who actually builds and operates artificial intelligence should be treated as an issue of power, rather than through ethical approaches such as ‘AI ethics principles’ or ‘good AI’.
In an interview with Wired, she said that experts deepen the misunderstanding of artificial intelligence by presenting it as a mysterious, objective tool. In reality, AI is built from vast inputs of natural resources, energy, and human labor; without human training it cannot distinguish objects, and the way it creates meaning is entirely different from the way humans do.
Professor Crawford’s argument points out that attempts to solve the problems caused by artificial intelligence through ethical guidelines are, at bottom, just another ‘technology-centered approach’. What distinguishes her view is that it treats AI as a problem of power that shapes society. By demanding that artificial intelligence be subject to social regulation rather than merely technical discussion, it expands the horizon of public discourse on AI.