SciTech

AI displays gender, racial bias because of human writing

Credit: Lisa Qian

Artificial intelligence (AI) can be just as biased as humans, according to a new study by researchers from Princeton University’s Center for Information Technology Policy. The study, published April 13 in Science, uncovered racial and gender bias in a prominent machine learning algorithm — Stanford University’s Global Vectors for Word Representation, or GloVe, which learns to associate related words and concepts by analyzing text from across the internet.
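As a rough illustration of how such embeddings are queried in practice (this is not code from the study), the sketch below loads a publicly available pretrained GloVe model through the gensim library and asks which words sit closest to a given word; the model name and word choices here are assumptions for illustration only.

    # A minimal sketch, not from the study: querying pretrained GloVe
    # vectors via the gensim library (requires gensim and a one-time
    # download of the vectors).
    import gensim.downloader as api

    # 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
    vectors = api.load("glove-wiki-gigaword-50")

    # Words that appear in similar contexts end up with nearby vectors,
    # so a word's nearest neighbors reveal what the model associates
    # it with.
    print(vectors.most_similar("professor", topn=5))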

AI finds patterns in real-world texts and develops the biases present in those materials. Bias in machine learning algorithms often takes the form of language patterns: the program associated words and concepts in ways that turn out to be sexist or racist. For example, the program associated women with household and family words, but not with professional or career-related words. The bias extended to finer distinctions: “man” was linked with “professor,” while “woman” was connected with “assistant professor.”

While previous research on AI bias has shown similar results, this study is the first to apply methods from psychological research on human bias. The researchers tested GloVe with a method traditionally used to detect human bias, called the Implicit Association Test (IAT). “In the IAT, subjects are presented with two images — say, a white man and a black man — and words like ‘pleasant’ or ‘unpleasant,’” explains science writer Angela Chen. “The IAT calculates how quickly you match up ‘white man’ and ‘pleasant’ versus ‘black man’ and ‘pleasant.’”

The idea is that the longer it takes to match up concepts, the more trouble the test-taker has associating them. However, instead of measuring response time in humans, the researchers measured the mathematical distance between concepts in the algorithm, with a smaller distance corresponding to a stronger association. They found, as expected, several associations reflecting human bias: GloVe learned to consider black names less pleasant than white names, and to associate women with the arts but not with the sciences.
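In code, that mathematical distance is typically the cosine similarity between word vectors. The sketch below is a simplified version of the idea (the paper’s actual test aggregates scores over complete word lists and computes an effect size; the attribute lists here are made up for illustration) and reuses the gensim-loaded GloVe vectors from the earlier sketch.

    # A simplified sketch of the distance-based test (the study's full
    # method works over complete word lists; this shows the core idea
    # only). Reuses the gensim GloVe vectors loaded earlier.
    import numpy as np
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")

    def cosine(a, b):
        # Cosine similarity: higher means the vectors point the same
        # way, i.e., a stronger association between the two words.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def association(word, attrs_a, attrs_b):
        # Mean similarity to attribute set A minus mean similarity to
        # set B; a positive score means the word sits closer to set A.
        v = vectors[word]
        sim_a = np.mean([cosine(v, vectors[w]) for w in attrs_a])
        sim_b = np.mean([cosine(v, vectors[w]) for w in attrs_b])
        return sim_a - sim_b

    # Hypothetical attribute lists, for illustration only.
    arts = ["poetry", "art", "dance", "literature"]
    science = ["science", "physics", "chemistry", "engineering"]

    print(association("woman", arts, science))
    print(association("man", arts, science))

A score like this plays the role of the IAT’s response time: the closer a word’s vector sits to one attribute set, the stronger the learned association.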

AI will soon have a huge impact on everyday life. Machine learning algorithms already control a wide range of processes, from making Google Translate more accurate to deciding whose résumés get passed on to hiring departments. Bias in algorithms like these could have dire consequences for a large number of people.

“Language is a bridge to ideas and a lot of algorithms are built on language in the real world,” says Megan Garcia, an expert on algorithmic bias. “So unless an algorithm is making a decision based only on numbers, this finding is going to be important. [Computer] bias is everywhere we look.”

Machine learning is a type of AI that enables computers to learn and adapt without being explicitly programmed. It relies on ingesting large amounts of data, which serve as the computer’s experience. It’s understandable that a computer would mimic gender and racial bias, because humans are known to display these tendencies both consciously and subconsciously. This does not mean that the AI is at fault — it actually shows that the program is working just as it should. What this does suggest is that we, as humans, have a long way to go to rid ourselves of unjust biases in thinking and writing.