How machine-learning code turns a mirror on its sexist, racist masters

Word-analyzing AI study reveals 'historical social changes'

Be careful which words you feed into that machine-learning software you're building, and how.

A study of news articles and books written during the 20th and 21st centuries has shown that not only are gender and ethnic stereotypes woven into our language, but that algorithms commonly used to train software can end up unexpectedly baking these biases into AI models.

Basically, no one wants to see tomorrow's software picking up yesterday's racism and sexism.

A paper published in the Proceedings of the US National Academy of Sciences on Tuesday describes how word embeddings, a common set of techniques used by machine-learning applications to develop associations between words, can pick up social attitudes towards men and women, and people of different ethnicities, from old articles and novels.

In word-embedding models, an algorithm converts each word into a mathematical vector and maps it to a latent space. Vectors that sit closer together represent words that are more closely associated with one another. For example, the vectors for France and Paris would be nearer to each other than the vectors for France and England are.
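To make that concrete, here is a minimal sketch, in Python, of how that closeness is usually measured: cosine similarity between vectors. The three-dimensional vectors below are made-up toy values purely for illustration; real embeddings typically run to hundreds of dimensions.

```python
# Toy illustration of "closer vectors = more closely associated words".
# These three-dimensional vectors are invented for the example; real
# word2vec or GloVe embeddings usually have 100-300 dimensions.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

france  = np.array([0.9, 0.1, 0.3])   # hypothetical vectors
paris   = np.array([0.8, 0.2, 0.4])
england = np.array([0.2, 0.9, 0.3])

print(cosine_similarity(france, paris))    # high: strongly associated
print(cosine_similarity(france, england))  # lower: more weakly associated
```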

By inspecting these relationships, the paper's authors, based at Stanford University in the US, saw how stereotypes, buried in our use of language, can end up permeating data structures used by AI software.

The fear is that programmers and data scientists could use volumes of biased text to train software that intrinsically and unfairly treats people differently depending on their gender and race. For instance, a product search engine could be taught that all men like watching football, and every woman is into sewing.

“The vector for the adjective 'honorable' would be closer to the vector for 'man', whereas the vector for submissive would be closer to 'woman'," the paper stated. "These stereotypes are automatically learned by the embedding algorithm, and could be problematic if the embedding is then used for sensitive applications such as search rankings, product recommendations, or translations."

Changing stereotypes

The researchers assembled a large dataset from several sources of text and trained embeddings with a variety of algorithms. The word2vec model was used to train vectors on about 100 billion words from Google News. Word embeddings were also built from Google Books and the Corpus of Historical American English for comparisons across history. The GloVe algorithm was run over 1.8 million New York Times articles from 1988 to 2005.
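Readers who want to poke at the same kind of model can try something along these lines: a rough sketch assuming you have the gensim library installed and have downloaded the publicly released Google News word2vec vectors (the file name below is the conventional one; point the path at your own copy).

```python
# Sketch: load pretrained word2vec vectors and inspect the associations
# they encode. Assumes the publicly released Google News vectors are on disk.
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Words appearing in similar contexts end up with similar vectors
print(kv.similarity("France", "Paris"))

# Nearest neighbours reveal what the corpus has baked into the embedding
print(kv.most_similar("professor", topn=5))
```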

They found that word embeddings perpetuated various stereotypes, such as the model-minority myth that Asian Americans are brainier than their non-Asian fellow citizens. The top occupations associated with Asian Americans were often academic: professor, scientist, or accountant.

Meanwhile, Hispanic people were associated with more menial occupations such as housekeeper, janitor, or cashier. And for White people, it was sheriff, surveyor, or statistician.
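One simple way to surface these associations yourself, in the spirit of the paper's approach though not its exact metric, is to compare how similar an occupation word is, on average, to two lists of group words. The word lists below are illustrative (gender terms are shown here; the study also used ethnic group terms), and the score is a deliberate simplification.

```python
# Simplified association score: positive means the occupation sits closer
# to the female word list in the embedding, negative means closer to the
# male list. Illustrative measure only, not the paper's exact metric.
import numpy as np
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

female_terms = ["she", "her", "woman", "female"]
male_terms   = ["he", "his", "man", "male"]
occupations  = ["nurse", "engineer", "librarian", "carpenter"]

def mean_similarity(word, group):
    return float(np.mean([kv.similarity(word, g) for g in group]))

for occ in occupations:
    score = mean_similarity(occ, female_terms) - mean_similarity(occ, male_terms)
    print(f"{occ:10s} {score:+.3f}")
```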

Common gender stereotypes were also found. Text data from 1910 showed that the adjectives commonly applied to women were words like delicate, charming, and dreamy. By 1950, the list had shifted to include new adjectives such as transparent, placid, and colorless. Fast forward 40 years to 1990, and the words had moved away from historic ideas of femininity, replaced by terms like physical, artificial, and, somewhat bizarrely, morbid.

What’s interesting is that the biggest change came during the 1960s and 1970s, a critical period for the women’s rights movement in America.

Adjectives used to describe Asian Americans also changed over the years. In the 1910s, it was words like barbaric, monstrous, or cruel. By the 1990s, it was words more commonly associated with today's stereotypes of Asian Americans, such as passive, sensitive, or inhibited. The change in description correlates with the large influxes of Asian immigrants to the US in the 1960s and 1980s, the team observed.

Using machines to understand humans

“The starkness of the change in stereotypes stood out to me. When you study history, you learn about propaganda campaigns and these outdated views of foreign groups. But how much the literature produced at the time reflected those stereotypes was hard to appreciate,” Nikhil Garg, coauthor of the paper and a PhD student at Stanford University, said.

He also told The Register that “modern learning algorithms perform so well because they automatically structure large amounts of data. Our main insight was that the resulting internal representations – word embeddings in this case – are not just useful for classification or prediction tasks, but also as a historical lens into the data-producing society.”

Fairness is an important area of research in AI and machine learning. Previous research has shown that machines can be infected with the same prejudices as humans if trained on biased data. This paper focused less on those issues, and more on how machine learning can aid the social sciences.

James Zou, coauthor of the paper and an assistant professor of biomedical data science, told us he viewed this project as “a new powerful microscope to study historical social changes, which has been challenging to quantify using standard approaches.”

Londa Schiebinger, coauthor of the paper and a professor of history of science, added: “It underscores the importance of humanists and computer scientists working together. There is a power to these new machine-learning methods in humanities research that is just being understood.” ®
