Algorithms reveal changes in stereotypes | Stanford News
news.stanford.edu/...hms-reveal-changes-stereotypes
psychology stereotypes society bias social media knowledge history

- The researchers used word embeddings – an algorithmic technique that can map relationships and associations between words – to measure changes in gender and ethnic stereotypes over the past century in the United States.
- “Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic and sociological analyses of biases.”
- Take the word “honorable.” Using the embedding tool, previous research found that the adjective has a closer relationship to the word “man” than the word “woman.”
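The kind of comparison described above can be sketched with cosine similarity between word vectors. The toy four-dimensional embeddings below are hypothetical values for illustration only; the actual research trains embeddings on decades of historical text, and the `association_gap` helper is an assumed name, not part of any published toolkit.

```python
import numpy as np

# Toy embeddings (hypothetical values chosen for illustration;
# real studies derive these vectors from large historical corpora).
embeddings = {
    "honorable": np.array([0.9, 0.1, 0.3, 0.2]),
    "man":       np.array([0.8, 0.2, 0.4, 0.1]),
    "woman":     np.array([0.2, 0.9, 0.1, 0.4]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; closer to 1.0 = more associated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_gap(adjective, group_a="man", group_b="woman"):
    """Difference in an adjective's similarity to two group words.

    A positive value means the adjective sits closer to group_a
    in the embedding space (here, "man").
    """
    sim_a = cosine_similarity(embeddings[adjective], embeddings[group_a])
    sim_b = cosine_similarity(embeddings[adjective], embeddings[group_b])
    return sim_a - sim_b

gap = association_gap("honorable")
print(f"association gap for 'honorable': {gap:.3f}")
```

With these toy vectors the gap is positive, mirroring the finding that “honorable” lies closer to “man” than to “woman”; tracking such gaps across embeddings trained on different decades is what lets the researchers measure stereotype change over time.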
- One of the key findings to emerge was how biases toward women changed for the better – in some ways – over time.
- For example, adjectives such as “intelligent,” “logical” and “thoughtful” were associated more with men in the first half of the 20th century. Since the 1960s, those same words have been increasingly associated with women in each subsequent decade, a shift that correlates with the women’s movement, although a gap still remains.
- For example, in the 1910s, words like “barbaric,” “monstrous” and “cruel” were the adjectives most associated with Asian last names. By the 1990s, those adjectives were replaced by words like “inhibited,” “passive” and “sensitive.” This linguistic change correlates with a sharp increase in Asian immigration to the United States in the 1960s and 1980s and a change in cultural stereotypes, the researchers said.
- “It underscores the importance of humanists and computer scientists working together. There is a power to these new machine-learning methods in humanities research that is just being understood.”