A team of researchers is working to fight a threat posed by artificial intelligence: not that robots will somehow take over the world anytime soon, but that AI has blind spots affecting women and minorities. Those blind spots, the scientists say, could prove harmful in the long run.
Scientists have known about technology's blind spots for some time. After all, machine-learning systems learn from the data and examples people give them, and can therefore inherit the biases of the humans who built and trained them.
Jeannette Wing, the director of Columbia University's Data Science Institute, tells Bloomberg Technology that the inherent biases found in technology could be harmful to society, considering how many large companies are using AI to make important decisions: "The worry is if we don't get this right, we could be making wrong decisions that have critical consequences to someone's life, health, or financial stability."
Scientists at Boston University and Microsoft Research New England took a closer look at how gender bias, specifically, could impact algorithms. What they found is that "word embeddings" (numerical representations of words that let machine-learning systems process text) contain biases "that reflect gender stereotypes present in broader society." So, for instance, "doctor" is seen by some machines as a masculine word, while "nurse" is deemed feminine.
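The classic demonstration of this effect is a word-analogy query, where subtracting and adding word vectors answers questions like "man is to doctor as woman is to ___?" The sketch below uses tiny made-up 2-D vectors, not the researchers' actual embeddings, purely to illustrate how biased vectors produce a biased analogy:

```python
import numpy as np

# Hypothetical toy embeddings for illustration only:
# dimension 0 loosely encodes "gender", dimension 1 encodes "medicine".
embeddings = {
    "man":      np.array([ 1.0,  0.0]),
    "woman":    np.array([-1.0,  0.0]),
    "doctor":   np.array([ 0.9,  1.0]),
    "nurse":    np.array([-0.9,  1.0]),
    "engineer": np.array([ 0.8, -0.5]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Analogy query: doctor - man + woman ~= ?
query = embeddings["doctor"] - embeddings["man"] + embeddings["woman"]

# Pick the nearest word that isn't part of the query itself.
best = max(
    (w for w in embeddings if w not in ("doctor", "man", "woman")),
    key=lambda w: cosine(query, embeddings[w]),
)
print(best)  # prints "nurse" -- the stereotype baked into the vectors
```

Real embeddings trained on large text corpora (such as word2vec vectors trained on news articles) have been shown to answer this kind of query the same way, which is what the researchers flagged.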
Companies use AI to predict everything from creditworthiness to preferred cancer treatments, and the technology's blind spots particularly affect women and minorities.
The problem, the scientists say, is that “word embeddings not only reflect such stereotypes but can also amplify them. This poses a significant risk and challenge for machine learning and its applications.”
The result of the researchers' work was the creation of a public dataset free of gender bias, in which words like "doctor" and "nurse" are designated as equally male or female. They are also conducting research on a dataset without racial biases, though it will still take several years before bias can be entirely removed from machine learning.
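One way to make "doctor" and "nurse" equally male or female is to project the gender component out of each occupation vector, leaving the rest of its meaning intact. The sketch below applies this idea to the same hypothetical toy vectors as above; it is a simplified illustration of the general technique, not the researchers' exact procedure:

```python
import numpy as np

# Hypothetical 2-D toy embeddings (dimension 0 ~ "gender", dimension 1 ~ "medicine").
embeddings = {
    "man":    np.array([ 1.0, 0.0]),
    "woman":  np.array([-1.0, 0.0]),
    "doctor": np.array([ 0.9, 1.0]),
    "nurse":  np.array([-0.9, 1.0]),
}

# Estimate a gender direction from a definitional pair and normalize it.
g = embeddings["man"] - embeddings["woman"]
g = g / np.linalg.norm(g)

def neutralize(v, g):
    """Remove the component of v that lies along the gender direction g."""
    return v - (v @ g) * g

doctor_n = neutralize(embeddings["doctor"], g)
nurse_n = neutralize(embeddings["nurse"], g)
print(doctor_n, nurse_n)  # both collapse to [0. 1.]: equally male or female
```

After neutralization the two occupation words are identical along the gender axis while keeping their shared "medicine" component, which is the behavior the debiased dataset is designed to guarantee.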