Diversity is at the heart of the problem and the solution
White men make up the majority of the workforce creating AI, and the products they build don’t always account for — or work for — people who don’t look like them.
For example, when Joy Buolamwini was a computer science undergrad working on a facial recognition project, the software couldn’t detect her dark-skinned face. In a 2017 TEDx Talk, she describes how she had to borrow her white roommate’s face to get it to work. Buolamwini brushed the issue aside until it happened again while she was visiting a startup in Hong Kong: during a demo of the company’s social robot, she was the only person in the group the robot failed to see.
For Buolamwini, these encounters with AI that couldn’t see her came down to a lack of black faces in the technology’s training sets. According to one study, one of the most popular face recognition datasets is an estimated 83.5 percent white. In her master’s thesis at MIT, Buolamwini found that commercially available face recognition tools from three leading tech companies performed poorly at guessing the gender of women with dark skin. In the worst cases, the tools did little better than a coin toss, while their error rate for white men was zero percent.
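The finding rests on a simple measurement idea: instead of reporting one overall accuracy number, evaluate the classifier separately for each demographic subgroup. The sketch below shows that disaggregation in miniature; the data is hypothetical and this is not Buolamwini’s actual code or dataset.

```python
# A minimal sketch of a disaggregated audit: compute a classifier's
# error rate per demographic subgroup rather than one overall score.
# The records here are hypothetical, for illustration only.
from collections import defaultdict

# Each record: (subgroup label, true gender, gender the tool predicted)
predictions = [
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    # ... many more audited examples ...
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in predictions:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Aggregate accuracy can look fine while one subgroup's error rate
# approaches a coin toss; disaggregating by group exposes the gap.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error rate over {totals[group]} examples")
```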
“These tools are too powerful, and the potential for grave shortcomings, including extreme demographic and phenotypic bias is clear,” she wrote in her testimony for the House Oversight and Reform Committee’s hearing on facial recognition technology last week.
In response to this threat, Buolamwini helped found the Algorithmic Justice League, a project based out of MIT that aims to highlight concerns around AI and design ways to fight back through activism, inclusion, and even art. She’s not the only one. Tech company employees are speaking out against working on systems they fear are ripe for misuse. Nonprofits like AI Now and EqualAI are popping up to highlight the social implications of AI and create processes for better systems. And conferences like Fairness, Accountability, and Transparency have emerged as venues for sharing work on how to make AI better for everyone.
The “pale male dataset,” as Buolamwini calls it, that developers use to build these systems can exacerbate bias, and the teams themselves are just as homogeneous. A report by AI Now gathered statistics on the lack of diversity in AI: women make up 15 percent of AI researchers at Facebook and 10 percent at Google. At premier AI conferences, only 18 percent of the authors of accepted papers are women, and more than three-quarters of professors teaching AI are men.
The path forward is to diversify the inputs. The report argues that the lack of diversity and the problems of bias “are deeply intertwined,” and that adding more people to the mix will go a long way toward solving both. “If you put a team together with a similar background, you can end up with a situation where people agree without a lot of discussion,” says Alex Thayer, the chief experience architect and director of HP’s Immersive Experiences Lab. More diverse teams bring a wider range of life experiences to bear as they think through a product, helping them catch potential issues before release.