Imagine a scenario where you need your car’s onboard navigation system to place an emergency call, but it won’t. Or arriving extra early for every international flight because airport security scanners never recognize your face.
For many people—especially people of color and women—these scenarios can be a frustrating reality. That’s because the AI that’s supposed to make life easier for us all is often trained on data that isn’t diverse enough to work for everyone. This is a big problem, but one that can be fixed.
Personal dignity, travel safety, and job hunting are just some of the aspects of everyday life that algorithms can improve, if the technology learns to recognize and properly classify a full range of voices and faces. However, New York University’s AI Now center reported in April that a lack of diversity in the AI industry contributes to the development of biased, discriminatory systems.
How AI-powered biometrics can be biased

Multiple studies have found that facial and voice recognition algorithms tend to be more accurate for men than for women. Facial recognition programs also have higher error rates for people with darker skin tones.
For example, Georgia Institute of Technology researchers found that object-detection systems of the kind used by self-driving cars to spot pedestrians were consistently less accurate at detecting people with darker skin tones.
In airports, the Department of Homeland Security is expanding its use of facial recognition to verify travelers’ identities.
Like the biased pedestrian-recognition AI, these facial recognition algorithms were trained with datasets that skewed white and male. And as with pedestrian recognition, facial recognition algorithms need to learn from datasets that contain a fair mix of skin tones and genders.
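A useful first step is simply to measure who is in the training data before a model ever learns from it. Below is a minimal audit sketch in Python; the labels.csv file and its skin_tone and gender columns are hypothetical stand-ins for whatever demographic annotations a real dataset carries.

```python
import pandas as pd

# Hypothetical annotation file: one row per training image, with
# demographic columns "skin_tone" and "gender" (names are illustrative).
labels = pd.read_csv("labels.csv")

# Share of each group in the training data. A heavily skewed distribution
# is an early warning that the model may underperform on smaller groups.
for column in ("skin_tone", "gender"):
    print(f"\n{column} distribution:")
    print(labels[column].value_counts(normalize=True).round(3))

# Cross-tabulate to catch intersectional gaps, e.g. plenty of
# lighter-skinned men but almost no darker-skinned women.
print("\nIntersectional counts:")
print(pd.crosstab(labels["skin_tone"], labels["gender"]))
```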
Voice recognition technology is supposed to make lots of everyday tasks easier, like dictation, internet searches, and navigation while driving. However, since at least 2002, researchers and the media have documented cases of voice recognition systems that understand men’s voices far more reliably than women’s.
University of Washington graduate student and Ada Lovelace Fellow Os Keyes has raised concerns that facial recognition systems could harm transgender and nonbinary people, whose faces may be misclassified by software trained to sort everyone into binary gender categories.
It’s tempting to think that because algorithms are data-driven, they will generate impartial, fair results. But algorithms learn by identifying patterns in real-world datasets, and those datasets often contain patterns of bias—unconscious or otherwise. The challenge for AI developers is to find or build teaching datasets that don’t reinforce biases, so that the technology moves all of society forward.
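To make that concrete, one crude but common way to keep a skewed dataset from dominating training is to oversample the underrepresented groups until every group appears equally often. The sketch below assumes a pandas DataFrame with a hypothetical group column; since duplicating rows adds no new variation, this complements collecting genuinely more diverse data rather than replacing it.

```python
import pandas as pd

def balance_by_group(train: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Oversample every demographic group up to the size of the largest one."""
    target = train[group_col].value_counts().max()
    resampled = [
        members.sample(n=target, replace=True, random_state=0)
        for _, members in train.groupby(group_col)
    ]
    # Shuffle so the model never sees the data sorted by group.
    return pd.concat(resampled).sample(frac=1, random_state=0).reset_index(drop=True)

# Tiny illustration: four examples from group "a", one from group "b".
train = pd.DataFrame({"group": ["a", "a", "a", "a", "b"], "feature": [1, 2, 3, 4, 5]})
balanced = balance_by_group(train)
print(balanced["group"].value_counts())  # a: 4, b: 4
```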
Solving the AI bias problem

Using more inclusive datasets for AI learning can help create less biased tools. One startup, Atipica, is working to make hiring more inclusive by analyzing recruiting data to surface diverse candidates that traditional screening overlooks.
Other steps are needed, too. Right now, the AI industry includes very few women, people of color, and LGBTQ people. A more diverse AI workforce would bring more perspectives and life experiences to AI projects. Transparency and bias-testing of AI systems can identify problems before products go to market and while they’re in use.
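Bias-testing can start with something as simple as breaking test results down by demographic group instead of reporting one overall score. The sketch below assumes hypothetical lists of true labels, predictions, and group tags; what level of disparity counts as unacceptable is a policy decision the code can’t make.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy instead of a single overall score."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {group: hits[group] / totals[group] for group in totals}

# Toy example: the model looks decent overall (75% accuracy), but that
# average hides a 50% failure rate for one group.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]
groups = ["men"] * 4 + ["women"] * 4
print(accuracy_by_group(y_true, y_pred, groups))
# {'men': 1.0, 'women': 0.5}
```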
NYU’s AI Now center recommends bringing in experts in a range of fields beyond AI and data science to assess the social implications of these systems and how they affect the communities in which they’re deployed.
AI has the potential to make our lives safer and easier, if we train the systems to be inclusive and use them wisely. By taking steps now to eliminate biases in AI, we can make sure that advances in this technology move everyone forward.