
Taking the Bias Out of AI

Robots make great science fiction, but we should never forget that AI is, well, artificial. When precautions aren't taken, any AI tool can be intentionally hijacked by not-so-well-meaning humans, from political extremists to hostile state actors to angry ex-partners.

But beyond deliberate misuse, AI engines are subject to the same unintentional or unconscious biases that afflict human beings. Machines "learn" by absorbing massive data sets and real-world feedback, but all data sets and feedback are influenced by humans at the source, and all humans are biased, in different ways. The human brain cannot help but make generalizations about people, situations, and the world around it. Many of these generalizations are useful shortcuts (strangers are not as trustworthy as friends), but others perpetuate harmful or prejudicial stereotypes.

The fact is that virtually all of us do have implicit biases. We try to overcome them, but our biases can easily be revealed by analyzing the word patterns, images, and language we use in our ordinary lives—the same images and word patterns that make up the data sets used to train AI engines. 
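
To make the point concrete, here is a minimal sketch of how such associations surface in plain co-occurrence counts, the raw material from which word embeddings and language models are built. The five-sentence "corpus" is invented purely for illustration; real training sets contain millions of documents, but the arithmetic is the same.

```python
# A minimal sketch: gendered associations surfacing in co-occurrence counts.
# The tiny "corpus" below is invented for illustration only.

from collections import Counter

corpus = [
    "the doctor finished his rounds",
    "the doctor reviewed his charts",
    "the nurse updated her notes",
    "the doctor gave his diagnosis",
    "the nurse checked her patients",
]

def gender_counts(occupation: str) -> Counter:
    """Count gendered pronouns in sentences that mention an occupation."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if occupation in words:
            counts["male"] += words.count("he") + words.count("his")
            counts["female"] += words.count("she") + words.count("her")
    return counts

for job in ("doctor", "nurse"):
    print(job, dict(gender_counts(job)))
# doctor {'male': 3, 'female': 0}
# nurse {'male': 0, 'female': 2}
```

A model trained on text like this absorbs the skew as if it were a fact about the world; nothing in the pipeline marks it as a stereotype.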

And no matter how pristinely designed a data set is, it starts with human beings and their generalizations. Machines ingest whatever generalizations are embedded in the data, and repeated learning then reinforces them, so that even a subtle skew or bias can be magnified in an AI's output (a toy sketch of this amplification follows the examples below):

• At the University of Virginia, some of the data sets used to train AI engines associated kitchens and cooking more with women than with men. One AI tool incorrectly identified a man as a woman solely because he was pictured standing next to a stove.
• A female pediatrician in the U.K. was unable to get into the women's locker room at her gym because her profile clearly said "doctor," and the algorithm used by the facility had "learned" that doctors are male.
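
Here is a toy sketch of that amplification, echoing the kitchen example above. Every number is invented; the point is only that a model choosing the most likely label turns a modest statistical skew into an absolute rule.

```python
# A toy sketch of bias amplification. All counts are invented.
# The "model" simply picks the label seen most often with each context word.

training_counts = {
    # context word: (times seen with "woman", times seen with "man")
    "stove":   (33, 17),   # a modest 2:1 skew in the training data
    "office":  (20, 30),
    "stadium": (10, 40),
}

def predict(context_word: str) -> str:
    woman, man = training_counts[context_word]
    # The argmax throws away the margin: 33-17 and 50-0 both yield
    # the same answer, so a 66% tendency becomes a 100% prediction.
    return "woman" if woman >= man else "man"

for word in training_counts:
    print(word, "->", predict(word))
# stove -> woman
# office -> man
# stadium -> man
```

The dataset's 2:1 tendency for "stove" becomes a deterministic rule: anyone pictured next to one is labeled a woman, every time.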
Another drawback of AI is the "black box" element in machine learning. A neural network learns by observing and then imitating patterns, but the reasons behind those patterns, and the algorithms that result, are often impossible to inspect. So when bias does appear, it is virtually impossible to prove or trace to its source.

In the end, it may be incredibly challenging to develop artificially "intelligent" systems that aren't at least somewhat contaminated by human biases. But if you want your own machine learning tools to be as objective as possible, you can at least take steps to minimize that bias. Here are some tips (a simple audit sketch follows the list):

• Hire diverse teams (across races, cultures, age groups, and genders) to scan for implicit biases
• Poke holes in your own assumptions, or bring in someone from outside to do it for you
• Bring in experts from disciplines other than engineering or computer science, and give them authority to make decisions
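
To ground these tips, here is a minimal sketch of one check such a review team might automate: comparing a model's rate of positive decisions across groups (a demographic-parity audit). The decisions and group names are invented for illustration; a real audit would use held-out production data and fairness metrics suited to the domain.

```python
# A minimal audit sketch: compare a model's positive-decision rate
# across groups (demographic parity). All data below is invented.

from collections import defaultdict

# (group, model_decision) pairs, e.g. from a held-out audit set
audit_set = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, decision in audit_set:
    totals[group][0] += decision
    totals[group][1] += 1

rates = {group: pos / n for group, (pos, n) in totals.items()}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A common rule of thumb (the "four-fifths rule") flags a problem when
# one group's rate falls below 80% of another's.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Disparity flagged: investigate training data and features.")
```

A check like this doesn't explain why the model is skewed (the black-box problem above), but it makes the skew visible early, which gives a diverse review team something concrete to act on.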

* Don Peppers can be reached at dpeppers@cxspeakers.com