Recent advances in artificial intelligence have had a profound effect on how people interact with technology. In 2017 alone, Amazon and Google sold millions of home assistant devices, while waves of companies introduced chatbot programs into websites and mobile apps.
With so much data, input, and opinion flowing in from a global community’s everyday interactions, what kind of trouble can AI get itself into? Can AI become as flawed as its human creators? How easily can AIs and chatbots become confused, biased, or even racist? And how can businesses use AI strategically and ethically?
In 2016, Microsoft’s AI chatbot Tay tweeted racial slurs and genocidal comments after manipulation by malevolent users. Microsoft quickly handled the situation, but it serves as a stark reminder that AI cannot, on its own, distinguish acceptable social norms from offensive behavior. Fortunately, this was a mistake the company could learn from for future AI products. Instances such as Tay mean that executives have to rethink what they know about AI, and what AI can learn from them.
Susan Etlinger, an industry analyst for Altimeter and author of the research report The Customer Experience of AI, has spoken widely about the enormous potential of big data and AI, along with their hidden risks. She recently spoke with us about the ethical implications of AI and how to take a strategic and human approach to a digital problem.
Customer Strategist: How pervasive is AI in daily life?
Susan Etlinger: The history of AI goes back almost 70 years to Alan Turing’s question, “Can machines think?” But what we’re seeing now, with the incredible amount of data we have, the reduced cost of parallel computing, and improved algorithms, is really the first time that AI or machine learning technology can deliver on the promise of classifying, analyzing, making decisions, and learning at scale.
In fact, most of us interact with AI every day without realizing it. Doing a search, using predictive text in messaging, translating with Google Translate, looking at a newsfeed in social media: all of these are enabled by AI.
Now there are also chatbots, predictive analytics, robots, Alexa, and driverless cars. When you start thinking about all the different ways AI can be used, some of them are almost invisible. Take predictive text: you don’t necessarily think about it being enabled by AI, but it is. Then some things are obvious, like talking to Alexa.
CS: Why is it important to understand ethics in today’s AI?
SE: Ethics is basically just another way of talking about the values that govern our norms of behavior. We have norms of behavior in the physical world in terms of how we deal with other people, and we have to think about norms of behavior for how we interact in the digital world.
The reason ethics is so important is that now we have machine intelligence that sits between us and the organizations we deal with. So it’s important to establish standards of behavior for intelligent products and services, just as we have standards of behavior for interactions as simple as walking into a store.
CS: Are ethics a top priority of AI initiatives?
SE: It’s becoming much more of a priority, which is great to see, but there is still a lot of work to be done. One of the biggest, worst-kept secrets about AI is that algorithms aren’t innately neutral. Any data we use to train a system is going to have bias in it, and that bias can take many forms; some are minor, and some can have huge consequences. Big companies like Google, Microsoft, and Facebook all know this because they’ve been working with AI for a long time.
As AI takes hold, we start to see the biases it exposes. We’ve seen a lot of examples of this in image search, in hiring software, in financial services and insurance, and in the criminal justice system, where algorithms can determine whether someone gets a loan or a job, wins a beauty contest, or qualifies for parole. This kind of bias has always existed, but now AI can compound and reveal it. So we need to make honest choices about how we deal with that.
As insurers, pharmaceutical companies, retail banks, and other businesses start using AI in their systems, they are going to see the ways in which undesirable bias creeps in. One of the things programmers and data scientists are really wrestling with is how to address it. It’s genuinely challenging, but many smart people in business, tech, and academia are working on these issues from a data, culture, and design perspective.
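To make the idea concrete, here is a minimal Python sketch of how skew in historical training data can be surfaced before a model ever learns from it. The loan records, group labels, and approval field are hypothetical illustrations, not data or methods from Etlinger’s report.

```python
from collections import defaultdict

# Hypothetical historical loan decisions; real training data would come
# from a company's own records.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for record in records:
    totals[record["group"]] += 1
    approvals[record["group"]] += int(record["approved"])

# A model trained on this history will tend to reproduce whatever skew
# these rates show unless the bias is explicitly addressed.
for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"Group {group}: historical approval rate {rate:.0%}")
```

A check like this doesn’t fix anything on its own, but it makes the skew visible, which is the acknowledgment step Etlinger describes next.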
CS: How do you teach AI to behave ethically?
SE: Machines don’t have a concept of ethics; they behave based on the data and experiences they’ve learned from. So if we want machines to behave ethically, we have to train them on the norms we want them to follow. The first step is to acknowledge that bias exists in the first place. Then we need to establish a set of behaviors that govern what the experience should look like.
In my research report, The Customer Experience of AI, I identified five core principles:
1. Utility. It must be clear, useful, and satisfying (even delightful) for the user.
2. Empathy and respect. It must understand and respect people’s explicit and implicit needs.
3. Trust. It must be transparent, secure, and act consistently.
4. Fairness and safety. It must be free of bias that could cause harm—in the digital or physical world, or both—to people and/or the organization.
5. Accountability. It must have clear escalation and governance processes and offer recourse if customers are unsatisfied.
CS: What role does the team play in creating AI programs?
SE: Having diverse teams is absolutely critical because you can’t ask questions unless they occur to you in the first place. As humans, we have tons of blind spots. There’s a great story in The Guardian by Naaman Zhou about Volvo testing its driverless cars in Australia, and the cars kept crashing into kangaroos. The car was engineered in Sweden, where the most frequent animal-related accidents involve deer, moose, and elk. Those animals walk; they don’t hop. The car’s reference point is the ground, which doesn’t help much if there is a kangaroo in midair.
That’s one of those cases where AI, which can be so brilliant in so many ways, can be so clueless in others. You can ask any 2-year-old how kangaroos move and they’ll tell you they hop, and you can ask anyone in Australia what the most common sort of animal-related accident is and they’ll tell you it’s kangaroos. But Volvo missed this in its initial model.
The good news is this was a test, and Volvo tested in different regions precisely to find these kinds of things. You can’t always think of everything, and what’s obvious to one person might not be so obvious to someone else.
CS: What other unintended consequences of AI should companies be concerned about?
SE: Well, let’s not forget that this technology is transformative, but it’s also a tool. It can be used for tremendous good, like predicting pandemics and getting aid to hurricane victims. But it can be used for tremendous harm, as well. This has been true of technology since the first caveperson picked up the first rock. So AI is no different in that respect. But if we are creating technologies that mimic human behavior and human cognition, we need to be careful and explicit about the kinds of things we teach them to do.
It’s not like Microsoft, Amazon, and Google aren’t aware of this. Of course they are. And it’s important that anyone who is considering AI in their business understand these realities and risks so they can use this technology strategically and responsibly.
CS: What can executives do to help prevent unintended consequences of AI?
SE: There are three main areas where people are focusing today. One is design: privacy by design, inclusive design, design communities. It’s important for leaders to support those efforts, because the people on the front lines need to learn and refine these best practices in real time so they become standards that benefit everyone.
Next, customer-focused or human-focused AI needs to be part of the culture. The incentives can’t be in opposition to ethics—in AI or anything else—or they won’t work. Executives should model and support that.
Finally, research is being done on debiasing techniques and on audit processes that reveal and remediate bias, the way you might audit for other types of risk. Data scientists, audit, HR, and finance all have roles to play here.
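As one hedged sketch of what such an audit process might look like, the Python below compares a model’s positive-decision rates across groups and flags any group that falls below 80 percent of the highest rate, loosely echoing the four-fifths rule used in employment-discrimination analysis. The data, function names, and threshold are assumptions for illustration, not techniques drawn from Etlinger’s research.

```python
def selection_rates(decisions):
    """decisions: iterable of (group, got_positive_outcome) pairs."""
    totals, positives = {}, {}
    for group, positive in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(positive)
    return {group: positives[group] / totals[group] for group in totals}


def audit(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (loosely modeled on the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: {"rate": round(rate, 2), "flagged": rate < threshold * best}
            for group, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical decision log: (group, did the model grant the outcome?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(audit(sample))
```

In practice, a team would run checks like this on real decision logs and feed any flagged disparities into the same governance and escalation processes described under the accountability principle above.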