Artificial Intelligence (AI) is all the rage: from self-driving cars and personal assistants such as Siri, to chatbots and email-scheduling assistants that take routine tasks out of human hands.
Every technology organisation, whether a start-up or long established, now has “AI” in its offerings, and many are making promises of, shall we say, aspirational functionality. For the executive trying to make sense of this confusing landscape, being able to separate reality from myth in the marketplace is essential.
In 1950, Alan Turing devised the eponymous Turing Test to deal with the question of whether machines can think. In this test, a person asks questions of a “cognitive” machine and a human via written text. For the machine to pass the test, the person must be unable to distinguish which answers came from the machine and which came from the human respondent.
But how often do machines pass this test? Not very. Machines cannot, at this point, answer questions and interact the way people do. Applications like intelligent assistants are beginning to emulate that capability, but they are dependent on careful development of information sources that enable them to function correctly and accurately interpret and answer questions. The mechanism by which this process works is mysterious to many business and technology leaders investigating the potential of AI.
It’s easy to be misled by the popular myths and misconceptions about the space. Below are some of the myths about AI that you should be aware of in order to make better decisions about where to invest time and resources.
Myth 1:
AI algorithms can magically make sense of any and all of your messy data.
Reality:
AI is not “load and go,” and the quality of the data is more important than the algorithm.
The most important input for an AI tool is data—not just any data, but the right data. That means data that is relevant to the problem being solved and specific to a set of use cases and a domain of knowledge. Many in the technology industry erroneously claim that an AI solution can just be pointed at data and that the right answer will be produced by powerful machine learning algorithms. The term I have heard used is “load and go,” where “all” the data is ingested into the system.
The problem with this approach lies in the sheer breadth and variability of explicit, codified enterprise knowledge. AI cannot make sense of data that is too broad or that has not been processed in a way that makes it digestible by the system. When IBM researchers were developing Watson to play Jeopardy, they found that loading certain information sources actually degraded performance.
Rather than ingest anything and everything, an AI system needs information and content that has been carefully curated and is of high quality. Bad data produces bad results, no matter what the system. An algorithm is a program, and programs need good data. When a system uses “machine learning,” the program arrives at an answer through successive approximations, “learning” the best way to get to that answer by adjusting how it processes the data. Having the right data is more important than the algorithm.
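To see why curation beats algorithmic horsepower, consider the following sketch. It uses synthetic data and an off-the-shelf algorithm (the dataset, noise level, and model choice are illustrative assumptions, not figures from any real project), training the same model once on clean labels and once on deliberately corrupted ones.

```python
# Minimal sketch: the same algorithm trained on curated vs. corrupted data.
# The dataset, noise level, and model choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for "the right data": features that genuinely relate to the label.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Corrupt 40% of the training labels to simulate messy, uncurated input.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.4
y_noisy = np.where(flip, 1 - y_train, y_train)

model_clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
model_noisy = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)

print("Accuracy with curated labels:  ", accuracy_score(y_test, model_clean.predict(X_test)))
print("Accuracy with corrupted labels:", accuracy_score(y_test, model_noisy.predict(X_test)))
```

The algorithm never changes between the two runs; only the data does, and the results diverge accordingly.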
Myth 2:
You need data scientists, machine learning experts, and huge budgets to use AI for the business.
Reality:
Many tools are increasingly available to business users and don’t require Google-sized investments.
Some types of AI applications do require heavy lifting by Ph.D.s and computational linguists; AI technology at that end of the spectrum demands deep expertise in programming languages and sophisticated techniques. However, a growing number of software tools that use AI are becoming accessible to business users. Most organisations will opt to leverage business applications developed on top of tools built by companies such as Google, Apple, Amazon, Facebook, and well-funded start-ups.
For example, Amazon’s Alexa has already solved the tough problems of speaker-independent voice recognition and noise cancellation, allowing voice commands to work in less-than-ideal environments (i.e., noisy rooms with poor acoustics). Developing a voice interface to a business application then becomes an easier (though not trivial) problem to solve. The business value lies in using existing AI tools to address components of the application and configuring those components to the specific needs of the business. That process requires less data science expertise and more knowledge of core business processes and needs.
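To make that division of labour concrete, here is a deliberately simplified sketch. The transcribe_audio() function is a hypothetical placeholder for whatever licensed speech-to-text service handles the hard AI problem; the order-status logic is invented for illustration and represents the ordinary business configuration where most organisations spend their effort.

```python
# Sketch only: composing an existing speech-to-text service with business logic.
# transcribe_audio() is a hypothetical wrapper around a licensed vendor API;
# the order-status lookup below is invented purely for illustration.

ORDER_STATUS = {"12345": "shipped", "67890": "processing"}  # stand-in for a real system of record

def transcribe_audio(audio_bytes: bytes) -> str:
    """Placeholder for a call to an off-the-shelf speech-to-text service."""
    raise NotImplementedError("Swap in the vendor SDK your organisation actually licenses.")

def handle_utterance(text: str) -> str:
    """The 'business' part: map recognised text to an answer from existing systems."""
    text = text.lower()
    if "order" in text and "status" in text:
        digits = "".join(ch for ch in text if ch.isdigit())
        status = ORDER_STATUS.get(digits)
        return f"Order {digits} is {status}." if status else "I couldn't find that order."
    return "Let me connect you with an associate."

# Example (skipping the audio step): the AI component supplies the text,
# the business configuration supplies the answer.
print(handle_utterance("What is the status of order 12345?"))
```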
“Training” an AI is a somewhat mysterious concept that is frequently shrouded in technical language and considered a task only for data scientists. Yet for some applications (such as chatbots to support customer service), the information used to train AI systems is frequently the same information that call centre associates need in order to do their jobs. The role of the technical staff is to connect AI modules together and integrate with existing corporate systems. Other specialists are also involved in the process (content experts, dialog designers, user experience specialists, information architects, etc.).
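To illustrate how ordinary that training material can be, the following sketch trains a toy intent classifier from the same kind of question-and-answer pairs a call centre already maintains. The utterances and intent labels are invented, and TF-IDF with logistic regression simply stands in for whatever pipeline a real chatbot platform uses.

```python
# Minimal sketch: training a chatbot intent classifier from call-centre-style content.
# The example utterances and intents are invented; TF-IDF + logistic regression
# stands in for whatever pipeline a real chatbot platform actually uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The same material a new call centre associate would study, recast as labelled examples.
training_examples = [
    ("Where is my package?",           "order_status"),
    ("Has my order shipped yet?",      "order_status"),
    ("I want to send this item back",  "returns"),
    ("How do I get a refund?",         "returns"),
    ("My bill looks wrong this month", "billing"),
    ("Why was I charged twice?",       "billing"),
]
texts, intents = zip(*training_examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

print(model.predict(["I think I was charged twice"]))  # expected: ['billing']
```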
Myth 3:
“Cognitive AI” technologies are able to understand and solve new problems the way the human brain can.
Reality:
“Cognitive” technologies can’t solve problems they weren’t designed to solve.
So-called “cognitive” technologies can address the types of problems that typically require human interpretation and judgment, which standard programming approaches cannot solve. These problems include interpreting ambiguous language, recognising images, and executing complex tasks where precise conditions and outcomes cannot be predicted.
The first example might be interpreting the correct meaning of “stock” from its usage in a particular context: is it something a retailer keeps on hand, or the instrument a financial advisor recommends? Through the use of ontologies, which define relationships among concepts, the system can distinguish between the two meanings from sentence syntax and other contextual clues. The second example is recognition of people, animals, or other images under varying conditions of lighting, scenery, or physical positioning. An example of the third (a complex task with an unknown outcome) would be navigating a physical space under changing conditions, as self-driving vehicles and manufacturing robots must.
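Here is a toy illustration of the “stock” example. The miniature ontology and the keyword-overlap scoring are invented simplifications; a production system would draw on far richer knowledge graphs, syntax, and usage statistics.

```python
# Toy sketch of ontology-assisted word-sense disambiguation for "stock".
# The miniature ontology and overlap scoring are invented simplifications.

ONTOLOGY = {
    "stock (inventory)": {"warehouse", "shelf", "retailer", "supply", "reorder", "item"},
    "stock (security)":  {"share", "dividend", "portfolio", "market", "advisor", "trade"},
}

def disambiguate(sentence: str) -> str:
    """Pick the sense whose related concepts overlap most with the sentence's words."""
    words = set(sentence.lower().replace(".", "").split())
    return max(ONTOLOGY, key=lambda sense: len(words & ONTOLOGY[sense]))

print(disambiguate("The retailer checked the warehouse to see if the item was in stock."))
print(disambiguate("Her advisor recommended adding the stock to her portfolio for the dividend."))
```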
Cognitive AI simulates how a human might deal with ambiguity and nuance; however, we are a long way from AI that can extend learning to new problem areas. AI is only as good as the data on which it is trained, and humans still need to define the scenarios and use cases under which it will operate. Within those scenarios, cognitive AI offers significant value, but AI cannot define new scenarios in which it can successfully operate. This capability is referred to as “general AI” and there is much debate about when, if ever, it will emerge. For computers to answer broad questions and approach problems the way that humans do will require technological breakthroughs that are not yet on the horizon. (See Myth 4).
Myth 4:
Machine learning using “neural nets” means that computers can learn the way humans learn.
Reality:
Neural nets are powerful, but a long way from achieving the complexity of the human brain or mimicking human capabilities.
One of the most exciting approaches to powering AI is the use of “deep learning,” which is built on so-called “artificial neural networks.” This design loosely emulates, in software, the way biological neurons learn to recognise patterns. The approach is being used to address a number of challenges, from language translation and speech recognition to fraud identification, image recognition, and self-driving cars.
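For a sense of what an artificial neural network actually is at its smallest, here is a minimal sketch: a two-layer network learning the classic XOR pattern in plain NumPy. The architecture, learning rate, and iteration count are arbitrary illustrative choices; production deep learning involves vastly larger networks and dedicated frameworks.

```python
# Minimal sketch: a tiny two-layer neural network learning XOR with plain NumPy.
# Architecture, learning rate, and iteration count are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: not linearly separable

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # hidden layer of 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                      # "learning" = repeated small weight adjustments
    hidden = sigmoid(X @ W1 + b1)            # forward pass
    out = sigmoid(hidden @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backpropagate the prediction error
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 1.0 * hidden.T @ d_out
    b2 -= 1.0 * d_out.sum(axis=0, keepdims=True)
    W1 -= 1.0 * X.T @ d_hidden
    b1 -= 1.0 * d_hidden.sum(axis=0, keepdims=True)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```

Even this toy network “learns” only by nudging a few dozen numbers; it has no notion of what XOR means.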
While neural nets can solve many types of problems, they cannot achieve the creative synthesis of diverse concepts and information sources that is characteristic of human thinking. Some believe the capabilities of the Jeopardy-playing Watson computer exhibit this creative conceptual synthesis, and it does emulate a wide-ranging ability to associate disparate concepts. For example, when given the clue “A long, tiresome speech delivered by a frothy pie topping,” Watson came up with “What is a meringue-harangue?”
Watson is gaining wide recognition as a powerful technology, and the Jeopardy win was an amazing achievement. But that success does not translate into every domain, problem, and information source without a large amount of work. The Jeopardy project used carefully selected sources and finely tuned algorithms. It required three years of effort and $25 million.
The human brain contains tens of billions of neurons (roughly 86 billion by recent estimates), with each neuron connecting to as many as 10,000 other neurons through synapses. However, a synapse is not like an on-off switch; it can contain up to 1,000 molecular switches. Add to this the fact that there are approximately 100 neurotransmitters regulating how neurons communicate, and the level of complexity is astonishing. By one estimate, a human brain has more switches than all of the computers, routers, and internet connections on Earth. So it’s not really surprising that the technology available now cannot duplicate human thought.
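As a back-of-the-envelope check on those figures (all of them rough estimates), the implied number of molecular switches is on the order of 10^18:

```python
# Back-of-the-envelope estimate using the (rough) figures cited above.
neurons = 86e9                 # roughly 86 billion neurons
synapses_per_neuron = 10_000   # up to 10,000 connections each
switches_per_synapse = 1_000   # up to 1,000 molecular switches per synapse

total_switches = neurons * synapses_per_neuron * switches_per_synapse
print(f"{total_switches:.1e}")   # ~8.6e+17, i.e., hundreds of quadrillions of switches
```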
Myth 5:
AI will displace humans and make contact centre jobs obsolete.
Reality:
AI is no different from other technological advances in that it helps humans become more effective and processes more efficient.
Technology has been threatening and displacing jobs throughout history. Telephone switching technology replaced human operators. Automatic call directors replaced receptionists. Word processing and voicemail replaced secretaries; email replaced inter-office couriers. Call centre technology innovation has added efficiency and effectiveness at various stages of standing up customer service capabilities: from recruiting new reps using machine learning to screen resumes, to selecting the right training program based on specific learning styles, to routing calls based on the sentiment of the caller and the disposition of the rep, to integrating various information sources and channels of communication. In each of these processes, technology augmentation enhanced the capabilities of humans. Were some jobs replaced? Perhaps, but more jobs were created, albeit requiring different skills.
The use of AI-driven chatbots and virtual assistants is another iteration of this ongoing evolution. It needs to be thought of as augmentation rather than complete automation and replacement. Humans engage, machines simplify. There will always be the need for humans in the loop to interact with humans at some level.
Bots and digital workers will enable the “super CSR” (customer service representative) of the future, delivering increasing levels of service at declining cost. At the same time, the information complexity of our world is increasing, prompting the need for human judgment. Some jobs will be lost, but the need and desire for human interaction at critical decision points will increase, and the CSR’s role will shift from answering rote questions to providing customer service at a higher level, especially for interactions requiring emotional engagement and judgment.
Conclusion
The bottom line is that while you should not believe the myths, you should believe in AI. It is part of the inevitable evolution of how humans use tools and technology. Your organisation needs to continue the blocking and tackling of core customer service while thoughtfully investigating new approaches to adding efficiency and effectiveness to call centre processes. Digital workers powered by AI are here—they are already working for you with existing technologies that use AI under the covers. Now the job is to bring that augmentation to the next level.