For a long time, researchers have been fascinated by the idea of talking to computers and receiving prompt responses. Even before the term “chatbot” was coined, researchers in the 1950s were already working on machines that could interact with humans through natural language. Many algorithmic breakthroughs followed in the 1980s and 1990s.
But the general public only became broadly aware of chatbots’ possibilities in 2011, with the release of Apple’s Siri. Although its capabilities were limited at the time, functionality improved quickly in the following years, driven by new competition in the global chatbot market, led by major players such as Google Assistant, Apple’s Siri, Microsoft’s Cortana, and Amazon’s Alexa.
Reports suggest that there are more than 300,000 different chatbots on Facebook alone. Nearly 45% of end-users today prefer chatbots as the main means of communication for customer service inquiries. About 1.4 billion people in the world use messaging apps and are willing to talk to chatbots.
Estimates suggest that chatbots will drive cost savings of more than $8 billion annually by 2022, up from $20 million in 2017. Google’s Ray Kurzweil has predicted that chatbots will have near human-level linguistic ability by 2029. This post presents a brief history of chatbots from the 1950s until today.
Alan Turing’s 1950 paper “Computing Machinery and Intelligence” is considered the first milestone in the history of chatbots. The paper discussed concepts that remain fundamental to chatbots, including the Turing Test, a method to determine whether a machine possesses human-like intelligence.
The Turing Test involves a human test subject who interacts with two hidden parties via textual messages: one is another human, the other a machine. The subject does not know upfront which party is which, only that exactly one of the two is a machine.
The subject’s job is to identify the machine, using any messages they choose. If the subject fails to tell which of the two parties is the machine, the machine passes the Turing Test. Though Alan Turing did not invent the chatbot, his ideas remain central to many discussions about artificial intelligence. The Turing Test motivated the developments that followed, and new systems are still measured against it today.
Fourteen years after the introduction of the Turing Test, Joseph Weizenbaum set out to build a program that could pass it. He joined the MIT Artificial Intelligence Laboratory in 1964 and released the first known chatbot, ELIZA, in 1966. ELIZA was a text-based agent that worked by pattern matching and substitution.
Written in MAD-SLIP (the MAD programming language extended with SLIP, a list-processing library Weizenbaum created himself), ELIZA ran on an IBM 7094 at MIT. ELIZA’s most famous script was DOCTOR, which simulated a Rogerian psychotherapist: most of its answers were questions asking for further details about whatever the client had just mentioned. A decade after the experiment, Weizenbaum published the book “Computer Power and Human Reason: From Judgment to Calculation” (1976).
In the book, he admitted that he had not realized “that extremely short exposure to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” This observation gave rise to the term “ELIZA effect,” which describes people’s tendency to assume that computers behave like humans. The term is still used today.
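ELIZA’s pattern-matching-and-substitution approach can be sketched in a few lines of modern Python. This is an illustrative toy, not Weizenbaum’s original DOCTOR script: the rules, templates, and reflection table below are all invented for the example.

```python
import re

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a decomposition pattern with a reassembly template.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment):
    """Apply the pronoun-reflection table word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(reflect(match.group(1)))
    # Fall back to a content-free prompt, much as DOCTOR often did.
    return "Please go on."

print(respond("I am feeling sad about my job"))
# -> How long have you been feeling sad about your job?
```

The charm (and the limitation) of the technique is visible here: the program understands nothing, yet the reflected echo of the user’s own words creates the illusion of a listener.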
Psychiatrist Kenneth Colby developed another well-known program, PARRY, in 1972. It was an attempt to simulate a person with paranoid schizophrenia. PARRY’s most famous demonstration was at the 1972 International Conference on Computer Communications (ICCC), where PARRY and ELIZA had a chat with one another. Later, PARRY also passed a version of the Turing Test.
In 1981, British programmer Rollo Carpenter created the chatterbot Jabberwacky. It aimed to simulate natural human chat in an interesting, entertaining, and humorous manner; it was designed purely to mimic human interaction and carry out conversations with users, with no other functions. Carpenter’s long-term intention was to move the program from a text-based system to a fully voice-operated one that could learn directly from sound and other sensory inputs.
He believed Jabberwacky could be built into household devices, such as robots or talking pets, to entertain people and keep them company. An internet version of the bot launched in 1997, and an evolved version, Cleverbot, followed in 2008. Cleverbot joined Twitter in May 2011 and has attracted more than 23,000 followers.
In 1991, Creative Labs released Dr. Sbaitso, an ELIZA-like chatbot bundled with its Sound Blaster sound cards and one of the first chatbots for MS-DOS personal computers. It was a speech-synthesis program that “conversed” with the user as if it were a psychologist. Most responses were along the lines of “Why do you feel that way?” rather than any complicated interaction. When confronted with a phrase it could not understand, it would often reply with something like “That’s not my problem.”
In 1995, Richard Wallace introduced another chatbot, A.L.I.C.E. (Artificial Linguistic Internet Computer Entity). It became famous for its realistic behavior, based on heuristic patterns instead of static rules: ALICE engaged in conversation with a human by applying heuristic pattern-matching rules to the human’s input.
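Wallace later formalized ALICE’s rules as AIML (Artificial Intelligence Markup Language), where each “category” pairs a wildcard pattern with a response template. A rough Python sketch of that matching style, with entirely invented categories, might look like:

```python
import re

# Toy AIML-style categories: an uppercase pattern (with '*' wildcards)
# mapped to a response template. These rules are invented for illustration.
CATEGORIES = {
    "HELLO *": "Hi there! How can I help you?",
    "WHAT IS YOUR NAME": "My name is Alice.",
    "I LIKE *": "What do you like about {0}?",
}

def match(user_input):
    # Normalize the input the way AIML engines do: strip punctuation, uppercase.
    text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, template in CATEGORIES.items():
        # Translate the '*' wildcard into a regex capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            return template.format(*[g.strip().lower() for g in m.groups()])
    return "I do not understand."

print(match("I like old computers"))
# -> What do you like about old computers?
```

The real AIML engine adds recursion, context, and variables on top of this, but the core idea is the same: many specific, hand-written patterns rather than a single general rule.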
In 2001, ActiveBuddy Inc. introduced the chatbot SmarterChild, the brainchild of Robert Hoffer, Timothy Kay, and Peter Levitan. The idea was to add natural-language comprehension to the increasingly popular instant messaging and SMS platforms. SmarterChild was meant for fun, personalized conversation that the company planned to turn into customized, niche-specific products. ActiveBuddy changed its name to Colloquis and prospered selling an automated customer service SaaS offering to large companies. Microsoft acquired Colloquis in 2006 for a reported $46 million and proceeded to decommission SmarterChild and discontinue the automated service agent business.
In 2011, Apple released its popular virtual assistant Siri, which uses voice queries and a natural-language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Internet services. With continued use, the software adapts to the user’s language, searches, and preferences.
Siri is a spin-off from a project developed by SRI International’s Artificial Intelligence Center, with its speech recognition engine provided by Nuance Communications. Its original American, Australian, and British voice actors recorded their voices in 2005, without knowing they were recording for Siri. The assistant was released as a standalone iOS app in February 2010; Apple acquired it two months later and integrated it into the iPhone 4S at that phone’s release in October 2011.
In the same year, IBM released Watson, a question-answering (QA) computing system built to apply advanced natural language processing, machine learning, information retrieval, knowledge representation, and automated reasoning to open-domain question answering. It was named after IBM’s first CEO, industrialist Thomas J. Watson. In February 2013, IBM announced that Watson’s first commercial application would be decision management for lung cancer treatment at Memorial Sloan Kettering Cancer Center in New York City, in conjunction with WellPoint (now Anthem).
In 2012, Google introduced its virtual assistant Google Now. It first shipped with Android 4.1 on the Galaxy Nexus smartphone. The service became available for iOS in 2013, though without most of its features. In 2014, Google added Now cards to the notification center in Chrome OS and in the Chrome browser; later, however, it removed the notification center from Chrome entirely. In 2016, the evolved version of Google Now was introduced under the name Google Assistant.
In 2014, Amazon introduced its virtual assistant Alexa, capable of voice interaction, music playback, setting alarms, making to-do lists, streaming podcasts, playing audiobooks, and providing weather, traffic, sports, and real-time news. Alexa can also control smart devices, acting as a home automation hub. Users can extend Alexa’s capabilities by installing “skills,” additional functions developed by third-party vendors.
In the same year, Microsoft introduced its virtual assistant Cortana, named after a synthetic intelligence character in Microsoft’s Halo video game franchise. The character’s voice actress Jen Taylor returned to voice the personal assistant’s US-specific version. Cortana can set reminders, recognize natural voices without the requirement for keyboard input, and answer questions using information and web results from the Bing search engine.
In 2016, Google introduced Allo, a smart instant messaging mobile app tied to phone numbers. Its “Smart Reply” feature uses Google’s machine learning to suggest a few appropriate replies to the last message, and it also analyzes images to suggest suitable responses. Like the Smart Reply feature in Google’s Inbox, it learns from the user’s behavior and improves its suggestions over time.
In 2016, Microsoft released another chatbot, Tay (reportedly an acronym for “thinking about you”), on Twitter. The bot was shut down within 16 hours of its release after it posted inflammatory and offensive tweets.