What ELIZA can teach us about therapeutic AI


Chatting robots, or “chatbots,” have made their way into every industry, and now they are making waves in healthcare. The concept of a virtual doctor may seem novel, but versions of the idea have disappointed patients and clinicians alike.

The practice of making a computer respond like a doctor is a bad idea, but it is not a new one; it spans decades. Long before Siri or Alexa, there was DOCTOR ELIZA, and its creator specifically warned us not to do this.

In 1966, scientists at MIT developed a set of computational rules for what became known as “Natural Language Processing.” Aptly named, the technique allowed a computer to parse natural language into meaningful elements. Today, when I type “where is the closest Chipotle?” into a search engine, the results I receive are not documents containing the phrase “where is the closest Chipotle?”; instead, I get a list of actual Chipotle locations near me. This translation between my search terms and the results I receive is Natural Language Processing at work.

This is the same technology we see in chatbots today, and it is, indeed, deceptively simple:

A user inputs a statement. The computer scans that statement for a specific keyword or phrase. When the word or phrase is found, the computer looks up a “rule” associated with that keyword and uses the rule to generate a response [1].
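
To make that loop concrete, here is a minimal Python sketch of keyword-and-rule matching in this spirit. The keywords, canned responses, and the respond function are invented for illustration; they are not Weizenbaum’s actual rule set.

```python
# A toy keyword-and-rule matcher in the spirit of the loop described above.
# These keywords and canned responses are hypothetical, not ELIZA's real rules.
RULES = {
    "mother": "Tell me more about your family.",
    "sad": "I am sorry to hear that. What do you think is making you sad?",
    "always": "Can you think of a specific example?",
}
DEFAULT_RESPONSE = "Please go on."


def respond(statement: str) -> str:
    """Scan the statement for a known keyword and return that keyword's rule."""
    for word in statement.lower().replace("?", "").replace(".", "").split():
        if word in RULES:
            return RULES[word]
    return DEFAULT_RESPONSE


print(respond("I am sad about my mother"))     # keyword "sad" fires first
print(respond("Nothing much happened today"))  # no keyword, default reply
```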

MIT’s Joseph Weizenbaum was among the first to build natural language processing capabilities into a computer program, a chatbot he called ELIZA. Weizenbaum himself regarded ELIZA as a parody of human-computer interaction; by his own admission, ELIZA represented the limited potential of “man-to-machine” intelligence. He wrote of his concern over his colleagues’ tendency to attribute emotional understanding to ELIZA after ‘conversing’ with the program, despite its lack of any real processing or comprehension.

As a demonstration of the artificial “closeness” that ELIZA could generate, he created a version of the software with a modified script: DOCTOR.

A user inputs a statement. The computer inspects that statement for its general meaning, rephrases it, and reflects the statement back to the user, or, if one can be found, asks a question related to the sentiment.
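
A rough Python sketch of that reflection step might look like the following; the pronoun pairs and the question template are simplified assumptions for illustration, not taken from the real DOCTOR script.

```python
# A toy version of the "reflect it back" behavior described above. The pronoun
# swaps and the question wording are simplified stand-ins, not the DOCTOR script.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am", "mine": "yours",
}


def reflect(statement: str) -> str:
    """Swap first- and second-person words, then hand the statement back as a question."""
    words = statement.lower().strip(".!?").split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return "Why do you say that " + " ".join(swapped) + "?"


print(reflect("I am unhappy with my job."))
# -> "Why do you say that you are unhappy with your job?"
```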

Upon the publication of his ELIZA work, Weizenbaum wrote:  

“[…] machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked […] its magic crumbles away. The object of this paper is to cause just such a reevaluation […] Few programs ever needed it more.”

Today, ELIZA is best known for these mannerisms, which mimic the verbiage of a psychotherapist. The concept has been reincarnated in various forms in the decades since, including therapeutic chatbots for your smartphone, like Youper, Wysa, and Reflectly.

Researchers have warned that the digital age has left us more prone to feelings of loneliness, isolation, and depression [3]; they’ve also shown that digital tools can help us feel more connected to the people we love [4], and can sometimes provide useful tools to improve our mental health [5].

The tech leaders of Silicon Valley have become infamous for their motto, “Move Fast and Break Things,” a saying that has not aged well amidst lawsuits, privacy scandals, and other public missteps. The science on the potential harm of chatbots is limited, but we know that feeling ‘misunderstood’ by a technology can have a direct and immediate effect on our basic needs and sense of belonging [6].

One fact is undeniable: technology has the potential to revolutionize healthcare. But perhaps we should heed the advice of those who built these tools, and move more slowly.

And try not to break anything.

References

  1. Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.
  2. Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. New York: W. H. Freeman and Company.
  3. Pittman, M., & Reich, B. (2016). Social media and loneliness: Why an Instagram picture may be worth more than a thousand Twitter words. Computers in Human Behavior, 62, 155-167.
  4. Shaw, L. H., & Gant, L. M. (2004). In defense of the Internet: The relationship between Internet communication and depression, loneliness, self-esteem, and perceived social support. Internet Research, 28(3).
  5. Bakker, D., Kazantzis, N., Rickwood, D., & Rickard, N. (2016). Mental health smartphone apps: Review and evidence-based recommendations for future developments. JMIR Mental Health, 3(1).
  6. Filipkowski, K. B., & Smyth, J. M. (2012). Plugged in but not connected: Individuals’ views of and responses to online and in-person ostracism. Computers in Human Behavior, 28(4), 1241-1253.

Diana M. Steakley-Freeman © 2019 | All Rights Reserved.