Can you have a meaningful conversation with a robot?
The state of conversational AI, some ethical considerations, and future prospects
I don’t know about you, but I remain positively curious about how the adventure with artificial intelligence will turn out. While there is plenty of worry and wringing of hands over a dystopian future where we’re all managed by uncontrollable autonomous robots, I think we would do well to focus on ourselves first. If there is a market for geriatric robot companions, therapeutic bots to compensate for the shortage of psychologists amid the current mental health crisis, and chatbots that can work 24/7 to manage an increasing demand for instantaneous customer service, it is because we consumers are creating the need. Moreover, if we don’t learn to practice among ourselves the art of having meaningful conversations, to develop our empathy and listening skills, and to adopt a measure of patience, it’s only natural that companies will turn to AI and machine learning to meet the burgeoning needs. Let’s fix ourselves before worrying about the evolution of AI. We’ll only program ‘good’ code if we know how to be good ourselves.
Will there be a day when our conversation with a robot will be genuinely meaningful? Another way to frame that question is whether such a conversation will ever be more meaningful than one held with another human being. The answer to this latter question is a highly confident YES, because it’s already happening. The reasons are multiple. First off, we human beings, the basis of comparison, are all too frequently incapable of or unwilling to engage in meaningful conversation. The second reason is that recent progress in the development of Natural Language Processing (NLP), combined with enhanced datasets and processing power, shows spectacular promise. Thirdly, we are already entirely capable of attributing meaning to what a machine says. Just think about how we extract meaning from horoscopes or shapes in clouds. In this regard, we’ve already witnessed (and I’ve seen personally, as described in my book, Heartificial Empathy) people engage in an exchange with a machine that is meaningful to the human being. Granted, the definition of meaningful will vary widely from one person to another. But let me show how much progress we’ve made and peek into the future.
Listen to me
The experience of a meaningful exchange with a machine began back in the 1960s. In 1964, MIT professor Joseph Weizenbaum created ELIZA, a bot that was initially designed as a sort of spoof, but ended up providing far more than a sense of entertainment for his lab colleagues. Using the principles of Rogerian therapy, ELIZA asked open-ended questions that bounced off the user’s own input. The net result was that some users really felt engaged in a genuine conversation. As such, they found themselves revealing intimate details to the machine. To wit, one woman famously requested that Weizenbaum leave the room while she pursued her exchange with ELIZA. [Source] Although ELIZA was merely reformulating and asking questions, its ability to continue the exchange without judgment was enticing. Then, in 1972, in an effort to provide a bot with personality, Kenneth Colby created PARRY, which has often been described as an ELIZA with attitude. There was more than intrigue in these bots. People were able to make connections and imbue their exchanges with meaning. You can find a fairly long exchange between PARRY and ELIZA on this Phrasee blog post that shows off their early conversational skills.
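ELIZA’s core trick, keyword spotting plus pronoun reflection, still fits in a few lines. Here is a toy Python sketch of the idea (the patterns and templates are my own invention, not Weizenbaum’s original DOCTOR script):

```python
import re

# Pronoun swaps: the heart of Rogerian-style mirroring
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# Keyword patterns and response templates; "{0}" holds the reflected fragment
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),  # fallback: an open-ended prompt, never a dead end
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person so the input can be echoed back."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance: str) -> str:
    """Return the first matching template, filled with the reflected fragment."""
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower().strip())
        if m:
            return template.format(reflect(m.group(1)) if m.groups() else "")
    return "Please go on."

print(eliza("I feel anxious about my work"))  # Why do you feel anxious about your work?
```

There is no understanding anywhere in that loop, only string substitution. That is precisely what makes the 1966 reaction so striking: the meaning was supplied entirely by the human on the other side of the teletype.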
Giants standing on the shoulders of minnows…
Since these two pioneering bots, there has been a vast array of other celebrated initiatives. Many of these have been sunsetted along the way, such as Jabberwacky (via Web.Archive), A.L.I.C.E., Smarterchild and Tay (Microsoft’s aborted 2016 effort). Thanks to the evolution in computational power, advancements in our understanding of AI and a flourishing market for conversational AI bots, there’s been considerable progress in NLP and ever more sophisticated machine learning. All the big tech players have their initiatives, including Watson (IBM), Siri (Apple), Google Assistant, Alexa (Amazon), Cortana (Microsoft virtual assistant), and Messenger (Facebook). There are also more independent initiatives, including Cleverbot, the creation of Rollo Carpenter, and Kuki, the brainchild of Steve Worswick.
What do you mean?
While Cleverbot doesn’t strike me as terribly smart, it is worth checking out. If nothing else, the warning statement at the top of the page is revealing on so many levels. It cautions:
may not be suitable for children - must be agreed by parent or guardian
it learns and imitates, is social content and aims to pass the Turing Test
can seem rude or inappropriate - talk with caution and at your own risk
the bot pretends to be human - don't give personal info even if it 'asks'
cleverbot does not understand you, and cannot mean anything it 'says'
I pick up on two specific elements. First, there are the legal and ethical challenges that programmers face. And secondly, in the last sentence, I smirked at the notion of meaning. The fact that the bot doesn’t ‘mean’ what it says doesn’t signify that someone interacting with it won’t impart meaning. And this is an important distinction.
Another independent initiative (part of PandoraBots) and by far the most eloquent of all in a text-based exchange, is Kuki (formerly Mitsuku). Kuki has won the prestigious Loebner Prize (a Turing Test competition evaluating the machine that is closest to a human being) five times. My personal interactions with Kuki have systematically been entertaining, if not enlightening. She has been coded with personality (i.e., a quirky sense of humor) as well as a solid sense of right and wrong. I’d highly encourage you to check her out.
Commercial needs oblige
The vast majority of conversational bots, though, are designed to deal with limited interactions and respond to the specific objectives companies set for them. There are dozens of companies (e.g., Aurachain, Nice, Botsify, Unibot) proposing white label solutions to manage chatbots online, whose main purpose is more about warding off the deluge of individuals seeking immediate customer service than engaging in long-form meaningful conversation. And this corresponds to a market need. The chatbot market is projected by Fortune Business Insights to grow from $396 million in 2019 to $1.9 billion by 2027, at a CAGR of 22.5% over the 2020-2027 period. In one survey, meanwhile, 64% of consumers claimed that 24/7 service is the most helpful chatbot functionality. (Source: The Chatbot) A recent survey from Kustomer, a customer service CRM solution now owned by Meta (i.e. Facebook), showed that tolerance for waiting for customer service responses has been whittled down to next to zero: 87% of the 3,000 global respondents said that they wanted more immediate resolutions to their enquiries and problems. Having a round-the-clock chatbot will become a sine qua non for larger consumer companies.
In a prior Dialogos chapter on how Artificial Intelligence impacts conversation, I addressed many of the different ways that AI has seeped into our lives, sometimes without us even being aware. But the work on conversational AI is fascinating in that it brings us face to face with our own reality and limitations. Outside of the sheer scale that the vast computing powers allow, these machines offer an endless supply of listening ears. The machines can process unfathomable amounts of data and have a memory that is far more robust than our own. Having bots that work consistently 24/7 and that won’t fret about working conditions is clearly a massive benefit to companies and can be useful to customers, albeit still in limited cases. But, in all this work on conversational AI and analyzing the nature of our brains, we will surely gain a better understanding of ourselves as well. As recounted by Roland Rust and Ming-Hui Huang in their book, The Feeling Economy, the developments in AI will force us all to be more empathic, intuitive and creative.
Practical use cases
As I mentioned in the prior article (Tech Take #3), there are other bots whose purpose is more to augment customer service agents (e.g. Pegasystems and DigitalGenius). And there is also a raft of bots designed to help with mental health issues. This can go from simple screening all the way up to therapy. For example, there’s Mpathic AI, a project run by Dr Grin Lord, whom I’ve had on my podcast. Other examples include Replika, Woebot and Wysa, all of which use, at least in part, Cognitive Behavioral Therapy (CBT).
The path forward
The question now is: where to from here? To what extent will we be able to create truly effective bots that can hold protracted, meaningful conversations with us? Certainly, projects such as GloVe, ELMo, BERT and Word2Vec have made huge strides in modeling human language. 2018 has been described as a pivotal year for machine learning on text (NLP). [See here for more] The example of Google Assistant (Duplex AI) booking a hairdressing appointment or restaurant reservation over the phone for a client is absolutely astonishing for its effectiveness and, dare I say it, humanity. Check out the voice conversations on YouTube here.
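The intuition behind embedding models like Word2Vec and GloVe is that words become vectors, and geometric closeness stands in for semantic similarity. A toy Python sketch illustrates the principle (the three-dimensional vectors below are made up for illustration; real models learn hundreds of dimensions from huge corpora):

```python
import math

# Hypothetical 3-dimensional embeddings; real Word2Vec/GloVe vectors
# typically have 100-300 learned dimensions.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Related words end up closer together in the vector space than unrelated ones
print(cosine(embeddings["king"], embeddings["queen"])
      > cosine(embeddings["king"], embeddings["apple"]))  # True
```

Models like ELMo and BERT go a step further by making the vectors contextual, so that “bank” in “river bank” and “bank account” get different representations, which is a large part of why 2018 marked such a leap.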
Having worked 16 years in the hairdressing industry, I saw how retaining good talent, especially at the front desk, is a regular plight for salon owners. Maybe Google Assistant will soon be able to manage salon and restaurant reservation lines as well?
Is perfection meaningful?
How far will bots go? Notwithstanding the tantalizing and controversial discussions around artificial general intelligence, we are obviously careening straight into some mightily important questions that reflect on the nature of our society. For example, to the extent we can program a computer to have boundless information (handily beating the best Jeopardy players, chess and Go grandmasters), its supreme intelligence will actually need to be dumbed down in order to be credible. Many of the bots answer my questions with such lightning speed that it is disconcerting. My brain doesn’t have the processing time to feel or reflect as one does in normal conversations with human beings. If it’s a know-it-all, that can be useful at times, but it surely can grate on you, too. As part of our human condition, we are programmed to feel useful, to explore the unknown and to overcome challenges. I don’t know about you, but an all-knowing, a.k.a. perfect, ‘friend’ is hardly appealing to me. Unlike in coding, where any imperfection in the form of a bug is a deal-breaker, perfection in conversation is neither possible nor desirable.
How does one define meaningfulness?
The bottom line about meaningfulness is that context matters. As Grin Lord, PsyD ABPP, wrote to me, “I would define meaningfulness as a subjective experience where humans are interpreting the conversation to have relevance to them or their context.” Thomas Telving, author of Killing Sophia and an expert on ethics of AI and robotics, wrote to me:
“I will answer through an example... The last time you and I spoke [in our podcast], I think we had quite a meaningful conversation about social robots, conversational AI and its possible impact on human civilization. It was meaningful to me, because I learned something from it myself, and because I took pleasure in talking to you. But, perhaps even more importantly because it was also two-sided in the sense that I also felt that you gained from the conversation. It developed in the sense that you asked a question which I tried to answer. This made you utter your reflection about my answer and so the conversation moved on in a sort of dialectic process bringing new insights to both of us.”
Making a conversation meaningful
In our quest to develop bots that can entertain meaningful conversation, the question of how – and who – defines meaningfulness is key. If we’re looking for the bot (like Cleverbot above) to ‘mean what it says’ and to do so with a form of conscious intentionality, this is implausible. Casting aside the controversy around the self-declared sentience of Google’s LaMDA (Language Model for Dialogue Applications) project, the salient point is what the human being interacting with the machine feels. If it is for the interlocutor to attribute meaning in the conversation with a machine, the reality is that we’re already there. As Dr Lord expanded about the potential to have meaningful conversations with a machine:
“I think [a meaningful conversation with a machine is] not only conceivable, it happens all the time. When I worked at a therapy chatbot company, regularly those that received coaching from the bot reported that the bot had empathy and understanding; the bot conversations provided a non-judgemental and supportive environment for conversations because the bot (unlike humans) followed strict conversational rules to elicit connection.”
Telving elaborated how “[r]esearch has showed that we humans tend to apply human characteristics to humanlike AI. This fits well with empirical observations showing how some people fall in love with chatbots.”
For now, it’s impossible to believe that a machine will reciprocate the feelings. However, there will certainly be efforts to quantify and encode feelings. For example, Liveperson, a publicly traded company and leader in conversational AI, created a Meaningful Automated Conversation Score (MACS). The proprietary system has two AI-based models:
A model that predicts what issues are present in the conversation
A model that estimates a conversation quality score.
It’s worth noting that the MACS score, used only on closed conversations, has just three possible levels: below average, average, and good. For now, we can assume average is not always acceptable. Were there to be a wider range, I can imagine that there’d be good use for “very bad” and rare use for “excellent.”
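Liveperson’s MACS internals are proprietary, but the two-model idea, detecting issues in a closed conversation and then mapping a quality estimate onto a small set of bands, can be sketched abstractly. In this Python toy, the issue markers and thresholds are entirely invented for illustration:

```python
# Hypothetical issue markers a first model might flag in a closed transcript;
# a real system would use a trained classifier, not keyword matching.
ISSUE_MARKERS = {"transfer me", "not helpful", "speak to a human", "still waiting"}

def detect_issues(transcript: list) -> int:
    """Stand-in for model 1: count turns that contain a known issue marker."""
    return sum(
        1 for turn in transcript
        if any(marker in turn.lower() for marker in ISSUE_MARKERS)
    )

def macs_band(transcript: list) -> str:
    """Stand-in for model 2: map the issue count onto MACS's three bands."""
    issues = detect_issues(transcript)
    if issues == 0:
        return "good"
    if issues <= 2:
        return "average"
    return "below average"

print(macs_band(["Hi, I need to change my address.", "Done! Anything else?"]))  # good
```

The interesting design choice is the coarse output: three bands are easy for a contact-center manager to act on, whereas a fine-grained score would invite false precision from what is, after all, an estimate.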
The future of meaningful conversational AI
Moving forward, it will be fascinating to see how programmers will seek to make the machine more meaningful in its intention. A raft of philosophers, sociologists, psychologists and language specialists have produced many tomes on the meaning of meaning (cf. the theory of meaning). Paul Grice (1913-1988), a philosopher of language, broke conversation down into four maxims: quantity (providing the appropriate amount of information), quality (in the form of truth), relevance, and manner (saying things in a sensible and orderly way). [Read here for more] To the extent it is we, fallible and imperfect human beings, who are briefing and encoding the machines, we’re far from owning a good blueprint for machines to copy. As per the observations that instigated this very Dialogos, meaningful conversation and debate between us human beings is sadly missing. I would argue, further, that our conversations are about revealing meaning as much as they are about providing meaning. In other words, we form meaning through the signs, symbols and exchanges that happen throughout a conversation. Our emotional filters, prior experiences, pattern recognition and desired relationships come together to compound if not cloud our sense-making capabilities. This is a tall order for a computer to mimic, even if it attempts to dumb itself down.
As described by Shayaan Jagtap, in this article in Towards Data Science,
“[t]hese are (more or less) the essential elements that give humans the ability to converse about abstract, grand, and subjective topics. The way we talk about religion, emotion, art, meaning, and other “human” values is purely based on the way our brain interprets the world around us, and how we reason through that understanding. It would be no shocker to say that a machine needs to do the same to contribute to such a discussion in a meaningful way, at least one that we can connect with.”
Future hurdles to overcome?
While we are already able to impart meaning into a conversation with a machine, there are several hurdles, going well beyond technical know-how, to making machines hold meaningful conversations. Many of these hurdles are human imposed, not least of which is our inability to hold meaningful conversations amongst ourselves. Meanwhile, there are a number of challenges that are ethical and existential in nature. A new study by MIT Sloan Management Review and BCG that explored responsible AI showed that 79% of those companies implementing responsible AI had frameworks that “remain limited in size and scope.” Corporations using AI will need to ramp up their efforts to create and implement an effective ethical framework.
From listening to manipulation
Telving wrote about how machines, well informed and subtly programmed, will be able to manipulate the user. A ‘sucker’ for the machine’s affections could easily be duped into doing or buying things against his/her better judgment. Telving wrote that “these can take [the] form of offering sexual conversations and even marriage proposals. This may sound downright crazy to many but nevertheless it is, as far as I am informed, a thriving business. Other forms of commercial abuse are possible too, of course.” Who will be overseeing those ethical parameters, especially when they lie hidden in black boxes, away from the regulators’ eyes? On a more upbeat note, Dr Lord wrote how there is positive research to be done in developing therapeutic AI that is better adapted “across language and cultures.”
So, the gauntlet is thrown down: shall we lean in to have more meaningful conversations amongst ourselves or hold out for Silicon Valley to fabricate a bot that beats us in our most human empathic, listening, and intuitive skills?
To return to my opening: if we don’t learn to practice among ourselves the art of having meaningful conversations, to develop our empathy and listening skills, and to adopt a measure of patience, it’s only natural that companies will turn to AI and machine learning to meet the burgeoning needs. Let’s fix ourselves before worrying about the evolution of AI. We’ll only program ‘good’ code if we know how to be good ourselves.
My thanks to my network for helping to research this article and especially to Dr Grin Lord and Thomas Telving for their valuable inputs.