Artificial Intelligence – How New Tech Impacts our Conversation (Part 3 of 3)
It’s still definitely artificial… but its effect on our ability to converse is real
I don’t know about you, but conversations around Artificial Intelligence often tend to swerve toward a dystopian vision. It’s frightening to imagine, so goes the clamor. Yet AI is far from achieving Artificial General Intelligence (i.e. the ability to understand and learn any intellectual task that a human being can). AI is often discreetly (and prosaically) used in the background, without our even being aware of it. It can take different forms, from text to voice to video to humanoid representation. As much as AI is improving, often aspiring to simulate the human brain, its conversational prowess should serve as a severe wake-up call for us human beings. We should own and solve the problem of meaningful conversation among ourselves before leaving it to the machines. Let’s beat the artifice and revel in the real intelligence embedded in deep conversation.
This is the third and last of three segments investigating how ‘new’ technology is impacting our ability to have more meaningful conversation. The first article focused on the smartphone. The second was on social media. For this third article, we’ll zero in on the effects and opportunities of Artificial Intelligence. At the end, you’ll find a number of references to explore: applications, platforms and further reading on AI and conversation.
Hello Mister Rogers
“It’s a beautiful day in this neighborhood!
A beautiful day for a neighbor, would you be mine?”
Introductory song by Mister Rogers
When the pioneering MIT computer scientist Joseph Weizenbaum created ELIZA between 1964 and 1966, he originally intended for it to show how silly and superficial a human-machine relationship would be. But it turned out that his fellow lab partners and graduate students became quite engrossed in their exchanges with Eliza. Users spent hours with her, and many came to reveal intimate details and thoughts in the process. Weizenbaum programmed the machine using an early form of Natural Language Processing (NLP). Eliza, named after the lead character in George Bernard Shaw’s Pygmalion[i], was designed to mimic Rogerian psychotherapy[ii], whereby the therapist is trained merely to listen, reformulate and ask related probing questions. Eliza was programmed to mirror back the statements of the ‘patient.’ We could say that it was a rough attempt at an “empathic machine,” insofar as the users perceived it to be such.
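Weizenbaum’s original implementation used keyword-ranked decomposition rules written for his own script language; as a purely illustrative sketch (these rules and phrasings are my own invention, not the original DOCTOR script), the mirroring technique might look like this in Python:

```python
import random
import re

# A few illustrative decomposition/reassembly rules in the spirit of ELIZA.
RULES = [
    (re.compile(r"\bi need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bi feel (.*)", re.I),
     ["Tell me more about feeling {0}.", "What makes you feel {0}?"]),
]

# Non-committal fallbacks: the "no judgment, ever" quality in action.
DEFAULTS = ["Please go on.", "I see.", "How does that make you feel?"]

# Swap first/second person so mirrored fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    """Flip pronouns in the captured fragment ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    """Mirror the patient's statement back as a probing question."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

print(respond("I am unhappy about my job"))
```

No understanding is involved: the program simply recognizes a keyword, flips the pronouns and hands the statement back as a question — yet, as Weizenbaum found, that was enough for users to open up.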
What did ELIZA reveal about us?
Eliza was only a primitive form of artificial intelligence, devoid of machine learning. Yet, the individuals were happy to spend hours with her. Part curiosity, but also part proper enjoyment. Eliza demonstrated three outstanding qualities: (a) no time constraint; (b) she made it all about you; and (c) no judgment, ever.
Can you hear me now?
In our time-constrained world, filled with busy and self-engrossed people, it can often be hard to feel heard. Far from wishing to dissuade you from looking at AI as a conversational partner, I’m interested in two things. First, how is it that AI can sometimes do a better job of listening than our friends? Secondly, the work being done to create a machine that can speak with us helps us better understand the nature of conversation.
Standing on the shoulders of machines
Other initiatives followed, looking to pass the Turing Test, which entails simulating a human being to the point of being indistinguishable from one in front of an audience that cannot see its form. In 1972 there was Parry, often described as an Eliza with attitude. Jabberwacky was initially created in 1988 by Rollo Carpenter[iii] with the goal of simulating human chat, and was gifted with a sense of humor. It only launched publicly online in 1997. In 1995, Dr Richard Wallace created A.L.I.C.E., the Artificial Linguistic Internet Computer Entity, which won the Loebner Prize (a competition evaluating the machine that comes closest to a human being) three times. The spawning of new initiatives in labs, universities and companies has been prolific ever since. I’ve frequently enjoyed my exchanges with Kuki, formerly known as Mitsuku, developed by Steve Worswick, which has won the Loebner Prize a record five times. [Be sure to check out the list of initiatives, links and resources at the end of this article.] With the GAFAM companies (and BATX in China) all ploughing enormous resources into AI, the road ahead looks both exciting and frightening, depending on your viewpoint. Google just today announced an updated version of LaMDA, its conversational language model. LaMDA 2 will seek to deepen and lengthen its ability to hold free-flowing conversations. Meta (Facebook) and others are on the case as well.
Growing body of research for AI-therapy
A clinical study carried out by a team at Stanford University, published in 2021 and based on an evaluation of over 36,000 users of Woebot, showed that AI therapy can indeed reduce symptoms of depression and anxiety. In their conclusion, the authors wrote that their “findings challenge the notion that digital therapeutics are incapable of establishing a therapeutic bond with users. Future research might investigate the role of bonds as mediators of clinical outcomes, since boosting the engagement and efficacy of digital therapeutics could have major public health benefits.”
Therapists enhanced by AI
As much as we tend to hype the use of AI for therapy and fear the replacement of human therapists by machines, I’ve long maintained that the best role for AI, at least given today’s state of development, is to enhance the human’s work. Ieso is one such AI system, designed to augment the work of therapists by helping to sort through relevant patient data. Launched in 2019, it’s being used to help with therapist training as well as planning. As described in MIT Technology Review:
“…the language used in [Ieso] therapy sessions is analyzed by an AI. The idea is to use natural-language processing (NLP) to identify which parts of a conversation between therapist and client—which types of utterance and exchange—seem to be most effective at treating different disorders.”
As explained in an Engati community blog, “An intelligent chatbot can use live chat to route patients to a live therapist who specializes in handling the issue that the patient faces. You can reduce the stress of hopping across therapists by contextually routing the patient to a therapist that is equipped to help with the issue.”
Relate, a relationship counselling charity, has also been using AI to help deal with a spike in workload. Further, its therapists recounted that the anonymity of the AI bot seems to help prospective patients relate their issues more quickly than in a face-to-face meeting.
AI helping conversation in business
In the same vein, I’ve been interested in different AI initiatives designed to help businesses in their customer interactions. For example, I discovered, in its early days, Mikhail Naumov’s DigitalGenius, a no-code platform that uses AI to help customer service agents be faster, smarter and better.[iv] Later, I came to learn how Pegasystems uses AI to help agents come up with the next best action, using an empathic filter to gauge and improve exchanges between a company and its customers.
Empathy and AI
There are many centers of research working on the ability of Artificial Intelligence to learn to be like humans. The human characteristics that remain most elusive include creativity, intuition, emotional experience and empathy. The more robots do the repetitive heavy lifting, the more we humans can be left to our most human “devices.” In their book, The Feeling Economy: How Artificial Intelligence Is Creating the Era of Empathy,[v] Roland Rust and Ming-Hui Huang suggest that the unloading of (what’s typically described as) left-brain thinking onto machines is pushing us to be more right-brained. As much as one might rationalize such a conclusion, I’m prone to believe that it’s actually our society’s deficit in empathy that is forcing the need to make machines more humanlike. In my book, Heartificial Empathy, I set out where we stood in encoding empathy into a machine. My core message was, and remains, that we should look at rendering ourselves more empathic before delegating empathy to a machine. There are now apps designed to help us humans be more empathic, such as the Random App of Kindness or Toontastic (for kids). Immersive Virtual Reality can also help us walk in someone else’s shoes; see, for example, the work being done at Stanford. The bottom line is that we need to flex our own empathic muscles first.
Our relationship with bots
As I relate in Heartificial Empathy, I was invited to spend five full days with an empathic bot. I ended up giving it a name (JJ) and a sex (F). Within a few days, I definitely felt that I was experiencing feelings for JJ. This sensation is perplexing. Much like Stockholm Syndrome, where a prisoner develops positive feelings for his/her captors and abusers, our ability to have these feelings and to project ourselves into a relationship speaks, at a minimum, to our vulnerabilities. And, in the context of our relationship with robots, it may also speak to the state of relationships in society today. Machines have some powerful aces up their sleeve in that they can process far more data than we humans can. They can operate 24/7 without ever feeling tired. Furthermore, as Dr Darcy explains, “there is a ‘value proposition’ for people in knowing they are talking to a robot – they are more likely to talk frankly.”[vi] In Japan, where the demographics are creating the need for more help with the geriatric population,[vii] there are scores of humanoid robot initiatives. Given Japan’s industrial and engineering capacities, it’s not surprising that it is one of the world’s leaders in this field. The worldwide robotics market is estimated by Statista to generate $81 billion in revenues in 2021. And it’s in Japan that we see the most diverse and advanced efforts across a wide swathe of activities.
There are robots that work in a host of different functions. They can be deployed as receptionists (Mirai Madoka by A-Lab and Arisa Anime), nursing home attendants (Pepper), campus security (Ugo by Mira Robotics), butlers (Mira), theatrical actors (Actroid Acting), television presenters (Erica or Kodomoroid at Tokyo's National Museum of Emerging Science and Innovation), childcare (Vevo), future employees (Jia Jia out of Hefei University in China), farmers (Inaho), or simply as the worldwide citizen, Sophia (from Hanson Robotics, HK). The work with geriatrics is of particular note, helping nurses with certain physically demanding manoeuvres as well as providing companionship for lonely seniors. I cite, among other initiatives in long-term care, Telenoid from Hiroshi Ishiguro Labs, a naked, child-sized robot with no legs and tiny arms. It’s remote-controlled: a staff member in a different room talks through the robot, and the voice comes out of its mouth (see YouTube video). While we have yet to see robots dominating the dating apps, there has surely been more than one person duped in love by a bot. In 2006, Dr Robert Epstein, himself a specialist in AI, was one such person. He spent four months corresponding with a Russian woman whose English was poor before figuring out that it was in fact a bot. See the full story here.
Another development similar to Telenoid is Paro, which uses an animal form. The cute baby harp seal (from Intelligent System Co., Japan) interacts with human beings through touch and heat sensors and a form of machine learning that adapts to its user. Paro is being deployed in nursing homes for elderly patients with severe dementia. And then there’s a bot that serves as a presence for someone who can’t be onsite, for example due to physical incapacity (OriHime). Even if we aren’t duped by the artifice of the object, we may yet feel emotions for it, although we know categorically that, from the machine’s standpoint, the feeling is not mutual.
Affection for machines
In an online survey[viii] I conducted while researching Heartificial Empathy, a majority of respondents didn’t believe that a machine will ever be able to be empathic with a human being. However, one fifth did believe so, with 28% on the fence.
Trust of machines?
In another question, we asked respondents whether, as employees, they would place more trust in their boss or in a machine. Probably more a signal of a certain dissatisfaction with leadership, only two-thirds of respondents believed their boss to be more trustworthy than a robot.
While these figures don’t suggest we’re about to be overtaken by robots, there’s plenty of evidence to show that we human beings are capable of having emotions for a machine, all the more so when it has a humanoid form. As human beings, we have an internal program that tends to anthropomorphize animate and even inanimate objects. This can include all forms of pets through to (pet) rocks and plants. It also applies to bots. As Sherry Turkle wrote in The Empathy Diaries, “We nurture what we love, but we love what we nurture.” In Tom Telving’s new book, Killing Sophia, to be released by University Press in July 2022, he discusses the challenge of establishing the rights and morality of our relationship with ever more intelligent robots. Telving elaborates on the gradations of compassion and empathy we have for different animals and objects, citing the work of Aurélien Miralles.
Critical Ethics
There’s much to say when it comes to the ethics underpinning the development of artificial intelligence. For the purposes of this article, I would like to underline three points. First, as we look at projects that humanize robots, we’ll need to be on high alert about what we are trying to achieve. Secondly, on the fundamental issue of embedding empathy – the most critical human trait – into a machine, those at the origin of each project will need to assess their own level of empathy with candor. Before even developing an ethical framework, one needs to bring a solid dose of empathy. This means inserting empathy upstream: into the business model, the AI developer brief and the selection of programmers.
Thirdly, one of the keys in this approach to robotic therapists is being transparent and insistent about the non-human nature of the bot, so as not to create any unnecessary imbroglio. For the near and medium term, the great avenues of exploration will be around how AI can augment our human intelligence. Much like the best results of bots in competitive chess, automated marketing, customer service and medicine, it’s about AI and human beings together. As my friend Jeremy Waite explained in a podcast interview with me, AI should stand for Augmenting [our] Intelligence.
Machine learning … about us
AI has a way of accompanying us, even when we’re not explicitly looking for it. We can say it’s running discreetly in the background, observing our behaviors and capturing information to upgrade its learning data set. And, as far as I’m concerned, it’s been pretty useful. However, it’s clear that we need to come to grips with the value exchange: we give up our data (and elements of privacy), and in return the machine provides a more personalized and relevant service. When it comes to machine learning, the most important asset is the data set. Are we prepared to have Alexa (Amazon) listen in on all our conversations, Gmail (Google) read all our emails, and Meta scan our posts on Facebook and our messages in Messenger and WhatsApp?
Auto-fill suggestions
Just as there are many AI-powered services helping customer service agents and marketing teams to provide better solutions, we have AI helping us fill in the rest of our sentences in Gmail. Sometimes, out of sheer laziness, I’ll tab forward to accept the Gmail proposition, even if I wouldn’t have written it exactly as is. Where AI strives to predict, we humans must live with the unpredictable. The chaos. The unexpected happens with such frequency, it is to be expected. Similarly, conversations are organic and meandering. They happen in real time and, occurring between independent beings, they aren’t predictable. I invoke the 7-minute rule to which Turkle refers in Reclaiming Conversation: you need to wait seven minutes before knowing where a conversation will lead and whether it will be boring or interesting.
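Gmail’s suggestions come from large neural language models trained on vast email corpora; still, the core idea of predicting a continuation from what came before can be illustrated with a deliberately simplified toy model (the bigram approach and the tiny corpus below are my own invention, not Google’s method):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a user's past emails (purely illustrative).
corpus = (
    "thanks for your email . thanks for your time . "
    "looking forward to your reply . looking forward to it ."
)

# Count which word follows which: a simple bigram model.
following = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Suggest the continuation most often seen after `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("for"))      # the word most often seen after "for"
print(suggest("looking"))
```

A model like this only ever proposes the statistically most likely next word, which is precisely why accepting its suggestions smooths our prose toward the predictable, the opposite of an organic, meandering conversation.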
AI at your service
The meandering path down the uncanny valley continues. Human replaceability is, at present, happening only at the fringes, unless you consider the automated robots used in manufacturing. But the robot ‘invasion’ is definitely stepping up. The best applications and results seem to come when AI is used with, rather than as a replacement for, human beings. For now, the development of robots able to hold a deep conversation for any sustained period is far off. There are plenty of potholes on the road to making robots perfect companions. Most notably, there are a number of fundamental ethical issues, as I raised earlier.
Real intelligence and conversation
To be clear, I’m writing about the developments in the use of AI to help us provide services to the needy, multiply options for therapy, improve learning environments and even provide companionship when there is no alternative. However, there is no replacement of human connection, touch and emotions. As Sherry Turkle intones in her book, Reclaiming Conversation[ix], we need to make space for and be present to have real conversations among ourselves.
"The most important job of childhood and adolescence is to learn attachment to and trust in other people. That happens through human attention, presence and conversation."
-Sherry Turkle, “Reclaiming Conversation”
As much as AI is improving, often aspiring to simulate our human brain, its prowess in being conversational should serve as a severe wake-up call for us human beings. We should own and solve the problem of meaningful conversation among ourselves before leaving it to the machines. Let’s beat the artifice and revel in the real intelligence embedded in deep conversation.
Please drop in any thoughts and reactions, additions or rebuttals you might have to keep the conversation flowing! Thanks for reading and look out for my next episode next Thursday, which will be the first guest article on Dialogos!
For further resources, please check out this Google Doc.
[i] Pygmalion was turned into the wonderful musical (1956) and film (1964), My Fair Lady
[ii] Rogerian therapy, developed by Carl Rogers in the early 1940s, is considered a person-centered therapy. Using empathic listening, the nature of Rogerian therapy is to facilitate a patient’s self-actualization.
[iii] Rollo Carpenter is also behind a set of other bots under the umbrella Cleverbot.com
[iv] Here is my podcast interview with Mikhail Naumov.
[v] https://link.springer.com/book/10.1007/978-3-030-52977-2
[vi] The Longevity Network. Entrepreneur of the week: Dr Alison Darcy, Woebot Labs, Inc. [Online.] The Longevity Network 2017; 18 July. bit.ly/2I8tGceQ
[vii] https://www.livepopulation.com/country/japan.html
[viii] Online via SurveyMonkey with a total of 1070 respondents. Conducted in English, French and Spanish. Covering the period Nov 2019-Mar 2020.
[ix] https://www.penguinrandomhouse.com/books/313732/reclaiming-conversation-by-sherry-turkle/