Ready to talk to a bot therapist?
In a context of rising mental health issues and a dearth of therapists, will AI be able to pick up the slack?
I don’t know about you, but I’ve started to get more comfortable talking to a machine. Do you enjoy the experience, or does it spook you? I started with Alexa (Amazon), then moved to Siri (Apple). Talking to Siri on my watch feels so Star Trekky. I’ve spoken to a few Pepper robots in various parts of the world, most recently in the form of a tourist agent in Madrid, Spain. Is the experience perfect? Hardly. Does Siri get it right all the time? Gosh no. But will we ever be able to have a long, hearty conversation with a bot? The short answer is probably not in my lifetime. However, things are progressing at a rate of knots, so it’s not out of the question.

As far as I’m concerned, a meaningful conversation has two important outputs: first, the thoughts, feelings and learnings that I experience personally; and second, the connection formed with the other person, who is hopefully experiencing the same vibe.
“It’s good to talk,” so went a famed British Telecom ad many decades ago. Check out one of the spots with actor Bob Hoskins, below:
In today’s world, the message feels absolutely contemporary and perhaps more needed than ever. Back in the 1990s, it was good to talk. Now the focus seems to be on the need to be heard, which is subtly different. Where social media promised connection, it’s now more frequently a broadcast soapbox where few listen. The average Facebook user likes 12 posts, comments on 5 and shares just one each month. [Source] The average photo posted on Instagram gets 0.81% engagement. We’re starved for listeners and all seem to have a million things to say. People have plenty to get off their chests and no one to say it to.

Article after article describes the rise of loneliness, depression and suicide among the younger generations. This recent news report from Sky out of Australia describes, in an interview with Professor John Carroll, how the younger generation is at once suffering from more mental health problems while also losing its social skills. Carroll cites the damaging principle of “safetyism” in universities, which encourages censorship and was well documented by Jonathan Haidt and Greg Lukianoff in their book, The Coddling of the American Mind. It’s now cliché, but we’ve never been more connected yet felt so disconnected. And I would insist that this issue preceded the troubles caused by COVID, which certainly exacerbated an already bad situation.

Moreover, with the rise of mental health awareness and mounting numbers of people with mental health issues, the waiting lists to see therapists have lengthened horribly, making the need for listening ears all the more urgent. A site like 7cups is interesting because it provides live ears, even if not all those listeners are trained therapists. But the prevailing option seems to be to use Artificial Intelligence to fill the gap. Personally, I’ve been using my writing as my best therapy.
When I wrote my book, Heartificial Empathy, published in 2019, it was born out of my own need to delve deeper into empathy. On a personal level, I had just lost a close friend to suicide and was thinking about how I might have done better. At a societal level, I was observing the animosity and morosity, and thinking how useful a tool empathy could be for everyone to develop. The central question was whether AI is going to help us, or teach us, to be more empathic. At the time of researching and writing the first edition of the book, there weren’t many specialists looking at the encoding of empathy into a machine. In the intervening five years, the need for empathy and access to therapy has been heavily accentuated. The pandemic and the consequences of the lockdowns certainly contributed. We’ve also got economic uncertainty, a war on the doorstep of Europe and ever more strident worry about climate change. In the same interval, much has happened with tech and, specifically, with AI. The prospects of having a machine that’s capable of demonstrating empathy have started to come into focus. The rush to build the biggest large language model (LLM) has been remarkable. If you’ve not tried any of these conversational AI bots, let me present a few of the standouts.
Kooky Kuki
In a prior Dialogos article, I looked at the notion of how a bot might be able to engage in a meaningful exchange with a human being. I explored how artificial intelligence is being trained to hold longer, more ‘human’ conversations that include agency, humour and imperfection. The Kuki (formerly Mitsuku) bot created by Steve Worswick (Pandorabots) is remarkable and has won the Loebner Prize Turing Test five times, a tremendous performance. I’ve chatted with Kuki dozens of times over the years and always get a kick out of it (granted, I’m a bit geeky). Evidently, I’m just one of 25 million users with whom Kuki has exchanged over 1 billion messages. [source] To move a bot like Kuki across the uncanny valley will take many more years of learning. For starters, it would need to reply a little slower. Its speed of response is breathtaking, so much so that it makes the conversation feel completely unnatural. You’re no sooner finished typing your question and hitting <enter> than it has already flashed up a response. In one of the quirks of the kooky Kuki, Worswick gave the bot agency to push back when it feels attacked. Using AIML (Artificial Intelligence Markup Language) combined with human-assisted replies, Kuki is able to avoid toxic or malevolent actors.
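To give a flavour of that rule-based approach, here is a minimal sketch in Python of AIML-style pattern/template matching. The rules are invented for illustration; they are not Kuki’s actual knowledge base.

```python
import re

# Minimal sketch of AIML-style pattern/template matching.
# Rules pair a pattern (with "*" as a wildcard) with a reply template;
# the first matching rule wins. All rules here are invented examples.
RULES = [
    ("HELLO*", "Hi there! What would you like to talk about?"),
    ("MY NAME IS *", "Nice to meet you, {0}."),
    ("*", "Tell me more."),  # catch-all, like AIML's default category
]

def respond(user_input: str) -> str:
    text = user_input.upper().strip(" .!?")
    for pattern, template in RULES:
        # Turn the AIML-style wildcard into a regular expression.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*)") + "$"
        match = re.match(regex, text)
        if match:
            captured = (group.strip().title() for group in match.groups())
            return template.format(*captured)

print(respond("my name is Minter"))  # -> Nice to meet you, Minter.
```

A real AIML bot holds many thousands of such categories, which is why a dedicated author like Worswick can make the exchanges feel surprisingly lively.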
The Meta conversation app
Another formidable development is the open-sourced BlenderBot, created by Meta (Facebook). BlenderBot 3 was released in August 2022; it stays up-to-date through live access to the Internet and ‘remembers’ past exchanges with users, making it a complete package for businesses wishing to adopt its off-the-shelf system. In addition to a huge dataset and 175 billion parameters with which to work, it was ‘trained’ using Blended Skill Talk (BST), which entails a mixture of three skills: expressing a distinct personality, applying updated knowledge, and demonstrating empathy. For now, you’ll need to be based in the US or use a VPN to get access. If you’re curious, you can check out an exchange here.
ChatGPT — a marketing success…
ChatGPT, run by OpenAI and built on the back of GPT-3.5, launched a few months after BlenderBot 3, and has brought another level of intrigue to the natural conversational style, including its ability to admit (on occasion) that it’s wrong. Whatever its failings, ChatGPT has been a marketing coup. It has entered the mainstream and has popped up in conversations in all sorts of unexpected situations. Even retired friends have had fun playing with it. While the technology is far from perfect, it’s certainly raised a lot of eyebrows. With so much attention and money being thrown at AI, in so many different capacities, it looks like the frosty winter of AI has permanently thawed. Unlike BlenderBot, ChatGPT has the odd particularity of being stuck in time, limited to the dataset on which it was trained, which basically means it doesn’t ‘know’ anything after 2021.
The AI marathon is turning into a sprint…
While Artificial Intelligence has been talked about for many decades, it seems we’ve turned a proverbial corner. To the extent that the success of AI will depend on a blend of humongous datasets, vast amounts of energy and the best minds and programmers, the race won’t go unnoticed or without a wrinkle. Without a doubt, it will be the deep-pocketed businesses, i.e. the big tech GAFAM players and BATX in China, who will take the lead in building out these LLMs. Each company has its own approach, more or less proprietary. In light of these changes and exciting developments, I’ve taken on updating Heartificial Empathy for a second edition. You can sign up for news of its release here.
Ready for an AI therapist?
Back in the 1960s, when the ‘joke’ chatbot ELIZA was created using a simplistic Rogerian style of programming, the irony was that people involved in the project got legitimately hooked. Even if they knew only too well that it was a ‘dumb’ machine, they were attracted by the fact that, at last, something was listening to them. Fast forward sixty years and the need to be listened to has only increased. Mental health issues have seemingly spiralled out of control, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is five times bigger than its first edition and there aren’t enough therapists to meet the demand. For sure, there is no magic, all-knowing machine that is able to provide perfect therapy. ChatGPT certainly doesn’t have it … yet.

An interesting use case was run through KokoCares, a peer-to-peer mental health service out of San Francisco. Using GPT-3 to draft responses, the humans tending the Koko app were able to vet and edit each response before publishing it. In total, some 30,000 messages using the GPT prompt were sent out to 4,000 users. Koko cofounder Rob Morris noted that “People who saw the co-written GPT-3 responses rated them significantly higher than the ones that were written purely by a human. That was a fascinating observation.” When Morris tweeted the results (below) on Jan 6, 2023, he received a barrage of criticism.
Obviously, there are myriad ethical questions to sort through and a need to evaluate the efficacy for all manner of ‘patients.’ When it comes to machines taking the place of humans, there’s an inherent tendency (need?) to gauge the machine’s effectiveness against a higher standard than we use for human beings. We have clearly not reached an acceptable level yet. At one end of the spectrum, you see people wanting to layer therapy onto these commercially trained LLMs. While these commercially available bots may be accessible, they’re far from ready for real-world extended conversations, much less for providing any therapeutic benefits. At the other end of the spectrum, therapists are trying to custom build a dedicated AI. In any event, whichever way you come at it, the legitimate therapists know that the route to bona fide therapy using AI is long. Yet I do believe that having access to an updated and smarter version of ELIZA could be useful.
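To appreciate just how simple the original was, here is a toy reconstruction in Python of ELIZA’s Rogerian trick: spot a keyword, swap the pronouns, and hand the statement back as a question. The rules below are illustrative, not Weizenbaum’s original script.

```python
import re

# Swap first- and second-person words so "I" statements reflect back as "you".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(user_input: str) -> str:
    text = user_input.strip(" .!?")
    # A Rogerian rule: turn a feeling statement into an open question.
    match = re.search(r"\b[Ii] feel (.*)", text)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.search(r"\b[Ii] am (.*)", text)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please, go on."  # default nudge to keep the person talking

print(eliza("I am worried about my job"))
# -> How long have you been worried about your job?
```

That a few dozen rules like these were enough to hook Weizenbaum’s own colleagues says less about the machine than about our hunger to be heard.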
Humanity in AI
In all my discussions with experts working on AI, there is no doubt that we’re getting better at encoding AI with humanistic elements and natural language. While there’s a temptation to go for the complete package, the smarter initiatives seem intent on perfecting a single element of humanity. For example, the AI will be trained to recognize certain emotions (Affectiva), to convert speech to text (Siri) or to anthropomorphize a machine physically (Sophia). Obviously, over time, these skill sets will start to be merged. But, for the medium term anyway, there’s little prospect that a machine will be able to reproduce the sum total of our human characteristics, leading to what’s often termed Artificial General Intelligence (AGI). Yet there is a good deal of optimism that we’ll get machines to learn how to interact with human beings in more extended conversations. With due training and safety measures, machines will become better at providing a degree of mental health care.

There’s Sonde Health, whose vocal biomarker technology helps detect mental health and respiratory issues through your voice. Kintsugi can detect clinical depression and anxiety through short clips of the voice. There’s Sparx, a serious game out of New Zealand that helps young people with mild to moderate depression, stress or anxiety through an interactive online game. There’s a lot of work going on at the USC Institute for Creative Technologies (funded at least in part by the US military) around identifying and treating mental health issues among veterans, including Virtual Reality exposure therapy for patients with post-traumatic stress disorder (PTSD). I recall the DARPA-funded SimSensei project, which uses camera technology to detect mood, including depression and anxiety. And there are a good dozen bots being developed and improved in the domain of automated online therapy, such as Wysa, Woebot and Youper.

For my podcast, I’ve interviewed two entrepreneurs, both with deep backgrounds in therapy, who are working to create online therapy using AI: Dr Grin Lord of Mpathic.ai and Scott Sandland of Cyrano.ai. With both coming from a therapist background, you can be sure they tread carefully, unlike some of the conversational bots mentioned above whose owners are commercial, publicly traded entities. I note that, while Scott’s and Grin’s end game is to provide affordable and accessible mental healthcare, both are honing their products via enterprise solutions, where the issues are less regulated and less complex.
In short, there are many initiatives that, when combined, will undoubtedly be able to provide some form of effective therapy. Though, as reported by Ganes Kesari in this Forbes article, “AI is as good as the data from which it learns. [Head of Predictive Analytics at a large US pharma company, Sathiyan] Kutty notes that AI solutions can be biased because the data often come from people experiencing mental health struggles rather than those who are healthy.”
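To make Kutty’s point concrete, here is a toy sketch, with invented numbers, of how a skewed training sample misleads a model about the general population.

```python
# Toy illustration of sampling bias: a naive model that learns the base
# rate from a skewed clinical dataset misjudges the general population.
# All numbers below are invented for demonstration purposes.

train_prevalence = 0.80  # assumed share of "struggling" cases in a clinical dataset
true_prevalence = 0.20   # assumed share in the general population

# A lazy classifier that labels everyone "struggling", because that is
# the majority class in its training data.
accuracy_on_training_data = train_prevalence  # 80 percent: looks impressive
accuracy_on_population = true_prevalence      # 20 percent: fails in deployment

print(f"Accuracy on clinical data:      {accuracy_on_training_data:.0%}")
print(f"Accuracy on general population: {accuracy_on_population:.0%}")
```

Real systems are subtler than this caricature, but the lesson holds: a model only knows the world its data shows it.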
Our relationship with machines
In an online survey I conducted in 2019, I asked the question: “Do you believe you could ever have feelings for a machine?” 34% replied yes and a further 20% said maybe, suggesting that over half were already predisposed to the idea.
If I ran that survey again today, I have no doubt that the numbers would rise. There are many variables to figure out as advancements in tech and our understanding of our brains (and minds) evolve. Our trust in a machine will surely be in proportion to our trust in the business (and business model) behind it (assuming that fact is known). Beyond the therapeutic efficacy, to what extent will the data we submit to the machine be kept properly confidential? How will such information and history be treated by the medical profession, mental health care practitioners, public health institutions, pharma and insurance companies? Harkening back to the ELIZA project, there’s obviously a space for a machine that, with almost infinite time on its ‘hands’, could provide a useful ear. With an ability to remember all interactions with each individual FAR better than a human being who relies on taking notes, the machine as therapist offers an important avenue for providing help to those in need.
The conversation as healer
While I’m excited by the developments in AI, I remain concerned about the root cause of the burgeoning mental health crisis. I don’t feel that we are having enough deep conversations about this topic. In an interview I had with Martijn Schirp, cofounder of The Synthesis Institute in the Netherlands, which provides safe, legal, medically supervised psychedelic retreats, we discussed the gaping challenge from which even well people suffer: self-knowledge. This has given rise to what I term the Avatar Trap, where we attach more importance to the image of ourselves, seeking a safer, risk-free and longer existence, presenting a profile that shuns pain, is detached from reality, and is enlivened through major causes. However, in the gap between this portrayed personality and our real selves, we create a void. And in that emptiness, we suffer from a lack of meaning.

With the best of intentions, society (schools, parents, media, business…) has privileged any and all solutions that provide safety and avoid pain or suffering, with a slew of medical solutions. If all this seems like progress, why are mental health issues so much on the rise? Like machines, psychedelic-assisted therapy (PAT) is held to a higher standard than conventional medications, whose efficacy levels are systematically inferior and whose lists of side effects are patently longer. It’s as if we don’t want to hear of any other solution.

Meanwhile, as Johann Hari proposed in his book, Lost Connections, I think the largest avenue for a better future will pass through good old-fashioned connection and talking. As a society, we need to be able to hold more meaningful conversations about trickier and more sensitive topics. We need to control our wish to ‘ghost’ others or to break up via a text message, which has unfortunately become an accepted practice. In a 2018 survey by SimpleTexting of 500 millennials, 57% admitted to breaking up with someone over a text message, while “69% said they've been on the receiving end of a breakup text before.” [source] And, I suggest, we should connect with one another, especially people with alternative opinions and backgrounds, or even strangers. Call an old friend you’ve not spoken with in years. Contact an elderly relative who’s living alone. Write a handwritten letter to someone you care about. Send a message of gratitude to someone who deserves it. These are the kinds of old-fashioned gestures my granny might have encouraged.
I believe that a conversation with a machine can certainly be useful, entertaining and oh-so-convenient in the absence of someone with a beating heart. Moreover, as the giants of AI grapple with natural language processing and the ever-expanding large language models (LLMs), they will inevitably learn a lot more about how our brains function. The biggest learning may be in getting a better understanding of how we all function and how we might learn to converse better.