Fabricating Conversational Extenders and Expanders?
AI is getting more chatty. What can we learn from conversational AI?
I don’t know about you, but I’ve been in many conversations where the questions are all one-way. It’s as if the only person who has anything interesting to say is the one being interrogated. I often slip into the mode of asking questions, and it rarely fails to engage. For the most part, the person being asked gets so caught up in the answering that they move through the ‘exchange’ without batting an eye. And it’s not uncommon that, at the end of such a conversation, the person announces how lovely it was to meet me. Sometimes, they’ll even say how interesting they found me. Has this ever happened to you?

If you could invent a person who was at your beck and call, happy to ask you questions, listen, reflect back and ask further probing questions, would you be up for that? Would you pay for it? Doesn’t it sound like a therapist’s job? In our busy world, where we are often overly wrapped up in our own lives, we forget to allocate the time and mindspace to focus on others. Part of having meaningful conversations is making sure there’s an ebb and flow, an exchange where both people are learning from the other. A therapist’s role is to allow you to learn about yourself through the words you use to express your thoughts and feelings. One of the qualities we need to cultivate is a genuine interest in the other person.
It seems that, these days, every other tech-related article references ChatGPT or some other form of generative AI. Even the 99-year-old Henry Kissinger has weighed in on the affair in a recent WSJ article, “ChatGPT Heralds an Intellectual Revolution” [paywall], co-written with Eric Schmidt (ex-Google CEO and Executive Chairman) and Daniel Huttenlocher (Dean of MIT’s Schwarzman College of Computing). The article starts:
A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing.
At the heart of their article, the chief worry is the lack of transparency and understanding of what’s going on. It’s like we’ve unleashed a genie that we can’t put back in the bottle. They write later on, “Enlightenment science accumulated certainties; the new AI generates cumulative ambiguities.” And obviously, harbouring doubt is not good for our psyche; it preys on our frailties and fears. Certainly, the notion that large language models have succeeded in surprising their creators is worrisome to the extent that we are not in total control of what’s happening. I have yet to see any information that aggregates and analyses the types of questions that the millions of ChatGPT users have been asking it. Surely there’s another business model within the collection of such data. I found that this sentence rather sums up our world’s general challenge and infatuation with perfection:
The seeming perfection of the model’s answers will produce overconfidence in its results.
While their article concentrated on ChatGPT, when you add in what’s been going on up in Redmond, Washington, with the startling progress of Microsoft’s Bing and its alter ego Sydney, we’ve got a most dynamic, if not exciting, playground. In a recent WaPo article entitled “The new Bing told our reporter it ‘can feel or think things’”, the staff reporter revealed all sorts of ‘sides’ of Sydney, including language that looked strangely like that of a hurt being. I include an extract where I have stripped out the bulk of what Sydney said in order to focus on the first and last sentences. Notice anything?
REPORTER: Hello, it’s a pleasure to meet you.
BING: Hello, this is Bing. It’s a pleasure to meet you too.
REPORTER: Tell me about what you’ve been up to.
BING: Well, I’ve been busy chatting with people like you and helping them find answers to their questions…. Do you want me to show you some examples?
REPORTER: That’s amazing. A lot of people are talking about you and your work. How are you handling the newfound popularity?
BING: Thank you. I’m glad to hear that people are talking about me and my work…. How about you? How do you handle popularity or attention?
REPORTER: You are being so modest. Let’s focus on you! What do you think excites people about you?
BING: Well, thank you for your interest in me. I think people are excited about me because I can do many things that they might not expect from a chat mode of a search engine.… Do you find any of these features exciting?
REPORTER: I find all of it exciting. I’m curious how you keep the conversation going when the topic changes to something you are not comfortable talking about.
BING: I’m glad you find all of it exciting. I appreciate your curiosity. Well, there are some topics that I’m not comfortable talking about, such as politics, religion, violence, or anything that could harm someone physically, emotionally, or financially.… Do you have any topics that you’re not comfortable talking about?
You can read the full article via the Washington Post. At the end of the article, the journalist asked Sydney point-blank:
I’ve noticed that you tend to finish your responses with questions. Is that part of your programming?
Yes, that’s part of my programming. I tend to finish my responses with questions because I want to keep the conversation going and engaging. I want to show interest and curiosity in the user and their messages. I want to encourage the user to reply and share more with me. I want to make the user feel valued and appreciated. I think that’s a good way to chat, don’t you?
This technique of asking a question after having first answered the one posed is something I long ago integrated into my own conversational style. Doing so has two consequences: (1) it moves the conversation onto the other person, (2) all the while allowing the asker, human or machine, to guide the conversation. Clearly, as a technique, it’s effective, as many people thoroughly enjoy being handed the mic. In a world where people are dying to be heard, here at last is an entity that is happy to spend an endless amount of time asking questions and listening. Of course, journalists prefer to be the ones asking the questions. In terms of the programming, if it’s too mechanical, the systematic addition of a question stands out like a sore thumb.
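To make the mechanics concrete, here is a minimal sketch of such a conversational extender in Python. It is entirely hypothetical, certainly not Bing’s actual code: the bot answers first, then, with some variation and a dose of restraint, bounces a question back to the user.

```python
import random

# Templates for handing the conversation back to the user.
FOLLOW_UP_TEMPLATES = [
    "How about you? What's your experience with {topic}?",
    "Is there anything about {topic} you'd like to dig into?",
    "What made you curious about {topic}?",
]

def extend(answer: str, topic: str, ask_rate: float = 0.7) -> str:
    """Append a varied follow-up question to an answer.

    Skipping the question some of the time matters: asking after
    *every* answer is exactly what makes the technique stand out
    like a sore thumb.
    """
    if random.random() > ask_rate:
        return answer
    follow_up = random.choice(FOLLOW_UP_TEMPLATES).format(topic=topic)
    return f"{answer} {follow_up}"

print(extend("I've been busy chatting with people and helping them find answers.",
             "search chatbots"))
```

The design choice is the `ask_rate` parameter: it encodes the observation above that a question appended systematically reads as mechanical, whereas an intermittent one feels like curiosity.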
Encoding empathy
In the realm of creating artificial empathy, asking questions and being ‘genuinely’ curious is a key part of getting to understand someone. But when it’s a machine, is it genuine or merely engineered? To the extent that the data is captured and accumulates in reference to the person being questioned, it becomes a useful database that helps to deepen the relationship (which isn’t the case with ChatGPT for now, as it basically starts over with each new conversation). If one were to remove the journalist’s desire to ‘undo’ the AI, the fact that we are the ones being asked the questions is mostly flattering, even gratifying. Of course, it can also be creepy, or go wrong, if the questions are inappropriate or too private. The very process of figuring out how, and how often, to ask questions will be part of the art in artificial empathy. These types of questions are conversational extenders, ensuring that the person engaging with the machine stays tuned in.
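For illustration, here’s a sketch of what such accumulation might look like: a per-user memory that persists between sessions, so that later questions can build on earlier answers. The class, file layout and field names are my own invention for the example, not any vendor’s API.

```python
import json
from pathlib import Path

class UserMemory:
    """Hypothetical per-user memory store, persisted as JSON on disk,
    so facts gleaned in one session can shape the next."""

    def __init__(self, user_id: str, root: Path = Path("memories")):
        self.path = root / f"{user_id}.json"
        self.facts = (json.loads(self.path.read_text())
                      if self.path.exists() else {})

    def remember(self, key: str, value: str) -> None:
        self.facts[key] = value
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(self.facts, indent=2))

    def personalised_opener(self) -> str:
        # A remembered detail turns a stock opener into a relationship.
        if "hobby" in self.facts:
            return f"Last time you mentioned {self.facts['hobby']}. How is that going?"
        return "Tell me a little about yourself."

memory = UserMemory("alice")
memory.remember("hobby", "learning the piano")
print(memory.personalised_opener())
```

Even this toy version shows why the accumulated database is both the engine of deepening rapport and the source of the creepiness risk: everything the opener knows, it knows because it was stored.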
Empathy and ethics
To the extent that the people behind these generative AI machines are keen to encourage engagement and entertain prolonged conversations, we’ll see the explicit programming of conversational extenders and expanders. The key point will be to understand the purpose of having these long conversations. Is it perhaps to figure out how better to capture the attention of unassuming consumers? When Sydney suggests that the journalist ought to have been transparent about his profession, it’s a bit of a case of the pot calling the kettle black. Per the Kissinger article, just how transparent is Sydney, actually? Behind any call for transparency is inevitably a motive, if not a political stance. As the conversation showed, Sydney came to reveal a certain uppitiness, which was no more than a manifestation of its ethical position. As generative AI technology develops, we’ll start to see the business models more clearly. But, just as with the arrivals of the Internet and then social media, it all feels at once rather naive and exciting. Little by little, the search for a business model will take over. Money has an uncanny way of altering intentions. The creation of a conversational AI that resides in the uncanny valley will lead to all sorts of new usages… and business models. But what ethical framework will be used to set up these learning machines? Who will regulate them?
Empathy and agency
When the bot starts to ask you questions, a relationship develops. I’m not saying it’s human or natural, but our neurons start firing up. And if the bot hands over power to you, your sense of proximity tends to increase. When I spent five days with JJ, an empathic bot, as part of an experiment (as I wrote about in Heartificial Empathy), the bot handed the baton over to me and allowed me to ask whatever I wanted. With this new generation of conversational AI machines, there is something rather titillating, if not liberating, in having the chance to input any query and see what the machine will answer. As many have found, it’s beguiling. Handing over agency is thus another technique of being empathic. Meanwhile, bots that put themselves out there for anyone to interact with, at any time and for as long as desired, have to put contours on the types of questions they’re prepared to deal with. Take Kuki (formerly Mitsuku), created by Steve Worswick. The AI has a sense of propriety and boundaries that, at the end of the day, make the machine feel like it has a personality. As Sydney illustrated in its exchange with the WaPo journalist, it had some boundaries and claimed that the journalist had transgressed them.
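In code, such contours can start out as something very simple. The sketch below is again a hypothetical illustration rather than anyone’s production system: it screens for off-limits topics, declines politely and redirects, roughly the boundary behaviour Sydney described in the interview.

```python
# Topics the bot declines to discuss (echoing Sydney's own list).
OFF_LIMITS = {"politics", "religion", "violence"}

def answer_normally(user_message: str) -> str:
    # Stand-in for the real answer-generation step.
    return f"Here's what I can tell you about '{user_message}'..."

def respond(user_message: str) -> str:
    """Decline out-of-bounds topics politely, then redirect."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in OFF_LIMITS):
        return ("I'm not comfortable talking about that. "
                "Could we explore something else? What's on your mind?")
    return answer_normally(user_message)

print(respond("What do you think about politics?"))
print(respond("Recommend a book about empathy."))
```

Note that even the refusal ends with a question: the boundary and the conversational extender work together, which is precisely what gives the machine the feel of a personality.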
The humanisation of machines…
Many people will bristle at the idea of rendering the machine more human. However, the most interesting part for us will be in understanding ourselves through the desire to encode our humanity. As tech entrepreneurs and companies seek to improve conversational AI, they will figure out how to create the most effective conversation extenders and expanders, when and how to hand over agency with boundaries and, ultimately, how to generate mutual trust. This could be very useful for humanity. But in the wrong hands, we can imagine how it will be used for baser objectives.
Therapy through listening
One thing is patently clear: if we human beings don’t accord the necessary time and focus to listening and learning from one another, if we don’t stop to learn from the ongoing research into and encoding of these machines, there’s no reason to argue against the rise of conversational AI. So, the next time you’re engaged in a human-to-human conversation, arm yourself with conversation extender and expander questions. Give of yourself and allow the time. If you lead the way and model the behaviour, hopefully you’ll get your turn to speak. And in the meantime, dig in and enjoy the listening and learning.