Managing the Intersection of Empathy and Artificial Intelligence - The Bard v ChatGPT
Verdict: What the Google Bard replied... is meaningful
On the heels of the release of the second and significantly modified version of Heartificial Empathy, Putting Heart into Business and Artificial Intelligence (DigitalProof Press 2023), I see that the subject of AI and empathy is now everywhere. I'm speaking about it this evening to the CFA Society in Stockholm. I recently participated in an Empathy Circle (EC) with Edwin Rutsch and Dr Lidewij Niezink, who cofounded the EC concept, along with Dr Shelton Davis, where we examined what's happening and how we should approach empathy in AI. You can check it out here.
On Twitter, meanwhile, my customised filter "Empathy & Artificial Intelligence" is constantly being populated with people's mostly negative comments. These tweets often bemoan AI's encroachment on our humanity or call out how humans must adapt and/or protect what makes us different from machines. My main viewpoint is that the genie is out of the bottle and we're going to have to learn to live with it.
To address the intersection of empathy and artificial intelligence, I called on AI to write the article (eek, I hear from content creators). My human input came in the form of the prompt (below) and an analysis (at the end). I didn’t touch the output.
I gave Google Bard and OpenAI's ChatGPT the following prompt:
“Write an article of around 1500 words that explores the intersection of empathy and artificial intelligence. How does one encode empathy into AI? The article must include examples of companies that are actively working on making AI more empathic. The article must address the key ethical dilemmas. It should explore the opportunities and dangers of making AI more empathic. What are the most acceptable business models and how can legislators most effectively regulate how empathy is being applied into AI.”
Here's why Bard won. It immediately gave me three drafts from which to choose, which is quite impressive. You can find links to the drafts below. In none of them did I find any of the egregious faults or hallucinations we are wont to read about in the media. Certainly, not all three were equal, and it was interesting to see how far apart the styles were. Herewith my first choice from Google Bard:
From Bard - Empathy and AI
Empathy is the ability to understand and share the feelings of another person. It is a fundamental human emotion that allows us to connect with others and build relationships. Artificial intelligence (AI) is rapidly becoming more sophisticated, and there is growing interest in the potential for AI to be used to encode empathy.
There are a number of ways that empathy can be encoded into AI. One approach is to train AI models on large datasets of human conversations. This allows the models to learn how to identify and respond to emotional cues. Another approach is to use reinforcement learning to teach AI models to behave in an empathetic way. This involves rewarding the models for taking actions that are beneficial to others.
There are a number of companies that are actively working on making AI more empathic. One example is Google AI, which has developed a chatbot called LaMDA that is capable of carrying on conversations that are indistinguishable from those with a human. Another example is Affectiva, which develops AI software that can be used to measure and analyze human emotions.
The encoding of empathy into AI raises a number of ethical dilemmas. One concern is that AI could be used to manipulate or exploit people. For example, an AI could be used to target people with advertising that is based on their emotional vulnerabilities. Another concern is that AI could be used to discriminate against people. For example, an AI could be used to make decisions about who gets a job or a loan based on their emotional responses.
Despite the ethical challenges, there are a number of opportunities for AI to be used to promote empathy. For example, AI could be used to provide emotional support to people who are going through difficult times. AI could also be used to help people understand and connect with others from different cultures.
The most acceptable business models for AI-based empathy are those that are transparent and ethical. Companies that develop AI-based empathy products should be clear about how the products work and how they use data. Companies should also be committed to using AI to promote empathy and understanding, not to manipulate or exploit people.
Legislators can most effectively regulate AI-based empathy by setting clear standards for how the technology can be used. These standards should be based on the ethical principles of transparency, fairness, and accountability. Legislators should also work to ensure that there are strong enforcement mechanisms in place to protect people from the misuse of AI-based empathy.
The intersection of empathy and AI is a complex and rapidly evolving field. There are a number of challenges and opportunities that need to be considered as AI becomes more sophisticated. By working together, we can ensure that AI is used to promote empathy and understanding, not to manipulate or exploit people.
Examples of Companies That Are Actively Working on Making AI More Empathetic
Google AI
Affectiva
IBM Watson
Microsoft
Amazon
Facebook
OpenAI
DeepMind
Emotient
Empatica
Key Ethical Dilemmas
The potential for AI to be used to manipulate or exploit people
The potential for AI to be used to discriminate against people
The lack of transparency and accountability in the development and use of AI
Opportunities and Dangers of Making AI More Empathetic
Opportunities:
AI could be used to provide emotional support to people who are going through difficult times.
AI could be used to help people understand and connect with others from different cultures.
AI could be used to promote empathy and understanding in society.
Dangers:
AI could be used to manipulate or exploit people.
AI could be used to discriminate against people.
AI could be used to harm people.
Most Acceptable Business Models
Transparency: Companies that develop AI-based empathy products should be clear about how the products work and how they use data.
Fairness: AI-based empathy products should be used in a way that is fair and does not discriminate against people.
Accountability: Companies that develop AI-based empathy products should be accountable for the way their products are used.
How Legislators Can Most Effectively Regulate How Empathy Is Being Applied into AI
Set clear standards for how AI can be used.
Establish strong enforcement mechanisms to protect people from the misuse of AI.
Work with companies to develop ethical guidelines for the development and use of AI.
My feedback and analysis of the Google Bard’s answers:
For starters, in all three drafts, the Bard ignored the 1,500-word count. The three articles ran between 728 and 825 words. But more isn't always better, eh? In any event, in this (first) draft above, the use of bold paragraph headings and bullet points to answer several of the points made the text easier to read and follow. In terms of key learnings regarding the prompt, the Google Bard followed the structure of the questions/instructions as they were laid out in my prompt, as opposed to creating its own organisation. This would suggest that the better the prompt is organised, the better the output. As we've seen, there is already a new job title cropping up in the form of Prompt Engineer.

In all the proposed drafts, I noted that Bard was clearly keen to promote Google products, proof that there is ALWAYS going to be an agenda. As we've seen in other cases, the extent to which we, as consumers, understand that agenda/bias within these Large Language Models (LLMs) will be important. What is the agenda and ethical framework underpinning the data curation, machine learning and human interventions (i.e. through Reinforcement Learning from Human Feedback)? While we shouldn't be foolish enough to put our heads in the sand about these AI developments, that doesn't mean we should roll over and let things happen willy-nilly. I support some form of regulation, including making companies render their ethical frameworks explicit.
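As an aside on prompt structure, here is a minimal sketch of what the same exercise could look like if run programmatically rather than through the free web interfaces, with the brief laid out as explicitly numbered instructions so the model has a structure to mirror. It assumes the OpenAI Python client and a placeholder model name; neither was used for the experiment above.

```python
# Illustrative sketch only: the experiment in this article used the free web
# UIs, not an API. The client library (openai>=1.0) and the model name are
# assumptions, not what was used above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The brief, organised as numbered instructions so the model has an explicit
# structure to follow, illustrating the point about well-organised prompts.
prompt = """Write an article of around 1500 words on the intersection of empathy and artificial intelligence.
1. Explain how empathy can be encoded into AI.
2. Give examples of companies actively working on making AI more empathic.
3. Address the key ethical dilemmas.
4. Explore the opportunities and dangers of making AI more empathic.
5. Recommend the most acceptable business models.
6. Suggest how legislators can most effectively regulate how empathy is applied in AI."""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```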
In terms of mentioning other initiatives combining AI and empathy, the suggested projects and companies were all rather basic. It only proposed a couple of smaller companies, including Empatica, an MIT spin-off that is a pioneer in offering continuous, passive and unobtrusive monitoring for patients with neurological conditions. It didn't mention other more obvious, if niche, initiatives that are specifically and more deeply involved in the intersection of AI and empathy, such as Mpathic.AI, Hume.ai and Cyrano.ai.
The worst of the three Bard articles was draft 3, which was both repetitive and disordered. Draft 2 flowed better (than draft 3) and included a few different points (for example, regarding ChatGPT… a competitor to Bard). While draft 3 did include an essay title, draft 2 had neither a title nor paragraph headings. But Bard's best effort was draft 1.
Bard versus ChatGPT
If I compare the Bard with ChatGPT (both in their free versions), the latter provided an answer in 641 words here that was incomplete and thus not very satisfying, nor usable as a standalone piece. At least Bard had the good sense to provide a complete answer each time (in all three drafts). That said, the way ChatGPT formulated some of its paragraphs was quite effective. Specifically, I liked this paragraph:
"Additionally, there are questions about the authenticity of empathy in AI. Can machines genuinely feel and understand human emotions, or are they merely mimicking empathy? Some argue that even if machines can accurately recognize and respond to emotions, they lack the underlying subjective experience that makes empathy truly human. It is crucial to distinguish between simulated empathy and genuine human empathy to avoid creating false expectations and potential harm.”
Overall, I stand by the choice of Bard in this particular exercise. What do you think? Did you learn anything from the article as presented by Bard?
Here are the key takeaways:
My key driving point is that we are going to have to learn how to manage these new, more powerful AI machines. In every industry, I see opportunities and threats. Basically, I see one of three responses to the way AI is evolving: you're going to be out of a job, you'll need to alter/enhance your expertise(s), and/or you'll learn to work with AI.
(a) In terms of writing your prompt, it’s best to have in mind the logical flow of the paper/article in advance.
(b) Thanks to having three drafts, you can certainly use bits and pieces from all of them in order to craft a better version using human inputs!
(c) The best output will involve human oversight and input: to verify information and, depending on the intention, to add some spice and value through a touch of personality, a sense of humour or perhaps a more esoteric expertise. For example, in the case of this article, I was able to critique and enhance the list of initiatives that Bard proposed.
What are your thoughts? Scary? Impressive? Blah?
I think it’s really important to have transparency; I don’t ever want to be fooled by the ‘feelings’ expressed by a machine! I speak to my car at times when I drive but I clearly know it’s not a person!
Your article is thought provoking!
Cool experiment! And I agree with the genie out of the bottle wrt AI. The shift that’s coming is going to be more about jobs displacement - this time white collar - than anything else. But as we’ve learned that can wreak havoc.
A machine feeling empathy seems like a null concept ofc so that’ll have to be worked out. Human connection is not about to be replaced soon.
Meanwhile I’m very interested in trying an empathy circle and learning what all that’s about... Thanks for writing and sharing this, Minter!