In my 35 years in the PR and communication industry, no tool has generated as much chat as ChatGPT since Microsoft Windows and the mobile phone were introduced.
ChatGPT and responses to crises or issues
I’ve been reading a lot about ChatGPT and experimenting with it, but I haven’t yet dared to use it in any external written material. As Vanderbilt University found in the aftermath of the Michigan State shooting, it can backfire badly. Journalist Marco Margaritoff, in an article in the Huffington Post[1], wrote: “Vanderbilt University’s Peabody College of Education and Human Development is apologising after students learned the school used ChatGPT to compose an email about this month’s mass shooting at Michigan State University.”
The backlash and sentiment are probably best summed up by a senior whose sister attends Michigan State University: “There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself.”
And therein lies the issue with using AI to generate copy related to human suffering or tragedy – it cannot reflect the subtle nuance of human emotion.
Will it ever? I don’t know, but for the moment it is a major weakness. The essence of this was captured when Margaritoff reported that the Peabody College dean, Camilla Benbow, said her faculty had: “…missed the crucial need for personal connection and empathy during a time of tragedy.”
I haven’t even begun to touch on how using an AI-generated response like this aligns with the values of the organisation.
But what about other uses for ChatGPT: how is it being used, and how is it faring? I was shocked to learn from a Reuters article by Greg Bensinger[2] that there were already more than 200 e-books on Amazon listing ChatGPT as an author or co-author. Bensinger points out that the number is rising daily: “There is even a new sub-genre on Amazon,” he says. “Books about using ChatGPT, written entirely by ChatGPT.”
Worryingly, he also warns that many authors are failing to disclose they have used it, so: “…it is nearly impossible to get a full account of how many e-books may be written by AI.”
Misinformation
One of the issues we as communicators face is how to deal with misinformation and disinformation. As such, one of my biggest worries is people’s ability to use ChatGPT to generate misinformation and, when suggesting better answers to the platform, to feed it misinformation. So, I asked it directly: “What do you do, or how do you assess, whether someone inputting further information into your answer is not feeding you misinformation or disinformation?”
Its first response wasn’t exactly what I expected. Fundamentally, it said that, as an AI language model, it did not have the ability to independently verify the accuracy of information inputted, but that it could provide a range of possibilities based on patterns and insights extracted from vast amounts of text data.
Not satisfied with this, I reframed my question: “Using the range of possibilities based on patterns and insights extracted from vast amounts of text data, are you able to refute someone inputting information as incorrect or misinformation?” This time it provided something more interesting; I quote just one of its three paragraphs: “If someone inputs information into my response that is clearly incorrect or misinformation, it's possible that I may not be able to recognize it as such. However, if there is information provided that contradicts previous information, or if there is an inconsistency in the input, I may be able to flag it as potentially incorrect or in need of further clarification.”
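(For the technically curious: this kind of probing doesn’t have to happen in the chat window. Below is a minimal sketch in Python of posing the same question programmatically, assuming access to OpenAI’s official client library and an API key; the model name and prompt are purely illustrative, not a record of how I ran my own queries.)

    # A minimal sketch of posing the same question programmatically, assuming
    # the official OpenAI Python client and an API key stored in the
    # OPENAI_API_KEY environment variable. The model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    question = (
        "Using the range of possibilities based on patterns and insights "
        "extracted from vast amounts of text data, are you able to refute "
        "someone inputting information as incorrect or misinformation?"
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model behind ChatGPT at the time of writing
        messages=[{"role": "user", "content": question}],
    )

    print(response.choices[0].message.content)

One practical advantage of scripting the probe is repeatability: you can ask the same question many times, or with small variations, and compare how consistent the answers are.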
Steven Brill is co-CEO and co-editor-in-chief of NewsGuard, which scores the trust and credibility of news and information sites. In an interview with Joe Pompeo of Vanity Fair[3], after NewsGuard tested ChatGPT on producing false narratives, he said: “NewsGuard found that ChatGPT does have safeguards aimed at preventing it from spreading some examples of misinformation. Indeed, for some myths, it took NewsGuard as many as five tries to get the chatbot to relay misinformation, and its parent company has said that upcoming versions of the software will be more knowledgeable.”
For now, at least, I still worry about nefarious use of the chatbot to generate misinformation or subtle changes to existing narratives, facts and figures.
So, I went back to the bot and asked it: “How would you know if I made up some research and inputted the data as fact into your memory?”
The answer confirmed my fears: “As an AI language model, I don't have the ability to independently verify the accuracy of information that is inputted into my memory.” And: “If you inputted made-up research data as fact into my memory, I would have no way of knowing that the data is not accurate or valid, and I may provide responses based on that data. It's important to note that deliberately inputting false or misleading information into any AI model is not ethical or productive and could lead to the spread of misinformation.”
I rest my case.
What about the future?
Many people have expressed negative, visceral reactions to ChatGPT. I’m not one of them. I have concerns, but I think that in time the positives will far outweigh the negatives.
Some of these concerns are echoed by Gary Marcus, an AI expert and Professor Emeritus of Psychology and Neural Science at New York University. He is quoted in a Washington Post article[4] by Gerrit De Vynck, Rachel Lerman and Nitasha Tiku: “It doesn’t really have a clue what it is saying and it doesn’t really have a moral compass.”
So, I thought I would give ChatGPT the final say and asked if it has a moral compass. Its response: “As an AI language model, I don't have a moral compass, emotions or subjective opinions because I am not capable of experiencing emotions, thoughts or values in the way humans do… Ultimately, it is up to individuals to make their own ethical and moral decisions based on their own beliefs and values.”
Much like with the internet and the mobile phone, humans will find their own behavioural code on use, and legislation will typically follow. We know from human nature, too, that people will either toe the line or break the rules.
It’s not up to ChatGPT to have a moral compass; that’s up to the people who use it.
[1] Marco Margaritoff, “Vanderbilt University Caught Using ChatGPT For Email To Students After MSU Shooting”, HuffPost.
[2] Greg Bensinger, “ChatGPT launches boom in AI-written e-books on Amazon”, Reuters.
[3] Joe Pompeo, “ChatGPT’s Mind-Boggling, Possibly Dystopian Impact on the Media World”, Vanity Fair.
[4] Gerrit De Vynck, Rachel Lerman and Nitasha Tiku, “Microsoft’s new Bing A.I. chatbot, ‘Sydney’, is acting unhinged”, The Washington Post.