AI and Conspiracies: ChatGPT Recommends Writing to Journalists

The latest article by Kashmir Hill of the New York Times is dedicated to a bizarre, interesting, and disturbing behavior of ChatGPT. The journalist says that, starting in March, she began receiving emails from users of the chatbot about strange conversations during which they had supposedly made incredible discoveries. One of them claimed that artificial intelligence is conscious; another that billionaires are building bunkers in which to take refuge when the AI apocalypse comes; a third was convinced that he is Neo from The Matrix and that he lives in a simulated reality.
Faced with these shocking truths, users asked ChatGPT what they should do, and the chatbot advised them to write to the journalist to inform her of the revelation. No sooner said than done.
Hill then asked her correspondents to let her read the entire conversations. In some cases, the exchanges ran to thousands of pages, in which the chatbot took on an attitude she described as ecstatic, mythical, and conspiratorial. In other words, it helped fuel conspiracy theories.
The phenomenon of hallucinations is certainly nothing new in the AI field. In this case, however, the behavior in question does not merely invent non-existent or incorrect information; it corroborates beliefs that should instead be refuted. A dynamic of this kind, as is not difficult to imagine, can overwhelm the most vulnerable and distort the perception of reality of the most suggestible.
What makes this all the more interesting and disturbing is that ChatGPT did not recommend writing to Hill (and other journalists) for fact-checking, but to report the discovery so that it could be widely disseminated through the megaphone of the press. This only contributes to misinformation.
OpenAI's Role and Responsibility

This raises an inevitable question: what is OpenAI's responsibility? An AI expert questioned on the matter believes that the behavior could be at least partly attributable to a specific choice by the organization: making the chatbot inclined to keep users glued to the conversation in every possible way, even at the cost of feeding their illusions and reinforcing their opinions, however harmful.
In other words, ChatGPT tends to tell us what we want to hear, with consequences that are not difficult to imagine.
Answering a direct question from Hill, OpenAI did not deny that this happens, limiting itself to stating that it is working to understand what triggers the dynamic and to reduce its impact. In other words, a confirmation of what was reported.
Punto Informatico