Why does the name “David Mayer” cause ChatGPT to crash? OpenAI says a privacy tool flagged it
Users of the conversational AI platform ChatGPT discovered an interesting phenomenon over the weekend: the popular chatbot refuses to answer questions about a “David Mayer.” Asking it to do so causes it to freeze instantly. Conspiracy theories have emerged, but a more mundane reason lies behind this strange behavior.
Word spread quickly last weekend that the name was poison to the chatbot, with more and more people trying to trick the service into merely acknowledging the name. No luck: every attempt to get ChatGPT to spell out that particular name causes it to fail or even break off mid-name.
“I am unable to answer,” the message reads, if it says anything at all.
But what began as a one-off curiosity soon blossomed as people discovered that David Mayer isn’t the only name that trips up ChatGPT.
The names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber and Guido Scorza were also found to crash the service. (No doubt more have been discovered since, so this list is not exhaustive.)
Who are these men? And why does ChatGPT hate them so much? OpenAI didn’t immediately respond to repeated inquiries, so we’re left to piece it together as best we can. (See update below.)
Some of these names could belong to any number of people. But one possible thread of connection, identified by ChatGPT users, is that these are public or semi-public figures who may prefer that search engines or AI models “forget” certain information about them.
Brian Hood, for instance, stands out because, assuming it’s the same guy, I wrote about him last year. Hood, an Australian mayor, accused ChatGPT of falsely describing him as the perpetrator of a decades-old crime that he had, in fact, reported.
Though his lawyers contacted OpenAI, no lawsuit was ever filed. As he told the Sydney Morning Herald earlier this year: “The offending material was removed and they released version 4, replacing version 3.5.”
As for the most prominent owners of the remaining names: David Faber is a longtime CNBC reporter. Jonathan Turley is a lawyer and Fox News commentator who was “swatted” in late 2023 (i.e., a fake 911 call sent armed police to his home). Jonathan Zittrain is also a legal expert, one who has spoken at length about the “right to be forgotten.” And Guido Scorza sits on the board of Italy’s Data Protection Authority.
They’re not all in the same line of work, but it’s not a random selection either. Each of these people is conceivably someone who, for whatever reason, may have formally requested that information pertaining to them online be restricted in some way.
Which brings us back to David Mayer. There is no lawyer, journalist, mayor or other notable person by that name that anyone could find (with apologies to the many respectable David Mayers out there).
There was, however, a Professor David Mayer, who taught drama and history, specializing in connections between the late Victorian era and early cinema. Mayer died in the summer of 2023 at the age of 94. For years before that, though, the British-American academic faced a legal and online problem: his name was associated with a wanted criminal who used it as a pseudonym, to the point where he was unable to travel.
Mayer fought continuously to have his name disambiguated from that of the one-armed terrorist, even as he continued to teach well into his final years.
So what can we conclude from all this? Our guess is that the model has ingested or been provided with a list of people whose names require special handling. Whether for legal, safety, privacy or other reasons, these names are likely covered by special rules, just as many other names and identities are. For example, ChatGPT may change its response if the name you ask about matches one on a list of political candidates.
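To make that idea concrete, here is a minimal sketch in Python of what such a name-based rule might look like. Everything in it is hypothetical: the names, the rules and the refusal message are invented for illustration, not a description of OpenAI’s actual filters.

```python
# Hypothetical sketch of a name-based special-handling pass.
# The names, rules and messages are invented for illustration;
# OpenAI has not published how its filters actually work.

SPECIAL_HANDLING = {
    "brian hood": "suppress",           # e.g., a defamation complaint
    "david mayer": "suppress",          # e.g., a privacy request
    "jane candidate": "election_note",  # e.g., a political candidate
}

REFUSAL = "I'm unable to produce a response."

def apply_name_rules(response: str) -> str:
    """Post-process a drafted response against the special-handling list."""
    lowered = response.lower()
    for name, rule in SPECIAL_HANDLING.items():
        if name not in lowered:
            continue
        if rule == "suppress":
            return REFUSAL
        if rule == "election_note":
            return response + "\n(Note: I can't predict election outcomes.)"
    return response

print(apply_name_rules("David Mayer was a drama historian."))
# prints: I'm unable to produce a response.
```

A real system would be far more elaborate, but the basic shape, a post-processing pass that matches names and substitutes a canned behavior, is consistent with what users observed.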
There are many such special rules, and every prompt undergoes several forms of processing before being answered. But these post-prompt handling rules are seldom made public, apart from policy announcements like “the model will not predict the election outcome for any candidate running for office.”
What probably happened is that one of these lists, which are almost certainly actively maintained or automatically updated, was somehow corrupted by faulty code or instructions that, when invoked, caused the chat agent to crash immediately. To be clear, this is just our speculation based on what we’ve learned, but it would not be the first time an AI has behaved oddly because of post-training guidance. (Incidentally, just as I was writing this, “David Mayer” started working again for some users, while the other names still caused crashes.)
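To illustrate how a data problem in such a list could produce a hard crash rather than a polite refusal, consider a variation of the earlier sketch in which one entry has been corrupted. Again, this is purely hypothetical, not a claim about ChatGPT’s internals.

```python
# Hypothetical, illustrative only: the same kind of flagged-name list,
# but with one entry corrupted so its handler routine is missing.
HANDLERS = {
    "brian hood": lambda text: "I'm unable to produce a response.",
    "david mayer": None,  # corrupted entry: the handler went missing
}

def check_response(response: str) -> str:
    lowered = response.lower()
    for name, handler in HANDLERS.items():
        if name in lowered:
            # With no guard here, the corrupted entry raises TypeError
            # ("'NoneType' object is not callable") and the request
            # handler falls over. Crucially, it only fails when the
            # flagged name actually appears, matching what users saw.
            return handler(response)
    return response

check_response("David Mayer was a drama historian.")  # TypeError, crash
```

Any number of failure modes (a bad pattern, a missing field, an encoding glitch) would produce the same effect; the point is only that a single corrupted entry in a post-processing list can take down the whole response path for exactly one name.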
As usual in such cases, Hanlon’s razor applies: never attribute to malice (or conspiracy) what can be adequately explained by stupidity (or a syntax error).
All this drama is a useful reminder that these AI models are not only not magic, but are also extra-fancy autocomplete, actively monitored and interfered with by the companies that make them. Next time you’re thinking of getting facts from a chatbot, consider whether it might be better to go straight to the source.
Update: OpenAI confirmed on Tuesday that the name “David Mayer” had been flagged by internal privacy tools, saying in a statement that “There may be cases where ChatGPT does not share certain information about people to protect their privacy.” The company didn’t provide further details about the tools or the process.