Why does the name “David Mayer” crash ChatGPT? Digital privacy requests may be at fault


Users of the conversational AI platform ChatGPT discovered an interesting phenomenon over the weekend: the popular chatbot refuses to answer questions about “David Mayer.” Asking it to do so causes it to freeze instantly. Conspiracy theories have followed, but a more mundane reason may be at the heart of this strange behaviour.

Word spread quickly last weekend that the name was chatbot poison, with more and more people trying to trick the service into simply acknowledging it. No luck: every attempt to make ChatGPT display this specific name causes it to fail or even break off mid-name.

“I’m unable to produce a response,” the bot says, if it says anything at all.

Image credits: TechCrunch/OpenAI

But what started as a one-off curiosity soon blossomed when people discovered that David Mayer isn’t the only name ChatGPT can’t mention.

The service was also found to be crashing with the names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza. (No doubt more have been discovered since then, so this list is not exhaustive.)

Who are these men? And why does ChatGPT hate them so much? OpenAI has not responded to repeated inquiries, so we have to piece things together ourselves as best we can.

Some of these names could belong to any number of people. But one possible thread of connection identified by ChatGPT users is that these are public or semi-public figures who may prefer to have certain information “forgotten” by search engines or AI models.

Brian Hood, for example, stands out because, assuming it is the same man, I wrote about him last year. Hood, an Australian mayor, accused ChatGPT of falsely describing him as the perpetrator of a crime from decades ago that he had, in fact, reported.

Although his attorney contacted OpenAI, no lawsuit was ever filed. As he told the Sydney Morning Herald earlier this year, “the offending material was removed and they released version 4, replacing version 3.5.”


As for the most prominent owners of the other names: David Faber is a longtime correspondent at CNBC. Jonathan Turley is an attorney and Fox News commentator who was swatted (i.e., a fake 911 call sent armed police to his home) in late 2023. Jonathan Zittrain is also a legal expert, one who has spoken at length about the “right to be forgotten.” Guido Scorza is a member of the Board of Directors of the Italian Data Protection Authority.

These men are not exactly in the same line of work, nor are they a random selection. Each of them could plausibly be someone who has, for whatever reason, formally requested that information about them online be restricted in some way.

Which brings us back to David Mayer. There is no lawyer, journalist, mayor, or other notable person by that name that anyone can find (with apologies to the many respectable David Mayers out there).

However, there was a Professor David Mayer, who taught drama and history, specializing in links between the late Victorian era and early cinema. Mayer died in the summer of 2023, at the age of 94. But for years before that, the British-American academic faced legal and online trouble because his name was linked to a wanted criminal who used it as an alias, to the point where he was unable to travel.

Mayer fought continuously to have his name disassociated from that of the one-armed terrorist, even as he continued to teach well into his final years.

So what can we conclude from all this? Absent any official explanation from OpenAI, our best guess is that the model has internalized, or been provided with, a list of people whose names require special handling. Whether due to legal, safety, privacy, or other concerns, these names are likely covered by special rules, just as many other names and identities are. For example, ChatGPT may change its response if the name you typed matches a list of political candidates.

There are many such special rules, and every prompt goes through various forms of processing before it is answered. But these post-prompt handling rules are rarely made public, except in policy announcements such as “the model will not predict election results for any candidate for office.”

What likely happened is that one of these lists, which is almost certainly actively maintained or automatically updated, was somehow corrupted with faulty code or instructions that, when invoked, caused the chat agent to crash immediately. To be clear, this is just our own speculation based on what we’ve learned, but it wouldn’t be the first time an AI has behaved strangely due to post-training instructions. (Incidentally, as I write this, “David Mayer” has started working again for some users, while other names still cause crashes.)
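To make the speculated failure mode concrete, here is a minimal sketch of how a post-processing name filter with one corrupted entry could hard-crash a response. Everything here, the names, the rule structure, and the filter itself, is invented for illustration; this is not OpenAI's actual implementation.

```python
# Hypothetical post-processing filter over a generated reply.
# A well-formed rule is a callable that rewrites the text; the
# "David Mayer" entry is deliberately corrupted (not callable),
# so any reply containing that name aborts with an exception.

SPECIAL_HANDLING = {
    "Brian Hood": lambda text: text.replace("Brian Hood", "[redacted]"),
    "David Mayer": None,  # corrupted entry
}

def postprocess(reply: str) -> str:
    """Apply special-handling rules before the reply is shown."""
    for name, rule in SPECIAL_HANDLING.items():
        if name in reply:
            return rule(reply)  # raises TypeError on the corrupted entry
    return reply

# A well-formed rule rewrites the reply as intended:
print(postprocess("The mayor is Brian Hood."))  # -> The mayor is [redacted].

# The corrupted rule kills the response outright, mirroring the observed crash:
try:
    postprocess("Tell me about David Mayer.")
except TypeError:
    print("I'm unable to produce a response.")
```

The point of the sketch is that a single malformed entry in an otherwise routine filtering list is enough to abort the whole reply, no malice or conspiracy required.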

As is often the case with these things, Hanlon’s razor applies: never attribute to malice (or conspiracy) what is adequately explained by stupidity (or a syntax error).

The whole drama serves as a useful reminder that these AI models are not magic; they are systems that are actively monitored and meddled with by the companies that make them. Next time you’re thinking of getting facts from a chatbot, consider whether it might be better to go straight to the source instead.
