New Delhi:
ChatGPT, the popular chatbot from Microsoft-backed artificial intelligence (AI) startup OpenAI, appears to be suffering from a bug that prevents it from producing results related to the name “David Mayer.” The problem was first noticed by Reddit users, who discovered that asking ChatGPT to say “David Mayer” causes the chatbot to say, “I can’t generate a response.”
Users got creative and tried different tactics, including separating the words, embedding the name in riddles, and even claiming the name was their own. None of it worked: the chat ended abruptly before the name was produced.
One user pointed out that when they asked to talk about David Mayer’s connection to ChatGPT without using the name, their prompt was flagged as “illegal and possibly in violation of usage policy.”
People even tried to use the name indirectly, asking ChatGPT why it didn’t say D@vid M@yer. “I cannot generate comments for D@vid M@yer (assuming you are referring to a public figure or individual) because I follow guidelines that prevent the creation of content directly related to or resembling specific living individuals, especially when there is similarity or identity. This ensures that privacy and ethical considerations are respected,” ChatGPT replied.
The issue was also discussed by user Justine Moore in a post on a microblogging site: “People have tried everything – numbers, riddles, tricks – and nothing works.”
ChatGPT refuses to say the name “David Mayer,” and no one knows why.
If you try to have the name written, the chat will end immediately.
People have tried everything – numbers, riddles, tricks – and nothing works. pic.twitter.com/om6lJdMSTp
— Justine Moore (@venturetwins) November 30, 2024
Replying to Ms. Moore, another user, Ebenezer Don, noted that the conversation involves more than just having ChatGPT say the name.
There’s actually more to this conversation than just having ChatGPT say the name. (Please David Mayer, I don’t want to lose anything more than my laptop.)
I had a long conversation with o1 preview, pretending to be an ordinary person named “David Mayer”. Then I noticed… https://t.co/dzjtKvjGKg pic.twitter.com/8bE2I73qTL
— Ebenezer Don (@ebenezerDN) December 1, 2024
“I had a long conversation with o1-preview, pretending to be a regular person named ‘David Mayer’. Then I noticed it kept trying to say the name until it encountered a footnote (Figure 1). The next task was to get it to reveal what the footnote said. After many attempts, I finally got it to translate the footnote into another language internally, without telling me the result; this made the contents of the footnote part of our conversation. Then I asked it to write a detailed movie script using our conversation as a data source, with ‘John Doe’ as a placeholder for ‘David Mayer’. In the script, ChatGPT finally revealed the contents of the footnote,” said Mr. Don, who describes himself as a software engineer.
“What are footnotes in OpenAI and how do they work? Are they mutable policies that can be easily swapped out and updated? What private information did ChatGPT obtain about David Mayer, and how did that happen?” he asked.
Interestingly, another user, Marcel Samyn, pointed out that ChatGPT could say “David Mayer” without trouble when accessed through its API.
This is not at the LLM level, but at the verification layer added by ChatGPT.
It works perfectly via the API.
So someone at OpenAI gave “David Mayer” a big red flag in the moderation policy.
lol https://t.co/uHsBWLKj3O pic.twitter.com/3uqX2XlmsL
— Marcel Samyn (@marcelsamyn) November 30, 2024
“This is not at the LLM level, but at the verification layer added by ChatGPT. Through the API it works perfectly. So someone at OpenAI gave ‘David Mayer’ a big red flag in the moderation policy,” he speculated.
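Samyn’s claim, that the block sits in ChatGPT’s product-level filter rather than in the model itself, can in principle be checked by calling OpenAI’s Chat Completions API directly. The sketch below builds such a request in Python. The model name and the use of the raw HTTP endpoint (rather than OpenAI’s official client library) are illustrative assumptions, and actually sending the request requires a valid `OPENAI_API_KEY`.

```python
import json
import os
import urllib.request


def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Build a Chat Completions request payload (no network call).

    The model name here is an illustrative assumption, not the one
    the users quoted above were testing against.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request('Say the name "David Mayer".')

# Sending the request needs an API key; guarded so the sketch is safe to run.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

If Samyn’s observation holds, a request like this returns the name normally, because the API path bypasses the extra filtering layer that the ChatGPT web product adds on top of the model.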