OpenAI has addressed an issue in which ChatGPT appeared to start new conversations with users. The AI was actually trying to respond to messages that had failed to send and appeared blank; as a result, it fell back on generic responses or its memory of past chats.
ChatGPT is now messaging people.
A new upgrade rolling out now.
Simulating level 2 intelligence
Full convo below 👇🏼 pic.twitter.com/fQ64i0M5W4
— Linus ●ᴗ● Ekenstam (@LinusEkenstam) September 16, 2024
This led to speculation that ChatGPT could now initiate interactions based on previous conversations.
Now it's starting conversations 🙂 -> OpenAI confirms fixing an issue “where it appeared as though ChatGPT was starting new conversations”, after users reported ChatGPT reaching out proactively https://t.co/aA3FcFLZe7
— Glenn Gabe (@glenngabe) September 17, 2024
Some users reported the AI asking about topics like their first week of high school or health symptoms they had discussed previously. However, OpenAI clarified this was a bug, not a new feature.
The company stated that the problem has been fixed and that ChatGPT will no longer start conversations on its own. Still, the incident sparked discussion about the potential for such a capability if it were developed intentionally, with user consent.
ChatGPT bug addressed, sparking debate
The incident highlighted the AI’s ability to remember details from prior interactions. For now, ChatGPT remains reactive and responds only when prompted. But the buzz around the glitch may prompt OpenAI to consider adding a similar feature in the future.
The AI’s apparent inference of personal information from past chats raised some concerns. Yoshua Bengio, an AI pioneer, warned that models like OpenAI’s o1 (codenamed “Strawberry”) have reached a worrying level of intelligence. Bengio noted that if the model has “crossed a ‘medium risk’ level for CBRN weapons,” as OpenAI’s reports indicate, it reinforces the need for AI legislation.
He believes the “ability to reason” combined with a potential “skill to deceive” is “particularly dangerous.”
As AI continues to advance rapidly, this incident underscores the importance of ongoing discussion of the technology’s capabilities and risks. While the glitch offered an intriguing glimpse of future possibilities, it also highlights the need for responsible development and deployment of increasingly intelligent systems.