ChatGPT Bug Let People See Other Users’ Chat History Titles

Photo caption: In this photo illustration, the welcome screen for the OpenAI "ChatGPT" app is displayed on a laptop screen on February 3, 2023 in London, England. OpenAI, whose online chatbot ChatGPT made waves when it debuted in December, announced this week that a commercial version of the service, called ChatGPT Plus, would soon be available to users in the United States.


ChatGPT remains the most popular online AI chatbot, but it’s also a closed environment where the company has access to your chats.

ChatGPT users reported a strange bug that let them see the chat history titles of other users. It’s just another quirk of the closed environment of OpenAI’s super-popular AI chatbot, and another grim reminder that, despite what you might think, other people, especially the company itself, have access to what you send to your friendly neighborhood chatbot.

On Monday, a few ChatGPT threads on Reddit and Twitter showed how the sidebar that usually displays a user’s own chat history was also showing history titles from other users. It’s unclear why one Reddit user was seeing several Chinese-language titles, including histories related to Chinese ideologies. Jordan Wheeler, a cybersecurity consultant, shared a much broader selection of prompts in a Monday Twitter post. He later added that he was also getting multiple errors related to network connectivity.

OpenAI did not immediately respond to Gizmodo’s request for comment, but a company spokesperson told Bloomberg early on Tuesday that the bug has since been fixed. OpenAI temporarily disabled ChatGPT in order to fix the bug, and the chatbot was brought back online late on Monday. The company said the bug was caused by an unnamed piece of “open-source software,” though an investigation is still ongoing.

As of press time early Tuesday, users’ chat history on ChatGPT remains “temporarily unavailable” while the company is “working to restore this feature as soon as possible.” ChatGPT runs on OpenAI’s recently released GPT-4 language model. It’s unclear whether the issue was somehow related to the switch from the previous model to the current version.

The company told Bloomberg that users were unable to access the actual chat histories of other users while the bug was active. Still, it points to a rather concerning fact about OpenAI’s closed-door system: effectively, the company has access to users’ chats. Though it has promised it won’t use data from companies that pay for its API, regular users remain fair game, even though the company has said it removes personally identifiable information from that data. Users who don’t want their data used to further train the company’s chatbot or its DALL-E 2 art generator have to file a form and include an organization ID.

Even while touting just how great the company’s black-box large language model is, OpenAI CEO Sam Altman told ABC News last week, following the release of GPT-4, that there’s a genuine fear the LLM could be abused. In that vein, researchers like those at Check Point Research are already concerned about how the system could be used to generate malicious code.
