ChatGPT, developed by OpenAI, is an AI chatbot that can hold lengthy dialogues, answer questions, and write poems, jokes, code, and movie screenplays.
Unfortunately, the chatbot has shortcomings, and its output can be inaccurate. This phenomenon, known as hallucination, can produce confidently stated but incorrect answers or fabricated information.
Fixes a bug that exposed conversation titles
On Tuesday morning, ChatGPT owner OpenAI announced it had resolved a significant problem with its AI-powered chatbot. CEO Sam Altman expressed regret over the outage but assured customers that the bug had been fixed.
On Monday (March 20), the popular conversational chatbot had to be temporarily taken offline after reports that users were seeing other people’s conversations in their chat history sidebar. Reddit users shared screenshots of sidebars listing the titles of other people’s chats, and others on Twitter reported similar experiences.
Privacy and security questions
Though the exposure was limited to conversation titles, this oversight raises important privacy and security questions as more people entrust chatbots with sensitive data.
The company’s FAQ cautions users not to share sensitive information in their conversations, since those conversations may be reviewed and used to improve the system.
Businesses often feel pressure to stay ahead of competitors with new technology, which can lead employees to share confidential data without authorization. Unfortunately, this bug serves as a reminder that even the best companies can make errors or overlook issues with unintended consequences.
The issue stemmed from a bug in an open-source library OpenAI uses in its software. As CEO Sam Altman tweeted, the company had fixed it and would conduct a technical postmortem to identify what went wrong.
After the incident, OpenAI disabled the chat history sidebar for all users, paid and free alike. It later clarified that the glitch exposed only the brief titles of other people’s conversations, not their actual content.
Since its launch, the AI-powered chatbot has generated countless responses to users’ inquiries. Leveraging a large language model trained on immense amounts of data, it produces human-like answers ranging from song lyrics and school essays to film scripts. According to UBS, the application now boasts 100 million monthly active users, making it the fastest-growing consumer application in history.
While the bug’s reach was limited, it has forced many to step back and consider more carefully how they use this technology.
Makers of the bot
OpenAI, the maker of the bot, identified and fixed the bug and said it would restore the chat history feature within a few days. CEO Sam Altman shared on Twitter that the root cause had been found and that steps had been taken to prevent a repeat occurrence.
Though the company did not initially specify which component contained the bug, it acknowledged the issue was significant for affected customers. Users could not view full transcripts of other people’s conversations, only brief titles of recent ones, yet the glitch still caused problems for some users: beyond the exposure itself, some also reported network connectivity errors and an inability to load their history data.
By Wednesday afternoon, the problem had been corrected, though some users still reported issues.
On Wednesday, OpenAI traced the bug to an open-source library used by ChatGPT’s software, which had allowed some users to view the titles of other people’s conversations. The issue was rectified on Thursday.
At its core, ChatGPT’s AI system is conceptually simple, consisting of billions of tiny elements that each perform one basic operation: take a set of numerical inputs and combine them with learned weights.
- This process allows it to generate its own text, follow prompts, and so on.
- While this is an impressive feat of machine learning, its outputs sometimes veer off in distinctly unhuman directions.
- Last month, the CEO of AI company OpenAI warned about how artificial intelligence could impact workforces and elections while spreading disinformation.
- Now there’s another cautionary tale regarding allowing your conversations with an AI chatbot to be saved in its history database: they could potentially be viewed by anyone with access to this database.
- The company moved quickly to fix the bug, but it serves as a timely reminder: never share sensitive data with bots, and don’t allow chatbots access to your social media accounts.
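The "weighted sum" operation described above can be sketched in a few lines. This is a purely illustrative toy example, not OpenAI's actual code; the function name, weights, and the choice of ReLU as the nonlinearity are all assumptions for demonstration:

```python
def neuron(inputs, weights, bias):
    """A single artificial 'element': combine numerical inputs with
    learned weights, add a bias, then apply a simple nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU: keep positive values, zero out negatives


# Example: three inputs combined with three weights.
# (0.5*1.0) + (-0.25*2.0) + (0.1*3.0) + 0.05 = 0.35
print(neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], 0.05))
```

A real language model stacks billions of such units in layers, with the weights set during training rather than by hand, but each unit individually is no more complicated than this.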
While this bug may not be the first to draw the attention of privacy experts, it certainly raises some interesting questions. We’ll keep an eye out for further developments related to this story.
Posted on Twitter
Sam Altman, CEO of OpenAI, posted on Twitter to explain that the issue was caused by “a bug in an open-source library.” He further tweeted that a fix had been released and that a technical postmortem would follow shortly. According to Altman, only a small number of users had seen these private chat titles; nevertheless, OpenAI disabled the feature on Monday to address the problem.
Users’ conversations were affected by the bug
According to a Bloomberg report, the bug caused the titles of users’ conversations with ChatGPT to appear to others, without exposing the content of those exchanges. Once someone logs in, their chat history appears on the left-hand side of the site. The service has grown enormously popular as people around the world use it to create poetry, lyrics, and other written works.