
OpenAI Under Fire Once Again Over ChatGPT Spitting Out Misinformation

Apparently, if the chatbot falsifies information about you, OpenAI has no means to fix it.

Image Credit: NOYB

OpenAI, the developer of DALL-E and ChatGPT, has found itself at the center of yet another controversy following a privacy complaint filed by the European privacy rights group NOYB on behalf of an anonymous individual, alleging that the developer's popular chatbot provided incorrect information about the person in question.

As reported by Reuters, the complainant, described as a public figure, asked ChatGPT about their birthday, and the chatbot repeatedly provided an incorrect date that it "hallucinated" or, in layman's terms, simply made up. The crux of the matter is that Article 5 of the EU General Data Protection Regulation (GDPR) requires personal data to be accurate, meaning that instead of admitting it didn't know the right answer, ChatGPT violated the regulation by giving out false information.

Compounding the issue, NOYB claimed that OpenAI rejected the request to amend or erase the data, saying that it was not technically possible, and refused to disclose any information about ChatGPT's sources. If true, such a response would contravene two further GDPR provisions: Article 15, which requires companies to show what data they hold on individuals and where it comes from, and Article 16, which gives individuals the right to have inaccurate data about them corrected.

"Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences," commented NOYB's Data Protection Lawyer Maartje de Graaf. "It's clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around."

As stated by NOYB, it is now urging the Austrian data protection authority (DSB) to investigate OpenAI's data processing methods and the measures implemented to ensure the accuracy of personal data processed within the company's language models. NOYB also demands that OpenAI comply with the complainant’s access request and align its processing practices with the GDPR.

It's notable that OpenAI has been caught up in controversy before in recent memory. Last month, following an interview with the company's CTO Mira Murati by The Wall Street Journal, OpenAI was criticized for its inability to explain how its text-to-video model Sora was trained. When questioned about the data used to train Sora, Murati said that she was "not sure", repeating over and over that the data was, of course, "publicly available and licensed".

It seems that the EU is slowly but surely becoming the trailblazer in the fight against the machine. Earlier, the European Parliament passed the world's first comprehensive law regulating AI, aiming to protect human rights by assigning obligations to AI systems based on their potential risks and levels of impact. Endorsed by an overwhelming majority, the legislation will impose limits on both high-risk AI systems and general-purpose AI, such as the GPT-4 language model and image generators, once it takes effect.
