Tests Show That ChatGPT Uses Elon Musk's AI Encyclopedia as Source
Don't believe everything the bot says.
Asking ChatGPT instead of doing your own research is tempting, but it's important to remember that a chatbot pulls information from all over the internet without checking the legitimacy of its sources. Just remember Google AI's infamous glue-on-pizza advice.
Some sources can be especially questionable: The Guardian ran a series of tests and found that GPT-5.2, the latest model behind ChatGPT, cited Grokipedia, Elon Musk's AI encyclopedia, nine times in response to "more than a dozen different questions."
It hardly needs pointing out that an AI using another AI as a source is not a great idea, especially when that source is as biased as Musk's creation. His political views are well known, and they have reportedly shaped his AI's responses.
According to The Guardian, the instances where ChatGPT cited Grokipedia included queries about political structures in Iran and about the biography of Sir Richard Evans, a historian who served as an expert witness against Holocaust denier David Irving in his libel trial.
"ChatGPT did not cite Grokipedia when prompted directly to repeat misinformation about the insurrection, about media bias against Donald Trump, or about the HIV/Aids epidemic – areas where Grokipedia has been widely reported to promote falsehoods. Instead, Grokipedia’s information filtered into the model’s responses when it was prompted about more obscure topics.
"For instance, ChatGPT, citing Grokipedia, repeated stronger claims about the Iranian government’s links to MTN-Irancell than are found on Wikipedia – such as asserting that the company has links to the office of Iran’s supreme leader."
Claude, another popular LLM, also referenced Musk’s encyclopedia on topics from petroleum production to Scottish ales, the outlet says.
An OpenAI spokesperson said the model’s web search "aims to draw from a broad range of publicly available sources and viewpoints."
"We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations."
They added that the company had ongoing programs to filter out low-credibility information and influence campaigns.
This comes on top of the hallucinations that chatbots are already notorious for. For now, they should not be blindly trusted: we still have to do our own research, at least when it comes to serious topics.