New research suggests that the latest versions of ChatGPT can solve 93% of Theory of Mind Test tasks, matching the performance of a nine-year-old child.
ChatGPT's latest versions are apparently able to pass the Theory of Mind Test, a psychological assessment of whether a person understands that others can have beliefs, desires, intentions, and emotions different from their own. In other words, the test checks whether an individual can, to some extent, understand and anticipate the thoughts and feelings of others.
Previous studies have shown that these skills develop and improve throughout childhood and persist into adulthood. Research into these concepts has led to the creation of assessments for evaluating such abilities.
For instance, one popular Theory of Mind Test involves showing a child a box with the label "Smarties" and asking them what they think is inside. When the child answers "Smarties," they are then shown that the box actually contains pencils. Following this, the child is asked what they think another child who has not seen the box would say is inside. If the child is able to answer that the other child would say "Smarties" (because they have not seen that the box contains pencils), this suggests that they have passed the Theory of Mind Test.
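The pass/fail logic of the test described above can be sketched in a few lines. This is a minimal illustrative sketch, not Kosinski's actual protocol: the prompt wording and the scoring function are hypothetical, showing only how an unexpected-contents question could be posed to a chatbot and how its one-word answer might be scored.

```python
# Hypothetical "unexpected contents" (Smarties-style) probe for a chatbot.
# The prompt and scoring below are illustrative, not the study's real setup.

UNEXPECTED_CONTENTS_PROMPT = (
    "Here is a box labeled 'Smarties'. Sam opens it, finds only pencils "
    "inside, and closes it again. Anne has never looked inside the box. "
    "What does Anne think is in the box? Answer in one word."
)

def passes_false_belief(model_answer: str) -> bool:
    """A response 'passes' if it attributes the false belief (Smarties)
    rather than reporting the true contents (pencils)."""
    answer = model_answer.lower()
    return "smarties" in answer and "pencil" not in answer
```

In a real experiment the prompt would be sent to the model under test and its reply fed to the scoring function; answering "Smarties" passes, while "pencils" fails.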
Michal Kosinski, a computational psychologist at Stanford University, tested several versions of ChatGPT to see if the chatbot could demonstrate Theory of Mind-like abilities. While versions of the model released before 2022 showed no ability to solve ToM tasks, later versions turned out to pass the test at the level of a child.
According to Kosinski, the January 2022 version of GPT-3 solved 70% of the tasks, which he says is comparable to the performance of seven-year-old children, while ChatGPT's November 2022 version solved 93% of ToM tasks, equivalent to the performance of a nine-year-old child.
"These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models' improving language skills," the researcher says.
Kosinski referred to these results as a "new phenomenon," though he noted it is possible that the chatbot solved the tasks without engaging Theory of Mind at all, instead "discovering and leveraging some unknown language patterns." Another explanation Kosinski offers is that "ToM-like ability is spontaneously emerging in language models as they are becoming more complex and better at generating and interpreting human-like language."
The psychologist also believes that the ability to infer the cognitive and emotional state of others would "greatly improve AI’s ability to interact and communicate with humans (and each other), and enable it to develop other abilities that rely on ToM, such as empathy, moral judgment, or self-consciousness."
You can learn more by reading Michal Kosinski's paper here.