Brain scans showed that LLM users exhibited only about half as many Alpha Band connections as those who relied solely on their own brains.
Spend roughly 30 minutes scrolling Twitter or LinkedIn and you'll stumble upon dozens of posts clearly written by artificial intelligence, all of which sound formulaic and somehow recreate the Uncanny Valley effect in written form. Add hundreds of comments, also AI-generated, praising those posts, and you may begin questioning whether we as a species are becoming dumber because of AI.
As it turns out, we may well be. According to a research paper authored by a team of scientists from MIT, Wellesley College, and MassArt, individuals who rely on AI have fewer brain connections than those who use only their own brains, struggle to quote from essays they themselves produced shortly before, and, most disturbingly, perform worse even after ditching LLMs, suggesting that AI chatbots may inflict long-term damage.
For their study, the team assigned three groups of participants – students aged 18 to 39 from five universities in the greater Boston area – to write an essay. The first group was tasked with using an LLM (ChatGPT, to be precise) to complete the task, the second with using search engines, and the last with relying exclusively on their brains.
To investigate the impact of AI on one's cognitive abilities in the long run, the researchers also conducted a fourth session in which participants from the LLM group were required to complete the task without any tools – a condition the team referred to as "LLM-to-Brain" – while participants from the Brain-only group used an LLM for the first time ("Brain-to-LLM"). The researchers then employed electroencephalography (EEG), NLP analysis, and interviews with the participants to figure out how the subjects were affected by the way they had to approach the task at hand.
Based on the team's findings, participants who wrote their essays using LLMs exhibited significantly weaker semantic processing networks, with only 42 Alpha Band connections – commonly linked to internal attention and semantic processing during creative ideation – compared to 79 in the Brain-only group. Theta Band activity – closely associated with working memory load and executive control – also showed a stark difference, with 65 connections for the Brain-only group versus 29 for the LLM group.
"Our findings offer an interesting glimpse into how LLM-assisted vs. unassisted writing engaged the brain differently," the team writes. "In summary, writing an essay without assistance (Brain-only group) led to stronger neural connectivity across all frequency bands measured, with particularly large increases in the theta and high-alpha bands.
"This indicates that participants in the Brain-only group had to heavily engage their own cognitive resources: frontal executive regions orchestrated more widespread communication with other cortical areas (especially in the theta band) to meet the high working memory and planning demands of formulating their essays from scratch."
Moreover, the study revealed that reliance on AI greatly impairs participants' ability to accurately recall quotes, with 83.3% of the LLM group failing to correctly quote their own essays, while only 11.1% of participants in both the Search Engine and Brain-only groups faced the same difficulty.
"Taken together, the behavioral data revealed that higher levels of neural connectivity and internal content generation in the Brain-only group correlated with stronger memory, greater semantic accuracy, and firmer ownership of written work. The Brain-only group, though under greater cognitive load, demonstrated deeper learning outcomes and stronger identity with their output.
"The Search Engine group displayed moderate internalization, likely balancing effort with outcome. The LLM group, while benefiting from tool efficiency, showed weaker memory traces, reduced self-monitoring, and fragmented authorship."
Circling back to the team's final "LLM-to-Brain" and "Brain-to-LLM" experiments, the study showed that the LLM-to-Brain group repeatedly focused on a narrower set of ideas, as evidenced by n-gram analysis and supported by interview responses, which the researchers described as "perhaps one of the more concerning findings."
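The paper's actual n-gram pipeline isn't reproduced here, but the general idea behind this kind of analysis – counting how often the same short word sequences recur in a text as a proxy for repetitive phrasing and a narrow set of ideas – can be sketched in a few lines of Python (the function names and the toy essay below are illustrative, not taken from the study):

```python
from collections import Counter

def ngrams(tokens, n):
    # All contiguous sequences of n words.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def repetition_score(text, n=3):
    # Fraction of n-gram occurrences that belong to a repeated n-gram:
    # a crude proxy for how much an essay reuses the same phrasing.
    tokens = text.lower().split()
    counts = Counter(ngrams(tokens, n))
    total = sum(counts.values())
    if total == 0:
        return 0.0
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / total

# Toy example: an "essay" that restates the same phrase verbatim
# scores much higher than one with no repeated trigrams.
repetitive = ("access to education is a human right "
              "because access to education is a human right")
varied = "education broadens minds and strengthens societies over time"
print(repetition_score(repetitive), repetition_score(varied))
```

A higher score means a larger share of the text's word sequences are recycled, which is roughly the pattern the researchers report for the LLM-to-Brain group.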
The team notes that this repetition suggests many participants may not have engaged deeply with the topics or critically examined the material provided by ChatGPT. It also reflects the accumulation of cognitive debt, a condition in which repeated reliance on LLMs replaces the cognitive processes required for independent thinking, suggesting that habitual use of AI may indeed be eroding people's ability to think for themselves.
"Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives."
That said, the team itself admits the study is not the final word: it involved only 54 test subjects, all university students from the same geographical area with relatively similar backgrounds, so its findings remain open to criticism.
Read the full research paper here.
Preview image by Jaap Arriens/NurPhoto, Getty Images