Tuesday, July 8, 2025

What Happens To Your Brain When You Use ChatGPT?

Your brain works differently when you're using generative AI to complete a task than it does when you use your brain alone. Namely, you're less likely to remember what you did. That's the somewhat obvious-sounding conclusion of an MIT study that looked at how people think when they write an essay -- one of the earliest scientific studies of how using gen AI affects us.

The study, a preprint that has not yet been peer-reviewed, is pretty small (54 participants) and preliminary, but it points toward the need for more research into how using tools like OpenAI's ChatGPT is affecting how our brains function.

The findings show a significant difference in what happens in your brain and with your memory when you complete a task using an AI tool rather than when you do it with just your brain. But don't read too much into those differences -- this is just a glimpse at brain activity in the moment, not long-term evidence of changes in how your brain operates all the time, researchers said.

"We want to try to give some first steps in this direction and also encourage others to ask the question," Nataliya Kosmyna, a research scientist at MIT and the lead author of the study said.

The growth of AI tools like chatbots is quickly changing how we work, search for information and write. All of this has happened so fast that it's easy to forget that ChatGPT first emerged as a popular tool just a few years ago, at the end of 2022. That means we're just now beginning to see research on how AI use is affecting us.

Here's a look at what the MIT study found about what happened in the brains of ChatGPT users, and what future studies might tell us.

The MIT researchers split their 54 research participants into three groups and asked them to write essays during separate sessions over several weeks. One group was given access to ChatGPT, another was allowed to use a standard search engine (Google), and the third had none of those tools, just their own brains. The researchers analyzed the texts they produced, interviewed the subjects immediately after they wrote the essays, and recorded the participants' brain activity using electroencephalography (EEG).

An analysis of the language used in the essays found that those in the "brain-only" group wrote in more distinct ways, while those who used large language models produced fairly similar essays. More interesting findings came from the interviews after the essays were written. Those who used their brains alone showed better recall and were better able to quote from their writing than those who used search engines or LLMs.

It might be unsurprising that those who relied more heavily on LLMs, who may have copied and pasted from the chatbot's responses, would be less able to quote what they had "written." Kosmyna said these interviews were done immediately after the writing happened, and the lack of recall is notable. "You wrote it, didn't you?" she said. "Aren't you supposed to know what it was?"

The EEG results also showed significant differences between the three groups. There was more neural connectivity -- interaction between the components of the brain -- among the brain-only participants than in the search engine group, and the LLM group had the least activity. Again, not an entirely surprising conclusion. Using tools means you use less of your brain to complete a task. But Kosmyna said the research helped show what the differences were: "The idea was to look closer to understand that it's different, but how is it different?" she said.

The LLM group showed "weaker memory traces, reduced self-monitoring and fragmented authorship," the study authors wrote. That can be a concern in a learning environment: "If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it."

After the first three essays, the researchers invited participants back for a fourth session in which they were assigned to a different group. The results there, from a significantly smaller group of subjects (just 18), showed that those who started in the brain-only group displayed more neural activity even when using an LLM, while those who started in the LLM group showed less neural connectivity when working without the LLM than the original brain-only group had.

When the MIT study was released, many headlines claimed it showed ChatGPT use was "rotting" brains or causing significant long-term problems. That's not exactly what the researchers found, Kosmyna said. The study focused on the brain activity that happened while the participants were working -- their brain's internal circuitry in the moment. It also examined their memory of their work in that moment.

Understanding the long-term effects of AI use would require a longer-term study and different methods. Kosmyna said future research could look at other gen AI use cases, like coding, or use technology that examines different parts of the brain, like functional magnetic resonance imaging, or fMRI. "The whole idea is to encourage more experiments, more scientific data collection," she said.

While the use of LLMs is still being researched, it's also likely that the effect on our brains isn't as significant as you might think, said Genevieve Stein-O'Brien, assistant professor of neuroscience at Johns Hopkins University, who was not involved in the MIT study. She studies how genetics and biology shape brain development, which occurs early in life. Those critical periods tend to close during childhood or adolescence, she said.

"All of this happens way before you ever interact with ChatGPT or anything like that," Stein-O'Brien told me. "There is a lot of infrastructure that is set up, and that is very robust."

The situation might be different in children, who are increasingly coming into contact with AI technology, although the study of children raises ethical concerns for scientists wanting to research human behavior, Stein-O'Brien said.
