
A Study Found That ChatGPT Affects Your Critical Thinking, But The Problem Is How We Use AI

By Celine Low, Content Editor

A recent study by researchers at MIT's Media Lab found that reliance on OpenAI's ChatGPT can affect your critical thinking capabilities.

The study, which was published on 10 June, divided 54 participants aged 18 to 39, drawn from five universities in the Boston area, into three groups. The groups were asked to write an essay using ChatGPT, Google's search engine, or no AI tools at all, respectively. Each participant completed three writing sessions. In a fourth session, participants switched groups to see how the change affected them.


What The Study Found

The three groups were classified as LLM (using ChatGPT), Search Engine, and Brain-only. Researchers used an EEG to measure the participants' brain activity over the course of several months. They found that the LLM group recorded the lowest brain activity and performed worse across several measures, spanning neural engagement, language use, and overall behaviour. Those using ChatGPT also got "lazier" with each subsequent essay, often resorting to simply copying and pasting text.

During the fourth session, participants were reassigned: the LLM group was asked to rewrite their essays without any tools (referred to as "LLM-to-Brain"), while the Brain-only group was asked to use AI tools ("Brain-to-LLM").

The EEG results showed clear differences in brain activity:

  1. The Brain-only group had the strongest and most connected brain networks.
  2. Those using a search engine to write essays showed a moderate level of brain engagement.
  3. But the ChatGPT users had the weakest brain connections, suggesting less mental effort.

Essentially, the more people relied on external tools, the less their brains seemed to work. When ChatGPT users were then forced to rely only on their brains, their brain activity still showed signs of being "under-engaged". On the other hand, the Brain-to-LLM group showed more memory recall and activity in brain areas linked to thinking and problem-solving, similar to the search engine users.

The study also found that ChatGPT users felt the least "ownership" over their essays and struggled to even quote their own work accurately.


This Study Wasn't Peer-Reviewed, But The Data Can't Be Ignored

The sample size of this study is relatively small. However, the author, Nataliya Kosmyna, felt it was important to release the findings now to understand the long-term effects of AI tools on how we learn, especially for younger users.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6 to 8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” Kosmyna was quoted saying in TIME. “Developing brains are at the highest risk.”


The Problem With How We Use AI

AI technology is rapidly being integrated into our smartphones, laptops, and even our cars. However, it seems we humans often rely on AI passively, rather than using it in a way that genuinely benefits our workflow.

Our relationship with AI can be compared to the rise of calculators in the 1970s. The pocket device faced great resistance: some feared it would take away jobs, while others protested its use in classrooms, fearing it would undermine students' basic math skills.

Yet, calculators are now fully embraced, helping us complete complex tasks much faster. The solution wasn't to ban them, but to raise the bar: exams became harder, expecting students to use calculators for basic math so they could focus their mental effort on more challenging problems. I mean, math didn't get easier just because calculators were around.

Image via BBC News

The current problem with AI in education is that schools have not raised the bar in a way that makes AI a necessary tool for more complex work. Educators still require students to complete the same tasks and assignments, expecting the same standards as they did five years ago.

So, is ChatGPT following in the footsteps of calculators in the 1970s? Yes, I think so. AI isn't inherently bad; we're just not using it to our advantage. We use it for simple tasks that we could easily do ourselves, and in our reliance on these tools, we're becoming less inclined to think for ourselves.


News sources: TIME, MIT Media Lab, The Conversation
