AI Tools and Critical Thinking: Are We Losing Focus?

Do AI tools weaken critical thinking? Research suggests cognitive offloading may be eroding human reasoning and decision-making skills.
  • A 2025 study found higher AI use strongly correlated with lower critical thinking scores.
  • Younger users (ages 17–25) exhibited the weakest critical thinking and highest AI dependence.
  • Higher educational attainment appeared to protect users from AI-related cognitive declines.
  • Passive trust in AI-generated answers leads to reduced inquiry and active reasoning.
  • Balancing AI use with manual cognitive effort helps preserve mental engagement and critical thought.

Artificial intelligence tools have become essential in modern life—streamlining tasks, generating content, and offering instant information. But as we increasingly lean on these digital assistants, researchers warn we might be outsourcing more than just tasks—we may be offloading vital cognitive functions. A 2025 study by Michael Gerlich considers the risks posed by this overreliance, highlighting the complex connection between AI tools, critical thinking, and a phenomenon known as cognitive offloading.


What Is Cognitive Offloading?

Cognitive offloading is the use of external tools to handle mental tasks that would otherwise call for memory or reasoning. This idea includes everything from writing reminders on a sticky note to using a calculator for basic math. It’s a useful approach that permits users to shift mental effort—making the brain available to concentrate on more complex matters.

Everyday Examples of Cognitive Offloading

  • GPS apps replace internal navigation skills.
  • Word processors with grammar suggestions reduce language processing effort.
  • AI chatbots generate emails, essays, and reports, offering quick solutions without requiring users to create their own arguments or conduct research.
  • Search engines provide instant access to information, reducing the need to recall facts.

While offloading these tasks simplifies modern living, the psychological trade-off is that frequent reliance on external tools may weaken our “cognitive muscles.” Over time, tasks we once performed through deliberate thought become automated, potentially dulling our problem-solving skills and critical engagement.



The Study: AI’s Impact on Critical Thinking Abilities

Michael Gerlich’s 2025 research, published in the peer-reviewed journal Societies, offers one of the most comprehensive looks at this issue to date. With 666 participants from diverse age groups and educational backgrounds, the study combined surveys, critical thinking assessments, and interviews to examine the relationship between AI tool use and cognitive engagement.

Methodology Highlights

  • Participants were given tasks assessing core components of critical thinking: interpretation, analysis, evaluation, inference, and self-regulation.
  • Researchers gathered AI usage data through self-reported frequency across different scenarios (academic, professional, personal).
  • Advanced statistical techniques, such as random forest regression, were used to isolate patterns (a brief code sketch follows this list).
  • Participants’ qualitative responses during interviews added context to the numbers.
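
For readers unfamiliar with the technique, the sketch below shows what random forest regression looks like in code. It is a minimal illustration using scikit-learn on synthetic data; the variable names, scales, and relationships are assumptions invented for this example, not Gerlich’s actual dataset.

```python
# Illustrative sketch: modeling a critical thinking score with random
# forest regression. All data below are synthetic; column names are
# assumptions for illustration, not the study's real variables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 666  # mirrors the study's sample size; the data here are fabricated

age = rng.integers(17, 70, size=n)
education = rng.integers(0, 4, size=n)   # 0 = secondary ... 3 = postgraduate
ai_usage = rng.uniform(0, 10, size=n)    # self-reported frequency scale

# Toy outcome: score falls with AI use and rises with education
score = 70 - 2.5 * ai_usage + 3.0 * education + rng.normal(0, 5, size=n)

X = np.column_stack([age, education, ai_usage])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, score)

# Feature importances hint at which predictors drive the model's splits
for name, importance in zip(["age", "education", "ai_usage"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```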

Key Finding

The more participants relied on AI tools, the lower their critical thinking scores tended to be. The connection was strong across age groups, with notable dips in interpretive depth and reasoning complexity among heavy AI users.



Critical Thinking Suffers Under AI Dependency

Relying on artificial intelligence for answers can lead to a passive form of engagement with information. Gerlich’s findings show that habitual users were more likely to accept AI-generated solutions without questioning their validity.

Mechanisms at Work

  • Reduced Effortful Thinking: When AI automates reasoning, users are less likely to engage in thoughtful consideration.
  • Surface-Level Understanding: Users accept outputs at face value without probing the underlying concepts.
  • Confirmation Bias Reinforcement: AI, trained to provide “palatable” answers, may strengthen pre-existing views rather than challenge them.

In other words, the act of thinking—questioning, weighing, and deciding—is being short-circuited by the convenience of machine-generated responses.



Generational Disparities: Why Young Adults Are Most at Risk

One of the notable insights from Gerlich’s study is how age changes the impact of AI use on critical thinking. Individuals aged 17 to 25 consistently showed:

  • The highest frequency of AI tool use.
  • The greatest reliance on AI-generated content for decision-making and learning.
  • The lowest scores on tasks involving independent analysis.

Digital Natives, Digital Dependents?

This age group, often termed “digital natives,” has never known a world without the internet. Their familiarity with technology makes them confident users—but familiarity also creates dependency. Given their frequent exposure to AI-powered solutions, younger users may fall into habitual cognitive offloading before they have built the critical thinking skills needed for discernment.

Conversely, participants aged 46 and older demonstrated greater resilience:

  • Less frequent use of AI tools.
  • Higher confidence in personal judgment.
  • Stronger performance on reasoning-based assessments.

The learning curve or mistrust some older users feel toward AI may, ironically, be preserving their deeper analytical capacities.



The Protective Power of Education

One of the most hopeful findings in Gerlich’s work is that education appears to protect against AI’s cognitive risks. Participants with college degrees or higher:

  • Engaged more critically with technology.
  • Questioned AI outputs more frequently.
  • Demonstrated more consistent use of fact-checking and verification strategies.

Metacognition as a Safeguard

Highly educated respondents showed strong metacognitive skills—thinking about how they think. This quality allowed them to:

  • Use AI as a supplement rather than a crutch.
  • Reflect on the quality and source of an AI’s response.
  • Integrate machine output into broader cognitive frameworks.

This suggests that formal education doesn’t just impart knowledge—it builds cognitive habits that help users engage with AI wisely.



Trusting the Machine: Psychological Risks and Passive Consumption

Many of the interviewees—particularly younger participants—described placing automatic trust in AI-generated information. They reported:

  • Rarely verifying the accuracy of responses.
  • Using AI outputs “as-is” in assignments or work.
  • Treating AI as an authority or final word.

This level of unchecked trust raises concerns about a rapidly growing “passive consumption syndrome,” where individuals accept information without evaluating its credibility, logic, or relevance.

This willful disengagement may severely limit:

  • Decision-making quality in personal or professional contexts.
  • Creative expression, as users regurgitate AI-generated content.
  • Problem-solving flexibility, especially when novel or ambiguous conditions arise.

Correlation Isn’t Causation… Yet

It’s important to clarify that Gerlich’s study is correlational, not causal. AI tool use is associated with lower critical thinking performance—but that doesn’t necessarily mean one causes the other.

Alternative explanations include:

  • Individuals with weaker cognitive confidence may naturally gravitate toward AI for support.
  • Pre-existing educational or socioeconomic factors might mediate how AI is used.

Despite this, the consistency of the data across large, diverse samples suggests there’s reason for concern and merit in more targeted investigation.
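
To make the confounding possibility concrete, the toy simulation below (all data synthetic) creates a hidden “confidence” factor that drives both AI reliance and test performance. The two measures end up clearly correlated even though neither causes the other.

```python
# Toy demonstration that a hidden common cause can produce correlation
# without causation. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

confidence = rng.normal(0, 1, size=n)  # hidden common cause
ai_reliance = -0.8 * confidence + rng.normal(0, 1, size=n)  # low confidence -> more AI use
test_score = 0.8 * confidence + rng.normal(0, 1, size=n)    # high confidence -> better scores

# The two variables never reference each other, yet they correlate
# (roughly -0.4 here) purely through the shared confidence factor.
r = np.corrcoef(ai_reliance, test_score)[0, 1]
print(f"correlation: {r:.2f}")
```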


Strategies for Balancing AI Use with Cognitive Engagement

To counteract the potential downsides of cognitive offloading, conscious strategies can help keep your mental muscles strong:

Delay Your Dependence

Before turning to AI, spend several minutes attempting to work through the problem yourself. The initial effort strengthens memory encoding and keeps your reasoning skills engaged.

Ask Clarifying and Critical Questions

After receiving an AI response, pose follow-up inquiries such as:

  • “What assumption does this rest on?”
  • “Is this generalizable across contexts?”
  • “How would I explain this without AI help?”

Always Verify, Especially in High-Stakes Situations

Whether it’s financial advice, medical guidance, or professional judgment—cross-reference AI answers with trusted human sources.

Cultivate Curiosity

Consider the “why” behind AI outputs. Curiosity activates analytic regions of the brain and strengthens critical capacities.

Reinforce Analog Skills Regularly

Practice mental math. Read long-form articles without skimming. Write essays without assistance. Such analog mental habits preserve neural adaptability.


Implications for Education and AI Tool Design

If AI is going to become a fixture in education and professional training, how it is incorporated will determine whether it reinforces or replaces critical thought.

Solutions for Educators

  • Teach AI Literacy: Instruct students on the limits, biases, and verification steps involved in using AI tools.
  • Prompt Reflection: Use open-ended questions and debates that require independent reasoning, not just fact-recall.
  • Blend Digital with Manual Learning: Encourage note-taking by hand, idea mapping, and “AI-free” practice sessions.

Guidance for Developers

  • Design for Transparency: Show the user how a response was generated.
  • Offer Multiple Perspectives: Instead of single-answer outputs, offer several viewpoints with pros and cons.
  • Include Metacognitive Triggers: Prompt users to rate reliability, consider alternatives, or reflect on their decisions post-response (a simple sketch follows below).
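
As one illustration of a metacognitive trigger, the hypothetical wrapper below appends reflection prompts to a model’s response before it is displayed. The function name and prompt wording are invented for this sketch, not drawn from any real product or API.

```python
# Hypothetical sketch of a "metacognitive trigger": wrap an AI response
# with prompts that nudge the user to evaluate the output rather than
# accept it as-is. Names and wording are invented for illustration.

REFLECTION_PROMPTS = [
    "How confident are you that this answer is correct (1-5)?",
    "What is one assumption this answer seems to rest on?",
    "Where could you verify this with an independent source?",
]

def with_metacognitive_triggers(ai_response: str) -> str:
    """Return the AI response followed by reflection prompts."""
    prompts = "\n".join(f"  - {p}" for p in REFLECTION_PROMPTS)
    return f"{ai_response}\n\nBefore you use this answer:\n{prompts}"

print(with_metacognitive_triggers("Paris is the capital of France."))
```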

Impacts in Professional and Workplace Settings

The cognitive risks of AI offloading don’t end with students. A Microsoft Research report (cited in Gerlich’s study) found that knowledge workers, too, are susceptible:

  • Those with less confidence in their professional decision-making were more likely to accept AI advice unquestioningly.
  • Employees with higher confidence used AI as a complement—fact-checking or ideating in collaboration with AI rather than handing it full control.

In sectors requiring analytical rigor—like law, policy, science, and finance—the consequences are significant. A single misinterpretation of AI output can lead to flawed judgments, liability, or the spread of misinformation.


Future Research Directions: Where Do We Go From Here?

While Gerlich’s study opens the door to necessary conversations, more work is needed to refine our understanding and develop safeguards:

  • Longitudinal Studies: Measure how hundreds or thousands of hours of AI use affect reasoning over time.
  • Controlled Experiments: Compare group performance with and without AI access under standardized tasks (see the sketch after this list).
  • Cross-disciplinary Models: Combine cognitive psychology, education, and computer science to build better AI-human interaction models.
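
As a sketch of what the analysis step of such a controlled experiment might look like, the example below compares synthetic task scores from an AI-assisted group against a no-AI control group with a two-sample t-test. The group sizes, means, and spread are illustrative assumptions, not real results.

```python
# Sketch of the analysis step of a controlled experiment: compare
# standardized task scores between a no-AI control group and an
# AI-assisted group. All scores below are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control_scores = rng.normal(72, 10, size=100)  # worked without AI access
ai_scores = rng.normal(66, 10, size=100)       # worked with AI access (toy effect)

t_stat, p_value = stats.ttest_ind(control_scores, ai_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```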

Gerlich aims to build on his current work to design future-ready curricula and policy guidelines that help societies use AI purposefully—without losing cognitive sovereignty.



The Human Factor in AI: Time to Reclaim Our Mental Autonomy

While technological capabilities drive us forward, we must pause to ask: what do we sacrifice when machines think for us? Human cognition is complex, nuanced, and fundamentally irreplaceable. Automating routine tasks is beneficial—but automating curiosity, reflection, and analysis may fundamentally shift what it means to “know” something.

Instead of merely asking, “What can AI do for me?” we should ask, “What am I no longer doing because AI is doing it for me?”


Practical Takeaways to Stay Cognitively Engaged in the AI Age

Here’s a simple checklist to fortify your brain while using AI tools:

  • Use AI to extend your ability—not replace it.
  • Always verify facts from independent or primary sources.
  • Regularly give your brain a “workout” through puzzles, writing, or no-AI problem solving.
  • Engage in real-time discussions and debates where AI isn’t mediating.
  • Avoid auto-accepting AI outputs; use interpretive reasoning to check for logic and bias.

Remember: AI can speed up your life. But it’s your job to steer.


Closing Thoughts: AI Tools as Allies, Not Mental Crutches

AI isn’t inherently dangerous to thinking—but how we use it defines the outcome. Like any powerful tool, its benefits come with responsibilities. Engage actively, verify constantly, and think deeply. With the right mindset, AI can illuminate new ideas. But without that discipline, it may dim our most valuable light: our own intellect.


Citations

  • Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
  • Microsoft Research (as referenced in Gerlich, 2025). Findings on AI trust and cognitive offloading among knowledge workers.