February 3, 2026

How AI Is Reshaping Our Cognitive Biases (And Creating New Ones)

AI tools promise to make us smarter decision-makers. But research shows they're also introducing entirely new cognitive biases we've never seen before.

By Tim Raja

When calculators became widespread, people didn't stop making math errors — they started making different math errors. They trusted the calculator even when they'd entered the wrong numbers. The same pattern is emerging with AI, but the stakes are far higher.

Automation Bias: Trusting the Machine

Automation bias — the tendency to favor suggestions from automated systems over contradictory information from non-automated sources — has been studied in aviation since the 1990s. Pilots have been documented deferring to automated flight systems even when their own instruments and visual cues clearly indicated a problem.

Now this bias is appearing everywhere AI touches daily life. A 2025 study from Stanford found that when users were told an AI system recommended a particular medical diagnosis, 67% of participants agreed with it — even when presented with conflicting expert opinions. When the same diagnosis came from a human doctor, only 42% agreed when experts disagreed.

The danger isn't that AI is wrong. It's that we stop thinking critically when AI is involved.

The New Biases

AI Anchoring Effect: When an AI provides an initial suggestion, it anchors our thinking just like any other number — but more strongly, because we attribute greater authority to algorithmic outputs. If an AI pricing tool suggests $99 for a product, teams struggle to deviate from that number even when market data suggests $149 is optimal.

Explanation Satisfaction Bias: AI systems that provide explanations for their recommendations are more persuasive — regardless of whether the explanation is actually correct or complete. Humans are wired to feel satisfied when they receive a "because" explanation, even when the reasoning is circular or incomplete.

Algorithmic Aversion Flip: Research shows a peculiar pattern — people initially over-trust AI (automation bias), but after seeing it make even one mistake, they swing to the opposite extreme and refuse to trust it at all, even when the AI is statistically more accurate than human judgment. This oscillation between blind trust and total rejection prevents the optimal approach: calibrated, context-dependent trust.

Prompt Framing Bias: The way we phrase questions to AI dramatically shapes the answers we receive, yet most users are unaware of this. Ask an AI "What are the risks of this investment?" and you'll get a very different perspective than asking "Is this a good investment?" — even though both questions are about the same decision. Users take the AI's framed response as objective truth.

Traditional Biases That AI Amplifies

Confirmation bias gets turbocharged by AI. When you ask an AI to "find evidence supporting X," it will find compelling evidence — because that's what you asked for. The AI becomes an incredibly efficient confirmation machine, providing articulate, well-sourced arguments for whatever you already believe.

The Dunning-Kruger effect takes on new dimensions. AI tools give novices the ability to produce expert-looking outputs, creating the illusion of competence. Someone who has never written code can produce a working application; someone who has never studied law can generate legal-sounding arguments. The gap between "looks competent" and "is competent" widens.

How to Protect Yourself

  1. Form your own opinion first. Before consulting AI, write down your initial assessment. This anchors you to your own thinking rather than the AI's.
  2. Ask the AI to argue against itself. After getting a recommendation, ask: "Now give me the strongest arguments against this recommendation."
  3. Track AI accuracy. Keep a simple log of AI recommendations and actual outcomes. This builds calibrated trust over time.
  4. Use AI as one input among many. Treat it like a smart colleague with opinions — valuable but not infallible.
  5. Question the framing. Rephrase your question multiple ways and see if the answer changes. If it does, the original framing was influencing the output.

AI is the most powerful thinking tool ever created. But like all powerful tools, it requires wisdom to use well. The goal isn't to reject AI or blindly accept it — it's to develop a calibrated, context-aware relationship with algorithmic intelligence.

About the Author

Tim Raja is the founder of OverThinQ.ai, an AI-powered decision intelligence platform, and a former executive at one of the Big 4 consulting firms. He writes about cognitive bias, behavioral science, and the future of human decision-making. More of his writing can be found at overthinq.ai/blog.

