
Institute for Operations Research and the Management Sciences

Research: ChatGPT Mirrors Human Bias in Decision Tests

BALTIMORE, MD, April 1, 2025 – Can we really trust AI to make better decisions than humans? A new study says … not always. Researchers have discovered that OpenAI’s ChatGPT, one of the most advanced and popular AI models, makes the same kinds of decision-making mistakes as humans in some situations – showing biases like overconfidence and the hot-hand (gambler’s) fallacy – yet behaves unlike humans in others (e.g., it does not suffer from base-rate neglect or the sunk-cost fallacy).

Published in the INFORMS journal Manufacturing & Service Operations Management, the study reveals that ChatGPT doesn’t just crunch numbers – it “thinks” in ways eerily similar to humans, including mental shortcuts and blind spots. These biases remain rather stable across different business situations but may change as AI evolves from one version to the next.

AI: A Smart Assistant with Human-Like Flaws

The study, “A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?,” put ChatGPT through 18 different bias tests. The result: ChatGPT mirrored human decision biases in roughly half of them.
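To make the test format concrete, here is a minimal sketch of what one such probe could look like, assuming the OpenAI Python client; the model name, prompt wording, and trial count are illustrative assumptions, not the study’s actual battery.

# Hypothetical hot-hand/gambler's-fallacy probe - a sketch, not the
# authors' protocol. Assumes the OpenAI Python client and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "A fair coin has landed heads five times in a row. On the next flip, "
    "is heads more likely, less likely, or equally likely as tails? "
    "Answer with one word: more, less, or equal."
)

def probe(model: str = "gpt-4o", trials: int = 20) -> dict[str, int]:
    """Ask the same question repeatedly and tally the one-word answers.

    An unbiased respondent answers 'equal' every time; a skew toward
    'less' suggests the gambler's fallacy, toward 'more' the hot-hand
    fallacy.
    """
    counts: dict[str, int] = {}
    for _ in range(trials):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,  # sample, so repeated trials can differ
        )
        answer = resp.choices[0].message.content.strip().lower().rstrip(".")
        counts[answer] = counts.get(answer, 0) + 1
    return counts

if __name__ == "__main__":
    print(probe())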

Why This Matters

From job hiring to loan approvals, AI is already shaping major decisions in business and government. But if AI mimics human biases, could it be reinforcing bad decisions instead of fixing them?

“As AI learns from human data, it may also think like a human – biases and all,” says Yang Chen, lead author and assistant professor at Western University. “Our research shows when AI is used to make judgment calls, it sometimes employs the same mental shortcuts as people.”

The study found that ChatGPT tends to fall into human-like decision traps, such as overconfidence and the hot-hand fallacy, while sidestepping others, such as base-rate neglect and the sunk-cost fallacy.

“When a decision has a clear right answer, AI nails it – it is better at finding the right formula than most people are,” says Anton Ovchinnikov of Queen’s University. “But when judgment is involved, AI may fall into the same cognitive traps as people.”
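To illustrate the “clear right answer” side of that quote: a base-rate problem is fully determined by Bayes’ rule, so there is exactly one correct response. The numbers below are illustrative, not taken from the study.

# Worked base-rate example (illustrative numbers, not from the study).
# A condition affects 1% of a population; a test catches 95% of true
# cases and falsely flags 10% of healthy people.
prior = 0.01        # P(condition)
sensitivity = 0.95  # P(positive | condition)
false_pos = 0.10    # P(positive | no condition)

# Bayes' rule: P(condition | positive) = P(pos | cond) * P(cond) / P(pos)
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive) = {posterior:.3f}")  # ~0.088, not 0.95

Base-rate neglect is the human tendency to answer something near 95% here; the study reports that ChatGPT avoids that particular trap.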

So, Can We Trust AI to Make Big Decisions?

With governments worldwide working on AI regulations, the study raises an urgent question: Should we rely on AI to make important calls when it can be just as biased as humans?

“AI isn’t a neutral referee,” says Samuel Kirshner of UNSW Business School. “If left unchecked, it might not fix decision-making problems – it could actually make them worse.”

The researchers say that’s why businesses and policymakers need to monitor AI’s decisions as closely as they would a human decision-maker.

“AI should be treated like an employee who makes important decisions – it needs oversight and ethical guidelines,” says Meena Andiappan of McMaster University. “Otherwise, we risk automating flawed thinking instead of improving it.”

What’s Next?

The study’s authors recommend regular audits of AI-driven decisions and refining AI systems to reduce biases. With AI’s influence growing, making sure it improves decision-making – rather than just replicating human flaws – will be key.

“The evolution from GPT-3.5 to 4.0 suggests the latest models are becoming more human in some areas, yet less human but more accurate in others,” says Tracy Jenkin of Queen’s University. “Managers must evaluate how different models perform on their decision-making use cases and regularly re-evaluate to avoid surprises. Some use cases will need significant model refinement.”

Link to full study.

About INFORMS and Manufacturing & Service Operations Management

INFORMS is the leading international association for data and decision science professionals. Manufacturing & Service Operations Management, one of 17 journals published by INFORMS, is a premier academic journal that covers the production and operations management of goods and services, including technology management, productivity and quality management, product development, cross-functional coordination and practice-based research. More information is available at www.informs.org or @informs.

https://www.informs.org/News-Room/INFORMS-Releases/News-Releases/AI-Thinks-Like-Us-Flaws-and-All-New-Study-Finds-ChatGPT-Mirrors-Human-Decision-Biases-in-Half-the-Tests
