
Vrije Universiteit Brussel

AI Bias Exposed: Study Shows Tech May Widen Inequality

The researchers challenge the widespread belief that AI-induced bias is a technical flaw, arguing instead that AI is deeply influenced by societal power dynamics. It learns from historical data shaped by human biases, absorbing and perpetuating discrimination in the process. This means that, rather than creating inequality, AI reproduces and reinforces it.

“Our study highlights real-world examples where AI has reinforced existing biases,” Prof. Bircan says. “One striking case is Amazon’s AI-driven hiring tool, which was found to favor male candidates, ultimately reinforcing gender disparities in the job market. Similarly, government AI fraud detection systems have wrongly accused families, particularly migrants, of fraud, leading to severe consequences for those affected. These cases demonstrate how AI, rather than eliminating bias, can end up amplifying discrimination when left unchecked. Without transparency and accountability, AI risks becoming a tool that entrenches existing social hierarchies rather than challenging them.”

AI is developed within a broader ecosystem where companies, developers, and policymakers make critical decisions about its design and use. These choices determine whether AI reduces or worsens inequality. When trained on data reflecting societal biases, AI systems replicate discrimination in high-stakes areas like hiring, policing, and welfare distribution. Professor Bircan’s research stresses that AI governance must extend beyond tech companies and developers. Given that AI relies on user-generated data, there must be greater transparency and inclusivity in how it is designed, deployed, and regulated. Otherwise, AI will continue to deepen the digital divide and widen socio-economic disparities.

Despite the challenges, the study also offers hope. “Rather than accepting AI’s flaws as inevitable, our work advocates for proactive policies and frameworks that ensure AI serves social justice rather than undermining it. By embedding fairness and accountability into AI from the start, we can harness its potential for positive change rather than allowing it to reinforce systemic inequalities,” Prof. Bircan concludes.

Reference:

Bircan, T., &amp; Özbilgin, M. F. (2025). Unmasking inequalities of the code: Disentangling the nexus of AI and inequality. Technological Forecasting and Social Change, 211, 123925. https://doi.org/10.1016/j.techfore.2024.123925
