Public Urged to Question Tech’s Good AI Claims

While there has been much negative discussion about AI, including the possibility that it could one day take over the world, the public is also being bombarded with positive messages about the technology and what it can do.

This “good AI” myth is a key tool used by tech companies to promote their products. Yet there’s evidence that consumers are wary of the presence of AI in some products. This means that positive promotion of AI may be putting unwanted pressure on people to accept the use of AI in their lives.

AI is becoming so ubiquitous that people may be losing their ability to say no to using it. It’s in smartphones, smart TVs, smart speakers like Alexa and virtual assistants like Siri. We’re constantly told that our privacy will be protected. But with the personal nature of the data that AI has access to in these devices, can we afford to trust such assurances?

Some politicians also propagate the “good AI” promise with immense conviction, mirroring the messages coming from tech companies.

My current research is partly explained in a new book called The Myth of Good AI. This research shows that the data feeding our AI systems is biased, as it often over-represents privileged sections of the population and mainstream attitudes.

This means that any AI products built without data from marginalised people, or minorities, might discriminate against them. This explains why, for instance, AI systems continue to be riddled with racism, ageism and various forms of gender discrimination.
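The point about skewed training data can be made concrete with a simple audit. The sketch below is a hypothetical illustration, not drawn from the book or any real product: the dataset, the “age_group” field and the under-representation threshold are all assumptions chosen for the example. It compares each group’s share of a training set with its share of the wider population and flags groups that barely appear in the data.

```python
# Minimal sketch of a representation audit on a hypothetical training set.
# Field names, population shares and the 50% threshold are illustrative only.
from collections import Counter

def representation_report(records, field, population_share):
    """Compare each group's share of the data with its share of the
    population, flagging groups that are badly under-represented."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "data_share": round(observed, 3),
            "population_share": expected,
            # Flag groups whose data share is less than half their
            # population share (an arbitrary illustrative cut-off).
            "under_represented": observed < 0.5 * expected,
        }
    return report

# Toy example: older users make up 30% of the population but only 5% of the data.
records = ([{"age_group": "18-34"}] * 60
           + [{"age_group": "35-64"}] * 35
           + [{"age_group": "65+"}] * 5)
print(representation_report(records, "age_group",
                            {"18-34": 0.3, "35-64": 0.4, "65+": 0.3}))
```

Run on the toy data, the report flags the “65+” group as under-represented, the kind of gap that, at scale, can feed the discriminatory outcomes described above.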

The speed with which this technology is impinging on our everyday lives makes it very hard to properly assess the consequences. And an approach to AI that is more critical of how it works does not make for good marketing for the tech companies.

Positive ideas about AI and its abilities are currently dominating all aspects of AI innovation. This is partly determined by state interests and by the profit margins of the tech companies.

These are tied into the power structures held up by tech multi-billionaires, and, in some places, their influence on governments. The relationship between Donald Trump and Elon Musk, despite its recent souring, is a vivid manifestation of this.

And so, the public is at the receiving end of a distinctly hierarchical top-down system, from the big tech companies and their governmental enablers to users. In this way, we are made to consume, with little to no influence over how the technology is used. This positive AI ideology is therefore primarily about money and power.

As it stands, there is no global movement with a unifying manifesto that would bring together societies to leverage AI for the benefit of communities of people, or to safeguard our right to privacy. This “right to be left alone”, codified in the US constitution and international human rights law, is a central pillar of my argument. It is also something that is almost entirely absent from the assurances about AI made by the big tech companies.

Yet some of the risks of the technology are already evident. A database compiling cases in which lawyers around the world used AI has identified 157 cases in which false AI-generated information – so-called hallucinations – skewed legal rulings.

Some forms of AI can also be manipulated to blackmail and extort, or create blueprints for murder and terrorism.

Tech companies need to train their algorithms on data that represents everyone, not just the privileged, in order to reduce discrimination. That way, the public is not forced to give in to the consensus that AI will solve many of our problems without proper supervision by society. The ability to think creatively, ethically and intuitively may be the most fundamental faultline between human and machine.

It’s up to ordinary people to question the good AI myth. A critical approach to AI should contribute to the creation of more socially relevant and responsible technology, rather than technology that, as the book also discusses, is already being trialled in torture scenarios.

The point at which AI systems could outdo us in every task is expected to be a decade or so away. In the meantime, there needs to be resistance to this attack on our right to privacy, and greater awareness of just how AI works.

Arshin Adib-Moghaddam does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
