
AI: Scientific Tool or Research Misconduct Catalyst?

In February this year, Google announced it was launching “a new AI system for scientists”. It said this system was a collaborative tool designed to help scientists “in creating novel hypotheses and research plans”.

It’s too early to tell just how useful this particular tool will be to scientists. But what is clear is that artificial intelligence (AI) more generally is already transforming science.

Last year, for example, computer scientists won the Nobel Prize in Chemistry for developing an AI model to predict the shape of every protein known to science. The chair of the Nobel Committee for Chemistry, Heiner Linke, described the AI system as the achievement of a “50-year-old dream”, solving a notoriously difficult problem that had eluded scientists since the 1970s.

But while AI is allowing scientists to make technological breakthroughs that would otherwise be decades away or entirely out of reach, there’s also a darker side to the use of AI in science: scientific misconduct is on the rise.

Academic papers can be retracted if their data or findings are no longer considered valid. This can happen because of data fabrication, plagiarism or human error.

Paper retractions are increasing exponentially, passing 10,000 in 2023. These retracted papers were cited over 35,000 times.

One study found 8% of Dutch scientists admitted to serious research fraud, double the rate previously reported. Biomedical paper retractions have quadrupled in the past 20 years, the majority due to misconduct.

AI has the potential to make this problem even worse.

For example, the availability and increasing capability of generative AI programs such as ChatGPT make it easy to fabricate research.

This was clearly demonstrated by two researchers who used AI to generate 288 complete but fake academic finance papers predicting stock returns.

While this was an experiment to show what’s possible, it’s not hard to imagine how the technology could be used to generate fictitious clinical trial data, modify gene-editing experimental data to conceal adverse results, or serve other malicious purposes.

There are already many reported cases of AI-generated papers passing peer review and reaching publication, only to be retracted later on grounds of undisclosed AI use, some also containing serious flaws such as fake references and deliberately fabricated data.

Some researchers are also using AI to review their peers’ work. Peer review of scientific papers is one of the fundamentals of scientific integrity. But it’s also incredibly time-consuming, with some scientists devoting hundreds of hours a year of unpaid labour. A Stanford-led study found that up to 17% of peer reviews for top AI conferences were written at least in part by AI.
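
The study estimated this at the corpus level rather than by flagging individual reviews, but the intuition behind such estimates can be sketched simply: some words are heavily overrepresented in LLM-generated text. The toy example below is a minimal sketch of that idea, not the study’s actual method; the word list and threshold are illustrative assumptions, not validated values.

```python
# Toy sketch: flag a review for closer human reading if it uses a high rate
# of adjectives often overrepresented in LLM-generated text.
# The word list and the 1% threshold are illustrative assumptions only.
import re

AI_ASSOCIATED = {"commendable", "meticulous", "intricate", "notable", "versatile"}

def ai_word_rate(review: str) -> float:
    """Fraction of words in the review drawn from the flagged vocabulary."""
    words = re.findall(r"[a-z]+", review.lower())
    return sum(w in AI_ASSOCIATED for w in words) / len(words) if words else 0.0

review = "The authors present a commendable and meticulous analysis of the data."
if ai_word_rate(review) > 0.01:
    print("High rate of AI-associated vocabulary; read closely.")
```

A single review tripping such a filter proves nothing; the signal only becomes meaningful in aggregate, which is why the study reports a population-level estimate rather than accusing individual reviewers.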

In the extreme case, AI may end up writing research papers, which are then reviewed by another AI.

This risk compounds the already problematic trend of exponential growth in scientific publishing, while the average amount of genuinely new and interesting material in each paper declines.

AI can also lead to unintentional fabrication of scientific results.

A well-known problem with generative AI systems is their tendency to make up an answer rather than say they don’t know. This is known as “hallucination”.
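
One common form of hallucination is the invented citation: a plausible-looking reference to a paper that does not exist. As a minimal sketch, assuming the Python `requests` library, a draft’s reference list can be screened by checking whether each DOI resolves in the public Crossref API; the second DOI below is a deliberately fabricated example.

```python
# Minimal sketch: screen a reference list for DOIs that Crossref cannot find.
# A missing record does not prove fabrication; it just flags the entry
# for manual checking.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref holds a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

dois = [
    "10.1038/s41586-021-03819-2",     # real DOI (the AlphaFold paper in Nature)
    "10.1234/made.up.citation.2024",  # fabricated example for illustration
]

for doi in dois:
    print(doi, "->", "found" if doi_exists(doi) else "NOT FOUND, check manually")
```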

We don’t know the extent to which AI hallucinations end up as errors in scientific papers. But a recent study on computer programming found that 52% of AI-generated answers to coding questions contained errors, and human oversight failed to correct them 39% of the time.
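
To make concrete the kind of error that can slip past human oversight, consider a classic Python pitfall that looks correct at a glance and is the sort of subtle bug AI assistants can reproduce. This is a hypothetical illustration, not an example from the study: a mutable default argument that silently carries state between calls.

```python
# Hypothetical illustration: a subtle bug that passes casual review.
def collect_measurements(value, results=[]):  # BUG: the default list is
    results.append(value)                     # created once and shared by
    return results                            # every call that omits `results`

print(collect_measurements(1.0))  # [1.0]
print(collect_measurements(2.0))  # [1.0, 2.0] - stale data leaks in

# Fix: create a fresh list on each call.
def collect_measurements_fixed(value, results=None):
    if results is None:
        results = []
    results.append(value)
    return results
```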

Despite these worrying developments, we shouldn’t get carried away and discourage or even chastise the use of AI by scientists.

AI offers significant benefits to science. Researchers have used specialised AI models to solve scientific problems for many years. And generative AI models such as ChatGPT offer the promise of general-purpose AI scientific assistants that can carry out a range of tasks, working collaboratively with the scientist.

These AI models can be powerful lab assistants. For example, researchers at CSIRO are already developing AI lab robots that scientists can speak with and instruct like a human assistant to automate repetitive tasks.

A disruptive new technology will always have benefits and drawbacks. The challenge for the science community is to put appropriate policies and guardrails in place to ensure we maximise the benefits and minimise the risks.

AI’s potential to change the world of science and to help science make the world a better place is already proven. We now have a choice.

Do we embrace AI by advocating for and developing an AI code of conduct that enforces ethical and responsible use of AI in science? Or do we take a backseat and let a relatively small number of rogue actors discredit our fields and make us miss the opportunity?

Jon Whittle works at CSIRO which receives R&D funding from a wide range of government and industry clients.

Stefan Harrer works at CSIRO which receives R&D funding from a wide range of government and industry clients. He is affiliated with IEEE, the New York Academy of Sciences and serves as an advisor to Harvard Medical School.
