Science is drowning in AI spam

Who doesn’t want to talk to a whale or hear their dog tell them a story?

Using AI, scientists think they’re closer to bridging the communication barrier between humans and animals.

Artificial intelligence has decoded the sperm whale’s “phonetic alphabet”, claim scientists. The Seti Institute, which looks for signs of alien life, has even attempted a 20-minute “conversation” with an Alaskan humpback.

And why not? If machine learning, which underpins today’s AI, is good for anything, it’s pattern recognition, sifting for clues in huge amounts of data. This helped sell the idea to today’s policy classes, who lack the scientific or technical understanding to make informed, independent judgments on whether AI will accelerate scientific discovery.

A proof point seemed to be Google DeepMind’s AlphaFold, whose creators won the Nobel Prize for Chemistry in 2024, with Sir Demis Hassabis earning a knighthood along the way.

“They cracked the code for protein’s amazing structures,” said the committee chairman at the time, adding it was “fulfilling a 50-year-old dream”.

As a result, they were able to predict the structures of virtually all of the 200 million proteins that researchers have identified from their amino acid sequences.

But ask scientists, and you get a quite different story. With the advent of generative AI, science is starting to go backwards.

The first to notice were academic journals.

Teams at UC Berkeley and Cornell examined millions of recent papers and found that while submission volumes had risen by up to 50pc in some fields, quality had fallen. The scientific process is being overwhelmed by noise.

“Scientific articles that were mostly automated are of substantially lower quality than human-written papers,” they reported. “The resulting flood of polished but potentially superficial work is making it harder for reviewers, funders, and policymakers to separate worthy papers from unimportant and potentially misleading work. It’s happening to any open system based on trust.”

Publishers are getting half a million automated requests for every one legitimate visitor. Open-source projects are so flooded with bot-generated code that some have shut down outside contributions. Open APIs across the web are being locked down, cutting researchers off from the software applications they rely on, lamented Anil Dash, a web entrepreneur, last week. Wikipedia has just voted to ban the use of all generative AI tools.

Even before the latest bout of AI mania, science had problems. In some fields, such as psychology, the replication crisis revealed thousands of “breakthrough” papers whose results couldn’t be reproduced by others. Thousands of bogus “paper mill” articles were lurking in the corpus, Nature reported in 2021. The mass production of low-quality but plausible text has made such bad work far easier to churn out.

Why not use the power of AI to fight AI spam? Things aren’t that simple: AI assistance creates subtle, insidious problems of its own. A research paper published last month explained some of them.

“Large Language Model (LLM) edits frequently alter the person’s intended conclusions, removing content that makes a particular claim, and editing the essay to be more neutral or positive about the technology of self-driving cars,” it said.

AI-generated reviews have a flattening effect, which over-scores conformity and devalues originality and insight.

“LLMs have begun to change the very criteria that researchers use when evaluating peer-reviewed scientific research.” In other words, institutions will get dumber as they use AI.

As AI-generated material proliferates, so do the mistakes it makes, the “hallucinations”, as they are called.

Another form of degradation occurs when scientists use AI-generated material and then cite it, poisoning the food chain.

Far from ushering in a new age of abundance, LLMs have infected our institutions and critical processes with a kind of brain fungus.

A study from the Massachusetts Institute of Technology, titled Your Brain on ChatGPT, showed that the mental capacities of students who used chatbots dropped off dramatically compared to those who used Google or nothing at all.

The habit fostered what they called “cognitive atrophy” and “cognitive debt” – the brain is just another muscle, and it gets flabby through lack of exercise.

This is confirmed by further studies.

Two professors at the Wharton School of the University of Pennsylvania found that “cognitive surrender” increases the more you use AI. Those keenest on AI’s potential fell fastest. It’s a form of self-lobotomy.

How were hopes raised so high?

In 2008, Chris Anderson, then the editor of Wired magazine, proposed that we could dispense with the scientific method altogether. “Hypothesise, model, test is becoming obsolete,” he wrote.

We now had vast oceans of data, so the answers must be in there; all we needed to do was interrogate them wisely.

It would be like using Google, he said.

His supposition was that the old ways and habits were bad, and only new ways could move us forward. Inductive reasoning would conquer the remaining scientific hurdles. The confidence in AI accelerating scientific discovery today shares the same exuberance.

We forget that, before it was rebranded in the 19th century, what we today call “science” was once called “natural philosophy”.

That reminds us it’s a process that requires careful thought.

Individual creativity, intelligence and intuition are also needed for scientific progress.

AI doesn’t have them, but continues to excel at all the things nobody wants or asks for, destroying institutions based on trust and flattening the world.

“Today’s LLMs are phenomenal at pattern recognition, but they don’t truly understand causality,” Google DeepMind’s Sir Demis himself now acknowledges.

Not only are we not seeing the productivity miracle we were promised, but it isn’t curing cancer either.
