AI slop has already become prevalent across the internet (and in the classroom). But according to multiple reports, artificial intelligence is posing an increasingly serious threat to scientific literature, too. Researchers have uncovered a concerning number of AI-generated papers published in reputable journals, even experts can no longer identify AI-generated “science” images, and the amount of AI slop submitted to publishers will only increase as the technology continues to improve. Let’s take a look at what this means for the future of science.
I’ve worried for some years now about the impact that artificial intelligence will have on science, because many, if not most, research areas are already struggling with nonsense research and flailing peer review, and things are getting worse. The crazy tariff formula of the Trump administration, for example, has widely been suspected to be AI-generated, but I guess that’s okay because the UK tech secretary, too, has used ChatGPT for policy advice. Terrence Howard, the actor who believes that 1 times 1 equals 2, recently put forward a supposed solution to the three-body problem, almost certainly also AI-generated. And I have now received half a dozen “theories of everything” that were, by admission, written with the help of AI.

But that’s only the tip of the iceberg, because AI is about to swamp the science literature. In January 2025, a group of economists demonstrated that current generative AI can rapidly mass-produce finance papers. They let GPT and Claude analyse stock data and generated 288 complete papers. These were not AI gibberish papers; the models ran a reasonable protocol that analysed real data. You could do the same in, say, cosmology or particle physics: take some existing data set and analyse whether it supports some dark matter model or the modified gravities of the day. You could generate thousands of papers about this at the snap of a finger. And people will do it, because it’s how they earn money. We know that even the early versions of GPT and Claude were used to produce junk papers, which were fairly easy to track just by searching for phrases like “as of my last knowledge update.” Just in January, a group of researchers reported that such AI-generated papers are most common for research on… fish. If there are any fish in the audience, please let us know what’s up with that. But generative AI has advanced, and there are no obvious text markers any more.
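The phrase-search trick mentioned above is simple enough to sketch in a few lines. This is an illustrative example only; the phrase list below is a small sample of leftovers that early chatbot versions were known to produce, not an exhaustive or official detection method.

```python
# Illustrative sketch: flag text containing telltale phrases that early
# chatbot versions often left behind. The phrase list is an example,
# not a complete or authoritative detector.
import re

TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
    "regenerate response",
]

def find_telltale_phrases(text: str) -> list[str]:
    """Return the telltale phrases found in `text`, case-insensitively."""
    lowered = re.sub(r"\s+", " ", text.lower())  # normalise whitespace
    return [p for p in TELLTALE_PHRASES if p in lowered]

paper = ("The results support the model. As of my last knowledge "
         "update, no contradicting data had been published.")
print(find_telltale_phrases(paper))  # → ['as of my last knowledge update']
```

Of course, such string matching only catches the sloppiest cases, which is exactly why the disappearance of these markers from newer models is a problem.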
Also in January, the company Proofig, which specializes in detecting image manipulation in scientific publications, reported that AI is now so good that, in a test they did, most researchers could not tell AI-generated scientific images apart from real ones. This is a huge problem in research areas where images are used more commonly than data series, such as most areas of biology and some areas of materials science. There are other reasons we can be pretty sure AI fraud is getting increasingly common, for example the strange story of “vegetative electron microscopy.” What is “vegetative electron microscopy”, you may ask? It isn’t anything. Yet this expression has now appeared in more than twenty scientific papers as a label for images taken by scanning electron microscopes. Probably. If the images are even real. The phrase seems to have first emerged from a 1959 two-column article, where “vegetative cell wall” appeared in one column and “electron microscopy” in the other. If you read straight across the columns, you get “vegetative electron microscopy.” Once the phrase existed, it was probably picked up by AI, helped by the fact that in Farsi the word for “scanning” is almost exactly the same as the word for “vegetative.” It has now appeared in papers published by journals from major publishers. You might say these are just some funny anecdotes, but we can’t dismiss this so easily. These are the cases we happen to know of because there was something weird about them. But chances are that by now the vast majority of AI-generated scientific content, reasonable or fraudulent, goes undetected. Yes, there is an increasing number of people working on identifying this stuff, but, well, that is running into problems, too. PubPeer is a platform where researchers can post papers and discuss potential problems, widely used especially in the life sciences.
The problem is that, much like with fake reviews, you can abuse this system to make perfectly fine papers look suspicious. Sylvain Bernès, a researcher investigating scientific misconduct, recently stumbled into this. He received an email saying “Hey professor, I don’t like this group of researchers. Can you help me retract these papers… I will be happy if the first author loses his job.” And, after Bernès didn’t respond quickly enough, a follow-up: “I am waiting for your response. If not, I will put all your papers on PubPeer in order to obtain their retractions.” This is like leaving fake one-star reviews to make your competitor’s restaurant look bad.

Finally, you might remember that a few months ago we talked about the cat with academic citations, which were generated by uploading fake papers to ResearchGate that were then indexed by Google Scholar. Yes, that cat. Possibly tenured by now. Well, in a preprint that appeared in March, a group reports that they encountered a cluster of authors who boosted their citation count with this very method. In other news, ResearchGate recommended that Geoffrey Hinton check out a paper that he wrote with Yann LeCun. This collaboration was news to both of the supposed authors.

What all this means is that we’re nearing a point where science will drown in AI slop. This may not be entirely a bad thing, because it might force scientists to finally do something about their publish-or-perish culture. But the way it currently looks, the future of science isn’t about finding truth, just about generating statistically plausible sentences about it. Artificial intelligence, I believe, is the beginning of a new phase of human civilization. If you want to learn more about how it really works, check out the courses on Brilliant, because I found them to be really useful. Brilliant offers courses on a large variety of topics in science, computer science, and mathematics.
All their courses have interactive visualizations and come with follow-up questions. Whether you want to learn to think like an engineer, brush up your knowledge of algebra, or learn coding in Python, Brilliant has you covered. It’s a fast and easy way to learn, and you can do it whenever and wherever you have the time. And they’re adding new courses each month. I even have my own course on Brilliant that’s an introduction to quantum mechanics. It’ll help you understand what a wave function is and what the difference is between superposition and entanglement. It also covers interference, the uncertainty principle, and Bell’s theorem. And after that you can continue, maybe, with a course on quantum computing or differential equations. And of course, I have a special offer for viewers of this channel. If you use my link brilliant.org/sabine or scan the QR code, you’ll get to try out everything Brilliant has to offer for a full 30 days, and you’ll get 20% off the annual premium subscription. So go and check this out. Thanks for watching, see you tomorrow.