Totally A Thing

Golf courses, humans and AI data centres?



Data centres for AI use huge amounts of electricity and cooling water. But there are folks who want to muddy the waters about these facts, and so they are using AI to generate junk science. As I say in the video, quibbling over water that is “consumed” versus “withdrawn” is pointless argumentation used by AI boosters to cloud the facts. When climate change denialists wanted to attack clean energy initiatives they took aim at Toyota’s Prius hybrid, saying the Sherman-tank-like Hummer was “cleaner”, and they used a junk science report as the vehicle. Of course the point is not whether the Prius is cleaner (it is), but that creating fake culture wars over cars like this is a way to distract from the sensible advancement of facts.

Hummer vs Prius is now Data Centre Water & Power Use

Yes, data centres stress the water supply; but no, your individual ChatGPT app use is not going to empty a lake. If you’re an AI vendor’s PR company and you can influence those talking sense about AI data centre impacts into defending a dumb position that is irrelevant to environmental regulation of AI companies, then you’ve won your battle. Pick some fringe junk science study that would normally sink without a trace and arrange for it to get inflammatory press.

* 6 months ago I posted about the environmental impact of AI data centres

* Someone posted in response “LOL, No” - and linked this paper:

* “The carbon emissions of writing and illustrating are lower for AI than for humans.” It’s in Nature, a prestigious journal… so case closed?

* Trouble is, the whole thing is junk science, as far as I can tell.

* In the arXiv PDF of the paper’s pre-print, the authors admit they used ChatGPT to generate it.

* That admission is gone from the version published on the Nature platform.

* Junk science from paper mills and from eager grad students wanting to burnish their careers with ChatGPT-generated pre-prints has been reported, even in the mainstream news, as a huge problem for science.

* The “LOL” guy linked to the “Nature” version of the junk science report, which is easily mistaken for the journal Nature but is in fact just one platform of the publisher Springer Nature.

* It’s not rigorously peer reviewed, and you can “pay to play” if you want something in Scientific Reports.

* But surely it can’t be junk science if it’s published in Nature? It’s peer reviewed!

* Folks on PubPeer who try to do rigorous academic peer review are inundated with bad-faith, AI-generated “papers” and despair of correcting the junk science that finds its way into Scientific Reports. Which is not Nature.

* They report “tortured phrases” such as “profound learning” instead of “deep learning”. These make clear that the papers’ purported authors didn’t write them, but outsourced the work to generative AI.

* The peer reviewers also point out corrections required for hopelessly imprecise terms such as “AI” where the text should in fact read “machine learning”. So the articles are junk, but worse, they are AI-generated junk.

So Scientific Reports can in general publish junk, but what is the relevance to the “carbon emissions of humans” paper?

* Here’s a section from the arXiv version of this paper:

Despite these current and potential future forms of societal transformation and harm, profound benefits to society could accrue through the use of AI.

AI? What kind of AI? Writing and illustrating mean generative AI: LLMs and image generators. Imprecise terms in a “scientific paper” are a giant red flag. And “profound benefits”, like what? The footnotes 26, 27 and 28 don’t point to page numbers.

Are Some People Actively Trying to Rewrite Ground Truth?

In a 2025 article Nick McGreivy, a Princeton PhD, reported on a trend noted by many others: AI “science” wasn’t working. Even Google DeepMind, who trumpeted finding new materials with AI, later admitted the findings were mostly junk and not new materials at all. AI papers had “data leakage” problems, and the results don’t replicate.

But in 2023, according to an article on the University of California, Irvine website, these folks pictured below all got together and decided, golly gee, to see if they could write scientific academic papers using ChatGPT.

Tomlinson, whose name appears first on the “carbon” paper, has the computer science credentials, but Torrance is a lawyer. Black is a professor under Tomlinson at UC Irvine, working in computer science, with an impressive resume of writing about Harry Potter fan fiction and education, while Computer Science Professor Don Patterson is very interested in how AI can be leveraged to seed fake news by publishing fake scientific papers.

Patterson also has a blockchain startup. I don’t mean to make light of their credentials (especially Black’s, as any woman who’s gotten to her level in AI deserves credit), but the whole thing has a playful air that doesn’t fit with how seriously everyone has taken their paper.

What the Heck is Going on with AI Science?

In this thread back in June I commented on the “carbon” paper, and in it I linked a most thorough, data-driven peer review of the Tomlinson paper. The numbers it quoted came from an LCA toolset, it linked to a GitHub repo showing the methodology, and it showed order-of-magnitude differences in the consumption figures:

Weirdly, not only is this piece from Rockpool.tech gone, the site is deregistered as a domain name. To make matters worse, the Internet Archive is down and I cannot find any archive of the Rockpool work. It’s just wiped off the face of the internet. I’ll update this if I’m later able to locate it.

The Carbon Emissions Paper is not LCA

There are a number of other damning critiques of the “carbon emissions” paper. Professor Stefan Pauliuk of the University of Freiburg, an expert in Life Cycle Analysis of environmental impacts, called the paper worse than a “student term paper due at the end of a two week block course on LCA”. He tears into the analysis of the human carbon outputs as:

a time-based downscaling [of] a person’s average annual footprint to the one hour of writing a page of text is not appropriate, as this footprint includes … things that are clearly not attributable to the writing and painting process

In other words, it’s about as accurate as any ChatGPT-generated student term paper. Weirdly, the UC Irvine article and the notes at the end of the arXiv PDF express far more worry about avoiding unintentional plagiarism than about the accuracy of the content. A sketch of the downscaling arithmetic follows below.
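
To make Pauliuk’s point concrete, here is a minimal sketch of that downscaling arithmetic in Python. Every figure is a rough, illustrative assumption of mine, not a number taken from the paper:

```python
# A minimal sketch of the "time-based downscaling" Pauliuk criticises.
# Every figure here is an illustrative assumption, not data from the paper.

US_TOTAL_CO2_TONNES = 5.0e9  # assumed: rough annual US CO2 emissions (tonnes)
US_POPULATION = 330e6        # assumed: rough US population
HOURS_PER_YEAR = 365 * 24

# Step 1: divide a whole country's emissions by its population...
per_capita_annual_t = US_TOTAL_CO2_TONNES / US_POPULATION  # ~15 t CO2/year

# Step 2: ...then downscale that annual footprint to a single hour and
# attribute the whole hour to "writing one page of text".
per_hour_kg = per_capita_annual_t * 1000 / HOURS_PER_YEAR  # ~1.7 kg CO2/hour

print(f"per-capita annual footprint: {per_capita_annual_t:.1f} t CO2")
print(f"claimed footprint of writing one page: {per_hour_kg:.2f} kg CO2")

# The flaw: that hourly figure bundles in heating, driving, food production
# and everything else in a person's life. None of it disappears if ChatGPT
# writes the page instead, so none of it is attributable to the writing.
```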

The fakery around Hummer vs Prius was exactly this: redrawing the boundaries of the analysis along lines that completely warp the figures. Pauliuk points out how bogus this is in the case of the “carbon of AI vs humans” paper.

The Paper’s Data is Doubtful

Since the paper cites random articles on Medium as authority for its data, I’m going to do the same: here’s an article that points out how the framing of the carbon calculations is wrong. The paper’s data is based on figures from a Medium article by Chris Pointon and calculated out from there: see all the “derived from above” entries, then the link to Pointon’s Medium article.

The point of this is not that Pointon’s data is wrong: it’s just that OpenAI is not releasing any data about its carbon footprint. No-one knows what it is. And bogus math obtained by dividing the total carbon of the whole USA by the US population (exactly the arithmetic sketched above) is not helping clarity.

Don’t Use ChatGPT for Science - It Doesn’t Work

* As above with McGreivy’s article, you cannot generate science with a large language model and expect it to hold up. The authors of the “carbon” paper say they edited it heavily, but they also say they regenerated a whole new draft. Basing all your data off a Medium article seems very ChatGPT to me.

* Google is actually doing some proper LCA on data centre resource usage: look there instead of at dodgy ChatGPT-generated papers. Google’s figures will not be very representative of ChatGPT, though, since the hardware is different: Google has its own TPUs, while ChatGPT runs on Microsoft Azure and AWS, so its usage is likely a lot higher.

* If you want a good informal video explainer, the ABC has a great video on how our individual prompts to generative AI are not a big user of water, but the data centres are contributing to water stress and are a problem.

* The point of my article and my video rant above is that the astroturfers have arrived with their junk-science generation toolkit, and that in itself is a sign that we’ve turned a corner in the struggle to get AI vendor companies to be decent citizens of planet Earth and to be regulated sensibly for everyone’s safety.

Conclusion

Beware of junk science, especially when it’s a distraction from the real issues. Arguments about how many bottles of water your prompt uses are a red herring. As I previously reported, we don’t want new AI data centres stressing water and power delivery. And being drawn into fake arguments about fake science is just the playbook of the climate denialists being run again in the AI age.




Totally A Thing, by Sarah Smith