I have already asked the shoggoths to search for me, and it would probably represent a duplication of effort on your part if you all went off and asked LLMs to search for you independently.
KEVIN: Well, I'm glad. We didn't intend it to be an AI focused podcast. When we started it, we actually thought it was going to be a crypto related podcast and that's why we picked the name, Hard Fork, which is sort of an obscure crypto programming term. But things change and all of a sudden we find ourselves in the ChatGPT world talking about AI every week.
One of the world’s largest academic publishers is selling a book on the ethics of artificial intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.
Ben Williamson, editor of the journal Learning, Media and Technology:
Checking new manuscripts today, I reviewed a paper attributing to me 2 papers I did not write. A daft thing for an author to do, of course. But, intrigued, I web searched one of the titles and that's when it got real weird... So this was the non-existent paper I searched for:
Williamson, B. (2021). Education governance and datafication. European Educational Research Journal, 20(3), 279–296.
But the search result I got was a bit different...
Here's the paper I found online:
Williamson, B. and Piattoeva, N. (2022) Education Governance and Datafication. Education and Information Technologies, 27, 3515-3531.
Same title but now with a coauthor and in a different journal! Nelli Piattoeva and I have written together before but not this...
And so I checked out Google Scholar. Now on my profile it doesn't appear, but somehow on Nelli's it does and ... and ... omg, IT'S BEEN CITED 42 TIMES, almost exclusively in papers about AI in education from this year alone...
Which makes it especially weird that in the paper I was reviewing today the precise same, totally blandified title is credited to a different journal with the coauthor stripped out. Is a new fake reference being generated from the last?...
I know the proliferation of references to non-existent papers, powered by genAI, is getting less surprising and shocking but it doesn't make it any less potentially corrosive to the scholarly knowledge environment.
As is typical for educators these days, Heiss was following up on citations in papers to make sure that they led to real sources — and weren’t fake references supplied by an AI chatbot. Naturally, he caught some of his pupils using generative artificial intelligence to cheat: not only can the bots help write the text, they can supply alleged supporting evidence if asked to back up claims, attributing findings to previously published articles. [...] That in itself wasn’t unusual, however. What Heiss came to realize in the course of vetting these papers was that AI-generated citations have now infested the world of professional scholarship, too. Each time he attempted to track down a bogus source in Google Scholar, he saw that dozens of other published articles had relied on findings from slight variations of the same made-up studies and journals. [...] That’s because articles which include references to nonexistent research material — the papers that don’t get flagged and retracted for this use of AI, that is — are themselves being cited in other papers, which effectively launders their erroneous citations.
ACM is now showing an AI “summary” of a recent paper of mine on the DL instead of the abstract. As an author, I have not granted ACM the right to process my papers in this way, and will not. They should either roll back this (mis)feature or remove my papers from the DL.
Or, since we already know that it's insipid fashtech with the cortical impact of moonshine, we could... not do that. Instead of wasting carbon on a joke about failing to synthesize a joke, maybe pet a cat? Drink a hot cocoa? Sing along to the "oh oh oh"s in "Sweet Caroline"?
Merriam-Webster’s human editors have chosen slop as the 2025 Word of the Year. We define slop as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” All that stuff dumped on our screens, captured in just four letters: the English language came through again.
Large Language Models are useless for linguistics, as they are probabilistic models that require a vast amount of data to analyse externalized strings of words. In contrast, human language is underpinned by a mind-internal computational system that recursively generates hierarchical thought structures. The language system grows with minimal external input and can readily distinguish between real language and impossible languages.
I search my name on a regular basis, not only because I am an ego monster (although I try not to pretend that I’m not) but because it’s a good way for me to find reviews, end-of-the-year “best of” lists my book might be on, foreign publication release dates, and other information about my work that I might not otherwise see, and which is useful for me to keep tabs on. In one of those searches I found that Grok (the “AI” of X) attributed to one of my books (The Consuming Fire) a dedication I did not write; not only have I definitively never dedicated a book to the characters of Frozen, I also do not have multiple children, just the one.
Revealed: World's shittiest "tag yourself" meme