Misinformation Does Not Need Large Language Models

Tags: research, musings


One of my favourite pastimes involves making notes about some controversial topic and then revisiting them some months later. This is often a nice way to practise stoicism: many times, what looked like a big storm turns out to be nothing more than air disturbed by a mayfly.

A recent topic worth reconsidering is the misinformation brought on by large language models. Last year, some people demanded a moratorium on large-scale AI development; some even went so far as to call for the bombing of data centres! The proponents of these ideas claimed that the AI Apocalypse was nigh and that humanity was at stake. And even if not, so they claimed, we would see a new age of misinformation, with the internet essentially flooded with bad AI-generated content.1

Now, here we are, almost a year later. Society, so far, has endured, and AI has not killed us yet. I understand that AI, in particular large language models, can easily lead to harm, but in at least one respect, namely misinformation, things are not as problematic as they seemed. As nerds, we often underestimate the efficacy of simple solutions. This is wonderfully illustrated in an xkcd cartoon on password security. I feel seen here, since I also used to believe that good cryptography would protect me from bad state actors. In reality, of course, the cartoon gets it right: a bad state actor would not care about my well-being, so they might as well punch the password out of me.

With misinformation, it seems to be similar. I recently got access to the ‘Community Notes’ feature of X, which gives me the option to rate notes or write my own, and so far, the pattern is pretty clear. An immense amount of misinformation comes from the same old sources: extremist media with a clear agenda, untrustworthy news outlets, and the like. It seems to me that they do not even need a lot of badly-written and obviously-biased articles, because, and here you have to imagine me putting on my arrogant academic hat, the people consuming this content do not require many articles to consolidate their faith in certain positions. That’s the beauty of confirmation bias for you.

Thus, amidst the anti-vaxxers, anti-maskers, deep-state believers, and lovers of conspiracies, a little misinformation actually goes a long way. I doubt that this will change much over time, human nature being what it is. There are things to be worried about when it comes to new technologies, but misinformation, for once, is not one of them; unfortunately, we have that one covered pretty well on our own.


  1. I think we humans probably will remain the masters of creating crap, but what do I know? ↩︎