Most AI Interactions are Like Junk Food

Tags: research, musings


Large language models have taken the public and academia by surprise. Almost every day, someone finds a new thing they can (or cannot) do reliably. Setting aside all questions about ethics and other aspects, I want to briefly comment on the thing that irks me most about the general discourse around such models: LLMs are like junk food.

Yes, their feats are great, and it is kind of funny to rewrite texts so that they sound more Shakespearean and all that jazz. But I draw a hard line at going beyond that. I do not believe for a second that a large language model in its current[1] form is suited to serve as some kind of substitute for actual human interaction.[2]

Instead, I liken the current models to junk food: you can eat it, and it will fill you up for a while, but over time, it is not good for you in many respects. Current large language models are like that as well. If you use them for prolonged periods of time to replace humans or human contact, it will ultimately be detrimental to you. The main reason for this is that the models are set up to please you: there is no consciousness[3] in there, and the models have no concept of self, unlike many other species.

A large language model will not challenge you to change your beliefs about a certain issue, it will not confront you with different ways of viewing the world, and, most importantly, it will not call out a ‘BS statement’ when you make one. If you are lucky enough to have great friends, this is precisely what they will do: give you opportunities to grow and become a better version of yourself. Just as proper food serves as nourishment and does you good in the long run, so, too, does hanging out with real human beings.

Have a healthy AI diet, until next time!


  1. My opinion might change in the future, assuming some major advances in the capabilities of such models. ↩︎

  2. Coming from an introvert like me, this may sound like a ridiculous statement, but please bear with me for a while! ↩︎

  3. At least in the current models, but I frankly do not see how stacking more layers will suddenly lead to more emergent behaviours. Then again, I am not a neuroscientist, so take my claims with a big grain of salt. ↩︎