Consciousness Is Overrated Anyway (for AI)

Tags: musings

I normally have my ultracrepidarian tendencies under control, but when it comes to the whole “Is AI conscious?” debate, I cannot resist chiming in with some half-baked ideas, misreadings of the neuroscience literature, and a few confused thoughts. Buckle up, here we go!

TL;DR: For assessing the dangers of AI, a discussion about its consciousness (or lack thereof) is not required and only serves to muddy the waters.

The Purported Problem

As soon as the first large language models had been rolled out to a select set of users, some people started to worry about AIs having reached consciousness. You might remember the story of Blake Lemoine, who believed that an AI model he was working on had become conscious. While Blake’s beliefs have been unfairly ridiculed, he was merely one of the first people to raise the topic of consciousness; the list goes on, as a recent BBC article explains. There is even a preprint on the problem of AI welfare, coauthored by none other than David Chalmers, one of the leading figures in consciousness research, so clearly there is something going on here!

The people involved in the consciousness debate seem to fall into two camps: The first one (and this is the one that Chalmers appears to subscribe to) is worried about the moral aspects of conscious AI systems. This is the camp whose beliefs and concerns I absolutely understand and share. If we ever were to create an AI that is conscious, we had better have some answers in stock concerning its personhood, its rights, and also its obligations.1 The second camp, on the other hand, is more concerned with the immediate dangers of conscious or sentient2 AIs. It is here that I see a problem with the discourse, since “consciousness” and “danger to the human species” are two orthogonal concepts, and linking them is bound to lead to epistemological confusion. Given how charged the topic of AI is, this should best be avoided.

What Is Consciousness Anyway?

Before we continue any further, here is a working definition of what consciousness entails:

Consciousness is the subjective experience, i.e., the perceived quality of being, of an entity. A conscious entity X is one for which we may ask “What is it like to be X?”

Thomas Nagel, for example, famously asked What Is It Like to Be a Bat?, concluding that some aspects of being a bat must remain inaccessible to humans: at best, we can imagine what it would be like for us to behave like a bat, not what it is like for the bat itself.

Now dial that up to eleven and you can see that, when talking about the consciousness of AI, we may easily bite off more than we can chew.

Intelligence Without Consciousness

Luckily, it is relatively easy to remove consciousness from the whole debate. Even if you do not “believe” in philosophical zombies, there is no compelling need for consciousness to arise in an intelligent system. Indeed, it is entirely possible to imagine a new race of beings that are intelligent but lack any properties of consciousness, such as self-awareness; just take the concept of a hive mind, for example. We do not even have to venture into the territory of science fiction and may just stick with ants on Earth: Each individual ant has minimal capacity for information processing, and presumably no consciousness, but at the level of a colony, ants exhibit problem-solving skills such as finding short paths to food, as the sketch below illustrates. These skills do not suddenly make the colony conscious, though.
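To make the ant example concrete, here is a minimal sketch of pheromone-based path selection, loosely in the spirit of ant colony optimization. The two-path world, the parameter values, and the update rule are illustrative simplifications of my own, not a faithful model of real ant behavior:

```python
# Toy model: ants repeatedly choose between two paths; pheromone
# evaporates over time and is deposited in proportion to path quality.
# All parameters here are illustrative, not drawn from the literature.
import random

PATHS = {"short": 1.0, "long": 2.0}          # path lengths
pheromone = {name: 1.0 for name in PATHS}    # start with no preference
EVAPORATION = 0.1                            # pheromone decay per round
N_ANTS = 100
N_ROUNDS = 50

for _ in range(N_ROUNDS):
    # Each ant picks a path with probability proportional to pheromone.
    choices = random.choices(
        population=list(PATHS),
        weights=[pheromone[p] for p in PATHS],
        k=N_ANTS,
    )
    # Evaporation: old information slowly fades.
    for p in pheromone:
        pheromone[p] *= 1.0 - EVAPORATION
    # Deposit: shorter paths receive more pheromone per ant,
    # since ants traverse them faster.
    for p in choices:
        pheromone[p] += 1.0 / PATHS[p]

share = pheromone["short"] / sum(pheromone.values())
print(f"Share of pheromone on the short path: {share:.2f}")
```

After a few dozen rounds, almost all pheromone sits on the short path: the colony “solves” the shortest-path problem, although no individual ant represents the problem, let alone experiences solving it.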

Extrapolating from here, I have absolutely no problem accepting the premise that at some point, we will create AIs that exhibit strong traits of intelligence.3 I also accept that such AIs could pose a danger to us. Beyond the standard paperclip-maximizer thought experiment, they might also just be opposed to consciousness altogether, considering it a waste of precious resources.

Wait, What?

Of course, I am perfectly happy to be a conscious entity, and I appreciate everything that entails: the enjoyment of “beauty” for its own sake, some navel-gazing (like this blog post!), and many more things.4 I also firmly believe that consciousness should be taken into account when discussing morality, in particular when it comes to our treatment of other species.

At the same time, given that intelligence without consciousness is entirely plausible, I am worried that a non-conscious intelligence might take umbrage at humanity because it sees no value in “wasting” computational cycles on the self-referential machinery required to emulate a conscious experience.

Put differently: We humans tend to be introspective and always on the lookout for narrative structure in events; communication, after all, is a defining trait of our species. From the point of view of a non-conscious intelligence, however, the additional resources required for this type of communication might seem unnecessary. Hence, a non-conscious intelligence may very well classify human attempts at narrative communication as adversarial.

I am the first to admit that this is not the most likely source of dangers when it comes to AI. My point is rather to illustrate that consciousness does not play a critical role when assessing a threat.

A Possible But Hopefully Not Likely Future

To briefly summarize my argument (cf. Orwell): When you are being stomped on by a boot, it does not matter whether the wearer is a conscious entity and capable of reflection. It just hurts.

Hence, I believe consciousness is a crucial concept for understanding what it means to be human (or a bat). But I also believe that, as far as dangerous AI is concerned, consciousness is neither a necessary nor a sufficient condition. In that sense, consciousness, at least for AIs, is overrated. Sure, the ethical questions crucially hinge on consciousness, but the existential ones do not. The danger, it seems to me, is not that AI might suddenly wake up, but rather that it might never need to in order to do harm.

Here’s to a better future.


  1. This was masterfully explored in an episode of Star Trek: The Next Generation. ↩︎

  2. There are subtle differences between sentience and consciousness in philosophy, but sometimes, the two terms are unfortunately used interchangeably. I will try to stick with consciousness. ↩︎

  3. Whether the current models do or do not exhibit such traits is beside the point here. ↩︎

  4. To be discussed in a different post. This one is already rambling enough. ↩︎