Human in the Gaps: Thriving in the New AI Age
The recent introduction of large language models has finally brought AI to the attention of the general public. New business models are being created, papers are being written, and the headlines are full of ChatGPT and its ilk. I saw the best minds of my generation raptly typing prompt after prompt while real life continued outside. The fights between pundits were pandæmonium, with some welcoming these models as the long-awaited beginning of Artificial General Intelligence (AGI),1 many others already expecting societal upheaval because of crucial changes in the job market, and some demanding a moratorium on future research.
What is there left to be said that has not already been said by more eminent minds? The answer is a lot. In fact, I want to focus on one actor in the whole large language model debate that has been overlooked so far: the human being per se (not as a participant in a market, but in other roles). My aim is to provide some thoughts about what it actually means to be a human in a world with ChatGPT & co. Moreover, being ever the optimist, I want to point out some options for a brighter future.
Before we delve into this, however, here is a summary of my stance vis-à-vis the more usual points raised in the debate. Feel free to skim or skip.
Is this AGI? Very likely not. We are just being fooled by the fact that the system uses natural language to communicate with us. This is a kind of ‘hack’ or ‘exploit’ innate to human beings: if it talks like a conscious entity, it is probably conscious. Since we do not really understand consciousness (see below), we have no way of knowing what is going on in there. If you are a policymaker reading this: please provide more funding for interdisciplinary projects analysing such models under different aspects! We have a lot to learn.
But will this lead to AGI? I do not think that ‘more data’ is necessarily the way to go, but I feel unqualified to have a strong opinion here. I doubt, however, that we will reach conscious models that way, mostly because evolution and nature seem to suggest that human beings, at least, acquire knowledge about the world not innately but through interaction.
How do you define consciousness anyway? 無 (mu). Three pounds of flax. I do not think we currently have a useful model of consciousness to answer this question correctly. While I am tempted to answer ‘I know it when I see it,’ I am not even sure whether this is entirely true.
Is this an existential risk? Very likely not. However, this does not mean that the new models are without any ill effects, on the contrary. Advanced technology at the behest of humans with primitive morals is a recipe for disaster.
But surely the good things will outweigh the bad things? (or vice versa) I have no clue, sorry (and I suspect the people who will provide you with a stronger opinion might also not have a clue). Speculating, I can easily imagine a future in which the benefits prevail, but it requires a shift in attitude from our side. That is what this article is all about!
Letting Humanity Vanish, One Step At A Time
One type of argumentation put forward by critics could be summarised as follows: ‘I tried model X version N on task Y and found that it is [subtly|less subtly|obviously] wrong. Gotcha—these models suck forever.’ The traditional counterargument will of course be ‘Sure, but have you tried version N+1 of model X? It is going to solve all of this.’
Let me be frank: this type of debate is tiring and makes my eyes glaze over. Moreover, it is detrimental to our understanding of how to position ourselves with respect to these models because it pushes humans into the gaps, and those gaps might become tighter as the version number advances. For instance, maybe your favourite model cannot yet write documents that are more than 50 pages long.2 Great—being human now is reduced to being able to write more than 50 pages of text about a topic! This holds until another version is released and presto, here we go again. We must resist this at all costs.
A somewhat better argument than the previous one comes from the business side of things. It is typically paraphrased as follows:
AI will not replace people doing task Y. People working on task Y that refuse to use AI will be replaced by those who do.
While an admirable sentiment because it provides some hope, this turns AI into a mere tool; if (and I admit that this is a big if) it turns out that future versions of AI develop a consciousness, I do not think this statement will hold much longer unless we are willing to deprive a conscious entity of its freedom.
Moreover, it is short-sighted in the sense that there are certain tasks that machines—not only AIs—can solve better than human beings. We already know that. That does not mean that these tasks have no intrinsic value for human beings. For instance, when learning to draw, it can be useful for aspiring artists to simply learn to manually copy a drawing in order to train their muscle memory. Likewise, aspiring musicians practise their music even though there are high-quality recordings available.
Mirrors and Guardians
These examples also illustrate the fundamental conundrum of this new age: we need to ask ourselves what the existence of these models means for an individual human being and for our collective psyche. I believe that addressing these questions involves interacting with AIs, both the current models and all the new ones that are yet to come. During such an interaction3 we may ask whether AI is a mirror—potentially a distorting mirror—that can reflect certain aspects of humanity such as creativity or the joy of playing. With ever more powerful models, one vision of the future might be that AIs eventually become our ‘guardians,’ enabling us to experience and contemplate existence together.4 Of course, this is just a pipe dream for now, but I think it is important that we continuously ask what AI reveals about the human condition.
Being Like Children
This brings me to another issue with the current discourse: by always asking what the new models can do better, we are following a path towards an ever-utilitarian perspective of humanity. If taken to its very extreme, this might be a much larger risk to human existence than nefarious AIs deciding to get rid of us altogether. I believe that this is what numerous philosophical and religious texts are referring to when they ask us to be more like children. Children have an innate sense of wonder and joy about just being. A child drawing and scrawling is expressing something; such a child is not dissuaded by the fact that there are more advanced painters.
I think we need to refrain from a purely utilitarian perspective of the world in which superhuman AI performance provides us with a bleak existence bereft of joy and hope. If we ask ‘What is the purpose of human beings anyway?,’ I think the only acceptable answer can be ‘To experience reality.’ Kierkegaard famously wrote:
Life is not a problem to be solved, but a reality to be experienced.
I think we would do well to consider our place as conscious, living beings, or, if you are so inclined, ‘as the universe experiencing itself.’ If we assume that perspective, we can weather any AI revolution and may yet hope to find ways to coexist and prosper together, with childlike wonder.
Now, I am under no illusion that achieving this state will be easy. If indeed we ever find consciousness in our own creations, we have a large responsibility to shape a better society in which to integrate it. Such a society will indeed look radically different from the one that we have today—and I dare say that it will probably be a better one.
Enjoy the ride, until next time!
Epilogue: More Hope?
For those of you who feel that this marks the end of meaningful human pursuits or machine learning research, let me provide you with a quote by Albert A. Michelson, which is often incorrectly attributed to Kelvin:
While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals.
How wrong this turned out to be! Machine learning needs more researchers now more than ever. There is more to machine learning than training large language models—as impressive as they might be. Moreover, for finite beings like ourselves, the universe is inexhaustible, both in a figurative and in a quite literal sense. Statements like Gödel’s incompleteness theorems remind us that there is always a place for human ingenuity. Again: let’s enjoy the ride.
Potentially including a soupçon of ‘the singularity is nigh!’ ↩︎
This is a deliberately stupid example; I hope you can see my point. ↩︎
I am loath to use the more suggestive term ‘conversation’ here, because I am not sure to what extent we are dealing with proper interlocutors. ↩︎
I know that this is a naive and romantic view. Of course the proponents of the ‘AI as an existential risk’ idea would say that just hoping for something does not make it true, but I am outlining a possible nicer future here with humanity in it. ↩︎