Epistemic Humility in the Age of AI

Tags: research, musings

During my doctoral studies, I learned a lot of things about myself that I did not know before. I gazed into the abyss, as one is wont to do, and the abyss winked back at me. I rejoiced at new insights only to discard them again a couple of days later because they turned out to be wrong. But most importantly, I learned the value of cultivating epistemic humility. That is, I came to understand that, despite my best efforts, it is impossible to grok the world in itself, or, as Kant would have it, the thing-in-itself.1

As a side effect of operating with epistemic humility, I was not only able to summon “crippling self-doubt, like any healthy human being should,”2 I also got to be progressively well-calibrated when it came to understanding my own understanding. This, I believe, is one of the most important skills to have in the age of AI—not only as a scientist but especially as a user or consumer of AI-based products.

From my (admittedly limited) vantage point, this skill seems to be eroded, or at least stifled, by the presumed expert voice of modern large language models: Every month, I receive missives from poor souls who believe that they have discovered something profound and absolutely need to post it on arXiv now to share it with the world. Often, they ask for my endorsement or my feedback. And as much as it breaks my heart, I typically refuse such requests—mostly because in almost all cases, there is not a lot of content to actually deal with, except for some speculation hidden in jargon-laden, pseudo-profound paragraphs.

For instance, one person proposed to use topology to define intelligence. A tall order, and maybe interesting for a speculative blog post, but there was not a lot of criticizable content in there. In fact, there was not a whole lot of anything in there; the words proved as slippery as fishes and as solid as fog. I thus politely declined with a variant of this text:

Thanks for reaching out! I have studied the attached paper, but I am not comfortable providing an endorsement. The paper gives me the impression of the vestiges of a philosophical argument, written by an LLM. As such, I believe that there are better dissemination venues. The arXiv is currently drowning in such submissions and I do not intend to exacerbate this situation.

I do not want to discourage you from this line of inquiry, on the contrary! But if you intend to turn this into a scientific argument, the work also needs defensible hypotheses. As it stands now, there is a lot of hedging about what your proposed concept is not, and a lack of detail about what it is.

To their credit, the person took it in stride, but nevertheless managed to disseminate their work through other channels. Other cases are a bit more tragic, like a startup founder being duped by their co-founder, a tale involving seemingly breakthrough technologies and IP that, ultimately, turned out to be, if anything, smoke and mirrors. I am not privy to the full details but my impression is that both parties believed in the technology and the merits of the work, which makes the story all the more tragic.

Regardless of the circumstances, when I am confronted with this, I try to answer with empathy first and foremost, while trying to instill at least some epistemic humility. Having done this a couple of times now, I have even developed a kind of “template”:

From what I can tell by looking over your materials, there is a severe disconnect between the claims and the resulting models. That is not to say that the models are not calculating something, but they are definitely not calculating what the text is claiming.

To be perfectly candid: Looking over more than a hundred pages of this type of content is more akin to a “distributed denial of service attack” than to science. I trust that you have good intentions and curiosity, which are the hallmarks of an excellent scientist, but I would really like to implore you to focus on quality and depth over quantity here.

I do not want to give you the impression of a gatekeeper here. I believe that many great innovations come from those who walk non-traditional paths. But there are no easy shortcuts other than studying the materials at hand. AI might give you a boost if you suspect interesting connections, but this is not a substitute for diving into the topics.

Depending on the reaction, I might also be upfront about the frustration experienced as a scientist:

The current situation is also quite frustrating for researchers. We spend years honing our craft, knowledge, and understanding of things, only to find ourselves tilting at windmills now.

I am not sure you want to hear this, but I believe it is important to state: What you sent over is laden with jargon. The topics you try to tackle are vast, deep, and diverse. Please do not fool yourself into believing that you have understood all of them based on brief self-study.

Make sure your epistemology is strong and approach these topics with the beginner’s mind. Otherwise, AI models will just lead you astray and tell you what you want to hear. Do not get fooled by the fake expert voice of LLMs; there are no shortcuts to understanding stuff.

I hope you take these words in the spirit of encouragement and support in which they are given.

As draining as such interactions are, I hope they help at least some people to some extent. If one is not well-calibrated about the things one does not know, one might take the shallow understanding offered by an LLM to be profound insight. Over time, this will erode one’s ability to reason from a perspective of epistemic humility. Since “no man is an island,” this has direct consequences for others; it does not remain an isolated problem for a single person for long.

I do not believe our institutions have been built to deal with this sort of onslaught. There may be many antidotes, but the most potent one—or so it seems to me—is to cultivate our own internal defenses. Epistemic humility is the single most important of these. If we approach the process of learning or understanding from this vantage point, we have a realistic shot.

Here’s to all forms of humility, until next time!


  1. It sounds better in Klingon German. ↩︎

  2. Paraphrased from Matt Amodio. ↩︎