There Are Probably No Silver Bullets

Tags: academia, research, musings

A persistent pattern I observe in academia, and in machine learning in particular, is a largely unwarranted belief in silver bullets, i.e. new methods that suddenly ‘solve’ a large variety of problems much more efficiently and effectively than everything that came before. This belief manifests itself in paper titles, like the meme of ‘X Is All You Need.’

I have already seen the rise, and subsequent fall, of several of these paradigms, the most recent one being diffusion models, which are being replaced by flow matching. But the pattern repeats continuously and on multiple levels, including software frameworks. It always starts with many of us believing that all solutions except the new one are bad and should be avoided at all costs. This is followed by vigorous use of and experimentation with the new technique, software, etc., and then by the inevitable disappointment when we discover that, no, it cannot solve all our issues for us. There are two things that still surprise me about this:

  1. Machine learning (or science in general) is sufficiently large to afford more than one useful paradigm. Our beliefs in a silver bullet are tantamount to stating that henceforth, everything has to be phrased in vector spaces because they are clearly superior to rings or groups.

  2. The hype nevertheless continues. The field constantly switches from one paradigm to the next without actually exploring anything in depth.1 The detrimental outcome is that we often operate under false premises, such as the assumption that regular graph neural networks cannot handle long-range dependencies.

I can only speculate as to the source of this belief. Maybe it is grounded in the equally misguided belief in the ‘novelty’ of methods.2 Maybe it is just the desire to impose some order on a chaotic world. Maybe our field is not yet mature enough.3 The only tool that comes even close to a scientific silver bullet is category theory, and few humans have the godlike intellect required to use it effectively; I am certainly not one of them…

Note that I do not for one second deny that our field advances. There are great leaps forward, and machine learning methods are definitely maturing. I merely reject the idea that there is one architectural solution to all problems, or that there is a single framework or programming language that will solve all our current issues and address all our future needs. This is not how it works, at least as long as we do not have AGI.

In any case, I sincerely hope our focus on silver bullets does not cloud our judgement. Until next time!


  1. Given that many of us are doing deep learning, this lack of depth might seem ironic. ↩︎

  2. In case you are interested, Michael Black wrote an outstanding guide to novelty in machine learning research. ↩︎

  3. I believe that many aspects of contemporary machine learning research are characteristic of a protoscience rather than a fully-developed science. We are getting there, though! ↩︎