The Machine Learning Scientist as Toolsmith: Remembering Fred Brooks
Tags: academia, musings, research
Preamble
(This article was originally written in June 2022. Dr. Brooks passed away in November 2022. I have since rewritten this article to commemorate his legacy.)
I had the honour of meeting Dr. Brooks and talking to him for a while, conducting a type of informal ‘fireside chat’ at my graduate school. Even though Dr. Brooks was well into his 80s at the time, he exuded vigour, energy, and, above all, a type of serene kindness. Speaking carefully, precisely enunciating every word, Dr. Brooks had everyone listening in rapt silence. Here was one of the greats of computer science, still taking the time to talk candidly to a bunch of undergraduates!
His writing style mirrored his speech: it was never overly preachy or condescending, but it also never shied away from discussing hard, inconvenient truths and showing them as they are. Even if you disagreed with some of his claims, you had to appreciate the way they were presented. This is what made his writing and his discussions so special.
His legacy encompasses so many domains, and the fact that he coined some of the most popular paradigms in software engineering, still relevant to this day, shows that his work is, in some sense, timeless. In the rapidly changing world of software, this is no small feat!
With Dr. Brooks’s passing, computer science has lost a critical but caring voice, a conscience, someone willing and able to look beyond the hype, and, above all, someone who emphasised the human aspects of computing.
May he rest in peace, and may his memory be a blessing to all of us.
Introduction
One of the most influential and formative essays of my undergraduate days is The Computer Scientist as Toolsmith II by Fred Brooks of The Mythical Man-Month fame. Written in 1996 as a speech by someone looking back on a long and successful career, it contains many nuggets of wisdom, showing the vision one man had for his chosen research field. The essay, which I shall henceforth abbreviate as CST-II, can also be read as a passionate love letter to computer science—specifically, computer graphics—with a special appeal to not lose track of the problems that matter. Since CST-II has just celebrated its 25th anniversary, I want to honour it with an essay of my own, dealing with machine learning.
Does AI Solve X?
A lot has changed since CST-II was published. The field of machine learning or artificial intelligence research has really taken off with the recent ‘deep learning’ revolution, which is still ongoing. The field attracts a lot of media attention whenever someone claims that ‘AI has solved X,’ a very interesting snowclone that is usually followed by articles showing that ‘X’ is more complex than initially anticipated and maybe should not be considered ‘solved’ yet. In the past few years, ‘X’ has taken on quite a few values—I shall not even attempt a proper enumeration—but the hopeful, somewhat boisterous claims have not materialised so far: reality, it seems, is always a little too complex for us and our algorithms to fathom.
Some people apparently even derive pleasure from jeering at grandiose claims that did not materialise. I am not one of them; in fact, I would love to see solutions to specific issues,1 notwithstanding the fact that machine learning algorithms also have an implicit ‘dark side,’ mimicking our own biases or lending credence to unfair2 decisions.
Few Toolsmiths Here
In the tradition of CST-II, it seems to me that one of the reasons for the disconnect between what AI can do in potentia and what AI delivers realiter is that machine learning scientists are loath to embrace the toolsmith moniker. We would rather work on fast-paced research projects, racing to avoid being scooped, before finally admiring the fruit of our labours: a pristine publication in a top-tier machine learning conference that contains (a) an overview figure showing a diagram of how our method is supposed to operate, and (b) a table demonstrating how neatly our new method outperforms everything else on a set of benchmark data sets. But this type of fame never lasts; in a few months, our method will be outperformed by the next one, and so the cycle repeats.
Why are we like this? This mentality, as well as many of the descriptions accompanying our publications, is more akin to that of a salesperson. This has already been decried elsewhere, so I shall not chime in; instead, I shall try to explain why I sometimes get absorbed into the fast-paced machine learning conference treadmill described above: I like to believe that I have fully understood a problem; I feel immense joy in creating something novel and original, and in seeing how it fares in what I consider to be the real world. But of course, benchmark data sets are not the real world. In fact, all benchmark data sets are wrong, but some of them are useful. I often forget this and trick myself into not asking the difficult questions, such as ‘Does this really solve a specific problem beyond these preselected data sets?’ In the meantime, the real problems on real data sets remain unsolved, as I sprint towards the next conference…
Saving Grace versus Hubris
Now, this essay is not meant as a confession of what I perceive to be my sins, or, worse, a condemnation of the sins of the community.3 I do not think there is necessarily anything wrong with the aforementioned quick-paced approach. Provided we adhere to good scientific practice,4 such gradual, bumbling advances will still lead us somewhere. The big question is what the ultimate goal should be.
Here, I observe a problematic inclination that requires some counterbalance: the evangelists of the field5 proclaim that our ultimate goal is AGI, i.e. Artificial General Intelligence, an AI that is capable of learning on its own and of ultimately surpassing human intellect. Given that we know nothing about the ‘alignment,’ i.e. the ethics and world view, of such an AI, some consider the advent of AGI an actual extinction-level scenario for humankind. With the prevalence of the aforementioned snowclones about AI having solved problem X in mind, I remain somewhat sceptical about the probability of this happening quickly,6 and I would be perfectly content with specialised AI that can solve some ill-defined, highly complex tasks such as driving a vehicle. Nevertheless, AGI appears to be a worthwhile goal, but in the spirit of CST-II, I would be remiss if I did not point out that creating AGI without thinking about its ultimate impact on society is steeped in hubris. Once again, as originally criticised in CST-II, our field sounds more like the original builders of the Tower of Babel:
And they said, “Come, let us build ourselves algorithms, and an AI whose power exceeds that of the heavens.”
Let me be clear here: there is nothing wrong with having a grand vision for the future! The problems start when that vision begins to eclipse the here and now, forcing us, like the ancient alchemists,7 to oversell our own products. I think we would do well to understand that machine learning, even if it should ultimately lead to some form of AGI, will always have a certain toolsmith component to it. The study of algorithms by itself does not teach us anything about the universe; our tools only have worth insofar as they have a certain utility for others. For me, this understanding does not hamper my joy in research. On the contrary: believing that my work also has some utility, in addition to the (somewhat abstract) intellectual joy it generates, was a crucial reason for me to move into computer science and machine learning in the first place.8
Providing Utility
I think the point on utility bears repeating and also necessitates some delineation. As Dr. Brooks notes in CST-II, having applications and users of our methods in mind keeps us (intellectually) honest. I think this is a strong argument for embracing the toolsmith identity. That is not to say that every bit of our research needs to be subordinate to a specific application. On the contrary! My thinking process as an ‘impure,’ i.e. applied, mathematician is often directly inspired by a real-world phenomenon. That does not mean that I will always come up with a solution to the problem at hand, of course, but my past discussions with life scientists and clinicians have instilled in me a profound respect for the complexity of their profession and their daily challenges, many of which are still beyond our current ken (or at least beyond mine).
And it is also here that I perceive the flaws of the prevalent mindset most strongly: how can we presume to make progress towards AGI if we do not involve those who study intelligence in living creatures? How can we make progress in improving patient care and well-being if we do not involve nurses and doctors? In many other disciplines, not involving the experts themselves is the mark of crackpots—countless ‘proofs’ of the Riemann hypothesis do not even survive a brief critical glance from an expert mathematician!9 Of course, machine learning is not a field of crackpots; if it is to remain that way, I find reaching out to other disciplines vital and potentially crucial for our future.
Moving Forward
Having spent quite a few words on admonishments, I want to close with positive thoughts and suggestions about how to embrace the toolsmith identity a little bit better:
- Find better words to express what we are doing. I know that people dislike the word toolsmith, so maybe alternative forms are more apt. How about ‘artifex,’10 ‘builder,’ or ‘steward of AI’? I am sure the community can easily come up with more and better suggestions. Embracing this as a guiding principle does not make us lesser scientists.
- Stop taking pot-shots at each other when discussing whether technology Y is the true solution for the field. To borrow another paradigm from Dr. Brooks: there are (probably) no silver bullets in our field. Whether symbolic approaches are helpful or not is something that only experiments and more research will show. There is room for more than one theory, and certainly room for more than one approach! Just as mathematics consists of different vibrant subfields, machine learning might also experience such a partition in the future.11
- Do not confuse benchmark data sets with the real world. Always ask to what extent these data sets are ‘wrong’ or ‘overly optimistic.’ Make sure you understand their provenance and how they have been created; treat them as the first step towards an eventual realisation of your methods, not as the last.
- Make sure to discuss the limitations of your methods. While a hammer can also be used to open a bottle, it is probably not the right tool for that purpose. Explain the nails that accompany your hammer. This might also entail refraining from making claims that are too bold or too big and might not survive close scrutiny. Your papers will not only be read by experienced researchers, but also by the public. The honest paper may generate less publicity, but it will also be much more impactful over time.
- Ensure that your methods are reproducible. Just as mathematicians lay their proofs bare for the world to see,12 we need to ensure, above all, that our methods can be properly critiqued by the community and understood by potential users; a minimal sketch of what this can look like in code follows after this list.
I hope that these suggestions are useful; I am writing them primarily to myself, to reread whenever I have lost sight of the larger scheme of things while chasing after yet another conference deadline. I am convinced that, to a certain extent, the issues our community is facing also stem from the fact that machine learning as a science is still trying to find its place in the world; we are still experimenting wildly and widely, advancing numerous hypotheses, and yet are continuously stumped when it comes to explaining some of the phenomena we observe in our tools. As the field matures—and mature it will—I wish for us to arrive at the same point St. Paul had reached when penning the following lines:13
When I was a child, I spoke as a child, I understood as a child, I thought as a child; but when I grew up, I put away childish things. Now we see but a poor reflection in a mirror, but then we will see everything with perfect clarity.
Here is to striving for that clarity. Until next time!
Epilogue. For a more concrete list of issues that plague modern machine learning, I strongly recommend reading Troubling Trends in Machine Learning Scholarship. I would love to read a revised version of this paper in which the authors comment on what has and has not changed over the past five years.
1. I have a personal stake in advances that concern personalised medicine, in particular oncology and neurology. I ardently wish for AI-driven improvements here. ↩︎
2. For instance, policymakers appear to have an almost magical belief in the correctness of the outputs and decisions of computer algorithms (an observation that might well lead to another blog post in the future); they tend not to challenge them, much to the detriment of the people who are affected by such decisions and have their loan denied or their job application rejected. ↩︎
3. In fact, I would rather remove the beam from my own eye before checking for potential specks in the eyes of others. ↩︎
4. Another term worthy of its own essay in the future! ↩︎
5. I cannot help but notice how apt this term is, given that CST-II relies heavily on quotations from the Bible. ↩︎
6. These are words that are almost perfectly made for being regretted later. If you are reading this post-singularity, please know that I really believed what I wrote! ↩︎
7. Ali Rahimi expressed his thoughts about this wonderfully in a ‘Test of Time’ Award speech. Predictably, it also prompted a response from none other than Yann LeCun! ↩︎
8. Having started my studies of mathematics roughly 16 years ago—tempus fugit—I think it would not be wrong to characterise my career trajectory as ‘This guy is getting more and more applied by the minute.’ ↩︎
9. Maybe the field is ripe for its own ‘crackpot index,’ like the one for physics originally created by John Baez. ↩︎
10. Shamelessly borrowed from Neal Stephenson’s ‘The Diamond Age.’ ↩︎
11. Whether such a partition would be more of a splintering into different fiefdoms with different coinages and languages, or more akin to a union of states through which trade and knowledge flow freely, has yet to be seen. ↩︎
12. Except perhaps Fermat, and look how that turned out! ↩︎
13. This is not a verbatim quote; I strove for more inclusive language here, while also keeping in line with the inimitable CST-II, which is peppered with allusions to and quotes from biblical texts. ↩︎