Bastian Grossenbacher Rieck
Welcome to my personal and academic space! I am a Professor of Machine Learning at the University of Fribourg, leading the AIDOS Lab, which focuses on research at the intersection of geometry, topology, and machine learning; I sometimes use the monikers topological machine learning or topological deep learning (see below for a brief explanation). I also hold a secondary appointment at the Institute of AI for Health and the Helmholtz Pioneer Campus of Helmholtz Munich. Finally, I have the honour to be a TUM Junior Fellow and a member of ELLIS.
You can give me anonymous feedback. I am particularly interested in feedback related to my research or my content. Alternatively, you can shoot me an e-mail.
I am looking for Ph.D. students and postdoctoral researchers, so if you like my research and would like to work with me, please reach out to me with your CV and a brief motivation letter!
This site contains some things I care about.
Affiliations
- I am a co-director of the Applied Algebraic Topology Research Network (AATRN). Topology may not save the world, but it may well save something that is at least homotopy-equivalent to the world. Check out our great seminar recordings, available for free on the AATRN YouTube channel, which has over 475 videos, more than 4,500 subscribers, and averages more than 24 hours of watch time per day.
- I am a member of the Topology, Algebra, and Geometry in Data Science (TAG DS) initiative. Our aim is to make more people in machine learning and data science aware of the wondrous possibilities of these subjects.
- I maintain DONUT - The Database of Original & Non-Theoretical Uses of Topology, which is based on a well-curated list of publications created by Barbara Giunti.
Background
Previously, I was a senior assistant in the Machine Learning & Computational Biology Lab of Prof. Dr. Karsten Borgwardt at ETH Zürich. I obtained my Ph.D. in computer science from Heidelberg University, which is also where I got my master’s degree in mathematics.
I am now at a stage where I often have to send in a biography, so I have started collecting a small selection of biographies, not all of which are entirely serious.
Miscellaneous
- If you are a student interested in working with me, please check out my notes for potential collaborations.
- I have a lot of great colleagues and friends who helped me get to where I am today; I keep track of this on a dedicated site with acknowledgements for my career.
- I might also sometimes appear under the name of Bastian Grossenbacher or Bastian Grossenbacher-Rieck (with or without a hyphen). Technically, my full legal name is ‘Bastian Grossenbacher, né Rieck,’ but the vagaries of international law—my spouse and I do not presently share the same citizenship—make choosing a consistent naming scheme a considerable challenge. To prevent any further confusion, I still publish under my birth name, i.e. as Bastian Rieck.
- I am trying this Mastodon thing and you can also find me on Bluesky.
Remarks
Why is topology useful?
Topology is a useful inductive bias for machine learning because it can provide qualitative insights when quantitative ones are harder to obtain. Moreover, topological characteristics are typically invariant under certain transformations, making them highly robust. I do not for a second believe that topology is a panacea on the way to better machine-learning methods, but it is also abundantly clear to me that a topological perspective can help obtain insights into the very foundations of our algorithms.
What is topological machine learning or topological deep learning anyway?
Both of these terms refer to fields that deal with the integration of concepts from topology into models. Topological machine learning (TML) is the more general umbrella term, just like machine learning is a superset of deep learning. If you use hand-crafted topological features with a support vector machine, that’s an application of TML. By contrast, topological deep learning (TDL) is the part of TML that is all about neural networks! Topology, being a vast field, can be employed in numerous ways here:
- Concepts from point-set topology can be used to extend message-passing algorithms to simplicial complexes.
- Concepts from algebraic topology can be used to imbue neural networks with knowledge about topological features; an excellent example is making a graph neural network aware of connected components and cycles (see the sketch after this list).
- Concepts from differential topology can be used to study smooth functions on data sets.
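To make the second item above a bit more concrete, here is a minimal, purely illustrative sketch of the kind of topological information such a model could be given access to: the zeroth and first Betti numbers of a graph, which count its connected components and independent cycles. It assumes networkx (which is not part of anything described on this page) and is not the actual machinery used in topological deep learning models.

```python
# Illustrative sketch only: compute the zeroth and first Betti numbers of a
# graph, i.e. the number of connected components and the number of
# independent cycles. These are the topological features a graph neural
# network could be made aware of.
import networkx as nx


def betti_numbers(graph: nx.Graph) -> tuple[int, int]:
    """Return (b0, b1) for an undirected graph.

    b0 is the number of connected components; for a graph, b1 equals
    |E| - |V| + b0, the number of independent cycles.
    """
    b0 = nx.number_connected_components(graph)
    b1 = graph.number_of_edges() - graph.number_of_nodes() + b0
    return b0, b1


# Two disjoint triangles: two components, two independent cycles.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])
print(betti_numbers(G))  # (2, 2)
```

For graphs, these counts follow directly from the Euler characteristic; richer topological features, such as persistence diagrams, require dedicated computational-topology tools.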
More generally, topological deep learning refers to uses of topological concepts that are either of an interventional nature, meaning that they somehow influence a model and its learned representations, or of an observational one, meaning that they do not have a direct bearing on the internals of a model. Notice that observational does not rule out influencing the training process as such: a stopping criterion based on topological information, for instance, could still be considered observational, since it never touches the model's internals.
Can I use some of your materials?
By all means, go ahead! The footer on the page provides hints as to how to properly acknowledge me. Notice that I am not asking for any citations. I am merely asking that, if you use some of my figures or illustrations, you add a note identifying me as the original author. If you want, drop me a note and let me know about your work; I am always happy to see where my content ends up. By the way: you do not need my permission to use any of my materials. The license already gives you that right, as long as you acknowledge me.
Why did you not reply to my e-mail?
I used to be able to reply to every e-mail I received, but this is just not sustainable any more. If your e-mail is unspecific, such as a generic application e-mail that does not refer to my group or my work, I will probably not reply to it. This is because I need to be frugal with my time these days.