‘What is a Manifold?’, Redux: Some Non-Examples

Tags: howtos, research


Manifolds are still the ‘bread and butter’ of modern machine learning research. I discussed them in an earlier post, aptly titled ‘What is a manifold?’. Here, I want to describe some non-examples, i.e. spaces that are not manifolds. The main motivation for this post is to dispel some false notions about manifolds—the term has become so common in certain circles that it is almost synonymous with ‘data.’ However, as I tried to explain in my previous post, a manifold is a space that is exceptionally well-behaved and, most importantly, homogeneous.1 To reiterate my favourite definition:2 a $d$-dimensional manifold $\mathcal{M}$ is a space that locally looks like a $d$-dimensional Euclidean space, i.e. like some $\mathbb{R}^d$. The important thing is that $d$ must not vary; every point in $\mathcal{M}$ needs to satisfy this definition for the same value of $d$.

The ubiquitous manifold hypothesis

The ‘manifold hypothesis’ refers to the assumption that a given data set $\mathbf{X}$ is actually a discrete sample of some manifold $\mathcal{M}$. Often, the hypothesis is extended by stating that $\mathcal{M}$ must have a significantly lower dimension than the data $\mathbf{X}$. If $\mathbf{X}$ is sampled from some $\mathbb{R}^D$, we would hope that $\mathcal{M}$ is $d$-dimensional, with $d \ll D$.

This hope is not in vain! For instance, consider $n = 1000$ samples from a unit circle. While we only need two dimensions to describe them, we could artificially extend our samples with $998$ zeroes, resulting in a set of $n = 1000$ samples from a $1000$-dimensional space.
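As a small illustration, here is a minimal sketch in Python of what such an artificial embedding could look like (the use of numpy and all variable names are my own choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample n = 1000 points from the unit circle; a single angle
# suffices to describe each point.
n = 1000
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
X = np.column_stack((np.cos(theta), np.sin(theta)))  # shape (1000, 2)

# Artificially embed the samples in a 1000-dimensional ambient space
# by padding each point with 998 zeroes.
X_high = np.pad(X, ((0, 0), (0, 998)))  # shape (1000, 1000)
```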


A circle. With a sufficient number of samples, it appears to be one object. If we were to live on this space like the depicted bug, we would think that it is $1$-dimensional—we may only move forward or backward.

Yet, the underlying model of this space only requires a single parameter, viz. the angle. The other dimensions are superfluous in the sense that they are not related to the parameter(s) creating the underlying shape of the space. This, in a nutshell, is an example illustrating the manifold hypothesis. It explains why we use dimensionality reduction methods such as autoencoders, PHATE, principal component analysis, t-SNE, and UMAP to create low-dimensional representations of our data sets.
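Sticking with the circle example, here is how one might probe this with principal component analysis, assuming scikit-learn is available and reusing `X_high` from the sketch above. Note that a linear method such as PCA can only recover the two-dimensional plane containing the circle; extracting the single angle parameter requires a nonlinear method.

```python
from sklearn.decomposition import PCA

# Fit PCA on the padded circle samples from above. The explained
# variance should concentrate in the first two components, hinting
# that the ambient dimension vastly overstates the intrinsic one.
pca = PCA(n_components=10).fit(X_high)
print(pca.explained_variance_ratio_.round(3))
# Expected: only the first two entries are markedly non-zero.
```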

Now that we are on the same page again when it comes to the manifold hypothesis, let’s consider a few spaces that are not manifolds.

Manifold soup

The aforementioned restriction on the dimension $d$ already rules out certain spaces. Since $d$ must remain fixed, spaces that I would like to dub ‘manifold soup’ are not manifolds. For instance, taking samples from spheres of different dimensions does not result in a data set that satisfies the manifold hypothesis. At best, one could say that such a data set can be partitioned into different manifolds, provided that the spheres are not otherwise entangled or intersecting.


A tasty manifold soup, containing a 1-sphere, i.e. a circle, and a 2-sphere.

This example might be considered a little bit pedantic, but I think it neatly highlights that the manifold hypothesis might need a slight reformulation here. Notice that if we were to take samples from this soup, we might inadvertently conclude that the space has dimension $d = 2$ because—unless the circle is more densely sampled than the $2$-sphere—we have a higher probability of encountering samples from the $2$-sphere than from the circle. This, in turn, could lead to problems in analyses; we might not even be able to detect that there are different dimensions in the data set…
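If you want to cook up such a soup yourself, here is a minimal sketch; the 200/800 split and the placement of the circle are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere(n, d):
    # Uniform samples from the unit d-sphere S^d in R^(d+1),
    # obtained by normalising standard Gaussian vectors.
    x = rng.normal(size=(n, d + 1))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# A 'manifold soup' in R^3: a circle (S^1) next to a 2-sphere (S^2).
# The circle lives in the xy-plane and is shifted away from the
# sphere so that the two pieces remain disjoint.
circle = np.pad(sample_sphere(200, 1), ((0, 0), (0, 1))) + [3.0, 0.0, 0.0]
sphere = sample_sphere(800, 2)
soup = np.vstack((circle, sphere))
```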

Manifold wedges

We can also turn two manifolds into something that is not a manifold by calculating their wedge sum. Staying within the realm of culinary terminology, such ‘manifold wedges’ are obtained by taking two manifolds $\mathcal{M}$ and $\mathcal{N}$ and identifying their basepoints. This has the effect of ‘gluing together’ $\mathcal{M}$ and $\mathcal{N}$ in exactly one point.

Calculating the wedge sum of two copies of $S^2$, i.e. the ‘standard’ sphere, we obtain $S^2 \vee S^2$, which, with colours based on the $z$ coordinate, looks surprisingly like meatballs:3


The wedge sum of two $2$-spheres.

Again, this space is not a manifold—the point at which we identified the two spheres does not locally look like a $2$-dimensional Euclidean space. In fact, the connection point is a singularity in the sense that it is different from all the other points.
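If you want to experiment with this space, here is one minimal way to realise $S^2 \vee S^2$ as a point cloud; the placement of the spheres is my own choice, and any two spheres touching in exactly one point would do just as well:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sphere(n):
    # Uniform samples from the unit 2-sphere via normalised Gaussians.
    x = rng.normal(size=(n, 3))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Realise the wedge sum as two unit spheres that touch in exactly one
# point: centre them at (-1, 0, 0) and (+1, 0, 0), making both pass
# through the origin, which serves as the glue point.
left = sample_sphere(1000) + [-1.0, 0.0, 0.0]
right = sample_sphere(1000) + [1.0, 0.0, 0.0]
glue = np.zeros((1, 3))  # include the singular point itself
wedge = np.vstack((left, right, glue))
```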

Let’s briefly process this: just by virtue of changing a single point, we were able to turn two nice manifolds into ‘something else.’ Now, the implications of this change are more nuanced than in the manifold soup example above. There are two common scenarios that I have encountered when it comes to the behaviour of manifold learning algorithms here:

  1. The space is treated as a single large sphere.
  2. The space is treated as two (disconnected) spheres.

In the first case, the singularity is given way too much importance, whereas in the second case, it is completely ignored. Both cases fail to properly describe this data set; in my experience, there are few algorithms capable of handling such spaces.4

To offer a more conciliatory perspective: existing algorithms might not be able to fully handle such spaces, but at the same time, they are at least not ‘crashing.’ The existence of some (isolated!) singularities is therefore not an issue for these algorithms. Moreover, in the presence of noisy samples, it is sometimes hard to decide whether such singularities are actual features of a data set or just a by-product of noise.

Just a pinch…

Lastly, I want to show you what happens if you pinch a manifold. This is best illustrated by the most famous of all pinched manifolds, viz. the pinched torus—also known as a croissant surface or, as I like to call it, a gipfeli.5


A pinched torus (a gipfeli), colour-coded by its mean curvature.

You can see that this indeed looks like a gipfeli. To show the effects of the pinching somewhat better, I colour-coded the object by its mean curvature (I am discussing curvature in another post, in case you are interested). This object is also not a manifold because of the pinch point, which—you guessed it—constitutes a singularity.
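In case you want to bake your own gipfeli: the following sketch uses one possible parametrisation, viz. an ordinary torus whose tube radius shrinks to zero once along the large circle. The concrete formula is my own choice; there are other ways to pinch a torus.

```python
import numpy as np

# A pinched torus: an ordinary torus whose tube radius shrinks to
# zero at u = 0, collapsing one meridian circle to a single point,
# the pinch-point singularity. R and r are the usual torus radii;
# the sin(u / 2) factor does the pinching.
R, r = 2.0, 1.0
u, v = np.meshgrid(
    np.linspace(0.0, 2.0 * np.pi, 200),
    np.linspace(0.0, 2.0 * np.pi, 100),
)
rho = r * np.sin(u / 2.0)  # tube radius; zero at u = 0 (and u = 2π)
x = (R + rho * np.cos(v)) * np.cos(u)
y = (R + rho * np.cos(v)) * np.sin(u)
z = rho * np.sin(v)
```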

When I first learned about this procedure, I was intrigued: here is another very simple operation that destroys the ‘manifoldness’ of a space. Again, most algorithms will ignore the singularity and treat this object as a thickened circle, thus losing some structural information in the process.

What’s next?

These examples demonstrate how quickly one can escape the confines of the manifold hypothesis. Simple transformations give rise to spaces that are almost, but not quite, manifolds. Most algorithms are not sufficiently sensitive to detect these differences, so we need a new class of methods to tackle such data sets.

All of these examples have one thing in common, by the way: the data set might not be a manifold in total, but it is a manifold if considered in parts! For instance, removing the ‘pinch point’ of the pinched torus results in a manifold data set.6 The same holds for the wedge sum—removing the ‘glue point’ also turns the data set into a manifold again. Interestingly, both the singularity and the ‘space sans singularity’ are manifolds (albeit of different dimensions, as in the manifold soup example).
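Continuing the pinched-torus sketch from above (and reusing its `x`, `y`, `z`, and `R`), removing the pinch point from a finite sample amounts to a simple filter; the radius `eps` is an arbitrary choice:

```python
import numpy as np

# Drop every sample that lies within a small radius eps of the pinch
# point at (R, 0, 0). What remains is a finite sample of an honest
# 2-manifold.
eps = 0.05
points = np.column_stack((x.ravel(), y.ravel(), z.ravel()))
mask = np.linalg.norm(points - np.array([R, 0.0, 0.0]), axis=1) > eps
points_without_pinch = points[mask]
```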

Now, mathematics would not be mathematics if someone had not considered what to do in this situation! In fact, there is a nice underlying theory of studying spaces with such singularities: instead of studying the whole space at once—thereby causing problems because the manifold hypothesis is violated—it is suggested that the space be partitioned into strata that need to fit together.7 Each stratum has to be a manifold, but the dimension is allowed to vary between strata (in a controlled fashion, not willy-nilly). This leads to the definition of a topologically stratified space, which is beyond the scope of this article.

If you are interested in a more practical discussion of these issues, you might enjoy the paper Persistent Intersection Homology for the Analysis of Discrete Data. Here, my co-authors and I describe how to ‘sort of’ detect singularities in data, thus turning a data set into a stratified space, and how to analyse its properties. We show that this increases the expressive power of existing methods, leading to a more holistic understanding of the data.

The manifold hypothesis thus remains highly relevant—but if you suspect that your data set suffers from singularities, care should be taken when it comes to interpreting the results.

Hoping your manifolds remain tasty and fresh, until next time!

Acknowledgements: This post was inspired by discussions with members of SUMRY 2021, the summer programme of Yale University’s maths department. I have the honour of serving as a mentor this year.


  1. I use the term homogeneous here in order to denote that the space has the same dimension everywhere. In other words, it ‘behaves’ the same at all points. Thanks to ZenoRogue and Dr. Donut (aka Bryan Bischof) for pointing this out. ↩︎

  2. It is not the most precise definition, but it does not require a lot of additional jargon. ↩︎

  3. Topology already has a lot of agricultural metaphors. Why not introduce a few culinary ones, as well? ↩︎

  4. We will learn about some solutions later on. ↩︎

  5. This is the Swiss-German word for croissant, and it rolls off the tongue so much better, I think. ↩︎

  6. The reader may convince themselves by a sketch—the nicest way of proving things in topology. ↩︎

  7. A formal definition of this requires some additional technical details that are best covered in another article. ↩︎