I am reading "Sheaves in Geometry and Logic", by Saunders Mac Lane and Ieke Moerdijk. After presenting the notion of subobject classifer, they explain that the idea is modeled on other "classifying" ideas in topology. They go on with a brief presentation of the classifying space of a group as a -Bundle classifier. And they also mention the importance of classifying toposes (which they present in later chapters; I've not arrived there yet, although I looked at the definition already).
I think I roughly got the pattern. Some functor $F : \mathcal{C}^{\mathrm{op}} \to \mathbf{Set}$ acting by "pullback" is representable, $F \cong \mathrm{Hom}(-, A)$. Hence, the data $F(X)$ naturally corresponds to morphisms $X \to A$. The identity $\mathrm{id}_A$ yields a "universal" object in $F(A)$, in the sense that every object in $F(X)$ is a "pullback" of the universal object.
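For instance, with the subobject classifier the pattern reads as follows (writing $\mathrm{Sub}(X)$ for the set of subobjects of $X$ and $\chi_S$ for the characteristic map of a subobject $S$; the notation is the standard one, not quoted from the book):
$$\mathrm{Sub}(X) \;\cong\; \mathrm{Hom}(X, \Omega), \qquad S \mapsto \chi_S, \qquad S \;=\; \chi_S^{*}(\mathrm{true} : 1 \to \Omega),$$
i.e. every subobject is literally the pullback of the universal one, $\mathrm{true}$, along its characteristic map.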
It is said, and seems to be well-known, that such "classifying" ideas have been very important in many areas. But I think I don't know enough to appreciate the depth of these ideas. My question is more of the "general mathematical culture" kind:
Is there an example of a "killer application" of such a classifying object?
ps: For instance, Galois theory provides the famous correspondence between intermediate fields in a Galois extension and the subgroups of the Galois group. And the "killer app" is the unsolvability of the quintic equation.
pps: I'm also fine with an answer like "keep reading Mac Lane & Moerdijk's book".
Loop spaces and classifying spaces
For the theory associated to the infinite unitary group, U, the space BU is the classifying space for stable complex vector bundles (a Grassmannian in infinite dimensions). One formulation of Bott periodicity describes the twofold loop space, $\Omega^2 BU$, of BU. Here, $\Omega$ is the loop space functor, right adjoint to suspension and left adjoint to the classifying space construction. Bott periodicity states that this double loop space is essentially BU again; more precisely, $\Omega^2 BU \simeq \mathbb{Z} \times BU$ is essentially (that is, homotopy equivalent to) the union of a countable number of copies of BU. An equivalent formulation is $\Omega^2 U \simeq U$.
Either of these has the immediate effect of showing why (complex) topological K-theory is a 2-fold periodic theory.
In the corresponding theory for the infinite orthogonal group, O, the space BO is the classifying space for stable real vector bundles. In this case, Bott periodicity states that, for the 8-fold loop space, $\Omega^8 BO \simeq \mathbb{Z} \times BO$; or equivalently, $\Omega^8 O \simeq O$, which yields the consequence that KO-theory is an 8-fold periodic theory. Also, for the infinite symplectic group, Sp, the space BSp is the classifying space for stable quaternionic vector bundles, and Bott periodicity states that $\Omega^8 BSp \simeq \mathbb{Z} \times BSp$; or equivalently $\Omega^8 Sp \simeq Sp$.
Thus both topological real K-theory (also known as KO-theory) and topological quaternionic K-theory (also known as KSp-theory) are 8-fold periodic theories.
https://en.m.wikipedia.org/wiki/Bott_periodicity_theorem
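In terms of represented functors, the payoff can be sketched as follows (assuming the standard identification of topological K-theory with homotopy classes of maps, say for compact Hausdorff $X$):
$$K^{-n}(X) \;\cong\; [X,\ \Omega^{n}(\mathbb{Z} \times BU)], \qquad \Omega^{2}(\mathbb{Z} \times BU) \simeq \mathbb{Z} \times BU \;\Longrightarrow\; K^{-n-2}(X) \;\cong\; K^{-n}(X).$$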
It's awesome to hear that you're working on "Sheaves in Geometry and Logic"! I'm currently also working on learning about things relating to sheaves and toposes! (I recently resumed working in this thread: #learning: reading & references > reading through Baez's topos theory blog posts.)
I don't know the answer to your question, but I'm interested in your question!
To take a first guess, perhaps in some cases a morphism $X \to A$ is easier to work with than the corresponding element of $F(X)$?
As a second - and related - guess, perhaps in some cases $A$ has some additional structure, which induces some useful structure "pointwise" on $\mathrm{Hom}(X, A)$. This structure could then (hopefully) be carried over to $F(X)$, using our natural isomorphism $F(X) \cong \mathrm{Hom}(X, A)$. Maybe this could be helpful for studying $F(X)$?
However, I don't know any specific examples of where some strategy along these lines is helpful. I would be quite interested to learn of some examples, though, if they exist!
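To make the second guess concrete (still in the generic notation above, with a hypothetical binary operation $m$ on the representing object, assuming the category has binary products): any map $m : A \times A \to A$ induces, for every $X$,
$$\mathrm{Hom}(X, A) \times \mathrm{Hom}(X, A) \;\longrightarrow\; \mathrm{Hom}(X, A), \qquad (f, g) \;\mapsto\; m \circ \langle f, g \rangle,$$
and this operation is transported to $F(X)$ along the natural isomorphism $F(X) \cong \mathrm{Hom}(X, A)$, naturally in $X$.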
Is there an example of a "killer application" of such a classifying object?
for example, a little lemma discovered by Yoneda...
@fosco's comment reminded me of a related example:
Let $\mathrm{Cone}(-, D) : \mathcal{C}^{\mathrm{op}} \to \mathbf{Set}$ be a cone functor, so it sends each object $c$ of $\mathcal{C}$ to the set of cones with apex $c$ over some fixed diagram $D$. We can write $D : \mathcal{J} \to \mathcal{C}$, where $D$ is a diagram of shape $\mathcal{J}$. Then, if $D$ has a limit $\lim D$ in $\mathcal{C}$, we have $\mathrm{Cone}(c, D) \cong \mathrm{Hom}_{\mathcal{C}}(c, \lim D)$.
In this example, I guess that $\lim D$ is our "classifying object".
(Although I'm not sure if this is the kind of functor you had in mind!)
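Spelled out in the same notation, the universal element here is the limiting cone itself: under the isomorphism above,
$$\mathrm{id}_{\lim D} \;\longleftrightarrow\; \big(\pi_j : \lim D \to D_j\big)_{j \in \mathcal{J}} \;\in\; \mathrm{Cone}(\lim D,\ D),$$
and every cone over $D$ with apex $c$ is obtained from this one by composing with a unique map $c \to \lim D$.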
@Peva Blanchard
Is there an example of a "killer application" of such a classifying object?
I'd say the example you already mentioned, of classifying spaces for $G$-bundles in topology, is really a killer app. You might enjoy my explanation of them here:
This only touches on a few of their uses; for example I don't explain how cohomology classes on the classifying space give rise to cohomology classes on any space equipped with a principal $G$-bundle. These cohomology classes, called [[characteristic classes]], are fundamental throughout geometry and topology. If you've ever heard of [[Chern classes]], those are the most famous examples.
John Baez said:
This only touches on a few of their uses; for example I don't explain how cohomology classes on the classifying space give rise to cohomology classes on any space equipped with a principal $G$-bundle. These cohomology classes, called [[characteristic classes]], are fundamental throughout geometry and topology. If you've ever heard of [[Chern classes]], those are the most famous examples.
Thank you! Indeed, I read the entry "classifying space" on the nLab and Wikipedia: they mostly include examples of classifying spaces, but not really applications of them. Now, I think I can connect the dots to form a possible learning path: a topological space $X$, cohomology of that space, relation with the cohomology of $BG$, classifying space as loop space, etc.
David Egolf said:
fosco's comment reminded me of a related example:
Let $\mathrm{Cone}(-, D) : \mathcal{C}^{\mathrm{op}} \to \mathbf{Set}$ be a cone functor, so it sends each object $c$ of $\mathcal{C}$ to the set of cones with apex $c$ over some fixed diagram $D$. We can write $D : \mathcal{J} \to \mathcal{C}$, where $D$ is a diagram of shape $\mathcal{J}$. Then, if $D$ has a limit $\lim D$ in $\mathcal{C}$, we have $\mathrm{Cone}(c, D) \cong \mathrm{Hom}_{\mathcal{C}}(c, \lim D)$. In this example, I guess that $\lim D$ is our "classifying object".
(Although I'm not sure if this is the kind of functor you had in mind!)
I think this example illustrates the notion of representability more generally. I don't know if there is more cultural subtlety to the term "classifying space" than to "representation of a functor". By the definitions I've seen, a classifying object is "just" a representation of a functor. But the term "classifying object/space" seems to be used in some specific cases related to "geometry" (subobject classifier, classifying space of a bundle, classifying topos). I guess experience and practice will tell.
No, the two things are essentially the same. Whenever you look for a classifying object you're really looking to represent a certain functor.
I don't know how much I will say that you already know, but the reason why representability results are so powerful is essentially the idea behind the Yoneda lemma: if $F$ is representable, say by an object $A$, then $F(A)$ is nonempty, and pointed by whatever corresponds to the identity map $\mathrm{id}_A$.
fosco said:
No, the two things are essentially the same. Whenever you look for a classifying object you're really looking to represent a certain functor.
Oh nice, thanks! Then the notion is clearer to me now.
This is a triviality with some self-referential flavour (if $F$ is "built out of" $A$, look at $F(A)$) but it turns out - and I believe this is why the Yoneda lemma is so powerful - that YL can be stated in the form: "$\mathrm{Hom}(-, A)$ determines $A$ uniquely"
is a "universal object" also in the sense that it has a universal property: the pair is initial (if is covariant) or terminal (if is contravariant) in the category of elements of .
...category of elements which has its own universal property, but that's a slightly different story (or rather, still the same story, but for a second moment)
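For reference, the form of the lemma being used here (contravariant case, standard statement):
$$\mathrm{Nat}\big(\mathrm{Hom}_{\mathcal{C}}(-, A),\ F\big) \;\cong\; F(A), \qquad \alpha \;\mapsto\; \alpha_A(\mathrm{id}_A),$$
and taking $F = \mathrm{Hom}_{\mathcal{C}}(-, B)$ shows that a natural isomorphism $\mathrm{Hom}(-, A) \cong \mathrm{Hom}(-, B)$ forces $A \cong B$, which is the "determines $A$ uniquely" statement above.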
now, with classifying spaces you're trying to represent the singular cohomology functors; when you do that, you know how to see cohomology classes as [[cohomology operation]]s
David Egolf said:
It's awesome to hear that you're working on "Sheaves in Geometry and Logic"! I'm currently also working on learning about things relating to sheaves and toposes! (I recently resumed working in this thread: #learning: reading & references > reading through Baez's topos theory blog posts.)
Yes, I am still reading your thread, and looking forward to discuss John's blog series there! I got hyped about toposes to the point that I bought a hard copy of Mac Lane & Moerdijk :)
David Egolf said:
To take a first guess, perhaps in some cases a morphism $X \to A$ is easier to work with than the corresponding element of $F(X)$?
As a second - and related - guess, perhaps in some cases $A$ has some additional structure, which induces some useful structure "pointwise" on $\mathrm{Hom}(X, A)$. This structure could then (hopefully) be carried over to $F(X)$, using our natural isomorphism $F(X) \cong \mathrm{Hom}(X, A)$. Maybe this could be helpful for studying $F(X)$?
Yes! I think we have a simple example of that: in $\mathbf{Set}$, the subobject classifier is $\Omega = \{0, 1\}$, which can be equipped with the structure of a Boolean algebra. Hence these operations transfer to characteristic functions $X \to \Omega$, and then to the subsets (subobjects) of $X$. In other words, subsets of $X$ form a Boolean algebra.
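Spelled out in that notation (with $\wedge$, $\vee$, $\lnot$ computed pointwise in $\Omega$):
$$\chi_{S \cap T} = \chi_S \wedge \chi_T, \qquad \chi_{S \cup T} = \chi_S \vee \chi_T, \qquad \chi_{X \setminus S} = \lnot \chi_S.$$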
I think there might be another example. When $G$ is an abelian group, the classifying space $BG$ is also an abelian group (I read that from John's link). Hence, "characteristic maps" $X \to BG$ also form an abelian group (pointwise addition). But since characteristic maps correspond to $G$-bundles over $X$, this should imply that $G$-bundles over $X$ form an abelian group as well. I find the idea of adding and subtracting topological spaces quite intriguing.
isomorphism classes of $G$-bundles, better :smile:
without "isomorphism classes", the result stays true up-to-isos of a monoidal structure
Classifying stacks, of which classifying spaces of G-bundles in algebraic topology are a homotopical shadow, are hugely important in the generalisation to other settings and with more general input. For instance, moduli stacks in algebraic geometry, which are the beefier and fancier versions of moduli spaces (which often fail to exist in cases of interest). A very famous family of moduli stacks are the stacks of n-pointed genus-g algebraic curves. The stack of elliptic curves is the case n=1 and g=1.
More generally, any "moduli problem" is really about a classifying object of some rich sort.
Here's a very down-to-earth application that shows the power of representing objects.
Say you're a topologist, and you're interested in checking whether two spaces are the same or not. The standard move is to build some invariant of your spaces, and compute the invariant on each of your spaces. In case the invariants differ, that tells you your spaces were different.
Now one of the most famous invariants is the [[cohomology]] of a space. It's famous because we can really compute it in practice, which makes it more useful than other invariants which are harder to compute. Unfortunately, sometimes two different spaces can have the same cohomology groups! (for example, $\mathbb{CP}^2$ and $S^2 \vee S^4$).
But we can push cohomology further! If you look at all the cohomology groups at the same time, they assemble into a graded ring with multiplication, and it turns out the multiplication is also computable in practice! So we can distinguish $\mathbb{CP}^2$ and $S^2 \vee S^4$, since in the former the multiplication is interesting, while in the latter it's the zero map.
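For the record, the standard computation behind this:
$$H^{*}(\mathbb{CP}^2; \mathbb{Z}) \;\cong\; \mathbb{Z}[x]/(x^3), \quad |x| = 2, \quad x \smile x \neq 0,$$
while in $H^{*}(S^2 \vee S^4; \mathbb{Z})$ the square of the degree-2 generator is $0$, even though the groups in each degree agree.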
But then we're sad again, since there can be two spaces with the same cohomology ring that are nonetheless different! But there's another operation on cohomology, called the Massey product, which is like a "multiplication" that eats three elements and combines them into a single element. And as before, there are spaces with the same cohomology ring, but which we can distinguish since they have different Massey products!
Now that you've seen this pattern, it's reasonable to ask how far we can go. Is there a way to understand all of the operations on cohomology?
Enter representability!
We know that the $n$-th cohomology of a space $X$ is representable by an Eilenberg-Mac Lane space called $K(\mathbb{Z}, n)$. That is, we know that $H^n(X; \mathbb{Z}) \cong [X, K(\mathbb{Z}, n)]$. But now we use Yoneda! The multiplication
$$H^n(X) \times H^m(X) \to H^{n+m}(X)$$
can be rewritten, using this isomorphism, as a map
$$[X, K(\mathbb{Z}, n)] \times [X, K(\mathbb{Z}, m)] \to [X, K(\mathbb{Z}, n+m)]$$
(naturally in $X$). So Yoneda tells us that it comes from a map $K(\mathbb{Z}, n) \times K(\mathbb{Z}, m) \to K(\mathbb{Z}, n+m)$.
In fact, Yoneda tells us that operations on cohomology are the same thing as operations on $K(\mathbb{Z}, n)$. So if we want to understand all the operations on all the cohomology groups (which is a huge and complicated problem!) we can instead work on understanding the algebraic structure of this single object $K(\mathbb{Z}, n)$, which is much simpler.
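In a formula (the standard statement, for operations between a single pair of degrees): natural transformations of cohomology functors correspond to homotopy classes of maps between the representing spaces, which in turn are cohomology classes of those spaces:
$$\mathrm{Nat}\big(H^{n}(-; \mathbb{Z}),\ H^{m}(-; \mathbb{Z})\big) \;\cong\; [K(\mathbb{Z}, n),\ K(\mathbb{Z}, m)] \;\cong\; H^{m}\big(K(\mathbb{Z}, n); \mathbb{Z}\big).$$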
In general, this is one of the big things representability buys us. If we have a representable functor $F \cong \mathrm{Hom}(-, A)$, then any structure we put on $A$ automatically gives us that structure on $F(X)$ for every $X$!! So by understanding this single representing object $A$, we're able to get our hands on a whole family of compatible structures on all the $F(X)$ simultaneously.
I hope this helps ^_^
(oh, and I should say that representability is crucial here! These higher operations on cohomology are very well studied. You might want to check out Steenrod algebras if you're interested! It's natural to ask if there's some "dual" version of this story that works on homology, and I don't know of one. The reason for this, in some sense, is that homology isn't representable! So we can't simplify the problem by studying a single representing structure!)
Very nice story, @Chris Grossack (they/them)!
At the risk of making things more scary again - while you were trying to make them less scary - that gizmo is not really a space, but a 'spectrum'. But folks should not run in terror: the point is simply that cohomology gives not just a single group for a space $X$ but a list of groups $H^n(X)$, so we must represent it not by a single space but by a list of spaces, and this is a spectrum.
(Since the different cohomology groups are related, a spectrum can't be an arbitrary list of spaces: they need to get along in some way.)
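For concreteness, one common way to package this (a sketch of the standard definition; there are several equivalent formulations): a spectrum is a sequence of pointed spaces $E_n$ together with structure maps
$$\Sigma E_n \to E_{n+1} \quad (\text{equivalently } E_n \to \Omega E_{n+1}),$$
and when the adjoint maps $E_n \to \Omega E_{n+1}$ are weak equivalences (an "$\Omega$-spectrum"), the represented cohomology theory is $E^{n}(X) \cong [X, E_n]$.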
Anyone who wants a fairly painless intro to spectra and the cohomology theories they represent (called 'generalized cohomology theories') can try my explanation here.
My main example is the one Chris was just talking about. Here the list of spaces involved consists of the spaces called $K(\mathbb{Z}, n)$. They are fascinating and the first few, which are the most important, have very nice descriptions.
I probably should have mentioned that they form a spectrum called the Eilenberg-Mac Lane spectrum $H\mathbb{Z}$.
Chris Grossack (they/them) said:
I hope this helps ^_^
It helps a lot! Thanks for spelling out the story.