You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Over the last few days, the concept of a monad has (finally) started making some sense to me! It feels like this opens up a huge amount of new things that I can learn about now, which is exciting.
By the way, the resources that made things start to intuitively click for me were:
At this point, I'm curious to learn what people use monads for, at a very high level! Here are some of the examples I've seen so far:
Broadly, what are some other ways in which people use monads?
To make burritos! https://emorehouse.wescreates.wesleyan.edu/silliness/burrito_monads.pdf
Monads are used in homological algebra because they contain the necessary data to carry out the "bar construction."
As a first approximation, you can think of homological algebra as being an approach to algebra which allows us to embed ordinary algebra of groups, rings and modules into the broader universe of topological rings, groups and modules. Sometimes you can then replace your complicated group with a topological group which is simpler in some regards.
More concretely, say you have an Abelian group $$A$$ which has lots of torsion in it, which makes it hard to study. Homological algebra suggests trying to replace $$A$$ with a similar "topological group" $$X$$ (a simplicial complex with a group structure) which is equivalent to $$A$$ in some respects, but its group complexity is spread out and stratified through the dimensions of $$X$$. So $$X$$ has vertices, edges, faces and so on; its vertices form a group $$X_0$$, its edges form a group $$X_1$$, its faces form a group $$X_2$$, and so on, and these operations are all compatible with each other. The important thing is that each of the individual groups $$X_n$$ should be relatively simple and better behaved/easier to understand than $$A$$. For example they should all be free Abelian groups. Thus in some sense we're trading off algebraic complexity for the geometric complexity of having a space spread out across multiple dimensions.
Monads can be used in an algorithm called the bar construction, which associates to each algebraic object a "simplicial complex" of algebras which are free in each dimension. The bar construction is fundamental to homological algebra, and many basic concepts of homological algebra can be understood in terms of monads.
Ryan Wisnesky said:
To make burritos! https://emorehouse.wescreates.wesleyan.edu/silliness/burrito_monads.pdf
That was a delightful read, and surprisingly helpful for providing some intuition e.g. for the concept of a strong monad!
Patrick Nicodemus said:
More concretely, say you have an Abelian group $$A$$ which has lots of torsion in it, which makes it hard to study. Homological algebra suggests trying to replace $$A$$ with a similar "topological group" $$X$$ (a simplicial complex with a group structure) which is equivalent to $$A$$ in some respects, but its group complexity is spread out and stratified through the dimensions of $$X$$.
Thus in some sense we're trading off algebraic complexity for the geometric complexity of having a space spread out across multiple dimensions.
Monads can be used in an algorithm called the bar construction which associates to each algebraic object a "simplicial complex" of algebras which are free in each dimension.
That's fascinating! In very broad terms, I guess this example shows that a monad can be used to "distribute" the complexity of an object of interest across multiple simpler things.
I've been having fun looking at the very start of some books on homological algebra recently (e.g. I've been working on trying to understand the concept of a singular p-simplex and the boundary operator over the last few days). Now that I know this very cool "bar construction" exists, that gives me some more motivation to keep learning things in this direction!
Yes, the singular $$p$$-simplex is an example of what I'm talking about here. There is something called the "cone monad" on topological spaces, which associates to each topological space $$X$$ the cone $$CX = (X \times [0,1] \sqcup \{*\})/\sim$$, where $$\{*\}$$ is a singleton space, and $$\sim$$ is the smallest equivalence relation on the space such that $$(x, 0) \sim *$$ for all $$x \in X$$. I will let you work out what the multiplication and unit natural transformations are.
If you apply this to the empty space $$p+1$$ times in a row, you get the $$p$$-simplex. So a singular $$p$$-simplex in a topological space $$Y$$ is constructed by applying the monad to the empty space $$p+1$$ times and then homming it into $$Y$$.
Simplicial methods in topology and homological algebra are very closely tied to monads or monoids in the general sense of monoidal category theory. Anywhere you see one, you are likely to see the other nearby.
The boundary operator is also constructed from the unit natural transformation $$\eta$$ of the monad $$C$$.
(by precomposing with it)
To every adjunction $$L \dashv R$$, the composite $$RL$$ is a monad! Indeed, in the case of algebraic structures, like groups, the free-forgetful adjunction gives exactly the monad describing the algebraic gadget, which you've already mentioned. Now, why does it matter that adjunctions give rise to monads?
Say you have a category $$\mathcal{C}$$ and a functor $$R : \mathcal{C} \to \mathcal{D}$$. A very natural question is to try and understand the image of $$R$$. Well, in nice cases, $$R$$ admits a left adjoint $$L : \mathcal{D} \to \mathcal{C}$$, and this gives us something to work with!
Indeed, every object in the image of $$R$$ is automatically an algebra for the monad $$RL : \mathcal{D} \to \mathcal{D}$$! (Can you see why? As a hint, consider the counit $$\epsilon : LR \Rightarrow 1_{\mathcal{C}}$$). So if we're trying to understand the image of $$R$$ we know that we can restrict attention to the $$RL$$-algebras in $$\mathcal{D}$$!
This is already great, but we can do more! Indeed, every $$Rc$$ admits the structure of an $$RL$$-algebra, and so we can look at the category $$\mathcal{D}^{RL}$$ of $$RL$$-algebras in $$\mathcal{D}$$. We know that $$\mathcal{C}$$ has an adjunction over $$\mathcal{D}$$, and the category $$\mathcal{D}^{RL}$$ also has a free-forgetful adjunction over $$\mathcal{D}$$. Moreover, we've just said that $$R$$ factors through $$\mathcal{D}^{RL}$$, and now we have more structure (the structure of $$RL$$-algebras) to help us understand the image of $$R$$.
There are general theorems which tell us when this lifted functor $$\mathcal{C} \to \mathcal{D}^{RL}$$ is fully faithful (in which case $$R$$ is called a descent morphism) or, in the best case, when it is an equivalence (in which case we call it effective descent). These monadicity theorems are extremely important, since they let us understand the (potentially complicated) category $$\mathcal{C}$$ in terms of its image in the (potentially simpler) category $$\mathcal{D}$$ (as long as we only look at the $$RL$$-structured objects in $$\mathcal{D}$$).
(Also, as an aside: adjunctions are incredibly useful, and in some sense monads allow us to detect adjunctions. Indeed, we know that the composite $$RL$$ will always be a monad. In fact the converse is also true -- every monad is of the form $$RL$$ for some adjunction $$L \dashv R$$!
The Kleisli category and the Eilenberg-Moore category give us two such "factorizations" for our monad! It turns out we have a category's worth of factorizations, and the Kleisli category is initial among the factorizations, while the Eilenberg-Moore category is terminal!)
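As a concrete sketch of "adjunction composite = monad" (my own illustration, not from the thread; it uses the free monoid rather than the free group adjunction, since the free monoid on a set is just the set of lists over it):

```python
# Sketch (not from the thread): the free/forgetful adjunction for monoids.
# L = free monoid on a set (lists with concatenation), R = underlying set.
# The composite RL sends a set of elements to the set of lists over it --
# which is exactly the list monad.

def unit(x):
    """Unit of the adjunction: include a generator into the free monoid."""
    return [x]

def free_extension(f):
    """A function f: X -> M into a monoid (here: lists under concatenation)
    extends uniquely to a monoid homomorphism from the free monoid on X."""
    def hom(xs):
        out = []
        for x in xs:
            out.extend(f(x))
        return out
    return hom

# The monad multiplication RLRL(X) -> RL(X) is the free extension of the
# identity: it flattens a list of lists by concatenation.
mu = free_extension(lambda xs: xs)

print(unit('a'))                    # ['a']
print(mu([['a', 'b'], [], ['c']]))  # ['a', 'b', 'c']
```

This is the same pattern as the "monad for groups" below, just with an algebraic theory simple enough to fit in a few lines.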
In case the category $$P$$ is posetal (i.e. it comes from a partially ordered set), a monad on $$P$$ is a closure operator (a monotone, extensive and idempotent function). One could make a list of how closure operators are used just as long as what's above.
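As a small concrete example (my own, not from the thread): on the poset of subsets of $$\{0, \dots, 9\}$$ ordered by inclusion, "close downward" is such a closure operator, and the three axioms can be checked directly:

```python
# Sketch (my own example): on subsets of {0,...,9} ordered by inclusion,
# downward closure is a closure operator, i.e. a monad on this poset:
# monotone (functoriality), extensive (the unit), idempotent (the
# multiplication is an isomorphism).

def cl(s):
    """Downward closure: add n-1 whenever n is present, repeatedly."""
    s = set(s)
    while True:
        extra = {n - 1 for n in s if n - 1 >= 0} - s
        if not extra:
            return frozenset(s)
        s |= extra

a, b = {5}, {5, 7}
assert cl(a) <= cl(b)      # monotone: a <= b implies cl(a) <= cl(b)
assert a <= cl(a)          # extensive: the unit eta
assert cl(cl(a)) == cl(a)  # idempotent: the multiplication mu
print(sorted(cl({3})))     # [0, 1, 2, 3]
```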
All great answers! And nobody has even talked much about how monads are used in computer languages like Haskell... although I imagine that famous "burrito" explanation is aimed at programmers. (I haven't actually read it, just heard people talk about it!)
I noticed that @David Egolf mentioned the Kleisli category of a monad. He should really get to know its partner in crime, the Eilenberg-Moore category. That's what I like best about monads: a monad on a category $$\mathcal{C}$$ is a way of describing a kind of algebraic gadget that has an underlying object living in $$\mathcal{C}$$, and the category of those algebraic gadgets is the Eilenberg-Moore category of that monad.
For example there's a monad on $$\mathsf{Set}$$ called "the monad for groups", and the Eilenberg-Moore category of that monad is the category of groups.
The Kleisli category of that monad is the category of free groups, and that's how it always works.
I try to call the Eilenberg-Moore category of a monad the category of "algebras" of that monad, because 1) that's what it is, and 2) the phrase "Eilenberg-Moore" is long, uninformative, and initially quite intimidating.
I try to call the Kleisli category the category of "free algebras" of the monad.
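In programming terms (a sketch of my own, not from the thread): Kleisli morphisms for the list monad are functions `X -> list[Y]`, and composing them uses exactly the monad's unit and multiplication -- so the Kleisli category is where "nondeterministic" functions live:

```python
# Sketch (not from the thread): the Kleisli category of the list monad.
# Morphisms X -> Y are ordinary functions X -> list[Y]; composition maps
# the second function over the results of the first, then flattens.

def eta(x):
    return [x]  # unit: X -> list[X], the identity morphisms of the Kleisli category

def kleisli(g, f):
    """Kleisli composite of f: X -> list[Y] and g: Y -> list[Z]."""
    return lambda x: [z for y in f(x) for z in g(y)]

divisors = lambda n: [d for d in range(1, n + 1) if n % d == 0]
pred_succ = lambda n: [n - 1, n + 1]

h = kleisli(divisors, pred_succ)
print(h(5))  # divisors of 4, then divisors of 6: [1, 2, 4, 1, 2, 3, 6]

# The unit is the identity for Kleisli composition:
assert kleisli(eta, divisors)(12) == divisors(12)
assert kleisli(divisors, eta)(12) == divisors(12)
```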
Patrick Nicodemus said:
Yes, the singular p-simplex is an example of what i'm talking about here. There is something called the "cone monad" on topological spaces...
If you apply this to the empty space $$p+1$$ times in a row, you get the $$p$$-simplex. So a singular $$p$$-simplex in a topological space $$Y$$ is constructed by applying the monad to the empty space $$p+1$$ times and then homming it into $$Y$$.
Thanks for introducing me to this "cone monad"! The idea of repeatedly applying the endofunctor of a monad to some particular object to get a sequence of objects is quite interesting. While doing some reading to better understand your answer, I ran across these notes (from a lecture given by John Baez, taken by Derek Wise). From those notes, I found the two images below to be very interesting to compare, and I think they relate closely to the idea of repeatedly applying the endofunctor of a monad:
repeatedly apply "round trip" of an adjunction
special morphisms between parts of a simplex
Comparing these two images in the context of the cone endofunctor, I wonder if the cone monad not only generates simplexes of increasing dimension (by repeatedly applying the cone endofunctor to the empty space), but potentially also provides the face and degeneracy maps relating these simplexes. The cone monad would (potentially) provide these maps in terms of certain parts of the unit and counit of some adjunction that induces the cone monad.
Looking again at the first image above, it seems like we can get a lot of objects, and morphisms between these, by starting with two ingredients:
- an object $$A$$ in a category $$\mathcal{C}$$
- an adjunction $$L : \mathcal{C} \to \mathcal{D}$$, $$R : \mathcal{D} \to \mathcal{C}$$, where $$L$$ is left adjoint to $$R$$

It intuitively seems like these objects and morphisms could be used to generate a category that would contain a lot of information about $$A$$ (in the form of relationships between $$A$$ and objects generated using $$A$$ and an adjunction). Maybe this could be a useful trick if one ever wishes to "zoom in" on objects in a category, and view those objects as categories in their own right.
Perhaps a related idea could also be interesting, where one repeatedly applies the endofunctor of a monad and attempts to generate a category.
David Egolf said:
Looking again at the first image above, it seems like we can get a lot of objects, and morphisms between these, by starting with two ingredients:
- an object $$A$$ in a category $$\mathcal{C}$$
- an adjunction $$L : \mathcal{C} \to \mathcal{D}$$, $$R : \mathcal{D} \to \mathcal{C}$$, where $$L$$ is left adjoint to $$R$$

It intuitively seems like these objects and morphisms could be used to generate a category that would contain a lot of information about $$A$$ (in the form of relationships between $$A$$ and objects generated using $$A$$ and an adjunction).
Congratulations, you've reinvented an important idea! It's best to start by doing the 'abstract' case where you ignore the specific object $$A$$ and even your specific choice of adjunction $$L \dashv R$$, and just think about the so-called 'walking' or 'free-floating' adjunction, devoid of any specific details.
The walking adjunction is a 2-category $$\mathbf{Adj}$$ with two objects $$c$$ and $$d$$, two morphisms $$\ell : c \to d$$ and $$r : d \to c$$, and two 2-morphisms $$\eta$$ and $$\epsilon$$ obeying the usual equations that the unit and counit of an adjunction obey.
Then, any specific adjunction is a 2-functor $$F : \mathbf{Adj} \to \mathbf{Cat}$$.
This maps the abstract objects $$c$$ and $$d$$ to specific categories, and $$\ell$$ and $$r$$ to specific adjoint functors, etc.
The general structure of adjunctions, and all the things you can do with them, is more conveniently studied in the walking adjunction, freed from the clutter of specific details.
Then you can apply that knowledge to specific cases, like the category you're building. (You'll notice that while you were acting as if you were talking about a specific adjunction, you never told us what it is, so you might as well be working abstractly.)
I think the Spring 2007 seminar notes you were reading give a gentle introduction to a lot of these ideas and how they're used in math.
But I may have sidestepped discussion of the walking adjunction $$\mathbf{Adj}$$.
For an introduction to that - and a lot of the same material as in the Spring 2007 notes, viewed from a different angle - try these other seminar notes:
Weeks 4-7 lead up to a treatment of the walking adjunction! Then I go on and use it to study the bar construction.
You can see morphisms in the category you're talking about in pictures like this:
but in the "walking" case, not any concrete case. (Also, this particular picture focuses on what you can do with an object of $$\mathcal{D}$$, not your object $$A$$ of $$\mathcal{C}$$. They're both equally interesting!)
@David Egolf at this point it's useful to start synthesizing your comments with what Chris pointed out: that adjunctions give rise to monads. For any adjunction $$(L, R)$$, $$RL$$ is a monad, and you can choose the unit of the monad to be the unit of the adjunction, and the multiplication would be the natural transformation $$R\epsilon L : RLRL \Rightarrow RL$$.
Dually, $$LR$$ is a comonad, with counit given by the counit of the adjunction, and comultiplication given by $$L\eta R : LR \Rightarrow LRLR$$.
In the notes by Derek Wise you posted, all the important morphisms in those pictures are built out of not just the unit and counit of the adjunction, but more specifically the maps $$\epsilon$$, $$L\eta R$$, and those obtained from them by repeated application of the functor $$LR$$. For this reason I claim that the picture could also be written, after a change of notation, using only the symbols $$LR$$, $$\epsilon$$, and $$L\eta R$$. The theorem "every adjunction can be used to construct simplicial objects" can be broken down into two parts as "every adjunction gives rise to a comonad, and every comonad can be used to construct simplicial objects".
So, to your question "is there an adjunction which can be used to produce the face and degeneracy maps between simplices?": the answer is yes, and the other answers explaining how you can use the Kleisli category and Eilenberg-Moore category to build adjunctions from monads are a hint for how to get there. But those face and degeneracy operators between simplices can be constructed directly from the unit and multiplication of the cone monad, without using this monad <=> adjunction translation.
I wanted to thank everyone for their responses! This has been a lot of fun so far. I'm still working through each answer roughly in order, so it's going to take me a while to properly read the most recent comments.
Chris Grossack (they/them) said:
Say you have a category $$\mathcal{C}$$ and a functor $$R : \mathcal{C} \to \mathcal{D}$$. A very natural question is to try and understand the image of $$R$$. Well, in nice cases, $$R$$ admits a left adjoint $$L : \mathcal{D} \to \mathcal{C}$$, and this gives us something to work with!
Indeed, every object in the image of $$R$$ is automatically an algebra for the monad $$RL : \mathcal{D} \to \mathcal{D}$$! (Can you see why? As a hint, consider the counit $$\epsilon : LR \Rightarrow 1_{\mathcal{C}}$$). So if we're trying to understand the image of $$R$$ we know that we can restrict attention to the $$RL$$-algebras in $$\mathcal{D}$$!
(I added a missing "$" to the above).
Thanks for your interesting comment! I am currently working on understanding this part: "every object in the image of $$R$$ is automatically an algebra for the monad $$RL$$".
To define an algebra, we need two things:
- an object $$d$$ of $$\mathcal{D}$$
- a structure morphism $$RL(d) \to d$$
In our case, the monad endofunctor is $$RL$$. And our object for our algebra is $$Rc$$, an object in the image of $$R$$.
So, we are looking for a morphism from $$RL(Rc)$$ to $$Rc$$. Using the provided hint, we consider the component of the counit at $$c$$, namely $$\epsilon_c : LRc \to c$$. Since $$R$$ is a functor, we can apply it to the morphism $$\epsilon_c$$ to obtain $$R\epsilon_c : RLRc \to Rc$$. So, I am guessing that we get an algebra for each object $$Rc$$ in the image of $$R$$ as follows:
- our object is an object in the image of $$R$$, namely $$Rc$$
- our morphism is $$R\epsilon_c : RLRc \to Rc$$, where $$\epsilon_c$$ is the component at $$c$$ of the counit for the adjunction
However, if this is to actually be an algebra, $$Rc$$ and $$R\epsilon_c$$ need to satisfy a couple of additional conditions. Namely, the "unit" and "composition" diagrams need to commute. This is the point where I start feeling a bit out of my depth, and like I need to work some exercises with these concepts.
(For example, it would probably be good for me to figure out exactly how an adjunction induces a monad. The monad endofunctor is $$RL$$, but to define a monad also requires specifying a unit natural transformation and a multiplication natural transformation.)
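As a concrete sanity check of those algebra diagrams (my own example, not from the thread): for the list monad, an algebra is precisely a monoid, and taking the structure map to be `sum` on integers makes both squares easy to spot-check:

```python
# Sketch (my own example): for the list monad T(X) = lists over X, an
# algebra is an object X with a structure map a: T(X) -> X making the
# unit and composition squares commute. Taking X = integers and a = sum
# recovers the monoid (Z, +, 0).

eta = lambda x: [x]                              # monad unit
mu = lambda xss: [x for xs in xss for x in xs]   # monad multiplication

a = sum  # candidate structure map list[int] -> int

x = 7
xss = [[1, 2], [3], []]

# unit square: a . eta = id
assert a(eta(x)) == x
# composition square: a . mu = a . T(a), where T(a) maps a over a list
assert a(mu(xss)) == a([a(xs) for xs in xss])
print("sum is a list-monad algebra on these samples")
```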
David Egolf said:
So, I am guessing that we get an algebra for each object $$Rc$$ in the image of $$R$$ as follows:
- our object is an object in the image of $$R$$, namely $$Rc$$
- our morphism is $$R\epsilon_c : RLRc \to Rc$$, where $$\epsilon_c$$ is the component at $$c$$ of the counit for the adjunction
That's exactly right!
However, if this is to actually be an algebra, $$Rc$$ and $$R\epsilon_c$$ need to satisfy a couple of additional conditions. Namely, the "unit" and "composition" diagrams need to commute. This is the point where I start feeling a bit out of my depth, and like I need to work some exercises with these concepts.
(For example, it would probably be good for me to figure out exactly how an adjunction induces a monad. The monad endofunctor is $$RL$$, but to define a monad also requires specifying a unit natural transformation and a multiplication natural transformation.)
These sound like excellent exercises! The unit of the monad should sound very similar to the unit $$\eta$$ of the adjunction, so you should get that one quickly. The multiplication should be $$\mu = R\epsilon L : RLRL \Rightarrow RL$$, and you'll want to reuse an idea from your solution to the previous exercise (about an algebra structure on $$Rc$$). The real exercise, though, will be checking that $$\mu$$ is associative and compatible with $$\eta$$. While you're at it, I recommend thinking about exactly what the arrows you're building (and the diagrams you're chasing) mean in the case of simple free/forgetful adjunctions for groups and vector spaces. If you have any other favorite adjunctions (and if you don't, check out these questions: 1, 2) it's worth working out more of these examples to keep building intuition ^_^
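For a concrete warm-up on these exercises (my own sketch, not from the thread), the monad laws can be spot-checked for the list monad, which arises as $$RL$$ from the free/forgetful adjunction for monoids:

```python
# Sketch (my own example): checking the monad laws for the list monad
# on sample data. eta is the unit, mu the multiplication, fmap the
# action of the endofunctor on morphisms.

eta = lambda x: [x]
mu = lambda xss: [x for xs in xss for x in xs]
fmap = lambda f: lambda xs: [f(x) for x in xs]

xs = [1, 2, 3]
xsss = [[[1], [2, 3]], [[], [4]]]

# left and right unit laws: mu . eta = id = mu . fmap(eta)
assert mu(eta(xs)) == xs
assert mu(fmap(eta)(xs)) == xs
# associativity: mu . mu = mu . fmap(mu)
assert mu(mu(xsss)) == mu(fmap(mu)(xsss))
print("monad laws hold on these samples")
```

Of course, a few sample inputs are not a proof -- but chasing why each assertion holds is a good way to see what the commuting diagrams are actually saying.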