You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I was looking into Lenses in the thread Optics and Web Servers, where coalgebras of the store comonad pop up (and also a dual algebraic account).
So that got me wondering. I was asking because the Store functor is such a different animal when it appears as "the comonad whose coalgebras are lenses" and when it appears all by itself. So I thought it may be worth looking for other examples.
All by itself the Store comonad allows one to implement Conway's Game of Life, by making use of the duplicate function to explore the surroundings of a space. Here, for the two functors L = (−) × S and R = (−)^S with L ⊣ R, we have Store_S = L ∘ R, i.e. Store_S A = (S → A) × S. The adjunction that gives rise to the Store comonad is (−) × S ⊣ (−)^S. We have extract as (f, s) ↦ f(s) and duplicate as (f, s) ↦ (s' ↦ (f, s'), s). The first projection of duplicate is thus a function that allows one to create stores with foci placed at different positions.
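The pieces above can be sketched concretely. Below is a minimal, hedged rendering of the Store comonad in Python (the names Store, extract, duplicate, extend mirror the usual Haskell ones; nothing here comes from a library):

```python
from dataclasses import dataclass
from typing import Callable, TypeVar, Generic

S = TypeVar("S")
A = TypeVar("A")

@dataclass(frozen=True)
class Store(Generic[S, A]):
    peek: Callable[[S], A]  # the lookup function S -> A
    pos: S                  # the current focus

def extract(w: Store):
    # counit: read the value at the focus
    return w.peek(w.pos)

def duplicate(w: Store) -> Store:
    # comultiplication: a store of stores, refocusable at any position
    return Store(lambda s: Store(w.peek, s), w.pos)

def extend(f, w: Store) -> Store:
    # "fmap f . duplicate", the usual derived operation
    return Store(lambda s: f(Store(w.peek, s)), w.pos)
```

For example, with `Store(lambda s: s * s, 3)`, extract reads 9, and the first projection of duplicate lets you refocus the same lookup function at any other position.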
In a Game of Life implementation using the Store comonad (there are others, such as this one using Zippers), S is a location on the board, S = ℤ × ℤ, and A = Bool. So a Store instance is a pair of a function ℤ × ℤ → Bool determining for any position whether it is alive or dead, and a position on the board. From that it is easy to create a function that takes a store and looks at all the positions surrounding the focus to determine if in the next generation it will be alive or dead. Call this function next : Store_S Bool → Bool. If we map next over the duplicated store, we have a store which now tells us, at each position, whether the next generation of the cell is alive or dead.
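As a self-contained illustration (my own encoding, not any particular published implementation), here is one Game of Life generation in the Store-comonad style: a board is a lookup function plus a focused cell, the local rule inspects the eight neighbours of the focus, and `step` plays the role of mapping the rule over the duplicated store:

```python
def make_store(peek, pos):
    # a store: a lookup function (int, int) -> bool plus a focused cell
    return {"peek": peek, "pos": pos}

def neighbours(pos):
    x, y = pos
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def next_cell(store):
    # the local rule: look at the 8 positions surrounding the focus
    alive = store["peek"](store["pos"])
    n = sum(store["peek"](p) for p in neighbours(store["pos"]))
    return n == 3 or (alive and n == 2)

def step(store):
    # extend next_cell over the duplicated store: every position gets a
    # refocused copy of the whole board to inspect
    return make_store(lambda p: next_cell(make_store(store["peek"], p)),
                      store["pos"])
```

Running `step` on a horizontal blinker flips it to a vertical one, as expected.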
The lens acts very differently. A lens is a coalgebra of the Store comonad, given by a function α : S → (V → S) × V (called the coaction in "Relating Algebraic and Coalgebraic Descriptions of Lenses"): it takes the state S (of an object) to both a view V on that object (the observation), and a function V → S to change the state (of the whole object) with only V changed. So here we don't really use the duplication to explore the surroundings. We just have the equations shown in the "Relating Algebraic..." article: ε ∘ α = id and Store_V(α) ∘ α = δ ∘ α. Where the Store comonad gives one a way to explore the space, the lens is focused on narrow changes. That is quite easy to see if we use the same types as for the Game of Life, with V = Bool and S = ℤ × ℤ → Bool. It is clear then that the coaction can get very little interesting information out of the Bool view.
To toot my own horn: in section 5.4.1 of my lecture notes there are a few comonads with their coalgebras listed.
Thanks @Paolo Perrone . I find your explanation of the coalgebra for the Stream comonad very helpful.
So the infinite stream comonad arises from the functor X ↦ X^ℕ. If s = (s_0, s_1, s_2, ...) is such a stream then ε(s) = s_0 and δ(s) = (s, (s_1, s_2, ...), (s_2, s_3, ...), ...).
(This is also nicely explained in Bart Jacobs' book "Introduction to Coalgebra", p. 260.)
What you then show very nicely (Bart Jacobs gives too abstract a reason) is how a coalgebra for the stream comonad is constrained by the counit and coalgebra-square equations.
For a coalgebra α : X → X^ℕ, the equations force α(x) to be a sequence of the form (x, f(x), f²(x), ...) for some function f : X → X.
The coalgebra laws force the stream to be constructed iteratively from a seed x by iterated application of f.
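A small sketch of that fact, encoding a stream over X as a function from the naturals to X (my own encoding): the coalgebra built from a step function f sends a seed to the stream of its iterates, and the comonad equations can be spot-checked numerically:

```python
def extract(s):
    # counit: the head of the stream
    return s(0)

def duplicate(s):
    # comultiplication: the stream of successive tails of s
    return lambda n: (lambda m: s(n + m))

def coalgebra(f):
    # the coalgebra induced by a step function f: alpha(x) = (x, f(x), f(f(x)), ...)
    def alpha(x):
        def at(n):
            y = x
            for _ in range(n):
                y = f(y)
            return y
        return at
    return alpha
```

With f(x) = x + 1 and seed 0, the coalgebra produces the stream 0, 1, 2, ..., and the counit law extract(α(x)) = x holds by construction.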
I have a feeling that the coalgebras for a comonad discussed above are those that come in the Eilenberg-Moore category for the given comonad. I can't quite find out where I saw this, but I think those are very closely related to comonads (and then the same for monads and algebras).
Well, Bartosz Milewski had a course on Eilenberg-Moore categories where he showed that the adjunction involving the forgetful functor from the Eilenberg-Moore category of a monad T gives rise to the same monad T that one started with.
Dually, does that mean that these coalgebras have something particularly important to say about the comonad? So that knowing that a lens is the coalgebra of a Store comonad, tells us something especially important about the Store comonad?
Henry Story said:
I have a feeling that the coalgebras for a comonad discussed above are those that come in the Eilenberg-Moore category for the given comonad. I can't quite find out where I saw this, but I think those are very closely related to comonads (and then the same for monads and algebras).
Well, Bartosz Milewski had a course on Eilenberg-Moore categories where he showed that the adjunction involving the forgetful functor from the Eilenberg-Moore category of a monad T gives rise to the same monad T that one started with. Dually, does that mean that these coalgebras have something particularly important to say about the comonad? So that knowing that a lens is the coalgebra of a Store comonad tells us something especially important about the Store comonad?
Do you mean the adjunction with the forgetful functor from the Eilenberg-Moore category of a monad? I think that's how it usually works. It's an interesting question. More generally, for a monad T, there is a category of adjunctions which give rise to T. The Eilenberg-Moore category is the target of an adjunction which is terminal in this category, and the Kleisli category is the target of an adjunction which is initial in this category. For example see Riehl's Category Theory in Context: [image: the category of adjunctions]
Does anybody know if/how this result dualizes to comonads? It's not immediately obvious to me.
That's a good question. Some of this stuff is "formal" and works just as well for monads in any 2-category, so it also works for comonads, which are monads in some dualized version of Cat.
In other words, I'm saying you can define Eilenberg-Moore and Kleisli categories of a comonad to be initial and terminal objects in a certain category that's formally like the category of adjunctions above, but defined in a dualized version of Cat.
(Just don't ask me to say which is initial and which is terminal! And while you're at it, don't ask me which dualized version of Cat I mean: Cat^op or Cat^co.)
I understand that. I think that I will need to put pen to paper to get all the arrows right on this one...
The way I remember it is that the EM-category of a comonad, like the EM-category of a monad, consists of "pictures drawn in" the underlying category. Therefore, it should also be a limit construction, i.e. terminal in some category. Since reversing the 1-morphisms is what turns limits into colimits, we need to reverse the 2-morphisms instead to get a different kind of limit. So it follows that EM-categories of comonads are terminal in something built out of Cat^co.
For the other dual, you can remember that the left adjoint from a category to the Kleisli category of a monad is surjective (in fact, bijective) on objects, so it makes sense to think of it as sort of a "quotient", i.e. a colimit. So Kleisli categories of both sorts are initial, and we get them from EM-objects by turning the 1-morphisms around.
Mike Shulman said:
The way I remember it is that the EM-category of a comonad, like the EM-category of a monad, consists of "pictures drawn in" the underlying category. Therefore, it should also be a limit construction, i.e. terminal in some category. Since reversing the 1-morphisms is what turns limits into colimits, we need to reverse the 2-morphisms instead to get a different kind of limit. So it follows that EM-categories of comonads are terminal in something built out of Cat^co.
What do you mean by pictures drawn in the underlying category?
An algebra for a monad T on C consists of objects, arrows, and commutative diagrams in C.
Specifically, one object, one arrow, and two commutative diagrams.
A less handwavy way of saying this is that by naturality, a functor into the EM-category C^T consists of a functor into C together with a natural structure of T-algebra on each of its values. So since functors into the EM-category have a nice description, it is a limit construction.
Jade Master said:
More generally, for a monad T, there is a category of adjunctions which give rise to T. The Eilenberg-Moore category is the target of an adjunction which is terminal in this category, and the Kleisli category is the target of an adjunction which is initial in this category. For example see Riehl's Category Theory in Context: [Image of Category of Adjunctions]
Thanks very much for that pointer.
To my surprise I was able to follow the first two parts of chapter 5 of Emily Riehl's book. I guess having a good example in mind, and a few questions, helps a lot.
It is a pity that Bart Jacobs' book does not give the category-of-adjunctions view, as that does bring out really well the importance of algebras to monads. So if the Eilenberg-Moore category of algebras for a monad is the final object in the category of adjunctions, that highlights its significance: something to meditate further on.
And the same the other way around, though I guess the question is whether the Eilenberg-Moore category for a comonad is the final or initial object there? (And what that would mean.)
I should go back and listen to Milewski's course on Lenses. He explained that somewhere too.
Mike Shulman said:
A less handwavy way of saying this is that by naturality, a functor into the EM-category C^T consists of a functor into C together with a natural structure of T-algebra on each of its values. So since functors into the EM-category have a nice description, it is a limit construction.
Oh I get it now. Thanks!
Henry Story said:
Jade Master said:
More generally, for a monad T, there is a category of adjunctions which give rise to T. The Eilenberg-Moore category is the target of an adjunction which is terminal in this category, and the Kleisli category is the target of an adjunction which is initial in this category. For example see Riehl's Category Theory in Context: [Image of Category of Adjunctions]
Thanks very much for that pointer.
To my surprise I was able to follow the first two parts of chapter 5 of Emily Riehl's book. I guess having a good example in mind, and a few questions, helps a lot. It is a pity that Bart Jacobs' book does not give the category-of-adjunctions view, as that does bring out really well the importance of algebras to monads. So if the Eilenberg-Moore category of algebras for a monad is the final object in the category of adjunctions, that highlights its significance: something to meditate further on.
And the same the other way around, though I guess the question is whether the Eilenberg-Moore category for a comonad is the final or initial object there? (And what that would mean.)
I should go back and listen to Milewski's course on Lenses. He explained that somewhere too.
Of course, you're welcome :)
So here is something a bit surprising. Because monads are associated with EM categories of algebras, I have associated monads with algebras. But perhaps too hastily. I came across this recently, but in a programming intro to lenses, so I was not sure if I should take it seriously. But now, re-reading Bart Jacobs' 450-page Introduction to Coalgebra, I see he writes on p. 248:
Before seeing examples of monads we immediately illustrate their usefulness in the theory of coalgebras: they introduce sequential composition of coalgebras. The composition will be presented in slightly more general form via Kleisli categories.
Lemma 5.1.2. For a monad T, coalgebras X → T(X) form a monoid, where X is a fixed object.
The proof is the Kleisli composition operation. Then, in the section on Kleisli categories, p. 363:
Kleisli maps X → T(Y) may be seen as generalised coalgebras, with different domain and codomain.
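Jacobs' monoid of coalgebras is easy to sketch for the "maybe" monad T(X) = X + 1, using None for the 1 part (the encoding and names are mine):

```python
def eta(x):
    # unit of the maybe monad: X -> T(X)
    return x

def kleisli(c2, c1):
    # sequential composition of coalgebras: run c1, then c2 if it succeeded
    def c(x):
        y = c1(x)
        return None if y is None else c2(y)
    return c
```

Coalgebras c : X → T(X) on a fixed X then form a monoid: kleisli is associative and eta is its unit, which can be spot-checked on small examples.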
I guess that shows one has to be very careful in what associations one makes...
Henry Story said:
I guess that shows one has to be very careful in what associations one makes...
What do you mean by this?
Also, do you know that the objects of the Kleisli category for a comonad correspond exactly to the cofree coalgebras?
Is this perhaps a mistaken association? @Morgan Rogers (he/him)
So could one say: the category of adjunctions for a monad T starts from the Kleisli category of (generalised) coalgebras (following Bart Jacobs' point above about a Kleisli map X → T(Y) being a coalgebra) and ends with the Eilenberg-Moore category of algebras for T?
@Morgan Rogers (he/him) wrote:
Also, do you know that the objects of the Kleisli category for a comonad correspond exactly to the cofree coalgebras?
I read it, but have not yet integrated that.... I wonder how that fits in with my previous question...
Henry Story said:
So could one say: the category of adjunctions for a monad T starts from the Kleisli category of (generalised) coalgebras (following Bart Jacobs' point above about a Kleisli map X → T(Y) being a coalgebra) and ends with the Eilenberg-Moore category of algebras for T?
Ah, it sounds like you might be confused about algebras/coalgebras. The story is dual for the two concepts:
Every monad T on a category C has an associated EM category C^T consisting of the algebras for that monad, and the full subcategory on the free algebras constitutes the Kleisli category C_T (this is interesting because the usual presentation of the Kleisli category doesn't make it so obvious that you can identify the objects with free algebras). Both the Kleisli and EM categories admit a canonical adjunction with C in which the right adjoint is the "forgetful functor" with codomain C, and both adjunctions induce the monad T. In the category of adjunctions inducing T, these are the initial and terminal objects, respectively.
Dually, any comonad W on a category C has an associated (co)EM category consisting of the coalgebras for that comonad, and the full subcategory on the cofree coalgebras constitutes the coKleisli category. Both the coKleisli and coEM categories admit a canonical adjunction with C in which the left adjoint is the "forgetful functor" with codomain C, and both adjunctions induce the comonad W. In the category of adjunctions inducing W, these are still the initial and terminal objects, respectively.
(remembering that the Kleisli/coKleisli categories consist of the free/cofree things and so admit an inclusion into the EM/coEM categories is a good way of remembering that the former are the initial objects in the categories of adjunctions)
Thanks @Morgan Rogers (he/him). That is approximately the picture I had in mind after reading Emily Riehl's "Category Theory in Context", chapters 5.0 to 5.2.
Then I read the passage by Bart Jacobs I cited above, where it looks like Kleisli categories are coalgebras (which is why I was surprised, I guess).
Perhaps it a situation where one can see it both ways? I think it is the case that initial algebras have morphisms going both ways, so that initial algebras can be seen as coalgebras too. (The same happens for final coalgebras too, which can be seen as algebras). Perhaps that is the situation here too?
The reason for those observations is that there is a notion of "algebra/coalgebra" for an endofunctor (ignoring the extra monad/comonad structure), and annoyingly this seems to be the notion that is more common in computer science contexts:
for an endofunctor F, an algebra in this sense is an object X equipped with a morphism F(X) → X, while a coalgebra is an X equipped with a morphism X → F(X), with no extra conditions. Bart Jacobs is using the term "coalgebra" in this sense, since for a typical monad there is no way to talk about coalgebras in a more structured sense.
cf https://ncatlab.org/nlab/show/coalgebra+for+an+endofunctor
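A tiny concrete instance of these bare definitions, for the endofunctor F(X) = 1 + A × X with A = int, encoding F(X) as either None (the 1 part) or a pair (this is my own example, not from the nLab page):

```python
def alg(t):
    # an F-algebra F(int) -> int: fold one layer into an integer
    return 0 if t is None else t[0] + t[1]

def coalg(n):
    # an F-coalgebra int -> F(int): unfold one layer of a countdown
    return None if n == 0 else (n, n - 1)

def hylo(n):
    # tear down coalg's unfolding with alg (a "hylomorphism" in FP jargon)
    t = coalg(n)
    return alg(None if t is None else (t[0], hylo(t[1])))
```

Neither map satisfies (or needs to satisfy) any monad or comonad law, yet together they compute 5 + 4 + 3 + 2 + 1.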
Ah yes, that is the definition of coalgebra that Bart Jacobs works with in his book "Introduction to Coalgebra". He starts with the simple definition of a coalgebra of an endofunctor F as a pair of an object X and a morphism X → F(X), looks at categories thereof, with initial and final objects. Then later he looks at adding the extra conditions needed to speak of coalgebras of a comonad or algebras of a monad.
Monads give the ability to sequentially compose coalgebras, so looked at like that it does appear that the Kleisli category is one of such composable coalgebras.
It's a shame this is how the terminology worked out, it seems to leave a lot of room for confusion, and I personally don't think that the notion of (co)algebra for an endofunctor fundamentally captures anything algebraic without the extra conditions.
@Morgan Rogers (he/him) I don't think this terminological difference has much impact on this question, since in a Kleisli category we do have the extra monadic structure. A monad is an endofunctor with extra structure. A morphism in the Kleisli category of a monad T is a function which in the underlying category is X → T(Y), which has the structure of a (generalised) coalgebra, perhaps a composition of a strict coalgebra with a map of a morphism inside the monad T. Similarly, a morphism in the Kleisli category for a comonad W would have the structure in the underlying category of W(X) → Y, which would be an algebra-type morphism.
Right, but what exactly makes a generic morphism coalgebraic in nature?
The name "algebra" for objects of EM-categories presumably comes from the fact that for most of the motivating examples, these are exactly the algebraic objects we expect. For the free group monad, for example, the EM category is equivalent to the category of groups. Treating this as an endofunctor, however, a generic morphism F(X) → X (an "algebra" for this endofunctor) ignores the extra data of the monad which, to me, encodes the algebraic aspects.
Henry Story said:
A monad is an endofunctor with extra structure.
there is a notion of "algebra/coalgebra" for an endofunctor (ignoring the extra monad/comonad structure), and annoyingly this seems to be the notion that is more common in computer science contexts
This has confused me as well. Are there any contexts in which one would care about such a (co)algebra where the functor doesn't have the structure of a (co)monad? i.e. are non-monadic algebras actually interesting?
Are there any contexts in which one would care about such a (co)algebra where the functor doesn't have the structure of a (co)monad?
One of the early examples in Bart Jacobs' book is coalgebras for the functor F(X) = 1 + A × X, from which one can build a coalgebra c : X → 1 + A × X. (Now I think the disjunction already means that it cannot be a comonad, because there would be nothing to extract for * : 1.) The coalgebra gives a state machine: you give it a state x and, applying c, you either get no further state, just a *, or you get a pair of an a : A that can be observed and a further state that can be plugged back into c to get the next observation/state or end. This is a state machine that potentially goes on for ever. In the category of such coalgebras the final coalgebra is A^∞, the set of finite and infinite sequences over A.
An instance of A^∞ could be a finite or infinite sequence. Clearly it cannot be constructed in one go, and it can only be viewed in pieces. Perhaps it is a web service that returns a word or a 404, or it is an interface implemented by some other company. In any case, when one has such a sequence one can apply the coalgebra map, which will allow one to find out if it ended, or get a pair of some observable value and the next potentially infinite sequence.
If A = {0, 1, ..., 9} then the final coalgebra can represent real numbers between 0 and 1 in decimal notation. The other (non-final) coalgebras are ways of stepping through such a number, seen more as a state machine.
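Such a coalgebra really is a runnable state machine. A sketch (my own encoding): `step` returns None on termination (the 1 part) or an (observation, next state) pair, and the digit example below steps through the decimal expansion of a fraction, a potentially infinite behaviour observed one piece at a time:

```python
def observe(step, state, k):
    # run a coalgebra X -> 1 + A * X for at most k steps, collecting observations
    out = []
    for _ in range(k):
        r = step(state)
        if r is None:
            break  # a finite element of the final coalgebra A^infinity
        a, state = r
        out.append(a)
    return out

def digits(state):
    # decimal digits of p/q: emit the next digit and carry the remainder
    p, q = state
    return None if p == 0 else ((10 * p) // q, ((10 * p) % q, q))
```

For 1/4 the machine halts after two digits; for 1/3 it would run for ever, so we can only ever view a finite prefix.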
Nick Smith said:
there is a notion of "algebra/coalgebra" for an endofunctor (ignoring the extra monad/comonad structure), and annoyingly this seems to be the notion that is more common in computer science contexts
This has confused me as well. Are there any contexts in which one would care about such a (co)algebra where the functor doesn't have the structure of a (co)monad? i.e. are non-monadic algebras actually interesting?
Yes. But I hope it's already clear: even if a functor has the structure of a monad, an "algebra of the functor" is different from an "algebra of the monad". They're just different concepts. The distinction between them, and some of their relationships, are explained here:
https://ncatlab.org/nlab/show/algebra+for+an+endofunctor
Unfortunately no examples in this article. But as you hinted earlier, there are tons of examples from computer science. Henry explained one.
There is this very nice article by J.J.M.M. Rutten from 2000, "Universal Coalgebra: A Theory of Systems", which gives many examples of coalgebras.
The nLab wiki page John Baez just posted itself links to a page on coalgebras which points to a couple of blog posts by @David Corfield, among them "Coalgebraically Thinking". He gives an overview of Bart Jacobs' book, and points out that the functor F(X) = 1 + A × X I mentioned above can also give a category of algebras, whose initial algebra is known in computing as lists. (I was actually looking for that just earlier.) These are finite structures and have to be fully constructed first.
Actually it's interesting that for the final coalgebra for F, call it A^∞: being final, its structure map A^∞ → 1 + A × A^∞ has an inverse (which is therefore an algebra 1 + A × A^∞ → A^∞) and which constructs a stream,
allowing one to create the empty stream, or, if you already have a (potentially infinite) stream and an a : A, to build a slightly longer stream. :-)
Thanks to you both. I haven't had time to study these topics properly yet, so some of the details are still hard to absorb.
I hope it's already clear: even if a functor has the structure of a monad, an "algebra of the functor" is different from an "algebra of the monad". They're just different concepts.
Right, but according to the article and earlier discussion "an algebra of the functor" is equivalent to "an algebra of the free monad generated by the functor", so it seems that even if a T-algebra for a monad T isn't the same as the F-algebra for the functor underlying the monad T, an F-algebra is still ultimately a T'-algebra for a different monad T'. (As long as F is a functor capable of generating a free monad T'.) Is this correct? So we still end up with an "algebra over a monad", even if we weren't trying to define one.
I think the disjunction already means that it cannot be a comonad, because there would be nothing to extract for *: 1.
F(X) = 1 + A × X is a polynomial functor, so it has a corresponding cofree comonad, according to David Spivak and Nelson Niu (their book on Poly).
There is a really long discussion about the relation of algebras and coalgebras in two blog posts from 2008 by David Corfield on The Status of Coalgebra, and the question of the purely functorial vs monadic difference comes up.
Proposition 2.5.3 in "Introduction to Coalgebra", p. 97:
For a finite Kripke polynomial functor F, the forgetful functor CoAlg(F) → Sets has a right adjoint. It sends a set A to the carrier of the final coalgebra of the functor F(−) × A.
(A Kripke polynomial functor is one built out of the identity functor, products, coproducts, exponentials, and the powerset.)
That is a bit weird because we have a different functor, F(−) × A, appearing in the adjunction: but it is OK, since it only appears in the carrier. So after the proof of the above proposition Jacobs goes on to write:
the unit η is a homomorphism from a coalgebra to the cofree coalgebra on its carrier. It is obtained by finality. And the counit ε maps the carrier of a cofree coalgebra back to the original set.
So I think that could be the relation @Nick Smith was speaking about between coalgebras, some free construction on them, and monads. But I think that just shows why the minimal definition of a (co)algebra of a functor, without the monad, is useful, as it allows one to show exactly where the different pieces of the puzzle come together.
I'm not saying it's not useful, I'm saying I don't see how it captures any notion of "(co)algebraic structure". Basically, I think it was a bad choice of name.
Assuming that the Eilenberg-Moore notion came first, you can see how this name arose, but it's a shame that it stuck.
Nick Smith said:
Thanks to you both. I haven't had time to study these topics properly yet, so some of the details are still hard to absorb.
I hope it's already clear: even if a functor has the structure of a monad, an "algebra of the functor" is different from an "algebra of the monad". They're just different concepts.
Right, but according to the article and earlier discussion "an algebra of the functor" is equivalent to "an algebra of the free monad generated by the functor", so it seems that even if a T-algebra for a monad T isn't the same as the F-algebra for the functor underlying the monad T, an F-algebra is still ultimately a T'-algebra for a different monad T'. (As long as F is a functor capable of generating a free monad T'.) Is this correct? So we still end up with an "algebra over a monad", even if we weren't trying to define one.
This is true in many well-behaved settings, but it's not always possible to generate an algebraically free (co)monad from an endofunctor.
And maybe this goes without saying, but even in the (very common) situation where one has algebraically-free monads, the notion of algebra for an endofunctor is still important. For instance, without that notion we can't even define what "algebraically-free" means.
Nick Smith said:
John Baez wrote:
I hope it's already clear: even if a functor has the structure of a monad, an "algebra of the functor" is different from an "algebra of the monad". They're just different concepts.
Right, but according to the article and earlier discussion "an algebra of the functor" is equivalent to "an algebra of the free monad generated by the functor", so it seems that even if a T-algebra for a monad T isn't the same as the F-algebra for the functor underlying the monad T, an F-algebra is still ultimately a T'-algebra for a different monad T'. (As long as F is a functor capable of generating a free monad T'.) Is this correct?
This is correct except - sorry, I'm about to make a pedantic point but I think it's important - you shouldn't just say "the free monad generated by the functor", since - as you later note - such a monad may not exist.
In math we use "the" when something exists and is unique, so e.g. "the largest prime number" is not something one should ever say (since we know it does not exist), or "the largest perfect number" (since we don't know if it exists). This is also true when the possibly existent entity depends on a variable, e.g. "the smallest perfect number greater than n" or "the free monad generated by the functor F".
The usual thing to do in cases where uniqueness is known but not existence is to add the phrase "if it exists":
"the largest prime number, if it exists"
"the smallest perfect number greater than n, if it exists"
"the free monad generated by the functor, if it exists".
Anyway, returning to the practical point: while we can sometimes "reduce" the study of algebras of an endofunctor to the study of algebras of a monad that it generates, I don't know of cases where this actually makes things simpler or easier. But I'm not at all an expert on this subject, so maybe it does help sometimes!
What if you're in a broader framework where you're already working with (co)algebras of (co)monads? I'm thinking of Poly, where you have the categories Cat♯ and Poly which nicely capture polynomial comonads, maps between them, coalgebras/comodules over them, monoidal structures on them, limits, etc. If you want to study coalgebras of (polynomial) endofunctors in this setting, it seems like looking at their cofree comonads is the right way to go about that? Otherwise you'd be carving out a separate island.
This is the motivation for me thinking about this whole endofunctor/free monad distinction in the first place. “Algebra of an endofunctor” is a common meme in functional programming when talking about recursion schemes, and I want to understand how to think about these concepts in Poly.
If @David Spivak were here he'd tell you... maybe someone here knows. I'm avoiding learning about Poly, which is really hard now because I'm at the Topos Institute and he's teaching a course on polynomial functors, and writing a book about them, and talking to everyone about them... but I need to think about thermodynamics and other stuff.
I did already quiz David a little, but he’s a busy guy. Maybe I’ll get a chance to ask during the Poly course. It’s a shame I can’t attend in person.
You might ask Nelson Niu, who is writing the book with David. He's probably less overwhelmed with demands on his time.
On page 272 of "Category Theory", 2nd edition, Steve Awodey writes:
Finally, we ask, what is the relationship between algebras of endofunctors and algebras for monads? The following proposition, which is a sort of "folk theorem", gives an answer.
Proposition 10.14: Let the category C have finite coproducts.
Given an endofunctor P : C → C, the following conditions are equivalent:
1. The P-algebras are the algebras for a monad. Precisely, there is a monad (T, η, μ) and an equivalence between the category of P-algebras and the category of algebras for the monad. Moreover, this equivalence preserves the respective forgetful functors to C.
2. The forgetful functor P-Alg → C has a left adjoint.
3. For each object X of C, the endofunctor P(−) + X has an initial algebra.
I have always felt Kleisli to be a bit mysterious, but I trusted the mathematicians that it worked out. But as I now wanted to really understand the category of adjunctions for a (co)monad, I studied @Paolo Perrone's very helpful lecture notes and even worked on exercises 5.1.21 and 5.1.22, and found typos here and there :-) . Awodey only has half a (very clear) page on Kleisli (p. 277 of Category Theory). But both say that the objects of a Kleisli category are the same as those of the category C for a monad (triple) T. But my feeling is that really Kleisli is collapsing all the T's for an object. So that the left adjoint functor of the initial adjunction takes X, T(X), T²(X), ... all to the same object, as shown in this diagram I tried to draw out. So my guess is that one could say they are the same objects, but after a whole bunch of them have been merged with an equivalence relation?
Diagram of Functor to Kleisli
This is what Awodey writes:
This category is called the Kleisli category of the adjunction, and is defined as follows:
- the objects are the same as those of C, but written X_T,
- an arrow X_T → Y_T is an arrow f : X → T(Y) in C,
- the identity arrow on X_T is the arrow η_X : X → T(X) in C,
- for composition, given f : X_T → Y_T and g : Y_T → Z_T, the composite g ∘ f is defined to be μ_Z ∘ T(g) ∘ f, as indicated in the following diagram...
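Awodey's definition can be rendered directly for the list monad T(X) = List(X) (a hedged sketch of mine, not his code): a Kleisli arrow X_T → Y_T is a function X → list, the identity is η, and composition is μ ∘ T(g) ∘ f:

```python
def eta(x):
    # unit: X -> T(X)
    return [x]

def mu(xss):
    # multiplication: T(T(X)) -> T(X), i.e. flatten one level
    return [x for xs in xss for x in xs]

def compose(g, f):
    # Kleisli composite g . f : X -> T(Z), defined as mu . T(g) . f
    return lambda x: mu([g(y) for y in f(x)])
```

The monad laws then say exactly that eta is the identity arrow and compose is associative, which can be spot-checked.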
So my thinking is that if one has a functor that maps η_X : X → T(X) to the identity on X in the Kleisli category, then it has to identify the objects X and T(X); but then, since identities compose, it would need all the iterates T^n(X) to collapse to X.
Henry Story said:
But both say that the objects of a Kleisli category are the same as those of the category C for a monad (triple) T.
That's right. In our minds, an object of the Kleisli category is a free algebra for the monad T. This will be the free algebra on some object X. But it turns out to be very convenient to simply say this object of the Kleisli category is X itself! Then what makes the Kleisli category different from C is that we change what the morphisms are.
But my feeling is that really Kleisli is collapsing all the T's for an object. So that the left adjoint functor of the initial adjunction takes X, T(X), T²(X), ... all to the same object, as shown in this diagram I tried to draw out.
No, that's not true. It's just not true, usually, that F(X) ≅ F(T(X)) for the monad T = U ∘ F coming from an adjunction with left adjoint F. Furthermore this is not a good way to think about what's going on here. That is: not only is this technically false, it's morally false.
Ok. I need to start from scratch and try again... Perhaps I'll start with the objects from C and draw the Kleisli arrows on top in a different color...
I suggest taking some adjunction whose monad you understand, and working out its Kleisli category according to the standard definition of Kleisli category. For me, being a mathematician, the best example is the adjunction between groups and sets. Then I can take the definition of Kleisli category, turn the crank, and see that the Kleisli category is equivalent to the category where the objects are free groups and the morphisms are group homomorphisms.
The "trick" is that instead of talking about "the free group on the set X", the standard definition of Kleisli category just says "X". That is, the objects in this particular Kleisli category are just sets!
But the morphisms from a set X to the set Y in this Kleisli category are not functions from X to Y, but rather homomorphisms from the free group on X to the free group on Y.
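The free group example is easier to run if one swaps in the free monoid monad (a deliberate simplification on my part): a monoid homomorphism between free monoids is determined by where the generators go, i.e. exactly by a Kleisli map:

```python
def hom(k):
    # extend a Kleisli map k : X -> List(Y) to a monoid homomorphism
    # List(X) -> List(Y): substitute for each generator and concatenate
    return lambda xs: [y for x in xs for y in k(x)]

def compose(k2, k1):
    # Kleisli composition, phrased as composing the induced homomorphisms
    return lambda x: hom(k2)(k1(x))
```

So a Kleisli arrow X → T(Y) and a homomorphism Free(X) → Free(Y) carry the same data, which is John's point with groups in place of monoids.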
Ah yes. Paolo has an example, the power set monad, for which the Kleisli category from Set is equivalent to the category of sets and relations (which is neat). So yes, forgetting that those new morphisms are no longer plain functions is perhaps what was confusing me.
Algebras for the power set monad are suplattices, so the Kleisli category for this consists of free suplattices and suplattice morphisms between these. The free suplattice on a set $X$ is the power set of $X$, $P(X)$. The homomorphisms of suplattices from $P(X)$ to $P(Y)$ happen to correspond to relations between $X$ and $Y$.
However, unless you are comfortable with suplattices, this example can be confusing.
To me, as a mathematician, monads on Set describe "algebraic gadgets" like groups, abelian groups, vector spaces over your favorite field, rings, suplattices, etc....
But I learned about groups long before I learned about suplattices, so groups seem like a better example when I'm trying to test out ideas. (Computer scientists will probably prefer other examples, like monoids: the free monoid on $X$ is the set of lists of elements of $X$.)
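Computer scientists can also run this last example directly: below is a minimal Python sketch (my own illustration, not from the thread) of the free monoid / list monad, where a Kleisli arrow is a function `X -> List(Y)` and composition maps and then flattens.

```python
def unit(x):
    # eta: X -> List(X), wrap a value as a one-element list
    return [x]

def kleisli_compose(f, g):
    # given f: X -> List(Y) and g: Y -> List(Z), build the Kleisli
    # composite X -> List(Z): apply f, map g, then flatten (the mu)
    return lambda x: [z for y in f(x) for z in g(y)]

# hypothetical example arrows (made up for illustration)
letters = lambda n: ["a"] * n          # int -> List(str)
doubled = lambda s: [s, s.upper()]     # str -> List(str)

print(kleisli_compose(letters, doubled)(2))  # ['a', 'A', 'a', 'A']
```

The flattening step is exactly where the monad multiplication does its work; without it the composite would land in lists of lists.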
And also, the fact that suplattice morphisms from $P(X)$ to $P(Y)$ correspond to relations from $X$ to $Y$ is a special feature of this case that takes a bit of thought to intuit - so this fact could be a bit distracting if you're trying to wrap your brain around the general idea of Kleisli category.
I don't have any trouble using monads in programming, where I use them all the time, and from afar Kleisli looks very simple. It's more about trying to see how to draw these out to make sure I really understand. :-)
For me the relation of the power set monad to $\mathsf{Set}$ is very interesting because functional programmers seem to live in $\mathsf{Set}$ and it seems that RDF folks are in the power set Kleisli category of $\mathsf{Set}$ (according to Evan Patterson's "Knowledge Representation in Bicategories of Relations" paper).
It's very interesting, but I can't see it as the royal road to understanding Kleisli categories in general: it has too many special features.
To understand the Kleisli category of a monad you need to understand the algebras of that monad, and then the free algebras of that monad.
ah. that's interesting.
As I mentioned, the algebras of the power set monad are suplattices. So if you don't already know and love suplattices, this example is not good for you!
At least, not good as a starting-point for learning about Kleisli categories.
The simplest I know is the Option monad, whose Kleisli category is that of sets and partial functions, I believe.
What are the algebras of this monad?
(It took me a minute to figure it out - this is not my favorite monad. But it could be a good example, because its algebras are pretty easy to understand.)
A map $X + 1 \to X$ with the right equations. But it will take me a lot of time to figure out step by step what the algebras of that monad are. But that should be easy to read up on.
The Option Monad is used a lot in programming.
It's better to figure out yourself what the algebras of the monad are. You can just guess...
When would you have a set $X$ with a map $X + 1 \to X$ satisfying some equations?
I can answer if you want.
Ah, so you need either the $1$ to map to a special element or the $X$ to map to $X$. I remember now: pointed sets.
Right!
A map $X + 1 \to X$ is the same as a map $X \to X$ and a map $1 \to X$. Every set $X$ has a god-given map $X \to X$, namely the identity. "The right equations" - the definition of an algebra of the monad - wind up saying we have to use this map $X \to X$. So the only choice we get to make is the map $1 \to X$. And this is a point of $X$.
So an algebra of this monad is a pointed set.
And: the free pointed set on the set $X$ is $X + 1$.
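That free pointed set $X + 1$ is exactly the programmer's Option/Maybe. A hedged Python sketch (using `None` for the extra point; names are my own) shows how Kleisli composition for this monad behaves like composition of partial functions:

```python
# Option monad sketch: T(X) = X + 1, modelled as "a value or None".
def unit(x):
    # eta: X -> X + 1 is just the inclusion of X
    return x

def kleisli_compose(f, g):
    # f: X -> Y + 1 and g: Y -> Z + 1; the composite is defined
    # only where both partial functions are defined
    def h(x):
        y = f(x)
        return None if y is None else g(y)
    return h

# hypothetical partial functions (made up for illustration)
half = lambda n: n // 2 if n % 2 == 0 else None   # defined on evens
recip = lambda n: 1 / n if n != 0 else None       # defined away from 0

print(kleisli_compose(half, recip)(4))  # 0.5
print(kleisli_compose(half, recip)(3))  # None (half undefined)
print(kleisli_compose(half, recip)(0))  # None (recip undefined)
```

The `None`-propagation in `kleisli_compose` is the multiplication $X + 1 + 1 \to X + 1$ collapsing the two failure points into one.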
And pointed sets can be represented as the coslice category $1/\mathsf{Set}$ if I remember correctly.
Yes.
But here's a sad feature of this example: every algebra of this monad is isomorphic to a free algebra! In other words, every pointed set is isomorphic to a pointed set of the form $X + 1$ for some set $X$.
I feel this makes it hard to appreciate the special features of the Kleisli category, which is the category of free algebras, and how it's different from the Eilenberg-Moore category, which is the category of all algebras.
The example of groups doesn't suffer from this problem: free groups are very special, most groups aren't isomorphic to free groups.
Mhh. Other monads we come across in programming are the Either monad ($X \mapsto X + E$), the list monad (which I think of via $Y \mapsto 1 + X \times Y$), the state monad ($X \mapsto (S \to X \times S)$), the continuation monad (but that is probably even more difficult than powerset, and difficult to understand even though widely used in JavaScript; see continuation monad), ... Is there one that is similar to groups in that it appears differently in the Kleisli and EM categories?
(I had been reading up on groups and their relation to the Rubik's cube...)
The continuation monad has ultrafilters - morphisms from propositions 𝓟(X) to the final algebra 𝟮 - as a special case (C=2). Wondering about this led me to Odersky's 2010 article on continuations in #Scala. The first 3 sections are very clear. https://infoscience.epfl.ch/record/149136/files/icfp113-rompf.pdf https://twitter.com/bblfish/status/947041248975687680/photo/1
I thought that the list monad was $X \mapsto \coprod_{n \ge 0} X^n$, i.e. $\mathrm{List}(X)$. The endofunctor $Y \mapsto 1 + X \times Y$ (which I don't think is a monad) is the one whose initial algebra (in the endofunctor sense) is $\mathrm{List}(X)$.
It's interesting that the top five monads any computer scientist (or at least Henry) will list as easily understood examples are disjoint from the top five monads any mathematician (or at least me) would list.
With the arguable exception of the list monad - and even there we have an entirely different name for the list monad (the free monoid monad).
Mike Shulman said:
I thought that the list monad was $X \mapsto \coprod_{n \ge 0} X^n$, i.e. $\mathrm{List}(X)$. The endofunctor $Y \mapsto 1 + X \times Y$ (which I don't think is a monad) is the one whose initial algebra (in the endofunctor sense) is $\mathrm{List}(X)$.
Yes, you are right. I was putting those monads together from memory just before going to bed. What is $\coprod_{n \ge 0} X^n$? Ah, it is $\mathrm{List}(X)$, the free datatype, recursively defined. Bart Jacobs often presents the algebraic view of List in his book on coalgebras.
There is a catalogue of standard monads on the Haskell wiki. I think I got most of them. The monads that really made me click (coming as I did from Java 10 years ago) were the Scala Future "monad" (which is not quite a monad) and the Free monad.
I am - by slow immersion reading these math books - learning about the mathematical monads too. Hopefully when I have the mapping math <-> programming clarified, I will start feeling the urge to use them too :-)
I know John Baez said I should not do this, but while he is asleep let me try it out. Let's hope that it works out better than for Mickey Mouse...
So because of the link to RDF the Powerset monad actually makes a lot of sense to me.
John wrote above
Algebras for the power set monad are suplattices, so the Kleisli category for this consists of free suplattices and suplattice morphisms between these. The free suplattice on a set $X$ is the power set of $X$, $P(X)$. The homomorphisms of suplattices from $P(X)$ to $P(Y)$ happen to correspond to relations between $X$ and $Y$.
So I'll try out Awodey's definition on it.
Is it not a bit odd to say that "homomorphisms of suplattices happen to correspond to relations"? Is not everything in mathematics necessary? How could there be an accident here?
(Ah I see from nLab: perhaps the fact that a suplattice is also a lattice is accidental.)
It's the same way that frames happen to be Heyting algebras, but maps of frames don't necessarily preserve the Heyting structure.
This is what Awodey writes:
This category is called the Kleisli category of the adjunction, and is defined as follows:
So we are taking $T$ to be the power set functor $P$.
- the objects are the same as those of $\mathbf{C}$, but written $A_T$, $B_T$, ...
Ok, that makes sense, if I jump right to the conclusion that the Kleisli category for the power set monad is the same as the category of sets and relations. The objects are just sets. ((So I guess we can also have power sets of power sets, products and sums, ... in $\mathsf{Set}_T$?))
- an arrow $f_T : A_T \to B_T$ is an arrow $f : A \to TB$ in $\mathbf{C}$,
I think this is what is confusing as a definition, because how can the arrows in two categories be the same if they don't have the same domain or codomain? I guess that what is really meant is, implementation-wise, if one has $f : A \to TB$ in the category $\mathbf{C}$ then one can think of it as an arrow $f_T : A_T \to B_T$ in the Kleisli category. And indeed if $A$ is people and $B$ is cats then a function $A \to P(B)$ takes a person to the cats that they own, including possibly none, which makes it a relation, since we can have one-to-many and one-to-none.
- the identity arrow $1_{A_T} : A_T \to A_T$ is the arrow $\eta_A : A \to TA$ in $\mathbf{C}$,
Again, this only makes sense implementation-wise. I can consider the function $\eta_A : A \to P(A)$ to be the identity relation, which takes each person $p$ to $\{p\}$. Otherwise the talk of "is the arrow" is weird, again since the domain and codomain are different in $\mathbf{C}$.
- for composition, given $f_T : A_T \to B_T$ and $g_T : B_T \to C_T$, the composite $g_T \circ f_T$ is defined to be $\mu_C \circ T(g) \circ f$, as indicated in the following diagram...
So here if we have $\mathrm{knows} : \mathrm{People} \to P(\mathrm{People})$ and we apply it twice to some person $p$ in the set $\mathrm{People}$ we would end up in $P(P(\mathrm{People}))$, so we apply $\mu$ to get back to $P(\mathrm{People})$. And that would give us the friend-of-a-friend relation in the Kleisli category $\mathsf{Set}_T$.
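That friend-of-a-friend composite can be run directly. Here is a toy Python sketch (the data and names are made up for illustration) where a Kleisli arrow for the power set monad is a set-valued function, and composition is exactly relation composition:

```python
# Power set monad sketch: a Kleisli arrow A -> P(B) is a set-valued
# function, i.e. a relation between A and B.
def kleisli_compose(f, g):
    # map g over the set f(a), then flatten with a union (the monad's mu)
    return lambda a: set().union(*[g(b) for b in f(a)])

# a toy "knows" relation on people
knows = {"alice": {"bob"}, "bob": {"carol", "dan"}, "carol": set()}
knows_fn = lambda p: knows.get(p, set())

# friend of a friend = the Kleisli composite of knows with itself
foaf = kleisli_compose(knows_fn, knows_fn)
print(sorted(foaf("alice")))  # ['carol', 'dan']
```

The `set().union(...)` is the flattening $P(P(X)) \to P(X)$ from the message above.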
Henry Story said:
This is what Awodey writes:
This category is called the Kleisli category of the adjunction, and is defined as follows:
So we are taking $T$ to be the power set functor $P$.
- the objects are the same as those of $\mathbf{C}$, but written $A_T$, $B_T$, ...
Ok, that makes sense, if I jump right to the conclusion that the Kleisli category for the power set monad is the same as the category of sets and relations. The objects are just sets. ((So I guess we can also have power sets of power sets, products and sums, ... in $\mathsf{Set}_T$?))
- an arrow $f_T : A_T \to B_T$ is an arrow $f : A \to TB$ in $\mathbf{C}$,
I think this is what is confusing as a definition, because how can the arrows in two categories be the same if they don't have the same domain or codomain? I guess that what is really meant is, implementation-wise, if one has $f : A \to TB$ in the category $\mathbf{C}$ then one can think of it as an arrow $f_T : A_T \to B_T$ in the Kleisli category. And indeed if $A$ is people and $B$ is cats then a function $A \to P(B)$ takes a person to the cats that they own, including possibly none, which makes it a relation, since we can have one-to-many and one-to-none.
- the identity arrow $1_{A_T} : A_T \to A_T$ is the arrow $\eta_A : A \to TA$ in $\mathbf{C}$,
Again, this only makes sense implementation-wise.
I claim that the definitions of Awodey make sense literally, and not merely "implementation-wise", whatever the latter means. To define a category it is enough to tell what the objects are, for each pair of objects what the morphisms between them are, and define how composition works, and then check that the end result is a category (i.e. satisfies the relevant axioms). It does not matter at all that those morphisms might happen to be equal (in the context of material set theory) to entities that are used elsewhere in mathematics for some other purposes, whether as numbers, objects of some other category, morphisms in some other category or anything else. For instance, a morphism in $\mathbf{C}_T$ is a morphism in $\mathbf{C}$.
More artificially (just to drive the point home), consider the unit $\eta_1 : 1 \to T1$, which is a morphism of Set going between distinct objects. Let us build a new category $\mathbf{D}$ as follows. $\mathbf{D}$ has one object which we will denote by $\ast$. We will define the only hom-set we need to define by setting $\mathrm{Hom}(\ast, \ast) = \{\eta_1\}$, and define the composition in the only possible way by setting $\eta_1 \circ \eta_1 = \eta_1$. This results in a notationally awful but mathematically valid way of constructing the terminal category. Of course, no one would want to define it in such a manner, but it shows that it's perfectly possible to have a morphism of one category going between distinct objects be an identity morphism in some other category. When defining $\mathbf{C}_T$, this flexibility is actually convenient, which explains why Awodey defines things the way he does.
To put it another way, if $f : A \to TB$ in $\mathbf{C}$, then $\mathbf{C}_T$ reuses this $f$ but types it differently, so that it becomes a map $A_T \to B_T$ in $\mathbf{C}_T$. This is fine because such typing data is part of the data of a category rather than something fixed in advance once and for all: that $\mathbf{C}_T$ repurposes $f$ from $\mathbf{C}$ is not a crime.
Thanks @Martti Karvonen. I'll go over it again, remembering to keep HomSets in mind. That looks like where the problem was.
Everything Martti said looks right. The key, as far as I'm concerned, is that because $T$ is a monad, we can define a way to compose two morphisms $f : A \to TB$ and $g : B \to TC$ in the Kleisli category of $T$, namely by taking this composite in the category $\mathbf{C}$:
$A \xrightarrow{f} TB \xrightarrow{Tg} TTC \xrightarrow{\mu_C} TC$
where $\mu$ is the multiplication of the monad $T$.
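That composite can be written once and for all, given the monad's functorial action and multiplication. A hedged Python sketch (names are my own), instantiated with the list monad:

```python
def kleisli_compose(f, g, fmap, mu):
    # the composite  A -f-> TB -T(g)-> TTC -mu-> TC
    return lambda a: mu(fmap(g, f(a)))

# instantiate with the list monad: fmap maps over a list, mu flattens
list_fmap = lambda g, xs: [g(x) for x in xs]
list_mu = lambda xss: [x for xs in xss for x in xs]

succ_pair = lambda n: [n, n + 1]   # a toy Kleisli arrow int -> List(int)
h = kleisli_compose(succ_pair, succ_pair, list_fmap, list_mu)
print(h(0))  # [0, 1, 1, 2]
```

Only `fmap` and `mu` vary from monad to monad; the composite itself is the same three-step pipeline every time.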
I'm sure Awodey explains this, since this is the most interesting feature of the Kleisli category - and without saying how morphisms are composed one hasn't specified a category!
yes, Awodey gives the composition of two morphisms.
What has been hurting my head is the use of the word "is". The arrow $f_T : A_T \to B_T$ is the morphism $f : A \to TB$. :hurt:
But I do believe the HomSet is the right way out. It's still hurting though...
Put quotes around the "is" if it makes you feel better. But it's perfectly correct if you're doing math based on traditional set theory, as Awodey clearly is here. The set of morphisms in the Kleisli category IS the set of morphisms in the original category: these sets are equal, so each element of one set is equal to an element of the other.
If you want to use structural set theory, or type theory, or something else, then it's your job to translate traditional math textbooks into your own favored framework.
Feeling better already. Should be out soon :hospital: :-)
I think this would be equally true in type theory. In type theory you would define a category by giving a type of objects and a type family of morphisms, with composition and identity operations like $\circ : \mathrm{Mor}(b, c) \times \mathrm{Mor}(a, b) \to \mathrm{Mor}(a, c)$. Then given such a category $C$ and a monad $T$ on it, you would define the Kleisli category by $\mathrm{Ob}(C_T) = \mathrm{Ob}(C)$ and $\mathrm{Mor}_{C_T}(a, b) = \mathrm{Mor}_C(a, T b)$. The latter equality is then definitional, and one would then define $g \circ_{C_T} f = \mu \circ Tg \circ f$ and so on.
(Quite generally, traditional math textbooks are arguably written in a language that's closer to dependent type theory than to ZFC.)
But it's true that in structural set theory (like ETCS) the equalities might be only isomorphisms, because it often doesn't have a basic notion of "family of sets".
That looks like how Agda Categories defines the Kleisli category.
I'll go through those for my rehab exercises. https://twitter.com/bblfish/status/1419049461007929345
I am feeling better than James Brown, thanks to Martti Karvonen and @johncarlosbaez helping me to understand the Kleisli category of a monad. https://mattecapu.github.io/ct-zulip-archive/stream/229199-learning:-questions/topic/coalgebras.20and.20comonads.html#247046659 https://www.youtube.com/watch?v=-TxHInkyNpg
I was catching up to this thread, and about to point to our mechanization of it... until I saw that it was already done. It's all completely straightforward, except for the proof that composition is associative, which does require real work. If one were using string diagrams, the proof would mostly 'collapse' because it's largely re-assembling things. What the current proof reveals is a peek at what's going to happen one level up (i.e. in bicategories): the associator will be quite a bit of work to define.
@Henry Story Have you seen Dan Marsden's "Category Theory Using String Diagrams"? I'm going over this again today (after doing a lot of my own string-diagramology) and he has a very nice description of monads & the Kleisli category and much more.
Ah yes, I was reading that in Jan 2020, but I did not finish it, as I was sidetracked into something else.
@BartoszMilewski Best to read the article "Category Theory Using String Diagrams" last, after going through Bartosz's course and reading the bicategories of relations. Also annotate the diagrams in the article carefully for it to make sense, and try writing a few equations like this https://twitter.com/bblfish/status/1221425869899599873/photo/1
Currently I need to home in on how these can be used for the web server I am building. The problem is that it takes a lot of time to read maths, and I am not even sure yet I know how to translate good maths into Scala code. I can use libraries that have already been written though. It would be great if I got to a place where I could read CT and just write it out if it fit what I needed.
Here is a recent article that looks very interesting: Algebraic and Coalgebraic Perspectives on Interaction Laws. It even mentions lenses...
A very well written set of blog posts A Scala Comonad Tutorial that came from work by a group at Verizon. The last post has very good illustrations for the graph comonad, and comes with a Quiver implementation.
@Henry Story said:
Here is a recent article that looks very interesting: Algebraic and Coalgebraic Perspectives on Interaction Laws. It even mentions lenses …
Abstract
Monad algebras, turning computations over return values into values, are used to handle algebraic effects invoked by programs, whereas comonad coalgebras, turning initial states into environments (“cocomputations”) over states, describe production of coalgebraic coeffects that can respond to effects. (Monad-comonad) interaction laws by Katsumata et al. describe interaction protocols between a computation and an environment. We show that any triple of those devices can be combined into a single algebra handling computations over state predicates. This method yields an isomorphism between the category of interaction laws, and the category of so-called merge functors which merge algebras and coalgebras to form combined algebras. In a similar vein, we can combine interaction laws with coalgebras only, retrieving Uustalu’s stateful runners. If instead we combine interaction laws with algebras only, we get a novel concept of continuation-based runners that lift an environment of value predicates to a single predicate on computations of values. We use these notions to study different running examples of interactions of computations and environments.
Wow. That's the first blurb I've read where I felt like I understood at least the questions they were talking about, if not the answers they were exploring.
I guess it's the clearly stated importance of the distinction between an effectful computation and a coeffectful environment that is so interesting. There is a really nicely designed web page, Coeffects: Context-aware programming languages, which describes the 2014 PhD thesis by Tomas Petricek. Coeffects are of course modelled with comonads.
Henry Story said:
I guess it's the clearly stated importance of the distinction between an effectful computation and a coeffectful environment that is so interesting. There is a really nicely designed web page, Coeffects: Context-aware programming languages, which describes the 2014 PhD thesis by Tomas Petricek. Coeffects are of course modelled with comonads.
This brings up many issues ...
Not sure it's proper to raise them in this, er, context.
But what the hecuba ...
I have a sense that the Taylor series is a coalgebraic construction, via some kind of unrolling operation $f \mapsto (f(0), f'(0), f''(0), \ldots)$ on (say) smooth functions.
Does anyone know of a good reference for material in this direction?
Comonads should be perfectly tailored for this!
Wherever there is differential structure (some sort of "you are here"), you will find comonads.
Perhaps Pavlovic-Escardo Calculus in coinductive form?
It sounds like you may be looking for the jet comonad, since the sequence of derivatives that you wrote down is exactly the jet of $f$ at $0$. The jet comonad captures PDEs, since its Eilenberg-Moore category is exactly the category of PDEs! See for example Synthetic geometry of differential equations: I. Jets and comonad structure or Jets and differential linear logic for more. I learned about the latter paper from @JS Pacaud Lemay, who may be able to say more about the jet comonad in general than I could. Also the original reference A note on the category of partial differential equations may be worth taking a look at.
You may also be interested in the "Hurwitz series ring" $H(A)$ over an algebra $A$, which is the cofree differential algebra over $A$, and thus induces a comonad $H$ on the category of algebras. In characteristic zero, $H(A)$ is isomorphic to the power series ring over $A$. If $B$ is a differential algebra with derivation $b \mapsto b'$, then it is an $H$-coalgebra where the coaction $\omega : B \to H(B)$ is defined as:
$\omega(b) = (b, b', b'', b''', \ldots)$
which is similar to what you wish to obtain.
So setting $B = C^\infty(\mathbb{R})$, which is a differential algebra with the standard derivative, you get an $H$-coalgebra structure $\omega : C^\infty(\mathbb{R}) \to H(C^\infty(\mathbb{R}))$. Then taking the evaluation-at-zero map $\mathrm{ev}_0 : C^\infty(\mathbb{R}) \to \mathbb{R}$ (which is an algebra morphism), the Taylor series is recaptured via the composition:
$C^\infty(\mathbb{R}) \xrightarrow{\omega} H(C^\infty(\mathbb{R})) \xrightarrow{H(\mathrm{ev}_0)} H(\mathbb{R})$.
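For polynomials everything in this construction is exact and computable. Here is a toy Python sketch (my own, with polynomials as coefficient lists standing in for $C^\infty(\mathbb{R})$) of the coaction $f \mapsto (f, f', f'', \ldots)$ followed by evaluation at zero:

```python
# Polynomials as coefficient lists: [a0, a1, a2] means a0 + a1*x + a2*x^2.
def derivative(p):
    # d/dx (a_i x^i) = i * a_i x^(i-1)
    return [i * c for i, c in enumerate(p)][1:]

def eval_at_zero(p):
    # the algebra morphism ev_0: a polynomial's constant term
    return p[0] if p else 0

def unroll(p):
    # the coaction: p |-> (p, p', p'', ...), truncated once p vanishes
    out = []
    while p:
        out.append(p)
        p = derivative(p)
    return out

def taylor_at_zero(p):
    # apply ev_0 to each entry: (p(0), p'(0), p''(0), ...)
    return [eval_at_zero(q) for q in unroll(p)]

# f(x) = 1 + 2x + 3x^2  gives  (f(0), f'(0), f''(0)) = (1, 2, 6)
print(taylor_at_zero([1, 2, 3]))  # [1, 2, 6]
```

Dividing the $n$-th entry by $n!$ recovers the usual Taylor coefficients, which for a polynomial are just its coefficients again.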
Conor McBride said:
Wherever there is differential structure (some sort of "you are here"), you will find comonads.
do you have an elaboration on this? is there any kind of exposition you can recommend here
Conor McBride said:
Wherever there is differential structure (some sort of "you are here"), you will find comonads.
In philosophy (of language) those are called "indexicals". (So for me, having already come across indexicals, this gives me a potential view into "differential structure".)
David K. Lewis famously argued that the word "actual", as in "the actual world", is an indexical like "here", "now", "there", in that they all require a relation to a speaker-in-the-world to make sense. Since it has been argued that "modal logics are coalgebraic", this link to modalities makes complete sense, and so does the link to action. After all, you can only act if you are in a situation in the world, at a certain position, from which you can affect things by changing them.
(One can also think of "ought", "should", etc as working by giving a point in the world, though in those cases the points are idealised, often non-actual ways things would be if they were good, to which one should strive.)
There is a very interesting discussion on Twitter on provability and truth, where I think that something like action mediates them. If algebra is thought of as more linguistic, and coalgebra as more semantic - or at least action-oriented - one may have the material for providing category-theoretic foundations for pragmatism?
https://twitter.com/bblfish/status/1450006207427719171
@taooftypes @andrejbauer @mietek In "Between Saying and Doing" Brandom gives a relation between what Pragmatic ability is needed in order to be able to be said to be correctly using a vocabulary and what vocabularies can be used to describe such abilities. He mentions #CategoryTheory here. cc @DavidCorfield8 https://twitter.com/bblfish/status/1450006207427719171/photo/1
Most of my writing on this has been on Haskell StackOverflow: e.g., https://stackoverflow.com/a/25530311/828361 where differentiation acts on data structures, giving rise to spatial notions of linear context.