Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: (co)algebras and (co)monads


view this post on Zulip Henry Story (Jul 11 2021 at 15:16):

I was looking into Lenses in the thread Optics and Web Servers, where coalgebras of the store comonad pop up (and also a dual algebraic account).
So that got me to wonder

view this post on Zulip Henry Story (Jul 11 2021 at 20:25):

I was asking because the Store comonad functor $D(A) = A^V \times V$ is such a different animal when it is the "coalgebra of the Store comonad" and when it appears all by itself. So I thought it may be worth looking for other examples.

All by itself the Store comonad allows one to implement Conway's Game of Life, by making use of the duplicate function to explore the surroundings of a space. Here, for two functors $F$ and $G$ with $F \dashv G$, we have $\delta = F \eta G$. The adjunction that gives rise to the Store comonad is $\_ \times V \dashv \_^V$. We have extract as $\epsilon_A(f, v) = f(v)$ and duplicate as $\delta_A(f, v) = (\lambda u \to (f, u),\, v) : D(A) \to DD(A) = (A^V \times V)^V \times V$. The first projection of duplicate is thus a function that allows one to create stores with foci placed at different positions.

In the Game of Life implementation using the Store comonad (there are others, such as this one using Zippers), $V$ is a location on the board, $V = \mathbb{Z} \times \mathbb{Z}$, and $A = 2 \cong \text{Alive} \mid \text{Dead}$. So a Store instance is a pair of a function determining for any pair of integers whether that position is alive or dead, and a position on the board. From that it is easy to create a function that takes a store and looks at all the surrounding positions of the focus to determine whether it will be alive or dead in the next generation. Call this function $n : D(A) \to A$, for next. If we map $n$ over the duplicated store, we get a function $\lambda u \to (f, u) ; n : V \to 2$ which tells us, for each position, whether its cell will be alive or dead in the next generation.
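
To make this concrete, here is a minimal Haskell sketch of the Store comonad and a Game-of-Life step written with it. The Comonad class, the Store type, the neighbour rule and all the names are defined locally for illustration (nothing here is taken from a particular library or from the implementations linked above):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A minimal Comonad class (the usual extract/duplicate operations).
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)

-- Store comonad: D(A) = A^V x V.
data Store v a = Store (v -> a) v
  deriving Functor

instance Comonad (Store v) where
  extract   (Store f v) = f v                        -- epsilon (f, v) = f v
  duplicate (Store f v) = Store (\u -> Store f u) v  -- delta (f, v) = (\u -> (f, u), v)

-- Game of Life: V = (Int, Int), A = Bool (alive / dead).
type Board = Store (Int, Int) Bool

neighbours :: (Int, Int) -> [(Int, Int)]
neighbours (x, y) = [ (x + dx, y + dy)
                    | dx <- [-1, 0, 1], dy <- [-1, 0, 1]
                    , (dx, dy) /= (0, 0) ]

-- n : D(A) -> A, the local rule looking at the surroundings of the focus.
next :: Board -> Bool
next s@(Store f v) =
  let alive = length (filter f (neighbours v))
  in if extract s then alive == 2 || alive == 3 else alive == 3

-- Mapping the rule over the duplicated board computes one generation.
step :: Board -> Board
step = fmap next . duplicate
```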

The Lens acts very differently. The Lens, which is a coalgebra of the Store comonad given by a function $l : S \to D(S) = S^V \times V$ (called the coaction in Relating Algebraic and Coalgebraic Descriptions of Lenses), takes the state $S$ (of an object) to both a view $V$ on that object (the observation) and a function $S^V$ to change the state (of the whole object) with only $V$ changed. So here we don't really use the duplication to explore the surroundings. We just have an equation, shown in the "Relating Algebraic..." article, that $D l \circ l = \delta_S \circ l$. Where the Store comonad gives one a way to explore the space, the lens is focused on narrow changes. That is quite easy to see if we use the same types as for the Game of Life, with $S = 2$ and $V = \mathbb{Z} \times \mathbb{Z}$. It is clear then that the coaction $l : 2 \to 2^{\mathbb{Z} \times \mathbb{Z}} \times (\mathbb{Z} \times \mathbb{Z})$ can get very little interesting information out of the 2-element state.
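
For contrast, a sketch of a lens presented in this "coalgebra of the Store comonad" form. The Store type is the same shape as in the previous sketch, and the example lens and helper names (fstLens, view, set) are made up for illustration:

```haskell
-- Store comonad carrier again, as in the previous sketch: D(S) = S^V x V.
data Store v s = Store (v -> s) v

-- A lens from S to V as a coalgebra of the Store comonad: l : S -> Store V S.
type LensC s v = s -> Store v s

-- Example: focus on the first component of a pair.
fstLens :: LensC (a, b) a
fstLens (a, b) = Store (\a' -> (a', b)) a

-- The observation: project out the focused view.
view :: LensC s v -> s -> v
view l s = let Store _ v = l s in v

-- The update: replace the view, keeping the rest of the state.
set :: LensC s v -> v -> s -> s
set l v s = let Store put _ = l s in put v
```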

view this post on Zulip Paolo Perrone (Jul 12 2021 at 08:33):

To toot my own horn: in section 5.4.1 of my lecture notes there are a few comonads with their coalgebras listed.

view this post on Zulip Henry Story (Jul 12 2021 at 19:20):

Thanks @Paolo Perrone . I find your explanation of the coalgebra for the Stream comonad very helpful.
So the infinite stream comonad arises from the functor $X \mapsto X^{\mathbb{N}}$. If $\alpha$ is such a stream then $\epsilon(\alpha) = \alpha(0)$ and $\delta(\alpha) = \lambda n.\, \lambda m.\, \alpha(n+m)$.
(This is also nicely explained in Bart Jacobs' book "Introduction to Coalgebra", p. 260.)
What you then show very nicely (Bart Jacobs gives too abstract a reason) is how a coalgebra for the Stream comonad is constrained by the counit and coalgebra square equations.
For a coalgebra $f : S \to S^{\mathbb{N}}$, the coalgebra map has to be of the form $s \mapsto \langle s, h(s), h(h(s)), h^3(s), \ldots \rangle$ for some function $h : S \to S$.
The coalgebra equations force the stream to be constructed iteratively from a seed $s$ by iterated application of $h$.
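
A small Haskell sketch of this, again with a locally defined Comonad class, so treat it as an illustration rather than the notation of the lecture notes: the stream comonad, and the coalgebra generated from a step function by iteration.

```haskell
{-# LANGUAGE DeriveFunctor #-}

class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)

-- The stream comonad X |-> X^N, represented as an infinite list of values.
data Stream a = Cons a (Stream a)
  deriving Functor

instance Comonad Stream where
  extract   (Cons a _)      = a                        -- epsilon(alpha) = alpha(0)
  duplicate s@(Cons _ rest) = Cons s (duplicate rest)  -- delta(alpha)(n)(m) = alpha(n+m)

-- The coalgebra determined by a step function h : S -> S:
-- s |-> <s, h s, h (h s), ...>, i.e. iterated application of h.
unfoldStream :: (s -> s) -> s -> Stream s
unfoldStream h s = Cons s (unfoldStream h (h s))
```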

view this post on Zulip Henry Story (Jul 12 2021 at 21:35):

I have a feeling that the coalgebras for a comonad discussed above are those that come in the Eilenberg-Moore category for the given comonad. I can't quite find out where I saw this, but I think those are very closely related to comonads (and then the same for monads and algebras).
Well, Bartosz Milewski had a course on Eilenberg-Moore categories where he showed that the left adjoint $F^T$ to the forgetful functor $U^T$ from the Eilenberg-Moore category of a monad $T$ gives rise to the same monad $T$ that one started with.

Dually, does that mean that these coalgebras have something particularly important to say about the comonad? So that knowing that a lens is a coalgebra of the Store comonad tells us something especially important about the Store comonad?

view this post on Zulip Jade Master (Jul 12 2021 at 22:03):

Henry Story said:

I have a feeling that the coalgebras for a comonad discussed above are those that come in the Eilenberg-Moore category for the given comonad. I can't quite find out where I saw this, but I think those are very closely related to comonads (and then the same for monads and algebras).
Well, Bartosz Milewski had a course on Eilenberg-Moore categories where he showed that the left adjoint $F^T$ to the forgetful functor $U^T$ from the Eilenberg-Moore category of a monad $T$ gives rise to the same monad $T$ that one started with.

Dually, does that mean that these coalgebras have something particularly important to say about the comonad? So that knowing that a lens is a coalgebra of the Store comonad tells us something especially important about the Store comonad?

Do you mean the adjunction with the forgetful functor from the Eilenberg-Moore category of a monad? I think that's how it usually works. It's an interesting question. More generally, for a monad $T : C \to C$, there is a category $\mathsf{Adj}_T$ of adjunctions which give rise to $T$. The Eilenberg-Moore category is the target of an adjunction which is terminal in this category and the Kleisli category is the target of an adjunction which is initial in this category. For example see Riehl's Category Theory in Context: image.png

view this post on Zulip Jade Master (Jul 12 2021 at 22:04):

Does anybody know if/how this result dualizes to comonads? It's not immediately obvious to me.

view this post on Zulip John Baez (Jul 12 2021 at 22:13):

That's a good question. Some of this stuff is "formal" and works just as well for monads in any 2-category, so it also works for comonads, which are monads in some dualized version of $\mathbf{Cat}$.

view this post on Zulip John Baez (Jul 12 2021 at 22:16):

In other words, I'm saying you can define Eilenberg-Moore and Kleisli categories of a comonad to be initial and terminal objects in a certain category that's formally like $\mathrm{Adj}_T$, but defined in a dualized version of $\mathbf{Cat}$.

(Just don't ask me to say which is initial and which is terminal! And while you're at it, don't ask me which dualized version of $\mathbf{Cat}$ I mean: $\mathbf{Cat}^{\rm op}$, $\mathbf{Cat}^{\rm co}$ or $\mathbf{Cat}^{\rm coop}$.)

view this post on Zulip Jade Master (Jul 12 2021 at 22:18):

I understand that. I think that I will need to put pen to paper to get all the arrows right on this one...

view this post on Zulip Mike Shulman (Jul 12 2021 at 22:43):

The way I remember it is that the EM-category of a comonad, like the EM-category of a monad, consists of "pictures drawn in" the underlying category. Therefore, it should also be a limit construction, i.e. terminal in some category. Since reversing the 1-morphisms is what turns limits into colimits, we need to reverse the 2-morphisms instead to get a different kind of limit. So it follows that EM-categories of comonads are terminal in something built out of $\mathbf{Cat}^{\rm co}$.

view this post on Zulip Mike Shulman (Jul 12 2021 at 22:45):

For the other dual, you can remember that the left adjoint from a category to the Kleisli category of a monad is surjective (in fact, bijective) on objects, so it makes sense to think of it as sort of a "quotient", i.e. a colimit. So Kleisli categories of both sorts are initial, and we get them from EM-objects by turning the 1-morphisms around.

view this post on Zulip Jade Master (Jul 12 2021 at 23:10):

Mike Shulman said:

The way I remember it is that the EM-category of a comonad, like the EM-category of a monad, consists of "pictures drawn in" the underlying category. Therefore, it should also be a limit construction, i.e. terminal in some category. Since reversing the 1-morphisms is what turns limits into colimits, we need to reverse the 2-morphisms instead to get a different kind of limit. So it follows that EM-categories of comonads are terminal in something built out of $\mathbf{Cat}^{\rm co}$.

What do you mean by pictures drawn in the underlying category?

view this post on Zulip Mike Shulman (Jul 12 2021 at 23:16):

An algebra for a monad on $C$ consists of objects, arrows, and commutative diagrams in $C$.

view this post on Zulip Mike Shulman (Jul 12 2021 at 23:16):

Specifically, one object, one arrow, and two commutative diagrams.

view this post on Zulip Mike Shulman (Jul 12 2021 at 23:17):

A less handwavy way of saying this is that by naturality, a functor into the EM-category consists of a functor into $C$ together with a natural structure of $T$-algebra on each of its values. So since functors into the EM-category have a nice description, it is a limit construction.

view this post on Zulip Henry Story (Jul 13 2021 at 16:57):

Jade Master said:

More generally, for a monad $T : C \to C$, there is a category $\mathsf{Adj}_T$ of adjunctions which give rise to $T$. The Eilenberg-Moore category is the target of an adjunction which is terminal in this category and the Kleisli category is the target of an adjunction which is initial in this category. For example see Riehl's Category Theory in Context: Image of Category of Adjunctions

Thanks very much for that pointer.
To my surprise I was able to follow the first two parts of chapter 5 of Emily Riehl's book. I guess having a good example in mind, and a few questions, helps a lot.

It is a pity that Bart Jacobs' book does not give the category-of-adjunctions view, as that brings out really well the importance of algebras to monads. So if the Eilenberg-Moore category of algebras for a monad is the terminal object in the category of adjunctions, that highlights its significance: something to meditate further on.

And the same the other way around, though I guess the question is whether the Eilenberg-Moore category of coalgebras for a comonad is the terminal or initial object there? (And what that would mean.)

I should go back and listen to Milewski's course on Lenses. He explained that somewhere too.

view this post on Zulip Jade Master (Jul 13 2021 at 20:19):

Mike Shulman said:

A less handwavy way of saying this is that by naturality, a functor into the EM-category consists of a functor into $C$ together with a natural structure of $T$-algebra on each of its values. So since functors into the EM-category have a nice description, it is a limit construction.

Oh I get it now. Thanks!

view this post on Zulip Jade Master (Jul 13 2021 at 20:19):

Henry Story said:

Jade Master said:

More generally, for a monad $T : C \to C$, there is a category $\mathsf{Adj}_T$ of adjunctions which give rise to $T$. The Eilenberg-Moore category is the target of an adjunction which is terminal in this category and the Kleisli category is the target of an adjunction which is initial in this category. For example see Riehl's Category Theory in Context: Image of Category of Adjunctions

Thanks very much for that pointer.
To my surprise I was able to follow the first two parts of chapter 5 of Emily Riehl's book. I guess having a good example in mind, and a few questions, helps a lot.

It is a pity that Bart Jacobs' book does not give the category-of-adjunctions view, as that brings out really well the importance of algebras to monads. So if the Eilenberg-Moore category of algebras for a monad is the terminal object in the category of adjunctions, that highlights its significance: something to meditate further on.

And the same the other way around, though I guess the question is whether the Eilenberg-Moore category of coalgebras for a comonad is the terminal or initial object there? (And what that would mean.)

I should go back and listen to Milewski's course on Lenses. He explained that somewhere too.

Of course, you're welcome :)

view this post on Zulip Henry Story (Jul 13 2021 at 20:53):

So here is something a bit surprising. Because Monads are associated with EM categories of Algebras, I have associated Monads with Algebras. But perhaps too hastily. I came across this recently, but in a programming intro to Lenses, so I was not sure if I should take it seriously. But now re-reading Bart Jacobs' 450p Intro to Coalgebras he writes on p248

Before seeing examples of monads we immediately illustrate their usefulness in the theory of coalgebras: they introduce sequential composition of coalgebras. The composition will be presented in slightly more general form via Kleisli categories.

Lemma 5.1.2. For a monad $T$, coalgebras $X \to T(X)$ form a monoid, where $X$ is a fixed object.

The proof uses the Kleisli composition operation. Then in the section on Kleisli categories, p. 363:

Kleisli maps $X \to Y$ may be seen as generalised coalgebras $X \to T(Y)$, with different domain and codomain.

I guess that shows one has to be very careful in what associations one makes...
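
A quick Haskell illustration of that lemma, reading "coalgebra" in Jacobs' generalised sense as any map $X \to T(X)$ for a monad $T$: Kleisli composition makes such maps into a monoid, with return as the unit. The example step functions are made up; only the standard `(>=>)` from Control.Monad is used.

```haskell
import Control.Monad ((>=>))

-- For a monad m, maps x -> m x ("coalgebras" in the generalised sense)
-- form a monoid under Kleisli composition, with return as the unit.
type Coalg m x = x -> m x

unitC :: Monad m => Coalg m x
unitC = return

composeC :: Monad m => Coalg m x -> Coalg m x -> Coalg m x
composeC f g = f >=> g

-- Example: nondeterministic steps on Int, using the list monad.
stepA, stepB :: Coalg [] Int
stepA n = [n, n + 1]
stepB n = [n * 2]

-- composeC stepA stepB 3 == [6, 8]
```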

view this post on Zulip Morgan Rogers (he/him) (Jul 14 2021 at 07:50):

Henry Story said:

I guess that shows one has to be very careful in what associations one makes...

What do you mean by this?
Also, do you know that the objects of the Kleisli category for a comonad correspond exactly to the cofree coalgebras?

view this post on Zulip Henry Story (Jul 14 2021 at 07:53):

Is this perhaps a mistaken association? @Morgan Rogers (he/him)

So could one say: the category $\text{Adj}_T$ of adjunctions for a monad $T$ starts from the Kleisli category of (generalised) coalgebras on $T$ (following Bart Jacobs' point above about $A \to T(A)$ being a coalgebra) and ends with the Eilenberg-Moore category of algebras on $T$?

view this post on Zulip Henry Story (Jul 14 2021 at 08:00):

@Morgan Rogers (he/him) wrote:

Also, do you know that the objects of the Kleisli category for a comonad correspond exactly to the cofree coalgebras?

I read it, but have not yet integrated that.... I wonder how that fits in with my previous question...

view this post on Zulip Morgan Rogers (he/him) (Jul 14 2021 at 08:47):

Henry Story said:

So could one say: the category $\text{Adj}_T$ of adjunctions for a monad $T$ starts from the Kleisli category of (generalised) coalgebras on $T$ (following Bart Jacobs' point above about $A \to T(A)$ being a coalgebra) and ends with the Eilenberg-Moore category of algebras on $T$?

Ah, it sounds like you might be confused about algebras/coalgebras. The story is dual for the two concepts:
Every monad $T$ on a category $C$ has an associated EM category $C^T$ consisting of the algebras for that monad, and the full subcategory on the free algebras constitutes the Kleisli category $\mathrm{Kl}(T)$ (this is interesting because the usual presentation of the Kleisli category doesn't make it so obvious that you can identify the objects with free algebras). Both the Kleisli and EM categories admit a canonical adjunction with $C$ in which the right adjoint is the "forgetful functor" with codomain $C$, and both adjunctions induce the monad $T$. In the category of adjunctions inducing $T$, these are the initial and terminal objects, respectively.

Dually, any comonad $G$ on a category $C$ has an associated (co)EM category $C_G$ consisting of the coalgebras for that comonad, and the full subcategory on the cofree coalgebras constitutes the coKleisli category $\mathrm{coKl}(G)$. Both the coKleisli and coEM categories admit a canonical adjunction with $C$ in which the left adjoint is the "forgetful functor" with codomain $C$, and both adjunctions induce the comonad $G$. In the category of adjunctions inducing $G$, these are still the initial and terminal objects, respectively.

view this post on Zulip Morgan Rogers (he/him) (Jul 14 2021 at 08:49):

(remembering that the Kleisli/coKleisli categories consist of the free/cofree things and so admit an inclusion into the EM/coEM categories is a good way of remembering that the former are the initial objects in the categories of adjunctions)

view this post on Zulip Henry Story (Jul 14 2021 at 08:58):

Thanks @Morgan Rogers (he/him) . That is approximately the picture I had in mind after reading Emily Riehl's "Category Theory in Context", chapters 5.0 to 5.2.
Then I read the passage by Bart Jacobs I cited above, where it looks like Kleisli maps are coalgebras (which is why I was surprised, I guess).

Perhaps it is a situation where one can see it both ways? I think it is the case that initial algebras have invertible structure maps, so that initial algebras can be seen as coalgebras too. (The same happens for final coalgebras, which can be seen as algebras.) Perhaps that is the situation here too?

view this post on Zulip Morgan Rogers (he/him) (Jul 14 2021 at 09:05):

The reason for those observations is that there is a notion of "algebra/coalgebra" for an endofunctor (ignoring the extra monad/comonad structure), and annoyingly this seems to be the notion that is more common in computer science contexts:
for an endofunctor $F$, an algebra in this sense is an object $X$ equipped with a morphism $F(X) \to X$, while a coalgebra is an $X$ equipped with a morphism $X \to F(X)$, with no extra conditions. Bart Jacobs is using the term "coalgebra" in this sense, since for a typical monad there is no way to talk about coalgebras in a more structured sense.
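
In Haskell notation, the endofunctor-level notions are just the following type synonyms (a sketch in the usual recursion-schemes style; the example functor and the two example maps are made up for illustration):

```haskell
-- (Co)algebras for an endofunctor f, with no laws imposed:
type Algebra   f x = f x -> x
type Coalgebra f x = x -> f x

-- An example endofunctor F(X) = 1 + A * X (here with A = Char), and one of each.
data F x = Stop | Step Char x

algebraLength :: Algebra F Int       -- counts the Step layers
algebraLength Stop       = 0
algebraLength (Step _ n) = n + 1

coalgebraCountdown :: Coalgebra F Int  -- emits 'x' while counting down to zero
coalgebraCountdown 0 = Stop
coalgebraCountdown n = Step 'x' (n - 1)
```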

view this post on Zulip Morgan Rogers (he/him) (Jul 14 2021 at 09:06):

cf https://ncatlab.org/nlab/show/coalgebra+for+an+endofunctor

view this post on Zulip Henry Story (Jul 14 2021 at 09:11):

Ah yes, that is the definition of coalgebra that Bart Jacobs works with in his book "Introduction to Coalgebra". He starts with the simple definition of a coalgebra of an endofunctor $F : C \to C$ as a pair of an object $S \in C$ and a morphism $s : S \to F(S)$, looks at categories thereof, with initial and final objects, and then later adds the extra conditions needed to speak of a coalgebra of a comonad or an algebra of a monad.

A monad structure on the functor gives the ability to sequentially compose such coalgebras, so looked at like that it does appear that the Kleisli category is a category of such composable coalgebras.

view this post on Zulip Morgan Rogers (he/him) (Jul 14 2021 at 09:39):

It's a shame this is how the terminology worked out, it seems to leave a lot of room for confusion, and I personally don't think that the notion of (co)algebra for an endofunctor fundamentally captures anything algebraic without the extra conditions.

view this post on Zulip Henry Story (Jul 14 2021 at 09:45):

@Morgan Rogers (he/him) I don't think this terminological difference has much impact on this question, since in a Kleisli category we do have the extra monadic structure. A Monad is an endofunctor with certain properties. A morphism in the Kleisli category of a monad $M$ is a function $A \to B$ which in the underlying category is $A \to M(B)$, which has the structure of a (generalised) coalgebra - perhaps a composition of a strict coalgebra with the image of a morphism under the monad, $A \to M(A) \to M(B)$. Similarly a morphism $A \to B$ in the Kleisli category for a comonad $W$ would have the structure in the underlying category of $W(A) \to B$, which would be an algebra-type morphism.

view this post on Zulip Morgan Rogers (he/him) (Jul 14 2021 at 11:12):

Right, but what exactly makes a generic morphism $A \to M(A)$ coalgebraic in nature?
The name "algebra" for objects of EM-categories presumably comes from the fact that for most of the motivating examples, these are exactly the algebraic objects we expect. For the free group monad, for example, the EM category is equivalent to the category of groups. Treating this as an endofunctor, however, a generic morphism $M(X) \to X$ (an "algebra" for this endofunctor) ignores the extra data of the monad which, to me, encodes the algebraic aspects.

view this post on Zulip Morgan Rogers (he/him) (Jul 14 2021 at 11:14):

Henry Story said:

A Monad is an endofunctor with certain properties.

view this post on Zulip Nick Smith (Jul 15 2021 at 02:27):

there is a notion of "algebra/coalgebra" for an endofunctor (ignoring the extra monad/comonad structure), and annoyingly this seems to be the notion that is more common in computer science contexts

This has confused me as well. Are there any contexts in which one would care about such a (co)algebra where the functor doesn't have the structure of a (co)monad? i.e. are non-monadic algebras actually interesting?

view this post on Zulip Henry Story (Jul 15 2021 at 04:59):

Are there any contexts in which one would care about such a (co)algebra where the functor doesn't have the structure of a (co)monad?

One of the early examples in Bart Jacobs' book is coalgebras for the functor $F(X) = 1 + A \times X$, from which one can build a coalgebra $c : S \to 1 + A \times S$. (Now I think the disjunction already means that it cannot be a comonad, because there would be nothing to extract for $* : 1$.) The coalgebra $c$ gives a state machine: you give it a state $s : S$ and applying $c(s)$ you either get no further state, just a $*$, or you get a pair $(a, s')$ of an $A$ that can be observed and a further state $s'$ that can be plugged back in as $c(s')$ to get the next observation/state or the end. This is a state machine that potentially goes on for ever. In the category of such coalgebras the final coalgebra is
$\text{next} : A^\infty \to 1 + A \times A^\infty$
An instance of $A^\infty$ could be a finite or infinite sequence. Clearly it cannot be constructed in one go, and it can only be viewed in pieces. Perhaps it is a Web Service that returns a word or a 404, or it is an interface implemented by some other company. In any case, when one has such a sequence $s$ one can ask for $\text{next}(s)$, which will allow one to find out if it has ended, or get a pair of some observable value and the next potentially infinite sequence.
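
A Haskell sketch of this example (my own rendering, not Jacobs' notation): the functor, a coalgebra as a step function, and the unfold into the final coalgebra of finite-or-infinite sequences. Haskell lists may be infinite, so they stand in for $A^\infty$ here.

```haskell
-- The functor F(X) = 1 + A * X, and a coalgebra c : S -> F S seen as a machine.
data F a x = Done | Yield a x

type Machine s a = s -> F a s

-- The unique map into the final coalgebra: run the machine from a seed.
unfold :: Machine s a -> s -> [a]
unfold c s = case c s of
  Done       -> []
  Yield a s' -> a : unfold c s'

-- A machine that counts down from a seed and then stops.
countdown :: Machine Int Int
countdown 0 = Done
countdown n = Yield n (n - 1)

-- unfold countdown 3 == [3, 2, 1]
```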

view this post on Zulip Henry Story (Jul 15 2021 at 05:10):

if $A = \{0,1,2,3,4,5,6,7,8,9\}$ then the final coalgebra can represent real numbers between 0 and 1 in decimal notation. The other (non-final) coalgebras are ways of stepping through such a number, seen more as a state machine.

view this post on Zulip John Baez (Jul 15 2021 at 05:18):

Nick Smith said:

there is a notion of "algebra/coalgebra" for an endofunctor (ignoring the extra monad/comonad structure), and annoyingly this seems to be the notion that is more common in computer science contexts

This has confused me as well. Are there any contexts in which one would care about such a (co)algebra where the functor doesn't have the structure of a (co)monad? i.e. are non-monadic algebras actually interesting?

Yes. But I hope it's already clear: even if a functor has the structure of a monad, an "algebra of the functor" is different from an "algebra of the monad". They're just different concepts. The distinction between them, and some of their relationships, are explained here:

https://ncatlab.org/nlab/show/algebra+for+an+endofunctor

Unfortunately no examples in this article. But as you hinted earlier, there are tons of examples from computer science. Henry explained one.

view this post on Zulip Henry Story (Jul 15 2021 at 06:06):

There is this very nice article by J.J.M.M. Rutten from 2000, Universal Coalgebra: a theory of systems, which gives many examples of coalgebras.

view this post on Zulip Henry Story (Jul 15 2021 at 06:25):

The link to the nLab wiki page John Baez just posted itself links to a page on coalgebras, which points to a couple of blog posts by @David Corfield, one of which is coalgebraically thinking. He gives an overview of Bart Jacobs' book, and points out that the functor I called $F(X) = 1 + A \times X$ above can also give a category of algebras, known in computing as Lists (I was actually looking for that just earlier). These are finite structures and have to be fully constructed first.
Actually it's interesting that the structure map of the final coalgebra for $F$, called $\text{next}$, being final, has an inverse (which is therefore an algebra) which constructs a stream
$\text{next}^{-1} : 1 + A \times A^\infty \to A^\infty$
allowing one to create the empty stream, or, if you already have an infinite stream and an $A$, to build a slightly longer infinite stream. :-)
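
Both directions in one Haskell sketch, illustrative only: the same functor shape gives finite lists as the initial algebra, consumed by a fold, while the inverse of next on the final coalgebra is just nil/cons on possibly infinite sequences (ordinary Haskell lists standing in for $A^\infty$).

```haskell
data F a x = Done | Yield a x

-- Initial algebra of F a: finite lists, built "all in one go", consumed by fold.
data List a = Nil | Cons a (List a)

fold :: (F a x -> x) -> List a -> x
fold alg Nil         = alg Done
fold alg (Cons a as) = alg (Yield a (fold alg as))

-- e.g. the length algebra:
lengthL :: List a -> Int
lengthL = fold alg
  where alg Done        = 0
        alg (Yield _ n) = n + 1

-- The inverse of the final coalgebra's structure map
-- next : A^inf -> 1 + A * A^inf, i.e. the empty stream and "cons".
nextInv :: F a [a] -> [a]
nextInv Done        = []
nextInv (Yield a s) = a : s
```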

view this post on Zulip Nick Smith (Jul 15 2021 at 06:34):

Thanks to you both. I haven't had time to study these topics properly yet, so some of the details are still hard to absorb.

I hope it's already clear: even if a functor has the structure of a monad, an "algebra of the functor" is different from an "algebra of the monad". They're just different concepts.

Right, but according to the article and earlier discussion "an algebra of the functor" is equivalent to "an algebra of the free monad generated by the functor", so it seems that even if a T-algebra for a monad T isn't the same as the F-algebra for the functor underlying the monad T, an F-algebra is still ultimately a T'-algebra for a different monad T'. (As long as F is a functor capable of generating a free monad T'.) Is this correct? So we still end up with an "algebra over a monad", even if we weren't trying to define one.

view this post on Zulip Nick Smith (Jul 15 2021 at 06:40):

I think the disjunction already means that it cannot be a comonad, because there would be nothing to extract for $* : 1$.

$F(X) = 1 + A \times X$ is a polynomial functor, so it has a corresponding cofree comonad according to David Spivak and Nelson Niu (their book on Poly).

view this post on Zulip Henry Story (Jul 15 2021 at 07:57):

There is a really long discussion about the relation of algebras and coalgebras in both blog posts from 2008 by David Corfield on The Status of Coalgebra, and the question of the purely functorial vs. monadic difference comes up there too.

view this post on Zulip Henry Story (Jul 15 2021 at 09:22):

Proposition 2.5.3 in "Introduction to Coalgebra", p. 97:

For a finite Kripke polynomial functor $F : \mathbf{Sets} \to \mathbf{Sets}$, the forgetful functor $U : \mathbf{CoAlg}(F) \to \mathbf{Sets}$ has a right adjoint. It sends a set $A$ to the carrier of the final coalgebra of the functor $A \times F(-)$.

(A Kripke polynomial functor is one built out of the identity, products, coproducts, exponentials and the powerset.)

That is a bit weird because a different functor appears in the adjunction; but it is OK, it only appears in the carrier. So after the proof of the above proposition Jacobs goes on to write:

the unit $\eta$ is a homomorphism from a coalgebra $X \to F(X)$ to the cofree coalgebra $\hat{X} \to F(\hat{X})$ on its carrier. It is obtained by finality. And the counit $\epsilon$ maps the carrier $\hat{A}$ of a cofree coalgebra back to the original set $A$.

So I think that could be the relation that @Nick Smith was speaking about between coalgebras, their (co)free counterparts, and (co)monads. But I think that just shows why the minimal definition of a (co)algebra on a functor -- without the monad -- is useful, as it allows one to show exactly where the different pieces of the puzzle come together.

view this post on Zulip Morgan Rogers (he/him) (Jul 15 2021 at 09:36):

I'm not saying it's not useful, I'm saying I don't see how it captures any notion of "(co)algebraic structure". Basically, I think it was a bad choice of name.

view this post on Zulip Morgan Rogers (he/him) (Jul 15 2021 at 09:39):

Assuming that the Eilenberg-Moore notion came first, you can see how this name arose, but it's a shame that it stuck.

view this post on Zulip Nathanael Arkor (Jul 15 2021 at 13:11):

Nick Smith said:

Thanks to you both. I haven't had time to study these topics properly yet, so some of the details are still hard to absorb.

I hope it's already clear: even if a functor has the structure of a monad, an "algebra of the functor" is different from an "algebra of the monad". They're just different concepts.

Right, but according to the article and earlier discussion "an algebra of the functor" is equivalent to "an algebra of the free monad generated by the functor", so it seems that even if a T-algebra for a monad T isn't the same as the F-algebra for the functor underlying the monad T, an F-algebra is still ultimately a T'-algebra for a different monad T'. (As long as F is a functor capable of generating a free monad T'.) Is this correct? So we still end up with an "algebra over a monad", even if we weren't trying to define one.

This is true in many well-behaved settings, but it's not always possible to generate an algebraically free (co)monad from an endofunctor.

view this post on Zulip Mike Shulman (Jul 15 2021 at 16:09):

And maybe this goes without saying, but even in the (very common) situation where one has algebraically-free monads, the notion of algebra for an endofunctor is still important. For instance, without that notion we can't even define what "algebraically-free" means.

view this post on Zulip John Baez (Jul 17 2021 at 16:30):

Nick Smith said:

John Baez wrote:

I hope it's already clear: even if a functor has the structure of a monad, an "algebra of the functor" is different from an "algebra of the monad". They're just different concepts.

Right, but according to the article and earlier discussion "an algebra of the functor" is equivalent to "an algebra of the free monad generated by the functor", so it seems that even if a T-algebra for a monad T isn't the same as the F-algebra for the functor underlying the monad T, an F-algebra is still ultimately a T'-algebra for a different monad T'. (As long as F is a functor capable of generating a free monad T'.) Is this correct?

This is correct except - sorry, I'm about to make a pedantic point but I think it's important - you shouldn't just say "the free monad generated by the functor", since - as you later note - such a monad may not exist.

In math we use "the" when something exists and is unique, so e.g. "the largest prime number" is not something one should ever say (since we know it does not exist), or "the largest perfect number" (since we don't know if it exists). This is also true when the possibly existent entity depends on a variable, e.g. "the smallest perfect number greater than $n$" or "the free monad generated by the functor $F$".

The usual thing to do in cases where uniqueness is known but not existence is add the phrase "if it exists":

"the largest prime number, if it exists"
"the smallest perfect number greater than nn, if it exists"
"the free monad generated by the functor, if it exists".

view this post on Zulip John Baez (Jul 17 2021 at 16:38):

Anyway, returning to the practical point: while we can sometimes "reduce" the study of algebras of an endofunctor to the study of algebras of a monad that it generates, I don't know of cases where this actually makes things simpler or easier. But I'm not at all an expert on this subject, so maybe it does help sometimes!

view this post on Zulip Nick Smith (Jul 18 2021 at 00:25):

What if you’re in a broader framework where you’re already working with (co)algebras of (co)monads? I’m thinking of Poly, where you have categories Cat# and P which nicely capture polynomial comonads, maps between them, coalgebras/comodules over them, monoidal structures on them, limits etc. If you want to study coalgebras of (polynomial) endofunctors in this setting, it seems like looking at their cofree comonads is the right way to go about that? Otherwise you’d be carving out a separate island.

This is the motivation for me thinking about this whole endofunctor/free monad distinction in the first place. “Algebra of an endofunctor” is a common meme in functional programming when talking about recursion schemes, and I want to understand how to think about these concepts in Poly.

view this post on Zulip John Baez (Jul 18 2021 at 00:28):

If @David Spivak were here he'd tell you... maybe someone here knows. I'm avoiding learning about Poly, which is really hard now because I'm at the Topos Institute and he's teaching a course on polynomial functors, and writing a book about them, and talking to everyone about them... but I need to think about thermodynamics and other stuff.

view this post on Zulip Nick Smith (Jul 18 2021 at 00:35):

I did already quiz David a little, but he’s a busy guy. Maybe I’ll get a chance to ask during the Poly course. It’s a shame I can’t attend in person.

view this post on Zulip John Baez (Jul 18 2021 at 00:42):

You might ask Nelson Niu, who is writing the book with David. He's probably less overwhelmed with demands on his time.

view this post on Zulip Henry Story (Jul 18 2021 at 06:29):

On page 272 of "Category Theory 2nd edition" Steve Awodey writes

Finally, we ask, what is the relationship between algebras of endofunctors and algebras for monads? The following proposition, which is a sort of "folk theorem", gives an answer.

view this post on Zulip Henry Story (Jul 18 2021 at 06:53):

Proposition 10.14: Let the category $\mathcal{S}$ have finite coproducts.
Given an endofunctor $P : \mathcal{S} \to \mathcal{S}$, the following conditions are equivalent:
1. The P-algebras are the algebras for a monad. Precisely, there is a monad $(T : \mathcal{S} \to \mathcal{S}, \eta, \mu)$ and an equivalence $\text{P-Alg}(\mathcal{S}) \simeq \mathcal{S}^{T}$ between the category of P-algebras and the category $\mathcal{S}^T$ of algebras for the monad. Moreover, this equivalence preserves the respective forgetful functors to $\mathcal{S}$.
2. The forgetful functor $U : \text{P-Alg}(\mathcal{S}) \to \mathcal{S}$ has a left adjoint $F \dashv U$.
3. For each object $A$ of $\mathcal{S}$, the endofunctor $P_A(X) = A + P(X) : \mathcal{S} \to \mathcal{S}$ has an initial algebra.
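
In Haskell this folk theorem is visible in the usual free monad construction: Free p a below is (the carrier of) the initial algebra of $X \mapsto A + P(X)$ from condition 3, and it carries a monad structure whose algebras correspond to P-algebras. A standard sketch, assuming Functor p, not a quotation from Awodey:

```haskell
-- Free p a is the carrier of the initial algebra of X |-> a + p X,
-- and it is the free monad on p (when p is a Functor).
data Free p a = Pure a | Roll (p (Free p a))

instance Functor p => Functor (Free p) where
  fmap f (Pure a) = Pure (f a)
  fmap f (Roll m) = Roll (fmap (fmap f) m)

instance Functor p => Applicative (Free p) where
  pure = Pure
  Pure f  <*> x = fmap f x
  Roll pf <*> x = Roll (fmap (<*> x) pf)

instance Functor p => Monad (Free p) where
  Pure a >>= k = k a
  Roll m >>= k = Roll (fmap (>>= k) m)

-- A P-algebra p x -> x extends to an algebra for the free monad on p.
foldFree :: Functor p => (p x -> x) -> Free p x -> x
foldFree _   (Pure x) = x
foldFree alg (Roll m) = alg (fmap (foldFree alg) m)
```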

view this post on Zulip Henry Story (Jul 23 2021 at 18:25):

I have always felt Kleisli categories to be a bit mysterious, but I trusted the mathematicians that it worked out. But as I now wanted to really understand the category of adjunctions for a (co)monad, I studied @Paolo Perrone 's very helpful lecture notes and even worked on exercises 5.1.21 and 5.1.22 - and found typos here and there :-) . Awodey only has half a (very clear) page on Kleisli (p. 277 of Category Theory). But both say that the objects of a Kleisli category $C_T$ are the same as those of the category $C$ for a monad (triple) $T$. But my feeling is that really Kleisli is collapsing all the $T$s for an object. So that the initial left adjoint functor $L_T$ takes $L_T(A) = L_T(T(A)) = L_T(TT(A)) = \ldots = A_T$ as shown in this diagram I tried to draw out. So my guess is that one could say they are the same objects, but after a whole bunch of them have been merged with an equivalence relation?
Diagram of Functor to Kleisli

view this post on Zulip Henry Story (Jul 23 2021 at 20:16):

This is what Awodey writes:

This category $C_T$ is called the Kleisli category of the adjunction, and is defined as follows:

So my thinking is that if one has a functor that maps $\eta_A$ to the identity on $A_T$ in $C_T$, then it has to map the objects $A \mapsto A_T$ and $TA \mapsto A_T$, but then since identities compose it would need all sequences of $\eta$ arrows to collapse to $1_{A_T}$.

view this post on Zulip John Baez (Jul 23 2021 at 21:01):

Henry Story said:

But both say that the objects of a Kleisli category $C_T$ are the same as those of the category $C$ for a monad (triple) $T$.

That's right. In our minds, an object of the Kleisli category is a free algebra for the monad $T$. This will be the free algebra on some object $A \in C$. But it turns out to be very convenient to simply say this object of the Kleisli category is $A$ itself! Then what makes the Kleisli category different from $C$ is that we change what the morphisms are.

But my feeling is that really Kleisli is collapsing all the $T$s for an object. So that the initial left adjoint functor $L_T$ takes $L_T(A) = L_T(T(A)) = L_T(TT(A)) = \ldots = A_T$ as shown in this diagram I tried to draw out.

No, that's not true. It's just not true, usually, that $L_T(A) = L_T(T(A)) = \cdots$ for the monad $T$ coming from an adjunction with left adjoint $L_T$. Furthermore this is not a good way to think about what's going on here. That is: not only is this technically false, it's morally false.

view this post on Zulip Henry Story (Jul 23 2021 at 21:03):

Ok. I need to start from scratch and try again... Perhaps I'll start with the objects from $\mathbf{C}$ and draw the Kleisli arrows on top in a different color...

view this post on Zulip John Baez (Jul 23 2021 at 21:04):

I suggest taking some adjunction whose monad you understand, and working out its Kleisli category according to the standard definition of Kleisli category. For me, being a mathematician, the best example is the adjunction between groups and sets. Then I can take the definition of Kleisli category, turn the crank, and see that the Kleisli category is equivalent to the category where the objects are free groups and the morphisms are group homomorphisms.

The "trick" is that instead of talking about "the free group on the set AA", the standard definition of Kleisli category just says "AA". That is, the objects in this particular Kleisli category are just sets!

view this post on Zulip John Baez (Jul 23 2021 at 21:05):

But the morphisms from a set $A$ to the set $B$ in this Kleisli category are not functions from $A$ to $B$, but rather homomorphisms from the free group on $A$ to the free group on $B$.

view this post on Zulip Henry Story (Jul 23 2021 at 21:08):

Ah yes. Paolo has an example, the powerset monad, for which the Kleisli category over $\mathbf{Set}$ is equivalent to the category of sets and relations (which is neat). So yes, forgetting that those new morphisms are no longer functions is perhaps what was confusing me.

view this post on Zulip John Baez (Jul 23 2021 at 21:11):

Algebras for the power set monad are suplattices, so the Kleisli category for this consists of free suplattices and suplattice morphisms between these. The free suplattice on a set $A$ is the power set of $A$, $PA$. The homomorphisms of suplattices from $PA$ to $PB$ happen to correspond to relations between $A$ and $B$.

view this post on Zulip John Baez (Jul 23 2021 at 21:11):

However, unless you are comfortable with suplattices, this example can be confusing.

view this post on Zulip John Baez (Jul 23 2021 at 21:12):

To me, as a mathematician, monads on Set describe "algebraic gadgets" like groups, abelian groups, vector spaces over your favorite field, rings, suplattices, etc....

view this post on Zulip John Baez (Jul 23 2021 at 21:13):

But I learned about groups long before I learned about suplattices, so groups seem like a better example when I'm trying to test out ideas. (Computer scientists will probably prefer other examples, like monoids: the free monoid on $A$ is the set of lists of elements of $A$.)

view this post on Zulip John Baez (Jul 23 2021 at 21:14):

And also, the fact that suplattice morphisms from $PA$ to $PB$ correspond to relations from $A$ to $B$ is a special feature of this case that takes a bit of thought to intuit - so this fact could be a bit distracting if you're trying to wrap your brain around the general idea of Kleisli category.

view this post on Zulip Henry Story (Jul 23 2021 at 21:20):

I don't have any trouble using monads in programming at all, where I use them all the time, and from afar Kleisli looks very simple. It's more that I am trying to see how to draw these out to make sure I really understand. :-)
For me the relation of the powerset monad to $\mathbf{Rel}$ is very interesting because functional programmers seem to live in $\mathbf{Set}$, and it seems that RDF folks are in the powerset Kleisli category of $\mathbf{Set}$ (according to Evan Patterson's "Knowledge Representation in Bicategories of Relations" paper).

view this post on Zulip John Baez (Jul 23 2021 at 21:24):

It's very interesting, but I can't see it as the royal road to understanding Kleisli categories in general: it has too many special features.

view this post on Zulip John Baez (Jul 23 2021 at 21:25):

To understand the Kleisli category of a monad you need to understand the algebras of that monad, and then the free algebras of that monad.

view this post on Zulip Henry Story (Jul 23 2021 at 21:25):

ah. that's interesting.

view this post on Zulip John Baez (Jul 23 2021 at 21:26):

As I mentioned, the algebras of the power set monad are suplattices. So if you don't already know and love suplattices, this example is not good for you!

view this post on Zulip John Baez (Jul 23 2021 at 21:26):

At least, not good as a starting-point for learning about Kleisli categories.

view this post on Zulip Henry Story (Jul 23 2021 at 21:27):

The simplest I know is the Option monad $X \mapsto 1 + X$, whose Kleisli category is that of sets and partial functions, I believe.
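
In Haskell this is the Maybe monad: a Kleisli arrow a -> Maybe b is exactly a partial function, and Kleisli composition composes partial functions. A sketch with made-up example functions:

```haskell
import Control.Monad ((>=>))
import Text.Read (readMaybe)

-- Kleisli arrows for Maybe are partial functions.
type Partial a b = a -> Maybe b

parseInt :: Partial String Int
parseInt = readMaybe

recip' :: Partial Int Double
recip' 0 = Nothing
recip' n = Just (1 / fromIntegral n)

-- Kleisli composition = composition of partial functions.
parseThenRecip :: Partial String Double
parseThenRecip = parseInt >=> recip'

-- parseThenRecip "4"   == Just 0.25
-- parseThenRecip "0"   == Nothing
-- parseThenRecip "abc" == Nothing
```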

view this post on Zulip John Baez (Jul 23 2021 at 21:27):

What are the algebras of this monad?

view this post on Zulip John Baez (Jul 23 2021 at 21:28):

(It took me a minute to figure it out - this is not my favorite monad. But it could be a good example, because its algebras are pretty easy to understand.)

view this post on Zulip Henry Story (Jul 23 2021 at 21:29):

$1 + X \to X$ with the right equations. But it will take me a lot of time to figure out step by step what the algebras of that monad are. But that should be easy to read up on.
The Option monad is used a lot in programming.

view this post on Zulip John Baez (Jul 23 2021 at 21:32):

It's better to figure out yourself what the algebras of the monad are. You can just guess...

view this post on Zulip John Baez (Jul 23 2021 at 21:33):

When would you have a set $X$ with a map $1 + X \to X$ satisfying some equations?

view this post on Zulip John Baez (Jul 23 2021 at 21:33):

I can answer if you want.

view this post on Zulip Henry Story (Jul 23 2021 at 21:34):

Ah, so you need either $1$ to map to a special element or $X$ to map to $X$. I remember now: pointed sets.

view this post on Zulip John Baez (Jul 23 2021 at 21:34):

Right!

view this post on Zulip John Baez (Jul 23 2021 at 21:36):

A map $1 + X \to X$ is the same as a map $1 \to X$ and a map $X \to X$. Every set has a god-given map $X \to X$, namely the identity. "The right equations" - the definition of algebra of the monad - wind up saying we have to use this map $X \to X$. So the only choice we get to make is the map $1 \to X$. And this is a point of $X$.

view this post on Zulip John Baez (Jul 23 2021 at 21:36):

So an algebra of this monad is a pointed set.

view this post on Zulip John Baez (Jul 23 2021 at 21:36):

And: the free pointed set on the set $X$ is $1 + X$.

view this post on Zulip Henry Story (Jul 23 2021 at 21:38):

And pointed sets can be represented as the coslice category $1/\mathbf{Set}$, if I remember correctly.

view this post on Zulip John Baez (Jul 23 2021 at 21:42):

Yes.

view this post on Zulip John Baez (Jul 23 2021 at 21:44):

But here's a sad feature of this example: every algebra of this monad is isomorphic to a free algebra! In other words, every pointed set is isomorphic to a pointed set of the form $1 + X$ for some set $X$.

I feel this makes it hard to appreciate the special features of the Kleisli category, which is the category of free algebras, and how it's different from the Eilenberg-Moore category, which is the category of all algebras.

view this post on Zulip John Baez (Jul 23 2021 at 21:45):

The example of groups doesn't suffer from this problem: free groups are very special, most groups aren't isomorphic to free groups.

view this post on Zulip Henry Story (Jul 23 2021 at 22:01):

Mhh. Other monads we come across in programming are the Either monad ($X \mapsto A + X$), the list monad ($X \mapsto 1 + A \times X$), the state monad ($X \mapsto (X \times S)^S$), the continuation monad ($X \mapsto C^{C^X}$, which is probably even more difficult than powerset, and difficult to understand even though it is widely used in JavaScript; see continuation monad), ... Is there one that is similar to groups in that its Kleisli and EM categories appear different?
(I had been reading up on groups and their relation to the Rubik's cube...)

The continuation monad has ultrafilters - morphisms from propositions 𝓟(X) to the final algebra 𝟮 - as a special case (C=2). Wondering about this led me to Odersky's 2010 article on continuations in #Scala. The first 3 sections are very clear. https://infoscience.epfl.ch/record/149136/files/icfp113-rompf.pdf https://twitter.com/bblfish/status/947041248975687680/photo/1

- The 🐟‍‍ BabelFish (@bblfish)

view this post on Zulip Mike Shulman (Jul 23 2021 at 22:36):

I thought that the list monad was $X \mapsto {\rm List}(X)$. The endofunctor $X \mapsto 1 + A \times X$ (which I don't think is a monad) is the one whose initial algebra (in the endofunctor sense) is ${\rm List}(A)$.

view this post on Zulip John Baez (Jul 24 2021 at 00:22):

It's interesting that the top five monads any computer scientist (or at least Henry) will list as easily understood examples are disjoint from the top five monads any mathematician (or at least me) would list.

view this post on Zulip Reid Barton (Jul 24 2021 at 04:10):

With the arguable exception of $X \mapsto 1 + X$, and also we have an entirely different name for the list monad.

view this post on Zulip Henry Story (Jul 24 2021 at 05:25):

Mike Shulman said:

I thought that the list monad was $X \mapsto {\rm List}(X)$. The endofunctor $X \mapsto 1 + A \times X$ (which I don't think is a monad) is the one whose initial algebra (in the endofunctor sense) is ${\rm List}(A)$.

Yes, you are right. I was putting those monads together from memory just before going to bed. What is $\mathrm{List}(X)$? Ah, it is $\mathrm{List}(X) = 1 + X \times \mathrm{List}(X)$, the free datatype, recursively defined. Bart Jacobs often presents the algebraic view of List in his book on coalgebras.

view this post on Zulip Henry Story (Jul 24 2021 at 05:47):

There is a catalogue of standard monads on the Haskell wiki. I think I got most of them. The monads that really made it click for me (coming as I did from Java 10 years ago) were the Scala Future "monad" (which is not quite a monad) and the Free monad.

view this post on Zulip Henry Story (Jul 24 2021 at 06:21):

I am - by slow immersion reading these math books - learning about the mathematical monads too. Hopefully when I have the mapping math <-> programming clarified, I will start feeling the urge to use them too :-)

view this post on Zulip Henry Story (Jul 24 2021 at 10:49):

I know John Baez said I should not do this, but while he is asleep let me try it out. Let's hope that it works out better than for Mickey Mouse...

view this post on Zulip Henry Story (Jul 24 2021 at 10:56):

So because of the link to RDF the Powerset monad actually makes a lot of sense to me.

John wrote above

Algebras for the power set monad are suplattices, so the Kleisli category for this consists of free suplattices and suplattice morphisms between these. The free suplattice on a set $A$ is the power set of $A$, $PA$. The homomorphisms of suplattices from $PA$ to $PB$ happen to correspond to relations between $A$ and $B$.

So I'll try out Awodey's definition on it.

view this post on Zulip Henry Story (Jul 24 2021 at 10:57):

Is it not a bit odd to say that "homomorphisms of suplattices happen to correspond to relations"? Is not everything in mathematics necessary? How could there be an accident here?

(Ah, I see from the nLab: perhaps the fact that a suplattice is also a lattice is accidental.)

view this post on Zulip Fawzi Hreiki (Jul 24 2021 at 11:20):

It's the same way that frames happen to be Heyting algebras, but maps of frames don't necessarily preserve the Heyting structure.

view this post on Zulip Henry Story (Jul 24 2021 at 12:02):

This is what Awodey writes:

This category $C_T$ is called the Kleisli category of the adjunction, and is defined as follows:

So we are taking $T$ to be the powerset functor $X \mapsto \mathcal{P}(X) = 2^X$.

Ok, that makes sense, if I jump right to the conclusion that the Kleisli category for the powerset monad is the same as the category of sets and relations. The objects are just sets. ((So I guess we can also have powersets of powersets, products and sums, ... in $\mathbf{Rel}$?))

I think this is what is confusing as a definition, because how can the arrows in two categories be the same if they don't have the same domain or codomain? I guess that what is really meant is, implementation-wise, if one is in the category $\mathbf{Set}$ then one can think of $f : A \to TB$ as an arrow in the Kleisli category. And indeed if $A$ is People and $B$ is Cats, then a function $\text{owns} : \text{People} \to \mathcal{P}(\text{Cat})$ takes a person to the cats that he owns, including possibly none, which makes a relation, since we can have one-to-many and one-to-none.

Again, this only makes sense implementation-wise. I can consider a function $\eta_{\text{Person}} : \text{Person} \to \mathcal{P}(\text{Person})$ to be the relation $\text{id}_{\text{Person}} : \text{Person} \to \text{Person}$, which takes each person $p$ to $p$. Otherwise the talk of "is the arrow" is weird, again since the domain and codomain are different in $\mathbf{Set}$.

So here, if we have $\text{friend} : \text{Person} \to \mathcal{P}(\text{Person})$ and we apply it twice to some person $p$ in $\mathbf{Set}$, we would end up in $\mathcal{P}\mathcal{P}(\text{Person})$, so we apply $\mu$ to get back to $\mathcal{P}(\text{Person})$. And that would give us the friend-of-a-friend relation in the Kleisli category, i.e. in $\mathbf{Rel}$.
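
Here is a small Haskell sketch of exactly that calculation, using Data.Set as a finite stand-in for the powerset monad (Set has no lawful Monad instance in Haskell because of its Ord constraint, so the Kleisli composition is written out by hand; the friend relation is of course made up):

```haskell
import qualified Data.Set as S

-- Kleisli arrows for the (finite) powerset monad are relations A -> P(B).
type Rel a b = a -> S.Set b

-- Kleisli composition: apply f, map g over the result, then flatten (mu).
compose :: Ord c => Rel a b -> Rel b c -> Rel a c
compose f g a = S.unions (map g (S.toList (f a)))

type Person = String

friend :: Rel Person Person
friend "alice" = S.fromList ["bob", "carol"]
friend "bob"   = S.fromList ["dave"]
friend _       = S.empty

-- Friend of a friend: friend composed with itself in the Kleisli category.
friendOfFriend :: Rel Person Person
friendOfFriend = compose friend friend

-- friendOfFriend "alice" == S.fromList ["dave"]
```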

view this post on Zulip Martti Karvonen (Jul 24 2021 at 16:24):

Henry Story said:

This is what Awodey writes:

This category $C_T$ is called the Kleisli category of the adjunction, and is defined as follows:

So we are taking $T$ to be the powerset functor $X \mapsto \mathcal{P}(X) = 2^X$.

Ok, that makes sense, if I jump right to the conclusion that the Kleisli category for the powerset monad is the same as the category of sets and relations. The objects are just sets. ((So I guess we can also have powersets of powersets, products and sums, ... in $\mathbf{Rel}$?))

I think this is what is confusing as a definition, because how can the arrows in two categories be the same if they don't have the same domain or codomain? I guess that what is really meant is, implementation-wise, if one is in the category $\mathbf{Set}$ then one can think of $f : A \to TB$ as an arrow in the Kleisli category. And indeed if $A$ is People and $B$ is Cats, then a function $\text{owns} : \text{People} \to \mathcal{P}(\text{Cat})$ takes a person to the cats that he owns, including possibly none, which makes a relation, since we can have one-to-many and one-to-none.

Again, this only makes sense implementation-wise.

I claim that the definitions of Awodey make sense literally, and not merely "implementation-wise", whatever the latter means. To define a category it is enough to tell what the objects are, for each pair of objects what the morphisms between them are, and define how composition works, and check that the end result is a category (i.e. satisfies the relevant axioms). It does not matter at all that those morphisms might happen to be equal (in the context of material set theory) to entities that are used elsewhere in mathematics for some other purposes, whether as numbers, objects of some other category or as morphisms in some other category or anything else. For instance, a morphism $A \to B$ in $\mathcal{C}^{\rm op}$ is a morphism $B \to A$ in $\mathcal{C}$.

More artificially (just to drive the point home), consider the unit $\eta_{\mathbb{R}} \colon \mathbb{R} \to \mathcal{P}\mathbb{R}$, which is a morphism of $\mathbf{Set}$. Let us build a new category $\mathcal{C}$ as follows. $\mathcal{C}$ has one object, which we will denote by $0$. We will define the only hom-set we need to define by setting $\hom_{\mathcal{C}}(0,0) := \{\eta_{\mathbb{R}}\}$, and define the composition in the only possible way by setting $\eta_{\mathbb{R}} \circ_{\mathcal{C}} \eta_{\mathbb{R}} = \eta_{\mathbb{R}}$. This results in a notationally awful but mathematically valid way of constructing the terminal category. Of course, no one would want to define it in such a manner, but it shows that it's perfectly possible for a morphism of one category going between distinct objects to be an identity morphism in some other category. When defining $\mathcal{C}_T$, this flexibility is actually convenient, which explains why Awodey defines things the way he does.

view this post on Zulip Martti Karvonen (Jul 24 2021 at 16:37):

To put it another way, if $f \colon A \to TB$ in $\mathcal{C}$, then $\mathcal{C}_T$ reuses this $f$ but types it differently, so that it becomes a map $A \to B$ in $\mathcal{C}_T$. This is fine because such typing data is part of the data of a category rather than something fixed in advance once and for all: that $\mathcal{C}_T$ repurposes $f$ from $\mathcal{C}$ is not a crime.

view this post on Zulip Henry Story (Jul 24 2021 at 17:54):

Thanks @Martti Karvonen. I'll go over it again, remembering to keep HomSets in mind. That looks like where the problem was.

view this post on Zulip John Baez (Jul 24 2021 at 20:52):

Everything Martti said looks right. The key, as far as I'm concerned, is that because $T$ is a monad, we can define a way to compose two morphisms $f : a \to Tb$ and $g : b \to Tc$ in the Kleisli category of $T$, namely by taking this composite in the category $C$:

$a \xrightarrow{f} Tb \xrightarrow{Tg} TTc \xrightarrow{\mu} Tc$

where $\mu$ is the multiplication of the monad $T$.

I'm sure Awodey explains this, since this is the most interesting feature of the Kleisli category - and without saying how morphisms are composed one hasn't specified a category!
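
That composite is exactly what Haskell's Kleisli "fish" operator does; here it is written out with join (the monad's $\mu$) so the three steps above are visible. A sketch only, with a made-up list-monad example:

```haskell
import Control.Monad (join)

-- Kleisli composition, written so the steps
--   a --f--> T b --T g--> T T c --mu--> T c
-- are visible: fmap g is T g, and join is the multiplication mu.
kleisli :: Monad t => (a -> t b) -> (b -> t c) -> (a -> t c)
kleisli f g = join . fmap g . f

-- e.g. with the list monad:
-- kleisli (\n -> [n, n + 1]) (\n -> [n * 10]) 1 == [10, 20]
```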

view this post on Zulip Henry Story (Jul 24 2021 at 21:12):

yes, Awodey gives the composition of two morphisms.
What has been hurting my head is the use of the word "is". The $1_A : A \to A$ is the morphism $\eta_A : A \to TA$. :hurt:
But I do believe the HomSet is the right way out. It's still hurting though...

view this post on Zulip John Baez (Jul 24 2021 at 21:17):

Put quotes around the "is" if it makes you feel better. But it's perfectly correct if you're doing math based on traditional set theory, as Awodey clearly is here. The set of morphisms $f : a \to b$ in the Kleisli category IS the set of morphisms $f : a \to Tb$ in the original category: these sets are equal, so each element of one set is equal to an element of the other.

view this post on Zulip John Baez (Jul 24 2021 at 21:20):

If you want to use structural set theory, or type theory, or something else, then it's your job to translate traditional math textbooks into your own favored framework.

view this post on Zulip Henry Story (Jul 24 2021 at 21:21):

Feeling better already. Should be out soon :hospital: :-)

view this post on Zulip Mike Shulman (Jul 25 2021 at 00:50):

I think this would be equally true in type theory. In type theory you would define a category by giving a type $C_0 : {\rm Type}$ of objects and a type family $\hom_C : C_0 \to C_0 \to {\rm Type}$ of morphisms, with composition and identity operations like $1^C : \prod_{c : C_0} \hom_C(c,c)$. Then given such a category $C$ and a monad $T$ on it, you would define the Kleisli category by $(C_T)_0 \equiv C_0$ and $\hom_{C_T}(a,b) \equiv \hom_C(a, Tb)$. The latter equality is then definitional, and one would then define $1^{C_T}_c \equiv \eta_c$ and so on.

view this post on Zulip Mike Shulman (Jul 25 2021 at 00:51):

(Quite generally, traditional math textbooks are arguably written in a language that's closer to dependent type theory than to ZFC.)

view this post on Zulip Mike Shulman (Jul 25 2021 at 00:51):

But it's true that in structural set theory (like ETCS) the equalities might be only isomorphisms, because it often doesn't have a basic notion of "family of sets".

view this post on Zulip Henry Story (Jul 25 2021 at 05:43):

That looks like how Agda Categories defines the Kleisli category.
I'll go through those for my rehab exercises. https://twitter.com/bblfish/status/1419049461007929345

I am feeling better than James Brown thanks to some help by Martti Karvonen and @johncarlosbaez helping me to understand the Kleisli category of a monad. https://mattecapu.github.io/ct-zulip-archive/stream/229199-learning:-questions/topic/coalgebras.20and.20comonads.html#247046659 https://www.youtube.com/watch?v=-TxHInkyNpg

- The 🐟‍‍ BabelFish (@bblfish)

view this post on Zulip Jacques Carette (Jul 25 2021 at 13:27):

I was catching up to this thread, and about to point to our mechanization of it... until I saw that it was already done. It's all completely straightforward, except for the proof that composition is associative, which does require real work. If one were using string diagrams, the proof would mostly 'collapse' because it's largely re-assembling things. What the current proof reveals is a peek at what's going to happen one level up (i.e. in bicategories): the associator will be quite a bit of work to define.
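Concretely, the step that needs the real work can be sketched by unfolding the definitions from earlier in the thread, with g \circ_K f := \mu \circ Tg \circ f for f: a \to Tb and g: b \to Tc:

(h \circ_K g) \circ_K f = \mu \circ T(\mu \circ Th \circ g) \circ f = \mu \circ T\mu \circ TTh \circ Tg \circ f

h \circ_K (g \circ_K f) = \mu \circ Th \circ \mu \circ Tg \circ f = \mu \circ \mu T \circ TTh \circ Tg \circ f

where the second equality in the last line is naturality of \mu, and the two right-hand sides agree by the monad associativity law \mu \circ T\mu = \mu \circ \mu T.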

view this post on Zulip Simon Burton (Jul 26 2021 at 14:42):

@Henry Story Have you seen Dan Marsden's "Category Theory Using String Diagrams"? I'm going over this again today (after doing a lot of my own string-diagramology) and he has a very nice description of monads & the Kleisli category and much more...

view this post on Zulip Henry Story (Jul 26 2021 at 16:40):

Ah yes, I was reading that in Jan 2020, but I did not finish it, as I was sidetracked into something else.

@BartoszMilewski Best to read the article "Category Theory Using String Diagrams" last after going through Bartosz course and reading the bicategories of relations. Also annotate the diagrams in the article carefully, for it to make sense, and try writing a few equations like this https://twitter.com/bblfish/status/1221425869899599873/photo/1

- The 🐟‍‍ BabelFish (@bblfish)

view this post on Zulip Henry Story (Jul 26 2021 at 16:44):

Currently I need to home in on how these can be used for the web server I am building. The problem is that it takes a lot of time to read maths, and I am not even sure yet that I know how to translate good maths into Scala code. I can use libraries that have already been written, though. It would be great to get to a place where I could read CT and just write it out if it fit what I needed.

Here is a recent article that looks very interesting: Algebraic and Coalgebraic Perspectives on Interaction Laws. It even mentions lenses...

view this post on Zulip Henry Story (Jul 29 2021 at 05:27):

A very well written set of blog posts, A Scala Comonad Tutorial, came out of work by a group at Verizon. The last post has very good illustrations of the graph comonad, and comes with a Quiver implementation.

view this post on Zulip Jon Awbrey (Jul 29 2021 at 11:45):

@Henry Story said:

Here is a recent article that looks very interesting: Algebraic and Coalgebraic Perspectives on Interaction Laws. It even mentions lenses …

Abstract

Monad algebras, turning computations over return values into values, are used to handle algebraic effects invoked by programs, whereas comonad coalgebras, turning initial states into environments (“cocomputations”) over states, describe production of coalgebraic coeffects that can respond to effects. (Monad-comonad) interaction laws by Katsumata et al. describe interaction protocols between a computation and an environment. We show that any triple of those devices can be combined into a single algebra handling computations over state predicates. This method yields an isomorphism between the category of interaction laws, and the category of so-called merge functors which merge algebras and coalgebras to form combined algebras. In a similar vein, we can combine interaction laws with coalgebras only, retrieving Uustalu’s stateful runners. If instead we combine interaction laws with algebras only, we get a novel concept of continuation-based runners that lift an environment of value predicates to a single predicate on computations of values. We use these notions to study different running examples of interactions of computations and environments.

Wow. That's the first blurb I've read where I felt like I understood at least the questions they were talking about, if not the answers they were exploring.

view this post on Zulip Henry Story (Jul 29 2021 at 12:09):

I guess it's the clearly stated importance of the distinction between effectful computation and a coeffectful environment that is so interesting. There is a really nicely designed web page Coeffects: Context-aware programming languages which describes the 2014 PhD thesis by Thomas Petricek. Coeffects are of course modelled with comonads.

view this post on Zulip Jon Awbrey (Jul 29 2021 at 13:00):

Henry Story said:

I guess it's the clearly stated importance of the distinction between effectful computation and a coeffectful environment that is so interesting. There is a really nicely designed web page Coeffects: Context-aware programming languages which describes the 2014 PhD thesis by Thomas Petricek. Coeffects are of course modelled with comonads.

This brings up many issues ...
Not sure it's proper to raise them in this, er, context.
But what the hecuba ...

view this post on Zulip Spencer Breiner (Oct 14 2021 at 20:49):

I have a sense that the Taylor series is a coalgebraic construction, via some kind of unrolling operation on (say) smooth functions
f\mapsto(f(0),f')\mapsto(f(0),f'(0),f'')\mapsto\cdots
Does anyone know of a good reference for material in this direction?

view this post on Zulip John Baez (Oct 14 2021 at 20:57):

Comonads should be perfectly tailored for this!

view this post on Zulip Conor McBride (Oct 14 2021 at 21:11):

Wherever there is differential structure (some sort of "you are here"), you will find comonads.

view this post on Zulip Mike Shulman (Oct 15 2021 at 05:08):

Perhaps Pavlovic-Escardo Calculus in coinductive form?

view this post on Zulip Tobias Fritz (Oct 15 2021 at 06:43):

It sounds like you may be looking for the jet comonad, since the sequence of derivatives that you wrote down is exactly the jet of f at 0. The jet comonad captures PDEs, since its Eilenberg-Moore category is exactly the category of PDEs! See for example Synthetic geometry of differential equations: I. Jets and comonad structure or Jets and differential linear logic for more. I learned about the latter paper from @JS Pacaud Lemay, who may be able to say more about the jet comonad in general than I could. Also the original reference A note on the category of partial differential equations may be worth taking a look at.

view this post on Zulip JS PL (he/him) (Oct 16 2021 at 16:33):

You may also be interested in the "Hurwitz series ring" H(A) = \Pi^\infty_{n=0} A over an algebra A, which is the cofree differential algebra over A, and thus induces a comonad H on the category of algebras. In characteristic zero, H(A) is isomorphic to the power series ring over A. If A is a differential algebra, then it is an H-coalgebra where A \to H(A) is defined as:
a \mapsto (a, a', a'', ...), which is similar to what you wish to obtain.

view this post on Zulip JS PL (he/him) (Oct 16 2021 at 16:37):

So setting A = C^\infty(\mathbb{R}), which is a differential algebra with the standard derivative, you get an H-coalgebra structure \omega: C^\infty(\mathbb{R}) \to H(C^\infty(\mathbb{R})). Then taking the evaluation-at-zero map ev_0: C^\infty(\mathbb{R}) \to \mathbb{R} (which is an algebra morphism), the Taylor series is recaptured via the composition:
C^\infty(\mathbb{R}) \xrightarrow{\omega} H(C^\infty(\mathbb{R})) \xrightarrow{H(ev_0)} H(\mathbb{R}).
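A toy version of this pipeline in Haskell, with formal power series standing in for C^\infty(\mathbb{R}) (a rough sketch; all names are illustrative): the coalgebra unrolls a series into the stream of its derivatives, and post-composing with evaluation at zero gives the sequence of derivatives at 0.

```haskell
-- A "smooth function" modelled by its formal power series sum c_n x^n,
-- as an infinite list of coefficients [c0, c1, c2, ...] (pad finite ones with zeros).
type Series = [Rational]

-- Formal differentiation: d/dx (sum c_n x^n) = sum (n+1) c_{n+1} x^n.
deriv :: Series -> Series
deriv cs = zipWith (*) (map fromInteger [1 ..]) (tail cs)

-- The coalgebra ω : A -> H(A), sending f to the stream (f, f', f'', ...).
unroll :: Series -> [Series]
unroll = iterate deriv

-- Evaluation at zero is the constant coefficient.
ev0 :: Series -> Rational
ev0 = head

-- H(ev0) . ω : the n-th entry is f^(n)(0) = n! * c_n.
taylorAtZero :: Series -> [Rational]
taylorAtZero = map ev0 . unroll

-- Example: exp(x) has coefficients 1/n!, so every derivative at 0 is 1.
expSeries :: Series
expSeries = map (\n -> 1 / fromInteger (product [1 .. n])) [0 ..]
```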

view this post on Zulip Patrick Nicodemus (Oct 17 2021 at 19:05):

Conor McBride said:

Wherever there is differential structure (some sort of "you are here"), you will find comonads.

do you have an elaboration on this? is there any kind of exposition you can recommend here

view this post on Zulip Henry Story (Oct 18 2021 at 08:08):

Conor McBride said:

Wherever there is differential structure (some sort of "you are here"), you will find comonads.

In philosophy (of language) those are called "indexicals". (So for me, being familiar with indexicals, this gives a potential way into "differential structure".)
David K. Lewis famously argued that the word "actual" as in "the actual world" is an indexical like "here", "now", "there", in that they all require a relation to a speaker-in-the-world to make sense. Since it has been argued that "modal logics are coalgebraic", this link to modalities makes complete sense, and so does the link to action. After all, you can only act if you are in a situation in the world, at a certain position from which you can affect things by changing them.
(One can also think of "ought", "should", etc as working by giving a point in the world, though in those cases the points are idealised, often non-actual ways things would be if they were good, to which one should strive.)

view this post on Zulip Henry Story (Oct 18 2021 at 09:05):

There is a very interesting discussion on Twitter on provability and truth, where I think that something like action mediates them. If algebra is thought of as more linguistic, and coalgebra as more semantic - or at least action oriented - one may have the material for providing category-theoretic foundations for pragmatism?
https://twitter.com/bblfish/status/1450006207427719171

@taooftypes @andrejbauer @mietek In "Between Saying and Doing" Brandom gives a relation between what Pragmatic ability is needed in order to be able to be said to be correctly using a vocabulary and what vocabularies can be used to describe such abilities. He mentions #CategoryTheory here. cc @DavidCorfield8 https://twitter.com/bblfish/status/1450006207427719171/photo/1

- The 🐟‍‍ BabelFish (@bblfish)

view this post on Zulip Conor McBride (Oct 18 2021 at 09:07):

Most of my writing on this has been on Haskell StackOverflow: e.g., https://stackoverflow.com/a/25530311/828361 where differentiation acts on data structures, giving rise to spatial notions of linear context.
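The simplest instance of that pattern is the list zipper: the derivative of the list functor gives one-hole contexts, and pairing a context with a focused element yields a comonad. A rough sketch of the idea (an illustration, not code from the linked answer):

```haskell
-- A one-hole context for a list is a pair of lists (elements to the left,
-- elements to the right); plugging an element back into the hole gives a zipper.
data Zipper a = Zipper [a] a [a]   -- left context (nearest first), focus, right context

-- "You are here": read the focused element.
extract :: Zipper a -> a
extract (Zipper _ x _) = x

-- Move the focus one step, if possible.
left, right :: Zipper a -> Maybe (Zipper a)
left  (Zipper (l : ls) x rs) = Just (Zipper ls l (x : rs))
left  _                      = Nothing
right (Zipper ls x (r : rs)) = Just (Zipper (x : ls) r rs)
right _                      = Nothing

-- Refocus the zipper at every position, so each cell "sees" its surroundings,
-- in the same way as the Store comonad's duplicate.
duplicate :: Zipper a -> Zipper (Zipper a)
duplicate z = Zipper (unfold left z) z (unfold right z)
  where
    unfold step = go
      where
        go y = case step y of
                 Nothing -> []
                 Just y' -> y' : go y'
```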