You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Question about coinductive free things:
For a fixed set A, the free monoid on A is A*, whose elements are finite strings in A and whose multiplication is concatenation.
There is also the monoid A^∞ of finite or infinite strings in A, which contains A* as a submonoid, where if a is infinite, a · b = a.
In computer science I think it is called the 'coinductive list monoid'.
Another well-known example is the conatural numbers (that is, the coinductive free monoid on a singleton set), which is the natural numbers plus infinity.
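A quick Python sketch of the finite-or-infinite string monoid described above (my own illustration, not from the thread; all names are made up). Representing possibly-infinite strings as lazy iterators makes a · b = a automatic whenever a is infinite, since b is simply never reached:

```python
from itertools import islice, count, chain

def concat(a, b):
    """Concatenation in the monoid of finite-or-infinite strings.

    a and b are zero-argument functions returning fresh iterators, so a
    possibly-infinite string can be consumed more than once. If a is
    infinite, concat(a, b) never reaches b, so a . b = a.
    """
    return lambda: chain(a(), b())

def take(s, n):
    """Inspect a finite prefix of a possibly-infinite string."""
    return list(islice(s(), n))

abc = lambda: iter("abc")              # a finite string
xs  = lambda: ("x" for _ in count())   # the infinite string xxx...

take(concat(abc, abc), 10)   # finite . finite: ordinary concatenation
take(concat(xs, abc), 5)     # infinite . finite: the infinite string wins
```

The laziness is doing all the work here: associativity and the "infinite absorbs on the right" law hold at every finite observation depth, which is all we can ever check.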
In category theory these free structures come from left adjoints to a forgetful functor.
Where do the coinductive free structures come from? Is it actually the right adjoint??
Also, is there any reason why coinductive structures seem to be uncommon in maths since in computer science they seem to be quite important?
Yes, the "cofree" structures come from having a right adjoint to the forgetful functor. I haven't yet gotten a solid intuition for coinduction, but it seems very powerful. Bart Jacobs' book http://www.cs.ru.nl/B.Jacobs/CLG/JacobsCoalgebraIntro.pdf is a good introduction.
As for why it's become prominent in CS and not math, that is a very good question.
Thank you Christian
For some reasons (which we could get into) mathematicians like to describe algebraic structures - like groups, rings, etc. - using Lawvere theories, and then it's very handy that any structure of a given kind can be described using free ones, either directly as an algebra of a monad on Set, or as a coequalizer of free algebras (which is almost the same thing).
What would happen to math if we took the other path and tried to use coalgebras of comonads on Set?
One possible reason is that mathematicians are generally happy to work directly with infinite things, whereas programmers are forced to encode infinite things with finite information. One of the uses of coinductive structures is to do the latter, but mathematicians don't need them as much for this. Sometimes what the mathematicians are doing would be clarified by a coinductive perspective, but coinduction also has limitations relative to explicit infinity, so the mathematicians aren't hugely motivated to pick it up.
mathematicians are generally happy to work directly with infinite things
:muscle: It's interesting how we convince ourselves that we can do this.
It's not as if our brains are infinitely big.
Another possible reason is that mathematics is generally static whereas programming can be dynamic. For instance, variables in mathematics don't change their values; if we let x = 3 then we don't go back later and say let x = 4, and there's no analogue of a for-loop in mathematics: we only use recursion and induction, never iteration. The relevance of this here is that inductive definitions are datatypes that describe what something is, in a static way, whereas coinductive definitions are interfaces or behaviors that a dynamic process can implement.
The usual description of a coinductive definition as an infinite object is the "trace" of all possible execution paths of a process, but since a corecursively defined function doesn't evaluate until it's "forced", I sometimes find it helpful to instead think of the definition of such a function as creating an "object" or "process" that you can "communicate" with by applying a destructor, thereby getting a response and/or changing its "state".
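That "object you communicate with by applying a destructor" picture can be sketched in Python (an illustrative toy; the names `Stream`, `step`, `from_`, and `observe` are all invented). The stream stores a thunk, and nothing runs until the destructor is applied:

```python
# A coinductive stream presented as an object with one destructor, `step`,
# which returns (observation, rest-of-stream). Nothing is computed until
# the destructor is applied.

class Stream:
    def __init__(self, step):
        self._step = step          # thunk: () -> (value, Stream)

    def step(self):
        """Destructor: communicate with the process, get one observation."""
        return self._step()

def from_(n):
    """The stream n, n+1, n+2, ... defined corecursively."""
    return Stream(lambda: (n, from_(n + 1)))

def observe(s, k):
    """Apply the destructor k times -- a finite trace of the process."""
    out = []
    for _ in range(k):
        head, s = s.step()
        out.append(head)
    return out

observe(from_(0), 5)
```

Each call to `step` both yields a response and hands back a "new state", matching the process reading above.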
John Baez said:
What would happen to math if we took the other path and tried to use coalgebras of comonads on Set?
Of course, coalgebraic structures do pop up in mathematics. But it's true that they generally tend to play second fiddle to algebraic ones. I wonder how much of that can be blamed on the nature of our base categories like Set.
What are some good examples of categories that are comonadic over Set? Here are two simple ones:
Here are some relevant old discussions from the n-Cafe: https://golem.ph.utexas.edu/category/2008/11/coalgebraically_thinking.html, https://golem.ph.utexas.edu/category/2008/12/the_status_of_coalgebra.html. Maybe John can find more.
Thank you all for the references.
I still have some basic questions maybe someone can help me clear up.
First, what is it that makes coalgebras about 'processes' and algebras about 'structures'? Is it simply because given an algebra a : FX → X we can only apply a once to an element of FX, but if we have a coalgebra c : X → FX and an x ∈ X we can apply it repeatedly to get a sequence x, c(x), (Fc)(c(x)), …?
Second, for the examples I mentioned I don't see how the monoid operation is determined by the coalgebra. E.g., the natural (conatural) numbers are defined as an initial algebra (terminal coalgebra) of the endofunctor X ↦ 1 + X on Set, which determines them as sets with a zero and successor (predecessor) function. But do those functions canonically determine the monoid operations on the natural and conatural numbers?
Third, Christian mentioned that the cofree structures come from a right adjoint to the forgetful functor. What about e.g. the right adjoint to the forgetful functor Mon → Set -- does it exist? And if so, when applied to a singleton set does it give the conatural numbers?
Fourth, what does it mean and what is the significance of a category being (co)monadic? I only found the definitions for (co)monadic functors but I don't really see the connection to all of this.
Nasos Evangelou-Oost said:
First, what is it that makes coalgebras about 'processes' and algebras about 'structures'? Is it simply because given an algebra a : FX → X we can only apply a once to an element of FX, but if we have a coalgebra c : X → FX and an x ∈ X we can apply it repeatedly to get a sequence x, c(x), (Fc)(c(x)), …?
That's part of it. Another aspect is how they are implemented in a programming language. Suppose you define an infinite stream of ones by `ones := 1 :: ones`. We can't try to evaluate this completely right away, the way we could with the definition of a finite list, or we would go into an infinite spin. So instead the computer stores the definition of `ones` as a sort of "thunk" or "callback" (or "lazy list"), and only evaluates it further when some other part of the program tries to inspect the elements of `ones` (necessarily some finite initial segment thereof). In general, the definition of an element of a coinductive type could be quite a complicated function, and instead of being evaluated right away it is saved and "called" when the destructors are applied. Thus, we can think of the destructors of a coinductive type as defining an "interface" -- the things that other parts of the program can "call" -- and the definition of an element of the coinductive type as creating an object that implements that interface, by specifying the "methods" that instantiate the interface.
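For readers without a lazy language at hand, roughly the same idea can be mimicked in (eager) Python by hiding the tail behind a thunk; the names `ones` and `take` are invented for illustration:

```python
# Eager evaluation of `ones = (1, ones)` would spin forever, so we delay
# the tail behind a function call (a "thunk"), mirroring ones := 1 :: ones.

def ones():
    return (1, ones)           # one observation, plus a thunk for the rest

def take(stream_thunk, n):
    """Force only a finite initial segment of the lazy list."""
    out = []
    for _ in range(n):
        head, stream_thunk = stream_thunk()
        out.append(head)
    return out

take(ones, 4)
```

Nothing is evaluated until `take` "calls" the thunk, which is exactly the forcing behavior described above.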
Second, for the examples I mentioned I don't see how the monoid operation is determined by the coalgebra. E.g., the natural (conatural) numbers are defined as an initial algebra (terminal coalgebra) of the endofunctor X ↦ 1 + X on Set, which determines them as sets with a zero and successor (predecessor) function. But do those functions canonically determine the monoid operations on the natural and conatural numbers?
Yes, in that you can use the (co)recursion principles to construct the monoid operations. But that's something special to these examples: an arbitrary (co)inductive type doesn't come with a monoid operation.
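To make the corecursive construction concrete, here is a Python sketch (my own, not from the thread; all names invented): a conatural number is represented by its lazy predecessor structure, and addition is defined by corecursion on the first argument. Adding anything to infinity then yields infinity, observably at every finite stage:

```python
# A conat is a thunk returning None (zero) or another conat (the
# predecessor of a successor).

ZERO = lambda: None

def succ(n):
    return lambda: n

INF = lambda: INF                 # its own predecessor: "infinity"

def add(m, n):
    """Addition defined by corecursion: peel one predecessor off m."""
    def step():
        p = m()
        return n() if p is None else add(p, n)
    return step

def to_int(n, fuel=100):
    """Observe a conat with bounded fuel; None means 'at least fuel'."""
    for k in range(fuel):
        p = n()
        if p is None:
            return k
        n = p
    return None

to_int(add(succ(succ(ZERO)), succ(ZERO)))   # observe 2 + 1
to_int(add(INF, succ(ZERO)), fuel=10)       # infinity + 1 never bottoms out
```

The point, matching the remark above, is that the monoid operation is built from the (co)recursion principle; it is extra structure these particular examples happen to support.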
Third, Christian mentioned that the cofree structures come from a right adjoint to the forgetful functor. What about e.g. the right adjoint to the forgetful functor Mon → Set -- does it exist? And if so, when applied to a singleton set does it give the conatural numbers?
It does not exist. That forgetful functor does not preserve colimits.
Fourth, what does it mean and what is the significance of a category being (co)monadic? I only found the definitions for (co)monadic functors but I don't really see the connection to all of this.
Roughly speaking, a monadic category is one whose objects are sets (or whatever other base category you have) equipped with "algebraic structure", while in a comonadic category they are equipped with "coalgebraic structure".
(By the way, regarding the rarity of coinduction in mathematics, there's probably also an interesting story to be told regarding the foundation axiom of ZFC and the much less well-known anti-foundation axioms of Aczel, Finsler, etc.)
I wanted to add that Paolo's notes give different intuitions about comonads.
After quickly reading it before posting, I thought it might be good to emphasize here that coalgebras of an endofunctor and coalgebras of a comonad are different things (I am not sure how much, though). Paolo shows in Example 5.4.8 that coalgebras of the stream comonad are streams that look like (x, f(x), f(f(x)), …) for an endomorphism f, which is a big restriction over the arbitrary streams that arise from coalgebras of the stream endofunctor.
Another thing mentioned in Example 5.3.7 is that a co-Kleisli morphism for the stream comonad, being a function A^ℕ → B, may depend on the whole stream, which is not very CS-friendly. What that tells me is that cofree structures should be less relevant in CS, but I don't know if that is true.
Ralph Sarkis said:
What that tells me is that cofree structures should be less relevant in CS, but I don't know if that is true.
I'll defend comonads in CS here and say that comonads are fundamental to the categorical semantics of linear logic. In particular, a comonad captures the exponential modality !. Furthermore, the coKleisli category in this case is going to be a cartesian closed category, and therefore a model of the λ-calculus. The idea being that coKleisli maps should be thought of as smooth maps, while the maps in the base category are the linear maps.
(I don't know if my comment is useful to the conversation above, but I thought it was worth mentioning. I apologize if it's not!)
The relation between endofunctor-coalgebras and comonad-coalgebras is that if cofree coalgebras for an endofunctor exist, then they are the values of a comonad whose comonad-coalgebras coincide with the original endofunctor-coalgebras. So, at least under niceness hypotheses, comonad-coalgebras are a more general concept, as they include all endofunctor-coalgebras as well as other things.
What do you mean @Mike Shulman? If F is an endofunctor with cofree coalgebras, then what is the comonad whose comonad-coalgebras coincide with F's endofunctor-coalgebras?
This is a special case of the fact that an action of an object a on another object x in a monoidal category is the same thing as an action of the free monoid on a.
Well, it's a special case of the dual statement.
Unfortunately, the monoidal category of endofunctors does not have free (co)monoids in general, but when a given endofunctor has a free co-monad, the co-algebras of the co-monad will correspond to the co-algebras of the underlying endofunctor.
Right. Explicitly, the comonad is the functor that sends X to the cofree coalgebra on X.
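As a concrete special case (my own sketch, not from the thread): for the identity endofunctor on Set, the cofree coalgebra on X is the set of streams over X, and the resulting comonad is the stream comonad mentioned earlier. Approximating streams by finite prefixes (Python lists) so the sketch can actually run:

```python
# Stream comonad data, on finite prefixes standing in for streams X^N.

def counit(s):
    """extract: the head of the stream."""
    return s[0]

def duplicate(s):
    """The stream of iterated tails: (s, tail s, tail (tail s), ...)."""
    return [s[i:] for i in range(len(s))]

# A coalgebra of this comonad amounts to an endomorphism f : X -> X; its
# structure map sends x to the orbit (x, f(x), f(f(x)), ...), which is the
# "restricted" shape of stream noted above.
def orbit(f, x, n):
    out = []
    for _ in range(n):
        out.append(x)
        x = f(x)
    return out

counit(orbit(lambda k: k + 1, 0, 5))
duplicate([1, 2, 3])
```

On these finite approximations one can also check a comonad law directly, e.g. `counit(duplicate(s)) == s`.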
Is there a general criterion for when this assembles into a co-monad?
Mike made it sound like it always does, given that the cofree coalgebra on any object exists:
if cofree coalgebras for an endofunctor exist, then they are the values of a comonad whose comonad-coalgebras coincide with the original endofunctor-coalgebras.
Ah I missed that. Thanks
I'm currently trying to verify this...so far I've got that given an endofunctor F on a category C, the existence of cofree coalgebras on all objects is equivalent to each comma category (U ↓ X) having a terminal object, where X is an object of C and U : Coalg_F → C is the forgetful functor, which by a standard theorem is in turn equivalent to U having a right adjoint R sending X to the terminal object of (U ↓ X). So the functor it seems that Mike is talking about is UR. So then UR is a comonad.
So given an endofunctor-coalgebra (X, c), there is a unique coalgebra morphism (X, c) → RX corresponding to the identity on X under the adjunction, which gives a morphism X → URX. So we just need to verify that X → URX is a comonad-coalgebra and that this assignment is a bijection between F-endofunctor-coalgebras and UR-comonad-coalgebras.
One place the dual argument is written out is https://maciejcs.wordpress.com/2012/04/17/free-monads-and-their-algebras/
See also https://ncatlab.org/nlab/show/algebra+for+an+endofunctor#relation_to_algebras_over_a_monad and https://ncatlab.org/nlab/show/free+monad#algebraicallyfree_monads.