You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
The cofree coalgebra on the signature $$(s, 0)$$ (where $$s$$ is a unary function and $$0$$ is a constant) is the extended natural numbers $$\overline{\mathbb{N}} = \mathbb{N} \cup \{\infty\}$$, with $$s(n) = n+1$$ for $$n \in \mathbb{N}$$ and $$s(\infty) = \infty$$.
So I totally see why this is the final coalgebra of the endofunctor corresponding to this signature.
But the other intuition I am supposed to have about cofree things, that they are the largest things closed under certain operations, seems not to work. This intuition tells me I should also have an element $$\omega$$ distinct from $$\infty$$ with $$s(\omega) = \omega$$, because that would make it larger, for instance. If we're really trying to make it as large as possible, why not have a proper class of 1-cycles, a proper class of 2-cycles, etc.? Why not have an element $$x$$ with $$s(x) = 0$$? Why not have everything that can happen happen? How is this intuition supposed to work?
Isn't it the case that any additional $$x$$ such that $$s(x) = x$$ will be bisimilar to $$\infty$$ (all the operations map these two points to bisimilar points, coinductively so)? So even if you could add more points, you already have all the possible observable behaviours.
That's helpful... why don't we have additional elements $$x$$ and $$y$$ with $$s(x) = y$$ and $$s(y) = x$$ then? That seems like a new behavior.
I don't think so: you could define a relation relating both $$x$$ and $$y$$ to $$\infty$$ that is a witness of the bisimilarity of $$x$$ and $$y$$ with $$\infty$$. Since you do not have an operation observing the equality of elements, you cannot distinguish the 2-cycle from just $$\infty$$.
A common intuition is something like "algebras are, coalgebras do". The final coalgebra on this signature can be usefully thought of as a sort of state machine. If you press the "s" button, it will do a state change where it reduces the state variable by 1, or stay in the same state if it was in the "infinity" state.
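To make that concrete, here's a quick sketch in Haskell (the names `State` and `press` are mine; I'm using `Maybe` so that the state 0, which the button metaphor glosses over, can report that it has nowhere left to go):

```haskell
-- A state is either a finite count or the "infinity" state.
data State = Finite Integer | Infinity deriving (Eq, Show)

-- Pressing the "s" button performs one observation/state change:
-- a positive finite state decrements, Infinity stays Infinity,
-- and 0 reports that there is no further state (Nothing).
press :: State -> Maybe State
press (Finite 0) = Nothing
press (Finite n) = Just (Finite (n - 1))
press Infinity   = Just Infinity
```

So e.g. `press (Finite 2)` gives `Just (Finite 1)`, and pressing from `Infinity` never changes anything.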
Don't we have a bisimulation relating everything in $$\overline{\mathbb{N}}$$ to $$\infty$$ then?
Or maybe the problem with that is it wouldn't preserve the constant.
How can you describe a cofree object for a given signature in terms of bisimulation?
I think "largest things closed under certain operations" isn't a good intuition for coinductive types. First of all, it should really be "largest things COclosed under certain COoperations". But also, I think "largest" is misleading except when the ambient category is a poset, such as for a coinductively defined predicate. When working with a category that's not a poset, it's better to replace "largest" with "terminal" wherever you see it.
With inductive types, the intuition "smallest thing closed under certain operations" works great! What is the dual intuition for coinductive types?
I guess you said it, "largest things COclosed under certain COoperations". So what does "coclosed" mean?
Joshua Meyers said:
Don't we have a bisimulation relating everything in $$\overline{\mathbb{N}}$$ to $$\infty$$ then?
I got puzzled by your question because that sounded plausible. Actually I think it is indeed the case that any point in $$\overline{\mathbb{N}}$$ would be bisimilar to $$\infty$$ if we take the $$s$$ you provided as the coalgebra structure. But at least according to Wikipedia that's not the right coalgebra structure on $$\overline{\mathbb{N}}$$ making that set into the final coalgebra for this signature. The coalgebra structure $$c : \overline{\mathbb{N}} \to \overline{\mathbb{N}} + 1$$ given there is defined by $$c(0) = *$$, $$c(n) = n - 1$$ if $$n > 0$$, and $$c(\infty) = \infty$$. So you can observe and distinguish any finite number from $$\infty$$ by sufficiently many applications of the coalgebra map.
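A quick sketch of that last observation (Haskell; `c` mirrors the Wikipedia-style structure map with `Nothing` standing in for the summand $$1$$, and `survives` is my own name for "is still alive after k observations"):

```haskell
-- Extended naturals with the structure map c : N∞ -> N∞ + 1,
-- rendered with Maybe: Nothing plays the role of the summand 1.
data ExtNat = Fin Integer | Inf deriving (Eq, Show)

c :: ExtNat -> Maybe ExtNat
c (Fin 0) = Nothing
c (Fin n) = Just (Fin (n - 1))
c Inf     = Just Inf

-- Apply the destructor k times: a finite n dies after n+1 steps, Inf never does.
survives :: Int -> ExtNat -> Bool
survives 0 _ = True
survives k x = case c x of
  Nothing -> False
  Just y  -> survives (k - 1) y
```

So `survives (n+1)` already separates `Fin n` from `Inf`: every finite number is eventually observed to stop, while $$\infty$$ never is.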
Well, the emphasis is probably on the co- in co-operations rather than co-closed. The point is that a coinductive type is defined by destructors, not constructors. For instance, a coalgebra for the "natural numbers" functor doesn't have a "zero operation" and "successor operation" the way an algebra does; it has a single destructor that either returns a predecessor or says "I'm zero, I have no predecessor".
Mike Shulman said:
Well, the emphasis is probably on the co- in co-operations rather than co-closed. The point is that a coinductive type is defined by destructors, not constructors. For instance, a coalgebra for the "natural numbers" functor doesn't have a "zero operation" and "successor operation" the way an algebra does; it has a single destructor that either returns a predecessor or says "I'm zero, I have no predecessor".
OK that clarifies things substantially. Since we want the largest structure with this destructor, we would like to have elements with every possible "destructor profile" --- an element with no predecessor, an element which has a predecessor but whose predecessor has no predecessor, an element whose predecessor has a predecessor but whose predecessor's predecessor has no predecessor, etc., as well as an element which has an $$n$$th predecessor for all $$n$$.
However, we do want to avoid redundancy: the cofree structure should not ever have two elements with the same destructor profile --- is this a general rule for cofree structures on equational (co)signatures?
It seems like this avoidance of redundancy is dual to the absence of equalities in free structures on equational signatures. How can this be made precise?
It's the terminality of the coalgebra mentioned earlier: if you had e.g. two elements with the same "destructor profile", then a coalgebra built from just such an element would map into ours in two ways.
Right, the picture of cofree coalgebras as terminal coalgebras makes a lot of sense to me. But I'm wondering if it's possible to dualize the picture of free algebras as smallest sets closed under operations.
I just don't have that picture. In my head, for something to be "closed under" some operations, it has to be a subset of some larger structure on which those operations are defined. An abstract group, for instance, isn't "closed under" multiplication; what's "closed under" multiplication is a subgroup of some given group.
Also: absence of equations should give you weak initiality, and adding "no extra junk" promotes this to initiality, and similarly, having each destructor profile present should give weak terminality, and the "no redundancy" above should promote this to terminality.
Martti Karvonen said:
Also: absence of equations should give you weak initiality, and adding "no extra junk" promotes this to initiality, and similarly, having each destructor profile present should give weak terminality, and the "no redundancy" above should promote this to terminality.
Nice!
having each destructor profile present should give weak terminality
Here's a counterexample: Let $$X = \mathbb{N} + \mathbb{Z}$$ with $$c : X \to X + 1$$ given by $$c(0) = *$$, $$c(n) = n - 1$$ for $$n \in \mathbb{N}$$ with $$n > 0$$, and $$c(m) = m - 1$$ for $$m \in \mathbb{Z}$$. Then $$X$$ (where every element of the $$\mathbb{Z}$$ part has a predecessor forever) has every destructor profile, but there is no morphism from $$\overline{\mathbb{N}}$$ to it: $$\infty$$ would have to land on a fixed point of $$c$$, and there is none.
Mike Shulman said:
I just don't have that picture. In my head, for something to be "closed under" some operations, it has to be a subset of some larger structure on which those operations are defined. An abstract group, for instance, isn't "closed under" multiplication; what's "closed under" multiplication is a subgroup of some given group.
Sure, so let's say that a free magma on $$X$$ is the smallest subset of $$(X \cup \{(,)\})^*$$ (where $$(-)^*$$ is Kleene star) which contains $$X$$ and is closed under the operation taking two strings $$u$$ and $$v$$ and outputting $$(uv)$$.
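To illustrate this construction (a sketch, with hypothetical generators `a` and `b` and the pairing written `(uv)`; `layer k` enumerates the words reachable with at most k nested pairings, approximating the "smallest closed subset" from below):

```haskell
import Data.List (nub)

-- Hypothetical generating set; any set X works.
gens :: [String]
gens = ["a", "b"]

-- layer 0 is the generators; layer k adds all pairings (uv) of
-- words already built, discarding duplicates. The free magma is
-- the union of all layers.
layer :: Int -> [String]
layer 0 = gens
layer k = nub (prev ++ [ "(" ++ u ++ v ++ ")" | u <- prev, v <- prev ])
  where prev = layer (k - 1)
```

For example `"(ab)"` shows up in `layer 1`, and `"((ab)a)"` first appears in `layer 2`.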
Mike Shulman said:
Well, the emphasis is probably on the co- in co-operations rather than co-closed. The point is that a coinductive type is defined by destructors, not constructors. For instance, a coalgebra for the "natural numbers" functor doesn't have a "zero operation" and "successor operation" the way an algebra does; it has a single destructor that either returns a predecessor or says "I'm zero, I have no predecessor".
I understood what you said in this comment from 2011 as saying that the coinductive naturals ARE defined with the zero and successor: https://golem.ph.utexas.edu/category/2011/07/coinductive_definitions.html#c038996 . Similarly I've heard people say e.g. that types defined in Haskell are coinductive rather than inductive, yet they're defined by constructors, and not destructors. Could you clarify this point please? (Edit: rereading my comment it sounds like I may be accusing you of being unclear--please know that is totally not my intention, I'm 100% sure it is just my being confused/dense :) )
Joshua Meyers said:
having each destructor profile present should give weak terminality
Here's a counterexample: Let $$X = \mathbb{N} + \mathbb{Z}$$ with $$c : X \to X + 1$$ given by $$c(0) = *$$, $$c(n) = n - 1$$ for $$n \in \mathbb{N}$$ with $$n > 0$$, and $$c(m) = m - 1$$ for $$m \in \mathbb{Z}$$. Then $$X$$ (where every element of the $$\mathbb{Z}$$ part has a predecessor forever) has every destructor profile, but there is no morphism from $$\overline{\mathbb{N}}$$ to it: $$\infty$$ would have to land on a fixed point of $$c$$, and there is none.
You're right, I was being too optimistic.
Joshua Meyers said:
Sure, so let's say that a free magma on $$X$$ is the smallest subset of $$(X \cup \{(,)\})^*$$ (where $$(-)^*$$ is Kleene star) which contains $$X$$ and is closed under the operation taking two strings $$u$$ and $$v$$ and outputting $$(uv)$$.
That's not the definition of "free magma"; it's one particular construction of it.
To drive the point home, if you're compiling this construction in standard material set theory, and your friend is using a different construction of ordered pairs, they'll end up with a different set as the free magma (using otherwise the same construction). Neither of the end results is smaller than the other, and it's the initiality of the end result that really matters.
Nasos Evangelou-Oost said:
I understood what you said in this comment from 2011 as saying that the coinductive naturals ARE defined with the zero and successor: https://golem.ph.utexas.edu/category/2011/07/coinductive_definitions.html#c038996 . Similarly I've heard people say e.g. that types defined in Haskell are coinductive rather than inductive, yet they're defined by constructors, and not destructors.
This is actually a long-standing confusion about coinductive types, which has only recently started to be resolved. So while there is something to that comment of mine from 2011, its phrasing is based on that old confusion. In particular, the constructor-based ("positive") Coq syntax for coinductive types that I used in that comment is now deprecated in favor of a destructor-based ("negative") one.
The categorical definition of coinductive type is as a terminal coalgebra, and if you translate this directly into syntax you get a definition in terms of destructors. So a destructor-based definition is the only one that's semantically justified, and recently people realized that it's also the only one that's syntactically well-behaved: at least in the presence of dependent types, constructor-based coinductive types break subject reduction. (See for instance the appendix of these slides.)
It is true that destructor-based coinductive types have constructors, because the structure map of a terminal coalgebra is an isomorphism by Lambek's lemma, hence it has an inverse . And the informal description of coinductive types that I gave in that comment is correct. But it doesn't work syntactically (at least in dependent type theory) to allow "matching" against those constructors, and we shouldn't think of a coinductive type as defined by constructors rather than destructors.
The status of Haskell is less clear to me. Certainly its laziness means that its "datatypes" can be infinite, like coinductive types and unlike inductive ones. Haskell does permit matching against datatype constructors, but since Haskell isn't dependently typed, it's possible that constructor-based coinductive types aren't problematic. So I guess the thing to say is that Haskell's datatypes are positive coinductive types, but be aware that at least in dependent type theory it's more correct to view coinductive types negatively.
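A standard illustration of that point (this is textbook Haskell, not code from the discussion): laziness lets a "datatype" contain infinite values, and the same type supports both the positive reading (matching on the constructor) and the negative one (head/tail destructors).

```haskell
-- A lazy "list without nil": Haskell's version of the coinductive stream type.
data Stream a = Cons a (Stream a)

-- An infinite value; only laziness makes this definition productive.
nats :: Stream Integer
nats = go 0 where go n = Cons n (go (n + 1))

-- Positive reading: we may pattern-match against the constructor...
takeS :: Int -> Stream a -> [a]
takeS n _ | n <= 0 = []
takeS n (Cons x xs) = x : takeS (n - 1) xs

-- ...negative reading: the same data viewed through destructors.
headS :: Stream a -> a
headS (Cons x _) = x

tailS :: Stream a -> Stream a
tailS (Cons _ xs) = xs
```

Here `takeS 5 nats` evaluates to `[0,1,2,3,4]` even though `nats` is infinite; in a strict language the definition of `nats` would simply diverge.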
Mike Shulman said:
Joshua Meyers said:
Sure, so let's say that a free magma on $$X$$ is the smallest subset of $$(X \cup \{(,)\})^*$$ (where $$(-)^*$$ is Kleene star) which contains $$X$$ and is closed under the operation taking two strings $$u$$ and $$v$$ and outputting $$(uv)$$.
That's not the definition of "free magma"; it's one particular construction of it.
Sure, I like this method of construction because I like to think in terms of syntax. Is there a similar, very concrete, syntactic construction (or maybe, "destruction") of cofree things?
In some cases at least, you can define the elements of a coinductive type to be compatible streams of elements of a corresponding inductive type that are its "finite approximations".
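Here's one way such a representation can look (a sketch under my own encoding, not a canonical one: a conatural number is the stream of answers to "does it survive $$k$$ observations?", and compatibility says this stream never flips back from False to True):

```haskell
-- A conatural as its stream of finite approximations, encoded as a
-- predicate: (x k) answers "does x survive k observations?".
-- Compatible means antitone: once False, it stays False.
type Conat = Integer -> Bool

-- The finite number n survives exactly n observations.
fin :: Integer -> Conat
fin n = \k -> k <= n

-- Infinity survives every finite number of observations.
omega :: Conat
omega = const True
```

Reading off `f 0, f 1, f 2, ...` gives either a stream `True ... True False False ...` (a finite number) or all `True` ($$\infty$$), matching the destructor profiles discussed above.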
Mike Shulman said:
Well, the emphasis is probably on the co- in co-operations rather than co-closed. The point is that a coinductive type is defined by destructors, not constructors. For instance, a coalgebra for the "natural numbers" functor doesn't have a "zero operation" and "successor operation" the way an algebra does; it has a single destructor that either returns a predecessor or says "I'm zero, I have no predecessor".
I have been holding one question. ( It might be a silly question! )
For the case of the natural numbers, I cannot see why $$X \mapsto X + 1$$ is the coalgebra functor rather than $$X \mapsto X \times 1$$.
That is, my question is, "why can't the category be an abelian category?"
Surely $$X \times 1$$ is isomorphic to just $$X$$?
You can ask what the initial algebra and final coalgebra are for the endofunctor $$X \mapsto X + 1$$ in an abelian category (where it is isomorphic to $$X \mapsto X \times 1$$, but also to the identity functor), and the answer is just that both are the zero object, I think.
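Spelling that out (my own gloss): in an abelian category the terminal object $$1$$ is a zero object and finite products coincide with coproducts (biproducts), so

$$X + 1 \;\cong\; X \oplus 0 \;\cong\; X \;\cong\; X \times 1,$$

i.e. the signature functor is isomorphic to the identity. The initial algebra of the identity functor is the initial object and its final coalgebra is the terminal object, and in an abelian category both of these are the zero object $$0$$.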
@D.G. Berry @Kenji Maillard
Ah, yes! Thank you very much. Really, it is!
( Now I modified my intuition about products and coproducts.
I now start to think it is abelian categories that are strange! )
Well, the coalgebras $$X \to X + 1$$ and $$X \to X \times 1$$ are both valid things to consider; it's not that one is right and the other is wrong.
But the terminal coalgebra of $$X \mapsto X \times 1$$ is always just the terminal object, so despite its validity, it's not usually the thing you want to consider. (-: