You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I know that in algebra comonoids/coalgebras/Hopf algebras are quite interesting beasts, but all the examples I know are of two types:
This lack of interesting examples might be a byproduct of the fact that our metamathematics is overwhelmingly cartesian, hence comonoids are somewhat forced to be copy-like. But is that true? Are there comonoids which don't really come from copying?
A way out might come from structures that 'decompose', so that $\Delta(x)$ is given by a decomposition of $x$ into a 'left' and 'right' part. Universally, $\Delta$ should be given by binary trees. But then the left branch of the right branch is not the same as the right branch of the left branch, so coassociativity fails. Also I have no idea how to define a counit $\varepsilon$ that wouldn't just be discard.
Are there comonoids which don't really come from copying?
The comonoid axioms sort of define copying (and deleting). We have a process that takes one thing and gives us two things. The coidentity law says that if we delete one of the things then the other is the same as the one we started with. The coassociativity law says that copies of copies are the same as the original copies.
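The laws described above can be sanity-checked concretely. Here is a small sketch (mine, not from the thread) of the "copy" comonoid on a plain set, verifying the counit and coassociativity laws pointwise:

```python
# A sketch of the copy/delete comonoid on a set, checking the comonoid laws.

def copy(x):
    """Comultiplication: duplicate a value."""
    return (x, x)

def delete(x):
    """Counit: discard a value (map to the one-element set, here None)."""
    return None

def check_comonoid_laws(values):
    for x in values:
        a, b = copy(x)
        # Counit law: deleting either copy leaves the original.
        assert delete(a) is None and b == x
        assert a == x and delete(b) is None
        # Coassociativity: copying the left copy equals copying the right
        # copy, once both triples are flattened.
        left = (copy(a), b)    # ((x, x), x)
        right = (a, copy(b))   # (x, (x, x))
        assert (left[0][0], left[0][1], left[1]) == \
               (right[0], right[1][0], right[1][1])
    return True

print(check_comonoid_laws([1, "a", 3.5]))  # True
```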
Maybe in a monoidal category where $\otimes$ and $I$ are not quite what we are used to, this interpretation might fail
e.g. mapping to the unit does not always mean to discard. In categories of optics, copoints are very interesting.
Indeed, now that I think about it, in #general: mathematics > parametrised operations I described how comonoids in lenses are almost pairs of a monoid and a comonoid (EDIT: I didn't mention it explicitly, but that's where the structure I described came from). Hence comonoids in lenses are at least as interesting and varied as monoids in the base category.
I disagree with Oscar on this one. One nice example is to take a finite group (or monoid) $G$ and consider its algebra $k[G]$. We can equip this with a coalgebra structure generated by sending each element $g$ to the sum of pairs $h \otimes k$ with $hk = g$, extended linearly; the counit sends the identity element to $1$, every other element of $G$ to $0$, and extends linearly.
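A quick sanity check of this decomposition coalgebra (a sketch of mine, not from the thread), for $G = \mathbb{Z}/5$ on basis elements. Note that for this comultiplication the counit law forces the counit to be $1$ on the identity and $0$ elsewhere:

```python
# Decomposition coalgebra on the group algebra k[G] for G = Z/5.
# Basis elements are 0..4; tensors are dicts {(h, k): coefficient}.

N = 5  # G = Z/N under addition mod N

def delta(g):
    """Comultiplication: sum of all pairs (h, k) with h + k = g (mod N)."""
    return {(h, (g - h) % N): 1 for h in range(N)}

def counit(g):
    """Counit: 1 on the identity element, 0 elsewhere."""
    return 1 if g == 0 else 0

def check_counit(g):
    # (counit ⊗ id) ∘ delta should give back g with coefficient 1.
    out = {}
    for (h, k), c in delta(g).items():
        out[k] = out.get(k, 0) + c * counit(h)
    return {k: c for k, c in out.items() if c} == {g: 1}

def check_coassoc(g):
    # Compare (delta ⊗ id) ∘ delta with (id ⊗ delta) ∘ delta on g.
    left, right = {}, {}
    for (h, k), c in delta(g).items():
        for (a, b), d in delta(h).items():
            left[(a, b, k)] = left.get((a, b, k), 0) + c * d
        for (a, b), d in delta(k).items():
            right[(h, a, b)] = right.get((h, a, b), 0) + c * d
    return left == right

print(all(check_counit(g) and check_coassoc(g) for g in range(N)))  # True
```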
If monoids embody composition, it makes sense that comonoids embody decomposition
The canonical coalgebra structures of the tensor algebra and symmetric algebra are not simply copying.
JS Pacaud Lemay (he/him) said:
The canonical coalgebra structures of the tensor algebra and symmetric algebra are not simply copying.
Can you elaborate? Isn't the equivalence class of under the quotient ? In other words, isn't simply ?
Morgan Rogers (he/him) said:
I disagree with Oscar on this one. One nice example is to take a finite group (or monoid) $G$ and consider its algebra $k[G]$. We can equip this with a coalgebra structure generated by sending each element $g$ to the sum of pairs $h \otimes k$ with $hk = g$, extended linearly; the counit sends the identity element to $1$, every other element of $G$ to $0$, and extends linearly.
This is very cool!
It's essentially similar to the group ring example shared by @Morgan Rogers (he/him)
The idea is that you're taking words/monomials and sending them to the sum of all pairs of words/monomials which, when multiplied together, give you back your starting input.
@Morgan Rogers (he/him) said it really nicely: you're decomposing
For example, take just the polynomial ring $k[x]$. One of its comultiplications is defined on monomials as follows: $\Delta(x^n) = \sum_{i+j=n} \binom{n}{i}\, x^i \otimes x^j$
So for example, $\Delta(x^2) = x^2 \otimes 1 + 2\, x \otimes x + 1 \otimes x^2$ and $\Delta(x^3) = x^3 \otimes 1 + 3\, x^2 \otimes x + 3\, x \otimes x^2 + 1 \otimes x^3$
As you can see, this comultiplication does not simply copy
This comultiplication makes $k[x]$ a bialgebra with the standard multiplication.
$k[x]$ also has another comultiplication where you drop the binomial coefficient: $\Delta'(x^n) = \sum_{i+j=n} x^i \otimes x^j$
and this is similar to @Morgan Rogers (he/him)'s group ring example. BUT this comultiplication does not make $k[x]$ a bialgebra with the standard multiplication. Instead you need to use another multiplication.
Anyways, hopefully this is a convincing example showing that coalgebras are not just about copying, but about decomposing.
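The binomial comultiplication above is easy to check mechanically. A sketch (mine, assuming the formula $\Delta(x^n) = \sum_{i+j=n} \binom{n}{i} x^i \otimes x^j$), verifying coassociativity and compatibility with the standard multiplication:

```python
# Binomial comultiplication on k[x], on basis monomials x^n (encoded as n).
# Tensors are dicts {(i, j): coefficient}.
from math import comb

def delta(n):
    """Delta(x^n) = sum over i+j=n of C(n, i) x^i (x) x^j."""
    return {(i, n - i): comb(n, i) for i in range(n + 1)}

def coassoc_ok(n):
    left, right = {}, {}
    for (i, j), c in delta(n).items():
        for (a, b), d in delta(i).items():
            left[(a, b, j)] = left.get((a, b, j), 0) + c * d
        for (a, b), d in delta(j).items():
            right[(i, a, b)] = right.get((i, a, b), 0) + c * d
    return left == right

def bialgebra_ok(m, n):
    # Delta(x^m * x^n) should equal Delta(x^m) * Delta(x^n),
    # multiplying tensors componentwise (exponents add).
    prod = {}
    for (i, j), c in delta(m).items():
        for (a, b), d in delta(n).items():
            key = (i + a, j + b)
            prod[key] = prod.get(key, 0) + c * d
    return prod == delta(m + n)

print(all(coassoc_ok(n) for n in range(6)) and
      all(bialgebra_ok(m, n) for m in range(4) for n in range(4)))  # True
```

Both checks reduce to standard binomial identities (the multinomial theorem and Vandermonde's identity, respectively).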
I see, when you said 'canonical' I thought you were referring to my example (2), which probably isn't canonical at all!
I always found coalgebras really hard to think about (or even imagine) back when I was still trying to use the set theory that was hardwired into me in undergrad, which makes sense: comonoids in sets can only have the trivial copying operation! Once you appreciate that you can use a commutative binary operation to turn composition into decomposition in a natural way (by taking sums in my example) then you can quickly find a lot more examples.
Well every comonad is a comonoid in the endofunctor category, so you get lots of examples that way that don't have a straightforward copy/delete feel about them.
Dan Marsden said:
Well every comonad is a comonoid in the endofunctor category, so you get lots of examples that way that don't have a straightforward copy/delete feel about them.
good point!
Morgan Rogers (he/him) said:
Once you appreciate that you can use a commutative binary operation to turn composition into decomposition in a natural way (by taking sums in my example) then you can quickly find a lot more examples.
This is an extremely nice trick
This may seem like cheating but in Rel - the smc of sets and relations, which is self-dual - you can take any monoid you like and turn it into a comonoid that non-deterministically decomposes an element into its possible products under the dual monoid operation. On its own, this feels like just a matter of perspective (reading an operation in one direction as composing things or in the other as decomposing things), but it gets a lot more interesting when you realise that, in Rel, the monoid and comonoid versions of the same operation can cohabit. For example, if the monoid is an abelian group, the monoid and comonoid operations interact to form a commutative Frobenius algebra!
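The Frobenius interaction in Rel can be checked directly with finite relations. A sketch (mine, not from the thread) for $G = \mathbb{Z}/4$: the monoid multiplication as a relation, its converse as the comonoid, and one of the Frobenius equations:

```python
# In Rel, take G = Z/4 under addition. mu relates (a, b) to a+b;
# delta is its converse. Check the Frobenius law
#   (mu ⊗ id) ∘ (id ⊗ delta) = delta ∘ mu
# as relations G×G -> G×G.
from itertools import product

N = 4
G = list(range(N))

mu = {((a, b), (a + b) % N) for a, b in product(G, G)}
delta = {(c, ab) for ab, c in mu}  # converse relation

def compose(r, s):
    """Relational composition: first r, then s."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

# id ⊗ delta : G×G -> G×(G×G), leaving the first component alone.
id_delta = {((a, b), (a, xy)) for a in G for (b, xy) in delta}
# mu ⊗ id : G×(G×G) -> G×G, multiplying the first two components.
mu_id = {((a, (x, y)), ((a + x) % N, y)) for a, x, y in product(G, G, G)}

frobenius_lhs = compose(id_delta, mu_id)
frobenius_rhs = compose(mu, delta)
print(frobenius_lhs == frobenius_rhs)  # True
```

Both sides work out to the relation relating $(a, b)$ to $(c, d)$ exactly when $a + b \equiv c + d \pmod 4$.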
Here's an algebraic geometry example. Take the polynomial ring $k[x]$ over a field $k$. Then write $k[x] \otimes k[x]$ as $k[x, y]$. A comultiplication is given by the map $\Delta$ that sends $x \mapsto x + y$. This is expressing addition, but contravariantly, so it comes out as a "co-addition." A similar example works with algebraic groups. Take $\mathrm{GL}(2)$, as $2 \times 2$ matrices: these have four components, so we need four variables, but then one more variable for the nonzero determinant. Once again, if you write out the multiplication for this group, you get something contravariant, so it is a comultiplication.
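Spelled out in standard notation (a sketch; the variable names are my choice, not from the message):

```latex
% Co-addition on the affine line:
\Delta \colon k[x] \longrightarrow k[x] \otimes k[x] \cong k[x, y],
\qquad \Delta(x) = x \otimes 1 + 1 \otimes x
\quad (\text{i.e. } x \mapsto x + y).
% On points this dualizes to (a, b) \mapsto a + b.

% For GL(2), with coordinate ring
%   k[a, b, c, d, t] \,/\, \bigl( t(ad - bc) - 1 \bigr),
% matrix multiplication dualizes entrywise, e.g. for the top-left entry:
\Delta(a) = a \otimes a + b \otimes c.
```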
The intuition I have for coalgebras, which is present in the examples everyone is giving, is that as you said it should be like chopping something into two parts. But also like you said, this isn't associative, and anyway the "two parts" thing is where the cartesian-ness is hiding. So we can fix this intuition by saying we're going to take the list/formal sum of all ways to chop something into two parts. Then this gets to be associative because if you chop it and chop the left part, or chop it and chop the right part, both results will be in the list of all ways to chop into three parts. And this is the sort of thing that naturally lives in a tensor product (the sort that characterize bilinearity) rather than a cartesian product.
All of this is literal when you consider "combinatorial bialgebras".