You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
As a Haskell programmer, I've long "known" that monads are monoids in an endofunctor category. Studying (unbiased) monoidal categories now, though, there's a clear monadic structure being relied on: the tensor ⊗ turns a TC into a C, and associativity is just the monad multiplication μ, given the monad T of finite lists. But of course finite lists are just the free monoid!
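A Prelude-level sketch of this observation (the example names are mine): for the finite-list monad, `join` is exactly the free monoid's "multiply everything out" operation, and `mconcat` interprets a list in any chosen monoid.

```haskell
import Control.Monad (join)
import Data.Monoid (Sum(..))

-- For the list monad, the monad multiplication (join) is concat:
-- it flattens a list of lists, i.e. multiplies out the free monoid.
flattenExample :: [Int]
flattenExample = join [[1, 2], [3], []]  -- [1,2,3]

-- Lists form the free monoid on a set: mconcat is the induced
-- monoid homomorphism out of it, here into (Int, +, 0) via Sum.
mconcatExample :: Int
mconcatExample = getSum (mconcat (map Sum [1, 2, 3]))  -- 6
```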
Two questions come out of this:
Regarding the second question: monoidal categories are representable T-multicategories for T = free monoid monad / free list monad on Set. By picking another cartesian monad, you can get analogous notions. By T-multicategory, I mean the concept defined in Section II.4 of Higher Operads, Higher Categories.
I'm pretty sure monoids came first historically, at least. In fact, I think the "mon" in monad comes from "monoid".
Before that they were called even worse names, like "triple" and "standard construction".
My vague understanding is that they (monoids) aren't really recognized as part of traditional abstract algebra because they're considered too boring.
In a typical modern algebra course they breeze through monoids in about 1 minute before getting to the real star of the show: groups.
When I become king of the world I will correct this.
Shea Levy said:
- Perhaps a matter of taste, but which notion is the more fundamental one?
I think it's part of the appeal of category theory that everything can be defined in terms of everything else, with no need to arbitrarily pick one to be more fundamental. You also see it with the universal constructions, which are all mutual special cases of each other. It's postmodern, or something
If you're looking for deep structure theorems, groups are what you want. If you're looking for a fundamental structuring mechanism that is pervasive in the organizational structure of mathematics (and computer science and ...), then monoids are your stars. It depends on what you value.
Mac Lane came up with the name "monad" because he thought "triple" was stupidly vague, "monad" sounds like "monoid", and because Leibniz had a theory of monads:
5.1 Monads and the World of Phenomena
Thus far we have seen that Leibniz rejected the Cartesian account of matter, according to which matter, the essence of which is extension, could be considered a substance. Leibniz held instead that only beings endowed with true unity and capable of action can count as substances. The ultimate expression of Leibniz's view comes in his celebrated theory of monads, in which the only beings that will count as genuine substances and hence be considered real are mind-like simple substances endowed with perception and appetite.
Mac Lane's monads have nothing to do with Leibniz's, but he liked stealing philosophical jargon (Aristotle's "categories", Carnap's "functors").
I personally think the fundamental concept is "monoid in a monoidal category", where monoidal category is a special case of "pseudomonoid in a monoidal bicategory", and monoidal bicategory is a special case of "pseudomonoid in a monoidal tricategory", ad infinitum. One can deal with this "ad infinitum" efficiently using a good theory of ∞-categories.
I forget how well homotopy type theory does at crushing all these layers of complexity into a simple thing. Traditional homotopy theorists would call it an A_∞ space.
One could also argue that groups are more fundamental than monoids, for example:
I find the entire premise that isomorphisms are "more fundamental" somewhat shaky. If it is based on the fact that there can be different kinds of morphism between objects of a particular kind, then I should point out that the notion of isomorphism depends on the choice of morphisms; just because you picked some class of examples where different notions of morphisms give the same isomorphisms doesn't mean this is always the case, at which point the argument falls apart. Also...
Reid Barton said:
The construction of Galois representations in number theory is an example where a "fixed" object (the algebraic closure of a field) actually has automorphisms, and these automorphisms automatically induce an action on everything built from this algebraic closure. If we considered a "Galois monoid" instead of a Galois group, then this wouldn't work.
As someone who is in the process of building an analogue of Galois theory which replaces (topological) groups with (topological) monoids, I find this contention personally offensive :joy:
I think you misunderstand what I'm saying about isomorphisms.
Quite possibly! What do you mean when you say "more fundamental"?
I'm saying there is an a priori notion of isomorphism which is not "a map of class X together with a map in the other direction of class X whose compositions are identities", and if you manage to come up with some class of maps X for which that definition in terms of X isn't equivalent to the intrinsic one, then it means that either your class of maps X is wrong or the structure you claim to be working with isn't in fact structure at all.
And again this a priori notion of isomorphism relies on the fact that everything respects isomorphism.
What notion is that?
For example, an isomorphism of topological spaces and is an isomorphism between the sets and such that, when we identify and via this isomorphism, the topologies become equal.
How do you express isomorphism without reference to, well, morphism?
An isomorphism of groups is an isomorphism of their underlying sets such that, when we identify the sets by this isomorphism, the group laws become equal (and also the identity and inverse, but this follows automatically).
[Mod] Morgan Rogers said:
just because you picked some class of examples where different notions of morphisms give the same isomorphisms doesn't mean this is always the case
The only natural counterexample I can think of is the category of contexts (in some fixed type theory) under, respectively, renaming and substitution, which have different isomorphisms. I don't know what that suggests in terms of the original argument.
I feel like these examples just push the argument down to Set, though; you still have to say what an isomorphism is there, and you can't do that without functions.
In a univalent setting, the isomorphisms in Set are just the paths, which are not defined in terms of morphisms. Though, now the argument is pushed down to univalence...
I think this gets to questions about foundations (and obviously I'm not trying to argue that you can do math without talking about functions, or without talking about monoids).
In ZF the definition of a function in turn involves the definition of an ordered pair, but this is just some encoding detail, and doesn't mean that the concept of an ordered pair is more fundamental than the concept of a function
The fact that you can describe isomorphisms with morphisms, but not vice versa, suggests to me that the latter are more fundamental.
The point is that this isomorphism structure is the thing that gets preserved by everything that we consider a mathematical definition.
I think this kind of argument is missing the point, but maybe we just have incompatible understandings of what a fundamental concept is.
That is why I asked what you meant by "more fundamental" :wink:
A 1-category is a 2-category whose Hom-categories are all discrete categories. Does it mean a 2-category is more fundamental than a 1-category?
This is the same argument as with groups and monoids.
For a homotopy theorist, a monoid is more like a group plus extra noninvertible stuff.
And this viewpoint "explains" why in HoTT we can define an ∞-group, but we don't know how to talk about the extra stuff in an ∞-monoid.
Maybe a better word than "fundamental" would be something like "central" or "important"--I don't mean "foundational".
The notion of importance is entirely cultural/context-dependent, though. Take monads, since the OP was asking about those too: I would argue that non-invertible monads (i.e. monoids in the category of endofunctors which are not groups) are a lot more 'central' than their invertible counterparts.
(I am kind of curious what an invertible monad looks like now, though, admittedly, even if it is just an auto-equivalence as I suspect)
Hopf algebras in the category of endofunctors?
It's not generally appropriate to study groups in categories of endofunctors, because they're naturally monoidal categories, but not rigid, so it doesn't make sense to talk about a group object internal to the category of endofunctors.
Anyway, I guess if you're the kind of person who uses monoids in which nothing but the identity is invertible (e.g. ℕ, or any free monoid on a set), the fact that they happen to be the trivial group 0 with some extra structure is a little incidental.
“Now draw the rest of the owl”, &c
More than anything else, I would say math is about the question: when are two things the same? Out of this falls the notion of isomorphism, and hence automorphism, and hence group, but not monoid.
Meanwhile, CS is to a larger extent concerned with processes, and usually non-reversible ones.
So I think the common idea that groups are more fundamental/central/important/whatever than monoids in math, while the reverse is the case in CS, seems to have some basis.
And obviously monoids have many uses in math, and groups in CS.
But to say "a group is a monoid in which every object is invertible" and conclude that monoids are the true objects underlying the phenomena in math where we see groups is, I think, too reductionist and committed to specific presentations of the concepts of monoids and groups.
I guess maybe another way to look at this is that the theory of (n, r)-categories depends very much on r but not so much on n, and it's hard to explain how this could be unless adding a new kind of invertible morphisms is actually quite different from adding a new kind of non-invertible morphisms.
For example it's hopeless to try to understand a (5, 1)-category as a (5, 5)-category in which every 2-, 3-, 4- and 5-morphism is invertible, but fortunately we have a working theory of (n, 1)-categories for any n (which we can get by truncating the theory of (∞, 1)-categories) in which we can set n = 5.
So, while by comparison it's not at all difficult to understand a (1, 0)-category (i.e. groupoid) as a special case of a (1, 1)-category (i.e. category), at least it should be plausible that this might not be the only or always the best way to view a (1, 0)-category.
Reid Barton said:
The point is that this isomorphism structure is the thing that gets preserved by everything that we consider a mathematical definition.
This sold it for me. It reminded me of a fun post about why the category of sets is "wrong", at least when you compare to our pre-mathematical notion of a set, and it ultimately solves the problem by starting with the isomorphisms: https://bowtochris.tumblr.com/post/162010650907/mark-gently-jerry-seinfeld-voice-whats-the
Wouldn't a bunch of people say that it's wrong because it's identifying sets up to isomorphism?
Instead of up to extension, like 'normal'. :smile:
By the way, if you expand out the ZF definition of a function down to ordered pairs, it's a set f such that (x, y) ∈ f and (x′, y′) ∈ f imply x = x′ -> y = y′, while it's a bijection when they imply x = x′ <-> y = y′, so you can push this discussion all the way down to the level of propositional logic and the relationship between -> and <->.
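These two conditions on a relation can be checked mechanically on finite examples; a small Haskell sketch, with names of my own choosing (totality over a fixed domain is left out, matching the discussion):

```haskell
-- A binary relation as a list of ordered pairs.
type Rel a b = [(a, b)]

-- Functionality: (x, y) ∈ f and (x', y') ∈ f imply x = x' -> y = y'.
functional :: (Eq a, Eq b) => Rel a b -> Bool
functional f = and [ x /= x' || y == y' | (x, y) <- f, (x', y') <- f ]

-- Strengthening -> to <-> additionally forces injectivity:
-- (x, y) ∈ f and (x', y') ∈ f imply x = x' <-> y = y'.
injectiveFunctional :: (Eq a, Eq b) => Rel a b -> Bool
injectiveFunctional f = and [ (x == x') == (y == y') | (x, y) <- f, (x', y') <- f ]
```

For instance, [(1,'a'), (2,'a')] satisfies the -> version but fails the <-> version, because two distinct inputs hit the same output.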
If you accept propositional extensionality, which seems appropriate in this context, then <-> is "being the same".
So the core of the discussion amounts to whether you think of P <-> Q as an abbreviation for P -> Q /\ Q -> P, or as expressing that P and Q are "the same". Then that translates into the relationship between function and bijection, and in turn the relationship between monoid and group.
Treating P <-> Q as denoting P -> Q /\ Q -> P means that foundationally you only need the single concept -> (which you obviously need anyways), but it fails to "explain" why this condition is the one we need to substitute Q for P freely.
And defining a homeomorphism of topological spaces as a continuous map with continuous inverse is foundationally convenient because we needed continuous maps anyways, but it doesn't explain the phenomenon that all properties of topological spaces that mathematicians consider are preserved under homeomorphisms, but not under general continuous maps.
HoTT is nice for this purpose because it gives a specific name to this "substitutability" facet of the concept of homeomorphism, namely the identity/path types of the type of all topological spaces. But I think the concept exists even prior to its encoding in some particular formal system (although the encoding can certainly help!)
Reid Barton said:

So the core of the discussion amounts to whether you think of P <-> Q as an abbreviation for P -> Q /\ Q -> P, or as expressing that P and Q are "the same". Then that translates into the relationship between function and bijection, and in turn the relationship between monoid and group.
The former way is exactly how I think of <->. I guess that's just another instance of the difference in perspective that has already been established, but on the other hand I don't see how you can escape the fact that an isomorphism is not just the statement that two things are the same, but a way in which two things are the same. In order to show "not (P <-> Q)" I show one of "not (P -> Q)" or "not (Q -> P)". If I'm not gifted with a guarantee that P <-> Q then to prove it I would necessarily start with a proof of one implication or the other; I would have to construct an isomorphism, and that starts with constructing a morphism of some kind or other, even if it's just a function.
Reid Barton said:

Treating P <-> Q as denoting P -> Q /\ Q -> P means that foundationally you only need the single concept -> (which you obviously need anyways), but it fails to "explain" why this condition is the one we need to substitute Q for P freely.
I don't think it fails. P -> Q means I can substitute any P for a Q; the other implication gives substitution the other way around.
By substitution I mean in an arbitrary context. If you only have P -> Q then you can't substitute Q for P in not P.
HoTT can also be seen as a general framework for saying how things are the same in an inherently symmetric way.
Because the type theory it was built from already accidentally left that possibility, but didn't have any non-trivial ways of saying how two things are the same.
(And in the case of isomorphism, by "arbitrary" I don't mean literally a completely arbitrary formula of ZF, just the ones which encode "reasonable" mathematical notions.)
It is true that as soon as you think of these things as directed arrows, you have to worry about which way they're pointing..! Thanks for the perspective, in any case. Maybe my inner homotopy theorist will one day break out of his cage.
Reid Barton said:

By the way, if you expand out the ZF definition of a function down to ordered pairs, it's a set f such that (x, y) ∈ f and (x′, y′) ∈ f imply x = x′ -> y = y′, while it's a bijection when they imply x = x′ <-> y = y′, so you can push this discussion all the way down to the level of propositional logic and the relationship between -> and <->.
One problem with this is that in ZF functions don't remember their codomain. So all this says is that the function is injective.
Yes, in addition to the condition ∀x ∈ X, ∃y, (x, y) ∈ f, you need to add the symmetric condition ∀y ∈ Y, ∃x, (x, y) ∈ f.