Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: Eckmann-Hilton argument


view this post on Zulip David Egolf (Jun 06 2024 at 15:40):

John Baez said:

Yes, a simple and beautiful argument called the [[Eckmann-Hilton argument]] shows that if $C$ is symmetric monoidal, a monoid in $\mathsf{Mon}(C)$ is the same as a commutative monoid in $C$. So $\mathsf{Mon}(\mathsf{Mon}(C))$ is the category of commutative monoids in $C$.

The Eckmann-Hilton argument seems to keep popping up! When I have a bit more energy, I'd like to work through it, to understand it - and then aim to understand why $\mathsf{Mon}(\mathsf{Mon}(C))$ is the category of commutative monoids in $C$.

view this post on Zulip John Baez (Jun 06 2024 at 16:51):

Eugenia Cheng created a nice picture of the Eckmann-Hilton clock, which summarizes the argument. It's a bit cryptic at first:

view this post on Zulip Notification Bot (Jun 06 2024 at 16:53):

2 messages were moved here from #learning: questions > defining an internal category without using pullbacks? by John Baez.

view this post on Zulip David Egolf (Jun 10 2024 at 15:18):

That picture looks cool! I look forward to understanding it.

I will start by thinking about this version, from Wikipedia:

Let $X$ be a set equipped with two binary operations, which we will write $\circ$ and $\otimes$, and suppose:

  1. $\circ$ and $\otimes$ are both unital, meaning that there are identity elements $1_{\circ}$ and $1_{\otimes}$ of $X$ such that $1_{\circ} \circ a = a = a \circ 1_{\circ}$ and $1_{\otimes} \otimes a = a = a \otimes 1_{\otimes}$, for all $a \in X$.
  2. $(a \otimes b) \circ (c \otimes d) = (a \circ c) \otimes (b \circ d)$ for all $a,b,c,d \in X$.

Then $\circ$ and $\otimes$ are the same and in fact commutative and associative.
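For reference, the whole argument fits in a short chain of equalities, using only hypotheses (1) and (2) (a standard write-up of the Wikipedia proof):

```latex
\begin{align*}
% The two units coincide:
1_{\circ} &= 1_{\circ} \circ 1_{\circ}
  = (1_{\otimes} \otimes 1_{\circ}) \circ (1_{\circ} \otimes 1_{\otimes})
  = (1_{\otimes} \circ 1_{\circ}) \otimes (1_{\circ} \circ 1_{\otimes})
  = 1_{\otimes} \otimes 1_{\otimes} = 1_{\otimes}. \\
% Writing 1 for the common unit, the two operations agree:
a \circ b &= (a \otimes 1) \circ (1 \otimes b)
  = (a \circ 1) \otimes (1 \circ b) = a \otimes b. \\
% And they are commutative:
a \otimes b &= (1 \circ a) \otimes (b \circ 1)
  = (1 \otimes b) \circ (a \otimes 1) = b \circ a.
\end{align*}
```

Associativity then follows the same way: $(a \circ b) \circ c = (a \otimes b) \circ (1 \otimes c) = (a \circ 1) \otimes (b \circ c) = a \circ (b \circ c)$.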

view this post on Zulip David Egolf (Jun 10 2024 at 15:28):

The proof Wikipedia gives seems good to me! I particularly like the two-dimensional proof, where the article uses the fact that (2) is an "interchange law":
[image: interchange law]

view this post on Zulip David Egolf (Jun 10 2024 at 15:31):

The "Eckmann-Hilton clock" above, by Eugenia Cheng, seems very similar to the 2D proof that Wikipedia gives, with the addition of a bunch of $\ell$ and $\ell^{-1}$ symbols. I don't know what the $\ell$ and $\ell^{-1}$ symbols are indicating.

At any rate, I quite like the 2D version (e.g. as pictured in the clock). Intuitively, our identity and interchange rules let us "move two elements around one another" by making use of the 2D space available to us. Our ability to do this shows that both operations are equal and commutative.

view this post on Zulip David Egolf (Jun 10 2024 at 15:36):

Now, let us assume that both operations are associative and unital, so they define monoids on $X$. We would like to show that these two conditions are equivalent:

  1. the interchange law is satisfied
  2. $\otimes$ is a monoid homomorphism $(X, \circ) \times (X, \circ) \to (X, \circ)$ and $\circ$ is a monoid homomorphism $(X, \otimes) \times (X, \otimes) \to (X, \otimes)$

view this post on Zulip David Egolf (Jun 10 2024 at 15:48):

Let us assume that $\otimes: X \times X \to X$ is a monoid homomorphism when we put a monoid structure on $X \times X$ by using $\circ$ elementwise, and where $X$ is a monoid with multiplication given by $\circ$. Doing this, we get the following multiplication on $X \times X$, which I will write using juxtaposition: $(a,b)(c,d) = (a \circ c, b \circ d)$.

If $\otimes: X \times X \to X$ is a morphism of monoids, then for any $a,b,c,d \in X$ we must have $\otimes((a,b)(c,d)) = \otimes(a,b) \circ \otimes(c,d)$ and so we must have $\otimes(a \circ c, b \circ d) = \otimes(a,b) \circ \otimes(c,d)$. Using infix notation, we must have $(a \circ c) \otimes (b \circ d) = (a \otimes b) \circ (c \otimes d)$. So, the interchange law holds if $\otimes: X \times X \to X$ is a monoid homomorphism, where $X$ and $X \times X$ have monoid structures induced by $\circ$!

We could also carry out a similar argument to show that the interchange law holds if $\circ: X \times X \to X$ is a monoid homomorphism, where $X$ and $X \times X$ have monoid structures induced by $\otimes$.
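As a concrete sanity check (my own illustration, not from the thread): on a small example the interchange law and the homomorphism property are literally the same equation, read two ways. Here I take $X = \mathbb{Z}_4$ with both operations equal to addition mod 4, and verify both by brute force.

```python
from itertools import product

# X = Z_4, with both operations taken to be addition mod 4.
X = range(4)

def circ(a, b):    # the "vertical" operation
    return (a + b) % 4

def otimes(a, b):  # the "horizontal" operation
    return (a + b) % 4

# Interchange law: (a ⊗ b) ∘ (c ⊗ d) = (a ∘ c) ⊗ (b ∘ d)
interchange = all(
    circ(otimes(a, b), otimes(c, d)) == otimes(circ(a, c), circ(b, d))
    for a, b, c, d in product(X, repeat=4))

# ⊗ preserves the componentwise ∘-multiplication of X × X:
# ⊗(a ∘ c, b ∘ d) = ⊗(a, b) ∘ ⊗(c, d) -- the same equation as a hom property.
hom = all(
    otimes(circ(a, c), circ(b, d)) == circ(otimes(a, b), otimes(c, d))
    for a, b, c, d in product(X, repeat=4))

print(interchange, hom)  # → True True
```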

view this post on Zulip David Egolf (Jun 10 2024 at 15:57):

If the interchange law holds, and both operations are associative and unital, then is $\otimes: X \times X \to X$ necessarily a morphism of monoids? $\otimes$ will preserve composition, because the interchange law holds. It remains to show it preserves the unit. Since by assumption the interchange law holds and both operations are unital, we know by the Eckmann-Hilton argument that the units of the two operations are the same. So, $\otimes(1, 1) = 1 \otimes 1 = 1$ is indeed the unit of $X$ with respect to the multiplication given by $\circ$.

Similarly, if the interchange law holds, and both operations are associative and unital, then $\circ: X \times X \to X$ will be a monoid homomorphism (where $X \times X$ and $X$ are made into monoids using $\otimes$).

view this post on Zulip David Egolf (Jun 10 2024 at 16:06):

I would next like to prove this:

If $X$ is a monoid object in the category $\mathsf{Mon}$ of monoids, then $X$ is a commutative monoid object in $\mathsf{Mon}$.

But I will stop here for today!

view this post on Zulip John Baez (Jun 10 2024 at 16:37):

David Egolf said:

The "Eckmann-Hilton clock" above, by Eugenia Cheng, seems very similar to the 2D proof that Wikipedia gives, with the addition of a bunch of $\ell$ and $\ell^{-1}$ symbols. I don't know what the $\ell$ and $\ell^{-1}$ symbols are indicating.

Yeah, they're confusing. I bet they're saying that something or other doesn't need to be an equation: it can be just an isomorphism, called $\ell$. But we can set $\ell = 1$, or in other words ignore it, and focus on what's left.

view this post on Zulip Eric M Downes (Jun 11 2024 at 09:07):

Aside on the $\ell, \ell^{-1}$ operations.

I think the $\ell, \ell^{-1}$ are present to represent the chirality in an operation most of us take for granted as a single operation (where $\cdot$ is either composition identifying leftward with upward)
$1 \cdot x \overset{\ell}{\to} x \overset{\ell}{\to} x \cdot 1$
$1 \cdot x \overset{\ell^{-1}}{\leftarrow} x \overset{\ell^{-1}}{\leftarrow} x \cdot 1$

I believe this is done (1) to preserve the egg-shape (she's very artistic! :), (2) to point out that the "leftward" / "upward" movements of $x$ mediated by $\ell$ are exactly balanced in number to the rightward/downward movements accomplished by $\ell^{-1}$.

But why would we care about that? This might matter if $1$ was not a global identity. This can occur in submonoids determined by idempotents; $ee = e$ acts as an identity for the submonoid $S$ of all elements conjugated through it; $M \geq S = \{exe;~x \in M\}$. So if you wanted to keep track of whether there were any dangling idempotents left over at the end of a transformation, and which side they were on, this kind of bookkeeping would tell you. Something like a subspace where $e(x\ldots y) \neq (x\ldots y)e$: simplifying $x\ldots y$ requires using $e$, so you need to know "where/if leftovers end up".

The only problem with this theory is of course that we have a cancellation of $1$ at the top and bottom of the clock, but no $\ell$'s appear/disappear there, so... perhaps this points toward a slight generalization where $1$ is an identity for the vertical operation, but merely an idempotent for the horizontal operation, and $\alpha, \beta$ are implicitly idempotent-conjugated. shrug

Is anyone aware of circumstance in homotopy where this actually occurs?
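The idempotent-submonoid point above can be checked concretely. A minimal sketch (my own illustration, not from the post), using the monoid of $2 \times 2$ Boolean matrices under matrix multiplication; the idempotent `e` below is one hypothetical choice:

```python
from itertools import product

def bmul(A, B):
    # 2x2 Boolean matrix product: entries in {0,1}, AND for *, OR for +
    return tuple(
        tuple(max(A[i][k] & B[k][j] for k in range(2)) for j in range(2))
        for i in range(2))

# M = all 16 Boolean 2x2 matrices, a monoid under bmul.
M = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]

e = ((1, 0), (0, 0))  # an idempotent (e e = e) that is NOT the global identity
assert bmul(e, e) == e

S = {bmul(bmul(e, x), e) for x in M}  # S = e M e

# e is a two-sided identity for S, and S is closed under multiplication...
assert all(bmul(e, s) == s == bmul(s, e) for s in S)
assert all(bmul(s, t) in S for s in S for t in S)

# ...even though e fails to be an identity for M as a whole.
assert any(bmul(e, x) != x for x in M)
```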

view this post on Zulip Rémy Tuyéras (Jun 11 2024 at 09:51):

The first time I saw Eugenia's diagram was in this paper (page 11), with Michael Makkai, where they used it on a particular example of Penon's 3-categories to show that Penon's definition could not recover all weak categories. There, the diagram does not have $\ell$ and $\ell^{-1}$ (but it technically should). In particular, the arrows $\ell$ and $\ell^{-1}$ can be helpful to clarify calculations in a situation where you are trying to lift some property from a strict context to a weak context in order to show that a certain monoidal category is not braided but symmetric (which is what they were doing).

view this post on Zulip John Baez (Jun 11 2024 at 10:00):

@Eric M Downes wrote:

perhaps this points toward a slight generalization where 1 is an identity for the vertical operation, but merely an idempotent for the horizontal operation...

You're reminding me of this:

In homotopy theory we get a bicategory from any space $X$, called its fundamental bigroupoid, which has points of $X$ as objects, paths as 1-morphisms, and homotopy classes of paths-of-paths as 2-morphisms.

Every point $x$ has an 'identity' path $1_x$, the constant map $[0,1] \to X$ taking $x$ as its value. This in turn has an 'identity' path-of-paths $1_{1_x}$, the constant map $[0,1]^2 \to X$ taking $x$ as its value.

$1_{1_x}$ is truly the identity for the vertical composition of 2-morphisms, but not for the horizontal composition. For horizontal composition, say $\otimes$, we instead have a 2-isomorphism

$\ell_x : 1_x \circ f \to f$

such that

$\ell_x (1_{1_x} \circ \alpha) \ell_x^{-1} = \alpha$

for any 1-morphism $f: y \to x$ and 2-morphism $\alpha: f \Rightarrow f$.

(Here as people often do I'm using the same symbol, in this case $\circ$, for composition of 1-morphisms and horizontal composition of 2-morphisms. I'm denoting vertical composition of 2-morphisms simply by setting the symbols next to each other.)

In fact this is how it works for a typical bicategory.

Maybe this is what Eugenia is talking about? I'm still confused by the use of $\ell$ in her picture.

view this post on Zulip Eric M Downes (Jun 11 2024 at 11:18):

Thanks! This is very promising; biking to work is homotopy equivalent to biking to work and then catching my breath for a few minutes, but if I explore with you adjusting the path I take $\alpha: f \to f$, it does not make sense to distract our discussion of paths with such details about me pausing for breath or not at the endpoint. Of course in my head, I know that certain paths entail more exhaustion, so internally I keep track of this when actually planning how much time it will take, after we have finished our conversation. Or something.

I'm a little confused by (co)domains though, in your second equation.

Let $s := \ell_x(1_{1_x}\circ\alpha)\ell_x^{-1}$. Then $\mathrm{dom}(s) = \mathrm{cod}(s) = f$, but $\mathrm{cod}(f) = y$, $\mathrm{dom}(f) = x$. Maybe you are using an implicit $1_f \sim f$, but you said $=$...

view this post on Zulip Eric M Downes (Jun 11 2024 at 11:21):

Or maybe there's just a missing $f$, and you mean $\ell_x(1_{1_x}\circ \alpha)\ell_x^{-1} f = f$?

view this post on Zulip John Baez (Jun 11 2024 at 11:22):

I meant $= \alpha$.

view this post on Zulip Eric M Downes (Jun 11 2024 at 11:24):

ahhhhhh hmmm ok great I think that all fits then. Why $\alpha$ seems to leave $f$ alone was going to be my next question :)

view this post on Zulip John Baez (Jun 11 2024 at 13:23):

Eric M Downes said:

Aside on the $\ell, \ell^{-1}$ operations.

I think the $\ell, \ell^{-1}$ are present to represent the chirality in an operation most of us take for granted as a single operation (where $\cdot$ is either composition identifying leftward with upward)
$1 \cdot x \overset{\ell}{\to} x \overset{\ell}{\to} x \cdot 1$
$1 \cdot x \overset{\ell^{-1}}{\leftarrow} x \overset{\ell^{-1}}{\leftarrow} x \cdot 1$

Okay, assuming I understand what's going on, this is not what $\ell^{-1}$ means. Instead $\ell^{-1}$ is the inverse of $\ell$, and we have iso-2-morphisms

$1 \circ f \overset{\ell}{\to} f \overset{\ell^{-1}}{\to} 1 \circ f$
$f \circ 1 \overset{r}{\rightarrow} f \overset{r^{-1}}{\rightarrow} f \circ 1$

view this post on Zulip Matteo Capucci (he/him) (Jun 11 2024 at 15:35):

it seems to me $\ell$ is the left unitor of the ambient bicategory, which pops up when $\alpha$ and $\beta$ are paired with an identity cell

view this post on Zulip Matteo Capucci (he/him) (Jun 11 2024 at 15:35):

which is maybe what John got at anyway

view this post on Zulip Nathan Corbyn (Jun 11 2024 at 18:58):

If $\ell$ is the left unitor, surely we’d also need the right unitor to show up for the proof to go through? (We certainly cancel units on the right in the picture)

view this post on Zulip John Baez (Jun 11 2024 at 20:57):

Yes, that's one of the reasons I'm confused by the use of $\ell$ in Cheng's diagram.

view this post on Zulip Mike Shulman (Jun 11 2024 at 22:17):

I think all the 1-morphisms in this diagram are identities, and the left and right unitor coincide on identities.

view this post on Zulip Rémy Tuyéras (Jun 11 2024 at 22:31):

By definition, the left and right unitors are the same up to commutativity (due to the braiding; i.e. $\ell \circ B = r$). Since the Eckmann-Hilton argument is about showing that something is commutative, I would imagine that Eugenia wanted to remove any relation that was already commutative to only focus on the relations that are not commutative "for free".

view this post on Zulip Mike Shulman (Jun 11 2024 at 23:13):

Eugenia's version of the Eckmann-Hilton argument is written for endomorphisms of the identity 1-cell in a bicategory, so there is no braiding.

view this post on Zulip Rémy Tuyéras (Jun 12 2024 at 00:40):

@Mike Shulman Sorry for the confusion, I meant to take the 2-cell $B_{f}$ defined by the following composite, which gives a formal commutation on every 1-cell $f: A \to A$:

$\ell^{-1}_{f} \circ_2 r_{f} : f \circ_1 \mathsf{id}_A \Rightarrow \mathsf{id}_A \circ_1 f$,

Also, regarding the fact that left and right unitors coincide on identities, how do we show that the arrow

$B_{\mathsf{id}_A} : \mathsf{id}_A \circ_1 \mathsf{id}_A \Rightarrow \mathsf{id}_A \circ_1 \mathsf{id}_A$

is an identity 2-cell in a bicategory? (or do you mean that they coincide in a different way?)

view this post on Zulip Mike Shulman (Jun 12 2024 at 00:49):

Here's the proof: https://ncatlab.org/nlab/show/monoidal+category#kel2

view this post on Zulip Matteo Capucci (he/him) (Jun 12 2024 at 09:40):

Nathan Corbyn said:

If $\ell$ is the left unitor, surely we’d also need the right unitor to show up for the proof to go through? (We certainly cancel units on the right in the picture)

Ah, right. Now I am confused too lol.

view this post on Zulip John Baez (Jun 12 2024 at 09:42):

Mike Shulman said:

I think all the 1-morphisms in this diagram are identities, and the left and right unitor coincide on identities.

Okay, that's the explanation I sought!

view this post on Zulip David Egolf (Jun 13 2024 at 16:22):

(I've not been feeling well enough to do math for much of the last week, unfortunately. However, today I have the energy to do a little bit!)

I'd like to think about this at least briefly today:

If $X$ is a monoid object in the category $\mathsf{Mon}$ of monoids, then $X$ is a commutative monoid object in $\mathsf{Mon}$.

Let us assume that $X$ is a monoid object in $\mathsf{Mon}$. To start out with, I want to understand why this would mean we have an induced commutative multiplication on the elements of $X$.

Since $X$ is an object of $\mathsf{Mon}$, it is a monoid and so we have for $X$ some corresponding unit element $1_\circ$ and some multiplication rule $\circ$. But since $X$ is also a monoid object in $\mathsf{Mon}$, we must have some morphism $\otimes: X \times X \to X$ and some morphism $e: 1 \to X$ where $1$ is the terminal monoid (which has just the identity element, $*$). Here, $X \times X$ has the multiplication induced by applying $\circ$ componentwise.

Intuitively, since $X$ is a monoid object, I suspect that $\otimes: X \times X \to X$ must provide an associative operation on $X$ that is unital with respect to $1_{\otimes} = e(*)$. Then, since $\otimes: X \times X \to X$ is also a morphism of monoids where $X \times X$ and $X$ have multiplications provided by $\circ$, by the discussion above I suspect $\otimes$ and $\circ$ must satisfy the interchange law. Assuming all this is true, we could then conclude by the Eckmann-Hilton argument that $\circ$ and $\otimes$ are equal and commutative. So, we have a commutative multiplication on $X$.

I may work out the details at some point, but I'll stop here for today.
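In the meantime, the "commutativity is forced" part of this suspicion can be brute-forced on small examples (my own sanity check, not part of the thread): asking a monoid's own multiplication to be a monoid homomorphism out of the componentwise product succeeds for a commutative monoid and fails for a noncommutative one.

```python
from itertools import product

def is_hom(elems, op):
    # Does op : X × X → X preserve the componentwise op on X × X?
    # i.e. op(op(a,c), op(b,d)) == op(op(a,b), op(c,d)) for all a,b,c,d.
    return all(
        op(op(a, c), op(b, d)) == op(op(a, b), op(c, d))
        for a, b, c, d in product(elems, repeat=4))

# Commutative example: (Z_4, + mod 4) -- the hom property holds.
print(is_hom(range(4), lambda a, b: (a + b) % 4))  # → True

# Noncommutative example: all maps {0,1} → {0,1} under composition.
maps = [(a, b) for a in (0, 1) for b in (0, 1)]  # f encoded as (f(0), f(1))
comp = lambda f, g: (f[g[0]], f[g[1]])           # f after g
print(is_hom(maps, comp))  # → False
```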

view this post on Zulip Eric M Downes (Jun 13 2024 at 18:41):

David Egolf said:

Let us assume that $X$ is a monoid object in $\mathsf{Mon}$. (....) $\otimes$ and $\circ$ must satisfy the interchange law. Assuming all this is true, we could then conclude by the Eckmann-Hilton argument that $\circ$ and $\otimes$ are equal and commutative. So, we have a commutative multiplication on $X$.

Nice. Hope you feel better soon.

When you do work out the details, if you have the energy, I have two questions

  1. When you say $X \times X$ has the multiplication component-wise, I assume you're talking about the product monoid, not $\otimes$. If that's correct, (how/why) do we know $\mathsf{Mon}$ must have finite products?
  2. I'm curious if/where your argument changes if the category containing the monoid object is, say, $\mathsf{Ab}$, the category of abelian groups, instead? Like, would you (not) end up with a commutative group?

view this post on Zulip David Egolf (Jun 14 2024 at 16:57):

Thanks for your comments, @Eric M Downes ! I think your questions will be interesting and helpful for me to think about.

Regarding question (1): The multiplication on $X \times X$ I mentioned above works as follows. If we have elements $(a,b)$ and $(a',b')$ in $X \times X$, and we use $\cdot$ to denote the multiplication on $X \times X$, then $(a,b) \cdot (a',b') = (a \circ a', b \circ b')$. Here $\circ$ is the multiplication on the elements of $X$ that makes $X$ a monoid, an object of $\mathsf{Mon}$.

I believe that $X \times X$ together with $\cdot$ is indeed a product of the monoid $X$ (which has multiplication $\circ$) with itself in $\mathsf{Mon}$.

view this post on Zulip David Egolf (Jun 14 2024 at 16:58):

Note that by $\mathsf{Mon}$ I mean the category of "classical" or "elementary" monoids. I'm not working here with a category of monoids internal to some arbitrary monoidal category. (Although I seem to recall that $\mathsf{Mon}$ is the same as $\mathsf{Mon}(\mathsf{Set})$, the category of monoids internal to $\mathsf{Set}$).

view this post on Zulip David Egolf (Jun 14 2024 at 17:01):

Regarding the question "how/why do we know that $\mathsf{Mon}$ must have finite products?", I think I've just assumed that the product monoid we can form from two monoids is in fact the categorical product of those two monoids in $\mathsf{Mon}$. But it might be good to prove this!

view this post on Zulip David Egolf (Jun 14 2024 at 17:10):

For clarity, I will start denoting monoids here as pairs of the form $(X, \circ_X)$, where $X$ is a set and $\circ_X : X \times X \to X$ is a unital associative binary operation on $X$. Given a monoid $(A, \circ_A)$ and a monoid $(B, \circ_B)$, I claim that they have a categorical product given by the monoid $(A \times B, \cdot)$. Here $\cdot$ is the "component-wise" multiplication which works like this for any elements $(a,b)$ and $(a',b')$ in $A \times B$: we have $(a,b) \cdot (a',b') = (a \circ_A a', b \circ_B b')$.

view this post on Zulip David Egolf (Jun 14 2024 at 17:13):

To show that $(A \times B, \cdot)$, together with its projection morphisms $\pi_A$ and $\pi_B$, is indeed a product of $(A, \circ_A)$ and $(B, \circ_B)$, it suffices to show it satisfies the universal property of products. To do this, we need to show that for any monoid $(C, \circ_C)$ with monoid homomorphisms $f: (C, \circ_C) \to (A, \circ_A)$ and $g: (C, \circ_C) \to (B, \circ_B)$, there is then a unique monoid homomorphism $(f,g): (C, \circ_C) \to (A \times B, \cdot)$ that makes this diagram in $\mathsf{Mon}$ commute:
[image: product diagram]

view this post on Zulip David Egolf (Jun 14 2024 at 17:16):

Note that $\pi_A : (A \times B, \cdot) \to (A, \circ_A)$ acts by $\pi_A(a,b) = a$, and similarly $\pi_B(a,b) = b$ for all $(a,b) \in A \times B$.

view this post on Zulip John Baez (Jun 14 2024 at 17:17):

David Egolf said:

Note that by $\mathsf{Mon}$ I mean the category of "classical" or "elementary" monoids. I'm not working here with a category of monoids internal to some arbitrary monoidal category. (Although I seem to recall that $\mathsf{Mon}$ is the same as $\mathsf{Mon}(\mathsf{Set})$, the category of monoids internal to $\mathsf{Set}$).

Yes, monoids in $\mathsf{Set}$ are the original monoids, and 95% of mathematicians only know about these. Then there are people who know about monoids in a category $\mathsf{C}$ with finite products, often called internal monoids or monoid objects in $\mathsf{C}$. And then there are people who know about the even more general monoids in a monoidal category $(\mathsf{C}, \otimes)$, also called internal monoids or monoid objects in $(\mathsf{C}, \otimes)$. And of course you want to be one of those people.

David Egolf said:

Regarding the question "how/why do we know that $\mathsf{Mon}$ must have finite products?", I think I've just assumed that the product monoid we can form from two monoids is in fact the categorical product of those two monoids in $\mathsf{Mon}$. But it might be good to prove this!

It's true, in any event. More generally, if $\mathsf{C}$ is any category with finite products, $\mathsf{Mon}(\mathsf{C})$ is again a category with finite products. But if $(\mathsf{C}, \otimes)$ is merely a monoidal category, $\mathsf{Mon}(\mathsf{C})$ may not be monoidal, at least not in any standard way. If $(\mathsf{C}, \otimes)$ is a braided monoidal category, $\mathsf{Mon}(\mathsf{C})$ is monoidal. And if $(\mathsf{C}, \otimes)$ is a symmetric monoidal category, then we get another stable situation: $\mathsf{Mon}(\mathsf{C})$ is again symmetric monoidal. (I think I said this earlier in this long conversation.)

view this post on Zulip David Egolf (Jun 14 2024 at 17:22):

Just to wrap up the above, if $(f,g)$ is to make the above diagram commute, we need $\pi_A \circ (f,g) = f$. Thus, for any $c \in C$ we need to have $\pi_A((f,g)(c)) = f(c)$. So, the first coordinate of $(f,g)(c)$ must be $f(c)$. Similarly, we must have $\pi_B((f,g)(c)) = g(c)$, and so the second coordinate of $(f,g)(c)$ must be $g(c)$. We conclude that, if $(f,g)$ exists, then we must have $(f,g)(c) = (f(c), g(c))$ for any $c \in C$. So, if $(f,g)$ exists, it is unique.

It remains to show that this $(f,g)$ makes our diagram commute. But $(\pi_A \circ (f,g))(c) = \pi_A(f(c), g(c)) = f(c)$ and $(\pi_B \circ (f,g))(c) = \pi_B(f(c), g(c)) = g(c)$. So $(f,g)$ does make our diagram commute! (We should also check that $(f,g)$ really is a monoid homomorphism: it preserves multiplication since $(f,g)(c \circ_C c') = (f(c) \circ_A f(c'), g(c) \circ_B g(c')) = (f(c), g(c)) \cdot (f(c'), g(c'))$, and it sends the unit of $C$ to $(1_A, 1_B)$, the unit of $A \times B$.)

We conclude that $(A \times B, \cdot)$ is a product of $(A, \circ_A)$ and $(B, \circ_B)$ in $\mathsf{Mon}$. So, $\mathsf{Mon}$ has at least binary products.
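As a sanity check of the construction above (my own sketch; the monoids $\mathbb{Z}_2$, $\mathbb{Z}_3$, $\mathbb{Z}_6$ and the maps `f`, `g` are illustrative choices, not from the thread):

```python
from itertools import product

# Two small monoids: A = (Z_2, + mod 2) and B = (Z_3, + mod 3).
A, add2 = list(range(2)), lambda a, b: (a + b) % 2
B, add3 = list(range(3)), lambda a, b: (a + b) % 3

# The product monoid A x B with componentwise multiplication.
AxB = list(product(A, B))
mul = lambda p, q: (add2(p[0], q[0]), add3(p[1], q[1]))

# A sample cone: C = (Z_6, + mod 6) with f = reduction mod 2, g = reduction
# mod 3 (both are monoid homomorphisms C -> A and C -> B).
C, add6 = list(range(6)), lambda a, b: (a + b) % 6
f, g = (lambda c: c % 2), (lambda c: c % 3)

# The induced map (f,g)(c) = (f(c), g(c)) is a monoid homomorphism into A x B:
fg = lambda c: (f(c), g(c))
assert all(fg(add6(c, d)) == mul(fg(c), fg(d)) for c in C for d in C)
assert fg(0) == (0, 0)  # unit goes to unit

# And it is the ONLY function C -> A x B commuting with both projections:
others = [h for h in (dict(zip(C, vals)) for vals in product(AxB, repeat=len(C)))
          if all(h[c][0] == f(c) and h[c][1] == g(c) for c in C)]
assert len(others) == 1 and all(others[0][c] == fg(c) for c in C)
```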

view this post on Zulip John Baez (Jun 14 2024 at 17:23):

Great! And I guess you can see $\mathsf{Mon}$ also has nullary products, aka a terminal object. So it has finite products.

view this post on Zulip David Egolf (Jun 14 2024 at 17:26):

John Baez said:

Great! And I guess you can see $\mathsf{Mon}$ also has nullary products, aka a terminal object. So it has finite products.

Yes! Although I admit I don't know the details of how to show that $(A \times B) \times C$ is not just a product of $A \times B$ and $C$, but it is also a product of $A$ and $B$ and $C$. More generally, I suppose there is some induction procedure to show we have finite products of more than two objects, given that we have binary products.

Or, perhaps much more simply, one could just carry out an argument similar to what I did above, except modifying the argument to form the product of $n$ monoids instead of two.

view this post on Zulip David Egolf (Jun 14 2024 at 17:29):

There is an interesting related exercise (which I have not worked through) in "Categories for the Working Mathematician" by Mac Lane:

Prove: if $B$ has finite products, so does $\mathsf{Mon}(B)$.

So, our result discussed above is a special case of this exercise, where $B = \mathsf{Set}$.
[And I notice now that @John Baez mentioned this a few messages above.]

view this post on Zulip Eric M Downes (Jun 14 2024 at 21:08):

Nice work David!

We can also infer that a category from algebra $\mathsf{C}$ has finite products (even if they are not cartesian) whenever we know we can speak of $\mathsf{C}$-objects, the defining diagrams of which form the "syntactic category" of the [[Lawvere theory]] of $\mathsf{C}$. These syntactic categories require products for their expression, at least of an object with itself in this case.

Despite the imposing terminology, you actually already know much of the mathematical content, and for $\mathsf{Mon}$ have illustrated it above in your proof! Yet the existence of a Lawvere theory is not trivial -- not all categories related to abstract algebra have products. Can you think of one?

hint

The reason I mention this in connection with Eckmann-Hilton is that one of the things we do with $\mathsf{Mon}$ objects etc. is use them to present/exhibit/build other cool algebraic things in different contexts! (Which is where question 2 comes in. :) And EH tells us that sometimes (when exactly?) we get, instead of a composite thing we might have expected, a more symmetrical subtype of the very thing we started with!

view this post on Zulip David Egolf (Jun 14 2024 at 22:45):

Thanks for your comment, @Eric M Downes ! Could you maybe elaborate slightly on what you mean by a product that is (or is not) cartesian? My current guess is as follows:

view this post on Zulip David Egolf (Jun 14 2024 at 22:48):

I am also unsure what you mean by a "$\mathsf{C}$-object". I guess that $\mathsf{C}$ is some kind of category (associated in some way to algebra?).

I do know we can use the term "monoid object". Perhaps "$\mathsf{Mon}$-object" is intended as a synonym for "monoid object"? I'm not quite sure how I'd need to interpret "$\mathsf{C}$-object" in general to make this work out, though... I suppose it would be something like this:

I'm not sure that this is what you mean by the term "$\mathsf{C}$-object", though. :sweat_smile:

view this post on Zulip John Baez (Jun 14 2024 at 23:08):

David Egolf said:

Thanks for your comment, Eric M Downes ! Could you maybe elaborate slightly on what you mean by a product that is (or is not) cartesian?

It may reassure you somewhat, David, to hear that I also don't know what he meant by that. To me "a category has finite products" and "a category is cartesian" mean the same thing, and when I'm talking about categories "product" means "cartesian product" unless I say something like "tensor product" or "monoidal product".

view this post on Zulip Eric M Downes (Jun 14 2024 at 23:42):

Sorry for the confusion, and possibly poor terminology. I didn't mean "not cartesian" in a "Cartesian closed category" sense... which is how I should probably mean it, since we are all category theorists here. I mean it in a slope-browed slack-jawed set theory sense of the cartesian product one encounters in "those other" mathematics classes. :) (this is exactly the kind of hate speech we were just being lectured on in the hijacked thread on ETCS!! I'm horrible... :)

Regarding universal products which are not strictly cartesian in a set-theoretic sense.

Consider the following injective faithful functor $Z: \mathsf{Mon} \to \mathsf{Mon}^0$ where the latter is the category of "pointed possibly-empty monoids with an adjoined zero", and its morphisms always preserve the zero, but otherwise act as they did before. That is, an external zero (idempotent absorbing element) is adjoined even if one was already present internally and each externally-evident zero is identified in each monoid. I can provide the syntactic category and the initial object etc. if you like, or prove functoriality.

In this category, I want the product to be the smash-product
$M^0 \times^0 N^0 = (M \times N)/\sim^0$ where $\sim^0$ will be the equivalence relation:
$(m,0) \sim (0,n) \sim (0,0) \sim 0~\forall m \in M, n \in N$.

I know, I know that's kind of cheating, like its heart is totally cartesian, but we can agree it is not strictly a cartesian product. Nonetheless, I assert without proof that this product is the universal product in this somewhat artificial but legit category.

view this post on Zulip Eric M Downes (Jun 14 2024 at 23:46):

So perhaps John, you would call that a "monoidal product"? If so, I think that is better terminology.

view this post on Zulip Eric M Downes (Jun 15 2024 at 00:04):

David Egolf said:

I am also unsure what you mean by a "$\mathsf{C}$-object". I guess that $\mathsf{C}$ is some kind of category (associated in some way to algebra?).

Yep thats what I mean.

I do know we can use the term "monoid object".
Perhaps "$\mathsf{Mon}$-object" is intended as a synonym for "monoid object"?

Yes. So far your batting average interpreting my unhinged ravings is pretty good!

I'm not quite sure how I'd need to interpret "$\mathsf{C}$-object" in general to make this work out, though... I suppose it would be something like this:

I'm not sure that this is what you mean by the term "$\mathsf{C}$-object", though. :sweat_smile:

That's pretty much it exactly!

Yeah, take a look at the first chapter in Mac Lane and he shows the diagrams for a monoid object, a group object, and a group-action object. These are just formalizations of the equations that define those algebraic varieties, but externalized so that we only see the "inversion morphism", "identity morphism", "multiplication morphism" etc. Everything else is "hidden" away from prying eyes safely within the object.

So then you can get fancy with them, and say "I am in the category $\mathsf{Top}$, and as I have proven that $X$ partakes in all the correct diagrams for a group object, I assert that $X$ is a topological group." and then you would drop the mic. (Or at least you could have dropped the mic if you did this like 60 years ago! It's pretty common now.)

view this post on Zulip Todd Trimble (Jun 15 2024 at 04:55):

I guess a few things in response.

Sorry for the confusion, and possibly poor terminology. I didn't mean "not cartesian" in a "Cartesian closed category" sense... which is how I should probably mean it, since we are all category theorists here. I mean it in a slope-browed slack-jawed set theory sense of the cartesian product one encounters in "those other" mathematics classes. :) (this is exactly the kind of hate speech we were just being lectured on in the hijacked thread on ETCS!! I'm horrible... :)

I don't think this is maximally helpful.

view this post on Zulip Todd Trimble (Jun 15 2024 at 04:55):

Consider the following injective faithful functor $Z: \mathsf{Mon} \to \mathsf{Mon}^0$ where the latter is the category of "pointed possibly-empty monoids with an adjoined zero"

I find this fairly confusing, since monoids are never empty to begin with in the language I speak, and "pointed... with an adjoined zero" sounds either strangely redundant or very confusing. But yes, I understand a monoid equipped with an absorbing element $0$. And this $Z$ is apparently the left adjoint to the forgetful functor from monoids with $0$ to monoids. (But do we need this $Z$ to discuss the smash product?)

Smash products are fairly familiar. On pointed sets, they are pretty easy to motivate through their adjointness relation with the internal hom that consists of basepoint-preserving functions. But I'm not sure how to assign a meaning to "universal product" otherwise. It seems even muddier in the case of (noncommutative) monoids with zero, since there is no good internal hom there (by Eckmann-Hilton!).

view this post on Zulip Todd Trimble (Jun 15 2024 at 04:55):

Everything else is "hidden" away from prying eyes safely within the object.

And yet, you can probe such internal algebra objects by targeting them with arrows from other objects. In other words, a group object (for example) means precisely an object GG whose contravariant representable C(,G)\mathsf{C}(-, G) is, as a set-valued functor, equipped with natural group structure. GG may think it is safe from prying eyes, but GG would reveal itself as a terrible poker player: all objects XX can tell just by looking at GG that C(X,G)\mathsf{C}(X, G) carries group structure. :-)
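In Set\mathsf{Set} this can be made very concrete: a group object is just a group GG, and for any set XX the hom-set Set(X,G)\mathsf{Set}(X, G) becomes a group under pointwise operations. A minimal sketch, with illustrative names (here GG is taken to be Z/3\mathbb{Z}/3 under addition):

```python
# In Set, a group object is just a group G, and for any set X the hom-set
# Set(X, G) inherits a group structure pointwise.  We represent a function
# X -> G as a dict keyed by the elements of X.

G_mul = lambda a, b: (a + b) % 3   # G = Z/3 under addition
G_inv = lambda a: (-a) % 3
G_unit = 0

X = ["x", "y"]                     # an arbitrary "probing" object

def hom_mul(f, g):
    """Pointwise product of two functions X -> G."""
    return {x: G_mul(f[x], g[x]) for x in X}

def hom_inv(f):
    """Pointwise inverse of a function X -> G."""
    return {x: G_inv(f[x]) for x in X}

hom_unit = {x: G_unit for x in X}  # the constant-at-identity function

# spot-check the group axioms on Set(X, G)
f = {"x": 1, "y": 2}
g = {"x": 2, "y": 2}
h = {"x": 0, "y": 1}
assert hom_mul(hom_mul(f, g), h) == hom_mul(f, hom_mul(g, h))    # associativity
assert hom_mul(f, hom_unit) == f == hom_mul(hom_unit, f)         # unit laws
assert hom_mul(f, hom_inv(f)) == hom_unit                        # inverses
```

Naturality in XX is what upgrades this from a family of groups to a group-valued representable functor.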

view this post on Zulip Julius Hamilton (Jun 15 2024 at 13:21):

A simple and beautiful argument called the [[Eckmann-Hilton argument]] shows that if CC is symmetric monoidal…

I’d like to recall some definitions here. If CC is “symmetric monoidal”, this means that CC is a category plus a binary operation :Ob(C)×Ob(C)Ob(C)\cdot : Ob(C) \times Ob(C) \to Ob(C), such that:

(AB)C=A(BC)(A \cdot B) \cdot C = A \cdot (B \cdot C)

AB=BAA \cdot B = B \cdot A

and there exists EOb(C)E \in Ob(C) s.t. EA=AE \cdot A = A.

I think that this operation \cdot can be thought of as a functor :C×CC\cdot : C \times C \to C, but I also think this implies we would have to be working in a category whose objects are categories and which has categorical products. There is a cool proof I can’t remember right now which I’d like to relate to this, about how in categories with products, monoid objects are also comonoid objects (I think), due to the existence of the diagonal map.

a monoid in Mon(C)\mathsf{Mon}(C) is the same as a commutative monoid in CC.

I’ll assume Mon(C)\mathsf{Mon}(C) is the category of monoid objects in CC? And the above says that a monoid object in CC’s category of monoids is a commutative monoid in CC.

So Mon(Mon(C))\mathsf{Mon}(\mathsf{Mon}(C)) is the category of commutative monoids in CC.

I understand; now I’d like to study the proof.
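Before studying the proof, one way to get a feel for the statement is a brute-force check on a small set: among all pairs of binary operations on a 2-element set, every pair that is unital (for both operations) and satisfies the interchange law turns out to consist of one and the same commutative operation, as Eckmann-Hilton predicts. A rough sketch, with illustrative names:

```python
from itertools import product

S = [0, 1]

def all_ops():
    """All 16 binary operations on S, each a dict (a, b) -> value."""
    keys = list(product(S, S))
    for values in product(S, repeat=len(keys)):
        yield dict(zip(keys, values))

def unit_of(op):
    """A two-sided identity element for op, or None if there is none."""
    for e in S:
        if all(op[(e, a)] == a == op[(a, e)] for a in S):
            return e
    return None

def interchange(op1, op2):
    """(a . b) o (c . d) == (a o c) . (b o d), with o = op1 and . = op2."""
    return all(
        op1[(op2[(a, b)], op2[(c, d)])] == op2[(op1[(a, c)], op1[(b, d)])]
        for a, b, c, d in product(S, repeat=4)
    )

ops = list(all_ops())
for op1 in ops:
    for op2 in ops:
        if (unit_of(op1) is not None and unit_of(op2) is not None
                and interchange(op1, op2)):
            # Eckmann-Hilton: the two operations coincide and are commutative
            assert op1 == op2
            assert all(op1[(a, b)] == op1[(b, a)] for a, b in product(S, S))
```

This is only evidence on a 2-element carrier, of course; the actual argument works uniformly for any set (and, suitably internalized, in any symmetric monoidal category).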

view this post on Zulip Eric M Downes (Jun 15 2024 at 13:31):

Todd Trimble said:

I find this fairly confusing, since monoids are never empty to begin with in the language I speak, and "pointed... with an adjoined zero" sounds either strangely redundant or very confusing. But yes, I understand a monoid equipped with an absorbing element 00.

Here I want them to be possibly-empty in order for them to be pointed with initial object {0}\set0.

What you haven't told me is what you would say instead to describe this category? :)

I freely admit this is not the sharpest example possible, it just seemed relevant and I felt certain David would understand it without any difficulty. Yes, the Yoneda lemma means that internalized structure is only superficially hidden, I'll have to work on my category theory jokes.

But I'm not sure how to assign a meaning to "universal product" to it [without an internal hom]

Maybe I am conflating "universal" with "categorical". Is there a difference here?

By "universal foo" I usually mean "the object foo having the universal property Yoneda(foo)".

Specifically: M0×0N0M^0\times^0N^0 having
π1:M0×0N0M0,π2:M0×0N0N0\pi_1:M^0\times^0 N^0\to M^0, \pi_2:M^0\times^0 N^0\to N^0
is the categorical product for all M0,N0Mon0M^0,N^0\in \mathsf{Mon}^0, that is:
P0Mon0,a:P0M0,b:P0N0;\forall P^0\in \mathsf{Mon}^0, a:P^0\to M^0, b:P^0\to N^0;
!u:P0M0×0N0,a=π1u,b=π2u\exists! u:P^0\to M^0\times^0 N^0, a=\pi_1\circ u, b=\pi_2\circ u

This does not require an internal hom, which you know, so I am confused.

there is no good internal hom there (by Eckmann-Hilton!)

This is a great point! David can indicate if he'd welcome this discussion, or wants to wrap up other matters, but this might actually be a great place to discuss this.

view this post on Zulip Todd Trimble (Jun 15 2024 at 14:17):

Here I want them to be possibly-empty in order for them to be pointed with initial object {0}\{0\}.

What you haven't told me is what you would say instead to describe this category? :)

I was trying to go along with what you wrote, but now it sounds like you want semigroups with an absorbing element.

Specifically: M0×0N0M^0\times^0N^0 having
π1:M0×0N0M0,π2:M0×0N0N0\pi_1:M^0\times^0 N^0\to M^0, \pi_2:M^0\times^0 N^0\to N^0
is the categorical product for all M0,N0Mon0M^0,N^0\in \mathsf{Mon}^0, that is:
P0Mon0,a:P0M0,b:P0N0;\forall P^0\in \mathsf{Mon}^0, a:P^0\to M^0, b:P^0\to N^0;
!u:P0M0×0N0,a=π1u,b=π2u\exists! u:P^0\to M^0\times^0 N^0, a=\pi_1\circ u, b=\pi_2\circ u

That's a categorical product, but before you said you wanted the product to be the smash product. I don't know what point you're trying to make, but you were saying something about a non-cartesian product being a "universal product", and I was trying to decipher what that meant.

view this post on Zulip Eric M Downes (Jun 15 2024 at 14:41):

Ah ok, thanks! I will take John's advice and rephrase point 2 "The smash product is an example of a monoidal product that is not the cartesian product from set theory." I think we're on the same page there.

As for point 1, I am referencing an alternative valid Lawvere theory for monoids etc. in which instead of defining η:1M; μ(η×idM)=idM\eta:1\to M; ~\mu\circ(\eta\times id_M)=id_M etc. for a multiplication μ:M×MM\mu:M\times M\to M one instead defines
e:MM; eπ1=eπ2; μ(idM×e)Δ=μ(e×idM)Δ=idMe:M\to M;~e\circ\pi_1=e\circ\pi_2;~\mu\circ(id_M\times e)\circ\Delta=\mu\circ(e\times id_M)\circ\Delta=id_M
There are reasons to do this but I apologize, I appear to be taking us on a tangent far from EH...

So I'll just reference this blog post, esp. the comments by Alexander Campbell, Tom Leinster, and Mike Shulman. The referenced paper by Freyd is here. If I should call these "possibly empty monoids" something else, I'm certainly open to suggestions! (In my case, Mon0\sf Mon^0, I wanted them to not be empty, but instead to always have a 0 which, except in the degenerate case, is distinguished from the identity map ee.)
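The unary-constant presentation can be spot-checked on small examples: the empty set satisfies all the axioms vacuously, and in any inhabited model the constant function ee factors through a genuine nullary unit. A rough sketch, with illustrative names:

```python
# A "possibly-empty monoid": instead of a nullary unit eta: 1 -> M, use a
# *constant* unary operation e: M -> M with mu(x, e(x)) = mu(e(x), x) = x.
# The empty carrier then satisfies every axiom vacuously.

def is_possibly_empty_monoid(M, mu, e):
    constant = all(e(x) == e(y) for x in M for y in M)        # e o pi1 = e o pi2
    unital = all(mu(x, e(x)) == x == mu(e(x), x) for x in M)  # unit laws via e
    assoc = all(mu(mu(x, y), z) == mu(x, mu(y, z))
                for x in M for y in M for z in M)
    return constant and unital and assoc

# the empty model: any functions will do, since all quantifiers are vacuous
assert is_possibly_empty_monoid([], lambda x, y: x, lambda x: x)

# an inhabited model: Z/4 under multiplication, with e constantly 1
M = [0, 1, 2, 3]
mu = lambda x, y: (x * y) % 4
e = lambda x: 1
assert is_possibly_empty_monoid(M, mu, e)

# inhabited => e factors through a genuine nullary constant 1 -> M
assert {e(x) for x in M} == {1}
```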

view this post on Zulip Todd Trimble (Jun 15 2024 at 15:07):

Okay, this is now making sense.

view this post on Zulip John Baez (Jun 15 2024 at 20:40):

@Todd Trimble - have you figured out what Eric means by a "possibly empty monoid"? A semigroup?

view this post on Zulip John Baez (Jun 15 2024 at 20:42):

Julius Hamilton said:

I’d like to recall some definitions here. If CC is “symmetric monoidal”, this means that CC is a category plus a binary operation :Ob(C)×Ob(C)Ob(C)\cdot : Ob(C) \times Ob(C) \to Ob(C), such that:

(AB)C=A(BC)(A \cdot B) \cdot C = A \cdot (B \cdot C)

AB=BAA \cdot B = B \cdot A

and there exists EOb(C)E \in Ob(C) s.t. EA=AE \cdot A = A.

That's not the definition of a symmetric monoidal category! It's the definition of a very special case, which I call a "commutative monoidal category". In a general symmetric monoidal category, all these equations are replaced by isomorphisms, which need to obey coherence laws. (See the link.)

view this post on Zulip John Baez (Jun 15 2024 at 20:43):

Very few symmetric monoidal categories are, or are even equivalent to, commutative monoidal categories.

view this post on Zulip Eric M Downes (Jun 15 2024 at 20:54):

John Baez said:

Todd Trimble - have you figured out what Eric means by a "possibly empty monoid"? A semigroup?

I know the question was for Todd, but do you remember the discussion you had with Alexander Campbell on your blog post "the group with no elements"? It's just that minus the inversion involution. I found the original paper by Freyd in case the link on your blog is paywalled, and it is linked above.

Above it appears in the context of an adjoined zero Mon0\sf Mon^0, which may be confusing matters.

view this post on Zulip Eric M Downes (Jun 15 2024 at 20:57):

(It's defining the monoid identity to be a unary operation instead of a nullary op.)

view this post on Zulip Todd Trimble (Jun 15 2024 at 21:11):

John Baez said:

Todd Trimble - have you figured out what Eric means by a "possibly empty monoid"? A semigroup?

Yes, I figured it out; no, he doesn't mean semigroup exactly. He referred back to a Cafe post of yours on groups without identities. Well, I see Eric has responded, so maybe I don't have to type it out. Instead of dealing with a constant e:1Me: 1 \to M in the theory, one works instead with a [[constant function]] f:MMf: M \to M, meaning a function such that f(x)=f(y)f(x) = f(y) for all x,yx, y. If MM is inhabited, then such ff factors through a constant e:1Me: 1 \to M, but nothing forces MM to be inhabited.

The hidden point is that if MM is inhabited, meaning that if the unique map M1M \to 1 is a regular epi, then 11 is the coequalizer of the two projection maps π1,π2:M×MM\pi_1, \pi_2: M \times M \rightrightarrows M (think of the projection maps as signifying free variables x,yx, y).

Now why Eric brought this up: ask him I guess. But in the end, he said the smash product is an example of a noncartesian monoidal product. I don't know what his claims were about "universal products", but maybe he can explain if he wants.

view this post on Zulip Eric M Downes (Jun 15 2024 at 21:19):

Todd Trimble said:

I don't know what his claims were about "universal products", but maybe he can explain if he wants.

All I meant was to claim that the smash product has the universal property of a product in the category Mon0\sf Mon^0 I described, rather than the product one might expect to be inherited from Mon\sf Mon, which would "mix zeros" with non-zero elements. Sorry for the confusion and thank you for your patience as I improve my communication skills. [edit see below; the smash product internalizes the product from Mon\sf Mon but is merely a not-universal monoidal product in Mon0\sf Mon^0]

I'm happy to talk further about why I brought all this up, but I think it takes us too far afield from Eckmann-Hilton, and I think your point about the lack of an internal hom would be more interesting, once David feels confident he got what he came for. (Hopefully I can pull together all of my stuff in time for my Aug 28 zulip seminar talk! :)

view this post on Zulip Todd Trimble (Jun 15 2024 at 21:24):

All I meant was to claim that the smash product has the universal property of a product in the category Mon0 I described,

I don't think so. This Mon0\mathsf{Mon}^0 is the category of algebras for a Lawvere theory, and products in any such category of algebras are computed as they are in sets. The smash product is something else.

view this post on Zulip Eric M Downes (Jun 15 2024 at 21:25):

Okay! I'll go over my proof and see if there is an error, and will start in another topic.

view this post on Zulip Todd Trimble (Jun 15 2024 at 21:34):

Well, I have to say that there's still some weirdness in the exact verbal description of objects in Mon0\mathsf{Mon}^0, but it can wait.

view this post on Zulip John Baez (Jun 15 2024 at 22:12):

Todd Trimble said:

John Baez said:

have you figured out what Eric means by a "possibly empty monoid"? A semigroup?

Yes, I figured it out; no, he doesn't mean semigroup exactly. He referred back to a Cafe post of yours on groups without identities. Well, I see Eric has responded, so maybe I don't have to type it out. Instead of dealing with a constant e:1Me: 1 \to M in the theory, one works instead with a [[constant function]] f:MMf: M \to M, meaning a function such that f(x)=f(y)f(x) = f(y) for all x,yx, y. If MM is inhabited, then such ff factors through a constant e:1Me: 1 \to M, but nothing forces MM to be inhabited.

Okay, thanks! I was too lazy to go browsing through links to find the definition: I figure if someone uses a weird nonstandard term like "possibly empty monoid" they should just come out and say what they mean. But I get it now.

As you and Eric know, but others here might not, this trick of replacing constants with constant functions lets us turn any Lawvere theory into a new Lawvere theory that has all the same nonempty algebras in the category Set\mathsf{Set}, but also one empty algebra.

view this post on Zulip Todd Trimble (Jun 15 2024 at 22:19):

But what is a little slippery is that it looked like Eric didn't use this trick for both the zero and the identity element, i.e., it looked (to me) like Eric introduced a zero constant, as opposed to a zero constant function. [He said, "with an adjoined zero."] If so, then there's no difference between this whatchamacallit with zero, and a monoid with zero. That's because, if MM has a zero constant term 0:1M0: 1 \to M, then you get the identity constant term as f0:1Mf \circ 0: 1 \to M where f:MMf: M \to M is this constant function, hence you're getting a monoid after all, none of this confusing business.

view this post on Zulip John Baez (Jun 15 2024 at 22:29):

Okay. Yes, if you replace a bunch of the constants in your Lawvere theory TT by constant functions, but keep at least one constant in the theory, then you get a new Lawvere theory TT' whose algebras in Set\mathsf{Set} are "the same" as those of TT, and thus TTT \cong T'. So, in some sense there was no point introducing TT'.

(This is a semantic argument, sketched out in a way that could be filled in, but you can also see TTT \cong T' directly by a generalization of the syntactic argument you just gave.)

view this post on Zulip Eric M Downes (Jun 15 2024 at 23:14):

Since we are apparently discussing this here anyway... :)

I mean to make 1 a constant morphism e:M0M0e:M^0\to M^0 and have 0 be z:1M0z:1\to M^0 which will act as the basepoint for the smash product ×0\times^0, because I want this diagram to commute, where the top is in Mon0\sf Mon^0 and the bottom is in the familiar Mon\sf Mon
[image: diagram "products exactly preserved"]

I intend this refactored category to mirror Mon\sf Mon with the exception that
1) Mon0\sf Mon^0 has an extra object ({0},μ,e); e=idM\cong(\set0,\mu,e);~e=id_M, which is initial, with the inclusion of the initial object of Mon\sf Mon now appearing as (B,μ,e); e()=1\cong(\mathbb{B},\mu,e); ~e(-)=1 and
2) we have refactored the constants; previously a 0 was invisible in the syntactic category because not every monoid had a zero, while now every monoid has a zero and also a one, but the one is unary.

I'd actually be fine with a nullary one, but then things get messy: there are now two maps from the initial object (B,,1)\cong(\mathbb{B},\land,1), and that was a bit too much for my feeble brain to reason about easily while ensuring the category mirrors Mon\sf Mon properly.

I claim this is not isomorphic to Mon\sf Mon (it has an extra object, and although that object was also present in Mon\sf Mon, its inclusion is the "logical and" above), so the adjoint functor returning us to Mon\sf Mon is projective... but I agree it is only trivially different. The interesting thing is not the category itself, but further things I build on top of it.

view this post on Zulip Todd Trimble (Jun 15 2024 at 23:27):

You're not still claiming the smash product has the universal property of the cartesian product, are you?

If you are, then I have some very simple Socratic questions, beginning with: how many elements do you think {0,1}{0,1}\{0, 1\} \wedge \{0, 1\} has?

I'm not quite sure I have the energy and interest to untangle various things I see, but I would like to put that claim to rest, if you're still making it. (I guess I'll register one nit: the adjunction sign is made with a \dashv, and the turnstile or entailment symbol is made with a \vdash.)

view this post on Zulip Eric M Downes (Jun 15 2024 at 23:36):

Todd Trimble said:

If you are, then I have some very simple Socratic questions, beginning with: how many elements do you think {0,1}{0,1}\{0, 1\} \wedge \{0, 1\} has?

No, I'm not still claiming that; apologies, I should have been explicit, and thanks for the correction. I realized while I was writing things up that I was not saying what I meant: it allows us to capture the categorical product of a different category. I apologize; things get jumbled when I try to express them, even if I'm not making a mistake.

But since you asked: it has two elements, whereas we would need the object which is the categorical product {0,1}×{0,1}\set{0,1}\times\set{0,1} to have twice that many, since two distinct maps to {0,1}\set{0,1} cannot both factor through the same (single copy of) {0,1}\set{0,1}. (And here Todd means \land to be the smash product, not "logical and".)

So I don't know what to call ×0\times^0, I just want it to exist. It has a different unit than the product so not identical by Eckmann-Hilton, thankfully. I'll go with "a monoidal product" for now I guess.
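For onlookers, the element count is easy to verify directly: the smash product of pointed sets is the cartesian product with the wedge (all pairs containing a basepoint) collapsed to a single new basepoint. A small sketch, with illustrative names:

```python
# Smash product of pointed sets: X x Y with every pair containing a
# basepoint identified to a single collapsed basepoint.

def smash(X, x0, Y, y0):
    """Return (elements, basepoint) of the smash product X ^ Y."""
    base = "*"  # the collapsed wedge
    elems = {base}
    elems |= {(x, y) for x in X for y in Y if x != x0 and y != y0}
    return elems, base

two = {0, 1}                       # the 2-element set, pointed at 0
sm, base = smash(two, 0, two, 0)
assert sm == {"*", (1, 1)}         # exactly two elements...
assert len(two) * len(two) == 4    # ...while the cartesian product has four
```

So, as Todd's Socratic question suggests, the smash product cannot satisfy the universal property of the cartesian product here: it is simply too small.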

And thanks for the correction on the adjoint conventions.

view this post on Zulip Todd Trimble (Jun 15 2024 at 23:44):

Well, just to let onlookers know which direction I was about to head in: earlier I said that products in a category of algebras of a Lawvere theory are calculated just as they are in sets, which is to say that the forgetful functor TAlgSetT\mathsf{Alg} \to \mathsf{Set} preserves products. And one nice way to see that's true is that this forgetful functor is representable, and representable functors preserve products, by the very definition of categorical product, a conversation that David Egolf will certainly remember. In this case, the representing object, which is the free algebra on one generator, can be presented as {0,1,2,4,8,}\{0, 1, 2, 4, 8, \ldots\} (under multiplication).
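The "computed as in sets" point can be checked concretely on small monoids-with-zero: the underlying set of the categorical product is the set-level cartesian product, with all structure (multiplication, unit, zero) computed componentwise. A small sketch, with toy examples of my own choosing:

```python
from itertools import product

# Two monoids-with-zero:
M = [0, 1]            # ordinary multiplication: unit 1, absorbing 0
N = [0, 1, 2]         # multiplication mod 3: unit 1, absorbing 0
mul_M = lambda a, b: a * b
mul_N = lambda a, b: (a * b) % 3

# Componentwise structure on the cartesian product of the underlying sets
P = list(product(M, N))
mul_P = lambda p, q: (mul_M(p[0], q[0]), mul_N(p[1], q[1]))
unit_P, zero_P = (1, 1), (0, 0)

assert len(P) == len(M) * len(N)   # underlying set is the set-level product
for p in P:
    assert mul_P(p, unit_P) == p == mul_P(unit_P, p)       # (1,1) is the unit
    assert mul_P(p, zero_P) == zero_P == mul_P(zero_P, p)  # (0,0) absorbs
for p in P:
    for q in P:
        for r in P:
            assert mul_P(mul_P(p, q), r) == mul_P(p, mul_P(q, r))
```

The projections onto M and N are homomorphisms by construction, which is exactly what the representability argument guarantees in general.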

view this post on Zulip Todd Trimble (Jun 15 2024 at 23:52):

Now, back here, I'll start with point (2). I don't know if you saw the "slippery point" I mentioned, but there's no need for this tricky unary business, because as soon as you have one constant, then all constants can be reinstated (are terms in the theory), and so you may as well present the theory using the constant terms rather than bother with the tricky unary business. If ee is your unary operation, then the constant 11 can be defined as e(0)e(0).

So according to my current understanding of your category, it really is just monoids with a zero element.

I also want to say, regarding point (1), that the initial object is {0,1}\{0, 1\}, not {0}\{0\}.

view this post on Zulip Eric M Downes (Jun 15 2024 at 23:59):

Yes, it's supposed to be monoids with a zero element, while capturing the ability to take products that preserve the original monoids. It's really quite uninteresting by itself, and I'm sorry for spending so much time on it; I hope some of the above has been useful to others, or if not, that you all had some popcorn handy.

Re (1) -- I suppose you are reminding me that monoid homomorphisms preserve 1, so even if I ask them to also preserve 0 (with the unit unary), there cannot be a monoid homomorphism from {0}\set0 that does both. Thank you.

view this post on Zulip Eric M Downes (Jun 16 2024 at 00:09):

Todd Trimble said:

Now, back here, I'll start with point (2). I don't know if you saw the "slippery point" I mentioned, but there's no need for this tricky unary business, because as soon as you have one constant, then all constants can be reinstated (are terms in the theory), and so you may as well present the theory using the constant terms rather than bother with the tricky unary business. If ee is your unary operation, then the constant 11 can be defined as e(0)e(0).

So to present the Lawvere theory, you are recommending I just bite the bullet and define η,ζ:1M\eta,\zeta:1\to M as my constants, as there is nothing one can do with unary constants that cannot also be done with nullary constants; the approaches are entirely equivalent save only that one can have empty objects in a unary-constants-only theory.

view this post on Zulip Todd Trimble (Jun 16 2024 at 00:11):

Not a problem! I had forgotten this stuff about eliminating constant terms, so it was actually a little bit fun thinking things through.

The other option you have is to introduce a unary operation for zero as well, and then the empty structure can be readmitted to your category, if you like. But the slogan here is: once you have one constant term, you have them all. It's an all-or-nothing proposition. :-)

view this post on Zulip David Egolf (Jun 16 2024 at 04:28):

Much of the conversation here is going over my head! I think if I were aiming to gain more understanding from what's been posted above, it would probably help if I knew a bit more about "Lawvere theories". I may return to this thread at some point to try and learn more in that direction...

But for now I feel that I've gained a somewhat better understanding of the Eckmann-Hilton argument and some related things. (Although there is still more to learn in that direction, too!) For that reason, I am going to focus my efforts on a different thread for a while.

Thanks again to everyone for their comments here!