In our work on system dynamics, @Nathaniel Osgood, @Xiaoyan Li, @Evan Patterson and I are formalizing a tradition of using small commutative monoids to describe 'polarities'.
What does that mean? Well, sometimes when one quantity X increases, it causes some other quantity Y to increase. Then people draw a causal loop diagram with an arrow pointing from X to Y, labeled with a + sign.
Sometimes when X increases, it causes Y to decrease. Then people draw an arrow pointing from X to Y with a - sign.
Here + and - are called polarities. People treat $\{+, -\}$ as a commutative monoid, with $+$ as the identity element and $- \cdot - = +$.
For example: if X negatively affects Y, and Y negatively affects Z, then X positively affects Z.
Up to isomorphism there are just two 2-element commutative monoids: the one I just described (which is isomorphic to the additive group $\mathbb{Z}/2$) and the booleans with "or" as its monoid operation (which is isomorphic to the booleans with "and").
But people also use some 3- and 4-element commutative monoids to describe polarities. For example, the monoid I just described is often extended to a 3-element monoid with a third polarity, "indeterminate", used when you're not sure how some quantity affects another. Then we have rules like: $+$ times indeterminate is indeterminate, $-$ times indeterminate is indeterminate, and indeterminate times indeterminate is indeterminate.
You can think of $+$, $-$ and indeterminate as $1$, $-1$ and $0$, if you like. There's a monoid homomorphism from the multiplicative monoid of $\mathbb{R}$ to this monoid sending all the positive numbers to $1$, all the negative numbers to $-1$ and zero to $0$.
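Here's a quick numerical sanity check of that homomorphism, as a throwaway Python sketch of my own (nothing official, just an illustration):

```python
import random

# sign : (R, *) -> {1, -1, 0}, the "qualitative" picture of a real number
sign = lambda t: (t > 0) - (t < 0)

assert sign(1.0) == 1  # the unit of (R, *) maps to the unit 1
for _ in range(1000):
    x = random.uniform(-5, 5)
    y = random.choice([0.0, random.uniform(-5, 5)])
    # homomorphism property: the sign of a product is the product of the signs
    assert sign(x * y) == sign(x) * sign(y)
```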
This gives a practical reason to study 3-element and maybe 4-element commutative monoids.
Puzzle. Up to isomorphism, there are five 3-element commutative monoids. What are they?
I don't know, though I can easily think of a few.
I suppose one could write a computer program to find them, although I'm not sure how much insight that would generate. If there are only three elements then any proposed multiplication table only has nine entries. So there are only $3^9 = 19683$ multiplication tables to review.
Actually, since the monoid is required to have a unit element, that fixes five of the entries of our multiplication table, leaving just $3^4 = 81$ possible multiplication tables. We can reduce this further, because commutativity implies symmetry across the main diagonal of our table. So we really only have $3^3 = 27$ possible multiplication tables to check.
Our multiplication table (which perhaps I should call an "addition table" because I'm using additive notation) looks like this:
| $+$ | $0$ | $a$ | $b$ |
| --- | --- | --- | --- |
| $0$ | $0$ | $a$ | $b$ |
| $a$ | $a$ | $x$ | $y$ |
| $b$ | $b$ | $y$ | $z$ |

Here $0$ is the unit and $x$, $y$, $z$ are the entries still to be determined.
That's getting quite manageable!
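If we did want a computer to grind through those 27 tables, here's a minimal brute-force sketch in Python (just my own illustration, with made-up names): it fills in the unit row and column, keeps the associative tables, and counts them up to isomorphism. Isomorphisms must fix the unit, so the only nontrivial relabeling swaps the two non-unit elements.

```python
from itertools import product

ELEMENTS = (0, 1, 2)          # 0 plays the role of the unit

def commutative_monoid_tables():
    """Yield every associative, commutative table on {0,1,2} with unit 0."""
    for x, y, z in product(ELEMENTS, repeat=3):   # the 3 free entries: 1+1, 1+2, 2+2
        t = {(a, 0): a for a in ELEMENTS}
        t.update({(0, a): a for a in ELEMENTS})
        t[(1, 1)] = x
        t[(1, 2)] = t[(2, 1)] = y
        t[(2, 2)] = z
        if all(t[(t[(a, b)], c)] == t[(a, t[(b, c)])]
               for a in ELEMENTS for b in ELEMENTS for c in ELEMENTS):
            yield t

def canonical(t):
    """Canonical form under relabelings that fix the unit 0 (so only 1 <-> 2)."""
    forms = []
    for p in ({0: 0, 1: 1, 2: 2}, {0: 0, 1: 2, 2: 1}):
        forms.append(tuple(sorted((p[a], p[b], p[t[(a, b)]])
                                  for a in ELEMENTS for b in ELEMENTS)))
    return min(forms)

print(len({canonical(t) for t in commutative_monoid_tables()}))   # prints 5
```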
For those wondering what this has to do with category theory, let me say a word about that. A so-called causal loop diagram (terrible terminology) is just a graph, in the category theorist's sense of graph, with edges labeled by elements of some commutative monoid, called the monoid of polarities.
We can form the free category on a graph $G$. Each morphism in this free category is a composite of edges of $G$. So, we can form the free category on a causal loop diagram - but then every morphism gets labeled with a polarity, which is just the product of polarities of the edges we've composed to get that morphism.
This lets us understand 'indirect causation'. E.g. if $X$ positively affects $Y$ and $Y$ negatively affects $Z$, our causal loop diagram will have edges
$$X \xrightarrow{+} Y \qquad \text{and} \qquad Y \xrightarrow{-} Z.$$
But then the free category on this will have a morphism that's the composite of these two edges, labeled by $-$:
$$X \xrightarrow{-} Z.$$
This says $X$ has an indirect negative effect on $Z$.
All very simple - but what's nice is that a lot of economists and epidemiologists and others already use these qualitative models! So, it's worth using a bit of math to formalize them.
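To make that bookkeeping concrete, here's a minimal Python sketch (hypothetical names, not any existing library): a causal loop diagram as a dict of labeled edges, with the polarity of a composite morphism computed as the product of the edge polarities.

```python
POS, NEG = +1, -1   # the 2-element polarity monoid {+, -} as {+1, -1}

edges = {
    ("X", "Y"): POS,   # X positively affects Y
    ("Y", "Z"): NEG,   # Y negatively affects Z
}

def path_polarity(path):
    """Polarity of the composite of the edges along `path`."""
    polarity = POS     # the monoid identity labels identity morphisms
    for source, target in zip(path, path[1:]):
        polarity *= edges[(source, target)]
    return polarity

print(path_polarity(["X", "Y", "Z"]))   # -1: X has an indirect negative effect on Z
```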
I like that!
On the topic of finding some specific commutative monoids with three elements, I'm trying to think of a more pleasant way than just guessing values for x,y,z in my table above and then checking to see that associativity holds.
(By the way, the integers modulo 3 under addition, $\mathbb{Z}/3$, will give us one of our commutative monoids.)
Edit: Hmm, well nothing clever is coming to mind currently, beyond the vague idea of trying to somehow express the monoids in generator and relation form (which I think is called a "presentation").
I'm trying to think of a more pleasant way than just guessing values for x,y,z in my table above and then checking to see that associativity holds.
Since someone has already told us there are only 5 different commutative monoids with 3 elements, another approach is to figure out 5 of them and show they're not isomorphic. That's what I would probably do, since I hate long computations, and I like dreaming stuff up.
I can think of 2 more besides the group of integers mod 3, but I don't want to spoil the fun for other people!
The interval [0, 2] on the integers, equipped with the semi-lattice structure given by the minimum function.
The multiplicative monoid of integers mod 3.
Hmm, maybe thinking from the perspective of reasoning about related quantities will be helpful to dream up some ideas.
We might think of 0 as indicating that two things vary in the same way. So if $0$ describes how $X$ relates to $Y$ and $0$ describes how $Y$ relates to $Z$, then $0$ describes how $X$ relates to $Z$; that is, $0 + 0 = 0$.
Then we might try using $1$ to say that two things vary in opposite ways. From this perspective, we'd probably want $1 + 1 = 0$. Finally, $2$ could say two things are related in some "indeterminate way", so that $1 + 2 = 2$ and $2 + 2 = 2$. (This is following the discussion above).
It would take some work to check that this is actually associative, though.
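For what it's worth, here's a quick brute-force check (just my own sketch) that the proposed table is indeed associative and commutative — which makes sense, since it's the $+$/$-$/indeterminate monoid written additively, following the discussion above.

```python
# 0 is the unit, 1 + 1 = 0, 1 + 2 = 2, 2 + 2 = 2
table = {
    (0, 0): 0, (0, 1): 1, (0, 2): 2,
    (1, 0): 1, (1, 1): 0, (1, 2): 2,
    (2, 0): 2, (2, 1): 2, (2, 2): 2,
}
elements = [0, 1, 2]
assoc = all(table[(table[(a, b)], c)] == table[(a, table[(b, c)])]
            for a in elements for b in elements for c in elements)
comm = all(table[(a, b)] == table[(b, a)] for a in elements for b in elements)
print(assoc, comm)   # True True -- so this really is a commutative monoid
```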
I wonder if the min function can be interpreted from this perspective.
Using "+" to denote taking the min, we have:
Here, "0" seems to "absorb" the other thing it interacts with, so it should probably stand for some kind of uncertainty/intederminance. And "2" is our unit, so it should correspond to two things relating in the same way. Interestingly, "1" seems to "absorb" a 1 or 2, but not a 0. Perhaps it could denote an intermediate level of uncertainty.
Madeleine Birchfield said:
The interval [0, 2] on the integers, equipped with the semi-lattice structure given by the minimum function.
Great! If we think of these as polarities - i.e., use these numbers to say how, or how much, some entity $X$ affects some entity $Y$, then we can say
- 0 means no effect at all
- 1 means only a tiny effect
- 2 means a substantial effect
Then using the minimum function as our monoid operation means that we are (for some reason) making a bunch of decisions like
If $X$ has only a tiny effect on $Y$ and $Y$ has a substantial effect on $Z$, then $X$ has only a tiny effect on $Z$.
I'm not trying to justify these decisions, just saying you can formalize them using this particular monoid!
I suppose the interpretation I gave above (for the "min" operation) could be rephrased as:
- 0 means we don't know how two things relate
- 1 means we have some idea of how two things relate
- 2 means we know how two things relate
This seems maybe similar in spirit to the interpretation @John Baez gave above regarding the size of effects.
Madeleine Birchfield said:
The multiplicative monoid of integers mod 3.
This is the other one I guessed! So we just need to find two more.
This one is actually widely used! Sometimes people use it this way:
Note there's a monoid homomorphism from $\mathbb{R}$ to this monoid - I mentioned this already. So this is like a 'qualitative' version of the multiplicative monoid of real numbers.
Some other people say:
I suspect that we can define a commutative monoid having underlying set and addition given by will work, but I haven't checked that carefully.
David Egolf said:
I suppose the interpretation I gave above (for the "min" operation) could be rephrased as:
- 0 means we don't know how two things relate
- 1 means we have some idea of how two things relate
- 2 means we know how two things relate
This seems maybe similar in spirit to the interpretation John Baez gave above regarding the size of effects.
But it's interestingly different in meaning, since it's all about knowledge. My gang has considered the 4-element commutative monoid with elements "positive effect", "negative effect", "unknown effect" and "no effect".
We can get this by taking $\{1, 0, -1\}$ with multiplication to describe positive effect, unknown effect and negative effect, and then throwing in a new zero element that absorbs all the other elements - namely "no effect". So, for example
unknown effect $\cdot$ no effect $=$ no effect
You can always take a monoid (thinking of the operation as multiplication), and throw in a new "zero" element that absorbs all the other elements, getting a new monoid. The new zero beats the old zero!
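Here's a minimal Python sketch of that "throw in a new zero" construction (the function name and labels are made up for illustration), applied to $\{1, 0, -1\}$ under multiplication to get the 4-element monoid above, with the monoid axioms checked by brute force.

```python
def adjoin_zero(elements, mul, unit, zero):
    """Return (elements + [zero], new_mul, unit), where `zero` absorbs everything."""
    def new_mul(a, b):
        # the new zero beats everything, the old zero included
        return zero if zero in (a, b) else mul(a, b)
    return list(elements) + [zero], new_mul, unit

# Example: {1, 0, -1} under ordinary multiplication, plus a new absorbing "no effect"
elems, mul, unit = adjoin_zero([1, 0, -1], lambda a, b: a * b, 1, zero="no effect")

assert all(mul(unit, a) == a == mul(a, unit) for a in elems)                # unit laws
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in elems for b in elems for c in elems)                    # associativity
print(elems)  # [1, 0, -1, 'no effect']
```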
David Egolf said:
I suspect that a monoid with underlying set and addition given by will work, but I haven't checked that carefully.
Yes, it does! @Kevin Carlson had whispered in my ear a nice proof. In fact you can take any semigroup, throw in a new zero element that absorbs all the existing elements ($0x = x0 = 0$), get a monoid with $0$ as its identity. If your semigroup was commutative, your monoid will be too.
You are throwing a $0$ into the 2-element monoid with the operation you described (which you're writing additively, just to be different).
So now we've seen four 3-element commutative monoids. Do you have a good interpretation of this new one?
It's neat you can add a zero to a semigroup and get a monoid!
I think we can interpret this one as follows: each element measures how uncertain we are about how two things are related.
Then the rule that combining two "somewhat uncertain" relationships gives a "maximally uncertain" one expresses the idea that uncertainty compounds when we pass information through two uncertainty-inducing processes.
Intuitively, I'm thinking of information being passed through "lossy channels". Passing information successively through two moderately lossy channels can result in more than a moderate loss in information. (I'm also thinking of "noisy channels" with a similar intuition.)
We might write this instead using the set $\{0, 1, 2\}$. Then each element corresponds to our degree of uncertainty in our understanding of the relationship between two things. (0 means no uncertainty, 1 means some uncertainty, 2 means more uncertainty than indicated by 1). We perform "saturated addition" because uncertainty maxes out at "level 2" instead of looping back around.
This kind of saturated addition can also occur when observing something with a sensor, provided the sensor can only observe up to some maximum value.
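Just to make the operation concrete, here's a tiny sketch of "saturated addition" (the helper name is made up):

```python
def sat_add(a, b, top=2):
    """Add, then clip at `top`: uncertainty maxes out instead of wrapping around.
    The same clipping shows up when a sensor saturates at its maximum reading."""
    return min(a + b, top)

print(sat_add(1, 1), sat_add(1, 2), sat_add(2, 2))   # 2 2 2 -- no wrap-around, unlike Z/3
```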
This is cool - but I'm a bit confused by your additive notation, so I will have to re-read it carefully. I'm using multiplicative notation because that gives us a simple standard name for an element that overpowers every other element: ZERO! (When we think multiplicatively, taking a semigroup and throwing in such a zero element gives a monoid.)
When we think additively, perhaps we should call such an element INFINITY.
Anyway, now I've reread your last comment after flipping the little switch on the back of my head from $\times$ to $+$, and it makes sense now.
I wrote a (now-deleted) comment and realized that I was just confusing myself... It's easy to mix up this notation!
Perhaps infinity would be a better name for the "maximum uncertainty" element, when working in additive notation!
I guess there now only remains a single three-element commutative monoid that we haven't found yet...
Maybe we can try to get the last one by starting with a different two element commutative semigroup, and then adding in zero. So, perhaps we could start with the integers modulo two and then add in another "zero" element?
John Baez said:
David Egolf said:
I suspect that a monoid with underlying set and addition given by will work, but I haven't checked that carefully.
Yes, it does! Kevin Carlson had whispered in my ear a nice proof. In fact you can take any semigroup, throw in a new zero element that absorbs all the existing elements ($0x = x0 = 0$), get a monoid with $0$ as its identity. If your semigroup was commutative, your monoid will be too.
You are throwing a $0$ into the 2-element monoid with the operation you described (which you're writing additively, just to be different).
Wait a second - how can $0$ be the identity of a resulting monoid if it "absorbs" all the other elements? I would have expected $x + 0 = x$ or $0 + x = x$ for all $x$ in our semigroup.
(By the way, here's why I like to use "+" in a commutative context: I know about some "multiplications" that aren't commutative, but I don't know of any "addition" that isn't commutative. So, seeing "+" floating around helps me remember that our binary operation is commutative.)
David Egolf said:
Edit: Hmm, well nothing clever is coming to mind currently, beyond the vague idea of trying to somehow express the monoids in generator and relation form (which I think is called a "presentation").
Classifying all 3-element commutative monoids by generators and relations is easily done:
There are two cases, depending on the number of generators.
Case 1: The monoid $M$ is generated by one element $x$, hence $M = \{1, x, x^2\}$.
Case 2: The monoid $M$ is generated by two elements $x$ and $y$, hence $M = \{1, x, y\}$.
For Case 1, $x^3$ should be equal to one of $1, x, x^2$, so we need an additional equation for $x^3$. Therefore this provides three examples, namely,
[$x^3 = 1$] cyclic group: $\mathbb{Z}/3$ with addition.
[$x^3 = x$] parity monoid with addition.
[$x^3 = x^2$] plurality monoid with addition.
For Case 2, $x^2$ should be equal to one of $1, x, y$. We can assume that $x^2 = 1$ or $x^2 = x$, since $x^2 = y$ implies that $M$ is generated only by one element $x$: $M = \{1, x, x^2\}$. By the same argument, we can assume $y^2 = 1$ or $y^2 = y$.
[$x^2 = 1$, $y^2 = 1$] In this case, all generators are invertible, so that the resulting monoid is a group with even order. So $|M|$ cannot be $3$.
[$x^2 = 1$, $y^2 = y$] In this case, $xy = y$. (Otherwise, we can prove $y = x$ or $y = 1$.) This monoid is isomorphic to the multiplicative monoid $(\mathbb{Z}/3, \times)$.
[$x^2 = x$, $y^2 = y$] In this case, $xy$ is equal to either $x$ or $y$. This monoid is equivalent to the monoid $(\{0, 1, 2\}, \min)$ described above.
What I'm interested in is the relationship with algebraic language theory. Our problem is equivalent to the classification of $3$-element quotient monoids of $\mathbb{N}^2$ (or congruent $3$-colorings of the "first quadrant" $\mathbb{N}^2$). This seems strongly related to algebraic formal language theory in two ways.
John Baez said:
But it's interestingly different in meaning, since it's all about knowledge. My gang has considered the 4-element commutative monoid with elements "positive effect", "negative effect", "unknown effect" and "no effect".
For the "knowledge-modeling monoid", I want to mention Lawvere's work on Graphic monoids. As far as I understand, even non-commutative -element monoid would be of interest. If so, I have an favorite monoid, , which is the endomorphism monoid of the -simplex . William Lawvere explains how this kind of finite monoid (with the "ignore-overlapping-inputs-equation") could be a model of our knowledge system, in his paper Display of graphics and their applications, as exemplified by 2-categories and the Hegelian “taco”.
(This is also related to algebraic language theory. For example, it is mentioned in Mathematical Foundations of Automata Theory.)
David Egolf said:
Wait a second - how can $0$ be the identity of a resulting monoid if it "absorbs" all the other elements? I would have expected $x + 0 = x$ or $0 + x = x$ for all $x$ in our semigroup.
Yeah, you want to adjoin a new identity, not a new absorbing element! This is the left adjoint to the forgetful functor from monoids to semigroups. I suppose adjoining a new absorbing element to a monoid also always gives a monoid, but this construction doesn't have the nice property of being injective on isomorphism classes.
The latter means that isomorphism classes of three-element (commutative) monoids arising this way are in natural bijection with isomorphism classes of two-element (commutative) semigroups, and it’s pretty manageable to check by hand that there are only three of those.
Here’s a proof that there are only two more three-element commutative monoids: the only group is $\mathbb{Z}/3$ under addition. We've also already seen $\mathbb{Z}/3$ under multiplication as "+, -, indeterminate", where "indeterminate" is an absorbing element. This is the only choice with $\mathbb{Z}/2$ as a submonoid, since multiplication by an invertible element in a monoid is a self-bijection and that forces the third element to be absorbing.
So for the other possibilities we have an identity $e$ and two other elements $x$ and $y$, with none of $x^2$, $xy$, $y^2$ equal to $e$ (the case $x^2 = e$ was just handled). If we could have $xy = e$ then we must have $x^2 = y$, since again multiplication by a unit is a bijection, and then $x^3 = xy = e$, giving the group $\mathbb{Z}/3$ again, so that’s ruled out. Therefore the others are the three cases with $\{x, y\}$ as a sub-semigroup described above.
David Egolf said:
John Baez said:
In fact you can take any semigroup, throw in a new zero element that absorbs all the existing elements ($0x = x0 = 0$), and get a monoid with $0$ as its identity. If your semigroup was commutative, your monoid will be too.
Wait a second - how can $0$ be the identity of a resulting monoid if it "absorbs" all the other elements?
Blech, you're right - here I am getting mixed up by additive vs. multiplicative notation! This has been clarified by now, but let me point out how the truth looks vaguely similar to the baloney I wrote... let's use additive notation now:
"you can take any semigroup, throw in a new identity (), and get a monoid with as its identity. If your semigroup was commutative, your monoid will be too."
@Ryuya Hora - that's very interesting stuff; thank you!
As far as I understand, even non-commutative 3-element monoids would be of interest.
It's certainly true that the trick of taking the "free $M$-labeled category on an $M$-labeled graph" works fine for a noncommutative monoid $M$. I added the commutative law just because all the examples that have shown up in applications so far are commutative, and I wanted to reduce the number to look at. There are five 3-element commutative monoids, and now I see there are only seven 3-element monoids. That's not many more!
I once spent a bunch of time thinking about graphic monoids, trying to understand Lawvere's work, but I seem to have forgotten most of what I knew.
I just looked to see what I've written about them, and all I see is a fact someone told me, whose significance I've never really understood: graphic monoids are the same as unital shelves.
A [[shelf]] is a set with a binary operation $\rhd$ that distributes over itself:
$$a \rhd (b \rhd c) = (a \rhd b) \rhd (a \rhd c).$$
There are a lot of interesting things to say about them, but a shelf is unital if it has an element $1$ with
$$1 \rhd a = a \qquad \text{and} \qquad a \rhd 1 = a \quad \text{for all } a,$$
and these turn out to be the same as graphic monoids, which seems merely strange.
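Since this is easy to test on a small example, here's a quick Python check (my own sketch) using the 3-element monoid of order-preserving endomaps of a 2-element chain — the identity and the two constant maps, i.e. the endomorphism monoid of the 1-simplex mentioned above: it satisfies the graphic identity $xyx = xy$, and the very same operation obeys the self-distributivity and unit laws of a unital shelf.

```python
from itertools import product

maps = {"id": (0, 1), "c0": (0, 0), "c1": (1, 1)}   # f encoded as (f(0), f(1))

def compose(f, g):
    """(f * g)(t) = f(g(t)), evaluated on the two points 0 and 1."""
    return tuple(maps[f][t] for t in maps[g])

def name(pair):
    return next(k for k, v in maps.items() if v == pair)

mul = {(f, g): name(compose(f, g)) for f, g in product(maps, repeat=2)}

# graphic identity: x y x = x y
assert all(mul[(mul[(x, y)], x)] == mul[(x, y)] for x in maps for y in maps)

# shelf law (self-distributivity) for the same operation: x(yz) = (xy)(xz)
assert all(mul[(x, mul[(y, z)])] == mul[(mul[(x, y)], mul[(x, z)])]
           for x in maps for y in maps for z in maps)

# unitality: the identity map is a two-sided unit
assert all(mul[("id", x)] == x == mul[(x, "id")] for x in maps)
print("graphic identity and unital-shelf laws both hold")
```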