The current focus of my research in topos theory is to use toposes to understand monoids and vice versa. So I want to know what monoids you use or encounter in your research, monoids you want to understand better, and which are your favourites for whatever reason!
I've become very fond of the handful of monoids with just a few elements. They provide very workable (and sometimes surprising) examples and give me excuses to draw pictures, which I was afraid I would hardly ever get to do in topos theory. :heart_eyes:
The username I go by now is MonoidMusician. It was initially chosen because it sounded nice, but I do love monoids! I don’t have particular favorites, but I was just thinking the other day that a lot of monoids have layers of elements that are identities or “almost-identities”, and also absorbing elements or almost-absorbing elements. E.g. m <> e = m (identity), and m <> e1 = m for all m != e (almost-identity).
Ah, you mean like elements which are the identity of some subsemigroup?
Yeah, I guess that would be one way to phrase it
What you're describing sounds a bit more special, I like that. Like having an idempotent which absorbs the identity but is absorbed by every other element.
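For concreteness, here's a small Haskell sketch of that situation (the `Three` type and its constructor names are just mine, for illustration): the three-element chain under `min` has a genuine identity at the top, an absorbing element at the bottom, and a middle element behaving exactly like the almost-identity above.
```haskell
-- The three-element chain Bot < Mid < Top, with meet (min) as the operation.
-- Associativity is inherited from min.
data Three = Bot | Mid | Top
  deriving (Eq, Ord, Show, Enum, Bounded)

instance Semigroup Three where
  (<>) = min

instance Monoid Three where
  mempty = Top   -- Top is the genuine identity

-- Mid is the almost-identity: it is idempotent, it absorbs the identity
-- (Top <> Mid == Mid), and it is absorbed by every other element
-- (m <> Mid == m whenever m /= Top).  Bot is the absorbing element.
```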
Where do you get your intuition about monoids from/where do you find them?
I like implementing programming languages in functional languages, so they pop up whenever I see a pattern I need to generalize or need to classify/aggregate data. As an example of an aggregation-type monoid, maybe something that looks at a list of (syntax) trees and finds their common intersection, or returns an identity if there is none. The identity is essentially adjoined to the set of trees, and I guess the empty tree is an absorbing element, but sometimes you’d get an almost-identity instead. I’m currently seeing how much I can implement typechecking in terms of monoids 😃
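A minimal sketch of that kind of monoid in Haskell (the `Tree`, `Hole`, and `Common` names are just for illustration, not the actual typechecker code): combining two trees keeps their greatest common shape, `Hole` plays the absorbing “empty tree”, and `Maybe` adjoins the identity.
```haskell
data Tree = Hole | Node String [Tree]
  deriving (Eq, Show)

-- Greatest common shape of two trees; Hole marks any disagreement and is
-- absorbing for this operation.
commonShape :: Tree -> Tree -> Tree
commonShape (Node f ts) (Node g us)
  | f == g && length ts == length us = Node f (zipWith commonShape ts us)
commonShape _ _ = Hole

-- Adjoin a genuine identity ("no trees seen yet") with Maybe.
newtype Common = Common (Maybe Tree)
  deriving (Eq, Show)

instance Semigroup Common where
  Common Nothing  <> u               = u
  t               <> Common Nothing  = t
  Common (Just t) <> Common (Just u) = Common (Just (commonShape t u))

instance Monoid Common where
  mempty = Common Nothing
```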
I am a math undergrad, but we don’t talk about monoids a lot. Everyone loves groups of course. Groups and rings and fields.
Nicholas Scheel said:
I am a math undergrad, but we don’t talk about monoids a lot. Everyone loves groups of course. Groups and rings and fields.
As a computer scientist, I spend quite a lot of time peeling away inverses mathematicians have added to things.
Haha, of course :joy: Do you have any favorite examples to share?
James Wood said:
Nicholas Scheel said:
I am a math undergrad, but we don’t talk about monoids a lot. Everyone loves groups of course. Groups and rings and fields.
As a computer scientist, I spend quite a lot of time peeling away inverses mathematicians have added to things.
Amen to that. During my math (under)grad I asked myself more than once "how can monoids be interesting? They look so simple and stupid"
Then I found out about computer science :D
I dream of one day compiling a course “Monoids, Semirings, and Semimodules”, just to see how far it can go. I think some good motivating examples of semirings are the tropical semirings, where the addition is a meet and the multiplication is an addition. The standard example is (ℝ⁺ ∪ {∞}, ∞, min, 0, +), though a favourite for computer scientists is any well behaved class of formal languages 𝓛, with addition being union of languages and multiplication being pointwise concatenation of words (the order in which union is meet is ⊇). I did my bachelor's project on a generalisation of tropical semirings, which get used in shortest-distance problems on weighted graphs (the kind Dijkstra's algorithm solves). The example there is that you have vectors in (ℝ⁺ ∪ {∞})ᵏ, and instead of the addition being just the idempotent min, you merge the two vectors (as in mergesort), then take the first k elements. This has the property that if you add something to itself k times, it'll reach a fixed point (concretely, the first element repeated k times).
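For reference, here's a minimal Haskell sketch of the tropical example (the `Semiring` class is ad hoc, since base doesn't ship one):
```haskell
-- Carrier for ℝ⁺ ∪ {∞}; the derived Ord puts Infinity above every Finite x.
data Tropical = Finite Double | Infinity
  deriving (Eq, Ord, Show)

-- An ad hoc semiring class, just for this example.
class Semiring a where
  zero, one   :: a
  plus, times :: a -> a -> a

instance Semiring Tropical where
  zero = Infinity      -- identity for the "addition":  min x ∞ = x
  plus = min           -- the "addition" is the meet, i.e. min
  one  = Finite 0      -- identity for the "multiplication":  x + 0 = x
  times (Finite x) (Finite y) = Finite (x + y)   -- "multiplication" is +
  times _          _          = Infinity         -- ∞ is absorbing
```
Iterating `plus` and `times` over a weighted graph's adjacency matrix (i.e. matrix powers over this semiring) is then exactly the shortest-distance computation mentioned above.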
But regarding just monoids, do people know of Conor McBride's favourite monoid on the integers, which has identity -1?
James Wood said:
I think some good motivating examples of semirings are the tropical semirings, where the addition is a meet and the multiplication is an addition.
So you can construct a tropical semiring from any ordered monoid? If so, I hadn't thought about that before!
James Wood said:
But regarding just monoids, do people know of Conor McBride's favourite monoid on the integers, which has identity -1?
I've not heard of this, what's the operation? :grinning:
Essentially, it's a version of the category of thinnings (a strictification of the category of order-preserving embeddings between totally ordered finite sets), but where instead of each object being the size of a finite set, they're all just ω, and it collapses into a monoid. This comes up when you want to implement de Bruijn indices without dependent types, so you have no way to enforce good scoping in types. In the usual case, a thinning from m to n is equivalent to {bs ∈ 2ⁿ | ∑bs = m}, i.e., a certain class of bit vectors, with each bit telling you whether that element of the n-element set came from the m-element set. Without the domain and codomain, we just have to give a bitstream. In implementation, it's convenient to restrict to the eventually constant bitstreams, which are exactly the integers represented in two's complement binary. The identity thinning corresponds to the identity order-preserving embedding, in which every element in the codomain came from the domain, hence every bit is 1, corresponding to the integer -1. The composition is a bit difficult to explain in this form, but it's not commutative.
With composition written in diagrammatic order, θ ; φ works by iterating through φ, then sometimes θ. If the first bit of φ is 0, we know that the first element of the codomain is not in the domain (because it wasn't even in the intermediate set), so the first bit of the result is 0. If the first bit of φ is 1, then we have to check with θ whether it carries all the way through to the domain. So the result is to copy the first bit of θ, and continue on the tails of θ and φ.
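Transcribing that description directly into Haskell, with `Integer` standing in for the eventually constant bitstreams (a sketch; the two base cases aren't part of the description above, they're only there so the recursion stops once φ has gone constant):
```haskell
import Data.Bits (shiftL, shiftR, testBit, (.&.), (.|.))

-- θ ; φ in diagrammatic order, on two's complement Integers read as
-- little-endian bitstreams.
compose :: Integer -> Integer -> Integer
compose theta phi
  | phi == 0      = 0         -- φ is all 0s from here: nothing survives
  | phi == -1     = theta     -- φ is all 1s from here: copy the rest of θ
  | testBit phi 0 =           -- first bit of φ is 1: consult θ
      (theta .&. 1) .|. shiftL (compose (shiftR theta 1) (shiftR phi 1)) 1
  | otherwise     =           -- first bit of φ is 0: emit 0, keep θ
      shiftL (compose theta (shiftR phi 1)) 1
```
With this, `compose t (-1) == t` and `compose (-1) t == t` for every `t`, while for example `compose 1 2 == 2` but `compose 2 1 == 0`, so the non-commutativity is easy to see.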
James Wood said:
Essentially, it's a version of the category of thinnings (a strictification of the category of order-preserving embeddings between totally ordered finite sets), but where instead of each object being the size of a finite set, they're all just ω, and it collapses into a monoid. This comes up when you want to implement de Bruijn indices without dependent types, so you have no way to enforce good scoping in types. In the usual case, a thinning from m to n is equivalent to {bs ∈ 2ⁿ | ∑bs = m}, i.e., a certain class of bit vectors, with each bit telling you whether that element of the n-element set came from the m-element set. Without the domain and codomain, we just have to give a bitstream. In implementation, it's convenient to restrict to the eventually constant bitstreams, which are exactly the integers represented in two's complement binary. The identity thinning corresponds to the identity order-preserving embedding, in which every element in the codomain came from the domain, hence every bit is 1, corresponding to the integer -1. The composition is a bit difficult to explain in this form, but it's not commutative.
Wait why does the bitstream with every bit 1 correspond to -1? Is it just a convention in "two's complement binary" that the number encoded by a stream that's eventually 1 is the negative of the complementary, reversed stream with one added to the start..? I can accept that, I guess.
Yeah, it's a convention that happens to be quite convenient. I'm thinking of the representation being little-endian, so “eventually” refers to the high-value bits. For finite-range integers of n bits, two's complement says that the highest-value bit has value -2ⁿ⁻¹, while the other values stay the same. You have to take some sort of limit to make this work for arbitrary-range integers, I guess. You at least get the properties that consing on high-valued 0s to a positive integer and high-valued 1s to a negative integer both don't change the value.
IIRC, those properties are the ones that justify “sign extension”.
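For what it's worth, GHC's `Integer` and fixed-width integer types already follow these conventions, so they're easy to sanity-check (a throwaway sketch):
```haskell
import Data.Bits (testBit)
import Data.Int (Int16, Int8)

-- Every bit of -1 reads as 1 (little-endian), as far out as you care to look.
minusOneIsAllOnes :: Bool
minusOneIsAllOnes = all (testBit (-1 :: Integer)) [0 .. 100]   -- True

-- Widening a signed integer copies the old top bit into the new high bits
-- (sign extension), so the value never changes: 0s for non-negative numbers,
-- 1s for negative ones.
widened :: (Int16, Int16)
widened = (fromIntegral (127 :: Int8), fromIntegral (-1 :: Int8))   -- (127, -1)
```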