I'm hoping to find a book that works out the basic general theory of monoids (and their modules) in an arbitrary monoidal category.
I am aware of some books that study monoids in certain specific monoidal categories. For example, there are books dedicated specifically to algebras, or specifically to rings. Due to their relative specificity, such books can study many interesting results that don't necessarily hold for monoids in arbitrary monoidal categories. However, I suspect that there will also be some basic theory in common that repeats between such books.
It would be interesting to know what kind of constructions and results hold for monoids in any monoidal category, and which results are specific to e.g. rings or algebras. A reference on monoids in an arbitrary monoidal category could provide a library of results that one could immediately apply to monoids in some unfamiliar monoidal category of interest.
Does anyone know of a reference like this? (I'd also be interested in a reference which is similar in spirit but which works in an arbitrary symmetric or braided monoidal category).
[Edit: I am wondering now if perhaps not much can be said in the general case. That could explain why I haven't seen a book on this topic.]
I don't know of any such reference. I suspect you may be right that the extreme generality of monoids in monoidal categories makes a general theory difficult. For example, any monad can be construed as a monoid in a monoidal category (a category of endofunctors whose monoidal product is given by endofunctor composition). There are difficult technical papers concerning just the problem of constructing free monads.
However, things can get better depending on the nature of the monoidal product. If for example the monoidal product is separately cocontinuous and the underlying category is cocomplete, then there is a nice "geometric series" construction for free monoids. Moreover, in this case the category of monoid objects is monadic over the underlying category, and there may be some other useful structural things one can say. But I am still unaware of a book devoted to studying this case.
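Concretely, that construction builds the free monoid on an object $X$ as a "geometric series" (a rough sketch, under the cocompleteness and separate-cocontinuity hypotheses just mentioned):
$$T(X) \;=\; \coprod_{n \geq 0} X^{\otimes n} \;=\; I \;+\; X \;+\; (X \otimes X) \;+\; (X \otimes X \otimes X) \;+\; \cdots,$$
with unit given by the inclusion of the summand $I$, and multiplication assembled from the canonical isomorphisms $X^{\otimes m} \otimes X^{\otimes n} \cong X^{\otimes (m+n)}$ followed by the coproduct inclusions.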
It might be useful to note that Cocartesian categories are precisely those monoidal categories in which every object has a unique monoid structure. In other words, a very special case of the theory of “categories of monoids in a monoidal category” is the theory of “cocartesian categories.” In particular the theory of strong monoidal functors out of such categories contains the theory of models of algebraic theories. So, it’s quite a lot.
Kevin Carlson said:
It might be useful to note that Cocartesian categories are precisely those monoidal categories in which every object has a unique monoid structure.
I don't know if this is true. What I know is that a symmetric monoidal category is the kind we obtain from a choice of binary coproducts and initial object iff every object is a monoid, every morphism is a morphism of monoids, the monoid unit of the monoidal unit is the identity morphism, and the monoid multiplication of a tensor product is the one induced by the two underlying multiplications.
Opposites of Markov categories are also categories in which every object admits a monoid structure, though it is not required to be unique.
So I don't know a counterexample if it happens to be false. If there exists a Markov category such that the structure of comonoid on every object is unique then its opposite category would be a counterexample.
An interesting fact about monoids in a monoidal category is that they always form a monoidal category in which the tensor product, which is induced by the tensor product of the underlying monoidal category, is a coproduct.
The reason why the tensor product becomes a coproduct when you move to the symmetric monoidal category of monoids is that you can use the units to obtain morphisms $A \cong A \otimes I \xrightarrow{1_A \otimes \eta_B} A \otimes B$ and $B \cong I \otimes B \xrightarrow{\eta_A \otimes 1_B} A \otimes B$, and you obviously have the multiplications $\mu_C : C \otimes C \to C$ of the monoids. You can use this multiplication to obtain the copairing $A \otimes B \xrightarrow{f \otimes g} C \otimes C \xrightarrow{\mu_C} C$ from two morphisms $f : A \to C$ and $g : B \to C$.
Jean-Baptiste Vienney said:
So I don't know a counterexample if it happens to be false. If there exists a Markov category such that the structure of comonoid on every object is unique then its opposite category would be a counterexample.
I think that the ordered monoid, seen as a category via the order and as a monoidal category via the monoid structure, could maybe be a counterexample to your statement too.
This category doesn't have a product of and since and but it is false that .
For every object we have .
Hmm , but for every object , we don't have .
It doesn't work.
Using an ordered monoid could be a way to find a counterexample too, maybe.
More generally, looking for a poset which is a monoidal category but not a cartesian one could be a good idea.
You probably have to add some more conditions, like every morphism is a monoid map, and the monoidal structure is symmetric. That doesn't really detract from Kevin's point.
Jean-Baptiste Vienney said:
An interesting fact about monoids in a monoidal category is that they always form a monoidal category in which the tensor product, which is induced by the tensor product of the underlying monoidal category, is a coproduct.
It's true if you change "monoid" to "commutative monoid". But I don't think it's true for rings seen as monoids in the monoidal category of abelian groups.
Hmm, I don't think so.
Hm, you don't think so what? Do you disagree that rings give a counterexample?
I don't directly disagree that rings give a counterexample, because I don't understand this example extremely well, but I think that you don't need commutativity (and so that rings don't give a counterexample).
Well, you don't even have to use rings; just use monoids in the category of sets. The coproduct of monoids is not given by the cartesian tensor.
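For instance, taking both monoids to be $\mathbb{N}$ (the free monoid on one generator): their coproduct in the category of monoids is the free monoid on two generators,
$$\mathbb{N} + \mathbb{N} \;\cong\; \{\text{words in two letters } a, b\},$$
which is noncommutative, while the cartesian product $\mathbb{N} \times \mathbb{N}$ is commutative. So the cartesian product cannot be the coproduct of monoids.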
[It might be true that commutativity is not strictly necessary, that some other condition might conceivably replace it (I don't know offhand), but I only said it would be sufficient.]
But I think this paper says that you don't need the commutativity?
I mean as I understand this paper, it says that the category of monoids of a monoidal category has coproducts given by the tensor product of the monoidal category. Maybe I don't understand this paper?
Ok, the paper asks for commutativity.
You are totally right.
and this entry of the nLab contains the same mistake that I made: Cartesian monoidal category
The difficulty is that in Fox's paper he calls "coalgebra" what we usually call a cocommutative coalgebra.
He says "coalgebra" for short, I think, but where he first introduces what he means, he puts in parenthetically "(coassociative, coabelian, counitary)".
Where is the mistake in the nLab? I don't see it right away.
also he calls "monoidal category" what we usually call a symmetric monoidal category.
At the end of the section on the nLab
Todd Trimble said:
He says "coalgebra" for short, I think, but where he first introduces what he means, he puts in parenthetically "(coassociative, coabelian, counitary)".
But what is fun is that he didn't add "cocommutative" there, but added it in the required equations just after
By "coabelian", I think he means what we ordinarily call nowadays "cocommutative".
Yes, I see now the mistake in the nLab. Good catch. But I'm about to go to bed! Maybe I'll fix it tomorrow if no one else gets to it.
Todd Trimble said:
By "coabelian", I think he means what we ordinarily call nowadays "cocommutative".
Oh, ok.
Todd Trimble said:
Maybe I'll fix it tomorrow if no one else gets to it.
Never mind, I added the word "cocommutative" on the nLab. It was not a lot of work.
"Monoidal category" should also be changed to "symmetric monoidal category" on that page in §3, which is what Fox proves, and is also necessary to define "cocommutative".
David Egolf said:
I'm hoping to find a book that works out the basic general theory of monoids (and their modules) in an arbitrary monoidal category.
If you're willing to wait a few years, I might have something for you :sweat_smile:
I have recently been working on the question of when the construction of the category of actions of a monoid in as general a context as possible acquires some of the standard properties one would associate with those, most notably the existence of adjoint functors with the base category. I really want to tackle questions about what can be said about monoids and their actions in a top-down way. It's a long project, but I have time.
I agree that there is still much to be done on what you're asking. On my side, I would be interested in working on monoids in symmetric monoidal regular categories (or another kind of structure used in categorical algebra, such as exact categories or protomodular categories; I first need to learn about them) in the future. I think we could develop more basic algebra in this context than in a mere symmetric monoidal category. For instance, we should be able to talk about things such as monoid congruences, quotients by a monoid congruence, and maybe isomorphism theorems in this context.
Usual categorical algebra was developed starting with abelian categories and then generalizing to weaker contexts. I think that abelian symmetric monoidal categories have been studied, but clearly not all the weaker contexts have been studied with the category replaced by a symmetric monoidal one.
I'm not sure to what extent this idea makes sense, but at least I'd like to try to understand if there is something to be done.
Whereas usual categorical algebra is good for talking about monoids, groups and modules, I think that if you add a symmetric monoidal structure, you could also talk about (commutative) semirings, rings and algebras.
Maybe the point is that what's usually done with a product can be done with an arbitrary symmetric monoidal or monoidal product.
Anyway, just some ideas.
Advertising my own work, in my recent book, section 6.2, there's an introduction to monoid objects and their modules, and some basic facts.
Thanks everyone! I hope to respond in more detail when I have some more energy.
Jean-Baptiste Vienney said:
Usual categorical algebra was developed starting with abelian categories and then generalizing to weaker contexts.
I know that Andrea Montoli specializes in this direction, with the motivation that to get anywhere studying monoids, you need to have access to standard constructions such as centralizers. This approach is a descendant of universal algebra, where the properties of specific categories of interest (this is a term that is used with a precise meaning in the paper I just linked but I am talking more generically) are abstracted and their respective consequences for the category of monoids are examined; it's a top-down approach. I want to go for a bottom-up approach of establishing how little you can get away with to talk about monoids and their actions at all, and then see what you have to add to recover standard constructions, or what you need in order for those constructions to behave the way one would expect from familiar cases.
In this paper they introduce a framework whose examples include the category of groups, the category of rings, and others. What I would like to see is a paper that talks about the category of monoids of a monoidal category, and about which properties of the monoidal category imply that this category of monoids has interesting properties. Such desirable properties could be, for instance, properties that are shared by the categories of set-theoretic semirings and monoids. So I think I'm interested in a bottom-up approach too.
At the very least, and as a starting point, I would be happy to see a notion of congruence on a monoid in a (type of) monoidal or symmetric monoidal category.
@Jean-Baptiste Vienney It might be a bit naive, but I have the impression that Marcelo Aguiar's PhD thesis contains material relevant to internal monoids (even though it rather focuses on internal categories)
Section 2.1 is interesting! He introduces a notion of regular monoidal category which is a monoidal category with equalizers of pairs of maps which are preserved by tensoring on each side by objects. And he says that the category of monoids of a regular monoidal category is a regular monoidal category.
It’s still not enough to talk about kernel pairs, congruences etc… but it is a good start.
Kevin Carlson said:
It might be useful to note that Cocartesian categories are precisely those monoidal categories in which every object has a unique monoid structure.
@Kevin Carlson Sorry, I think that a statement very close to this is true. However, I don't know if this is a well-known statement. Do you have a reference for your statement? We can even reduce the hypotheses. But I only know how to give a statement for symmetric monoidal categories.
(1) Cocartesian symmetric monoidal categories are precisely those symmetric monoidal categories in which every object has a unique unital magma structure.
As far as I know, the usual statement (for instance given in the book "Categories for Quantum Theory") is:
(2) Cocartesian symmetric monoidal categories are precisely those symmetric monoidal categories in which every object has a unital magma structure, such that a compatibility identity between the multiplications and the tensor product is satisfied and a compatibility identity between the units and the monoidal unit is satisfied. (I can give these two compatibility equations explicitly if needed.)
Note that I dropped the word "unique" in the second statement, and it changes everything. The way I understand this subtlety is that given a symmetric monoidal category $\mathcal{C}$, the category $\mathrm{UMag}(\mathcal{C})$ of unital magmas in $\mathcal{C}$ is another symmetric monoidal category, together with a strict symmetric monoidal forgetful functor $U : \mathrm{UMag}(\mathcal{C}) \to \mathcal{C}$. It happens that a symmetric monoidal category $\mathcal{C}$ is a cocartesian monoidal category iff $U$ is a split epimorphism in the category of symmetric monoidal categories and strict symmetric monoidal functors, iff $U$ is an isomorphism in that category, iff $U$ is a split epimorphism in the category of monoidal categories and strict monoidal functors, iff $U$ is an isomorphism in that category. But a strict monoidal functor is an isomorphism in the category of monoidal categories and strict monoidal functors iff it is an isomorphism in $\mathbf{Cat}$. This is the same phenomenon as when you define an isomorphism of groups, or an isomorphism of monoids: the inverse function is automatically a homomorphism, but a section of a homomorphism is not necessarily a homomorphism.
There is also a characterization of cartesian monoidal categories without the word "symmetric" which is similar to (2), but where the compatibility between the multiplication and the tensor product is different, since the previous one can't be stated without the symmetry.
I don't know about a characterization of cartesian monoidal categories similar to (1), i.e. about what you said. It might well be the case that this is true (even if I have some doubts about whether the opposite of a Markov category could be a counterexample). But this is a bit more difficult to think about, since in the case of a mere monoidal category we lose the tensor product of unital magmas: we can't equip the tensor product of two unital magmas with a magma structure in an obvious way without the symmetry. Thus we can no longer give, in an obvious way, a tensor product to the category of unital magmas.
Hopefully, before even thinking about whether this is all correct, at least I've written down what I wanted to say.
Finally, the uniqueness story boils down to this: if every object is a unital magma in a unique way, then the necessary equations are satisfied automatically, because you can build a unital magma structure on the tensor product of any two unital magmas in a symmetric monoidal category, but you know it must be unique. Therefore it is the same as the one you have on the tensor product by hypothesis. It thus gives you these two equations for free.
If you don't have the symmetry, you can't build the tensor product of the two unital magmas, so I don't know what happens in this case.
Wow, there's a lot of interesting discussion here! I'd like to at least try to understand some of what was said in rough outline.
Jumping back up to the start of this thread, we have this interesting comment:
Todd Trimble said:
However, things can get better depending on the nature of the monoidal product. If for example the monoidal product is separately cocontinuous and the underlying category is cocomplete, then there is a nice "geometric series" construction for free monoids.
I don't know what it means for a monoidal product to be "separately cocontinuous". If we have a monoidal category $(\mathcal{C}, \otimes, I)$, then we have a functor $\otimes : \mathcal{C} \times \mathcal{C} \to \mathcal{C}$. A functor is called "cocontinuous" if it preserves all small colimits in its domain.
I'm not sure what it means for our monoidal product to be "separately" cocontinuous, though.
Multiplication of real numbers
$$\cdot \;:\; \mathbb{R} \times \mathbb{R} \to \mathbb{R}$$
isn't linear, but it's separately linear in each argument:
$$x \cdot (y + y') = x \cdot y + x \cdot y', \qquad x \cdot (\lambda y) = \lambda\,(x \cdot y),$$
and similarly for the first argument.
Similarly, multiplication of sets
$$\times \;:\; \mathbf{Set} \times \mathbf{Set} \to \mathbf{Set}$$
isn't cocontinuous, but it's separately cocontinuous in each argument.
We expect this when we have a monoidal category that's more like a ring than a mere monoid: we can not only multiply (take tensor products of objects), but also add (take colimits). Then we expect - or at least hope! - that the tensor product is "separately cocontinuous". Perhaps a more common expression is that the tensor product "distributes over colimits". This means the same thing.
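Spelled out, "distributes over colimits" says that for each object $x$ the functors $x \otimes -$ and $- \otimes x$ preserve colimits, i.e. the canonical comparison maps
$$\operatorname*{colim}_i\,(x \otimes d_i) \;\to\; x \otimes \Big(\operatorname*{colim}_i\, d_i\Big), \qquad \operatorname*{colim}_i\,(d_i \otimes x) \;\to\; \Big(\operatorname*{colim}_i\, d_i\Big) \otimes x$$
are isomorphisms.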
I see! That makes intuitive sense! Let me see if I can write out the details of what it would mean to be "cocontinuous in each argument".
Let us imagine we have a functor $\otimes : \mathcal{C} \times \mathcal{C} \to \mathcal{C}$ that is cocontinuous in each argument. Writing "$+$" to refer to taking coproducts, I expect we want (at least) this to happen:
$$a \otimes (b + c) \;\cong\; (a \otimes b) + (a \otimes c)$$
for all objects $a$ and all objects $b$ and $c$.
Let me see if this works for the functor $\times : \mathbf{Set} \times \mathbf{Set} \to \mathbf{Set}$, at least when working with finite sets.
Let $A$ denote a set with $a$ elements ($a$ is some natural number), and similarly for $B$ and $C$. Then we would like to show that:
$$A \times (B + C) \;\cong\; (A \times B) + (A \times C).$$
In other notation, we want to show that $A \times (B \sqcup C) \cong (A \times B) \sqcup (A \times C)$. We would also like to show that this isomorphism is "standard and obvious" in some sense.
But at the least, these sets are isomorphic because they have the same number of elements, namely $a(b + c) = ab + ac$ elements (where now I am treating $a$, $b$ and $c$ as natural numbers, the sizes of the sets $A$, $B$ and $C$).
Similarly, we'll have $(B + C) \times A \cong (B \times A) + (C \times A)$.
It's interesting to note that this behaviour is quite reminiscent of "the distributive property" of the natural numbers.
By the way, we don't just want there to exist an isomorphism: we want the 'standard, obvious morphism' to be an isomorphism.
E.g., I claim there's a 'standard, obvious morphism' going between $a \otimes (b + c)$ and $(a \otimes b) + (a \otimes c)$, one way or the other... without any extra assumptions, this exists! I'll let you figure out which way it goes and what it is.
So, when we assert that this morphism is an isomorphism, we are merely asserting a property of the functor $\otimes$, instead of equipping it with extra structure. So being 'separately cocontinuous' is merely a property. And that's a very good thing.
As a next goal, I'd like to figure out the "standard, obvious morphism" between $a \otimes (b + c)$ and $(a \otimes b) + (a \otimes c)$.
I notice that $(a \otimes b) + (a \otimes c)$ is a coproduct. We can induce a morphism from a coproduct by using morphisms from each of the objects we are taking the coproduct of. So, I'd like to find a morphism from $a \otimes b$ to $a \otimes (b + c)$ and a morphism from $a \otimes c$ to $a \otimes (b + c)$.
Now, if I had a morphism from $b$ to $b + c$, applying $a \otimes -$ would give me a morphism from $a \otimes b$ to $a \otimes (b + c)$.
And a morphism from $(a \otimes b) + (a \otimes c)$ to $a \otimes (b + c)$ amounts to a morphism from $a \otimes b$ to $a \otimes (b + c)$ and a morphism from $a \otimes c$ to $a \otimes (b + c)$.
Ah! And $b + c$ comes equipped with a coprojection morphism $b \to b + c$. I think I have the pieces I need to start putting this together now.
Let $\iota_b : b \to b + c$ denote the coprojection morphism. Then we have a morphism $1_a \otimes \iota_b$. And we have $1_a \otimes \iota_b : a \otimes b \to a \otimes (b + c)$.
Similarly, let $\iota_c : c \to b + c$ denote the other coprojection morphism associated to $b + c$. Then we have a morphism $1_a \otimes \iota_c$. And we have $1_a \otimes \iota_c : a \otimes c \to a \otimes (b + c)$.
Finally, we get $[1_a \otimes \iota_b, 1_a \otimes \iota_c] : (a \otimes b) + (a \otimes c) \to a \otimes (b + c)$ using the universal property of coproducts.
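In $\mathbf{Set}$ (or, say, among Haskell types, with pairs as the tensor and `Either` as the coproduct) the morphism just built can be written down very concretely. Here is a small sketch of that same construction; the names (`tensorId`, `distr`) are just mine for illustration:
```haskell
-- The two coprojections b -> b + c and c -> b + c.
inl :: b -> Either b c
inl = Left

inr :: c -> Either b c
inr = Right

-- Tensoring a morphism with the identity on a, i.e. 1_a ⊗ f.
tensorId :: (b -> d) -> (a, b) -> (a, d)
tensorId f (x, y) = (x, f y)

-- The canonical map (a ⊗ b) + (a ⊗ c) -> a ⊗ (b + c),
-- obtained by copairing (1_a ⊗ inl) and (1_a ⊗ inr).
distr :: Either (a, b) (a, c) -> (a, Either b c)
distr = either (tensorId inl) (tensorId inr)
```
In $\mathbf{Set}$ this map is a bijection, which is exactly the statement that $\times$ distributes over $+$.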
Yes, you did it! This is one of my favorite examples of the power of universal properties. You start with a question:
Which way can I find a morphism between $a \otimes (b + c)$ and $(a \otimes b) + (a \otimes c)$?
And the universal property of coproducts tells you:
You could get a morphism
$$(a \otimes b) + (a \otimes c) \;\to\; a \otimes (b + c)$$
if you had morphisms $a \otimes b \to a \otimes (b + c)$ and $a \otimes c \to a \otimes (b + c)$.
And then you ask
But how could I get a morphism $a \otimes b \to a \otimes (b + c)$?
and the definition of functor whispers in your ear:
You could get one if you had a morphism $b \to b + c$.
And so on: at every stage, the concepts tell you what to do next.
Actually, this whole story shows up in a simpler context when we try to define what it means for a functor to preserve binary coproducts (or products) - that's my usual go-to example for how the definitions sometimes write themselves in category theory.
Now that I have some idea of what "separately cocontinuous" means, I would like to return to this comment by Todd Trimble:
Todd Trimble said:
If for example the monoidal product is separately cocontinuous and the underlying category is cocomplete, then there is a nice "geometric series" construction for free monoids.
I don't yet know what "free monoid" means in this more general context, but maybe I can figure it out.
When I see "free monoid" I think of the free-forgetful adjunction between and . We have , where creates a monoid in a free way from a set, and sends each monoid to underlying set. In that context, we have: , for each set and each monoid .
The nLab notes that the free monoid on a set $X$ has as its elements all the finite sequences of elements of $X$. We compose two such monoid elements via concatenation of sequences.
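In Haskell-ish terms this is just the list monoid; here is a minimal sketch of the same idea (the names `FreeMonoid`, `unit`, `mult`, `extend` are mine, for illustration only):
```haskell
-- The free monoid on a type x: finite sequences (lists) of elements of x.
type FreeMonoid x = [x]

-- The unit: the empty sequence.
unit :: FreeMonoid x
unit = []

-- The multiplication: concatenation of sequences.
mult :: FreeMonoid x -> FreeMonoid x -> FreeMonoid x
mult = (++)

-- The universal property: a function f : x -> m into a monoid (m, e, op)
-- extends to a monoid homomorphism FreeMonoid x -> m.
extend :: m -> (m -> m -> m) -> (x -> m) -> FreeMonoid x -> m
extend e op f = foldr (op . f) e
```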
We presumably want to do this sort of thing more generally, working with monoids in any nice enough monoidal category, instead of just those in .
I notice this theorem on the nLab:
Suppose $\mathcal{C}$ is a monoidal category with countable coproducts for which the tensor product distributes over countable coproducts (for example, a cocomplete monoidal biclosed category). Then a left adjoint to the forgetful functor $\mathrm{Mon}(\mathcal{C}) \to \mathcal{C}$ exists, taking an object $X$ to $\coprod_{n \geq 0} X^{\otimes n}$, which thereby becomes the free monoid on $X$.
That's the theorem Todd was talking about. It's good to understand it for $\mathbf{Set}$ before thinking about it more generally!
Then this geometric series gives the free monoid on a set, in the traditional sense of the term "free monoid" (not internalized to an arbitrary monoidal category yet).
That sounds like a good idea! Let $X$ be a set, and work in $(\mathbf{Set}, \times)$. Then, what is $\coprod_{n \geq 0} X^{\times n}$?
Since our "sum" starts at , I need to figure out what the 0-fold product of a set with itself is. I suspect that in general the zero-fold monoidal product should be an object that is a monoidal unit. In this case, I'll choose a monoidal unit that is a singleton set I'll call .
This seems like a reasonable guess in $\mathbf{Set}$, for the following reason. The set $X^{\times n}$ for $n \geq 1$ can be thought of as the set of functions from an $n$-element set to $X$. Extending this to $n = 0$, we expect $X^{\times 0}$ to be a singleton set, as there is only one function from the empty set to another set.
We can now expand $\coprod_{n \geq 0} X^{\times n}$. We get $1 + X + (X \times X) + (X \times X \times X) + \cdots$. What is an element of this set? This set will have one element for each of the elements of the sets that we are taking the disjoint union of.
So, here are some elements of $1 + X + (X \times X) + \cdots$: the single element of $1$, any element $x$ of $X$, any pair $(x_1, x_2)$ of elements of $X$, and so on.
An element of $X^{\times n}$ consists of a sequence of $n$ elements from $X$. When $n = 0$, we can call the only element of $X^{\times 0}$ the "empty sequence".
So, we conclude that for a set $X$ in $\mathbf{Set}$, the "geometric series" $\coprod_{n \geq 0} X^{\times n}$ is a set having as elements the finite sequences of elements of $X$.
The nLab says that the monoid structure in this case is obtained by concatenation of sequences. This is some function $X^{\times m} \times X^{\times n} \to X^{\times (m+n)}$ for any $m$ and $n$.
I think a good next step could be to work out the corresponding "concatenation" morphism in a sufficiently nice monoidal category. But I'll stop here for now.
Good work! In $\mathbf{Set}$ we often call $\coprod_{n \geq 0} X^{\times n}$ the set of words in the alphabet $X$.
By the way, there's a lot of interesting mystical mathematics connected to this 'geometric series' formula for the free monoid in a category where the tensor product distributes over coproducts:
$$1 + x + x^{\otimes 2} + x^{\otimes 3} + \cdots \;=\; \frac{1}{1 - x}$$
The right-hand side doesn't make sense - does it? - but we can often get interesting results by pretending it does, doing calculations, and then expressing the answers in terms of things that obviously do make sense.
By the way, another fun example is the monoidal category $(\mathbf{Vect}_k, \otimes)$, where $\mathbf{Vect}_k$ is the category of vector spaces over your favorite field $k$ and $\otimes$ is the usual tensor product of vector spaces. This tensor product again distributes over coproducts.
A monoid in $(\mathbf{Vect}_k, \otimes)$ is called an associative algebra, so we get a formula for the free associative algebra on a vector space. The free associative algebra on a vector space is usually called the tensor algebra.
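Written out, that formula is:
$$T(V) \;=\; \bigoplus_{n \geq 0} V^{\otimes n} \;=\; k \,\oplus\, V \,\oplus\, (V \otimes V) \,\oplus\, (V \otimes V \otimes V) \,\oplus\, \cdots$$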
I guess this example is only fun if you're reasonably comfortable with tensor products of vector spaces, or associative algebras. Believe it or not, most algebraists would find this example more familiar than the case of $\mathbf{Set}$!
I'm endeavouring to become more comfortable with tensor products, as I've been trying to learn some representation theory (from the book "Quantum Theory, Groups and Representations").
To spell out this example, let $\mathcal{C} = \mathbf{Vect}_k$, where $k$ is some field. Then, we claim that $\otimes$ distributes over coproducts.
That would mean that $U \otimes (V + W) \cong (U \otimes V) + (U \otimes W)$, where the isomorphism is the "canonical" one we constructed earlier. In different notation, we have $U \otimes (V \oplus W) \cong (U \otimes V) \oplus (U \otimes W)$. Similarly, we expect to have $(V \oplus W) \otimes U \cong (V \otimes U) \oplus (W \otimes U)$ for all $k$-vector spaces $U$, $V$ and $W$.
What does our 'geometric series' free monoid look like in this setting? Let $V$ be some $k$-vector space. Then we wish to consider $\coprod_{n \geq 0} V^{\otimes n}$.
By analogy with the previous example, I expect $V^{\otimes 0}$ to be our monoidal unit, which is the $k$-vector space $k$. So, this 'sum' expands as: $k \oplus V \oplus (V \otimes V) \oplus (V \otimes V \otimes V) \oplus \cdots$.
Now, an element of the direct sum of some vector spaces is a tuple containing one element from each vector space. I seem to recall that when we are taking the direct sum of an infinite number of vector spaces, we need to restrict to tuples that only have finitely many nonzero tuple entries.
What is an element of $V^{\otimes n}$ like? I am guessing it is a $k$-linear combination of "elementary tensors". Here, I am guessing an "elementary tensor" in $V^{\otimes n}$ involves an $n$-fold tensoring of vectors in $V$.
To get some more intuition for $k \oplus V \oplus (V \otimes V) \oplus \cdots$, we might ask what a morphism from this vector space amounts to. Since this is a coproduct, such a morphism amounts to a morphism from $k$, and a morphism from $V$, and a morphism from $V \otimes V$, and a morphism from $V \otimes V \otimes V$, and so on.
So, a $k$-linear map out of $k \oplus V \oplus (V \otimes V) \oplus \cdots$ amounts to:
- a linear map from $k$ (equivalently, a choice of a single vector in the target),
- a linear map from $V$,
- a linear map from $V \otimes V$,
- a linear map from $V \otimes V \otimes V$,
- and so on.
(I'm not confident on various things when it comes to tensor products, so I may have made some mistakes in the above!)
You can summarize a map from the tensor algebra of $V$ into a vector space $W$ as: a choice of an element of $W$, together with a linear map $V^{\otimes n} \to W$ for each $n \geq 1$.
You can apply this data as a single mapping by: applying the $n$-th piece of data to the component of a general element lying in $V^{\otimes n}$, and summing the (finitely many nonzero) results.
you could try to figure this out yourself too
That makes sense! Thanks for spelling that out!
David Egolf said:
Now, an element of the direct sum of some vector spaces is a tuple containing one element from each vector space. I seem to recall that when we are taking the direct sum of an infinite number of vector spaces, we need to restrict to tuples that only have finitely many nonzero tuple entries.
Right! When we take the product of any number of vector spaces we get the vector space of tuples containing one element from each vector space. When we take their coproduct we also get a vector space of tuples, but tuples where only finitely many entries are nonzero.
Thus finite products and coproducts are "the same" in $\mathbf{Vect}_k$: to make this precise we introduce the concept of [[biproduct]]. But infinite products are bigger than the corresponding infinite coproducts.
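Concretely, for a family $(V_i)_{i \in I}$ of vector spaces:
$$\prod_{i \in I} V_i = \{(v_i)_{i \in I} : v_i \in V_i\}, \qquad \bigoplus_{i \in I} V_i = \{(v_i)_{i \in I} : v_i \in V_i \text{ and } v_i = 0 \text{ for all but finitely many } i\}.$$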
Is this essentially because vector spaces are typically defined using binary addition, which can only generate finitary sums? If a vector space were defined with an unbiased “infinite linear combination” operator instead of addition and scaling, would this difference go away? (Just idle wondering…)
@davidad (David Dalrymple) - You're sort of right, but not all infinite linear combinations are well-defined, e.g. you might have a nonzero vector $v$ in a vector space and be trying to form $v + v + v + \cdots$.
Infinite linear combinations work a lot better over a rig where addition is idempotent - e.g. [[complete semilattices]], which can be seen as modules over the rig of truth values that allow infinite linear combinations.
@David Egolf - Your description of the tensor algebra of a vector space (= free monoid on that vector space) looks great.
If you ever meet a crowd of algebraists and want to fit in, don't write $\coprod$ for coproducts of vector spaces, write $\oplus$ - and write the tensor algebra of a vector space $V$ as
$$TV \;=\; k \,\oplus\, V \,\oplus\, V^{\otimes 2} \,\oplus\, V^{\otimes 3} \,\oplus\, \cdots$$
I would next like to understand how we can "multiply" elements in our free monoid $\coprod_{n \geq 0} X^{\times n}$.
The nLab notes that if we are working in $\mathbf{Set}$, then we can multiply two finite sequences (words) in the set $X$ by concatenation. Using this as a hint, I'd like to figure out how we can multiply elements in our free monoid in general.
Concatenation of a length $m$ word followed by a length $n$ word can be described as a function $X^{\times m} \times X^{\times n} \to X^{\times (m+n)}$.
This function acts by $((x_1, \ldots, x_m), (y_1, \ldots, y_n)) \mapsto (x_1, \ldots, x_m, y_1, \ldots, y_n)$.
I will now use $m$ to refer to either a natural number OR to a set containing $m$ elements. I'll do a similar thing for $n$ as well. Hopefully context will make it clear which meaning is intended.
A word of length $m$ in $X$ can be thought of as a function $m \to X$. And similarly a word of length $n$ in $X$ can be thought of as a function $n \to X$. We wish to take two functions $w_1 : m \to X$ and $w_2 : n \to X$ and concatenate them to a function $m + n \to X$.
Now, $m + n$ is a coproduct of the sets $m$ and $n$. Hence we have a bijection between morphisms from $m + n$ and pairs of morphisms having one morphism from $m$ and one from $n$. I am guessing that concatenation is basically this bijection.
I might be missing some nuance here, as a sequence involves a function from a domain with a total ordering.
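Here is a small Haskell sketch of that picture, representing a word of length $n$ as a function out of $\{0, \ldots, n-1\}$; the names (`Word'`, `concatW`) are mine, just for illustration:
```haskell
-- A word of length n over alphabet x, viewed as a function {0,…,n-1} -> x,
-- stored together with its length n.
type Word' x = (Int, Int -> x)

-- Concatenation, built from the copairing coming from the coproduct
-- {0,…,m-1} + {0,…,n-1} ≅ {0,…,m+n-1}: indices below m look up the first
-- word, and the remaining indices (shifted down by m) look up the second.
concatW :: Word' x -> Word' x -> Word' x
concatW (m, f) (n, g) = (m + n, \i -> if i < m then f i else g (i - m))
```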
In the general case, working in some monoidal category $(\mathcal{C}, \otimes, I)$, we are looking for a "multiplication" morphism for the object $\coprod_{n \geq 0} X^{\otimes n}$.
So, we are looking for some
$$\mu : \Big(\coprod_{m \geq 0} X^{\otimes m}\Big) \otimes \Big(\coprod_{n \geq 0} X^{\otimes n}\Big) \;\to\; \coprod_{k \geq 0} X^{\otimes k}$$
in $\mathcal{C}$.
As a next step, I may try to find a $\mu$ of this form. I am hopeful that we can use the fact that $\otimes$ is separately cocontinuous to rewrite the left-hand side of this equation... But I'll stop here for now.
Yes: after rewriting the left side by distributing the $\otimes$ over the two coproducts, the multiplication map is built from a bunch of "summands", each of which is just...
:shush:
Let's see what happens! Rewriting the left side using the fact that $\otimes$ distributes over coproducts in each argument, we get:
$$\Big(\coprod_{m \geq 0} X^{\otimes m}\Big) \otimes \Big(\coprod_{n \geq 0} X^{\otimes n}\Big) \;\cong\; \coprod_{m \geq 0} \Big( X^{\otimes m} \otimes \coprod_{n \geq 0} X^{\otimes n} \Big).$$
Inside the parentheses we have: $X^{\otimes m} \otimes \coprod_{n \geq 0} X^{\otimes n} \;\cong\; \coprod_{n \geq 0} \big( X^{\otimes m} \otimes X^{\otimes n} \big).$
So, our left side is isomorphic to: $\coprod_{m \geq 0} \coprod_{n \geq 0} \big( X^{\otimes m} \otimes X^{\otimes n} \big).$
Assuming this is correct, we could induce our multiplication morphism if we had a morphism $X^{\otimes m} \otimes X^{\otimes n} \to \coprod_{k \geq 0} X^{\otimes k}$ for each pair $(m, n)$ with $m, n \geq 0$.
To do this, the first idea that comes to mind is to make use of the fact that $\coprod_{k \geq 0} X^{\otimes k}$ is a coproduct, which means it comes equipped with a coprojection $\kappa_k : X^{\otimes k} \to \coprod_{k \geq 0} X^{\otimes k}$ for each $k$.
In this way, we get a morphism $X^{\otimes m} \otimes X^{\otimes n} \cong X^{\otimes (m+n)} \xrightarrow{\kappa_{m+n}} \coprod_{k \geq 0} X^{\otimes k}$ for each pair $(m, n)$ with $m, n \geq 0$.
So, we'll get an induced map, using the universal property of coproducts:
$$c : \coprod_{m \geq 0} \coprod_{n \geq 0} \big( X^{\otimes m} \otimes X^{\otimes n} \big) \;\to\; \coprod_{k \geq 0} X^{\otimes k}.$$
I am guessing that this should be our multiplication map $\mu$, up to some canonical isomorphism.
If $\alpha$ is the appropriate canonical isomorphism from $\big(\coprod_{m \geq 0} X^{\otimes m}\big) \otimes \big(\coprod_{n \geq 0} X^{\otimes n}\big)$ to $\coprod_{m \geq 0} \coprod_{n \geq 0} \big( X^{\otimes m} \otimes X^{\otimes n} \big)$, then I think we can try setting
$$\mu \;=\; c \circ \alpha.$$
It is somewhat intimidating to contemplate showing that this proposed multiplication map is appropriately associative and unital :sweat_smile:.
At any rate, I think I more clearly understand now why the separate cocontinuity of $\otimes$ is important for defining our free monoid. Namely, when $\otimes$ distributes over countable coproducts, we can induce the multiplication map using some properties of coproducts, as described above.
Morgan Rogers (he/him) said:
David Egolf said:
I'm hoping to find a book that works out the basic general theory of monoids (and their modules) in an arbitrary monoidal category.
If you're willing to wait a few years, I might have something for you :sweat_smile:
That's exciting! I look forward to it!
Morgan Rogers (he/him) said:
I have recently been working on the question of when the construction of the category of actions of a monoid in as general a context as possible acquires some of the standard properties...
Thanks for linking to those slides! I've only looked at the start of them so far, but it was already very cool to learn that one can define a monoid in a multicategory! And, if I understand the slides correctly, this means that one can define a monoid in a multicategory induced by a single endomorphism in a virtual bicategory.
I'm already amazed by how many structures can be thought of as monoids in disguise. I assume that this more general context provides even more examples of things that are secretly monoids!
Also according to those notes, apparently one can define a monoid in a "skew" monoidal category. I had not heard of a skew monoidal category before!
And there are even things called "skew multicategories", in which we can also define monoids! That's a bit more abstraction than what I feel up to at the moment, though.
Morgan Rogers (he/him) said:
I have recently been working on the question of when the construction of the category of actions of a monoid in as general a context as possible acquires some of the standard properties one would associate with those, most notably the existence of adjoint functors with the base category.
I would like to better understand what is meant here by "the existence of adjoint functors with the base category".
Morgan's probably aiming at, especially, the existence of a "free monoid" functor.
You've been working on the details of one such case above, I see! The question of when such functors exist in general already has a grand and glorious history, with one culmination in Kelly's unified treatment of transfinite constructions (https://www.cambridge.org/core/journals/bulletin-of-the-australian-mathematical-society/article/unified-treatment-of-transfinite-constructions-for-free-algebras-free-monoids-colimits-associated-sheaves-and-so-on/FE2E25E4959E4D8B4DE721718E7F55EE). The general question, when the monoidal product doesn't distribute over coproducts (or when those don't even exist), is more difficult than the case answered by the power series construction.
David Egolf said:
I'm already amazed by how many structures can be thought of as monoids in disguise. I assume that this more general context provides even more examples of things that are secretly monoids!
Indeed! I love monoids - I've even been called a monomaniac.
John Baez said:
David Egolf said:
I'm already amazed by how many structures can be thought of as monoids in disguise. I assume that this more general context provides even more examples of things that are secretly monoids!
Indeed! I love monoids - I've even been called a monomaniac.
Good joke but oof I don't enjoy the ableist vibes of that term :cold_face:
I find I have to do a fair amount of work to guess at what the concrete meaning of a sentence like that is; I think something like "I'd like to put in a bid to avoid making puns about words that are or have been mental health diagnoses" would communicate a lot more, if indeed that's close to what you meant.
Very well. I'd like to put in a bid to avoid employing words that carry an outdated or disparaging understanding of mental health issues.