You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I would like to understand, in the book by Adámek, Rosický, and Vitale, Algebraic Theories: A Categorical Introduction to General Algebra (2009), on page 7: what do the morphisms in an algebraic theory $\mathcal{T}$ correspond to, in the sense of universal algebra in the sense of Birkhoff? I understand that models of the theory of $S$-sorted sets are $S$-sorted sets, but I do not understand it for a more general $\mathcal{T}$. What are the objects and morphisms in $\mathcal{T}$ sent to (and what do they correspond to) by finite-product-preserving functors to $Set$?
The name they give to $\mathcal{T}$ is algebraic theory. That name is used in universal algebra to refer to a signature $\Sigma$ coupled with a set of equations between $\Sigma$-terms. So you should think of $\mathcal{T}$ as describing the operations allowed in your algebras as well as their constraints. It can be difficult to see how exactly these are represented in the objects and morphisms of $\mathcal{T}$.
Sometimes, people restrict their attention to theories which are generated by a single object, meaning the objects of $\mathcal{T}$ are natural numbers and the product is addition. This makes it easier to see because a finite-product-preserving functor $F$ will send $1$ to some set $A$, and the rest will be completely determined: $n = 1 \times \cdots \times 1$ means $n$ is sent to $A^n$.
Now, a morphism $f : n \to m$ in $\mathcal{T}$ will be interpreted as a function $Ff : A^n \to A^m$, and you should understand this as an $m$-tuple of terms with $n$ variables. Indeed, $Ff$ takes a choice of $n$ inputs in $A$ and computes $m$ outputs in $A$, and by some properties of the product, you can check this is done by $m$ unrelated computations (this is not necessarily the case in monoidal theories).
When $\mathcal{T}$ is not generated by a single object, it can be more complicated to see. I find it easier to think of theories as signatures with equations, but I am sure this depends on what kind of algebras you are trying to present. There are "classical" theories that are disgusting and yet present some simple things (I remember an example that I cannot find right now).
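To make the single-sorted picture concrete, here is a minimal Python sketch (my own illustration, not from the thread): the morphism $2 \to 2$ in the theory of monoids given by the pair of terms $(x \cdot y,\ y \cdot x)$, interpreted in the monoid of strings under concatenation. All function names here are mine.

```python
# Model: the free monoid of strings under concatenation (non-commutative,
# so the two coordinates of the interpreted morphism genuinely differ).

# Each coordinate of a morphism n -> m is a term in n variables; here the
# morphism 2 -> 2 is the pair of terms (x*y, y*x).
def coord1(x, y):
    return x + y  # interprets the term x*y (monoid multiplication)

def coord2(x, y):
    return y + x  # interprets the term y*x

def interp(x, y):
    # The interpretation A^2 -> A^2 is the pairing of the two coordinate
    # maps: they share the inputs but do not interact with each other.
    return (coord1(x, y), coord2(x, y))

print(interp("ab", "c"))  # -> ('abc', 'cab')
```

The point of the pairing is exactly the "unrelated computations" remark above: each output coordinate is computed by its own term.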
These notes by Awodey and Bauer are way more gentle than Algebraic Theories.
@Ralph Sarkis Yes, I'm aware of what you say, but my question was exactly about what you claim can "be more complicated to see". Could you come up with an example of a $\mathcal{T}$ that would give me a feeling for what some strange morphism in $\mathcal{T}$ could mean in an interpretation of $\mathcal{T}$, i.e. a finite-product-preserving functor to $Set$? In particular, $\mathcal{T}$ should not be generated by a single object, so try one generated by two objects, where the interpretation of the morphism in question is somehow self-explanatory for my question.
For an example of a simple theory, yet not generated by a single sort, what about the theory of a ring together with a module?
It's a theory $\mathcal{T}$ whose models are pairs of a ring $R$ and an $R$-module $M$. In terms of signature, it is generated by two sorts $r$ and $m$, operations $+, \cdot : r \times r \to r$, $+ : m \times m \to m$, $\cdot : r \times m \to m$, together with the ring and module equations. I guess an explicit presentation of $\mathcal{T}$ can be worked out using freely generated rings and modules.
The simplest typical example of a $\mathcal{T}$ that isn't generated by a single object is the theory of "a monoid and a set it acts on." This is generated under finite products by two objects, call them $M$ and $S$. As in any finite product category with object generators, the only interesting morphisms are those with codomain $M$ or $S$. There are morphisms $M^n \to M$ comprising a full subcategory of $\mathcal{T}$ isomorphic to the usual theory of monoids, while there are no morphisms $S^n \to M$, and no morphisms $S^n \to S^m$ other than those built from product projections and diagonals. The most interesting part of the theory, then, is the class of morphisms $M^n \times S \to S$ which, in a model, correspond to the action on an element of $S$ by elements of the monoid $M$.
Anyway, that's my suggested example. If you feel comfortable with the categorical interpretation of single-sorted algebraic theories, I hope this gets across how the multi-sorted variant is not really so different. Now, if $\mathcal{T}$'s objects weren't freely generated under finite products by any class of objects at all, then you'd have something quite unusual-feeling, but that's not a case you're particularly likely to meet in practice.
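A model of this two-sorted theory can be sketched in a few lines of Python. This is my own toy choice of model, not one from the thread: the additive monoid of integers acting on integers by translation.

```python
# A model of the two-sorted theory "a monoid and a set it acts on":
# the additive monoid of integers acting on the set of integers.
unit = 0                  # interprets the unit constant e : 1 -> M

def mult(m1, m2):         # interprets the multiplication  M x M -> M
    return m1 + m2

def act(m, s):            # interprets the action morphism M x S -> S
    return s + m

# The model must satisfy the action equations of the theory:
m1, m2, s = 3, 5, 10
assert act(mult(m1, m2), s) == act(m1, act(m2, s))  # (m1 m2)·s = m1·(m2·s)
assert act(unit, s) == s                            # e·s = s
```

The two sorts $M$ and $S$ both happen to be interpreted by the set of integers here, but the theory keeps them separate: `mult` only ever lands in the monoid sort, `act` in the acted-on sort.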
Ah, @Kenji Maillard, I see we've given basically the same answer across the abelian/non-abelian divide. Just a note that Zulip annoyingly insists on double dollar signs for LaTeX.
Letting $\mathcal{T}$ be the category of non-empty finite sets, you find that the category of models of $\mathcal{T}$ is the category of Boolean algebras. Todd nicely explained this in his answer on MO.
I have 2 more problems with the notion of algebraic theory. Why is the requirement of being finite product complete exactly the right one for this notion? The second question is about the snippet below, Example 1.10: why do the morphisms in the theory of $S$-sorted sets go contravariantly, a morphism between words of sorts $w \to v$ being given by a sort-respecting function $v \to w$ the other way?
Snímek-obrazovky-2023-11-06-195900.png
I find it hard to answer your first question intelligently (I am not even convinced of what I am saying).
Functorial semantics (defining algebras as functors) is a very cool idea (thanks Lawvere) and the list of examples in this chapter shows you it encapsulates what lots of people consider to be algebraic. It led to many advances in categorical algebra and categorical logic, so I don't think anyone would say it is not the right notion.
Now, for why it is exactly right, one way I like to think of it is via the [[Fox theorem]]. Informally, this theorem says that categories with finite products are precisely the places where you can copy and delete variables without problems. If you think about algebra as manipulating operation symbols and variables on paper (in a 1-dimensional syntax), in particular copying and deleting variables as you please, then you would expect the categories that embody the syntax of algebra to be categories with finite products.
For the second question, there are many ways to see this; I think unrolling what a product-preserving functor $F : \mathcal{T} \to Set$ is and seeing that it corresponds to an $S$-sorted set would be the most enlightening. In particular, you need to decompose any morphism in $\mathcal{T}$ as a composition of projections and pairings (what are they in $\mathcal{T}$?) because you know these are preserved by the functor.
If you only look at the case where $S$ has a single sort, hence the theory of plain sets, you see that a morphism $f : n \to m$ is sent to a function $A^n \to A^m$ that takes $(a_1, \dots, a_n)$ to $(b_1, \dots, b_m)$ where each $b_i$ is equal to some $a_j$. You can see this morphism as $m$ terms in the empty signature over $n$ variables, so each of the terms is a single variable, and $b_i = a_j$ if the $i$th term is the $j$th variable.
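This single-sorted case can be sketched executably (my own sketch; the representation of a morphism as a tuple of variable indices is an assumption of the sketch, not notation from the book). A morphism $n \to m$ of the theory is the same data as a reindexing function $\{1,\dots,m\} \to \{1,\dots,n\}$, which is exactly the contravariance asked about.

```python
from typing import Tuple

def interp(g: Tuple[int, ...]):
    """A morphism n -> m in the theory of sets is (dually) a reindexing
    function g : {0..m-1} -> {0..n-1}; g[i] = j means the i-th output
    is the j-th input variable.  Its semantics is a map A^n -> A^m."""
    def f(a):
        return tuple(a[j] for j in g)
    return f

# The morphism 2 -> 3 given by the variable tuple (x2, x1, x1),
# i.e. g = (1, 0, 0) with 0-based indices:
f = interp((1, 0, 0))
assert f(("a", "b")) == ("b", "a", "a")

# Composing morphisms 2 -> 3 -> 2 corresponds to composing the
# reindexings the other way around (the contravariance):
g1, g2 = (1, 0, 0), (2, 2)
composite = tuple(g1[j] for j in g2)
a = ("a", "b")
assert interp(g2)(interp(g1)(a)) == interp(composite)(a)
```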
Jan Pax said:
I have 2 more problems with the notion of algebraic theory. Why is the requirement of being finite product complete exactly the right one for this notion?
Because otherwise you get a different notion! :upside_down: If you want to use infinite products, that's fine, but you get a different concept, sometimes called an infinitary algebraic theory. It can be very useful to restrict to products of arity at most some cardinal $\kappa$. If you want to use finite limits, that's fine too, but you get yet another notion, sometimes called a finite limits theory or essentially algebraic theory. If you want to use arbitrary limits, or limits of diagrams whose cardinality is at most some cardinal, that's also good. All these different notions are important and well-studied.
Jan Pax said:
Why is the requirement of being finite product complete exactly the right one for this notion?
The operations we require in our categories tell us what the (co)domains of our function symbols are allowed to be. (For now, let's only allow equational axioms).
So for instance, finite product theories have some basic sorts, and then we allow function symbols from products of those sorts to each other! For instance, with groups, multiplication is defined on $G \times G$, and for the 2-sorted theory of (ring, module) pairs, the action is defined on $R \times M$. So the finite product structure is precisely what's used for the domains of our function symbols.
Contrast this with the finite limit theories (read: essentially algebraic theories), which allow partial operations, defined only on "nice" subsets of products of the basic sorts. For instance, in the definition of a category, composition is defined on $\{(f, g) \in A \times A \mid \mathrm{cod}(f) = \mathrm{dom}(g)\}$ (where $A$ is the set of arrows). Notice we need products and equalizers in order to build this domain, so we're using the full finite limit structure!
This story gets more complicated once we start adding relation symbols, but the basic premise, that the operations on our (base) category correspond to allowable operations on our basic sorts, remains useful intuition.
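The partiality of composition can be sketched in a few lines (my own sketch; the representation of arrows as (dom, cod) pairs in a dict is an assumption of the illustration):

```python
# Arrows of a small category, recorded as name -> (dom, cod).
arrows = {"f": ("X", "Y"), "g": ("Y", "Z")}

def compose(f, g):
    """Composition is a *partial* operation: it is defined only on the
    subset {(f, g) | cod(f) = dom(g)} of A x A, which is carved out of
    the product by an equalizer, hence needs finite limits, not just
    finite products."""
    if arrows[f][1] != arrows[g][0]:
        raise ValueError("not composable")
    return (arrows[f][0], arrows[g][1])  # dom and cod of the composite

assert compose("f", "g") == ("X", "Z")
try:
    compose("g", "f")
except ValueError:
    pass  # (g, f) lies outside the domain of the partial operation
```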
@Ralph Sarkis Is it correct to say that your $b_i$ are exactly the standard variables $a_j$ where each $a_j$ appears only once, so that $f$ is a permutation? Otherwise there might occur some ambiguity in defining the morphism $f$. If there were more than one term which is the $j$th variable, then would $f$ not be defined uniquely or correctly, or be undefined at all? What is mind-boggling is this sentence: "each $b_i$ is equal to some $a_j$"; might something go wrong there in defining $f$? I cannot resolve this.
These $b_i$ and $a_j$ are elements of $A$, so the function defined by $f$ is $A^n \to A^m$, i.e. the semantics of $f$ for some particular algebra $A$.
One way people represent $f$ is as a tuple $(t_1, \dots, t_m)$ where each $t_i$ is a term over $n$ variables (we can call them $x_1, \dots, x_n$). Then the semantics of $f$ takes an element $(a_1, \dots, a_n)$ of $A^n$ to the $m$-tuple whose $i$th coordinate is the value of $t_i$ when the variables $x_j$ are assigned value $a_j$.
Since our terms are over the empty signature, they will be really simple. For instance, $(x_2, x_1, x_1)$ will be interpreted (in some algebra $A$) as the function $A^2 \to A^3$ sending $(a_1, a_2)$ to $(a_2, a_1, a_1)$.
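The tuple-of-terms semantics also works over a non-empty signature. Here is a hedged sketch (my own; the term encoding and the single binary symbol "op" are assumptions of the illustration, not from the thread):

```python
# Terms over a signature with one binary operation symbol "op":
# a term is either a variable index, or a triple ("op", t1, t2).

def evaluate(term, a, op):
    """Value of `term` when variable x_j is assigned a[j] and the
    symbol "op" is interpreted by the binary function `op`."""
    if isinstance(term, int):
        return a[term]
    _, t1, t2 = term
    return op(evaluate(t1, a, op), evaluate(t2, a, op))

def semantics(terms, op):
    """An m-tuple of terms (t_1, ..., t_m) becomes a function
    A^n -> A^m, computing the i-th coordinate with t_i."""
    return lambda a: tuple(evaluate(t, a, op) for t in terms)

# The morphism 2 -> 2 given by (op(x0, x1), x0), interpreted in the
# monoid of integers under addition:
f = semantics([("op", 0, 1), 0], lambda u, v: u + v)
assert f((3, 4)) == (7, 3)
```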
Jan Pax said:
Why is the requirement of being finite product complete exactly the right one for this notion?
To complement John's answer, my naive intuition is that, compared to finite limit theories, finite product theories have the nice property that reflexive coequalisers of algebras are well-behaved. To take the monad perspective (and restricting to nice categories), you probably know that forgetful functors from $T$-algebras to the base category create limits, but not colimits in general.
The colimits that are created are in fact the ones that $T$ preserves (this is easy to prove).
In particular: