A functor $F$ preserves the operation of composition in the sense that $F(g \circ f) = F(g) \circ F(f)$. It is possible to define a new two-argument operation in terms of $\circ$, and I'm curious when such an operation will also be preserved by the functor.
For example, let $g \star f = f \circ g$. When $f$ and $g$ are such that $f \circ g$ is defined, then $F(g \star f) = F(f \circ g) = F(f) \circ F(g) = F(g) \star F(f)$. In this case, the operation $\star$ is preserved by $F$. As a second example, let $g \bullet f = g \circ f \circ g$. When $g \circ f \circ g$ is defined, then $F(g \bullet f) = F(g) \circ F(f) \circ F(g) = F(g) \bullet F(f)$. So the operation $\bullet$ is also preserved by $F$.
I'm curious if this always works. However, I'm not sure how to tackle this question. As a first step, I could try and precisely describe what I mean by a "two-argument operation defined in terms of $\circ$". Perhaps I could try and define this set of operations inductively, and then try to prove that $F$ preserves all these operations using (structural?) induction on the resulting set.
Once I understand the case above, I'm also interested in the case where $F$ is a map that preserves multiple operations. For example, if $F$ is a homomorphism between Boolean algebras then $F(a \wedge b) = F(a) \wedge F(b)$ and $F(a \vee b) = F(a) \vee F(b)$. Then will $F$ preserve new two-argument operations defined in terms of $\wedge$ and $\vee$?
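As a small worked illustration of the kind of computation being asked about here (the derived operation $\star$ below is just an example chosen for this note, not one fixed in the thread): if $F$ preserves $\wedge$ and $\vee$, then for the definable operation
$$ a \star b := (a \wedge b) \vee a $$
we get
$$ F(a \star b) = F\big((a \wedge b) \vee a\big) = \big(F(a) \wedge F(b)\big) \vee F(a) = F(a) \star F(b), $$
using only the two preservation equations. The same computation works for any operation written as a term in $\wedge$ and $\vee$.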
In general the answer is yes, and one way to make it precise is with universal-algebraic tools such as "operads" and "Lawvere theories". For instance, the Lawvere theory of Boolean algebras is a category $\mathcal{T}$ with finite products generated "freely" by one object $x$ that has the structure of an internal Boolean algebra. Then the morphisms $x^n \to x$ in this category are precisely the "$n$-ary operations that can be defined in terms of the defining Boolean algebra operations". Moreover, freeness of $\mathcal{T}$ means that to give an actual Boolean algebra is equivalent to giving a finite-product-preserving functor $\mathcal{T} \to \mathrm{Set}$, and similarly to give a Boolean algebra homomorphism is equivalent to giving a natural transformation between such functors. Naturality of such a transformation then tells you that any such homomorphism automatically commutes with any of the "definable operations". The case of categories is similar, except that you need something fancier instead of a finite-product category.
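To unpack the correspondence a little (a sketch, with the generating object written $x$ as above): a finite-product-preserving functor $A \colon \mathcal{T} \to \mathrm{Set}$ is determined up to isomorphism by the set $A(x)$, and it sends each theory morphism $\omega \colon x^n \to x$ to an honest $n$-ary operation on that set:
$$ A(x^n) \cong A(x)^n, \qquad A(\omega) \colon A(x)^n \to A(x). $$
So "model of the theory" really does mean "set equipped with all the definable operations".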
Thanks @Mike Shulman! I will plan to read your response more carefully and type something up in response when I have the energy.
David Egolf said:
A functor $F$ preserves the operation of composition in the sense that $F(g \circ f) = F(g) \circ F(f)$. It is possible to define a new two-argument operation in terms of $\circ$, and I'm curious when such an operation will also be preserved by the functor.
[...]
I'm curious if this always works.
Yes it does!
However, I'm not sure how to tackle this question. As a first step, I could try and precisely describe what I mean by a "two-argument operation defined in terms of $\circ$". Perhaps I could try and define this set of operations inductively, and then try to prove that $F$ preserves all these operations using (structural?) induction on the resulting set.
That basically works, but one thing @Mike Shulman was trying to tell you is that the induction will go a lot more easily if you use arbitrary finitary operations: $n$-argument operations for arbitrary $n$. Plus you'll get a stronger result: if $F$ preserves some binary operation, it'll preserve all finitary operations defined in terms of that binary operation.
And it's not really harder to prove an even stronger result: if $F$ preserves some collection of finitary operations, it'll preserve all the finitary operations defined in terms of those operations!
The formalisms Mike was alluding to - "operads" and "Lawvere theories" - give two distinct ways to make precise the concept of "defined in terms of".
But while they're incredibly fascinating formalisms which I (among many) have spent years studying, you don't strictly need them to carry out the inductive argument you're hinting at.
They become essential when you want to go further with the study of finitary operations.
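Here is a sketch of the inductive setup being suggested (the notation is chosen for this note, not taken from the thread). Define the derivable $n$-ary operations as the smallest collection containing the projections $\pi^n_i(a_1, \dots, a_n) = a_i$ and the given basic operations, and closed under substituting derivable operations into basic ones. The claim, proved by induction on this construction, is that
$$ F\big(t(a_1, \dots, a_n)\big) = t\big(F(a_1), \dots, F(a_n)\big) $$
for every derivable $t$: the base cases are the projections (trivial) and the basic operations (assumed), and the inductive step for $t = \omega(t_1, \dots, t_k)$ just applies the assumption for the basic operation $\omega$ and then the inductive hypothesis to each $t_i$.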
Mike Shulman said:
In general the answer is yes, and one way to make it precise is with universal-algebraic tools such as "operads" and "Lawvere theories". [...]
This was great motivation for me to learn a bit about Lawvere theories, which have always scared me away before. I don't understand all the details, but I think I follow the big picture reasonably well now!
A couple things that would require more work from me to understand are:
Anyways, this was interesting to learn a bit about!
In general, the Lawvere theory of some kind of algebraic structure is always the opposite of the category of finitely generated free algebras of that kind. So for Boolean algebras, it would be the opposite of the category of finitely generated free Boolean algebras.
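As a concrete check (a standard count, added here as an aside): an $n$-ary definable operation is a morphism $n \to 1$ in the Lawvere theory, i.e. a Boolean algebra homomorphism from the free algebra on one generator into the free algebra on $n$ generators, i.e. an element of the free algebra on $n$ generators:
$$ \mathrm{hom}_{\mathcal{T}}(n, 1) \;\cong\; \mathrm{hom}_{\mathbf{BoolAlg}}\big(\mathrm{Free}(1), \mathrm{Free}(n)\big) \;\cong\; \mathrm{Free}(n), \qquad |\mathrm{Free}(n)| = 2^{2^n}. $$
So the definable $n$-ary operations on Boolean algebras are exactly the $2^{2^n}$ $n$-ary truth functions.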
Following up from the first point, it would be nice to concretely describe the "definable" operations that commute with homomorphisms.
The "definable" operations are precisely the morphisms in your Lawvere theory; any homomorphism between models of the Lawvere theory will automatically preserve (commute with) all these operations.
That's because a model of a Lawvere theory $T$ is a (nice) functor from that theory to $\mathrm{Set}$, and a homomorphism between models of $T$ is (any) natural transformation between such functors. As you know, a natural transformation between two functors out of $T$ gives a commutative square for each morphism in $T$. These commutative squares say that homomorphisms between models of $T$ commute with all the operations in $T$!
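Spelled out (with notation chosen for this note): if $M, N \colon T \to \mathrm{Set}$ are two models and $\alpha \colon M \Rightarrow N$ is a natural transformation, then for an $n$-ary operation $\omega \colon n \to 1$ of $T$ the naturality square says
$$ \alpha_1 \circ M(\omega) \;=\; N(\omega) \circ \alpha_n, \qquad \text{i.e.} \qquad \alpha\big(\omega^M(a_1, \dots, a_n)\big) \;=\; \omega^N\big(\alpha(a_1), \dots, \alpha(a_n)\big), $$
which is exactly the statement that the homomorphism $\alpha$ preserves the operation $\omega$.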
John Baez said:
Following up from the first point, it would be nice to concretely describe the "definable" operations that commute with homomorphisms.
The "definable" operations are precisely the morphisms in your Lawvere theory; any homomorphism between models of the Lawvere theory will automatically preserve (commute with) all these operations.
I think I understand that. I'm curious how we figure out what those morphisms in the Lawvere theory are, I suppose. So, if I start with a kind of algebraic object of interest (for example, Boolean algebras), there are some operations that we know are preserved by homomorphisms to begin with (for example, $\wedge$ and $\vee$). These operations correspond to morphisms in the Lawvere theory corresponding to that kind of algebraic object. However, there will also be additional morphisms in the Lawvere theory describing the "definable" operations. The process by which these additional morphisms are formed in the Lawvere theory is not yet clear to me.
However, now that I know how we can form the Lawvere theory for an algebraic structure of interest (as you have both described above), maybe I could figure out the answer to this question.
How do we figure out the structure of a Lawvere theory $T$ so that product-preserving functors from $T$ to $\mathrm{Set}$ will yield examples of the kind of algebraic object of interest?
Mike told you one method: if you hand me the category of finitely generated free groups, I can just take its opposite and that's the Lawvere theory for groups, and similarly for any other gadget described by a Lawvere theory. This is an amazing, wonderful theorem that becomes quite obvious when you look at a few examples. But this trick only works if your gadget is described by a Lawvere theory, and it can sometimes be rather awkward to "hand someone" the category of all finitely generated free gadgets of some sort.
Luckily there's another method. If you hand me a bunch of finitary operations obeying a bunch of equations, I can inductively build all the operations obtainable from these operations, and all the equations they obey - and if I do it right this is a Lawvere theory! E.g. starting with the famous operations of a group and the famous equations they must obey, I will get the Lawvere theory for groups.
But you'll notice this "inductively building up" business is exactly what you were asking about at first. So there's not really any way around it - which is why I didn't advertise Lawvere theories at first: you were well on the way to reinventing them. The only hint I had to give you is not to focus solely on binary operations!
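To illustrate the first method in the group case (a standard example, added as an aside): a binary definable operation is a morphism $2 \to 1$ in the Lawvere theory of groups, i.e. a group homomorphism from the free group $F_1$ on one generator to the free group $F_2$ on two generators, i.e. a word $w(x, y)$ in two letters. Any group homomorphism $\varphi$ then automatically preserves the operation $(g, h) \mapsto w(g, h)$; for the commutator word, for instance,
$$ w(x, y) = x y x^{-1} y^{-1}, \qquad \varphi\big(g h g^{-1} h^{-1}\big) = \varphi(g)\, \varphi(h)\, \varphi(g)^{-1}\, \varphi(h)^{-1}. $$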
The process by which these additional morphisms are formed in the Lawvere theory is not yet clear to me.
A Lawvere theory is a category with finite products, and this tells you how to get new morphisms from old: first by composing them (which you can do in any category), and then by a couple of other tricks, which are built into the definition of "finite products".
For example, in any category with finite products we have a "diagonal morphism" $\Delta \colon x \to x \times x$ for any object $x$, so one operation we always get to use is something like
$$ g \mapsto (g, g). $$
This "duplicates a variable".
In the Lawvere theory for groups we also have an operation called "multiplication", $m \colon x \times x \to x$, which lets us multiply. For short we could write it like this:
$$ (g, h) \mapsto g h. $$
Composing this with the diagonal $\Delta$ we get a squaring operation $m \circ \Delta \colon x \to x$, which does this:
$$ g \mapsto g^2. $$
And so on. When you understand finite products, you know all the tricks here.
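Written out as composites (suppressing the associativity isomorphism; the names $\mathrm{sq}$ and $\mathrm{cube}$ are just labels here), the squaring operation, and a cubing operation built the same way, look like this:
$$ \mathrm{sq} = m \circ \Delta \colon x \to x, \qquad \mathrm{cube} = m \circ (m \times 1_x) \circ (\Delta \times 1_x) \circ \Delta \colon x \to x, $$
which on elements of a model do $g \mapsto g^2$ and $g \mapsto (g \cdot g) \cdot g = g^3$.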
Ah, I see! A morphism to $x \times x$ from some object $a$ is determined uniquely by a pair of morphisms from $a$ to $x$. So, if we know some morphisms to $x$ we can induce new morphisms to $x \times x$. I think the duplication example you gave is the morphism from $x$ to $x \times x$ induced by two copies of the identity morphism from $x$ to $x$.
Then, as you pointed out, we can get even more morphisms by composing the morphisms we get this way with other morphisms we already have!
So this seems like a way to get at new operations that will be preserved by homomorphisms, starting with old ones that we already know will be preserved.
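In symbols (using the standard pairing notation, introduced here): the universal property of the product says that a pair of morphisms $f, g \colon a \to x$ corresponds to a single morphism $\langle f, g \rangle \colon a \to x \times x$, and the diagonal is the special case
$$ \Delta = \langle 1_x, 1_x \rangle \colon x \to x \times x. $$
Composing with any binary operation $b \colon x \times x \to x$ then gives a new morphism $b \circ \langle f, g \rangle \colon a \to x$, which is exactly the "inducing new morphisms" step described above.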
Yes, exactly! Here are three other things we can do in any category with finite products. Given a morphism $f \colon a \to b$ and a morphism $g \colon c \to d$ we can "set them side by side", forming the morphism
$$ f \times g \colon a \times c \to b \times d. $$
Also, every category with finite products lets us "delete data", since there's a unique morphism $! \colon a \to 1$ from any object $a$ to the terminal object $1$. This lets us take a morphism $f \colon a \to b \times c$ and "throw out" either one of the outputs, getting a morphism from $a$ to $b$ or from $a$ to $c$.
Thirdly, there's a god-given isomorphism $a \times b \cong b \times a$ that lets us "switch things around".
These three, together with composing morphisms and "duplicating data" using the diagonal, are all the basic ways to build new morphisms from old in a category with finite products.
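As a quick illustration of these moves (the example operations are chosen for this note): composing the swap $\sigma \colon x \times x \to x \times x$ with multiplication gives "multiply in the other order", and combining "deleting" with the identity gives a projection, an operation that just returns one of its arguments:
$$ m \circ \sigma \colon x \times x \to x, \quad (g, h) \mapsto h g, \qquad\qquad x \times x \xrightarrow{\,1_x \times\, !\,} x \times 1 \cong x, \quad (g, h) \mapsto g. $$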
Thanks for pointing out those other things we can do! They were interesting to think about.
I think the list of things you've provided closely relates to the idea that we can view a category with finite products as a cartesian monoidal category.
Yes, very good point! I was breaking the concept of "category with finite products" into pieces using the fact that it can be seen as a cartesian monoidal category.
And it turns out to be very interesting to drop the cartesianness and see what you can do using just a symmetric monoidal category. Then you can still build new operations (morphisms) from old by composing them or "setting them side by side" (tensoring them), but you cannot duplicate or delete arguments.
Mike Shulman said:
In general, the Lawvere theory of some kind of algebraic structure is always the opposite of the category of finitely generated free algebras of that kind. So for Boolean algebras, it would be the opposite of the category of finitely generated free Boolean algebras.
for a nice reference for this fact (and Lawvere theories in general), Algebraic Theories by Adámek, Rosický and Vitale is a must
specifically, that's corollary 11.22
(I have to say I was wondering if that was true because I was thinking of animated ∞-categories, where the algebraic theory you get out of the category you want to animate might not be the algebraic theory you started out with, i.e. for abelian groups you get the theory of -modules, but it is obtained from the finitely presentable objects instead. Haven't thought about this in some time.)
I thought I'd prove this as an exercise:
Take a chain of composed morphisms:
$$ f_n \circ f_{n-1} \circ \cdots \circ f_2 \circ f_1 $$
Because of associativity, this chain is equal to applying the first morphism and then applying the rest:
$$ (f_n \circ \cdots \circ f_2) \circ f_1 $$
Let there be a functor $F$ with our category as its source. According to the functor laws, the image of the above chain under $F$ is equal to:
$$ F(f_n \circ \cdots \circ f_2) \circ F(f_1) $$
But $f_n \circ \cdots \circ f_2$ is itself a chain of composed morphisms, so it is equal to $(f_n \circ \cdots \circ f_3) \circ f_2$. Applying the same step gives us:
$$ F(f_n \circ \cdots \circ f_3) \circ F(f_2) \circ F(f_1) $$
From then on we proceed inductively, until we reach $F(f_n) \circ \cdots \circ F(f_1)$.
Feedback is welcome, I am new to writing proofs.
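A cleaned-up way to state the induction above (just a restatement, with the indexing made explicit): the claim is that for any composable morphisms $f_1, \dots, f_n$,
$$ F(f_n \circ \cdots \circ f_1) = F(f_n) \circ \cdots \circ F(f_1). $$
For $n = 1$ there is nothing to prove. For the inductive step, write the chain as $(f_n \circ \cdots \circ f_2) \circ f_1$, apply $F(g \circ f) = F(g) \circ F(f)$ once, and then apply the inductive hypothesis to the shorter chain $f_n \circ \cdots \circ f_2$.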
Hi Jencel!
If I understand properly, I think you were working on proving that $F(f_n \circ \cdots \circ f_1) = F(f_n) \circ \cdots \circ F(f_1)$ for some sequence of composable morphisms $f_1, \ldots, f_n$ in the category that $F$ is mapping from.
That's not quite what I was asking about in this thread, but it does seem related!
Hm, I guess it is indeed not all there is to it.
Let $H = \mathrm{hom}(A, A)$ be a hom-set for an object $A$ in the source category
(I assume it is the same object, as otherwise you won't be able to define the composition operation in a total way).
Its elements are morphisms $f, g, h, \ldots$
And let $H' = \mathrm{hom}(F(A), F(A))$ be the hom-set for the corresponding object of the target,
with elements $F(f), F(g), F(h), \ldots$
What we want to prove (if I am not mistaken) is that any morphism that is the result of a composition of morphisms in $H$ is mapped to the result of the composition of the corresponding morphisms in $H'$.
To do so, we take a composite morphism $g \circ f$ in $H$ and its counterpart $F(g) \circ F(f)$ in $H'$.
But we already proved that $F(g \circ f) = F(g) \circ F(f)$,
so the composite in $H$ is indeed mapped to the corresponding composite in $H'$.