I've been working through an introductory book on group theory ("Contemporary Abstract Algebra" by Gallian), and had the fun idea of trying to "translate" definitions and theorems in the book into categorical language. Almost immediately, this plan has run into a problem, which I thought would be interesting to ask about here.
To set the context for this problem, I've realized that there are two "levels" one can use to consider groups in the setting of category theory. On the "lower" level, to translate a group G into categorical language, we can consider a single-object category with morphisms satisfying the compositional properties that the elements of G satisfy. On the "higher" level, we can model some things about groups by considering the category Grp, where objects are groups and morphisms are group homomorphisms.
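Concretely, the "lower" level model can be sketched like this (writing BG for the one-object category - a notational choice made here, not from the book):

```latex
% A sketch of the one-object category BG built from a group (G, \cdot, e).
\mathrm{Ob}(BG) = \{\ast\}, \qquad
\mathrm{Hom}_{BG}(\ast, \ast) = G, \qquad
g \circ h := g \cdot h, \qquad
\mathrm{id}_\ast := e.
% Associativity and unit laws for \circ are exactly the group axioms, and every
% morphism g has an inverse g^{-1}, so BG is a one-object groupoid.
```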
Using the "lower" categorical level to prove statements about a single group yields arguments identical to those in the introductory book I'm following, which isn't much fun. On the other hand, at least sometimes, statements can be translated to and this tends to take more thought. As an example, I believe that we can translate the concept of subgroup to by defining a subgroup of a group in this setting to be a monomorphism from some group to .
However, I then came to this theorem in the text (Gallian, page 48):
In a group G, the right and left cancellation laws hold; that is,
ba = ca implies b = c, and ab = ac implies b = c.
Using the "lower" levelling modelling, this is fast to prove - the result follows immediately by using the fact that every morphism is an isomorphism.
However, I would be interested in figuring out the corresponding statement (should there be one) in the setting of Grp. At least to my taste, it feels a bit contrary to the spirit of working in Grp to consider the elements "inside" of individual objects. To my understanding, a key philosophy of category theory is that we can understand objects in terms of morphisms from and to them. So, I am interested in the corresponding statement in Grp, stated in terms of morphisms to and from G (and possibly other auxiliary groups).
My initial thought is to wonder if we can create morphisms in Grp that correspond to elements. For example, one can do this sort of thing in the category of sets by considering different morphisms from a singleton set to a set of interest. However, morphisms from the single-element group do not perform this "element identifying" function in Grp, as every morphism from the single-element group must be the morphism sending the identity element to the identity element. I don't think we can specify individual elements in terms of morphisms in Grp. However, I believe we can specify individual subgroups of G in terms of morphisms in Grp - each subgroup corresponds to a monomorphism into G. If we can specify subgroups using morphisms in Grp, this makes me wonder if the statement about cancellation we are trying to "translate" needs to first be translated into some statements on subgroups. However, the statement about cancellation doesn't really seem to be about subgroups in any way that is obvious to me. So, I guess I got stuck at this point.
Perhaps a simpler place to start is to consider if we can figure out how many elements a group has by considering morphisms to and from groups. This is possible at least in one case: if we know there is exactly one morphism from some group to every other group, then we know that this group must have a single element. Maybe this means that the right perspective here is to study groups in terms of the universal properties they satisfy?
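Here is that one case written out (a quick argument of my own, with t denoting the trivial homomorphism):

```latex
% Claim: if a group G has exactly one homomorphism to every group H, then G is trivial.
% For any H there is always the trivial homomorphism t : G \to H, t(g) = e_H, so
% uniqueness forces every homomorphism out of G to be trivial. Taking H = G,
% the identity homomorphism must itself be trivial:
\mathrm{id}_G = t
\quad\Longrightarrow\quad
g = \mathrm{id}_G(g) = e_G \ \text{ for all } g \in G,
% so G has a single element. (The converse direction is immediate.)
```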
In summary - I am wondering how much we can say about a group G by studying morphisms to and from it in Grp, and how we can use this to "translate" statements about elements of G to statements about morphisms in Grp.
The forgetful functor from the category of groups to the category of sets is representable. That means there is a certain group, call it ℤ (hint!), such that group homomorphisms ℤ → G are in natural bijection with elements of G. Therefore, by brute force if necessary, you can translate any statement involving elements of a group into a statement about certain group homomorphisms.
David Egolf said:
As an example, I believe that we can translate the concept of subgroup to Grp by defining a subgroup of a group G in this setting to be a monomorphism from some group H to G.
You're definitely on the right track, but two different monomorphisms can define the same subgroup of G. A subgroup of G is really a certain equivalence class of monomorphisms into G. You might try figuring out this equivalence relation.
This idea is very important in category theory! A "subobject" of an object in a category is a certain equivalence class of monomorphisms into that object. You can figure out this equivalence relation by looking at a couple of examples, like the category of groups and the category of sets.
If you get stuck you can look this up on the nLab or Wikipedia.
However, I bet you can figure out the idea on your own, and that would fit into your project very nicely!
Related is this question on mathoverflow: Could groups be used instead of sets as a foundation of mathematics?. The answer shows that any mathematical statement can be translated into a fact about Grp.
Zhen Lin Low said:
The forgetful functor from the category of groups to the category of sets is representable. That means there is a certain group, call it ℤ (hint!), such that group homomorphisms ℤ → G are in natural bijection with elements of G.
If I understand, part of the idea is that a group homomorphism φ: ℤ → G is totally determined by what element in G it sends 1 to. (Here ℤ is the group of integers with addition). So, we can associate the element g with the group homomorphism φ_g with φ_g(1) = g. There is exactly one homomorphism like this per element in G, so we have a bijective relationship.
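Writing the bijection out explicitly (a sketch; Hom and U for the underlying-set functor are just the notation used here):

```latex
% The natural bijection witnessing that the forgetful functor U : Grp -> Set is
% represented by the group of integers:
\mathrm{Hom}_{\mathbf{Grp}}(\mathbb{Z}, G) \;\cong\; U(G),
\qquad
\varphi \mapsto \varphi(1),
\qquad
g \mapsto \varphi_g \ \text{ with } \ \varphi_g(n) = g^{\,n}.
% Naturality: for any homomorphism f : G \to H, we have (f \circ \varphi)(1) = f(\varphi(1)),
% so the bijection is compatible with post-composition by f.
```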
This relates to the idea that morphisms have subgroups as their image, not individual elements. So, the things we specify about a group G by probing it with morphisms should generally be in terms of information about subgroups. From that perspective, it is tempting to consider "translating" the element g to the subgroup generated by g. This is sort of what is going on here, as the homomorphism φ_g associated with g (so that φ_g(1) = g) has as its image the subgroup ⟨g⟩ generated by g.
I was wondering why ℤ is such a nice group for associating morphisms with elements. I think the answer is that ℤ is the free group on one element. This ensures that there is indeed a group homomorphism φ_g so that φ_g(1) = g for any g in G. If the source group was not free, there would be additional equations that would hold in it, and that would need to hold in the image of a homomorphism. This could prevent some of these desired homomorphisms from existing.
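To check the freeness point concretely (φ_g is the notation from above; the comparison with ℤ/2 is an added example):

```latex
% Because \mathbb{Z} is free on the single generator 1, the assignment
\varphi_g(n) := g^{\,n}
% is a group homomorphism \varphi_g : \mathbb{Z} \to G for *any* choice of g in G:
\varphi_g(m + n) = g^{\,m+n} = g^{\,m}\, g^{\,n} = \varphi_g(m)\, \varphi_g(n).
% A non-free source imposes extra equations: a homomorphism \mathbb{Z}/2 \to G
% sending the generator to g exists only if g^2 = e, so not every g is reachable.
```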
Trying to generalize, I suppose if we want to consider pairs of elements in G, we might consider associating these with homomorphisms from the free group on two elements (which I am calling F₂). If we call the two generators of this free group x and y, then the homomorphism ψ such that ψ(x) = a and ψ(y) = b would be associated with the pair of elements (a, b) in G.
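Put differently (same letters as above), the free group on two generators should represent the "pairs of elements" functor in the same way that ℤ represents "elements":

```latex
% Homomorphisms out of the free group F_2 = \langle x, y \rangle classify ordered pairs:
\mathrm{Hom}_{\mathbf{Grp}}(F_2, G) \;\cong\; U(G) \times U(G),
\qquad
\psi \mapsto (\psi(x), \psi(y)).
% Freeness of F_2 guarantees that every pair (a, b) arises from exactly one such \psi.
```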
Oscar Cunningham said:
Related is this question on mathoverflow: Could groups be used instead of sets as a foundation of mathematics?. The answer shows that any mathematical statement can be translated into a fact about Grp.
That's amazing! The link is over my head, but that's still really interesting.
I was wondering why ℤ is such a nice group for associating morphisms with elements. I think the answer is that ℤ is the free group on one element.
Yes, that's it. And it works for lots of things, not just groups. For example, consider the ring of polynomials with integer coefficients in one variable, called ℤ[x]. This is the free ring on one variable. So, there's a natural bijection between elements of any ring R and homomorphisms from ℤ[x] to R.
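Spelled out as a sketch (with R an arbitrary ring with identity and U again the underlying-set functor):

```latex
% \mathbb{Z}[x] represents the forgetful functor from rings to sets:
\mathrm{Hom}_{\mathbf{Ring}}(\mathbb{Z}[x], R) \;\cong\; U(R),
\qquad
f \mapsto f(x).
% A ring homomorphism out of \mathbb{Z}[x] is determined by where it sends x, since it
% must send p(x) = \sum_i c_i x^i to \sum_i c_i\, f(x)^i, i.e. "evaluate p at f(x)".
```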
There's a whole big fat theory of this stuff, which I will not burden you with now.
Suffice it to say that when I said "it works for lots of things", people know a vast amount about exactly which things.
David Egolf said:
This relates to the idea that morphisms have subgroups as their image, not individual elements. So, the things we specify about a group G by probing it with morphisms should generally be in terms of information about subgroups. From that perspective, it is tempting to consider "translating" the element g to the subgroup generated by g. This is sort of what is going on here, as the homomorphism φ_g associated with g (so that φ_g(1) = g) has as its image the subgroup ⟨g⟩ generated by g.
No, you should not do this. Information is lost when you pass to the image. For example, the image ⟨g⟩ cannot distinguish between an element and its inverse, since ⟨g⟩ = ⟨g⁻¹⟩.
John Baez said:
You're definitely on the right track, but two different monomorphisms can define the same subgroup of G. A subgroup of G is really a certain equivalence class of monomorphisms into G. You might try figuring out this equivalence relation.
This idea is very important in category theory! A "subobject" of an object in a category is a certain equivalence class of monomorphisms into that object. You can figure out this equivalence relation by looking at a couple of examples, like the category of groups and the category of sets.
Yes, that's a good point. I suppose it is a bad idea to have multiple distinct monomorphisms that we each consider in isolation to "be" a given subgroup. We want, then, to figure out an equivalence relationship on monomorphisms into a group G. We want two monomorphisms f and g to be equivalent, f ∼ g, exactly when they specify the same subgroup.
The following line of argument is tempting. Say f: A → G and g: B → G. I think of a monomorphism from A to G as specifying that the structure of A exists in G. This is because, as a monomorphism in Grp, it has to be injective and therefore not collapse any subgroups to the identity. So, it is tempting to say that f and g are specifying the same thing about G exactly when A is isomorphic to B, so that there exists a group isomorphism h: A → B. So, we would say f ∼ g.
I'm not sure if this makes sense, though. We can apply this construction to the category of sets as a test. In this setting, monomorphisms are injective maps between sets, and we would like to associate an equivalence class of monomorphisms into a set X with a subset of X. Say X = {1, 2, 3}, and we have two other sets A = {a} and B = {b}. Now define two monomorphisms: f: A → X with f(a) = 1, and g: B → X with g(b) = 2. We have that A is isomorphic to B, and so under the equivalence relationship suggested above, modified for the category of sets, we would claim that f and g should be considered as equivalent, and they should specify the same subset of X. However, notice that the image of f and the image of g are different. I think our equivalence relationship only ensures that the images are isomorphic.
I am not sure if this state of affairs is satisfying. In Grp I expect we would get a similar result, and an equivalence class of monomorphisms under this equivalence relationship would only correspond to an equivalence class of isomorphic subgroups of G - not to a particular subgroup.
Maybe we can fix this. What we want is im(f) = im(g). Let us now require that an isomorphism h: A → B exists that also satisfies f = g ∘ h. Under this condition, im(f) = f(A) = g(h(A)) = g(B) = im(g): the middle step uses f = g ∘ h, we have h(A) = B because h is surjective (because it is an isomorphism), and finally g(B) = im(g) because the image of g is exactly g(B). This condition on h also appears to be necessary, as is illustrated by the example with sets above.
So, I think we want f ∼ g if and only if f = g ∘ h for some isomorphism h, to be the rule for considering monomorphisms to be equivalent with respect to the subobject they identify.
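Summarizing the proposed relation in one place (a sketch, with the same letters as above; this is exactly the general notion of subobject):

```latex
% Two monomorphisms into G are declared equivalent when one is the other composed
% with an isomorphism of their sources:
f : A \rightarrowtail G \;\sim\; g : B \rightarrowtail G
\quad\iff\quad
\exists \ \text{an isomorphism } h : A \to B \ \text{ with } \ f = g \circ h.
% Equivalently, f factors through g and g factors through f. A subobject of G is
% an equivalence class of monomorphisms under this relation.
```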
Zhen Lin Low said:
No, you should not do this. Information is lost when you pass to the image. For example, the image ⟨g⟩ cannot distinguish between an element and its inverse, since ⟨g⟩ = ⟨g⁻¹⟩.
That is a good point!
David Egolf said:
So, I think we want f ∼ g if and only if f = g ∘ h for some isomorphism h, to be the rule for considering monomorphisms to be equivalent with respect to the subobject they identify.
That's exactly right! And this is the equivalence relation on monomorphisms that we use when defining a subobject of an object in any category!
Then people take results about subsets, and subgroups, etc., and see how to generalize them to subobjects in general categories. Some theorems generalize to all categories, while others generalize only to certain specially nice categories, like 'regular' categories.
[It is possible this related question was already answered above (and that I just need to work on understanding those answers).]
I was reading in "Category Theory In Context" (by Riehl) and was struck by this statement on page 3:
...the algebra of morphisms determines the category...
This is interesting to consider in the context of groups.
A single group G, viewed as a category, is specified in this way as follows, I think:
-There is a single object
-There is one morphism for each element of G
-Composition of morphisms is given by the group operation, and the identity morphism is the identity element of G
However, it seems that we usually do things a bit differently when defining the category Grp having groups as its objects and group homomorphisms as its morphisms. This specification of Grp goes something like this:
-The objects are all the groups
-A morphism f from G to H must satisfy f(a ⋆_G b) = f(a) ⋆_H f(b) for all a, b in G, where ⋆_G and ⋆_H are the group operations of G and H. In addition, we must have f(e_G) = e_H.
This description of Grp "mixes levels", in that it uses data about composition of morphisms in the categories one sees when "zooming in" on individual groups. It does not directly specify the algebra of composition of morphisms in Grp. As a result, when working in Grp, one often considers elements inside objects - instead of focusing entirely on morphisms between objects.
How might one go about specifying Grp by directly describing the algebra of morphisms, without using the auxiliary concept of elements internal to the objects?
(My initial idea is to start by considering the algebra of morphisms about special universal objects and try to build up things from there).
How might one go about specifying Grp by directly describing the algebra of morphisms, without using the auxiliary concept of elements internal to the objects?
The most important thing to realize is that doing this would be a virtuoso exercise - not something we need to do, and not something we should spend our time on unless we happen to enjoy it.
David Egolf said:
How might one go about specifying Grp by directly describing the algebra of morphisms, without using the auxiliary concept of elements internal to the objects?
This is essentially how [[Lawvere theories]] work: there is a Lawvere theory of groups, T_Grp, and the category of groups is given by the category of finite-product-preserving functors from T_Grp into Set. All the elementwise structure is encapsulated in the category of sets, distinct from the pure algebraic structure of groups.
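As a rough sketch of what that looks like (T_Grp and FP(−, −) are just the notation used here):

```latex
% The Lawvere theory of groups T_Grp: a category whose objects are the finite powers
% x^0, x^1, x^2, ... of a single object x, and whose morphisms x^n -> x are the n-ary
% operations definable from multiplication, inverse and the unit.
\mathbf{Grp} \;\simeq\; \mathrm{FP}(T_{\mathrm{Grp}}, \mathbf{Set}),
% where FP(-, -) is the category of finite-product-preserving functors and natural
% transformations. Such a functor M picks out a set M(x) together with interpretations
% of all those operations, i.e. a group.
```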
But in this approach a group is still a set equipped with extra structure. You can't instantly read off the morphisms in Grp from looking at the Lawvere theory T_Grp. You have to determine the models of T_Grp, which are sets equipped with the operations of T_Grp (namely groups), and then determine the morphisms between these models, which are functions preserving these operations (namely group homomorphisms).
Lawvere theories are great, but we don't want David to be fooled into thinking they're a trick for "describing the algebra of morphisms [in Grp], without using the auxiliary concept of elements internal to the objects".
What they are is a trick for describing all the operations that any group has, and the laws they obey, without referring to any particular group or its elements.
Thanks to both of you for your interesting responses. It is very helpful to know that the "algebra of morphisms" characterization of Grp is not necessarily something worth pursuing. I will keep note of the phrase "Lawvere theory" for future reference.
I am beginning to think that some statements about individual groups are just best stated in terms of elements, even if they can instead be stated in terms of morphisms to and from the group. In particular, I suspect this is the case for statements that hold for any group - and are not dependent on particular structure. So, in my "translate introductory group theory into categorical terms" project, some statements will perhaps be best stated in terms of elements, while others may be better stated in terms of group homomorphisms.
I did notice (I think) that the compositional structure of a group G can be detected by morphisms from the free group on two elements, which was interesting. If we wish to know the result of composing a with b in a group G, we can consider the morphism ψ from the free group on two elements to G which sends the first generator to a and the second to b. If we call the two generators x and y, then ψ(xy) = ab. So, if we have a description in terms of elements of this morphism, we can figure out the internal structure of G in terms of elements. It can also be interesting to look at the kernel of ψ, which tells us about the non-free relationships that hold between a and b. However, I don't think this is really helpful in providing a different perspective on proving things about elements in a group that are required to hold for all groups (such as cancellation).
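For a small illustration of the kernel point (continuing with the same ψ, x, y; the commutator example is an added one):

```latex
% Relations that hold between a and b in G show up as elements of \ker \psi, where
% \psi : F_2 \to G sends x to a and y to b. For instance, if a and b commute in G, then
\psi(x y x^{-1} y^{-1}) = a\, b\, a^{-1} b^{-1} = e_G,
\quad\text{so the commutator } x y x^{-1} y^{-1} \in \ker \psi.
% If a and b satisfy no relations at all, \ker \psi is trivial and \psi embeds F_2 in G.
```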
In fact, any equation that holds in the free group on infinitely many generators will hold for every group.
A related question that comes to mind - is there a description in Grp of the "free group on one element" that does not make use of elements or "underlying sets", but is instead in terms of the algebra of morphisms in Grp?
I believe there is, yes, based on its characterisation as the infinite cyclic group.
I've been thinking about it, but I haven't been able to think of a way to specify the infinite cyclic group in terms of the algebra of morphisms in Grp. Any hints would be appreciated.
There is one described in this MO answer.
Reid Barton said:
There is one described in this MO answer.
Great! Thanks.
Earlier in this thread, I was talking about how I wanted to relate statements in an introductory abstract algebra book to corresponding categorical statements. One such statement is that left-cancellation holds in a group: in a group G, we have ab = ac implies b = c.
This statement can be proved in categorical language, but the way I was doing this did not seem that interesting. I was modelling the group G as a category with a single object where each morphism is an isomorphism. The process of proof ends up being very similar to that in the introductory book.
However, I was recently learning about opposite categories, and this gave me an idea for how to make the categorical exploration of these theorems a little more interesting. The idea is to follow this process: state the theorem as a statement about the single-object category corresponding to a group, form the dual statement by passing to the opposite category, and then interpret the dual statement back as a theorem about groups.
Here is how the process would look for the left cancellation statement above: in the single-object category, left cancellation says that a ∘ b = a ∘ c implies b = c. In the opposite category, where the order of composition is reversed, the same statement reads b ∘ a = c ∘ a implies b = c - which, interpreted back as a statement about a group, is the right cancellation law.
Yes, you can do that for groups directly too. You can construct for every group an opposite group – but a special thing about groups (and groupoids) is that they are isomorphic to their opposites!
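Spelled out (with G^op denoting the opposite group; standard notation):

```latex
% The opposite group G^{op} has the same underlying set as G, with reversed multiplication:
a \ast_{op} b := b \ast a.
% Inversion gives an isomorphism G \to G^{op}, g \mapsto g^{-1}, because
(g h)^{-1} = h^{-1} g^{-1} = g^{-1} \ast_{op} h^{-1}.
% So a theorem about all groups and its dual say the same thing, which is why dualizing
% left cancellation just hands back right cancellation.
```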
You have really interesting ideas and good intuition! In the case of groups, it’s effectively quite trivial, but I believe that in some other situations finding the dual theorem is way more interesting. I believe that this kind of duality was known for some problems of plane geometry, maybe before the advent of category theory. I don’t know interesting examples of this, but I know they exist. Maybe somebody will be able to develop this point.
I remember, I read this in the introduction of the book Quantum groups: A path to current algebra by Ross Street. There is such a duality in « projective plane geometry » and he gives the example of two theorems known as Pascal’s theorem and Brianchon’s theorem, which are duals.
Jean-Baptiste Vienney said:
I remember, I read this in the introduction of the book Quantum groups: A path to current algebra by Ross Street. There is such a duality in « projective plane geometry » and he gives the example of two theorems known as Pascal’s theorem and Brianchon’s theorem, which are duals.
Very interesting! It looks like I can get a copy through my school library, so I'll be able to take a look.
But we always have dual definitions and dual theorems in category theory; you will see that with the further things you will learn. So you take a definition, put a « co » before it, and you have a new definition. Related to the fields I know a little, we have a structure which is named « differential category »; if we reverse all the arrows it gives us a « codifferential category ». The definition without the « co » comes from logic, where we found some years ago differential logic. But in mathematics, unlike in logic, it seems that it is the codifferential categories which are more natural, quite funny! Because of that fact people often need to add some « co » in their phrases. This is the same story as with electrons: bad luck, we chose at some time that the current is positive in some direction and not the other one, and because of that electrons now have a negative charge and not a positive one. Well, this is what I have in my head about duality :grinning_face_with_smiling_eyes: . It’s no more related to groups.
Re using duality to get theorems for free: this "two for the price of one" observation is used all over the place. You might have heard that left adjoints preserve colimits; the dual result is that right adjoints preserve limits, and we get this for free!
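The mechanism behind that "for free" step, roughly (a sketch; F and U here are generic adjoint functors, not anything from earlier in the thread):

```latex
% Suppose F : C -> D is left adjoint to U : D -> C, i.e.
\mathrm{Hom}_{D}(F c, d) \;\cong\; \mathrm{Hom}_{C}(c, U d).
% Reading the same bijection in the opposite categories gives
\mathrm{Hom}_{C^{op}}(U^{op} d, c) \;\cong\; \mathrm{Hom}_{D^{op}}(d, F^{op} c),
% so U^{op} : D^{op} -> C^{op} is a *left* adjoint. Colimits in D^{op} are limits in D,
% so "left adjoints preserve colimits", applied to U^{op}, says exactly that the
% right adjoint U preserves limits.
```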
Some definitions of specific classes of category are self-dual, which allows us to perform similar deductions. For example, the definition of abelian category is self-dual, which means that any result one can prove about abelian categories is also true for opposites of abelian categories. This becomes a lot more interesting when you consider that the typical examples of abelian categories (categories of modules over commutative rings, say) are far from being equivalent to their opposites.