I have been thinking about categories more as abstract objects in their own right, where you do not need to interpret their objects and arrows as some underlying mathematical objects like sets, topologies, etc.
This has made me wonder if it is mathematically rigorous to even say that the objects and arrows of a category “are” anything, other than, abstractly, something called “objects” and “arrows”.
If we say that the arrows are actually group homomorphisms, for example, wouldn’t we first need some technical apparatus declaring a correspondence between the objects of group theory and the abstract objects and arrows of a category, before we go ahead and start to use concepts in category theory to prove, or describe, things in group theory?
You can prove theorems about abstract categories where you don't say what the objects and morphisms are, but you can also start in some framework (like traditional set theory, or type theory, etc.) and define (say) "groups" and "homomorphisms" between them, and then prove they form a category, and then prove theorems about that category, where you get to use properties of groups and homomorphisms. Both approaches are extremely common, and they interact nicely.
Ok, thanks.
This is sort of interesting: https://mathoverflow.net/questions/82526/does-there-exist-a-name-for-a-nonassociative-category-without-identities . I was wondering why associativity is so important, and whether useful structures only emerge when it is present. But from that discussion it seems there is a whole theory of non-associative categories.
Well, I don't know that it's such a big theory. I guess about the same size theory as for magmas, which are "structures without properties", hence useful for some purposes.
Category theory is much bigger, and associativity and identities really are important. One way (just one way) of thinking about associativity is that it guarantees that hom-functors actually are functors (preserve composition). And associativity together with units are crucial to the proof of the Yoneda lemma, in all of its glory and manifestations. It is, to my mind, the most fundamental result of category theory (although it takes time to appreciate why it's so fundamental: it brings together many themes surrounding universal properties and representable functors and adjunctions and so much more).
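To unpack that remark a bit (notation mine, not from the message above): associativity is exactly what makes post-composition functorial, and the Yoneda lemma referred to here can be stated as a natural isomorphism.

```latex
% Functoriality of the hom-functor C(c,-) uses associativity:
% for f : x -> y, g : y -> z and any h : c -> x,
\[
  C(c,-)(g \circ f)(h)
    = (g \circ f) \circ h
    = g \circ (f \circ h)
    = \bigl(C(c,-)(g) \circ C(c,-)(f)\bigr)(h).
\]
% Yoneda lemma, as referred to above: for F : C -> Set and an object c,
\[
  \mathrm{Nat}\bigl(C(c,-),\, F\bigr) \;\cong\; F(c),
  \quad \text{naturally in } c \text{ and } F.
\]
```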
Another perspective regarding whether objects "are" something:
As far as I understand, the data describing an object in a category only has impact on the mathematical structure of that category if it has some impact on what morphisms exist in that category and how they compose. For example, I could define a category where the objects are groups, and the morphisms are all functions between the underlying sets of these groups. This category is a lot like a category where the objects are sets and the morphisms are functions between them. If, however, we require these functions between groups to preserve the group structure (so the functions describe group homomorphisms), then the resulting category becomes very different from the category of sets with functions between them.
I think this perspective is especially relevant when trying to think about how to apply categories. I suspect it's probably good to call an object in some category by some intuitive name relating to an application, and to associate some application-relevant data to that object. However, unless this data has some impact on the morphisms of the category being used to model an application, that application data is not really being reflected in the mathematical structure of the category.
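To spell out that contrast in symbols (notation mine, not from the messages above): the two categories have the same objects but different hom-sets, and only the smaller hom-sets see the group structure.

```latex
% U sends a group to its underlying set.  Same objects, two choices of morphisms:
\[
  \mathrm{Hom}_{\mathbf{Grp}}(G,H)
    = \{\, f : UG \to UH \mid f(xy) = f(x)\,f(y) \ \text{for all } x, y \,\}
    \;\subseteq\; \mathrm{Hom}_{\mathbf{Set}}(UG,UH).
\]
% With the larger hom-sets the group structure has no effect on the
% category; with the smaller ones it does.
```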
There's nothing particularly special about category theory here: all structures in mathematics can be studied either abstractly, where the constituents of the structure have no known internal identity other than their "places" in the structure, or concretely using specific examples where the elements have other identity. For instance, in an abstract group the elements aren't assumed to "be" anything other than "the elements of the underlying set of the group", but in a concrete group these elements may be numbers, matrices, permutations, etc.
Yes, organizing transformations of some set, vector space, etc. into a group and saying that the abstract group is what really matters is a distillation which involves a choice of opinion about what "really matters", much as organizing some structured sets and structure-preserving maps into a category and saying the abstract category is what "really matters" - in fact it's a special case.
Luckily, category theory has ways of recording the concreteness of such situations if we want to: we're not stuck in abstractness. See [[concrete category]], for example.
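For reference, the usual way a [[concrete category]] records this concreteness (standard definition, notation mine): a category equipped with a faithful functor to Set.

```latex
% A concrete category: a category C together with a faithful functor
%   U : C -> Set   (faithful = injective on each hom-set):
\[
  U(f) = U(g) \;\Longrightarrow\; f = g
  \qquad \text{for all } f, g : x \to y \text{ in } C.
\]
% Example: Grp with U sending a group to its underlying set.
```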
Thanks, those are the kinds of perspectives I was hoping to hear. I now have an open-ended beginner’s question kind of related to the above.
I was thinking about how perhaps the defining conditions on a functor could be deduced just from the notion that it is supposed to be a “morphism between categories”. But then I asked myself: what properties would I like functors to fulfill, such that their defining conditions (i.e., preserving identities and composition of arrows) manifest those properties?
I have been thinking about how ultimately, the arrows in a category don’t need to “act” on the objects in any way; they can be meaningless tokens which are simply defined as having one object as domain and one object as codomain. So, I think we could define a meaningless “category of categories” where the objects are categories but the arrows are not normal functors; they are just, I don’t know, assignments of objects, with no special conditions on the arrows.
So, how might we realize why we would want the definition of a functor to be what it is, if we were trying to define a “category of categories”? Why do we want composition of arrows preserved, by the morphisms?
Thank you
I think part of the answer was already given in this:
One way (just one way) of thinking about associativity is that it guarantees that hom-functors actually are functors (preserve composition). And associativity together with units are crucial to the proof of the Yoneda lemma
But I need to think about it some more.
There’s nothing forcing you to want composition of arrows to be preserved. But there’s a long history in abstract algebra of focusing on homomorphisms, i.e. the maps that preserve every operation present in your theory. A functor is the immediate notion of homomorphism for categories in this sense. Furthermore, a mapping on objects and morphisms respecting source and target but not necessarily composition is just a homomorphism of the underlying graphs, which makes it unnatural to claim you’re describing a category of categories at all.
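For completeness, here are the functor axioms under discussion, written out (standard definition, notation mine):

```latex
% A functor F : C -> D assigns objects to objects and morphisms to
% morphisms, respecting sources and targets, such that
\[
  F(\mathrm{id}_x) = \mathrm{id}_{F(x)}
  \qquad\text{and}\qquad
  F(g \circ f) = F(g) \circ F(f)
\]
% for every object x and every composable pair f, g.  Dropping these
% two equations leaves exactly a morphism of the underlying graphs.
```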
Right, you can define a category by taking the objects to be anything you want and the morphisms to be anything you want. You could define a category whose objects are groups and whose morphisms are functions between the underlying sets of groups, not necessarily respecting the group operations. And similarly when the objects are categories.
That said, there are precise senses in which functors are the "canonical" morphisms between categories. For instance, there is an [[essentially algebraic theory]] whose models are categories, and whose homomorphisms of models are functors.
Just as there is an [[algebraic theory]] whose models are groups, and whose homomorphisms are group homomorphisms.
I find it helpful to think of functors as "sending true equations to true equations":
From this perspective, if you have a functor $F : C \to D$, and you know some true equations in $C$, then you can use the functor to get true equations in $D$. So, functors can be used as a tool to take things that we know in one category, and figure out new (but related) things in a different category.
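A minimal worked instance of "sending true equations to true equations" (example mine): a commuting triangle in the source category is carried to a commuting triangle in the target.

```latex
% If h = g . f holds in C, then applying a functor F : C -> D gives
\[
  F(h) = F(g \circ f) = F(g) \circ F(f) \quad\text{in } D,
\]
% so the corresponding triangle commutes in D as well.
```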
Broadly, in any category, I believe we want the internal structure of our objects to have an impact on which morphisms exist and how they compose. So, in a category of categories, we want the morphisms to be closely linked to the internal structure of the categories they are mapping between.
There's a geometric / higher categorical interpretation of that too. You can think of a category as a certain kind of 2-dimensional "cell complex" with vertices being objects, edges being morphisms, and 2-dimensional faces being commutative polygons. Then a functor is just a "cellular map" that takes vertices to vertices, edges to edges, and faces to faces, the last being a geometric version of "sending true equations to true equations".
There's also the concept of a "walking" category, which encodes the axioms of a particular type of object, such as groups, graphs, etc. Functors from these walking categories into an abstract category C are called "___ objects in C", for example functors from the walking graph category into C are "graph objects in C". Replacing C with Set typically returns the original concept, i.e. Fun(graph, Set) = Graph, the category of graphs. However, you have to be careful: sometimes the original type of object requires that you change the ambient categorical structure to re-obtain that type of object. That is, instead of functors, you might need to consider functors preserving monoidal products, or certain limits, or whatever; this is typically referred to as a "doctrine". For example there is a walking monoidal category encoding the axioms of a group so that MonoidalFun(group, Set) = Group, the category of groups. So then we can extend the notion of group to other contexts, for example MonoidalFun(group, Graph) = group object in Graph = graphs with a multiplication satisfying the group axioms.
In this way, we can loosely consider objects of Fun*(W, C) as objects of type W in a category C. Even though we may not know exactly what the objects of C are, the objects of Fun*(W, C) satisfy the categorified axioms that W encodes. (Here Fun* may refer to extra ambient categorical structure.)
Now, if you were to find an equivalence (or weaker) of your original category, C ≃ Fun*(W, D), you might be on your way to saying that the objects of C "are" something.
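One standard instance of this "walking object" picture is the finite-product (Lawvere theory) version for groups; the monoidal version mentioned above works similarly. This is a sketch in my own notation, not taken from the messages above.

```latex
% Let T_Grp be the category with finite products freely generated by
% an object G with maps
%   m : G x G -> G,   e : 1 -> G,   i : G -> G
% subject to the group axioms.  Then finite-product-preserving functors
% out of T_Grp are exactly group objects:
\[
  \mathrm{Fun}^{\times}(T_{\mathrm{Grp}}, \mathbf{Set}) \simeq \mathbf{Grp},
  \qquad
  \mathrm{Fun}^{\times}(T_{\mathrm{Grp}}, C) \simeq \mathrm{Grp}(C)
\]
% for any category C with finite products.
```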
don't you need the category to be cartesian for groups?
You can capture the category of groups either as group objects in Set (a limit notion) or as Hopf algebras in Set (a monoidal notion), perhaps this was meant?
A cartesian category is monoidal, though...
David Michael Roberts said:
A cartesian category is monoidal, though...
it is, but don't you need cartesianness to state that the inverse is actually an inverse? or are duplication and deletion maps preserved by monoidal functors and thus only the walking category needs to be cartesian? it'd help to have a reference on this
Monoidal functors send the duplication and deletion morphisms to some morphisms, of course, and these obey the same equations (duplication followed by deleting either argument is the identity). But the best way to say this is that:
- in a cartesian category the duplication and deletion maps make every object into a [[comonoid]],
- in a cartesian category every object becomes a [[comonoid]] in a unique way,
- monoidal functors send comonoids to comonoids.
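Spelled out (standard equations, notation mine), the unique comonoid structure on an object X of a cartesian category is given by the diagonal and the map to the terminal object:

```latex
% Structure maps:
\[
  \Delta_X = \langle \mathrm{id}_X, \mathrm{id}_X \rangle : X \to X \times X,
  \qquad
  {!_X} : X \to 1.
\]
% Counit laws ("duplicate, then delete either copy, is the identity"),
% up to the canonical isomorphisms X x 1 = X = 1 x X:
\[
  (\mathrm{id}_X \times {!_X}) \circ \Delta_X
  = \mathrm{id}_X
  = ({!_X} \times \mathrm{id}_X) \circ \Delta_X.
\]
% Coassociativity, up to the associator:
\[
  (\Delta_X \times \mathrm{id}_X) \circ \Delta_X
  = (\mathrm{id}_X \times \Delta_X) \circ \Delta_X.
\]
```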
Also, everything I said would remain true if we replaced "comonoid" by "cocommutative comonoid" and "monoidal" by "symmetric monoidal".
All this should be in the nLab under [[cartesian monoidal category]].
But it's not... not quite.
I added some more.
To repeat what @Martti Karvonen said: while we can define a group object in any cartesian category, we can define a [[Hopf object]] in any symmetric monoidal category, and this is a very useful generalization. Just as cartesian monoidal functors send group objects to group objects, symmetric monoidal functors send Hopf objects to Hopf objects.
For example the "free vector space on a Set" functor is symmetric monoidal from to . is cartesian so you can define group objects in , which are just groups. is not cartesian, but since it's symmetric monoidal you can still define Hopf objects in , which are Hopf algebras. And the "free vector space functor" sends groups to Hopf algebras!
[[Hopf object]] is missing. It should be [[Hopf monoid]]. I don't know how to create a redirect on the nLab.
You must edit and add [[!redirects Hopf object]] at the end.
You need diagonals to be able to get the map $G \to G \otimes G$, yes. And you need the map $G \to I$, which would be most easily achieved by assuming the tensor unit is terminal (see [[semicartesian monoidal category]]). If the tensor product is symmetric, this nearly forces the monoidal structure to be cartesian. But do we need symmetry in the group axioms? I don't believe so.
See https://mathoverflow.net/questions/348480/a-semicartesian-monoidal-category-with-diagonals-is-cartesian-proof for discussion of how diagonals and semicartesianness in a symmetric monoidal category need to interact to force cartesianness.
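To see exactly where the diagonal and the terminal map enter, here is the inverse axiom for a group object G written as an equation (standard diagrammatic definition, notation mine):

```latex
% m = multiplication, i = inverse, e : 1 -> G the unit,
% Delta_G the diagonal, !_G : G -> 1 the map to the terminal object:
\[
  m \circ (\mathrm{id}_G \times i) \circ \Delta_G
  \;=\; e \circ {!_G}
  \;=\; m \circ (i \times \mathrm{id}_G) \circ \Delta_G.
\]
% Both Delta_G and !_G appear, so a bare monoidal structure is not
% enough even to state the axiom.
```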
David Michael Roberts said:
But do we need symmetry in the group axioms? I don't believe so.
You mean, do we need symmetry of the ambient monoidal category? I think so: the first axiom at [[bimonoid]] involves a string crossing.
@Mike Shulman I mean for the diagrammatic definition of group object, if one tries to replicate it in a monoidal category with diagonals and with terminal tensor unit.
Isn't that the same as writing down the definition of Hopf object, using the supplied diagonals and terminal maps as the comonoid structure?
Why does one need to know that (gh,gh) = (g,g)(h,h) for the definition of a group, as in the first string diagram at [[bimonoid]]? Or am I misunderstanding what that is trying to say?
I don't think that's quite the right way to say it: I would say both sides are versions of $\Delta(gh)$. In Sweedler notation it would be $(gh)_{(1)} \otimes (gh)_{(2)} = g_{(1)} h_{(1)} \otimes g_{(2)} h_{(2)}$.
I suppose maybe you don't need that to write down the bare definition. But I would expect it to behave pretty oddly otherwise.