You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Hello! I have a soft question.
I recently came across a blog post that gives an example of a category which is not concrete: Homotopy is not concrete.
This surprised me a little, since apparently the Yoneda lemma provides a general way to make any category concrete, namely by replacing each object with the set of morphisms into it. (Indeed, this construction is also cited at the beginning of the blog post).
The seeming contradiction here is resolved by the fact that the above construction doesn’t quite work, because the “set of all arrows into an object” might actually be too big to be a set.
Now, the problem is that I can’t make up my mind about whether this is some fundamentally deep fact, or is “just a size issue”. What are your thoughts about this?
To start off the conversation, let me give my current opinion:
At the moment I tend to see this as “just a size issue”, for the following reason:
Assume we have a countable hierarchy of Grothendieck universes $U_0 \in U_1 \in U_2 \in \cdots$; then, we can define locally $U_i$-small categories in the obvious way, and $U_i$-concrete categories as categories equipped with a faithful functor to the category of $U_i$-small sets; now, given any locally $U_i$-small category $C$ (whose objects, say, form a $U_{i+1}$-set), we have that $C$ is a fortiori a locally $U_{i+1}$-small category, and with the construction from before we can make $C$ a $U_{i+1}$-concrete category.
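Explicitly, the functor I have in mind (notation mine) sends each object to the disjoint union of all hom-sets into it:

$$ |{-}| \colon C \to \mathbf{Set}_{U_{i+1}}, \qquad |X| = \coprod_{A \in \operatorname{ob}(C)} C(A, X), $$

and this is faithful: for $f, g \colon X \to Y$ with $|f| = |g|$, looking at the component at $A = X$ gives $f = |f|(\mathrm{id}_X) = |g|(\mathrm{id}_X) = g$.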
Therefore, any category can be considered concrete, if we see it in a large enough universe.
From a more philosophical point of view, it seems to me that the Yoneda lemma does tell us that any category could be seen as concrete; but morphisms in a category can capture so much information that, to see them as plain functions, we are forced to consider really big sets in general, and perhaps even proper classes.
I'd say this is a real issue, not the kind you just wave away by muttering "Grothendieck" three times in a darkened room.
The thing is that, by definition, a locally small category is concrete if it has a faithful functor into the category of small sets. It's the same occurrence of "small" on both sides of the "is" there!
In many cases, size issues become interesting when there's a definition that involves the same universe occurring twice. For instance, my favorite example of a "real" size issue is that $C \mapsto [C^{\mathrm{op}}, \mathbf{Set}]$ is almost a free cocompletion monad, except that the presheaf category $[C^{\mathrm{op}}, \mathbf{Set}]$ is only the free cocompletion if $C$ is small, while the presheaf category itself is almost never small. So this isn't an endofunctor! This leads to real actual math, where you realize the free cocompletion monad is the mapping of locally small categories sending $C$ to the category $\mathcal{P}C$ of small presheaves on it. There's no way to produce this result just by muttering about Grothendieck universes.
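(To spell out "small presheaf", since I was terse: a presheaf is small when it is a small colimit of representables, so the corrected construction is roughly

$$ \mathcal{P}C = \big\{\, P \in [C^{\mathrm{op}}, \mathbf{Set}] \;\big|\; P \cong \operatorname{colim}_{i \in I} C(-, A_i) \text{ for some small } I \,\big\}, $$

and for locally small $C$ the category $\mathcal{P}C$ is again locally small, so $\mathcal{P}$ really does stay within locally small categories, unlike the full presheaf construction.)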
Similarly, the category of (small!) groups is not cototal. This is a real actual theorem: there's a real actual continuous functor from groups to (small) sets which is really actually not representable by any group. It's representable by a large group, one which maps onto every small simple group in an essentially unique way (I think I'm remembering that construction right.)
Getting back to concreteness: the theorem is that there are categories with a small number of maps between any two objects that can't be represented faithfully by small sets. Once again, that's "small" on both sides. That's a real theorem! It depends on knowing a lot about homology groups! It's also a theorem that every category can be faithfully represented by sufficiently large sets, but it's more like formal nonsense.
A related and very interesting fact is that every locally small category does admit a conservative functor into (small) sets. This is another theorem where you have to actually come up with specific mathematical ideas; and it's good to know! If you're used to homotopy groups of pointed connected topological spaces, and then you realize there is no similar representable family of conservative functors on all topological spaces, you might throw up your hands and wonder if there's any conservative functor to small sets at all--but there always is one. It's somehow never of any real mathematical interest to know some functor sort of exists, but only if you upgrade the universe level.
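(To fix terminology, since I'm leaning on it: "conservative" means isomorphism-reflecting,

$$ U \text{ conservative} \iff \big(\, U(f) \text{ invertible} \Rightarrow f \text{ invertible} \,\big), $$

which in general neither implies nor is implied by faithfulness.)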
It's fine to say "it's just a size issue", only so long as you also accept the consequences: our intuition about many things goes out the window. In particular [[function extensionality]] is no longer assured... That's a pretty big loss!
Wait, what's the connection of size to function extensionality?
My understanding is that function extensionality in the set-theoretic sense (functions agreeing on arguments are equal) depends upon the category being concrete. (Freyd? Can't remember; can find it.)... In topos theory you recover this by expressing it in the internal language about subobjects, but the logic is weakened so it may not "mean the same thing" in the external logic... honestly, ask a topos theorist about that! I'd love to understand it better, but am only a student of it. But in general, all the theorems I know of to ensure it works depend on something topos-like or concreteness, and I think it doesn't hold in general.
It would be really nice to develop an example using $\mathrm{hTop}$ as a quotient of $\mathbf{Top}$, and show how morphisms agreeing on inputs can somehow have different content, maybe being the same only up to a certain kind of equivalence. Sadly I don't have a sharp example; wish I did!
Well, I think you have to specify a proposed concreteness functor before you can even ask about function extensionality in this sense; it doesn't seem to be a property of the abstract category (what are "the inputs", if not the points of the underlying set of the domain?) So if you had a large-concrete category you could still import the function extensionality of large sets and use it in much the same way. If you're using the Yoneda embedding as your large-concreteness functor, this reduces to the fact that morphisms which send all generalized elements to the same place are equal, i.e. the faithfulness of the Yoneda embedding itself.
Yeah agreed insofar as I understand it.
Okay, here is a good discussion thread on the HoTT book; it's homotopy, so it comes down to what we mean by "equal"... but these subtleties also cannot be waved away by saying "Grothendieck" and lighting a candle.
We should ask @Conor McBride !
Okay, my neanderthal-brain attempt to explain what is a deep notion, a sense in which function extensionality could fail when you only use intensional types: identity types in HoTT are path-like, and of course you can have unequal paths which are equivalent on their boundaries. And the "assignment of values" concept we use for function extensionality (whether on generalized objects or, like, points) is really a property of the behavior of assignments on the boundary.
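(For reference, in HoTT-book notation the principle at issue is an inhabitant of

$$ \mathsf{funext} \colon \prod_{f, g \colon A \to B} \Big( \prod_{x \colon A} f(x) =_B g(x) \Big) \to (f =_{A \to B} g), $$

which is not derivable in pure intensional type theory, though the HoTT book obtains it from univalence.)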
And I guess my stretch was that this comes into play not just in HoTT (e.g. the axioms needed for a computer to reason intensionally about and manipulate inequivalent proofs of the same theorem) but also in, like, "actual mathematical homotopy", and this is supported by $\mathrm{hTop}$ being not-concrete. The two concepts may actually be different, and there are experts here who can tell us if I am wrong (as I am wont to be), and such concepts should not be thought of as the same thing. But for now that's what I can say semi-intelligently.
"Function extensionality" usually refers to a property of type theory, which could be the internal type theory of some category. It's very different from the question of whether two morphisms that act the same on all global elements are necessarily equal; that's the notion of a [[well-pointed category]]. Well-pointedness is an extremely strong property and very few categories satisfy it (e.g. doesn't); whereas categorically speaking, function extensionality is really just part of the definition of cartesian closure.
Concretizability is of course much weaker than well-pointedness, but its connection to extensionality is also much more tenuous. I suppose you're thinking that if there's a faithful functor $U \colon C \to \mathbf{Set}$ then you can think of $U(X)$ as the "set of elements" of $X$ and faithfulness as saying that "if two morphisms act the same on all elements then they are equal". But that notion of "element" could be completely bizarre and cooked up for the purpose -- and a given category could admit many different such $U$ -- so I'm dubious that it's useful to think of concreteness as a sort of extensionality principle.
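That is, all that faithfulness of a given $U$ buys you is the schema

$$ \big( \forall a \in U(X) \colon U(f)(a) = U(g)(a) \big) \implies f = g \qquad \text{for } f, g \colon X \to Y, $$

with "element" meaning "element of $U(X)$" for whatever $U$ you happened to pick.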
In particular, while function extensionality is an essential aspect of much mathematics that no one would want to do without, and well-pointedness is a strong property that has many powerful consequences when it holds, it's hard to think of any real applications of concreteness.
May I humbly point to https://thehighergeometer.wordpress.com/2022/11/18/slides-for-colloque-benabou/ which is related but perhaps not entirely what the original post is after. I draw attention to Isbell's condition mentioned later in the slides, which precisely delimits which large but locally small categories are concrete and which are not. It is not purely a matter of size, but an algebraic condition.
Using universes like this, you can even make simpler claims than "all categories are concretisable":
Theorem: Every class is a set
Proof: Interpret "class" as being a set that is possibly large relative to the current universe. Then it is an element of a bigger universe, and hence a set. QED.
Or even:
Theorem: There is a set of all sets.
Proof: consider all the $U$-sets for the current universe $U$. This universe is itself a set. QED.
This is obviously very tongue in cheek, but it feels to me more or less the same as being able to go up a universe and use Yoneda. If you genuinely have a large category, one that doesn't sit inside any universe (for instance the colimit of all the categories of sets relative to all the universes), then you can't wave the universe wand at it.
Thanks everyone for the very thoughtful answers!
I think that the reason why I did not consider the size aspect to be central is that indeed I was looking for a formal-nonsense argument: I tend to see the proof of concreteness via Yoneda as an analog (indeed, a direct generalization) of Cayley’s theorem for groups, which I consider more “morally important” than “mathematically useful” as far as theorems go.
@Kevin Carlson I liked your point about “smallness on both sides of the is”, and I want to make sure I understand correctly your cocompletion example: if we define the “free large-cocompletion” of $C$ as the universal large-cocomplete category $\widehat{C}$ equipped with a cocontinuous functor $C \to \widehat{C}$, can we say that the presheaf category $[C^{\mathrm{op}}, \mathbf{Set}]$ is the “free large-cocompletion” of $C$? Then, the point of your example is that although this “free large-cocompletion” exists, it is much less interesting/mathematically useful than the usual free cocompletion (in particular, because it doesn’t yield a monad), right?
There is also the fact that I tend to be very loose with size when I think about categories, which may be the reason why I missed the subtlety here. For instance, when I wish to construct a category I tend to think of it informally first, then convince myself that there is some universe $U$ such that it could be formalized as a locally $U$-small category (so as to stay safe from paradoxes), and then do my thing. But at that point, I may just as well take $U$ such that the category is $U$-small, and then it is a theorem that the category is also concrete.
(It didn’t occur to me, as @David Michael Roberts suggested, that there could be an essential problem with “categories that don’t sit in any universe”: that is perhaps due to the fact that I tend to think of formal issues within the framework of type theory, and I have no idea how one could define a category which doesn’t sit in any type universe).
About the thought-provoking examples by @David Michael Roberts, for some reason I “morally” agree with the first statement, but not the second:
1) Every class is a “set”: the tautological universe level argument, to me, simply suggests that collections behave in much the same way at all universe levels; morally they are all just “sets”, but importantly they have different “sizes” (or rather, they are “stratified” so as to avoid impredicative paradoxes).
2) There is a “set” of all “sets”: that argument feels wrong, since if we understand “set” as in the above point then we can’t just consider a “current universe” which fits them all.
These are just “gut feelings” of course, but I wonder if they come from some mathematical misconception deep inside my head.
So here's the link: consider the class of all sets. If you believe the first theorem, you have to believe the second theorem.
Well, the thing is that to make sense of the first claim I understood “set” as “any set-like object” (in the sense that it behaves just like a set, except for stratification/size differences); but then I can’t “consider the class of all “sets””.
This is somewhat analogous to a finitist who can consider “natural numbers”, has an intuitive understanding of the fact that natural numbers of any size behave the same way, but nonetheless will not accept the “class of all natural numbers”.
Proper classes don't behave like sets, though. If you can tell me formally what you mean by "set-like object" then there is something precise to say. I mean a 'set' as something that satisfies a system of axioms for sets (ZF(C), NBG(C), ETCS, ...), not some vague "set-like" notion. And one can definitely work with the category of all sets as a model of the theory of categories.
@David Michael Roberts to say that a “set” is something which (belongs to a universe that) satisfies the axioms of e.g. ZFC seems very reasonable to me. But then I don’t see why proper classes would not be “sets”: if we assume a hierarchy of universes $U_0 \in U_1 \in U_2 \in \cdots$, doesn’t each universe give a model of ZFC? (Forgive my naivety if I’m horribly wrong here.)
Also, if you use the above definition of “set”, I still can’t see how you can “consider the class of all sets”; the axioms of ZFC do not imply the existence of a class of all sets, do they?
The axioms of ZFC do not talk about classes at all, but you can still define what a class is: a first-order formula p in the language of ZFC with a free variable, giving something like { x | p(x) }. The class of all sets is then { x | x=x }. You can even define the category of classes of ZFC as the syntactic category of that theory, so you can know what operations are possible on classes: finite limits, quotients (using e.g. Scott's trick), and a few other things. They cannot do all the things that sets do. And by "all sets" I mean that. Not "all $U$-sets". It is perfectly legitimate to take the colimit of the categories of $U$-sets. So an object consists of a set from any universe. And I'm assuming there is some kind of inclusion mapping of one universe into another, so it makes sense to talk about functions between sets that come from different universes. By "all sets" I could mean "an element of an arbitrary universe".
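(For instance, Boolean operations on classes are just logical operations on the defining formulas,

$$ \{x \mid p(x)\} \cap \{x \mid q(x)\} = \{x \mid p(x) \wedge q(x)\}, $$

but there is no formula defining "all subclasses of { x | p(x) }", so no power class.)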
If ZFC only being able to access classes indirectly bothers you, try NBG instead. There you really do get a class of all sets as an object of the theory.
Also, if you use the above definition of “set”, I still can’t see how you can “consider the class of all sets”; the axioms of ZFC do not imply the existence of a class of all sets, do they?
That is correct: "class" is not part of the object language of ZFC. There are extensions of ZFC like NBG in which both "class" and "set" are part of the formal language.
So relative to ZFC itself, "class" belongs to the meta-language. I pretty much identify "class" here with a first-order formula in the language of set theory, as in the class of sets satisfying some formula you can write down. You can manipulate classes as you do formulas, but operations like taking the power class of a class are out of bounds here.
(David beat me to it by a few minutes while I was typing.)
I really like this paper for thinking about classes in ZF(C): "What is the real category of sets?" https://arxiv.org/abs/1212.3107 by @Samuele Maschio. The published version toned down the title, but I prefer this one. The objects being literally syntactic objects in the language of ZF set theory means that, by definition ... you have everything. The properties of this category might not be determined (e.g. you can't prove or disprove the continuum hypothesis about this category; it's just not determined from the ZF axioms, and this isn't about having a model), but you can't suddenly go: "ah, but secretly it's just some countable transitive model in some ZFC metatheory", and think about other models.
Ok, I think I’m suffering from a lack of background in formal logic and set theory which prevents me from fully understanding your last comments.
One foundation that I think I may understand formally well enough is type theory (as laid out in the HoTT book), but in that context I can’t see how to do something analogous to what @David Michael Roberts is saying.
For instance, I don’t see how one can meaningfully “take the colimit of the categories of $\mathcal{U}_i$-types”, at least from within the language. I can see how one could move out to a meta-language and define something that effectively behaves like a “universe of all types”, but this introduces a separation between stuff which exists at the level of the meta-language as opposed to within the language, which I think creates a further “meta-stratification” of universes: if we take the perspective of the meta-language, then we can meaningfully talk about both the types internal to the language and the “universe of all types”, but the “universe of all types” is defined purely in terms of the types within the language (it’s actually “the universe of all types internal to the language”), and hence will not “include” all the universes that exist at the level of the meta-language (e.g. itself).
(I assume the situation is similar in NBG, which internalizes “the class of all sets” $V$, but then $V$ is not “the class of all classes”.)
Perhaps I should revisit this issue with a better background. For now, I will thank you all for the precious answers.
Note that I've not said anything about the class of all classes :-)
Also, if you'll look at the slides linked in my first comment here, one point of the project is to divorce Freyd's theorem about concretisability of locally small categories from ZFC (well, in fact NBG with Global Choice), and make the theorems hold for algebraic set theory + some rank axiom about stratifying "all the sets". This theory allows one to talk about classes directly, but even the syntactic category of ZF is a model of AST, so I don't feel like it's importing a metatheory: everything is implemented in AST, of which any number of other foundations are merely examples.
Sorry @David Michael Roberts, I went off on a bit of a tangent with the “class of classes” thing.
What I wanted to say is that even if we define the “class of all sets” at the level of the metalanguage, I am not convinced by the “consider the class of all sets; … it is a set” argument, because it needs the following to hold:
(*): the class of all sets belongs to some universe which satisfies the axioms of ZFC;
and if “set” means “element of an arbitrary universe”, no single universe can contain them all.
(I think that in NBG we also have a problem, but a different one: though the “class of all sets” exists, I think one can’t show that it is a “set” in the sense of (*), because operations on proper classes are severely limited in NBG.)
By the way, I will gladly take a look at the slides you linked, but not before I get back to a screen larger than that of my phone :sweat_smile:.
I don't mean to take the two theorems seriously. Rather, I'm saying that your claim that all categories are concrete feels to me the way those theorems seem to feel to you, though perhaps for different reasons.
You should remember that even talking about large categories requires one to have a framework for discussing non-sets. If every category is secretly small, then of course they are all concrete. But I posit that people really do sometimes care about all the things, and explicitly state they are working in a foundation where these issues arise. Scholze, for instance, said he didn't want to use universes in setting up the machinery of condensed sets, unlike the pyknotic sets approach.
Your attempt to explain my point about the free cocompletion monad isn’t quite right, but I definitely wasn’t clear to begin with! By a “cocomplete” category I mean one with all small colimits. The “large cocompletion” you describe would be the category of presheaves valued in large sets, i.e. sets in the next universe up! The category of all presheaves of small sets on a locally small category has no universal property at all. The free (small) cocompletion of a locally small category is its full subcategory on those presheaves that are small colimits of representables. So there are several different possible presheaf categories flying around! The one valued in large sets, though, is of no mathematical interest.
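To summarize in symbols (with $U \in U'$ successive universes and $C$ locally $U$-small), the presheaf categories in play are:

$$ [C^{\mathrm{op}}, \mathbf{Set}_U] \quad \text{all presheaves of small sets: no universal property when } C \text{ is large;} $$
$$ [C^{\mathrm{op}}, \mathbf{Set}_{U'}] \quad \text{presheaves of large sets: the "free large-cocompletion", of no real interest;} $$
$$ \mathcal{P}C \subseteq [C^{\mathrm{op}}, \mathbf{Set}_U] \quad \text{small presheaves: the genuine free (small) cocompletion.} $$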
Another interesting “size matters” theorem, while we’re on this: a $U$-small category cocomplete for $U$-small colimits is almost impossible: it must be a preorder! That’s one reason why the “large cocompletion” isn’t interesting. Its universal property involves maps into large-cocomplete categories, but in an extremely strong sense there are absolutely no examples of those in normal mathematical practice, since ordinary categories are themselves small for the second universe. Much in contrast to small-cocomplete ones, of course.
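A sketch of why, via Cantor: if $C$ is $U$-small with $U$-small coproducts and some hom-set $C(A, B)$ has at least two elements, then writing $M = \operatorname{Mor}(C)$,

$$ |M| \;\geq\; \Big| C\Big( \coprod_{m \in M} A,\ B \Big) \Big| \;=\; |C(A, B)|^{|M|} \;\geq\; 2^{|M|} \;>\; |M|, $$

a contradiction; so every hom-set has at most one element and $C$ is a preorder.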