This is not a formal mathematical question, but a "soft question" I want to discuss.
I found the following quote in Leinster's category theory book:
Once you know a universal property of an object, it often does no harm to forget how it was constructed. For instance, if you look through a pile of algebra books, you will find several different ways of constructing the tensor product of two vector spaces. But once you have proved that the tensor product satisfies the universal property, you can forget the construction. The universal property tells you all you need to know, because it determines the object uniquely up to isomorphism.
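For concreteness, the universal property Leinster refers to can be stated as follows (the notation $V$, $W$, $U$, $k$ is ours, not from the quote): bilinear maps out of the product correspond to linear maps out of the tensor product.

```latex
% Universal property of the tensor product of vector spaces V and W over a field k:
% bilinear maps out of V x W correspond to linear maps out of V \otimes W.
\[
  \mathrm{Hom}_k\big(V \otimes W,\; U\big) \;\cong\; \mathrm{Bilin}\big(V \times W;\; U\big),
  \qquad \bar{f}(v \otimes w) = f(v, w) \ \text{ for the linear map } \bar{f} \text{ corresponding to } f.
\]
```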
I wonder: how often does it do no harm to forget the construction of an object if one knows its universal property instead? Should you really forget (your favorite) construction of the tensor product? To make the question a bit more concrete, let me phrase it this way:
If you're doing abstract algebra and proving things about, say, fields of fractions, polynomial rings, free groups, localizations of rings, and quotient groups, is this almost always possible knowing just the universal properties? Or is it sometimes necessary in practice to know the construction? If so, how often?
(I'm completely aware that the universal property determines an object up to isomorphism, so that in theory, the answer to the first question is "yes". At a pinch, when trying to prove that an object $X$ has an isomorphism-invariant property $P$ while only knowing the universal property of $X$, one can still construct an object $Y$ satisfying the same universal property and show that $P(Y)$ holds, i.e., one can "inline" the construction in the proof. Strictly speaking, this is then still a proof of $P(X)$ from the universal property of $X$. So the question is rather: is this sometimes the only known way of proving $P(X)$?)
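Spelled out, the "inlining" argument is simply:

```latex
% "Inlining" a construction: Y is any explicitly constructed object satisfying the
% same universal property as X, and P is the isomorphism-invariant property in question.
\[
  P(Y)\ \text{(proved from the construction of } Y\text{)}
  \quad\wedge\quad X \cong Y\ \text{(same universal property)}
  \quad\Longrightarrow\quad P(X)\ \text{(since } P \text{ is isomorphism-invariant).}
\]
```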
For instance, if all you know about $V \otimes W$ is its universal property, how do you in practice get a handle on maps to $V \otimes W$? This would be a case in which I suspect it's necessary to work with constructions in practice (but please convince me otherwise!).
Interestingly, in categorical logic constructions often do matter: for a theory $\mathbb{T}$ (in a fixed doctrine), you can consider the 2-functor $\mathbf{Mod}_{\mathbb{T}} \colon \mathcal{K}^{\mathrm{op}} \to \mathbf{CAT}$ (where $\mathcal{K}$ is the 2-category of "categorical structures" for the doctrine, e.g., the 2-category of Grothendieck toposes in the doctrine of geometric logic) which sends each $\mathcal{E} \in \mathcal{K}$ to the category of models of $\mathbb{T}$ in $\mathcal{E}$, and ask whether it is representable (i.e., whether there's a classifying category of $\mathbb{T}$). But you are not merely interested in finding any representation of that functor: only once you have found a syntactic description of the classifying category have you automatically proven both a soundness and a completeness theorem.
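In symbols (writing $\mathcal{E}_{\mathbb{T}}$ for the classifying object, a notation of ours), representability means:

```latex
% Representability of the models 2-functor: E_T is the classifying object of the theory T,
% e.g. the classifying topos in the doctrine of geometric logic.
\[
  \mathbf{Mod}_{\mathbb{T}}(\mathcal{E}) \;\simeq\; \mathcal{K}\big(\mathcal{E},\; \mathcal{E}_{\mathbb{T}}\big)
  \qquad \text{(pseudo)naturally in } \mathcal{E} \in \mathcal{K}.
\]
```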
It might be worth consulting some mechanised library of algebraic theorems (or better, someone who understands said library). I don't know any off the top of my head, but if they do exist, they might well have made a decision about whether to use a construction or to abstract away and use the universal property.
My guess, if this were done in type theory, would be to abstract away from tensor products as soon as possible (because the construction is ugly and unhelpful), but for something like biproducts in Vect, to use the construction (because it's so simple: just products of the underlying sets). But this is somewhat contingent on type theory, and in particular its definitional equality.
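For concreteness, the simple construction being referred to is just:

```latex
% Biproduct in Vect, constructed explicitly: the underlying set is a cartesian product,
% with componentwise operations.
\[
  V \oplus W \;=\; \Big( V \times W,\quad (v, w) + (v', w') = (v + v',\, w + w'),\quad
  \lambda \cdot (v, w) = (\lambda v,\, \lambda w) \Big).
\]
```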
Thanks for that thought. I hope someone like that reads this thread. :smile:
In my experience of working with tensor-hom adjunctions between monoid toposes, the explicit construction certainly has uses in proofs in a way that doesn't immediately lift to the universal property. For example, tensoring with a flat M-set may or may not preserve infinite products, and showing that it sometimes does not requires an explicit counterexample illustrating how that failure happens, which rests on an explicit construction of said tensor product.
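For reference, the explicit construction in question is the standard tensor product of $M$-sets (the notation $A$, $B$, $C$ is ours):

```latex
% Explicit construction of the tensor product over a monoid M (assumed notation:
% A a right M-set, B a left M-set), which is what such counterexamples rely on:
\[
  A \otimes_M B \;=\; (A \times B)\,/\!\sim, \qquad (a \cdot m,\, b) \;\sim\; (a,\, m \cdot b)
  \quad \text{for } a \in A,\ b \in B,\ m \in M,
\]
% together with the adjunction  Hom(A \otimes_M B, C)  \cong  Hom_M(A, Hom(B, C)).
```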
There's a similar issue in universal algebra: free objects are really handy, and we can prove lots of positive things about them (e.g. of the form "every free object has (categorical) property X") from just the universal properties, but showing negative properties requires explicit constructions. For example, every $k$-vector space is free, but to show that not all groups are free (or that not all members of a given variety are free) you need to construct a counterexample which exhibits some property which free groups do not have. To me, this is also a property of free objects, but it is not one which can be proven with the universal property alone. I would enjoy it if someone argued against me on this :wink:
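The universal property in play, stated for free groups (and analogously in any variety):

```latex
% Universal property of the free group F(S) on a set S (U = underlying-set functor):
% functions S -> U(G) correspond to group homomorphisms F(S) -> G.
\[
  \mathrm{Hom}_{\mathbf{Grp}}\big(F(S),\, G\big) \;\cong\; \mathrm{Hom}_{\mathbf{Set}}\big(S,\, U(G)\big)
  \qquad \text{naturally in } S \text{ and } G.
\]
```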
Thanks!
I guess you can use the universal property of free groups and the fact that the category of groups has coequalizers to construct the group $\mathbb{Z}/2$ as a coequalizer of two morphisms out of the free group on one generator. Then we can use the universal property of free groups to prove $\mathbb{Z}/2$ is not free on one generator, because it lacks that universal property. Do we ever need the explicit construction of free groups in this argument? I don't know. If so, it's in the second sentence, the "Then..." part.
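One concrete choice of the two morphisms (the specific pair isn't given above, so take this as just one possible instance): write $F(1)$ for the free group on a single generator $x$, and coequalize the endomorphism sending $x \mapsto x^2$ against the trivial homomorphism. Since coequalizers in $\mathbf{Grp}$ quotient by a normal closure, this imposes the single relation $x^2 = 1$.

```latex
% One possible instance (the specific pair of morphisms is our choice): F(1) is the free
% group on one generator x, f sends x to x^2, and e is the trivial homomorphism.
\[
  \mathrm{coeq}\big(f,\, e \colon F(1) \rightrightarrows F(1)\big)
  \;\cong\; F(1)\,\big/\,\langle\!\langle x^{2} \rangle\!\rangle \;\cong\; \mathbb{Z}/2,
  \qquad f(x) = x^{2},\quad e(x) = 1.
\]
```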
To verify that $\mathbb{Z}/2$ isn't free on one generator we need some extra information about groups. We need to be able to compute the underlying set of $\mathbb{Z}/2$ once we've specified how it's built, for example, and then observe that the free group on one generator has infinitely many endomorphisms. But that's circular! To know that the free group on one generator has infinitely many elements (or even more than 2 elements) requires us to have an explicit construction of that free group to hand, and computing the number of elements of $\mathbb{Z}/2$ requires even more information a priori.
If we already know what the monad induced by the free-forgetful adjunction looks like (i.e., what its values are), then we have enough information, but we certainly need more than the universal property of free groups to work anything out.
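For what it's worth, the identification of endomorphisms with elements does follow from the universal property; what doesn't follow is the size of the underlying set:

```latex
% By the universal property alone, endomorphisms of the free group on one generator
% correspond to elements of its underlying set (U = underlying-set functor):
\[
  \mathrm{End}\big(F(1)\big) \;=\; \mathrm{Hom}_{\mathbf{Grp}}\big(F(1),\, F(1)\big)
  \;\cong\; \mathrm{Hom}_{\mathbf{Set}}\big(\{\ast\},\, U F(1)\big) \;\cong\; U F(1),
\]
% so counting endomorphisms of F(1) is exactly counting its elements -- the step that
% seems to require the construction.
```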
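Concretely, "knowing what the values of the monad are" amounts to the usual reduced-words description, which is of course itself a construction:

```latex
% The monad T = U \circ F of the free-forgetful adjunction for groups, described explicitly:
% T(S) is the set of reduced words in the alphabet S \cup S^{-1}.
\[
  T(S) \;=\; U F(S) \;\cong\;
  \big\{\, s_1^{\varepsilon_1} \cdots s_n^{\varepsilon_n} \;\big|\;
  n \ge 0,\ s_i \in S,\ \varepsilon_i \in \{\pm 1\},\ s_{i+1}^{\varepsilon_{i+1}} \neq s_i^{-\varepsilon_i} \,\big\}.
\]
```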
Thanks!
By the way, I think one could argue that for almost all objects in mathematics it is enough to know their universal property. After all, in homotopy type theory, the main way to construct types is as (higher) inductive types, and such types are introduced via their universal property. For instance, all there is to know about the natural numbers is that they satisfy the Peano axioms, and roughly speaking the Peano axioms are more or less equivalent to "$\mathbb{N}$ is a natural numbers object" (I think), which can be thought of as the universal property of $\mathbb{N}$. Of course, one reason my question is inherently vague is that I don't specify in which category the universal properties I'm discussing should be phrased. My original post probably suggests that I'm talking about universal properties in categories of groups, rings, and so on. On the other hand, the universal properties that can be derived from definitions of types as inductive types are universal properties in categories of algebras over endofunctors on the category of types (or something like that).
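For the record, the universal property of a natural numbers object (with structure maps $\mathrm{zero} \colon 1 \to \mathbb{N}$ and $\mathrm{succ} \colon \mathbb{N} \to \mathbb{N}$):

```latex
% Universal property of a natural numbers object: for every object X equipped with
% x : 1 -> X and f : X -> X there is a unique u : N -> X such that
\[
  u \circ \mathrm{zero} = x
  \qquad \text{and} \qquad
  u \circ \mathrm{succ} = f \circ u ,
\]
% i.e. exactly the recursion principle that (roughly) the Peano axioms encode.
```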
Yes, I think that's exactly the point: for the primitive objects of a mathematical foundation it suffices to describe them in terms of universal properties, but for other objects that are constructed out of those, it can be useful or essential to have a concrete construction.