You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I've been thinking a lot about linear categories, meaning Vect-enriched categories where Vect is the category of vector spaces over some field, and I use the 'free vector space on a set' functor
F: Set → Vect
to do 'base change', turning Set-enriched categories into Vect-enriched categories by applying F to each homset.
This works well because F is symmetric monoidal from (Set, ×) to (Vect, ⊗).
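A minimal sketch of this base change for the case V = Vect, assuming vectors in the free vector space F(S) are represented as dicts from basis elements to nonzero rational coefficients (the name `F_mor` and the encoding are illustrative, not from the discussion):

```python
from fractions import Fraction

def F_mor(f):
    """F on morphisms: extend a function f: S -> T linearly to a map
    F(f): F(S) -> F(T) between free vector spaces.  A vector in F(S)
    is a dict sending basis elements of S to nonzero coefficients."""
    def Ff(v):
        w = {}
        for x, c in v.items():
            y = f(x)
            w[y] = w.get(y, Fraction(0)) + c
        # drop coefficients that cancelled to zero
        return {y: c for y, c in w.items() if c != 0}
    return Ff
```

On objects, F just fixes which keys are allowed; functoriality F(g ∘ f) = F(g) ∘ F(f) is immediate from the definition.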
I'm wondering: what's a good level of generality where we have something like that happening?
More precisely, I guess I'm asking for which symmetric monoidal categories V do we have a god-given symmetric monoidal functor F: Set → V?
That's still somewhat vague, I know! But if I also require that F preserve colimits then I guess it's not so vague.
In this case, as soon as we know F maps 1 ∈ Set to the unit object of the symmetric monoidal category V, we know what F does to all objects and morphisms in Set.
And then, for F to be symmetric monoidal, it's sufficient (and necessary?) that the tensor product in V distribute over colimits.
So maybe I'm answering my own question here: if V is symmetric monoidal, has all (small) colimits, and its tensor product distributes over all (small) colimits, then we get a functor
F: Set → V
that's symmetric monoidal and preserves (small) colimits.
So V being a Benabou cosmos is sufficient. But perhaps it's not necessary?
Are there interesting examples of symmetric monoidal cocomplete categories where the tensor product distributes over colimits that aren't Benabou cosmoi?
(It seems to find examples one must read the technical fine print of the adjoint functor theorem, which seems like the opposite of 'interesting'.)
John Baez said:
I'm wondering: what's a good level of generality where we have something like that happening?
More precisely, I guess I'm asking for which symmetric monoidal categories V do we have a god-given symmetric monoidal functor F: Set → V?
Given your free vector space example, I think you mean lax symmetric monoidal, right?
One general construction which achieves this is when V is the Kleisli category of a commutative monad on Set. Clearly your vector space example is an instance of this, since the free vector space monad is commutative. I think that this construction should make sense more generally for any symmetric monoidal category in place of Set.
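A hedged sketch of why the Kleisli category fits here: a Kleisli morphism of the free-vector-space monad from S to T is a function from S to formal linear combinations over T, and Kleisli composition substitutes linearly, which is exactly matrix multiplication (the dict encoding and the name `kleisli_compose` are mine):

```python
def kleisli_compose(g, f):
    """Compose Kleisli morphisms of the free-vector-space monad:
    f sends each element of S to a formal linear combination over T
    (a dict of nonzero coefficients), and g likewise from T over U.
    The composite substitutes linearly -- i.e. multiplies matrices."""
    def gf(x):
        w = {}
        for t, c in f(x).items():
            for u, d in g(t).items():
                w[u] = w.get(u, 0) + c * d
        return {u: e for u, e in w.items() if e != 0}
    return gf
```

The unit of the monad, x ↦ {x: 1}, acts as the identity for this composition, as one expects of a Kleisli category.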
I think the free vector space on a set functor is strong monoidal from (Set, ×) to (Vect, ⊗): F(S × T) ≅ F(S) ⊗ F(T).
On both sides we have a vector space whose basis consists of ordered pairs (s, t) with s ∈ S and t ∈ T.
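In a dict-based encoding of free vector spaces (coefficients indexed by basis elements), that isomorphism can be written down directly; an illustrative sketch, with `tensor` my own name:

```python
def tensor(v, w):
    """The iso F(S) (x) F(T) ~ F(S x T) on vectors: the tensor of two
    dict-vectors is supported on ordered pairs of basis elements,
    with the product of the two coefficients on each pair."""
    return {(s, t): c * d
            for s, c in v.items()
            for t, d in w.items()
            if c * d != 0}
```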
In the end I claimed we have an easy result that's quite general: if V is symmetric monoidal, has all (small) colimits, and its tensor product distributes over all (small) colimits, then we get a functor
F: Set → V
that's symmetric monoidal and preserves (small) colimits.
If that's wrong someone please let me know! :pray:
Whoops, sorry! I guess I should think before writing garbage from memory :sweat_smile:
No problem. That's a fine way to get rid of garbage!
But I agree, @Tobias Fritz, that it would be nice to generalize from Set to other categories.
(I just don't happen to need that right now. I'm just wanting to know that for many categories V, we get an "automatic" way to turn categories into V-enriched categories)
@Tobias Fritz , did you mean the Eilenberg-Moore category? In the case of vector spaces they're equivalent since (assuming the axiom of choice) every vector space is free, but in general the Kleisli category won't be cocomplete.
For an example of a cocomplete symmetric monoidal category whose tensor product distributes over colimits but is not closed monoidal, how about the category of [[small presheaves]] on a large category with finite limits, with its cartesian monoidal structure?
For any (symmetric monoidal) V, we have a forgetful functor V-Cat → Cat sending each homset to its I-elements, for I the monoidal unit. In case V is moreover cocomplete, this forgetful functor has a left 2-adjoint which sends each homset X to the coproduct of X-many copies of I.
In case V is Vect, this is exactly the construction you describe, where we take the free vector spaces on the old homsets to get something Vect-enriched.
Anyways, I guess this is what you were already getting at since you were asking for V to be cocomplete, but it's nice to know that this is in Kelly's book (chapter 2.5)
Mike Shulman said:
Tobias Fritz , did you mean the Eilenberg-Moore category? In the case of vector spaces they're equivalent since (assuming the axiom of choice) every vector space is free, but in general the Kleisli category won't be cocomplete.
No, I did mean the Kleisli category. The reason I gave that formulation is because the Kleisli category automatically comes equipped with a symmetric monoidal structure, while for the Eilenberg-Moore category this requires suitable coequalizers to exist (in order to construct the tensor product in the form of bimorphism classifiers). But I understand that most people would consider the Eilenberg-Moore category, provided that it is symmetric monoidal, a nicer category to enrich over.
(This makes me realize that I don't actually know why colimits in bases of enrichment are considered important. I can see that one will want certain limits to exist, like the ends that define objects of enriched natural transformations, but why colimits?)
I've just connected to Zulip and seen this. Just a comment: the 'free vector space on a set' functor F: Set → Vect
is the restriction of another "free vector space" functor Rel → Vect,
along the inclusion functor Set → Rel
which forgets that a function is a function and just remembers that it is a relation, i.e. sends a function to its graph.
I'm interested in these matters too, and I think that Rel → Vect is even more interesting than Set → Vect!
Because if X and Y are two sets, then a relation from X to Y is almost the same thing as a matrix, except that we have forgotten the order of the elements in the rows and columns. Then if R is a relation, the corresponding linear map sends each basis element x to the sum of the basis elements y related to it by R. The correspondence between a matrix and a linear map between two vector spaces with bases can thus be understood through this functor Rel → Vect.
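This correspondence can be sketched in a few lines, assuming a relation is a set of pairs and vectors are dicts of nonzero coefficients (the name `rel_to_linear` is mine):

```python
def rel_to_linear(R):
    """Send a relation R (a set of (x, y) pairs) to the linear map taking
    each basis element x to the sum of the basis elements y with x R y --
    i.e. the linear map whose matrix is the 0/1 matrix of R."""
    def GR(v):
        w = {}
        for x, c in v.items():
            for (a, y) in R:
                if a == x:
                    w[y] = w.get(y, 0) + c
        return {y: c for y, c in w.items() if c != 0}
    return GR
```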
I must work on my courses now; I have to for my PhD. Goodbye haha.
I couldn't resist saying this.
Sorry, I said something wrong. To think about matrices over a field K, you must look at a functor
Rel_K → Vect,
where Rel_K is the category of K-valued relations (a K-valued relation from X to Y being a function X × Y → K).
One more time, you have a restriction Rel → Rel_K (which sends ordinary relations to {0,1}-valued, hence K-valued, relations),
and the composite Set → Rel → Rel_K.
I think it would be very interesting to replace the use of matrices by the use of this functor and develop the theory of these K-valued relations instead of the theory of matrices.
That's something I'm very interested in and I'd like to work on, for instance how to diagonalize K-valued relations, etc.
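One way to treat K-valued relations as generalized matrices is to compose them by the matrix-multiplication formula over an arbitrary semiring; a hedged sketch, with the dict-of-nonzero-entries encoding and the name `compose_krel` my own:

```python
def compose_krel(S, R, add, mul, zero):
    """Compose K-valued relations stored as dicts of nonzero entries:
    R from X to Y and S from Y to Z, with the composite given by
    (S . R)(x, z) = sum over y of R(x, y) * S(y, z), computed in an
    arbitrary semiring described by (add, mul, zero)."""
    out = {}
    for (x, y), c in R.items():
        for (y2, z), d in S.items():
            if y2 == y:
                out[(x, z)] = add(out.get((x, z), zero), mul(c, d))
    return {k: v for k, v in out.items() if v != zero}
```

With (+, ·, 0) this is ordinary matrix multiplication; with (or, and, False) it is composition of ordinary relations.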
John Baez said:
So maybe I'm answering my own question here: if V is symmetric monoidal, has all (small) colimits, and its tensor product distributes over all (small) colimits, then we get a functor
F: Set → V
that's symmetric monoidal and preserves (small) colimits.
one thing that lawvere taught me is that every abstraction arises from practice. the practice that the functor Set → Vect captures is that it completes sets as vector bases into free actions by a particular field. in other words, a set is a set of observables and the vector space that it spans is the space of mixtures that we may measure (i.e. count and normalize) and record in the field.
i think the elephant in the room might be the category Spid of vector spaces with bases, i.e. of special commutative dagger-frobenius algebras. (some people call vector spaces with bases spiders, but in bob's and jamie's and dusko's paper which says that they are just bases they are still called special commutative dagger-frobenius algebras or monoids...) the functor Set → Spid maps every set to the free field-action on it, just like, for a monoid M, the free-action functor maps every set to the free M-action on the right.
the functor Set → Vect is then obtained by forgetting the bases, i.e. as the composite Set → Spid → Vect. this reflects the practice that we first learn (and first developed) linear algebra as algebra of matrices, and still try to reduce linear operators to matrices whenever we can, but have to forget the bases to model systems that we cannot really observe. (which is often the whole point of it all.)
note that Set is just the subcategory of Spid restricted to comonoid homomorphisms, i.e. to linear operators that preserve the bases. general linear operators do not preserve bases because nature does not care to keep our observables apart.
tobias's intuition about linear operators as kleisli morphisms might be based on tacitly replacing Vect with Spid, where a linear operator is indeed just a function from a basis to the space spanned by another basis. most linear algebra courses do that, and claim that all they need is the axiom of choice or zorn's lemma or whatever. magic :))
BTW, the fields of rationals, reals, and complex numbers are themselves nontrivial monoidal categories for which the linear operators associated with matrices can be obtained by extending matrices as enriched profunctors along the kan extensions of enriched yoneda embeddings. i worked this out for the above three fields, and it might be true in general. i think it goes through for p-adics, but i don't know how it would work and what it would mean for finite fields. (the monoidal category of the rationals and the reals for which the linear operators are kan extensions is in my paper in this samson abramsky collection of articles that finally just appeared. or on arxiv. called "retracing" and then a long title which i forget.)
in general, just like we forget the vector bases but secretly use them all the time, maybe we forgot the morphisms that live in the fields that we use. these morphisms in any case record the practices that drove us to construct the fields the way we construct them.
Chris Grossack (they/them) said:
For any (symmetric monoidal) V, we have a forgetful functor V-Cat → Cat sending each homset to its I-elements, for I the monoidal unit. In case V is moreover cocomplete, this forgetful functor has a left 2-adjoint which sends each homset X to the coproduct of X-many copies of I.
Only if the tensor product of V preserves coproducts on each side. This is hidden in Kelly's formulation because he has a blanket assumption that V is closed.
Tobias Fritz said:
(This makes me realize that I don't actually know why colimits in bases of enrichment are considered important. I can see that one will want certain limits to exist, like the ends that define objects of enriched natural transformations, but why colimits?)
Well, the conversation here is one reason: you need colimits in order to have a left adjoint from Set-categories to V-categories!
Another is that you need colimits to be able to compose V-profunctors.
But it's true that you can do more without colimits than is always recognized, e.g. you can assemble profunctors into a virtual equipment even if they can't be composed.
John Baez said:
So maybe I'm answering my own question here: if V is symmetric monoidal, has all (small) colimits, and its tensor product distributes over all (small) colimits, then we get a functor
F: Set → V
that's symmetric monoidal and preserves (small) colimits.
sorry, i didn't finish what i wanted to say above so it ended up looking unrelated to your @John Baez explanation. i went so widely because i was trying to argue that the functor in question should be viewed in terms of functorial semantics. that captures the mathematical practice from which the constructions arise.
the claim is that looking at the "evolutionary history" of mathematical structures helps and is sometimes necessary.
Set is the free category with coproducts over 1 generator. V=Span is the free category with biproducts over 1 generator. spans are matrices of natural numbers. if the counted multiplicities need to be subtracted, we end up with matrices of integers. if they need to be partitioned and normalized, we end up with matrices of the rationals. if we restrict everything to be finite, we are looking at categories of suitable free algebras: commutative monoids for Span, abelian groups for matrices of integers... coproducts become biproducts because the requirement that the monoid operation is single-valued means that it preserves the comonoid, which is the bialgebra law. V=Spid is the category of free actions of the field. (of course all field actions are free. and the generator is not an algebra. stuff under the carpet here.) Vect emerges along a forgetful functor from V=Spid.
the product in Set comes about as X × Y = ∐_{x ∈ X} Y. if the algebraic operations in V are commutative, then the product creates the monoidal structure in the category of free algebras...
in this perspective, the coequalizers don't play a significant role. the coproducts allow us to bundle observables, and induce both the monoidal structure and its preservation. Set → V is simply the inclusion of strict morphisms (mapping generators to generators) into free algebras for some algebraic operations that we used. this was probably tobias's intuition as well.
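The remark that spans are matrices of natural numbers can be made concrete: composing spans by pullback multiplies their matrices of multiplicities. A small illustrative sketch (the names `compose_spans` and `span_matrix_entry` are mine), encoding a span by the list of its leg-pairs with repetitions recording the multiplicity over each pair:

```python
def compose_spans(T, S):
    """Compose spans X <- A -> Y and Y <- B -> Z (each given as a list
    of leg-pairs, with repetitions for multiplicity) by pullback over Y:
    keep every pair of apex elements whose middle legs agree."""
    return [(x, z) for (x, y1) in S for (y2, z) in T if y1 == y2]

def span_matrix_entry(legs, x, y):
    """The (x, y) entry of the natural-number matrix of a span:
    the number of apex elements lying over the pair (x, y)."""
    return sum(1 for (a, b) in legs if (a, b) == (x, y))
```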
sorry, i always write these explanations too long to read. i am struggling to say that we generally don't stand much chance to explain things by looking at abstract structures floating in space on their own, but that we stand a better chance if we look at how they came about.