I am wondering what connection we can make between the process of "picking a basis" for a vector space of dimension $n$ and "picking a particular coproduct" of $n$ one-dimensional vector spaces.
For example, let's work in the category $\mathbf{FinVect}_{\mathbb{R}}$ of real finite-dimensional vector spaces. Then consider the two-dimensional vector space $\mathbb{R}^2$, where the vectors are pairs of real numbers which we can add component-wise, and where we can multiply by scalars by multiplying each component. We have inclusion functions $i_1 : \mathbb{R} \to \mathbb{R}^2$ and $i_2 : \mathbb{R} \to \mathbb{R}^2$ defined by $i_1(x) = (x, 0)$ and $i_2(x) = (0, x)$ for any $x \in \mathbb{R}$. I think that $\mathbb{R}^2$ together with these two inclusion functions forms a coproduct of $\mathbb{R}$ and $\mathbb{R}$ in $\mathbf{FinVect}_{\mathbb{R}}$.
However, we could try to define some different inclusion functions. For example, we could try defining $j_1$ so that it sends $x$ to a point on a line of one slope going through the origin, and $j_2$ so that it sends $x$ to a point on a line of a different slope going through the origin. Then is $\mathbb{R}^2$ together with $j_1$ and $j_2$ also a coproduct of $\mathbb{R}$ and $\mathbb{R}$ in $\mathbf{FinVect}_{\mathbb{R}}$?
If this is so, then it is interesting to note that, given $\mathbb{R}^2$, $j_1$ and $j_2$ specify a particular coproduct of $\mathbb{R}$ and $\mathbb{R}$ by specifying two one-dimensional subspaces of $\mathbb{R}^2$. Each nonzero vector of $\mathbb{R}^2$ specifies a one-dimensional subspace of $\mathbb{R}^2$, so picking a basis for $\mathbb{R}^2$ also corresponds to selecting two one-dimensional subspaces of $\mathbb{R}^2$.
Inspired by this similarity (assuming the above is correct), I am wondering if the process of "picking a basis" for a real $n$-dimensional vector space $V$ is equivalent to "picking $n$ coprojections from $\mathbb{R}$ that make $V$ into a coproduct". In other words, maybe "picking a basis" for an $n$-dimensional real vector space is essentially the same thing as "picking a specific coproduct" of $n$ copies of $\mathbb{R}$?
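Here is a minimal check of the kind of thing I mean, taking $j_1(x) = (x, x)$ and $j_2(x) = (x, -x)$ purely as illustrative coprojections onto two distinct lines through the origin. Given any vector space $W$ and linear maps $f_1, f_2 : \mathbb{R} \to W$, the only possible linear map $h : \mathbb{R}^2 \to W$ with $h \circ j_1 = f_1$ and $h \circ j_2 = f_2$ is
$$ h(x, y) \;=\; \tfrac{x + y}{2}\, f_1(1) \;+\; \tfrac{x - y}{2}\, f_2(1), $$
since $(x, y) = \tfrac{x+y}{2}(1, 1) + \tfrac{x-y}{2}(1, -1)$; and this $h$ does satisfy $h(j_1(x)) = h(x, x) = x f_1(1) = f_1(x)$, and similarly for $j_2$. So $\mathbb{R}^2$ with these two coprojections is again a coproduct of $\mathbb{R}$ and $\mathbb{R}$, and the same argument works for any two coprojections whose images are distinct lines.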
For me, a good definition of a basis of a finite-dimensional $k$-vector space $V$ of dimension $n$ is this one:
"An isomorphism $k^{\oplus n} \xrightarrow{\ \sim\ } V$"
where $k^{\oplus n}$ is the external direct sum of $n$ copies of $k$, which is the same thing as the vector space $k^n$, i.e. the cartesian product of $n$ copies of $k$, because the cartesian product of a finite number of vector spaces is at the same time a product and a coproduct.
Probably it is equivalent to your definition.
From such an isomorphism, you recover a basis by defining $e_i$ to be the image of $(0, \dots, 0, 1, 0, \dots, 0)$ (with $1$ in the $i$-th position and $0$ in all other positions) under this isomorphism.
From a basis $(e_1, \dots, e_n)$, you get such an isomorphism by sending $(\lambda_1, \dots, \lambda_n)$ to $\sum_{i=1}^{n} \lambda_i e_i$.
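For instance (a made-up illustration with $k = \mathbb{R}$ and $n = 2$): the basis $e_1 = (1, 1)$, $e_2 = (1, -1)$ of $\mathbb{R}^2$ corresponds to the isomorphism
$$ \phi : \mathbb{R}^{\oplus 2} \to \mathbb{R}^2, \qquad \phi(\lambda_1, \lambda_2) \;=\; \lambda_1 (1, 1) + \lambda_2 (1, -1) \;=\; (\lambda_1 + \lambda_2,\ \lambda_1 - \lambda_2), $$
and conversely $e_1 = \phi(1, 0)$ and $e_2 = \phi(0, 1)$ recover the basis from $\phi$.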
Hmm, I think another definition which is maybe closer to what you say is this:
"A basis with $n$ elements of $V$ is given by maps $i_1, \dots, i_n : k \to V$ such that $(V, i_1, \dots, i_n)$ is a coproduct of $n$ copies of $k$."
I'm pretty sure it's equivalent
So you were talking exactly about the case $k = \mathbb{R}$, $n = 2$ of this, I guess.
In this case you recover the basis by defining $e_j = i_j(1)$. And if you have a basis, you define $i_j(\lambda) = \lambda e_j$.
Thanks, @Jean-Baptiste Vienney! By the way, for context, I was hoping this line of thought might help with trying to make arguments that don't depend on a particular basis.
Hmm, I see, but for me it is rather giving a categorical definition of what a basis is, which is good too in that it doesn't use bases, i.e. we can define a basis of dimension $n$ of an object $A$ in a monoidal category as a choice of maps $i_1, \dots, i_n : I \to A$ such that $(A, i_1, \dots, i_n)$ is a coproduct of $n$ copies of the monoidal unit $I$.
Then, for instance in $\mathbf{Set}$, a basis of a set $X$ of cardinality $n$ is a choice of an order on its elements, i.e. you write $X = \{x_1, \dots, x_n\}$. It comes from the fact that in $\mathbf{Set}$, viewed as a monoidal category with the cartesian product as tensor product and a one-element set $\{*\}$ as monoidal unit, a function $\{*\} \to X$ is exactly the same thing as an element of $X$.
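Spelled out as a formula (just making that last sentence explicit): under the bijection
$$ \mathbf{Set}(\{*\}, X) \;\cong\; X, \qquad f \mapsto f(*), $$
maps $i_1, \dots, i_n : \{*\} \to X$ exhibiting $X$ as a coproduct of $n$ copies of $\{*\}$ are the same thing as $n$ distinct elements $x_1 = i_1(*), \dots, x_n = i_n(*)$ exhausting $X$, i.e. an enumeration $X = \{x_1, \dots, x_n\}$.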
By the way, working out what you can do with such a definition of a basis in a monoidal category would be interesting! For instance, you could define what a finite-dimensional object is, what the matrix of a map is, what a change of basis is... But I think that you must assume that the category is also enriched over commutative monoids in order to be able to do enough interesting things. Probably you can then prove that a map between two finite-dimensional objects of the same dimension is an iso iff any matrix of the map with respect to two bases is invertible. Also, you know that in such a category the endomorphisms $\mathrm{End}(I)$ of the monoidal unit always form a semiring...
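For instance, here is one way the matrix of a map could be set up in this context (a sketch only, assuming the finite coproducts of copies of $I$ involved are in fact biproducts, which is automatic in a category enriched over commutative monoids): given bases $i_1, \dots, i_n : I \to A$ and $i'_1, \dots, i'_m : I \to B$, with induced projections $p_j : A \to I$ and $p'_k : B \to I$ (so $p_j \circ i_l$ is the identity if $j = l$ and zero otherwise), a map $f : A \to B$ gets the matrix
$$ [f]_{kj} \;=\; p'_k \circ f \circ i_j \;\in\; \mathrm{End}(I), \qquad 1 \le k \le m, \; 1 \le j \le n, $$
and composition of maps then corresponds to matrix multiplication over the semiring $\mathrm{End}(I)$, using $\sum_k i'_k \circ p'_k = \mathrm{id}_B$.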
I was interested in thinking about this some time ago, and I thought that maybe you could use this paper to get a definition of the determinant (without negatives, because I only want to assume that the category is enriched over commutative monoids, not abelian groups): Determinants of matrices over semirings
But I'm always afraid of doing something like this, which can be perceived as abstract nonsense with only trivial applications. It's true that it would be turning something simple, a basis, into something which looks more complicated, and without any application to complicated stuff. I like doing such things but I don't feel like everybody thinks the same.
I like to think about a basis of a vector space $V$ as an isomorphism $F(S) \to V$, where $F : \mathbf{Set} \to \mathbf{Vect}$ is the "free vector space on a set" functor. In this blog article I basically described the essential image of this functor $F$. Roughly, this means describing in purely linear-algebraic terms what extra structure a vector space gets from having a basis.
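(For concreteness, by the free vector space on a set $S$ over the ground field $k$ I mean the usual construction
$$ F(S) \;=\; \{\, g : S \to k \ \mid\ g(s) = 0 \text{ for all but finitely many } s \,\}, $$
with the map $S \to F(S)$ sending $s$ to the indicator function $\delta_s$; a basis of $V$ indexed by $S$ is then the same thing as an isomorphism $F(S) \cong V$.)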
I also did it for finite-dimensional vector spaces. The answer for finite-dimensional vector spaces is different in flavor from the answer for arbitrary vector spaces.
Most of what I was describing was work of Aurelio Carboni, but the case of arbitrary vector spaces, as opposed to finite-dimensional ones, involves a conjecture that was proved by Theo Johnson-Freyd in a comment to my article.
Thanks, I will look at this!
A minor comment: given a fin. dim. vector space $V$, choosing an isomorphism to a specified $k^n$ gives an ordered basis, but choosing an isomorphism to the vector space of functions $S \to k$, where $S$ is a finite set, gives an unordered (in the sense of not-necessarily-ordered) basis. This might make a very tiny amount of difference. The first case induces a canonical orientation on $V$, for instance, but the second one doesn't.
It definitely makes a difference, and putting the structure of a commutative special Frobenius algebra on a finite-dimensional complex vector space is the same as choosing an unordered basis for it.
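In one direction this is easy to see explicitly (a sketch of the construction as I understand it, writing $m, u, \Delta, \varepsilon$ for the multiplication, unit, comultiplication and counit): an unordered basis $\{e_s\}_{s \in S}$ of $V$ determines a commutative special Frobenius algebra structure by
$$ m(e_s \otimes e_t) = \delta_{st}\, e_s, \qquad u(1) = \sum_{s \in S} e_s, \qquad \Delta(e_s) = e_s \otimes e_s, \qquad \varepsilon(e_s) = 1, $$
i.e. $V$ becomes the algebra of functions on the finite set $S$ with pointwise multiplication; "special" is the equation $m \circ \Delta = \mathrm{id}_V$, which holds here since $m(\Delta(e_s)) = m(e_s \otimes e_s) = e_s$. The substance of the statement is the converse: over $\mathbb{C}$, every commutative special Frobenius algebra structure arises from such a basis.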
For real vector spaces this is alas not true, since this sort of fact holds only for separably closed fields (i.e. fields with no nontrivial finite separable extensions).
What goes wrong in the real case? Are there other SCFAs that come from the separable extensions?
Yes, the general one is a product of finitely many copies of $\mathbb{R}$ and $\mathbb{C}$.
So you wind up describing your vector space as consisting of real functions on one finite set and complex functions on another!
I don't see an 'equational' way to get rid of the complex stuff, offhand.
John Baez said:
I don't an 'equational' way to get rid of the complex stuff, offhand.
It seems you accidentally a verb
But more seriously, it's good to know how much of a difference one sees between the two cases!
Yes, and if we were working over $\mathbb{Q}$ the situation would become a nightmare, because every algebraic number field gives a commutative special Frobenius algebra over $\mathbb{Q}$. So you'd get $\mathbb{Q}(\sqrt{2})$, and $\mathbb{Q}(i)$, and tons more!
Well, this is either a nightmare or Grothendieck's Fundamental Theorem of Galois Theory, depending on how you look at it. :upside_down:
John Baez said:
Most of what I was describing was work of Aurelio Carboni, but the case of arbitrary vector spaces, as opposed to finite-dimensional ones, involves a conjecture that was proved by Theo Johnson-Freyd in a comment to my article.
I've been looking at the paper Matrices, relations, and group representations by Aurelio Carboni. I'm far from having understood everything for the moment, but I've already learned something very useful from it.
He says that the construction $\mathbf{Rel}(\mathcal{C})$, the category of relations in $\mathcal{C}$, makes sense whenever $\mathcal{C}$ is a regular category. Until then, I knew this construction only in the case where $\mathcal{C}$ is the category $\mathbf{Set}$. It makes me wonder two things:
David Michael Roberts said:
A minor comment: given a fin. dim. vector space $V$, choosing an isomorphism to a specified $k^n$ gives an ordered basis, but choosing an isomorphism to the vector space of functions $S \to k$, where $S$ is a finite set, gives an unordered (in the sense of not-necessarily-ordered) basis. This might make a very tiny amount of difference. The first case induces a canonical orientation on $V$, for instance, but the second one doesn't.
Indeed, and I think that what has been said about unordered bases in category theory doesn't make the idea of exploring and generalizing the notion of ordered basis in category theory any less interesting.
Jean-Baptiste Vienney said:
John Baez said:
Most of what I was describing was work of Aurelio Carboni, but the case of arbitrary vector spaces, as opposed to finite-dimensional ones, involves a conjecture that was proved by Theo Johnson-Freyd in a comment to my article.
I've been looking at the paper Matrices, relations, and group representations by Aurelio Carboni. I'm far from having understood everything for the moment, but I've already learned something very useful from it.
He says that the construction $\mathbf{Rel}(\mathcal{C})$, the category of relations in $\mathcal{C}$, makes sense whenever $\mathcal{C}$ is a regular category. Until then, I knew this construction only in the case where $\mathcal{C}$ is the category $\mathbf{Set}$. It makes me wonder two things:
- We know that $\mathbf{Rel}$ is a codifferential category. Is it also true that $\mathbf{Rel}(\mathcal{C})$ is a codifferential category? (ping JS PL (he/him), who might find this interesting; there is also the question of whether it is a model of differential linear logic, models of which are compared to basic differential categories in your paper (together with Blute, Cockett, Seely) Differential Categories Revisited) (Also thanks to Sacha Ikonicoff, who told me about his interest in some ideas in the same vein but in the framework of tangent categories) (Also thanks to Ralph Sarkis, who told me that equivalence relations in more general categories than $\mathbf{Set}$, such as any regular category for instance, are interesting.)
- This might interest Rose Kudzman-Blais: you showed (together with Blute and Niefield) in Constructing linear bicategories that the category of relations in a quantale is always a linear bicategory. Is this construction a particular case of $\mathbf{Rel}(\mathcal{C})$ for $\mathcal{C}$ a regular category? Is $\mathbf{Rel}(\mathcal{C})$ a linear bicategory in general?
Thanks Jean-Baptiste for making me aware of this paper by Carboni. It will be of great help for what I am currently working on.
In that pre-print, we actually showed something different. We weren't looking at the relations in a quantale, rather that $\mathcal{V}\text{-}\mathbf{Rel}$, the category of $\mathcal{V}$-valued relations, is a linear bicategory, provided $\mathcal{V}$ itself is a linearly distributive category. This implies that $\mathbf{Rel}$ is a linear bicategory, as the two-element quantale has a negation, making it linear. As for your other point, what makes $\mathbf{Rel}$ a linear bicategory is that there is a notion of negation, which is given by taking the inverse relation followed by complementation. So, as long as you have inverse relations (which you do for any regular category) and a sufficiently nice notion of complemented subobjects in $\mathcal{C}$, $\mathbf{Rel}(\mathcal{C})$ is a linear bicategory. This is mentioned quickly in Example 2.3 (2) of [Introduction to linear bicategories].
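To unpack the negation in the basic example (my paraphrase, not a quotation from the paper): for relations $R \subseteq A \times B$ and $S \subseteq B \times C$, the two compositions making $\mathbf{Rel}$ a linear bicategory can be taken to be
$$ (S \circ R)(a, c) \iff \exists b \in B.\ R(a, b) \wedge S(b, c), \qquad (S \bullet R)(a, c) \iff \forall b \in B.\ R(a, b) \vee S(b, c), $$
and the negation sends $R$ to the complement of its converse, $R^{\perp} = \{ (b, a) \mid (a, b) \notin R \}$, which exchanges the two compositions: $(S \circ R)^{\perp} = R^{\perp} \bullet S^{\perp}$.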
Rose Kudzman-Blais said:
Thanks Jean-Baptiste for making me aware of this paper by Carboni. It will be of great help for what I am currently working on.
In that pre-print, we actually showed something different. We weren't looking at the relations in a quantale, rather that $\mathcal{V}\text{-}\mathbf{Rel}$, the category of $\mathcal{V}$-valued relations, is a linear bicategory, provided $\mathcal{V}$ itself is a linearly distributive category. This implies that $\mathbf{Rel}$ is a linear bicategory, as the two-element quantale has a negation, making it linear. As for your other point, what makes $\mathbf{Rel}$ a linear bicategory is that there is a notion of negation, which is given by taking the inverse relation followed by complementation. So, as long as you have inverse relations (which you do for any regular category) and a sufficiently nice notion of complemented subobjects in $\mathcal{C}$, $\mathbf{Rel}(\mathcal{C})$ is a linear bicategory. This is mentioned quickly in Example 2.3 (2) of [Introduction to linear bicategories].
Thanks, it's very cool!
Jean-Baptiste Vienney said:
- We know that $\mathbf{Rel}$ is a codifferential category. Is it also true that $\mathbf{Rel}(\mathcal{C})$ is a codifferential category? (ping JS PL (he/him), who might find this interesting; there is also the question of whether it is a model of differential linear logic, models of which are compared to basic differential categories in your paper (together with Blute, Cockett, Seely) Differential Categories Revisited)
$\mathbf{Rel}$ is a (co)differential category and a full model of differential linear logic. I think that checking whether $\mathbf{Rel}(\mathcal{C})$ is a (co)differential category is just a matter of referencing the right stuff...
I believe that it is known that if $\mathcal{C}$ has a free commutative monoid functor, then it induces a free exponential modality on $\mathbf{Rel}(\mathcal{C})$, and thus a model of linear logic (I think there might be a reference for this... I just can't find it right now). Since a free exponential modality always has differential structure, it follows that $\mathbf{Rel}(\mathcal{C})$ is a (co)differential category.
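For what it's worth, in the case $\mathcal{C} = \mathbf{Set}$ this should just recover the familiar picture (sketching from memory, so take the details with a grain of salt): the free commutative monoid on a set $A$ is the set $\mathcal{M}_f(A)$ of finite multisets over $A$, and finite multisets give exactly the free exponential on $\mathbf{Rel}$,
$$ !A \;=\; \mathcal{M}_f(A), \qquad \text{dereliction } d_A = \{ ([a], a) \mid a \in A \}, \qquad \text{codereliction } \bar{d}_A = \{ (a, [a]) \mid a \in A \}, $$
with contraction given by multiset sum; the codereliction is what gives the (co)differential structure.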
I don't want to derail the discussion too much here. But I'm happy to talk about $\mathbf{Rel}(\mathcal{C})$ being a differential category, or differential category stuff in general, in another thread or in DMs.
Thanks JS. We could talk more about that another day. I don't want to go into the details of anything research-related now because I must concentrate on courses during the next two semesters. But I could ask you more later.