Is there any categorical setting where this composition of maps between vector spaces (or modules over a semi-ring) makes sense?
The first map is a (linear) copy map $V \to V \oplus V$ which sends $x$ to $(x,x)$. The second map is the universal (bilinear) projection $V \oplus V \to V \otimes V$ into the tensor product, which sends $(x,y)$ to $x \otimes y$. The third map is the universal (linear) map from $V \otimes V$ to the second symmetric tensor power $S^2V$ of $V$ (which can be defined as the coequalizer of the identity and the swap from $V \otimes V$ to $V \otimes V$), which sends $x \otimes y$ to $xy$.
Together they give me an exponentiation map which sends $x$ to $x^2$.
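To fix notation (the names $\Delta$, $\beta$, $q$ for the three maps are just mine), the composite is
$$V \xrightarrow{\;\Delta\;} V \oplus V \xrightarrow{\;\beta\;} V \otimes V \xrightarrow{\;q\;} S^2V, \qquad x \;\mapsto\; (x,x) \;\mapsto\; x \otimes x \;\mapsto\; x^2.$$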
It can seem silly, but you can't do this in a monoidal category, and I think you can't do it in a multicategory, a polycategory, an additive category with tensor products, whatever you want... I don't think the definitions would allow such a composition. And in logic it doesn't fit into the framework of linear logic either: linear logic is only about linear maps, not quadratic maps like this. But these power maps are an elementary component of algebra and are useful in the most basic mathematics.
I feel like category theory isn't even equipped with enough theory to express in a purely categorical setting the identity $(x+y)^2 = x^2 + 2xy + y^2$ that I would like to write for any vectors $x, y$ in a common vector space $V$.
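(Indeed, chasing $x + y$ through the three maps is exactly where the cross terms come from:
$$x + y \;\mapsto\; (x+y,\,x+y) \;\mapsto\; (x+y) \otimes (x+y) \;\mapsto\; x^2 + 2xy + y^2,$$
using bilinearity in the middle step and $q(x \otimes y) = q(y \otimes x) = xy$ in the last.)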
I've already talked about that, and people didn't seem to realize that there is a basic problem to solve here.
Maybe I'm wrong, though, and someone knows how to do that. Maybe it's super simple and I'm just being an idiot.
I think the key here is that polynomial maps are the natural maps of affine schemes, not of vector spaces. Commutative $k$-algebras are monadic over $k$-vector spaces and the coordinate rings of affine spaces are freely generated, so you can still talk about linear maps as free maps between freely generated objects.
Basically: the polynomial rings $k[x_1,\dots,x_n]$ are the freely generated symmetric algebras on the $k$-vector spaces $k^n$, and if your vector space is not of that form you will still get a "polynomial" ring. The elements of the algebra represent the (polynomial) maps from your free affine scheme to $k$, which are the same as ring homomorphisms from $k[x]$ to your $k$-algebra. More generally, polynomial maps are represented as ring homomorphisms in the opposite direction.
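As a one-variable sketch of that last sentence: the squaring map $\mathbb{A}^1 \to \mathbb{A}^1$ is the ring homomorphism going the other way,
$$k[y] \to k[x], \qquad y \mapsto x^2,$$
and composing a polynomial $p(y)$ with squaring is literally substitution: $p(y) \mapsto p(x^2)$.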
Could you give more details? When you say that you still get a "polynomial" ring, are you talking about the symmetric algebra of a vector space / module, or something else?
Are you saying that you view a polynomial map $V \to W$ as a linear map of the form $\mathrm{Sym}(W^*) \to \mathrm{Sym}(V^*)$?
If this is what you're saying, then I'm not very satisfied by this approach: it doesn't allow me to see concretely a map which takes an element $x$ to its square $x^2$. And it doesn't allow me to encode, for instance, the binomial theorem as a commutative diagram.
I'm interested in very, very practical stuff. I want to be able to understand the binomial theorem via sequent calculus and cut elimination in some logic, for instance, and this is not something schemes help me do, as far as I understand them. But I could very well be wrong.
Looking at the maps you talked about at the top, the first one is represented by $\mathrm{Sym}(A^*) \otimes \mathrm{Sym}(A^*) \to \mathrm{Sym}(A^*)$, the lift of the codiagonal $A^* \oplus A^* \to A^*$ to symmetric algebras; the second one by the nonlinear map from $\mathrm{Sym}((A \otimes A)^*)$ to $\mathrm{Sym}(A^*) \otimes \mathrm{Sym}(A^*)$ that takes every generator in $(A \otimes A)^* \cong A^* \otimes A^*$ to the corresponding degree-$(1,1)$ element; the third is the inclusion of $(S^2A)^*$ in $(A \otimes A)^*$, lifted to symmetric algebras. The composite takes every symmetric 2-form on $A$ to the corresponding quadratic form on $A$ as a polynomial in 1-forms.
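For example, take $A = k^2$ with coordinates $x_1, x_2$ (a basis of $A^*$); then the composite sends a symmetric 2-form $\phi$ to
$$\phi \;\mapsto\; \phi_{11}\,x_1^2 + 2\,\phi_{12}\,x_1 x_2 + \phi_{22}\,x_2^2,$$
a degree-2 element of $\mathrm{Sym}(A^*) \cong k[x_1, x_2]$.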
Ok. I'm glad you understand how to encode these maps in terms of symmetric algebras, tensor products and biproducts. But do you know how to do this in an abstract categorical setting? For instance, monoidal categories can be seen as an abstraction of vector spaces together with their tensor product. You can see multilinear maps between vector spaces as a multicategory. What about these maps? Is there a setting in which to talk about them purely categorically? This is really what I'm interested in. Because once you have this, you can maybe see your maps as proofs in some logic.
And I'm saying that I view a polynomial map $V \to W$ as a linear map of the form $\mathrm{Sym}(W^*) \to \mathrm{Sym}(V^*)$.
You can definitely abstract this process considerably, for example you can abstract $\mathrm{Sym}$ to just "a monad", but you will have trouble talking about binomial coefficients at that level of abstraction.
James Deikun said:
I think the key here is that polynomial maps are the natural maps of affine schemes, not of vector spaces.
That's the solution that @Todd Trimble, @Joe Moeller and I are using to deal with problems of this sort. There's a symmetric monoidal functor from $\mathrm{FinVect}_k$ to $\mathrm{AffSch}_k$ which lets you turn any category enriched over $\mathrm{FinVect}_k$ into one enriched over $\mathrm{AffSch}_k$, but the latter allows for more general maps between the hom-spaces: maps that are not just linear but 'polynomial'.
@Jean-Baptiste Vienney wrote:
it doesn't allow me to see concretely a map which takes an element $x$ to its square $x^2$.
It does: indeed that's exactly the sort of thing it does.
(The only time I've really used this point of view in anger was when I was thinking about determinants and characteristic polynomials. It worked pretty well then. I didn't completely succeed in what I was trying to do, but I don't think that was down to any of this.)
John Baez said:
it doesn't allow me to see concretely a map which takes an element $x$ to its square $x^2$.
It does: indeed that's exactly the sort of thing it does.
Yeah; if you take a basis for the dual of your vector space, this is the same as choosing coordinates; then the ring underlying your affine scheme is the ring of polynomial functions of the coordinates. You can then read formulas directly off the backward maps by sending the basis elements of $B^*$ through.
John Baez said:
James Deikun said:
I think the key here is that polynomial maps are the natural maps of affine schemes, not of vector spaces.
That's the solution that Todd Trimble, Joe Moeller and I are using to deal with problems of this sort. There's a symmetric monoidal functor from $\mathrm{FinVect}_k$ to $\mathrm{AffSch}_k$ which lets you turn any category enriched over $\mathrm{FinVect}_k$ into one enriched over $\mathrm{AffSch}_k$, but the latter allows for more general maps between the hom-spaces: maps that are not just linear but 'polynomial'.
Hmm, it looks interesting but I'm very, very cautious. I want to encode things with the extreme level of formalism of proof theory; that's why I'm not satisfied by most of what I read. How is this functor defined? I know how to talk about symmetric powers with sequent calculus or string diagrams as long as we allow scalars in $\mathbb{Q}_{\geq 0}$. This is what I've been doing in this preprint: String diagrams for symmetric powers I: In symmetric monoidal ℚ≥0-linear categories (with string diagrams, not sequent calculus). And you could apply your symmetric monoidal functor to my string diagrams because they form a category enriched over $\mathrm{FinVect}_k$...
Then it could give you a practical language to talk about polynomial maps.
By the way, it took me months to write all these string diagrams. All that because I dream of talking about polynomials in a purely syntactic setting like string diagrams or proofs. If you want, I'm interested in the categorical logic of polynomial maps, in the same way as in the book of our lamented P. Scott and J. Lambek (which doesn't talk about polynomial maps, of course).
(Or just in general, you can compute some 1-form on $B$ explicitly by sending it through the backwards map and you'll get a formula for the result in terms of a polynomial in 1-forms on $A$...)
How is this functor defined?
Given a finite-dimensional vector space $V$ over a field $k$, let $\mathrm{Sym}(V^*)$ be the symmetric algebra on its dual. This gives a contravariant functor from $\mathrm{FinVect}_k$ to $\mathrm{CommAlg}_k$, the category of commutative algebras over $k$. But by definition $\mathrm{AffSch}_k$ is the opposite of $\mathrm{CommAlg}_k$. So we get a functor from $\mathrm{FinVect}_k$ to $\mathrm{AffSch}_k$.
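(On morphisms: a linear map $T \colon V \to W$ gives an algebra homomorphism
$$\mathrm{Sym}(T^*) \colon \mathrm{Sym}(W^*) \to \mathrm{Sym}(V^*),$$
hence a map of affine schemes in the same direction as $T$, since we dualized twice.)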
(I am now making the dependence on the field $k$ explicit, which I wasn't earlier.)
It takes a bit of work to show that this functor is symmetric lax monoidal. (I left out the word 'lax' earlier!)
Brr... That's fine mathematically, but my ultimate question is: can you express the map $x \mapsto x^2$ using string diagrams or anything purely syntactic? That's really what I want to do. Can you write the binomial theorem using only the language of category theory? That's the kind of stuff I'm interested in.
You can use this functor if it helps toward this goal.
My questions are much more difficult than they sound, I think. For instance, we learned how to talk about linear maps purely syntactically only with the invention of symmetric monoidal categories and linear logic. The second one was invented only 40 years ago.
Okay, you have a somewhat different goal than I have. I'm not trying to use string diagrams for anything now; I just need a category where I can talk about things like the $n$th power map from $\mathrm{End}(V)$ (the algebra of endomorphisms of a finite-dimensional vector space $V$) to itself, or the concept of an 'algebraic' representation of this algebra of matrices, etc.
I don't think I understand the rules of the game. Are you just allowing yourself one tensor product?
I'm actually pretty interested in how the lax monoidal structure on this functor works. And I think it's relevant to @Jean-Baptiste Vienney's questions too ...
Todd Trimble said:
I don't think I understand the rules of the game. Are you just allowing yourself one tensor product?
You can use two if you want...
John Baez said:
Okay, you have a somewhat different goal than I have. I'm not trying to use string diagrams for anything now; I just need a category where I can talk about things like the $n$th power map from $\mathrm{End}(V)$ (the algebra of endomorphisms of a finite-dimensional vector space $V$) to itself, or the concept of an 'algebraic' representation of this algebra of matrices, etc.
I would want the power map to be a natural transformation if possible.
I think the representation as "converting symmetric n-forms to degree-n forms" as I outlined above is a natural transformation.
The $n$th power map on $\mathrm{End}(V)$ isn't natural though, just because $\mathrm{End}$ isn't a functor. Maybe it's dinatural?
Brr, I think I'm going to be infinitely annoying. My original question was about this map:
Jean-Baptiste Vienney said:
$$V \to V \oplus V \to V \otimes V \to S^2V, \qquad x \mapsto (x,x) \mapsto x \otimes x \mapsto x^2$$
You see, it looks very simple. A composition of three maps. And I don't use a symbol for the symmetric algebra, only for the second symmetric power. I don't use a symbol for the dual of a vector space. I want to be able to express this as a proof in some sequent calculus. I already know how to talk about the first and the third maps but not how to talk about the second map. I'm sure that such a sequent calculus doesn't exist yet, and not a lot of people here are interested in proof theory, so they will probably not invent it. (I would be fine with string diagrams or a purely categorical framework as well; they are all pretty much the same thing.) So I don't think it's useful to talk a lot about that, unless someone really understands what my goal is and knows an answer. It's like this all the time here: people say what they know even if it doesn't answer the question or they don't understand the spirit of the question. There is a lot of "showing off". But also I'm just being lazy. I should just work on this by myself.
However, if someone can give me a purely categorical framework where the composition of these three maps makes sense, which is one of the three formalisms I'm interested in together with string diagrams and sequent calculus, please do it. If you know it doesn't exist, please say so. But anything other than these two possibilities will probably not be the answer I'm looking for, and it's going to confuse me a lot to try to figure out whether there is something useful for me in the answer.
Pff, thanks everyone for trying to make progress on this question though.
Jean-Baptiste Vienney said:
However, if someone can give me a purely categorical framework where the composition of these three maps makes sense, which is one of the three formalisms I'm interested in together with string diagrams and sequent calculus, please do it.
All these maps are morphisms in the category of finite-dimensional vector spaces and polynomial maps. These are excellent examples of the kind of maps I use a lot in my work with Todd and Joe on Schur functors. We find the approach is crucial for proving the kinds of theorems we want to prove. But of course that doesn't imply that you should take this approach!
Can anyone rewrite these maps without using the symmetric algebra but only symmetric powers? I think both @James Deikun and @John Baez must understand these maps very well.
James Deikun said:
Looking at the maps you talked about at the top, the first one is represented by $\mathrm{Sym}(A^*) \otimes \mathrm{Sym}(A^*) \to \mathrm{Sym}(A^*)$, the lift of the codiagonal $A^* \oplus A^* \to A^*$ to symmetric algebras; the second one by the nonlinear map from $\mathrm{Sym}((A \otimes A)^*)$ to $\mathrm{Sym}(A^*) \otimes \mathrm{Sym}(A^*)$ that takes every generator in $(A \otimes A)^* \cong A^* \otimes A^*$ to the corresponding degree-$(1,1)$ element; the third is the inclusion of $(S^2A)^*$ in $(A \otimes A)^*$, lifted to symmetric algebras. The composite takes every symmetric 2-form on $A$ to the corresponding quadratic form on $A$ as a polynomial in 1-forms.
(I prefer symmetric powers because they allow me to introduce the maps $S^{n+m}V \to S^nV \otimes S^mV$ from graded linear logic, which I can't do using only the symmetric algebra. This line of ideas comes from graded linear logic. And I've been extending it to the richer setting of graded differential linear logic (I've invented/discovered this thing). This is the first thing I did in research. After this, I became interested almost exclusively in symmetric powers (the graded modality $!_n$ in graded linear logic is not necessarily a symmetric power). And I rediscovered a theorem about an algebraic characterization of symmetric powers, but in a purely categorical setting, which is the subject of the preprint I linked above. I say this so that people understand my background, because I'm always annoyed when people don't understand that there is a lot of thought behind my question.)
In order to interpret the duals $(-)^*$ in this answer, I would probably need a compact closed or star-autonomous category. I would prefer it if there were no need to use $(-)^*$.
The dual is a "no" in logic and I always prefer when we don't use the word "no" if we don't need to. I understand this is a very personal point of view.
Also the symmetric algebra introduces the notion of infinity by taking a countable coproduct and I prefer not to use any infinity when it’s not required.
Well, your two conditions are hard, but maybe fulfilling them together will be easier than either one alone. If you are only interested in homogeneous maps, then you can take a category-like setting where there are objects and, for each pair of objects $A, B$ and each natural number $n$, a set of arrows $\hom_n(A, B)$. Each one of these can be realized without using the dual, because it is basically $\hom(S^nA, B)$ if your vector spaces are finite-dimensional or Banach or whatever.
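Schematically (say in characteristic zero), the data would be
$$\hom_n(A, B) := \hom(S^nA,\, B), \qquad \hom_n(B, C) \times \hom_m(A, B) \longrightarrow \hom_{nm}(A, C),$$
since composing a homogeneous map of degree $m$ with one of degree $n$ gives one of degree $nm$.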
This presentation secretly represents the squaring map as the identity map on $S^2A$, actually. This is kind of the problem with it from a linear logic perspective; the bookkeeping to see a square as two uses doesn't really come so naturally.
The squaring map as the identity on $S^2A$ is exactly what I don't like!
Do you have a real squaring with what you wrote before?
It took me a lot of time to realize that the linear logic / differential categories / etc. perspective talks about non-linear maps, but in a kind of fake way, without the non-linear phenomena.
With what I wrote before, if you feed a coordinate of $B$ in to the backward map, you will get a formula in coordinates of a quadratic function out.
To me this counts as "real squaring", but I don't know if you would count it or not.
Hmm, is your feeding realized by a map? Is it really part of the categorical setting or something that you can do only using set-theory and elements?
I guess you would probably not consider it part of the categorical setting; it uses the fact that we are working with $k$-algebras in particular.
You can get a setting where you keep track of everything in the categorical part if you use a generalized multicategory where each input comes with a degree/multiplicity. But it will still represent the squaring map as an identity in the underlying category of vector spaces. Vector spaces just don't understand the idea of "using the same variable twice", so polynomial maps have to be encoded in some way or other.
(Whether it's using a linear map from the space of polynomials to the space of polynomials, like above, which sort of moves the nonlinearity below the reach of vector spaces, or whether it's sometimes using an interpretation of $A \otimes A$ as containing only the diagonal elements $x \otimes x$, which moves it above the reach of vector spaces.)
I think a syntactic approach would be simpler: trying to write a kind of linear logic where you can square, and understanding the categorical semantics later. The weird part is clearly the bilinear map $(x,y) \mapsto x \otimes y$. Or another solution is to introduce a rule which makes the diagonal $x \mapsto x \otimes x$ available. Or a contraction rule with a graded modality. This last one is maybe the best one. It would encode directly the power map (on homogeneous polynomials of degree $n$, for instance).
The last one is clearly in the spirit of the structural rules of linear logic, I think. But it introduces a very non-linear element.
With this you could maybe understand, for instance, the binomial theorem through cut elimination, which would be very cool.
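To make the last option concrete, I'm thinking of something like the contraction rule of graded linear logic,
$$\frac{\Gamma,\; !_nA,\; !_mA \;\vdash\; B}{\Gamma,\; !_{n+m}A \;\vdash\; B}$$
interpreted with $!_n = S^n$, so that contracting corresponds to the map $S^{n+m}A \to S^nA \otimes S^mA$.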
I'm glad I talked with you, because I had never thought before about these two other ways of encoding the square or power operations in a linear logic way.
I'm so happy about this:
This is a beautiful line
:) :) :)
A few years ago I got the idea of combining differential linear logic and graded linear logic, and now here is a new element to add: "exponentiation rules" :) :) :)