$\mathsf{Set}$ has an initial object, i.e. the empty set $\emptyset$, and has a terminal object, i.e. the set with one element $\{*\}$.
The functor $F\colon \mathsf{Set} \to \mathsf{Vect}$
sends the initial object to the vector space with one vector, i.e. $F(\emptyset) \cong \{0\}$.
However, $F$ sends the terminal object to the vector space $k$, i.e. $F(\{*\}) \cong k$.
I recently learned that $k$ is not the terminal object of $\mathsf{Vect}$ (something I should have known, but never really thought about).
Both $\mathsf{Set}$ and $\mathsf{Vect}$ are bicomplete, but apparently the functor $F$ does not preserve limits (otherwise $k$ would be terminal in $\mathsf{Vect}$) :thinking:
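(A quick check in symbols, writing $F \colon \mathsf{Set} \to \mathsf{Vect}$ for the free vector space functor and $k$ for the ground field, names assumed here for concreteness:
$$F(\emptyset) \cong \{0\}, \qquad F(\{*\}) \cong k, \qquad \text{yet the terminal object of } \mathsf{Vect} \text{ is } \{0\},$$
since every vector space $V$ has exactly one linear map to $\{0\}$ but in general more than one linear map to $k$.)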
I think the pushout of a span
$V \leftarrow X \rightarrow W$
in $\mathsf{Vect}$ is a cospan with the usual tensor product of $V$ and $W$ as apex, i.e. $V \otimes W$.
If so, what would be the meaning of the pushout of
$V \leftarrow 0 \rightarrow W$? It should be something like $V \otimes_0 W$,
but I'm struggling to interpret what that would mean.
Can someone help? :pray:
Because there is a single morphism $0 \to V$ for any vector space $V$ (the trivial one), whatever you put in the corner with two well-typed morphisms will make the square commute. In other words, this span does not put any restriction on the cospan (except for the endpoints), which means the colimit of this span is simply a universal cospan for $V$ and $W$, i.e. a coproduct (direct sum of vector spaces).
Conclusion:
Pushouts in Vect are not tensor products; they are (quotients of) direct sums, as Ralph is saying.
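(A sketch of the general formula, under the usual conventions: the pushout of a span $V \xleftarrow{f} X \xrightarrow{g} W$ in $\mathsf{Vect}$ is the quotient of the direct sum that identifies the two images of $X$,
$$V \sqcup_X W \;\cong\; (V \oplus W)\,/\,\{(f(x), -g(x)) : x \in X\},$$
and when $X = 0$ there are no relations, so the pushout is just $V \oplus W$.)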
Thank you Ralph and thank you Simon :pray:
I see how the pushout of a span with $0$ as apex is the direct sum of vector spaces.
I got it in my head that composition of cospans in $\mathsf{Vect}$ corresponded to tensor product of bimodules :thinking:
Now I need to figure out where I went astray :thinking:
Eric Forgy said:
Both $\mathsf{Set}$ and $\mathsf{Vect}$ are bicomplete, but apparently the functor $F$ does not preserve limits (otherwise $k$ would be terminal in $\mathsf{Vect}$) :thinking:
The functor $F\colon \mathsf{Set} \to \mathsf{Vect}$ is a left adjoint: it's left adjoint to the forgetful functor $U\colon \mathsf{Vect} \to \mathsf{Set}$.
Left adjoints preserve all colimits. So in particular $F$ preserves the initial object, and coproducts, and pushouts.
But it's very rare for them to preserve limits. And indeed $F$ does not preserve the terminal object, or products, or pullbacks.
Right adjoints, on the other hand, preserve limits. So the forgetful functor $U$ works the other way around: it preserves terminal objects, and products, and pullbacks, but not initial objects, or coproducts, or pushouts.
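(A concrete illustration, with $U$ again denoting the forgetful functor: $U$ preserves products, since the underlying set of $V \times W$ is the product of the underlying sets, but it does not preserve coproducts,
$$U(V \oplus W) \;=\; U(V) \times U(W) \;\neq\; U(V) \sqcup U(W),$$
and it does not preserve the initial object either, since $U(\{0\})$ is a one-element set rather than the empty set.)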
All this stuff should become second nature as you get used to category theory.
Eric Forgy said:
I think the pushout of a span
$V \leftarrow X \rightarrow W$ in $\mathsf{Vect}$ is a cospan with the usual tensor product of $V$ and $W$ as apex, i.e. $V \otimes W$.
You've already been informed of this, but: no.
Tensor products in $\mathsf{Vect}$ are not pushouts in $\mathsf{Vect}$; they are not any sort of limit or colimit in $\mathsf{Vect}$. They are something new.
what would be the meaning of the pushout of
$V \leftarrow 0 \rightarrow W$?
It's just $V \oplus W$. Pushing out over the initial object is the same as taking the coproduct, always!
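(The reasoning, in case it helps: a cocone on the span $V \leftarrow 0 \rightarrow W$ is just a pair of maps $V \to Y$ and $W \to Y$, because the two composites out of the initial object $0$ automatically agree, there being only one map $0 \to Y$. So the universal cocone is the universal pair of maps out of $V$ and $W$, i.e.
$$V \sqcup_0 W \;\cong\; V + W,$$
the coproduct, which in $\mathsf{Vect}$ is $V \oplus W$.)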
Eric Forgy said:
I got it in my head that composition of cospans in $\mathsf{Vect}$ corresponded to tensor product of bimodules
No. I told you all this wonderful stuff about how composing spans in a cartesian monoidal category $C$ is the same as tensor product of bimodules in the cocartesian monoidal category $C^{\mathrm{op}}$, and now you're applying that to $(\mathsf{Vect}, \otimes)$, where the tensor product is not cocartesian. So you're getting nonsense.
(Remember, in $C^{\mathrm{op}}$ I'm using $\otimes$ to refer to the product in $C$, which is the coproduct in $C^{\mathrm{op}}$.)
The only way to get out of the mess you got yourself in is to back out - straight back.
Tensor product of vector spaces or modules of algebras is not any sort of pushout or pullback. I told you the way of thinking that actually works, which is to treat spans in $\mathsf{Set}$ as bimodules in $\mathsf{Set}^{\mathrm{op}}$ and then map those bimodules to bimodules in $\mathsf{Vect}$ via the monoidal functor $\mathsf{Set}^{\mathrm{op}} \to \mathsf{Vect}$, no longer treating them as spans or cospans.
I guess if you tell someone how to cook a souffle they have to ruin a few to see why you described it so carefully.
John Baez said:
The only way to get out of the mess you got yourself in is to back out - straight back.
I keep saying this to my children but they won't listen either.
I made the same mistake Eric just made, once upon a time. The interplay between products and coproducts and tensor products is confusing at first.
I was joking and not actually referring to his understanding. The concepts are certainly confusing for me.
Yup. I tease Eric rather mercilessly, but I start defending him when other people join in teasing him.
According to Wikipedia:
$\mathsf{Vect}_K$, the category of vector spaces over a given field, can be made cocartesian monoidal with the "tensor product" given by the direct sum of vector spaces and the trivial vector space as the unit.
$\mathsf{Vect}$ is a weird category where finite products and coproducts coincide, i.e. $V \times W \cong V \oplus W$.
It has pushouts and an initial object, so it is finitely cocomplete. It has pullbacks and a terminal object (which is the same as the initial object $0$), so it is also finitely complete.
The tensor product is a quotient of the product, which I believe means it is also a quotient of the direct sum, so I think it all actually works out.
I think it is true that the pushout $V \otimes_0 W \cong V \oplus W$,
and the pushout $V \otimes W$
is a quotient of $V \oplus W$ in $\mathsf{Vect}$.
John Baez said:
Eric Forgy said:
I got it in my head that composition of cospans in $\mathsf{Vect}$ corresponded to tensor product of bimodules
No. I told you all this wonderful stuff about how composing spans in a cartesian monoidal category $C$ is the same as tensor product of bimodules in the cocartesian monoidal category $C^{\mathrm{op}}$, and now you're applying that to $(\mathsf{Vect}, \otimes)$, where the tensor product is not cocartesian. So you're getting nonsense.
Right. You told me all this wonderful stuff :raised_hands:
But since $(\mathsf{Vect}, \oplus)$ is cocartesian with $\oplus$ as pushout, then $(\mathsf{Vect}, \oplus)^{\mathrm{op}}$ is cartesian with $\oplus$ as pullback, so I thought we have composition of cospans in $(\mathsf{Vect}, \oplus)$ given by tensor product of bimodules.
I think it is true that the pushout $V \otimes_0 W \cong V \oplus W$
I don't know what $V \otimes_0 W$ means - what do you mean by that?
and the pushout $V \otimes W$
is a quotient of $V \oplus W$ in $\mathsf{Vect}$
No:
1) I told you earlier tonight that the tensor product $V \otimes W$ is not any sort of pushout or pullback involving $V$ and $W$. Any thoughts along these lines are doomed. That's why I said "back straight out".
2) In particular, taking $\dim V = \dim W = 4$, $V \otimes W$ has dimension 16 so there's no way it can be a quotient of $V \oplus W$, which is a lower-dimensional vector space. So, it's not true that $V \otimes W$ is a quotient of $V \oplus W$ in $\mathsf{Vect}$. That's just not how tensor products work.
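(The dimension count spelled out:
$$\dim(V \otimes W) = \dim V \cdot \dim W, \qquad \dim(V \oplus W) = \dim V + \dim W,$$
so with $\dim V = \dim W = 4$ we get $16$ versus $8$, and a quotient of a vector space can never have dimension larger than the space being quotiented.)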
Eric Forgy said:
... since $(\mathsf{Vect}, \oplus)$ is cocartesian with $\oplus$ as pushout, then $(\mathsf{Vect}, \oplus)^{\mathrm{op}}$ is cartesian with $\oplus$ as pullback, so I thought we have composition of cospans in $(\mathsf{Vect}, \oplus)$ given by tensor product of bimodules
Yes, that's true.
But note, none of this has anything to do with the usual tensor product in $\mathsf{Vect}$.
So this is false:
the pushout $V \otimes W$
is a quotient of $V \oplus W$ in $\mathsf{Vect}$
and this is just meaningless by the standards of normal mathematicians:
I think it is true that the pushout $V \otimes_0 W \cong V \oplus W$
since none of us knows what you mean by $V \otimes_0 W$.
In both cases you're trying to connect the tensor product to colimits in a way that just doesn't have any chance of working.
If you just try examples you'll quickly be disabused of such notions.
Thanks John :sweat_smile:
For sure, I've gone astray somewhere. I know that. It is just a little difficult for me to pinpoint where.
In my first comment, I said what $V \otimes_0 W$ was, but not very clearly. It is the object obtained from pushing out the span $V \leftarrow 0 \rightarrow W$, which we know is the coproduct. I used the tensor symbol because of the Wikipedia quote that said it was
"tensor product" given by direct sum
but
If $\otimes = \oplus$,
then $V \otimes_0 W = V \oplus W$.
I know that.
So the pushout of the span $V \leftarrow 0 \rightarrow W$ should be $V \oplus W$.
There are only so many operations that can happen in $\mathsf{Vect}$, so I would like to better understand how the tensor product relates to all this. I need to get to a point where I can understand a relationship (if it exists) between spans and bimodules in a way that involves the actual tensor product, or that beautiful knowledge I painfully acquired really isn't helpful to me, which would be a shame.
Eric Forgy said:
In my first comment, I said what $V \otimes_0 W$ was, but not very clearly. It is the object obtained from pushing out the span $V \leftarrow 0 \rightarrow W$, which we know is the coproduct
Yeah, the object obtained from pushing out the span $V \leftarrow 0 \rightarrow W$ is the coproduct of $V$ and $W$.
In $\mathsf{Vect}$ the coproduct is the direct sum, usually denoted $V \oplus W$. So call it $V \oplus W$ if you actually want people to understand you.
Or call it $V + W$ if you want to show off the fact that you know it's the coproduct.
Or, if you want to show off the fact that it's "the object obtained from pushing out the span $V \leftarrow 0 \rightarrow W$", you can call it $V +_0 W$. Only a crazed category theorist would ever do this, but we'd still understand you.
The notation $V \otimes_0 W$ is basically just undefined crap. (Actually I can make up a meaning for it, but it's not at all useful here.)
I used the tensor symbol because of the Wikipedia quote that said it was
"tensor product" given by direct sum
Yeah, but they put "tensor product" in quotes because they meant it's just a monoidal structure on $\mathsf{Vect}$ - the quotes basically mean "haha, just kidding, don't write it as $\otimes$ unless you want people to think you're confused".
Btw, I have seen a few places that describe tensor product as pushout and it made sense to me, but maybe it is in a different context? For example:
https://math.stackexchange.com/questions/1548206/prove-that-tensor-product-is-pushout
[Edit: Yeah. It looks like I wanted $\mathsf{CommAlg}$ instead of $\mathsf{Vect}$ I guess :face_palm: ]
There they are saying that if $A$ and $B$ are commutative algebras over some field $k$, the commutative algebra $A \otimes_k B$ is the pushout in $\mathsf{CommAlg}_k$ of a certain span, namely $A \leftarrow k \rightarrow B$.
That's true. But this is a pushout in $\mathsf{CommAlg}_k$, not in $\mathsf{Vect}$, so it's a completely different story.
By the way, for any field $k$ the monoidal category $(\mathsf{CommAlg}_k, \otimes_k)$ is cocartesian, so some of the general stuff I said about cospans in cocartesian categories applies here. But $(\mathsf{Vect}, \otimes)$ is not cocartesian, and that's what you were working with.
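(For reference, the statement from that math.stackexchange question in symbols, with $\mathsf{CommAlg}_k$ denoting commutative $k$-algebras: the square
$$\begin{array}{ccc} k & \longrightarrow & A \\ \downarrow & & \downarrow \\ B & \longrightarrow & A \otimes_k B \end{array}$$
with the unit maps on the top and left, and $a \mapsto a \otimes 1$, $b \mapsto 1 \otimes b$ on the right and bottom, is a pushout square in $\mathsf{CommAlg}_k$. The same square, read in $\mathsf{Vect}$, is not a pushout.)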
I think I see where I confused myself.
Since I got one thing right, I'll copy it here:
Among other things, this means that every vector space is a monoid object, and I know monoid objects in $(\mathsf{Vect}, \otimes)$ are algebras. However, this is a monoid object in $(\mathsf{Vect}, \oplus)$. It wasn't a huge step in my mind to get from there to commutative algebras, but I went astray (surprise!).
Eric Forgy said:
... every vector space is a monoid object and I know monoid objects in $(\mathsf{Vect}, \otimes)$ are algebras. However, this is a monoid object in $(\mathsf{Vect}, \oplus)$.
It's good to be precise to avoid confusing oneself:
Monoid objects in $(\mathsf{Vect}, \otimes)$ are algebras.
Monoid objects in $(\mathsf{Vect}, \oplus)$ are something else, and they turn out to be just vector spaces.
The monoidal structure matters!!!
If a random guy walks up to you and says "monoid object in $\mathsf{Vect}$", they almost always mean "monoid object in $(\mathsf{Vect}, \otimes)$". But you can also ask them to say which monoidal structure they're talking about.
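(Spelled out, under the standard definitions: a monoid object in $(\mathsf{Vect}, \otimes, k)$ is a vector space $A$ equipped with linear maps $m \colon A \otimes A \to A$ and $u \colon k \to A$ satisfying associativity and unit laws, i.e. an algebra. A monoid object in $(\mathsf{Vect}, \oplus, 0)$ is a vector space $V$ with maps $V \oplus V \to V$ and $0 \to V$ satisfying the same laws, and the unit laws force the multiplication to be the codiagonal
$$\nabla \colon V \oplus V \to V, \qquad (v, w) \mapsto v + w,$$
so every vector space is a monoid object there in exactly one way.)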
For future reference, I think another source of confusion for me was an example of bicategory on the nLab.
If $(C, \otimes)$ is a monoidal category, there is a corresponding bicategory with one object, with the objects of $C$ as 1-morphisms and the morphisms of $C$ as 2-morphisms.
For 1-morphisms $V$ and $W$, the composition is $V \otimes W$. I want to relate this to spans / bimodules somehow :thinking:
If you take a bicategory, pick one object, and take all of its endomorphism 1-cells, and all 2-cells between those, it's the same thing as a monoidal category - this is related to the fact you gave about monoidal categories giving one-object bicategories. So what do you get when you do this in the bicategory of bimodules?
Thanks Joe. This is precisely the situation I'm interested in. Super cool :blush:
In the case of the bicategory of bimodules, we'd pick a ring $R$, and morphisms would be $R$-bimodules and 2-morphisms would be $R$-bimodule homomorphisms.
Composition of morphisms would be the tensor product $M \otimes_R N$.
If you choose a field $k$ (which is also a ring), then a $k$-bimodule is a vector space and composition is the usual tensor product $\otimes_k$.
Right, so in the case where we start with the bicategory
[rings, bimodules, bimodule homomorphisms]
and choose one object that happens to be a field $k$, the monoidal category we get from Joe's construction is our old friend $(\mathsf{Vect}_k, \otimes)$.
Oh, actually that's not true!
The problem is that a $k$-bimodule is not the same as a vector space over $k$.
Vector spaces over $k$ give some examples of $k$-bimodules, but not all.
The point is that whenever our ring $R$ is commutative, any left $R$-module becomes an $R$-bimodule in a standard way, but we don't get all $R$-bimodules this way.
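(In formulas, assuming the usual conventions: if $R$ is commutative and $M$ is a left $R$-module, setting $m \cdot r := r \cdot m$ defines a right action, and the compatibility law $(r \cdot m) \cdot r' = r \cdot (m \cdot r')$ holds precisely because $r' r = r r'$.)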
A fun example is: figure out how to make the abelian group $\mathbb{C}$ into a $\mathbb{C}$-bimodule in a nonstandard way.
So $\mathsf{Vect}_k$ is usually just a sub-monoidal category of Joe's monoidal category, when the ring we pick is a field $k$.
And Joe's construction is even more interesting when the ring we pick is noncommutative.
But I don't think any of this helps Eric very much, except to give him yet another workout in categorical algebra. :muscle:
I feel so disappointed. My hero seems a lot less special now :sweat_smile:
The point is that $\mathsf{Vect}_k$ is a category of modules of a ring, not a category of bimodules of a ring.
I thought of my counterexample pretty quickly because I've thought about doing quantum mechanics with bimodules instead of modules...
A hunch suggests that the distinction might come down to zero objects. If a $\mathbb{C}$-bimodule has a zero object, it is a vector space?
A $\mathbb{C}$-bimodule has scalar multiplication, so if it also has addition, it is a vector space.
I think addition requires a zero object.
https://ncatlab.org/nlab/show/additive+category
Eric Forgy said:
A hunch suggests that the distinction might come down to zero objects. If a $\mathbb{C}$-bimodule has a zero object, it is a vector space?
It doesn't make sense to ask if a $\mathbb{C}$-bimodule has a zero object. A category can have a zero object, not a bimodule.
I need an emoticon for "You know what I mean" :sweat_smile:
All I want to say is that if a $\mathbb{C}$-bimodule has addition, it should also be a vector space.
Well, that's at least meaningful, but it's false. All bimodules of rings have addition, by definition.
Is everything I thought I knew about vector spaces wrong? I thought scalar multiplication that distributes over addition is enough to have a vector space :sweat_smile:
(scalar -> field)
Yes, that's true.
A vector space is a left $\mathbb{C}$-module.
Every $\mathbb{C}$-bimodule gives a vector space by forgetting the right $\mathbb{C}$-module structure and only remembering the left $\mathbb{C}$-module structure.
What I'm saying is something else: every left $\mathbb{C}$-module gives a $\mathbb{C}$-bimodule by defining the right module structure to equal the left module structure. But, not every $\mathbb{C}$-bimodule arises this way.
So: there's a forgetful functor from $\mathbb{C}$-bimodules to vector spaces, and also a functor going back the other way, but these do not form an equivalence.
I already said part of this:
John Baez said:
The point is that whenever our ring $R$ is commutative, any left $R$-module becomes an $R$-bimodule in a standard way, but we don't get all $R$-bimodules this way.
"Having zero objects" or "having addition" has nothing to do with this stuff.
Ok
You would understand this stuff better if you found two different ways to make $\mathbb{C}$, with its usual addition, into a $\mathbb{C}$-bimodule.
There's the obvious way and the less obvious way.
John Baez said:
You would understand this stuff better if you found two different ways to make $\mathbb{C}$, with its usual addition, into a $\mathbb{C}$-bimodule.
Yeah. I was thinking about it :thinking:
Okay. It doesn't make much sense to talk about this until after you do that....
It's like talking about the difference between lizards and salamanders before you've ever seen them.
The obvious way, I think, is just that $\mathbb{C}$ is already a $\mathbb{C}$-bimodule with action given by addition.
Huh?
Well, the action given by multiplication seems too obvious, and you stressed it was an abelian group :sweat_smile:
I think we're in deep trouble here.
I think you don't know what a bimodule of a ring is.
So I should just ask you "what's a bimodule of a ring?"
I'll try to answer without looking, which is dangerous :sweat_smile:
Recalling the puzzles, we start with monoid objects $R$ and $S$ in a monoidal category $C$. An $R$-$S$-bimodule is an object $M$ together with compatible left and right actions.
That's pretty good, at least if you know what "compatible" means.
But we were doing the classic case: bimodules of rings.
What's the monoidal category that gives this classic case?
(When you talk to algebraists and say "bimodule", you're referring to this classic case.)
I'd need to know what monoidal category has rings as monoid objects. Without looking, my guess would be $(\mathsf{Ab}, \otimes)$, but I've never worked this out. "Working it out" would mean checking that a bunch of intuitive diagrams commute, which I can't remember off the top of my head, but I'm fairly certain they work out here.
Good, yes, it's $(\mathsf{Ab}, \otimes)$. So you need to know what the tensor product of abelian groups is, and then check you get a monoidal category this way, and check that monoid objects in here are exactly the same as rings, etc.
So a bimodule in this context is an abelian group $A$ with a left action of a ring $R$ and a right action of a ring $S$, obeying the "compatibility" law (which is a version of the associative law).
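Spelled out, under the standard definition: a left $R$-action $R \times A \to A$ is additive in each variable, with $1 \cdot a = a$ and $(r r') \cdot a = r \cdot (r' \cdot a)$; a right $S$-action is defined dually; and the compatibility law says
$$(r \cdot a) \cdot s = r \cdot (a \cdot s) \qquad \text{for all } r \in R,\ a \in A,\ s \in S.$$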
So my puzzle:
You would understand this stuff better if you found two different ways to make $\mathbb{C}$, with its usual addition, into a $\mathbb{C}$-bimodule.
says to treat $\mathbb{C}$ as an abelian group in the usual way, and then find two different ways to make it into a $\mathbb{C}$-bimodule, where these two $\mathbb{C}$'s are being treated as a ring in the usual way.
So start by telling me one way to do this. Pick the way that's most likely to work, since you were saying some crazy stuff before.
$\mathbb{C}$ is a monoid object in $(\mathsf{Ab}, \otimes)$.
The most obvious way to make $\mathbb{C}$ into a $\mathbb{C}$-bimodule is to define the left and right actions to be multiplication, i.e. $\mathbb{C}$ is already a $\mathbb{C}$-bimodule just like any ring $R$ is an $R$-bimodule. In this case, compatibility is simply associativity.
Yes! This is what I meant by the "obvious" way to make $\mathbb{C}$ into a $\mathbb{C}$-bimodule, since as you mention, this way works for any ring.
So the interesting part of my puzzle was to find another way to make $\mathbb{C}$ into a $\mathbb{C}$-bimodule, which is not isomorphic to the "obvious" one.
As a hint: this other way would not work for $\mathbb{R}$; we really use the complex numbers here.
I'm drawing a blank :thinking:
Well, think about what the complex numbers have, as a ring, that the real numbers don't. That seems like a pretty robust approach!
Note I say "as a ring" since this is a question about bimodules of rings. So all sorts of stuff connected to analysis is irrelevant.
Complex conjugates?
Hmm, so try using those.
I'm just throwing darts, but maybe we could define left and right actions by multiplication by the complex conjugates?
You're in the general vicinity of the dartboard. :darts:
You get three darts right? So what can you try?
Yes, Eric's suggestion was a bit too vague for me to tell if it's right or not... I think Reid was pointing out there are 3 things to try here.
I meant, left action $z \cdot a = \bar{z}\,a$
and right action $a \cdot w = a\,\bar{w}$
Sorry. That's a perfectly fine way to make $\mathbb{C}$ into a bimodule, but it's isomorphic to the usual way! See why?
Okay, I'll say why. Your way to make $\mathbb{C}$ into a bimodule is isomorphic to the usual one, where the left action is $z \cdot a = z a$
and the right action is $a \cdot w = a w$.
The isomorphism is complex conjugation: $a \mapsto \bar{a}$!
To see this, notice that if we apply your left action and then the isomorphism: $a \mapsto \bar{z}\,a \mapsto \overline{\bar{z}\,a} = z\,\bar{a}$,
it's the same as applying the isomorphism to $a$ and then applying the usual left action: $a \mapsto \bar{a} \mapsto z\,\bar{a}$.
The same is true for the right action.
So, to get a really new bimodule structure on $\mathbb{C}$, we should do something like this: use the usual left action $z \cdot a = z a$,
but tweak the right action using complex conjugation: $a \cdot w = a\,\bar{w}$.
This is not isomorphic to the usual bimodule structure: the attempted isomorphism $a \mapsto \bar{a}$ works for the right action but not the left, and the attempted isomorphism $a \mapsto a$ works for the left action but not the right.
We could also have tweaked the left action using complex conjugation, and used the usual right action.
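(To summarize in formulas, with $\bar{w}$ denoting complex conjugation: the "obvious" bimodule structure on $\mathbb{C}$ is $z \cdot a = z a$, $a \cdot w = a w$, while the genuinely new one keeps the left action and twists the right one,
$$z \cdot a = z a, \qquad a \cdot w = a \bar{w},$$
and the mirror-image version twists the left action instead: $z \cdot a = \bar{z} a$, $a \cdot w = a w$.)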
For what it's worth, I edited that misleading Wikipedia sentence, so it doesn't say
$\mathsf{Vect}_K$ can be made cocartesian monoidal with the "tensor product" given by the direct sum
anymore, but
$\mathsf{Vect}_K$ can be made cocartesian monoidal with the monoidal product given by the direct sum.
It was clearly confusing to at least one person with a PhD in a quantitative science, so ...