You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Given two algebras $A$ and $B$, there's this nice fact that the category $\mathrm{Mod}(A \otimes B)$ is equivalent to the category $\mathrm{Mod}(A) \boxtimes \mathrm{Mod}(B)$, the Deligne-Kelly tensor product of the module categories.
How does one do homological algebra in this category $\mathrm{Mod}(A) \boxtimes \mathrm{Mod}(B)$?
Iirc, every object in $\mathrm{Mod}(A) \boxtimes \mathrm{Mod}(B)$ is a colimit of objects of the form $M \boxtimes N$ for $M$ an $A$-module and $N$ a $B$-module, and my intuition is that we can get a projective resolution for one of these by finding an $A$-projective resolution $P_\bullet \to M$ and a $B$-projective resolution $Q_\bullet \to N$ and then applying $\boxtimes$ to these in the sort of "obvious" way, using terms that look like $\bigoplus_{i+j=n} P_i \boxtimes Q_j$...
In particular, if $A$ and $B$ are algebras over a field $k$, so that $\mathrm{Mod}(A) \boxtimes \mathrm{Mod}(B) \simeq \mathrm{Mod}(A \otimes_k B)$, this should give us a Künneth formula for the $\mathrm{Ext}$-groups as computed in $\mathrm{Mod}(A \otimes_k B)$.
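Spelled out, the hoped-for statement looks something like this (a sketch, with notation chosen here; some finiteness hypotheses are presumably needed, as the thread discusses later):

```latex
% Take projective resolutions P_\bullet \to M in Mod(A) and
% Q_\bullet \to N in Mod(B).  The total complex of the external tensor,
(P \boxtimes Q)_n \;=\; \bigoplus_{i+j=n} P_i \boxtimes Q_j,
% should be a projective resolution of M \boxtimes N, and, over a field k
% (so there are no Tor correction terms), one expects a Künneth formula
\operatorname{Ext}^n_{A \otimes_k B}\big(M \otimes_k N,\; M' \otimes_k N'\big)
\;\cong\; \bigoplus_{p+q=n}
\operatorname{Ext}^p_A(M, M') \otimes_k \operatorname{Ext}^q_B(N, N').
```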
Is my intuition right? Is this written down somewhere? There might also be a higher powered way to make this "obvious", and I would also be interested in that.
Thanks in advance! ^_^
I like your question. I don't know the answer, but I fear bringing in the Deligne-Kelly tensor product may not increase the number of good answers you get. You could instead ask something like:
Given algebras $A$ and $B$ over a field $k$, with modules $M$ and $N$, $M \otimes_k N$ is naturally a module over $A \otimes_k B$. Is there a way to compute the cohomology of $M \otimes_k N$ in terms of the cohomology of $M$ and $N$, or obtain a projective resolution of $M \otimes_k N$ in terms of resolutions for $M$ and $N$?
And this sounds like something algebraists will have thought about.
I am a bit confused by the fact that the same symbol $\boxtimes$ is used for a monoidal structure on module categories, and for what seems to be a monoidal-style operation on objects that, however, live in different categories.
Could it help that there is a coproduct diagram of $k$-algebras (are $A$, $B$ algebras over the same base ring?)? This gives functors relating the module categories over $A$, $B$, and their tensor product.
fosco said:
I am a bit confused by the fact that the same symbol $\boxtimes$ is used for a monoidal structure on module categories, and for what seems to be a monoidal-style operation on objects that, however, live in different categories.
Given an object $M$ in $\mathrm{Mod}(A)$ and an object $N$ in $\mathrm{Mod}(B)$ you can get an object in the Deligne-Kelly tensor product of these categories, $\mathrm{Mod}(A) \boxtimes \mathrm{Mod}(B)$, and Chris is following tradition in calling this object $M \boxtimes N$.
That's how the Deligne-Kelly tensor product of finitely cocomplete $k$-linear categories works: given two such categories, say $\mathcal{C}$ and $\mathcal{D}$, and objects $c \in \mathcal{C}$, $d \in \mathcal{D}$, you get an object $c \boxtimes d \in \mathcal{C} \boxtimes \mathcal{D}$.
Yes, we are overloading the symbol here, but in a way typical of tensor products: given vectors $v \in V$ and $w \in W$ in vector spaces $V, W$, we get a vector $v \otimes w \in V \otimes W$.
But anyway, this is an example of why I recommended that Chris not bring up the Deligne-Kelly tensor product: their actual question here can be asked without mentioning it.
I think there's something fancy you can do, which also agrees with my intuitive answer -- Modules $M$ and $N$ over $A$ and $B$ can be thought of as sheaves on $\mathrm{Spec}(A)$ and $\mathrm{Spec}(B)$ (actually the $A$ and $B$ I'm interested in are noncommutative, so we're pushing this geometric intuition even further than normal algebraic geometry does).
With this in mind, the module $M \otimes_k N$ is naturally viewed as a sheaf on $\mathrm{Spec}(A) \times \mathrm{Spec}(B)$, indeed it's basically the external tensor product $M \boxtimes N$ if you think about it geometrically, and in this case you should really expect a Künneth formula to hold!
This points in John's direction of not worrying too much about the tensor product of categories.
Actually, yeah, this idea shouldn't be so hard to formalize algebraically. Thinking about these "pullbacks of sheaves" (read: induction of scalars along the inclusions $A,B \to A \otimes B$) was useful, thank you! That should work, formally the same, in the noncommutative setting too. Then we're working with a usual tensor product of $(A \otimes B)$-modules and the result should fall out, rather than needing to think about this weird "external" tensor product
Is it really always true that $\mathrm{Hom}_{A \otimes B}(M \otimes N,\, M' \otimes N') \cong \mathrm{Hom}_A(M, M') \otimes \mathrm{Hom}_B(N, N')$? If so, I feel like I should have known it, but I don't remember learning it.
I guess I haven't checked myself, but if you believe that $\mathrm{Mod}(A) \boxtimes \mathrm{Mod}(B) \simeq \mathrm{Mod}(A \otimes B)$ (which is mentioned in many places, here's the first I found) then this follows from the explicit descriptions of the $\boxtimes$ product
It's also fairly believable, since if you have a map $M \otimes N \to M' \otimes N'$ which is $A \otimes B$-linear, note that $B$ acts trivially on the $M$ piece but (presumably) nontrivially on the $N$ piece, so it feels difficult to come up with an $A \otimes B$-linear map that somehow "mixes" the factors. That's not a proof, obviously, but it's some evidence
Is it obvious that it's true even when $A = B = k$? When the vector spaces are finite-dimensional I can count dimensions and see that they're the same, and so "obviously" the canonical map must be an isomorphism, but it's not obvious to me how that generalizes to infinite dimensions. Or is it only claimed in the finite-dimensional case?
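For the finite-dimensional case, the dimension count can even be made into a basis count (a toy check, not from the thread; the dimensions are arbitrary choices): on basis elements the canonical map sends $E_{ij} \otimes E_{kl}$ to the Kronecker product $E_{ij} \otimes_{\mathrm{Kron}} E_{kl}$, which is again an elementary matrix, and the induced map on basis indices is a bijection, so the canonical map is an isomorphism.

```python
# Toy check: the canonical map Hom(V,V') ⊗ Hom(W,W') -> Hom(V⊗W, V'⊗W')
# sends the basis element E_ij ⊗ E_kl to the Kronecker product of the
# elementary matrices E_ij (size b x a) and E_kl (size d x c), which is
# the elementary matrix E_{i*d+k, j*c+l} of size bd x ac.  We verify the
# induced map on basis indices is a bijection.
a, b = 2, 3   # dim V, dim V'  (arbitrary choices for this sketch)
c, d = 4, 5   # dim W, dim W'

images = {(i * d + k, j * c + l)
          for i in range(b) for j in range(a)
          for k in range(d) for l in range(c)}

all_indices = {(r, s) for r in range(b * d) for s in range(a * c)}
assert images == all_indices  # basis maps bijectively onto basis
print("canonical map is an isomorphism in dimensions", (a, b, c, d))
```

Since a bijection on bases extends to a linear isomorphism, this confirms the count $(ab)(cd) = (bd)(ac)$ in a way that also identifies where each basis element goes.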
Yeah, you probably want $A$ and $B$ to be finite dimensional as vector spaces over $k$. I don't even know how to make sense of the $\boxtimes$-product (computationally) in other settings. See, for instance, the relevant nlab article, which also cites a paper by Etingof and Ostrik where they explicitly ask for a finite dimensional assumption. Iirc EGNO's book on tensor categories has a version for coalgebras, though, which I guess makes sense since I remember @John Baez telling me at one point that coalgebras somehow behave more finitely than algebras, for reasons I still haven't internalized
The ultimate reason is that when we have a coalgebra $C$ and we apply the comultiplication $\Delta$ to any $x \in C$, the result is a finite linear combination of elements $a_i \otimes b_i$. Somehow one can leverage this into a proof of the Fundamental Theorem of Coalgebras, which says that any element of a coalgebra lies in some finite-dimensional sub-coalgebra. Using the fact that the sum of two sub-coalgebras is a sub-coalgebra, it follows that every coalgebra is a filtered colimit of finite-dimensional sub-coalgebras.
I'm trying to figure out the proof of this Fundamental Theorem. I've looked at it, and my eyes rolled over the Sweedler notation and it seemed sort of obvious, but it seems a lot less obvious as I'm trying to mentally re-create it. How do you get the finite-dimensional sub-coalgebra containing $x$? You form
$$\Delta(x) = \sum_i a_i \otimes b_i$$
and you take the linear combinations of $x$ and all the $a_i$ and $b_i$. Maybe that's the desired sub-coalgebra. Or maybe you have to repeat this process and comultiply the $a_i$ and $b_i$ and also take the linear combinations of all the elements you get that way. Or....
The question is why this process terminates. Coassociativity must help.
What about the modules -- do they have to be finite dimensional too?
The nLab also claims that there is a version "without the finiteness constraints and using the tensor product of categories with finite colimits".
Here's the link to the proof of the Fundamental Theorem of Coalgebras: https://planetmath.org/FundamentalTheoremOfCoalgebras
The Sweedler notation is not so bad. The proof does hinge on coassociativity. I'll try summarizing with minimal notation.
We're going to define the sub-coalgebra of an element $x$ by double comultiplying. Comultiply $x$, then comultiply the first slot of the result. Each comultiplication results in a finite sum of tensors, so our thing is a sum over two indices, say $\Delta^2(x) = \sum_{i,j} e_j \otimes c_{ij} \otimes f_i$: the first term over $j$, the third over $i$, and the middle term over both.
Now we define the subspace that will be our subcoalgebra $D$ to be the span of all the terms in the middle slot. $x$ is in this subspace because $x$ is equal to applying the counit to the first and third slots of our double comultiplication. This is just using (co)unitality of a (co)monoid. But the counit of the first and third slots gives the coefficients of the linear combination that shows $x$ is in the span.
Showing this subspace is a subcoalgebra is where coassociativity comes in. We're going to do this by showing that comultiplying any of the terms from the middle slot lands in $D \otimes D$. The gist of the argument is that you apply a third comultiplication, then coassociate it around and use an assumption of linear independence to peel off the other terms, so you end up concluding the comultiplication of the middle term is first in $C \otimes D$, then in $D \otimes C$, and thus in $D \otimes D$.
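In symbols, the same argument runs as follows (a sketch; the notation $e_j, c_{ij}, f_i$ is chosen here for this summary):

```latex
% Double comultiplication, with \{e_j\} and \{f_i\} chosen linearly
% independent:
\Delta^2(x) \;=\; \sum_{i,j} e_j \otimes c_{ij} \otimes f_i,
\qquad
D \;:=\; \operatorname{span}\{\, c_{ij} \,\}.
% Counitality recovers x from the middle slot, so x \in D:
x \;=\; (\epsilon \otimes \mathrm{id} \otimes \epsilon)\,\Delta^2(x)
  \;=\; \sum_{i,j} \epsilon(e_j)\,\epsilon(f_i)\, c_{ij}.
% Coassociativity: applying \Delta to the middle slot of \Delta^2(x)
% gives \Delta^3(x); comparing the different ways of writing \Delta^3(x)
% and peeling off the e_j (resp. f_i) via linear independence gives
\Delta(c_{ij}) \;\in\; (C \otimes D) \cap (D \otimes C)
             \;=\; D \otimes D,
% so D is a finite-dimensional sub-coalgebra containing x.
```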
Ah, thanks Joe, that's great! I don't mind Sweedler notation, but it's easy for me, when reading a purely computational proof using this or any clever notation, to say "yeah yeah, that probably works" without thinking about it. And then I can't remember the trick on my own!
So now I will remember this: take an element $x$ of a coalgebra $C$, comultiply it twice to get a linear combination of elements like $a \otimes b \otimes c$, and take the space spanned by all the middle terms $b$. This is our finite-dimensional sub-coalgebra of $C$ containing $x$. To show it's a sub-coalgebra, comultiply again and use coassociativity.
This is like a dehydrated version of the proof, and I'm supposed to be smart enough that I can "add water" - do some calculations - whenever I want to reconstitute the whole proof.
It's interesting that the proof requires comultiplying three times.
Possibly some of the above messages should be moved to another thread about coalgebra?
Sometimes people here are chopping conversations in half, moving part to a new thread, making it harder to follow the overall conversation. For example the stuff about finite-dimensionality and coalgebras here arose from people talking about finite-dimensionality assumptions for results on (external) tensor products of modules of algebras.
To exhibit the inner unity of this conversation: not only is it true that every element of a coalgebra lies in a finite-dimensional sub-coalgebra, every element of a comodule of a coalgebra lies in a finite-dimensional sub-comodule of that comodule. The proof is analogous. And this is probably why some results on tensor products of comodules of coalgebras work better than the analogous results on tensor products of modules of algebras, as Chris was saying:
Chris Grossack (she/they) said:
See, for instance, the relevant nlab article, which also cites a paper by Etingof and Ostrik where they explicitly ask for a finite dimensional[ity] assumption. Iirc EGNO's book on tensor categories has a version for coalgebras, though, which I guess makes sense since I remember John Baez telling me at one point that coalgebras somehow behave more finitely than algebras, for reasons I still haven't internalized
I hope @Chris Grossack (she/they) can now internalize those reasons, thanks to what @Joe Moeller said.
My feeling was that the conversation about algebras got interrupted by an essentially unrelated conversation about coalgebras, and now I have to scroll way up and skip over the coalgebra posts to find what we were saying about algebras and get back to it. I don't see any real connection between the algebra and coalgebra conversations, in fact the point of the latter was that coalgebras are different than algebras. But if you want to keep them all together, okay.
If we're done with coalgebras, the question about algebras that I still want to know about is whether $\mathrm{Hom}(V, V') \otimes \mathrm{Hom}(W, W') \cong \mathrm{Hom}(V \otimes W,\, V' \otimes W')$ for infinite-dimensional vector spaces over a field.
There's a natural map
$$\mathrm{Hom}(V, V') \otimes \mathrm{Hom}(W, W') \longrightarrow \mathrm{Hom}(V \otimes W,\, V' \otimes W')$$
and we're wondering if it's onto. Let me try to argue that it's not.
Let $e_1, e_2, e_3, \dots$ and $f_1, f_2, f_3, \dots$ be bases of countable-dimensional vector spaces $V, W$, respectively. Consider the linear map $V \otimes W \to V \otimes W$ that sends
$e_1 \otimes f_1$ to $e_1 \otimes f_1$,
$e_2 \otimes f_2$ to $e_2 \otimes f_2$,
$e_3 \otimes f_3$ to $e_3 \otimes f_3$,
and so on, while sending all other basis vectors $e_i \otimes f_j$ (with $i \ne j$) to zero. Is this linear map in the image of the natural map above? I'm hoping not.
I sort of see what you're getting at. The image of a pure tensor $f \otimes g$ can't behave like that on $e_i \otimes f_i$ for every $i$ at once. We can "correct" that for any particular $i$ by taking a linear combination of pure tensors, but it feels like since any linear combination is finite we can only correct finitely many of them. But I'm not sure how to make that rigorous.
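One way to make the "only finitely many corrections" intuition rigorous (a sketch, not from the thread; the functionals and matrix notation are chosen here):

```latex
% Suppose the diagonal map D above were a sum of n pure tensors:
D \;=\; \sum_{m=1}^{n} f_m \otimes g_m,
\qquad
D(e_i \otimes f_j) \;=\; \delta_{ij}\, e_i \otimes f_j .
% Let \varphi_i, \psi_j be coordinate functionals dual to the bases
% e_i, f_j.  Applying \varphi_i \otimes \psi_j to both sides gives,
% for all i, j:
\delta_{ij}
\;=\; \sum_{m=1}^{n} \varphi_i\big(f_m(e_i)\big)\,\psi_j\big(g_m(f_j)\big)
\;=\; x_i \cdot y_j,
\quad\text{where } x_i := \big(\varphi_i(f_m(e_i))\big)_{m=1}^{n},\;
               y_j := \big(\psi_j(g_m(f_j))\big)_{m=1}^{n} \in k^n.
% But x_i \cdot y_j = \delta_{ij} forces x_1, \dots, x_{n+1} to be
% linearly independent vectors in k^n, a contradiction.  So D is not in
% the image, and the natural map is not onto.
```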