You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I took a shot at categorifying eigenvalues and eigenvectors in this blog post:
Eigenvalues only make sense for endomorphisms, though. So the domain and codomain have to coincide, and there's a typo in the definition, because the domain and codomain wouldn't match otherwise.
Apart from this, as I said on twitter, I was thinking about the notion of a "general linear group": one can define the subcategory of all equivalences in Prof to be its most proximate cognate. If you assume the categories to be Cauchy complete, then this is not really different from the bicategory whose objects are finite categories, whose 1-cells are equivalences (in Prof!) between them, and whose 2-cells are natural transformations of functors.
@JadeMasterMath The "general groupoid" is the subcategory of self-equivalences of objects of this category. An "invertible matrix" is an endoprofunctor $p$ over some $n$, which is a categorical equivalence.
- amuse-fouche (@ququ7)
I suspect this object turns out to be informative about FinProf (profunctors of finite categories), and it has a name, somewhere.
Also, note how in this setting an "invertible matrix" can be rectangular: if $D$ is terminal, and $C$ is the walking isomorphism (so a groupoid, so Cauchy complete), then there is an equivalence in Prof between the two. So... vectors can be invertible matrices!
I suspect this poses a real problem when one tries to capture the notion of determinant, and before that of antisymmetric tensor. Another problem is that there seems to be no easy way to categorify the usual expression of the determinant, $\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)}$
(what is $-1$? how can you sum over permutations if $A$ isn't square?)
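For reference, here is a minimal sketch of that permutation-sum formula in plain Python (helper names are made up). Both of the obstructions raised above are visible in the code: the sign of the permutation, and the indexing `A[i][p[i]]` that only makes sense for a square matrix.

```python
import math
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple (s(0), ..., s(n-1)),
    computed by counting inversions."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def leibniz_det(A):
    """Leibniz formula: det A = sum over permutations p of
    sign(p) * prod_i A[i][p(i)].  Both sign(p) and the indexing
    A[i][p[i]] force A to be square."""
    n = len(A)
    assert all(len(row) == n for row in A), "the permutation sum needs a square matrix"
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```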
I think the categorification maybe works better if the 'underlying vector space' of a category is defined to have as basis the set of isomorphism classes of the category rather than its set of objects. I confess I don't know much about the category case; I just extrapolate from groupoids. In the groupoid case there is already quite a bit of elementary linear algebra developed (see Baez-Hoffnung-Walker, Morton, Gálvez-Kock-Tonks). Regarding the minus sign in the determinant definition, it might be a good idea to split it into an odd and an even part and deal with them separately. Lawvere and Menni used this approach to give an objective account of Möbius inversion, where a very similar alternating sign appears.
When you say "we can decategorify to a matrix" I guess you're implicitly saying that the profunctors in FinProf are valued in finite sets?
I tend to agree with Joachim Kock that isomorphism classes would be a better choice to use than objects for decategorification, since that resolves some of the issues that fosco was pointing out
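To make the "profunctors as matrices" dictionary concrete in the simplest case, here is a minimal sketch assuming discrete finite categories and FinSet-valued profunctors, represented as grids of Python sets (all names are hypothetical). Over discrete categories the coend in the composition formula collapses to a disjoint union over the middle object, so cardinalities multiply exactly like matrix entries.

```python
from itertools import product

def decategorify(P):
    """Send a grid of finite sets P[c][d] to its matrix of cardinalities."""
    return [[len(s) for s in row] for row in P]

def compose(P, Q):
    """(P . Q)(c, e) = coend over d of P(c,d) x Q(d,e); over discrete
    categories this is the disjoint union over d (tagged by d) of the
    cartesian products P(c,d) x Q(d,e)."""
    return [[{(d, x, y)
              for d in range(len(Q))
              for x, y in product(P[c][d], Q[d][e])}
             for e in range(len(Q[0]))]
            for c in range(len(P))]

# A 1x2 'row vector' of sets and a 2x1 'column vector' of sets:
P = [[{'a'}, {'b', 'c'}]]
Q = [[{'u'}], [{'v', 'w'}]]
# |(P . Q)(0, 0)| = 1*1 + 2*2 = 5, matching the matrix product
# of the cardinality matrices [1, 2] and [[1], [2]].
```

So at the level of cardinalities, `decategorify(compose(P, Q))` is the ordinary matrix product of `decategorify(P)` and `decategorify(Q)`.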
What might help is that there is a notion of trace in Prof: namely the categorical trace induced by the compact closed structure. For matrices the two notions coincide, and the trace is the sum of all the eigenvalues
Cole Comfort said:
What might help is that there is a notion of trace in Prof: namely the categorical trace induced by the compact closed structure. For matrices the two notions coincide, and the trace is the sum of all the eigenvalues
There are at least two notions of trace in Prof for an endoprofunctor $P \colon \mathbf{C} \nrightarrow \mathbf{C}$. As well as the one using the closed monoidal structure, you can take one using the bicategorical structure, namely the hom-object $\mathrm{Prof}(\mathbf{C},\mathbf{C})(\mathrm{Id}_{\mathbf{C}}, P)$.
I didn't know that. Although the trace induced by the compact closed structure seems to be the immediate generalization of the matrix trace.
I was thinking about the decomposition into connected components to split a generic profunctor into profunctors with connected domains, but it's not clear to me why it's useful to consider a single connected component as a "point in a set that parametrises a dimension"
Cole Comfort said:
...the trace induced by the compact closed structure seems to be the immediate generalization of the matrix trace.
That's a highly subjective statement! :grinning: It very much depends on how you think of matrix trace. I wouldn't have come down on either side. Certainly, if you think of matrix trace as being the sum of the diagonal elements then this 'diagonal trace' - as I call it - would seem to be a very natural analogue.
Anyway, if you're interested, I did give some talks on it way back, but failed to finish the paper on it. You can find slides at "Two 2-traces" in the talks section on my website.
The categorical trace in Prof is the coend $\int^{c} P(c,c)$ and the diagonal trace is the end $\int_{c} P(c,c)$.
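Decategorified over a discrete category, the two formulas pull apart. A small sketch (hypothetical representation: an endoprofunctor as a square grid of Python sets), in which the coend trace collapses to a disjoint union of the diagonal sets, so its cardinality is the usual matrix trace, while the end trace collapses to a cartesian product of the diagonal sets:

```python
import math

# An endoprofunctor on a discrete 2-object category, as a grid of finite sets.
P = [[{'a', 'b'}, set()],
     [{'x'}, {'u', 'v', 'w'}]]

# Coend trace: over a discrete category, the coend of P(c,c) is the
# disjoint union (tagged by c) of the diagonal sets.
coend_trace = {(c, s) for c in range(len(P)) for s in P[c][c]}

# End (diagonal) trace: over a discrete category, the end of P(c,c) is
# the cartesian product of the diagonal sets; we record its cardinality.
end_trace_card = math.prod(len(P[c][c]) for c in range(len(P)))
```

Here the coend trace has cardinality 2 + 3 = 5, the trace of the matrix of cardinalities, while the end trace has cardinality 2 * 3 = 6, so the two genuinely differ even in this tiny case.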
Oh yes I saw these slides before but I didn't realize there were recordings on youtube!
@fosco Thanks, I will correct the typo. This is super interesting about how invertible profunctors can be rectangular... maybe this is a feature, but maybe not, and it would be good to take isomorphism classes as decategorification instead like @Joachim Kock said. That seems like a good suggestion because it's a more standard thing to do? And also it eliminates this issue with rectangular matrices being invertible. I should definitely check out the papers referenced, as I am guessing they contain a lot of ways to improve this idea. @Morgan Rogers (he/him) I am assuming that the profunctors are valued in finite sets, but I didn't say that (and I should have)
I think I once watched at least part of a talk by Ben Elias titled "categorifying eigenvalues". I think maybe he had a related paper?
I don't know if this was mentioned already, but the intuition for these (co)ends as a trace matches with the fact that $\mathrm{Tr}(P \circ Q) \cong \mathrm{Tr}(Q \circ P)$ for every two composable endoprofunctors.
@Cole Comfort @Simon Willerton yeah it would be really cool if you could show that the categorified trace is isomorphic(?) to some categorified sum of eigenvalues. I'm guessing that the coend trace is the right one because I am using coends for profunctor composition? That's just an intuition though.
@fosco That fact is true for categorified traces because the cartesian product is symmetric monoidal? Thanks, that seems like a really useful property.
@Tim Campion if you find it let me know :)
Anyway thanks so far y'all have given me lots of fun things to think about.
@Jade Master Okay -- here we go: Elias and Hogancamp, "Categorical Diagonalization" https://arxiv.org/abs/1707.04349 I'm thinking that diagonalization is quite closely related to finding eigenvalues. I think they are focused on an abelian / linear - type setting, which might be a little different from what's going on here.
And here's a talk by Ben : https://www.youtube.com/watch?v=y_kya4UXoaU
A youtube search reveals a few more talks as well.
@Tim Campion Thanks this looks great
Jade Master said:
fosco That fact is true for categorified traces because the cartesian product is symmetric monoidal? Thanks, that seems like a really useful property.
Yes, plus Fubini rule.
Nice.
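Spelled out for the coend trace, the cyclicity fact in this exchange is exactly symmetry of the monoidal product plus the Fubini rule for coends (a sketch, for $P \colon \mathbf{C} \nrightarrow \mathbf{D}$ and $Q \colon \mathbf{D} \nrightarrow \mathbf{C}$):

```latex
\mathrm{Tr}(P \circ Q)
  \;=\; \int^{c} \int^{d} P(c,d) \times Q(d,c)
  \;\cong\; \int^{d} \int^{c} Q(d,c) \times P(c,d)   % Fubini + symmetry of the product
  \;=\; \mathrm{Tr}(Q \circ P)
```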
Uh really cool Jade! Skimming rapidly, I think you swapped indices in the coend formulas (lower indices indicate an end). Also, what's going on at the end? Is that map supposed to be an iso?
Still dimensions don't really match :thinking: on the left we have and on the right
fosco said:
Eigenvalues only make sense for endomorphisms, though. So the domain and codomain have to coincide, and there's a typo in the definition, because the domain and codomain wouldn't match otherwise.
Ha, this is probably due to this
"How to link categories and linear algebra" is a questione that has bugged me for a long time. So, this conversation is really following in my dreams :grinning:
Unfortunately, I don't give my best in asynchronous communications: is anyone willing to discuss the matter live?
@Matteo Capucci (he/him) Thanks, finally fixed the typos. I feel like coend indices should be on the bottom because that's how it works for integrals, but alas. @fosco I am interested, but only after I am finished with my thesis work, in a month-ish give or take.
Jade Master said:
Matteo Capucci (he/him) Thanks, finally fixed the typos. I feel like coend indices should be on the bottom because that's how it works for integrals, but alas. fosco I am interested, but only after I am finished with my thesis work, in a month-ish give or take.
Yeah, of course the convention is that it's ends which have lower indices / coends with upper indices. One could definitely argue that the opposite convention might have made more sense, since coends definitely feel more "integral-y" than ends, and are probably used explicitly more often than ends are. It might not be too late to change the convention, but one would probably want terminology / notation mavens like @Mike Shulman to weigh in before taking such a bold stance!
One reason that coends are more "integral-like" than ends is that they are "colimit-y" in nature, which is kind of like integrals being "sum"-like in nature.
There's also a funny thing -- the most common type of coend to take is $\int^{c} F(c) \otimes G(c)$, where $\otimes$ is a monoidal structure, whereas the most common type of end to take is $\int_{c} [F(c), G(c)]$, where $[-,-]$ is a closed structure. In both cases, one is composing some "default-like" bifunctor with some 1-variable functors, but the curious thing is that only in the first case is the bifunctor typically written with infix notation.
The infix notation makes it feel a lot more similar to the form of an integral like $\int_X f \, \mathrm{d}\mu$ or more generally $\int_X \omega \wedge \eta$, where there's this pairing between a function and a differential form, or just generally between different differential forms.
Maybe what we really need to do is to convince every analyst in the world to start writing their integrals with the domain as a superscript, to match the category-theoretic convention ;).
Tim Campion said:
It might not be too late to change the convention, but one would probably want terminology / notation mavens like Mike Shulman to weigh in before taking such a bold stance!
Too late, I would say. Not an important enough issue to be worth the immense confusion it would cause.
Aw too bad.
fosco said:
I suspect this poses a real problem when one tries to capture the notion of determinant, and before that of antisymmetric tensor. Another problem is that there seems to be no easy way to categorify the usual expression of the determinant, $\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)}$
(what is $-1$? how can you sum over permutations if $A$ isn't square?)
Could this work by replacing permutations with automorphisms of the domain (and codomain) of the endoprofunctor, and the signature be replaced by some sort of 'character' for a representation of it, i.e. a functor in the 'general groupoid' you propose? If I generalize correctly from 0-linear algebra, a character should be a map given by
It remains to find a suitable ... for permutations, the signature is the only non-trivial one-dimensional representation iirc
Conditional on finding such a character, the determinant of an endoprofunctor could be
The dependence on does not seem to be a functor of type (1,1)
But hey, I just realised that we can define tensors of higher rank :grinning: https://arxiv.org/abs/2011.13881
fosco said:
The dependence on does not seem to be a functor of type (1,1)
Maybe switch with in
fosco said:
But hey, I just realised that we can define tensors of higher rank :D https://arxiv.org/abs/2011.13881
Yep
Still don't see what and are
fosco said:
[...] there seems to be no easy way to categorify the expression of the determinant
That formula for the determinant is hard to categorify because it's not very conceptual. This one has some advantages if you're willing to restrict attention to invertible matrices (which is okay since we all know what the determinant of a noninvertible matrix should be):
For any ring $R$ let $R^\times$ be the group of invertible elements of that ring. The determinant is the canonical group homomorphism
$$\det \colon R^\times \to R^\times/[R^\times, R^\times]$$
where $[R^\times, R^\times]$ is the commutator subgroup of $R^\times$: that is, the normal subgroup generated by all elements $ghg^{-1}h^{-1}$.
The idea is that the determinant of an invertible matrix is "all that's left if we force matrix multiplication to become commutative".
When $R$ is the ring of $n \times n$ matrices over a field $K$, $R^\times = \mathrm{GL}(n,K)$ is the group of invertible matrices. The quotient $R^\times/[R^\times, R^\times]$ is canonically isomorphic to $K^\times$, the group of invertible elements of $K$, and the quotient map is then the usual determinant.
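This slogan can be sanity-checked numerically: since the determinant is a homomorphism into the abelian group $K^\times$, it must send every commutator of invertible matrices to 1. A quick sketch with NumPy (the helper name is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_invertible(n):
    """Sample a random matrix, retrying until it is comfortably invertible."""
    while True:
        A = rng.standard_normal((n, n))
        if abs(np.linalg.det(A)) > 0.1:
            return A

# det is a homomorphism to the *abelian* group K^x, so it must kill
# every commutator A B A^{-1} B^{-1} of invertible matrices.
for _ in range(5):
    A, B = random_invertible(3), random_invertible(3)
    commutator = A @ B @ np.linalg.inv(A) @ np.linalg.inv(B)
    assert abs(np.linalg.det(commutator) - 1.0) < 1e-6
```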
More abstractly, the idea is that there's a forgetful functor from abelian groups to groups, and it has a left adjoint called abelianization sending any group $G$ to $G/[G,G]$. The unit of this adjunction is the canonical quotient map $G \to G/[G,G]$.
People use this approach to determinants in K-theory.
These slides on categorification of determinants also look interesting:
Are you saying it's easier to categorify (or more specifically, )?
Let's recap.
A proposal for GL(n) in this setting is the auto-equivalences (in Prof) of a category with n objects; in case that category is Cauchy complete, these are just the self-equivalences of the category itself. This is not a groupoid, as we're considering equivalences, not just isomorphisms. And the category GL(n) itself will have equivalences as objects, but also non-invertible natural transformations as morphisms
Now, what's your proposal for the derived subgroup of this thing, which is not a group(oid)?
And even if it was: are there quotients for groupoids? Do they still give rise to groupoids?
If yes, what is an abelian groupoid, and are abelian groupoids reflective in groupoids?
Can you obtain the abelianisation of a groupoid as a quotient by the commutator subgroupoid?
None of these questions seems trivial
John Baez said:
When $R$ is the ring of $n \times n$ matrices over a field $K$, $R^\times = \mathrm{GL}(n,K)$ is the group of invertible matrices. The quotient $R^\times/[R^\times, R^\times]$ is canonically isomorphic to $K^\times$, the group of invertible elements of $K$, and the quotient map is then the usual determinant.
Last time this came up we decided it wasn't true for . https://categorytheory.zulipchat.com/#narrow/stream/266967-general.3A-mathematics/topic/Dieudonn.C3.A9.20determinant
fosco said:
If yes, what is an abelian groupoid, and are abelian groupoids reflective in groupoids?
Every groupoid is a coproduct of groups. So an abelian groupoid has to be one where the automorphism groups are all abelian.
If I have some sort of categorified analogue of a ring - there are a number of definitions to choose from - it will have a 2-group of objects that are invertible up to isomorphism. Let's call this 2-group $G$. There are two main levels of commutativity for 2-groups: braided and symmetric. I think we could take the 2-group $G$ and force it to be braided (or symmetric) using a left biadjoint to the forgetful functor from braided (or symmetric) 2-groups to 2-groups. Call the result $G^{\mathrm{br}}$. Then there should be a unit
$G \to G^{\mathrm{br}}$.
I would say every groupoid is equivalent to a coproduct of groups; how do you account for invertible morphisms between non-equal objects?
John Baez said:
If I have some sort of categorified analogue of a ring - there are a number of definitions to choose from - it will have a 2-group of objects that are invertible up to isomorphism. Let's call this 2-group $G$. There are two main levels of commutativity for 2-groups: braided and symmetric. I think we could take the 2-group $G$ and force it to be braided (or symmetric) using a left biadjoint to the forgetful functor from braided (or symmetric) 2-groups to 2-groups. Call the result $G^{\mathrm{br}}$. Then there should be a unit
$G \to G^{\mathrm{br}}$.
I see, but how do you relate this to profunctors?
I guess endoprofunctors form a 2-rig, right? You can multiply them, and add them...
But to talk about this "determinant" we just need to know that endoprofunctors that are equivalences - that have an inverse up to natural isomorphism - form a 2-group.
More precisely, endoprofunctors that are equivalences, and natural isomorphisms between these, form a 2-group.