I came across this student project proposed by @Bob Coecke and @Amar Hadzihasanovic a while back and found it quite interesting. The abstract reads:
Is there a way to define a category whose objects are root systems (with additional structure), so that the construction of Lie algebras from them is functorial? If so, what are its properties - e.g., is it monoidal, and in that case, is the functor (lax) monoidal?
Has there been any progress since that was posted? I've done some googling but couldn't find anything. Thanks in advance for any pointers.
I think this works... direct sums of root systems yield direct sums of the resulting Lie algebras. Any root system has a Weyl group acting on it, which then also acts on the Lie algebra. So that looks like a groupoid of root systems, functorially mapping onto a groupoid of Lie algebras. If I'm not too confused.
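Concretely, the compatibility I have in mind is something like this (a sketch, my notation):

```latex
% Sketch (my notation): direct sums of root systems should match direct sums of
% the Lie algebras built from them, and the Weyl group W(\Phi) sits inside the
% automorphisms of \Phi, so the question is whether its action carries over.
\[
  \mathfrak{g}(\Phi_1 \oplus \Phi_2) \;\cong\; \mathfrak{g}(\Phi_1) \oplus \mathfrak{g}(\Phi_2),
  \qquad
  W(\Phi) \subseteq \operatorname{Aut}(\Phi) \;\overset{?}{\longrightarrow}\; \operatorname{Aut}\bigl(\mathfrak{g}(\Phi)\bigr).
\]
```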
Hi! I think the story, as far as I remember, is that I thought about this only for a few days. I can't remember what I was trying to achieve, some kind of reconstruction... In the same days Bob asked his students to contribute some master's thesis projects. I emailed him back with this idea, then forgot about it (nobody then picked the project...) So I'm sorry but no progress on my side, I'm afraid.
Is the idea here that you were trying to avoid picking a basis of the Cartan algebra and thus a choice of what counts as positive weights and "simple" roots, as usually done when generating a Lie algebra from a root system?
I think I had read this note by Allen Knutson. It goes some way, but not all the way towards building a functor from some category of Dynkin diagrams to Lie algebras.
It seemed like a nice thing to try to make neater as a master's project.
Dynkin diagrams have very few morphisms between them so it should be pretty easy to make the process of turning them into Lie algebras functorial.
Root systems, on the other hand, have tons of automorphisms.
So I've often wondered if there was a functorial way to get a semisimple Lie algebra from a root system. Usually we deliberately break the symmetry - kill off the automorphisms - by choosing a set of "simple" roots.
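To make the contrast concrete, here is the standard type A example:

```latex
% Type A_n (n >= 2): the Dynkin diagram has just one nontrivial automorphism (the flip),
% while the root system's automorphism group contains the much larger Weyl group.
\[
  \operatorname{Aut}(\text{Dynkin } A_n) \cong \mathbb{Z}/2,
  \qquad
  W(A_n) \cong S_{n+1},
  \qquad
  \operatorname{Aut}(\Phi_{A_n}) \cong W(A_n) \rtimes \mathbb{Z}/2 .
\]
% Choosing a set of simple roots cuts Aut(\Phi) down to the diagram automorphisms,
% which is the symmetry breaking mentioned above.
```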
I think anyone who wants to work on this should talk to Allen Knutson (or some other expert on this stuff).
I have rather dim memories of working on it. One obvious thing to try is to show you can't do it, by showing the Weyl group of a root system (a subgroup of its automorphism group) doesn't act as automorphisms of the corresponding Lie algebra. I think Allen had some very interesting things to say about that. This was about 15 or 20 years ago...
i'm glad the experts showed up...
Allen Knutson is a real expert. I seem to recall there's just one or two Lie algebras for which the Weyl group doesn't act on the Lie algebra. He's the kind of guy who knows which ones, and why.
Thanks for your thoughtful replies. I read Knutson's note and found the idea of connecting Dynkin diagrams to Lie algebras directly (bypassing root systems) very appealing. No idea whether it's possible though...
Oh, I hadn't looked at his note until now. It's good.
Maybe I should make an "Operator algebras" topic for this, but anyway:
The Fuglede-Putnam-Rosenblum theorem (https://en.wikipedia.org/wiki/Fuglede%27s_theorem) states that for any two normal elements $M$ and $N$, and any $T$ such that $MT = TN$, we also have $M^*T = TN^*$.
Is there a way this result can be interpreted categorically? By that I mean that either the structure (*-algebras, normal elements) or the proof (which usually involves quite analytic arguments) can be presented in a "categorically nice" manner.
Wow, that's an amazing theorem! I edited Rosenblum's proof on Wikipedia slightly, to explain where the assumption of normality gets used. Such a nice proof!
It's hard for me to imagine a more "categorical" proof. In the finite-dimensional case they point out that the proof is pretty easy given that you know every normal operator on a finite-dimensional Hilbert space is a linear combination of commuting projections. This is true, but is there a nice way to see it more categorically? That seems like a nice subgoal.
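Concretely, the finite-dimensional fact in play is the spectral decomposition:

```latex
% A normal operator N on a finite-dimensional Hilbert space decomposes over its
% distinct eigenvalues \lambda_1, ..., \lambda_k, with P_i the orthogonal projection
% onto the i-th eigenspace; the projections commute pairwise and sum to the identity.
\[
  N = \sum_{i=1}^{k} \lambda_i P_i,
  \qquad
  P_i P_j = \delta_{ij} P_i,
  \qquad
  \sum_{i=1}^{k} P_i = 1,
  \qquad
  N^{*} = \sum_{i=1}^{k} \overline{\lambda_i}\, P_i .
\]
```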
It reminds me a bit of how orthonormal bases of a finite-dimensional Hilbert space give commutative $\dagger$-Frobenius algebras, especially since the commuting projections I just mentioned must come from a decomposition of your Hilbert space into orthogonal subspaces.
Hmm, so maybe you should be able to associate a commutative Frobenius algebra to each normal element and do something with that?
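If I had to guess what that algebra is, it would be the usual "copy" Frobenius structure on an orthonormal eigenbasis (a sketch, assuming nondegenerate spectrum):

```latex
% The commutative dagger-Frobenius algebra attached to an orthonormal basis (e_i):
% multiplication m, unit u, comultiplication \Delta ("copying"), counit \varepsilon.
\[
  m(e_i \otimes e_j) = \delta_{ij}\, e_i,
  \qquad
  u(1) = \sum_i e_i,
  \qquad
  \Delta(e_i) = e_i \otimes e_i,
  \qquad
  \varepsilon(e_i) = 1 .
\]
% A normal operator with nondegenerate spectrum picks out such a basis (its eigenvectors),
% which is presumably where the associated Frobenius algebra would come from.
```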
The underlying reason why I'm asking btw is the following: for positive elements $a$ and $b$ in a C*-algebra, it is the case that $\sqrt{a}\,b\,\sqrt{a} = \sqrt{b}\,a\,\sqrt{b}$ if and only if $ab = ba$. The only way I've seen this proven is to use the Putnam theorem (the LHS states that $\sqrt{a}\,\sqrt{b}$ is normal).
When $a$ and $b$ are effects, the expression $\sqrt{a}\,b\,\sqrt{a}$ denotes the sequential product of the effects, which corresponds to first observing $a$ and then $b$. This result then says "Observing $a$ and then $b$ is equal to observing $b$ and then $a$ if and only if they commute as operators". This is quite a neat result and it would be good if this could be proven in a more general categorical way
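For the record, here is how normality enters; write $c := \sqrt{a}\,\sqrt{b}$:

```latex
% The left-hand side of the equivalence is exactly a normality statement about c.
\[
  c\,c^{*} = \sqrt{a}\,\sqrt{b}\,\sqrt{b}\,\sqrt{a} = \sqrt{a}\, b\, \sqrt{a},
  \qquad
  c^{*}c = \sqrt{b}\,\sqrt{a}\,\sqrt{a}\,\sqrt{b} = \sqrt{b}\, a\, \sqrt{b},
\]
% so \sqrt{a}\,b\,\sqrt{a} = \sqrt{b}\,a\,\sqrt{b} says precisely that c is normal,
% and that is where a Fuglede-Putnam-style argument can be brought in to get ab = ba.
```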
In the finite-dimensional case you certainly get a commutative $\dagger$-Frobenius algebra from any normal operator whose spectrum is nondegenerate, i.e. there's a 1-dimensional space of eigenvectors for each eigenvalue. I should think about the case of degenerate spectrum.
Is the key point here that the Frobenius algebra associated to a normal $N$ is also associated to $N^*$?
It's a true fact. Whether it's the "key point" depends on what you're doing.
(Again, I only know what this Frobenius algebra is in the case where the Hilbert space is finite-dimensional and the operator has nondegenerate spectrum.)
By the way, I got really excited by the Fuglede-Putnam-Rosenblum theorem when you pointed it out. I tweeted a proof of it. A discussion ensued and we saw that in the finite-dimensional case it's easier, because for any normal operator $N$ there is a polynomial $p$ such that $N^* = p(N)$. But this polynomial depends on the spectrum of $N$.
In the infinite-dimensional case this simple proof breaks down.
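As I remember it, the finite-dimensional argument runs roughly like this (one interpolating polynomial for both spectra):

```latex
% Finite-dimensional Fuglede-Putnam via interpolation: choose a single polynomial p
% with p(\lambda) = \overline{\lambda} for every \lambda in the finite set
% \sigma(M) \cup \sigma(N); normality then gives M^* = p(M) and N^* = p(N).
\[
  MT = TN
  \;\Longrightarrow\; M^{k}T = TN^{k} \ \ (k \geq 0)
  \;\Longrightarrow\; p(M)\,T = T\,p(N)
  \;\Longrightarrow\; M^{*}T = TN^{*}.
\]
% In infinite dimensions the spectra need not be finite sets, so no single polynomial
% interpolates complex conjugation on them, and this shortcut is unavailable.
```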
I'm again just gonna hijack this topic for a question that combines quantum mechanics with category theory.
The notion of `generalized Hilbert space' (introduced by Piron, as far as I understand) seems like the natural generalisation for when you want to replace the complex numbers by another division ring (see Propositional systems, Hilbert lattices and generalized Hilbert spaces for a definition), and it seems like it should result in quite a lot of pathological spaces.
However, as Solèr's theorem shows, in infinite dimension this all just collapses to regular real/complex/quaternionic Hilbert spaces.
Now, I am wondering if we could get rid of this infinite-dimension assumption by considering a category of generalized Hilbert spaces. It seems like requiring tensor products of spaces to exist often restricts the types of spaces you can have, so my conjecture is that if we have a monoidal category of generalized Hilbert spaces over some fixed division ring $D$, we must then also have $D \cong \mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$.
Does anybody have any intuition about this?
A reason I care about this:
Call an orthomodular lattice an H-lattice when it is complete, irreducible, atomistic, and satisfies the covering property.
Piron's representation theorem says that any H-lattice of rank at least 4 must be isomorphic to the lattice of closed subspaces of a generalized Hilbert space. So in particular, for infinite-rank H-lattices this gives you real/complex/quaternionic Hilbert space. This is a 'reconstruction theorem' for going from abstract lattice theory to quantum mechanics.
But here again we have this less nice condition of the space needing to be of rank at least 4. So perhaps if we consider a monoidal category of H-lattices the lower rank ones must also be projections of a generalized Hilbert space, and perhaps they must all be over the same division ring.
This would give quite a nice way to reconstruct quantum theory from lattice properties + tensor product
John van de Wetering said:
It seems like requiring tensor products of spaces to exist often restricts the types of spaces you can have, so my conjecture is that if we have a monoidal category of generalized Hilbert spaces over some fixed division ring $D$, we must then also have $D \cong \mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$.
Does anybody have any intuition about this?
I don't know a theorem like that, but I know some stuff. A "quaternionic Hilbert space" is often defined to be a left $\mathbb{H}$-module with some particular structure and properties (that I'm too lazy to list). Given this, there's not a nice way to tensor quaternionic Hilbert spaces and get another quaternionic Hilbert space, because the quaternions are not commutative.
You can functorially turn any left $\mathbb{H}$-module into a right $\mathbb{H}$-module, but not an $\mathbb{H}$-bimodule. So, if I have two left $\mathbb{H}$-modules $V$ and $W$, I can turn $V$ into a right $\mathbb{H}$-module and form $V \otimes_{\mathbb{H}} W$, but this is now just a real vector space.
Using this idea, I can tensor two quaternionic Hilbert spaces and get a real Hilbert space - not a quaternionic one!
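Here is the obstruction in symbols (my notation):

```latex
% For left H-modules V and W, make V a right H-module by v . q := \bar{q} v (conjugation
% is an anti-automorphism, so this is a genuine right action), and form the balanced
% tensor product V \otimes_{\mathbb{H}} W, in which
\[
  (v \cdot q) \otimes w \;=\; v \otimes (q\, w)
  \qquad \text{for all } q \in \mathbb{H}.
\]
% The attempted left action q\,(v \otimes w) := (q\,v) \otimes w fails to be well defined
% because \mathbb{H} is noncommutative and V is not a bimodule; only the central scalars
% \mathbb{R} \subset \mathbb{H} survive, so V \otimes_{\mathbb{H}} W is just a real vector space.
```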
I expand on this here:
Anyway, I really like your idea, @John van de Wetering. I suspect some version of it may limit us to real and complex quantum mechanics, which would be a nice theorem. In my paper I suggest a fancier setup where we allow all 3 associative normed division algebras to play together.
Oh yeah, I forgot about the Eckmann-Hilton commutative scalar thing in a monoidal category. In Composites and Categories of Euclidean Jordan Algebras they also do this thing where the tensor product of two quaternionic spaces is a real space.
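Spelled out, the point is that the scalars (endomorphisms of the monoidal unit $I$) always commute:

```latex
% Eckmann-Hilton-style argument: for s, t : I -> I in a monoidal category, suppressing
% the unit isomorphisms I \otimes I \cong I,
\[
  s \circ t
  \;=\; (s \otimes \mathrm{id}_I) \circ (\mathrm{id}_I \otimes t)
  \;=\; s \otimes t
  \;=\; (\mathrm{id}_I \otimes t) \circ (s \otimes \mathrm{id}_I)
  \;=\; t \circ s .
\]
% So a monoidal category of "quaternionic" spaces could not have all of \mathbb{H}
% acting as scalars on the unit object, which fits the collapse to real spaces above.
```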
I mean, if quaternions don't wanna play nice that is actually a good thing for this 'project' as then only the real and complex Hilbert spaces would survive