You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Hi! Suppose I have an associative and commutative comultiplication $\delta \colon H \to H \otimes H$ that is an isometry in $\mathbf{FHilb}$ (finite-dimensional Hilbert spaces). Does this necessarily correspond to an orthonormal basis for $H$?
It is well known that a special dagger commutative Frobenius algebra on a finite dimensional Hilbert space amounts to an orthonormal basis, from the new description of orthonormal bases by @Bob Coecke @dusko and @Jamie Vicary.
Such an algebra in particular has a comultiplication that is an isometry, as well as associative and commutative. The basis comprises the copyable elements for $\delta$, i.e. those $x$ with $\delta(x) = x \otimes x$.
So my question is: is the dagger Frobenius requirement (and unit) known to be necessary? Or will an associative and commutative comultiplication that is an isometry suffice?
I can see where the dagger Frobenius structure is used in the neat argument by Bob, Dusko and Jamie, which goes via C star algebras. However, I think I can show that in dimension 2 the extra Frobenius assumption is not necessary. Before I go to higher dimensions, since my linear algebra is slow, I thought to check whether it is well known, or any suggestions where to look?
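(For reference, here is the easy direction spelled out, just so the conventions are clear: the copy map of an orthonormal basis is indeed a commutative, associative isometry.)

```latex
% Copy map of an orthonormal basis (e_i) of H, extended linearly:
\delta\Big(\textstyle\sum_i c_i e_i\Big) = \sum_i c_i \, e_i \otimes e_i .
% Isometry: for x = \sum_i c_i e_i and y = \sum_j d_j e_j,
\langle \delta(x), \delta(y) \rangle
  = \sum_{i,j} \overline{c_i}\, d_j \,\langle e_i \otimes e_i ,\, e_j \otimes e_j \rangle
  = \sum_i \overline{c_i}\, d_i
  = \langle x, y \rangle .
% Cocommutativity: the swap map fixes each e_i \otimes e_i.
% Coassociativity: both sides send e_i to e_i \otimes e_i \otimes e_i.
```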
Sam Staton said:
Suppose I have an associative and commutative comultiplication $\delta \colon H \to H \otimes H$ that is an isometry in $\mathbf{FHilb}$ (finite-dimensional Hilbert spaces). Does this necessarily correspond to an orthonormal basis for $H$?
I don't know! It would indicate a surprising suboptimality in existing results, but I haven't heard of anything about this, or thought about it.
If any such comultiplication does correspond to an orthonormal basis for $H$, I think you should be able to systematically extend the comultiplication to a full-fledged dagger Frobenius structure on $H$. Have you attempted that?
It seems worth trying, since if your attempt fails, the failure may lead you to a counterexample.
If I were making this attempt, I would start by defining a multiplication
$m := \delta^\dagger \colon H \otimes H \to H$.
Then there's a standard trick for defining a counit $\epsilon \colon H \to \mathbb{C}$,
namely
$\epsilon(x) = \mathrm{tr}(L_x)$
where $L_x$ is left multiplication by $x$, and $\mathrm{tr}$ is the usual trace of a linear map from a finite-dimensional vector space to itself. You may need to rescale $\epsilon$ by a cleverly chosen constant, e.g. something involving the dimension of $H$.
Then you can define a unit by $\eta := \epsilon^\dagger \colon \mathbb{C} \to H$.
So the question then becomes whether you can derive all the special dagger commutative Frobenius axioms from the associativity and commutativity and isometricity (is that a word? :thinking: ) of $\delta$.
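Here's a quick numerical sketch of that recipe, in case it helps with experiments — the tensor conventions and helper names are made up, it's just an illustration of the checks, not anyone's actual code:

```python
import numpy as np

def frobenius_checks(delta, tol=1e-9):
    """delta[i, j, k] = <e_i (x) e_j, delta(e_k)> for a comultiplication
    delta: H -> H (x) H on an n-dimensional space with the standard inner product."""
    n = delta.shape[2]
    m = np.conj(delta).transpose(2, 0, 1)        # m[k, i, j]: the dagger of delta
    counit = np.einsum('kik->i', m)              # eps(e_i) = tr(left multiplication by e_i)

    special  = np.einsum('kij,ijl->kl', m, delta)        # m . delta
    frob_lhs = np.einsum('aic,bid->abcd', delta, m)      # (1 (x) m)(delta (x) 1)
    frob_rhs = np.einsum('abk,kcd->abcd', delta, m)      # delta . m
    return {
        'special (m.delta = id)': np.allclose(special, np.eye(n), atol=tol),
        'Frobenius':              np.allclose(frob_lhs, frob_rhs, atol=tol),
        'counit vector':          counit,
    }

# Sanity check on the copy map of the standard basis of C^2:
copy = np.zeros((2, 2, 2), dtype=complex)
copy[0, 0, 0] = copy[1, 1, 1] = 1.0
print(frobenius_checks(copy))
```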
Dually speaking, you're asking whether a finite-dimensional Hilbert space can be equipped with an associative and commutative multiplication that is a coisometry, but is not isomorphic to the usual $\mathbb{C}^n$, or equivalently not separable.
This reminds me of the concept of subproduct system, which is a generalization of this kind of structure, graded by a semigroup:
image.png
I'm not sure what kinds of examples they consider as I don't really know anything about this, but perhaps it's worth checking out and seeing if any of them are commutative and would yield the sort of example you're looking for.
I've done a bit more digging and found that recent work by Laurent Poinsot is highly relevant to the question:
Perhaps you've seen these already?
As far as I understand, in his terminology the question is whether every finite-dimensional commutative special Hilbertian algebra is semisimple, or equivalently unital Frobenius (using Corollary 35 from the second paper). Note that his "algebras" are not generally assumed unital, which matches your question.
Theorem 22 of the first paper is especially relevant: it shows that the "group-like" elements $x$ with $\delta(x) = x \otimes x$ form an orthonormal basis of the orthogonal complement of the Jacobson radical of the algebra. So the question is whether the Jacobson radical must vanish, or equivalently whether the algebra must be semisimple.
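(Incidentally, the isometry assumption alone already makes the group-like elements orthonormal, which fits nicely with that theorem:)

```latex
% For g, h nonzero with \delta(g) = g \otimes g and \delta(h) = h \otimes h:
\langle g, h \rangle
  = \langle \delta(g), \delta(h) \rangle
  = \langle g \otimes g ,\, h \otimes h \rangle
  = \langle g, h \rangle^2
  \;\Longrightarrow\; \langle g, h \rangle \in \{0, 1\} .
% Taking h = g gives \|g\| = 1, and \langle g, h \rangle = 1 with
% \|g\| = \|h\| = 1 forces g = h by the equality case of Cauchy--Schwarz.
```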
Thank you both! @John Baez , from abstract stuff that I know, so far I could easily show from specialness (isometry) that the image of is contained in , but not the other way round, so sort-of half the Frobenius law.
@Tobias Fritz thank you very much, these references look very relevant and I'm reading them!
[My argument for two dimensions was indeed that there can be no nilpotents if it's commutative and isometric, but I showed this by splitting on cases, a low level argument, and it's not yet clear whether it will generalize to other dimensions.]
Perhaps you are interested in the backstory. With @Paolo Perrone and Razin Shaikh, we are looking at how Markov cats can arise from SMCs.
We first take the affine reflection of an SMC (e.g. isometries), so that it is semicartesian (e.g. CPTP). Then any comultiplication $\delta \colon A \to A \otimes A$ can be made into an idempotent $(1 \otimes \mathord{!}) \circ \delta \colon A \to A$, and if we split the idempotents coming from commutative associative comultiplications, then we will get a Markov category.
Starting with isometries, i.e. pure quantum theory, the affine reflection is CPTP, and splitting those idempotents leads us to $\mathsf{FinStoch}$ if they all come from bases.
I wanted to check, to derive probability from quantum theory in this way do we need to do something special about the bases? If so, is there a different Markov category that could be extracted from quantum theory instead?
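(To spell out why $(1 \otimes \mathord{!}) \circ \delta$ above is idempotent: writing $\mathord{!}$ for the unique map to the monoidal unit and suppressing unitors, coassociativity is all that's needed.)

```latex
e := (1 \otimes \mathord{!}) \circ \delta , \qquad
e \circ e
  = (1 \otimes \mathord{!} \otimes \mathord{!})(\delta \otimes 1)\,\delta
  = (1 \otimes \mathord{!} \otimes \mathord{!})(1 \otimes \delta)\,\delta
  = \big(1 \otimes (\mathord{!} \otimes \mathord{!})\delta\big)\,\delta
  = (1 \otimes \mathord{!})\,\delta
  = e .
% First step: slide the discard past \delta using the interchange law.
% Second step: coassociativity.
% Fourth step: (\mathord{!} \otimes \mathord{!}) \circ \delta = \mathord{!},
% by uniqueness of maps into the unit (semicartesianness).
```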
Sam Staton said:
Thank you both! John Baez , from abstract stuff that I know, so far I could easily show from specialness (isometry) that the image of is contained in , but not the other way round, so sort-of half the Frobenius law.
This "semi-Frobenius law" rings vague bells in my mind. It dimly reminds me of some things I've heard from @Todd Trimble about cartesian bicategories and Frobenius monoids in poset-enriched categories. I could be imagining it.
OK I think I have a counterexample. [OK I retract this]
John Baez said:
Sam Staton said:
Thank you both! John Baez , from abstract stuff that I know, so far I could easily show from specialness (isometry) that the image of is contained in , but not the other way round, so sort-of half the Frobenius law.
This "semi-Frobenius law" rings vague bells in my mind. It dimly reminds me of some things I've heard from Todd Trimble about cartesian bicategories and Frobenius monoids in poset-enriched categories. I could be imagining it.
Just to reply to John, and not adding particularly to the current discussion: in bicategories like $\mathbf{Rel}$ or $\mathbf{Span}$, those relations or spans that come from functions have right adjoints, and so diagonal maps $X \to X \times X$ (which I will rewrite as $\delta \colon X \to X \otimes X$ since the setwise product is not a cartesian product in those bicategories), which are the comultiplications for cocommutative comonoid structures on such $X$, will have right adjoints $m \colon X \otimes X \to X$. Those right adjoints are multiplications for commutative monoid structures on $X$, mated to the cocommutative comonoid structures we started with.
By playing with adjoints like $\delta \dashv m$, coassociativity isomorphisms like
$(1 \otimes \delta)\delta \cong (\delta \otimes 1)\delta$
induce morphisms
$\delta m \Rightarrow (1 \otimes m)(\delta \otimes 1)$
which looks like "one half" of Frobenius reciprocity. You get the other half if these morphisms are invertible. That turns out to be the case in $\mathbf{Rel}$ and $\mathbf{Span}$, but not generally for other cases of cartesian bicategories, like the bicategory consisting of categories and profunctors.
Part of the groupoidification story is that these merely "semi-Frobenius" morphisms, which are always present in cartesian bicategories, turn out to be invertible if you work not with categories and profunctors, but instead with groupoids and profunctors between them.
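For what it's worth, here is one way such a canonical 2-cell comes out of the adjunction $\delta \dashv m$ (unit $1 \Rightarrow m\delta$, counit $\delta m \Rightarrow 1$) together with coassociativity, suppressing associators:

```latex
\delta m
  \;\Rightarrow\; (1 \otimes m)(1 \otimes \delta)\,\delta m
     % tensor the unit 1 \Rightarrow m\delta onto the second factor
  \;=\; (1 \otimes m)(\delta \otimes 1)\,\delta m
     % coassociativity
  \;\Rightarrow\; (1 \otimes m)(\delta \otimes 1) .
     % counit \delta m \Rightarrow 1
```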
So here's a promising-looking approach towards proving that for every finite-dimensional Hilbert space $H$, every commutative, associative and isometric $\delta \colon H \to H \otimes H$ is of the form $\delta(e_i) = e_i \otimes e_i$ for a suitable orthonormal basis $(e_i)$.
In the case where $\delta$ is such a copy map, the channel $\rho \mapsto \mathrm{Tr}_2(\delta \rho \delta^\dagger)$ is precisely the "dephasing" channel which erases all off-diagonal elements from a density matrix. In other words, it conducts a measurement in the given basis and forgets the result. So the fixed points of this channel are precisely those density matrices that are diagonal in the given basis. Hence for general $\delta$, it's natural to look at the fixed points of $\rho \mapsto \mathrm{Tr}_2(\delta \rho \delta^\dagger)$. The pure states in this set of fixed points form the basis that we're looking for.
I guess the main difficulty lies in the penultimate step, where one needs to show that this channel has sufficiently many fixed points in order to show that the dimension of the algebra coincides with $\dim H$, and I don't yet know how to go about this. But overall, the appeal of this approach is that it would give a relatively concrete way to get at the basis vectors $e_i$.
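For concreteness, here's a small NumPy sketch of the channel in question (my reading of it, with made-up names), together with a check that for a copy map it is indeed the dephasing channel:

```python
import numpy as np

def channel_from_delta(delta):
    """Given delta[i, j, k] = <e_i (x) e_j, delta(e_k)>, return the map
    rho -> Tr_2(delta rho delta^dagger) on matrices.  (A sketch of the
    channel discussed above, not anyone's actual code.)"""
    def E(rho):
        # delta rho delta^dagger has indices (i, j), (p, q); trace out j = q.
        return np.einsum('ijk,kl,pjl->ip', delta, rho, np.conj(delta))
    return E

# For the copy map of an orthonormal basis this is the dephasing channel:
n = 3
copy = np.zeros((n, n, n), dtype=complex)
for i in range(n):
    copy[i, i, i] = 1.0
E = channel_from_delta(copy)
rho = np.random.rand(n, n) + 1j * np.random.rand(n, n)
rho = rho @ rho.conj().T            # a random positive matrix ...
rho /= np.trace(rho)                # ... normalised to a density matrix
print(np.allclose(E(rho), np.diag(np.diag(rho))))   # True: off-diagonals erased
```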
Thanks! Yes this is the kind of thing we had in mind to arrive at, so it's an interesting idea to try to resolve it there directly. Although I'm still not sure how to tackle the penultimate step, as you said.
In the paper by Bob, Dusko and Jamie, their proof (sec 4) is by constructing a commutative C star-subalgebra of B(H). I suppose it's the same one? But without the Frobenius law, I can only show that their construction gives a sub-algebra, not a star sub-algebra.
@Sam Staton To investigate this a bit further I used numerical methods to explicitly find associative dagger-special structures in 2d, 3d and 4d. In every case it was also dagger-Frobenius as you have conjectured, within numerical tolerances.
I used my codebase to do some further investigating, and the following conjecture is also validated by all the examples I am able to generate: any dagger-Frobenius structure is also associative.
Hi, Jamie! As you know, the associative law looks like a rotated version of the Frobenius law, and vice versa. Is that enough to prove your conjectures? I guess if it were you would have instantly noticed it. If not, the axioms you're assuming must somehow not instantly give you the ability to use caps and cups to rotate the one law to the other.
I also found dagger-special multiplication maps that satisfy the symmetrical Frobenius condition only (i.e. $(1 \otimes m) \circ (m^\dagger \otimes 1) = (m \otimes 1) \circ (1 \otimes m^\dagger)$), but are not associative. Not sure what to make of that.
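In case it's useful, here's a stripped-down sketch of the checks involved (not the actual codebase, just an illustration with a made-up helper), for a multiplication tensor in the standard inner product:

```python
import numpy as np

def props(M, tol=1e-9):
    """M[k, i, j] = <e_k, m(e_i (x) e_j)> for a bilinear multiplication m."""
    n = M.shape[0]
    Md = np.conj(M)                                      # entries of m^dagger
    assoc_l  = np.einsum('pab,kpc->kabc', M, M)          # m(m (x) 1)
    assoc_r  = np.einsum('pbc,kap->kabc', M, M)          # m(1 (x) m)
    special  = np.einsum('kij,lij->kl', M, Md)           # m m^dagger
    frob_l   = np.einsum('caj,bjd->abcd', Md, M)         # (1 (x) m)(m^dagger (x) 1)
    frob_r   = np.einsum('kab,kcd->abcd', Md, M)         # m^dagger m
    frob_sym = np.einsum('acj,djb->abcd', M, Md)         # (m (x) 1)(1 (x) m^dagger)
    return {
        'associative':         np.allclose(assoc_l, assoc_r, atol=tol),
        'dagger-special':      np.allclose(special, np.eye(n), atol=tol),
        'dagger-Frobenius':    np.allclose(frob_l, frob_r, atol=tol),
        'symmetric Frobenius': np.allclose(frob_l, frob_sym, atol=tol),
    }

# Example: the multiplication dual to the copy map passes all four checks.
copy = np.zeros((2, 2, 2), dtype=complex)
copy[0, 0, 0] = copy[1, 1, 1] = 1.0
print(props(copy))
```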
John Baez said:
Hi, Jamie! As you know, the associative law looks like a rotated version of the Frobenius law, and vice versa. Is that enough to prove your conjectures? I guess if it were you would have instantly noticed it. If not, the axioms you're assuming must somehow not instantly give you the ability to use caps and cups to rotate the one law to the other.
I have tried quite hard to prove that the dagger-Frobenius equations imply the associative equation without success!
Okay. That makes it very weird that it holds in all low-dimensional examples. But 4 dimensions is very low. Maybe the first counterexamples are a bit bigger.
(Completely unrelated, but the smallest nonisomorphic pair of finite groups with equivalent categories of complex representations comes from two 64-element groups, and the smallest known pair of nonisomorphic finite groups with isomorphic group rings comes from two groups with $2^{21} \cdot 97^{28}$ elements, so sometimes you have to work a bit to find counterexamples.)
For numerical checks, I was thinking that Isomorphism types of commutative algebras of finite rank over an algebraically closed field should be helpful. Roughly speaking, this classifies all the possibilities for what the multiplication can be in all dimensions up to 6! (Modulo the caveat that it also assumes unitality, while Sam's original version of the question did not require counitality. And it assumes that the algebras are local, but that should be essentially without loss of generality.) So if there is a counterexample in dimension $\leq 6$, it should be one of those. One only needs to go through this list and check for each of them whether there exists an inner product that makes the multiplication into a coisometry.
Intuitively, I'd think that the 5-dimensional algebra might be a good candidate for a counterexample, since it's a quotient of the regular functions on the cusp defined by $y^2 = x^3$, which is a singular curve. Can this algebra be equipped with an inner product that makes the multiplication coisometric?
On the other hand, @Jamie Vicary was probably fixing the inner product and doing numerics to find a suitable (co)multiplication? Maybe that's the better approach numerically than to fix the multiplication and to try and find a suitable inner product.
I agree the dimensionality is limiting, but having evidence in is more comfortable than in . I will see if I can push my method higher.
Tobias Fritz said:
On the other hand, Jamie Vicary was probably fixing the inner product and doing numerics to find a suitable (co)multiplication? Maybe that's the better approach numerically than to fix the multiplication and to try and find a suitable inner product.
Yes that's right, I'm assuming the standard inner product but leaving the elements of the multiplication operator undetermined. Your approach of fixing the algebra and finding the inner product can be done by conjugating with an arbitrary isomorphism which is numerically parametrised. This could actually be easier as the parameter space is only size $n^2$ rather than $n^3$.
Or optimize directly over the space of semidefinite matrices that represent the inner product, which has only half the number of parameters!
Note however that my method is only suited to disproving counterexamples, not to establishing them!
Tobias Fritz said:
Or optimize directly over the space of semidefinite matrices that represent the inner product, which has only half the number of parameters!
Is there an easy way to parameterise that?
Maybe the Cholesky decomposition would be useful?
Oh nice, so we just need upper-triangular matrices with nonnegative diagonal entries. I can certainly try my method on this search space.
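Concretely, the plan would be something like the following sketch (illustrative only, made-up names): parametrise the inner product by an upper-triangular Cholesky factor $R$, transport the fixed multiplication along the corresponding basis change, and test coisometry in the standard inner product.

```python
import numpy as np

def transport(M, R):
    """Transport the multiplication tensor M[k, i, j] along the basis change
    x -> R x, so that the inner product represented by G = R^dagger R becomes
    the standard one."""
    Rinv = np.linalg.inv(R)
    return np.einsum('kp,pij,ia,jb->kab', R, M, Rinv, Rinv)

def is_coisometry(M, tol=1e-9):
    """Check m m^dagger = id with respect to the standard inner product."""
    n = M.shape[0]
    return np.allclose(np.einsum('kij,lij->kl', M, np.conj(M)), np.eye(n), atol=tol)

def random_cholesky_factor(n):
    """Upper-triangular R with positive diagonal, so G = R^dagger R is an
    arbitrary (positive definite) inner product."""
    R = np.triu(np.random.randn(n, n) + 1j * np.random.randn(n, n))
    R[np.diag_indices(n)] = np.abs(np.random.randn(n)) + 0.1
    return R

# Usage idea: a multiplication M is coisometric for G = R^dagger R
# exactly when is_coisometry(transport(M, R)) holds.
```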
That would be interesting!
I've calculated what a coalgebra dual to that 5-dimensional algebra looks like. This might be easier to work with, since finding an inner product which makes the comultiplication isometric is more intuitive than finding one which makes the multiplication coisometric.
So the basis is , where these symbols have no significance other than to indicate the connection to the algebra version. The comultiplication is:
The interesting terms are in bold. Without these, we'd just get the coalgebra dual to (which may also be worth trying in any case).
So after some more thinking, I believe that the answer to @Sam Staton's question is positive: every commutative and associative isometry $\delta \colon H \to H \otimes H$ for a finite-dimensional complex Hilbert space $H$ is a copy-map with respect to an orthonormal basis. For now I can prove this under the additional assumption of counitality, but it should be possible to adapt the proof so as to eliminate this extra assumption.
The basic idea is to observe that every pair $(g, x)$ in $H$ with $g \neq 0$ and
$\delta(g) = g \otimes g, \qquad \delta(x) = g \otimes x + x \otimes g$
must have $x = 0$. (See primitive elements in a coalgebra.) The reason is the following: because $\delta$ is an isometry, the first equation implies $\|g\| = 1$, and then $\delta$ being an isometry on the second equation yields
$\|x\|^2 = 2\|x\|^2 + 2\,|\langle g, x \rangle|^2.$
Since all terms in this equation are nonnegative, cancelling one $\|x\|^2$ leads us to the conclusion that $\|x\|^2 + 2\,|\langle g, x \rangle|^2 = 0$, and therefore $x = 0$ by nondegeneracy of the inner product.
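(Spelling out that displayed equation, since it's the crux:)

```latex
\|x\|^2
  = \|\delta(x)\|^2
  = \|g \otimes x + x \otimes g\|^2
  = \|g\|^2 \|x\|^2 + \|x\|^2 \|g\|^2
      + 2\,\mathrm{Re}\,\langle g \otimes x ,\, x \otimes g \rangle
  = 2\|x\|^2 + 2\,|\langle g, x \rangle|^2 ,
% using \|g\| = 1 and
% \langle g \otimes x, x \otimes g \rangle
%   = \langle g, x \rangle \langle x, g \rangle = |\langle g, x \rangle|^2 .
```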
So it's enough to prove that if $\delta$ is not a copy-map, then there is a nonzero primitive element $x$. To see this, let's consider the multiplication $\delta^* \colon H^* \otimes H^* \to H^*$, which makes the dual space $A := H^*$ into a finite-dimensional commutative algebra over $\mathbb{C}$. By standard results, every such algebra is of the form
$A \cong A_1 \oplus \cdots \oplus A_k,$
where the $A_i$ are local algebras with maximal ideals $\mathfrak{m}_i$, which means in particular that $A_i / \mathfrak{m}_i \cong \mathbb{C}$. In fact, we have canonical direct sum decompositions $A_i \cong \mathbb{C} \oplus \mathfrak{m}_i$, where the copy of $\mathbb{C}$ consists of all scalar multiples of the unit of $A_i$.
The goal is to show that $\mathfrak{m}_i = 0$ for all $i$, so let's assume that that's not the case. Let's start with the case $k = 1$ for the moment, for which I'll omit the subscript. Then we'll exhibit a nonzero primitive element in the dual coalgebra $H$. To start, by further standard results, we have $\mathfrak{m}^2 \subsetneq \mathfrak{m}$. Thus by linear algebra, we can find nonzero $x \in H$ with $x|_{\mathfrak{m}^2} = 0$ and $x|_{\mathbb{C} \cdot 1} = 0$. Moreover, we can consider $g \in H$ defined as the functional that projects onto the first component of $A = \mathbb{C} \oplus \mathfrak{m}$.
Now let's determine $\delta(g)$ and $\delta(x)$ by the condition that $\delta$ must be the dual of the multiplication, which says that
$\delta(y)(a \otimes b) = y(ab)$
for all $a, b \in A$. By the direct sum decomposition, it's enough to consider the cases that $a$ or $b$ is $1$ or otherwise in $\mathfrak{m}$. Then using the fact that $\mathfrak{m}$ is an ideal on which $g$ vanishes, we easily obtain $g(ab) = g(a)\,g(b)$ for all $a$ and $b$, and therefore $\delta(g) = g \otimes g$. Using similarly that $x$ vanishes on $\mathbb{C} \cdot 1 \oplus \mathfrak{m}^2$, we get the desired $\delta(x) = g \otimes x + x \otimes g$, and we're done.
For general $k$, we can use the same method on any summand $A_i$ with $\mathfrak{m}_i \neq 0$, and simply extend $g$ and $x$ by defining them to be zero on the other direct summands. Overall, we can therefore conclude that $\delta$ cannot be an isometry unless $\mathfrak{m}_i = 0$ for all $i$, in which case we obtain the desired conclusion that $\delta$ is a copy map.
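(A concrete instance: the 2-dimensional local algebra $\mathbb{C}[t]/(t^2)$, the smallest potential counterexample, is ruled out exactly like this.)

```latex
% A = \mathbb{C}[t]/(t^2) = \mathbb{C} \cdot 1 \oplus \mathfrak{m},
% with \mathfrak{m} = (t) and \mathfrak{m}^2 = 0.
% Take the dual basis g, x of H: g(1) = 1, g(t) = 0 and x(1) = 0, x(t) = 1.
% The dual comultiplication \delta(y)(a \otimes b) = y(ab) then gives
\delta(g) = g \otimes g , \qquad \delta(x) = g \otimes x + x \otimes g ,
% so x is a nonzero primitive element, and hence \delta cannot be an
% isometry for any inner product on H.
```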
Very nice, where did you use commutativity?
In the meantime I have beefed up my meaty code, with no counterexamples to the claims yet observed. For the claim that an associative dagger-special structure is dagger-Frobenius I have been able to go up to dimension 10, and for the claim that every dagger-Frobenius structure is associative I have gone up to dimension 50 (for some reason the numerics are easier to run in higher dimension here.) Tobias note I'm not using commutativity or unitality here.
In my proof, commutativity enters in the decomposition into a direct sum of local algebras. That's a structure theorem specific to commutative Artinian rings.
But it's intriguing that there might be a stronger statement that doesn't even need commutativity.
Here's how to extend my proof to the non-unital case:
Lemma: if $A$ is a finite-dimensional commutative algebra, then it is either unital or has nontrivial annihilator, i.e. there is nonzero $a \in A$ with $ab = 0$ for all $b \in A$.
Proof: If $A$ is not unital, consider the unitization $A^+ = \mathbb{C} \oplus A$. Then $A \subseteq A^+$ is a maximal ideal. In terms of the decomposition $A^+ \cong B_1 \oplus \cdots \oplus B_k$ into local algebras as above, the maximal ideals of $A^+$ are precisely those that replace one of the summands $B_i$ by its maximal ideal $\mathfrak{m}_i$. Since $A$ is maximal it must be one of those, say $A = \mathfrak{m}_1 \oplus B_2 \oplus \cdots \oplus B_k$; note $\mathfrak{m}_1 \neq 0$, since otherwise $A$ would be unital. Since the nonzero powers of $\mathfrak{m}_1$ form a strictly descending chain of ideals in $A^+$, there is $n$ with $\mathfrak{m}_1^n \neq 0$ but $\mathfrak{m}_1^{n+1} = 0$. Then it is clear that every element of $\mathfrak{m}_1^n$ annihilates $A$.
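(A tiny example to illustrate the lemma, not needed for the argument:)

```latex
% A = x\,\mathbb{C}[x]/(x^3) = \mathrm{span}\{x, x^2\}, with
%   x \cdot x = x^2, \qquad x \cdot x^2 = x^2 \cdot x^2 = 0 .
% This is commutative and non-unital, and x^2 annihilates everything:
x^2 \cdot a = 0 \quad \text{for all } a \in A .
% Its unitization is A^+ \cong \mathbb{C}[x]/(x^3), in which A itself is the
% maximal ideal (x), and the powers (x) \supsetneq (x^2) \supsetneq 0
% descend strictly, exactly as in the proof above.
```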
So now let $\delta \colon H \to H \otimes H$ be an associative and commutative isometry for a finite-dimensional Hilbert space $H$. Then consider again the (not necessarily unital) algebra $A := H^*$. With the unital case having been settled above, by the lemma we can focus on the case where $A$ has an annihilating element $a \neq 0$.
But then consider $K := \{ v \in H \mid a(v) = 0 \}$, which is a subspace of $H$ of codimension 1. With $(a \otimes \mathrm{id}) \circ \delta = 0$ and cocommutativity in mind, we see that $\delta(H) \subseteq K \otimes K$. Therefore the proof can be completed by induction on $\dim H$.
It seems plausible that the noncommutative case can be approached by similar means using the Wedderburn-Malcev theorem instead, in which case one could try to show that the isometricity condition implies that the radical vanishes, but I have much less intuition for this. Hopefully someone more familiar with the noncommutative structure theory will be able to do it.
Wow this is really exciting, in both directions .
@Tobias Fritz I am going through your proof but I am a bit slow. E.g., why do you get $x = 0$ from the displayed equation? Sorry, perhaps I should try tomorrow with fresh eyes.
You mean the equation $\|x\|^2 = 2\|x\|^2 + 2\,|\langle g, x \rangle|^2$, right? It's equivalent to $\|x\|^2 + 2\,|\langle g, x \rangle|^2 = 0$, and both terms are nonnegative.
Ah thanks! Yes, you did say nonnegative.
Tobias Fritz said:
Here's how to extend my proof to the non-unital case:
[...]
But then consider $K := \{ v \in H \mid a(v) = 0 \}$, which is a subspace of $H$ of codimension 1. With $(a \otimes \mathrm{id}) \circ \delta = 0$ and cocommutativity in mind, we see that $\delta(H) \subseteq K \otimes K$. Therefore the proof can be completed by induction on $\dim H$.
Thank you for this!
I have trouble understanding the high-level story of the generalization to the non-unital case. Are you proving that every isometric $\delta$ must come from a unital algebra, or that even if the algebra is non-unital, it comes from a basis copy map?
Paolo Perrone said:
Are you proving that every isometric $\delta$ must come from a unital algebra, or that even if the algebra is non-unital, it comes from a basis copy map?
Whoops! It looks like I had forgotten to write up the final part of the proof. I've now added it as an extra paragraph above.
The high-level story boils down to showing that in the non-unital case, a $\delta$ with the desired properties cannot exist. However, the overall proof is an induction argument which proves the statement "Every $\delta$ with the required properties is a copy map, regardless of unitality" by induction on the dimension. The induction step treats the unital and non-unital cases separately, where the induction assumption is used only in the non-unital case, and in this case one arrives at a contradiction. (Since the algebra dual to a copy map is unital, the non-unital case is impossible, and that's what needs to be shown.)