You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
For those following the quest for a good categorification of cardinalities with negative values (see, e.g., @John Baez's The Mysteries of Counting), there's an intriguing paper just out:
Interestingly, the authors reference Kapranov on superalgebra:
Kapranov’s work. The relationship between signs and the sphere spectrum has been analysed deeply by Kapranov. In his remarkable paper [21], he first explains (following an insight of Joyal) how supergeometry in mathematics and the Koszul sign rule originate with how ℤ/2 sits inside the sphere spectrum as π_2(𝕊), and then carries on to explain how in physics the notion of supergeometry (rather about spinors and square roots) relates instead to π_2(𝕊). The two instances of the group ℤ/2 containing the signs thus relate differently to 𝕊. Next he analyses the interplay between these two homotopy levels, and finally argues that a deeper understanding of signs should involve the full sphere spectrum. Kapranov’s work is a main source of inspiration, ...
It was a while ago that this was discussed at the nForum, but the discussion resulted in [[super algebra]] and [[spectral super-scheme]].
Kock and Møller continue:
...but his paper is not concerned with the question we address in this work. While Kapranov’s work analyses the behaviour of signs, taking for granted the notion of vector space (and negative scalars), the present work operates at a more fundamental level, aiming actually to construct the signs combinatorially.
You've got two pi_2's
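For reference, the low-degree stable homotopy groups of the sphere spectrum are (standard facts, not taken from the paper):

```latex
\pi_0(\mathbb{S}) \cong \mathbb{Z}, \qquad
\pi_1(\mathbb{S}) \cong \mathbb{Z}/2 \;(\text{generated by the Hopf class } \eta), \qquad
\pi_2(\mathbb{S}) \cong \mathbb{Z}/2 \;(\text{generated by } \eta^2).
```

As I understand Kapranov's point, the Koszul sign rule lives at the level of π_1, while the spinorial signs of physics involve π_2.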
And a further paper by the same authors:
The purpose of this note is to present a more general construction allowing for the realization of a wider class of matrices. In our paper [3] we introduced a setting where one can realize matrices with negative entries. In the present paper we go further and get to complex numbers.
David Corfield said:
Kock and Møller continue:
...but his paper is not concerned with the question we address in this work. While Kapranov’s work analyses the behaviour of signs, taking for granted the notion of vector space (and negative scalars), the present work operates at a more fundamental level, aiming actually to construct the signs combinatorially.
Another paper looking to construct a negative scalar combinatorially (or homotopically at least) is
So, they note
*[image not archived]*
and arrive at a negative unit scalar:
*[image not archived]*
Meanwhile, Kock and Møller consider scalars in their setting: spans 1 <- S -> 1 over P, where P = B(±1), S splits as S_+ + S_-, and the cardinality is |S_+| - |S_-|:
*[image not archived]*
Joachim explained this work to me when I visited him last summer, and it's very nice. The main technical idea is to work with groupoids over B(±1), and certain spans thereof, so Joachim explained it to me as an enhancement of the old "groupoidification" program. The signs are put in "by hand", via the introduction of {±1}, but not relying on any concept of vector space.
It occurs to me that the minus signs you get in an abelian group could be argued to be of a different nature to the "grading sign" for graded vector spaces. The group {±1} acts on an abelian group by a "canonical" automorphism (negation), but individual elements don't themselves have a grading (given an infinite cyclic group, there's no canonical generator, for instance).
John Baez said:
The signs are put in "by hand", via the introduction of {±1}, but not relying on any concept of vector space.
The "via" there is quite involved:
Objects in Z are thus finite groupoids over P, meaning that every arrow in such a groupoid has a parity, namely even or odd. We insist on the terminology ‘even’ and ‘odd’ to stress that these quantities are not yet the signs we are after. Parities are the structure with respect to which we will define orientations; the signs will finally arise as ratios between orientations.
(P is delooped {±1}. An orientation is a factorization through 1 -> P.)
Only then do we have something homotopy-invariant:
The sign splitting of a scalar is the first place where significant signs appear. Unlike parities (and orientations), the sign splitting of scalars as in (6) is a homotopy-invariant notion:
I don't think it's so terribly involved, though I would have tried to describe it in a way that made it sound simpler. However, I do have the feeling that this is not yet the ultimate solution to the problem of where minus signs come from!
If one wants objects whose cardinalities are naturally rational numbers, including both minus signs and fractions, I believe one approach is to use the Euler characteristic of a nice enough algebraic stack. The Euler characteristic of a scheme can be any integer, and making the scheme "stacky" introduces the fractions familiar from groupoid cardinality.
The algebraic stack should be nice enough for this to work. I don't know exactly what "nice enough" involves, but the homotopy quotient of a projective algebraic variety by a finite group should be one kind of "nice enough" example.
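As a quick illustration of where the fractions come from (plain groupoid cardinality only; the stacky Euler characteristic itself won't fit in a few lines), here is a minimal sketch; the function name and encoding are mine:

```python
from fractions import Fraction

def groupoid_cardinality(aut_orders):
    """Cardinality of a finite groupoid: the sum of 1/|Aut(x)|,
    one automorphism-group order per isomorphism class of objects."""
    return sum(Fraction(1, n) for n in aut_orders)

# B(Z/2): one object with Aut = Z/2, cardinality 1/2
print(groupoid_cardinality([2]))           # 1/2

# Finite sets of size <= 3, up to iso: Aut = S_0, S_1, S_2, S_3
print(groupoid_cardinality([1, 1, 2, 6]))  # 8/3
```

No minus signs in sight, of course: that is exactly the gap the signed constructions are meant to fill.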
John Baez said:
However, I do have the feeling that this is not yet the ultimate solution to the problem of where minus signs come from!
It would be fun to get there purely homotopically.
Joachim points to a further development:
Another direction for further work concerns replacing P with the full homotopy type of the zero component of the sphere spectrum, rather than only its 1-truncation.
Sati & Schreiber derive their negative unit from the sphere spectrum, but point in their Outlook to the investigation of the full (linear) higher homotopy theory.
"Parities are the structure with respect to which we will define orientations; the signs will finally arise as ratios between orientations." Are they saying "parities" form a torsor for "signs" ? or maybe it's the other way around.
The first step is to consider what might be called 'signed groupoids', after the signed categories of *A compositional account of motifs, mechanisms, and dynamics in biochemical regulatory networks*:
*[image not archived]*
Such an assignment of parities is their "parity structure on a groupoid".
Then one looks for orientations on a parity structure, which are ways to factor the structure map through the trivial parity structure 1 -> P.
*[image not archived]*
A connected component of a groupoid is orientable if and only if its points have no odd automorphisms.
For a parity structure admitting an orientation, there are 2^k possible orientations, where k is the number of connected components.
So, it seems, given two orientations for a parity structure on a groupoid, then relative to a 2-cell between them, there is a splitting of the groupoid into positive and negative parts:
*[image not archived]*
In the case where the groupoid is 1, there are two such 2-cells:
*[image not archived]*
Looking at the abstract again is helpful; signs are ratios between orientations. So I'm going with this answer: the signs are a group (Z/2), and the orientations are a torsor for this group.
Perhaps the orientations on a connected orientable parity structure are a torsor for the group ℤ/2.
In general, as I mentioned, there are 2^k orientations on an orientable k-component groupoid.
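A brute-force sanity check of the orientability criterion and the 2^k count. This encodes a parity structure as a graph whose edges carry parities, and (as an assumption on my part) reads an orientation as an assignment o(x) in {0, 1} with p(a) = o(x) + o(y) mod 2 for every arrow a : x -> y, i.e. a trivialization of the parity functor:

```python
from itertools import product

def orientations(num_objects, arrows):
    """All orientations of a parity structure, brute-forced.
    arrows: triples (x, y, p) meaning an arrow x -> y of parity p.
    An orientation assigns a value in {0, 1} to each object so that
    every arrow's parity equals the mod-2 sum of its endpoints' values."""
    return [o for o in product((0, 1), repeat=num_objects)
            if all((o[x] + o[y]) % 2 == p for x, y, p in arrows)]

# Two points joined by an odd arrow: connected, no odd automorphisms,
# hence orientable, with 2**1 = 2 orientations.
print(len(orientations(2, [(0, 1, 1)])))  # 2

# One point with an odd automorphism: non-orientable, 0 orientations.
print(len(orientations(1, [(0, 0, 1)])))  # 0

# Two even points in separate components: 2**2 = 4 orientations.
print(len(orientations(2, [])))           # 4
```

The middle example shows the obstruction: an odd automorphism forces the contradiction 1 ≡ 0 (mod 2), matching the criterion that orientable components have no odd automorphisms.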
Hi, thanks for the comments! I actually logged in thinking that maybe I should blog (or tweet?) about this stuff. But I had in mind a more basic level than this discussion.

Thanks for the pointers to Sati-Schreiber (although at the moment it is way above my head). And the paper on biochemical regulatory networks is also very interesting.
It is a good question whether the signs have simply been put in 'by hand'. We have asked ourselves that question, worried that we might somehow be cheating. Are we really seeing the signs emerge if we take the group ±1 for granted? Maybe not. On the other hand how could you work with groupoids and not take it for granted? It is the smallest possible nontrivial groupoid.
But here are some of our thoughts -- I hope I do not sound defensive :-)
First of all, just because we slice over P = BO(1) = B(±1) does not mean we automatically have negative sets. Where is the set with minus 7 elements? In fact, the category Z := grpd/P is equivalent to the category of groupoids with an involution, a purely combinatorial thing that does not in itself have signs or depend on algebra. For the same reason we do not think of P as an algebraic gadget -- rather we think of it as a fundamental object in homotopy theory.
Here is an example of a groupoid with a nontrivial parity structure: it has two points, connected by an odd arrow. (It is the parity groupoid corresponding to the 2-element set with a non-trivial involution.) There is a minus sign here, but it has little to do with a negative set, because the sign is on the arrows, not on the connected components. In fact this parity groupoid is equivalent to 1->P, where there are no signs at all.
So Z does not have signs in the sense of 'a set with minus 7 elements'. But it has the important feature that one can do linear algebra over Z in the same way as one can do linear algebra over F (finite sets or finite groupoids) and that the linear world emerges from the purely combinatorial setting of F (or Z).
In fact one can do linear algebra over Z without noticing that there are any signs!
With linear algebra with coefficients in F you get sums and linear combinations, but no signs. With linear algebra over Z, the signs arise, but in a subtle way. They arise because the monoidal structure relative to which the linear-algebra-as-span-algebra construction takes place is not the cartesian monoidal structure, as it is for F. In particular the unit object is not the terminal object: the unit object is e:1->P, whereas of course the terminal object is id:P->P.
If we take 'scalar' to mean endomorphism of the ground field or unit object, then the scalars are not just objects of Z but rather endospans 1 <- S -> 1 and this means span over P, which means there is a little homotopy down to P. It is only here that the signs arise: since a scalar is now a homotopy commutative square via 1 and 1 down to P, it has a unique comparison map S -> 1+1, where 1+1 is the homotopy pullback of 1 and 1 over P, or if you wish, the loop space of P. The points of this homotopy pullback 1+1 are 'plus' and 'minus' and in this way S acquires a sum splitting into the fibre over 'plus' and the fibre over 'minus'. (And this is a homotopy invariant notion.)
And now we can define the cardinality of such a scalar to be |S_+| - |S_-|.
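Just to spell out this last step in a few lines (ignoring all the groupoid and span structure; the encoding and name are mine, not the paper's), the sign splitting turns the cardinality of a scalar into a difference of two ordinary cardinalities:

```python
def scalar_cardinality(signs):
    """Cardinality |S_+| - |S_-| of a scalar, modelled crudely as
    the list of signs (+1 or -1) of the points of S after the sign
    splitting S = S_+ + S_-."""
    return signs.count(+1) - signs.count(-1)

print(scalar_cardinality([+1, +1, +1]))  # 3
print(scalar_cardinality([-1]))          # -1, a negative unit
print(scalar_cardinality([+1, -1]))      # 0, yet S itself is nonempty
```

The last line illustrates the point about zeros: a scalar of cardinality 0 can still have a lot going on at the objective level.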
So now we have a setting where we can do linear algebra with spans comfortably like over F (and for this it is important that the category has meaningful slices, pullbacks, sums, and so on), and at the same time it has signs -- but only on the scalars.
And once there are signs on the scalars, you get signs also on general spans, because the 'matrix entries' of a span are defined as two-sided fibres, and this in turn means that they have a natural structure of scalar.
This is a quick route to the definition of the category lin± which is our model for vector spaces over the rational numbers. It is indeed not overly involved.
It is not necessary to say what an orientation is to get here, but the notion of orientation is important in the next step, which is to define the cardinality functor. Here the rather surprising twist comes in: that only the orientable components should count as basis elements in the vector space associated to a slice.
The way these things evolve quite naturally from very little input (you could say that the starting point is that of groupoids with involution) made us think in the end that we are not cheating. (Nobody ever accused us of cheating :-) but we did ask ourselves many questions like that along the way, and we have also asked ourselves if it is all a triviality. But in the end our excitement prevailed :-)
But I think the coolest thing is the application to exterior powers and determinants. The features like signs on scalars and cancellations at the non-orientable points are illustrated well in these applications. The exterior power looks like it is the symmetric power -- way too big! But the orientable locus is of the correct dimension and has the correct universal property as recipient of alternating maps. Likewise, the determinant is a big object, but the orientable part visible in cardinality has the correct size, one-by-one! and satisfies an objective Leibniz expansion formula.
So what from the viewpoint of ordinary algebra with numbers may be just a zero, can have a lot going on at the objective level of lin±.
But I was actually thinking of explaining it all a bit more from the ground up, in the hope of getting the next generation interested :-) I have found the notes from my talk on this at CT2023 in Louvain-la-Neuve (although I had both exterior powers and determinants in the abstract, I ran out of time and did not really get that far in the talk), and I think it can become a thread of tweets, or zlips or whatever it is called.
A thread would be great, and maybe we can copy it over to the n-Cafe if you want.
Just briefly, Sati-Schreiber are suspending the whole sphere spectrum (homotopy pushout through two copies of the point), while you are looping P (homotopy pullback through two copies of 1), the 1-truncation of the zero component of the sphere spectrum, to a 2-element set. Then your 2-cell is of order 2, while theirs is a generator for ℤ, and they restrict to the 2-cell and its inverse.