You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Last week I finally posted my paper with Jesper Michael Møller: /Signs in objective linear algebra, exemplified with exterior powers and determinants/ [https://arxiv.org/abs/2603.19437]. I had promised this paper for a long time. Actually I gave a talk about this at CT2023 in Louvain-la-Neuve and said that the paper would be available shortly.
Now that it is finally out, I thought I should try to blog about it, hoping to get more people interested.
So what is a negative set supposed to mean? The context for the question is the observation that the elementary operations with natural numbers -- addition, multiplication, exponentiation -- are just shadows of categorical operations with finite sets -- disjoint union, cartesian product, and mapping sets. 'Shadow' means taking cardinality.
This leads to the question of what kind of objects the negative integers could be the cardinality of.
There are various interesting answers, depending on what you want to do with them. And maybe it is also the case that there is no answer for all such things you want to do. Schanuel gave an answer in 1990, in terms of polyhedral sets. It is very cool, and his paper is very nice reading. But it is easier to start with these slides from talks of John Baez: /The mysteries of counting/ [https://math.ucr.edu/home/baez/counting/] (I heard two versions of this talk, one in Sydney and one in Chicago -- both were general-audience talks. It is a very nice topic for general audiences, since it is fancy mathematics about elementary questions.)
I don't want to go into Schanuel's achievements, nor will I go into Joyal's virtual species or Loeb's hybrid sets. Because super-cool as these developments are, these notions do not seem suitable for doing objective linear algebra on top. (That's not a universal criterion, of course, it is just one motivation, to objectify algebraic combinatorics.)
So what's objective linear algebra? Well, most readers here are probably familiar with how the category of finite sets and spans works a bit like matrix algebra. After all, a span is a doubly indexed family of sets, just like an ordinary matrix is a doubly indexed family of numbers. And furthermore, pullback-composition of spans corresponds precisely to matrix multiplication! Even if you know this already, it is probably something you enjoy seeing again and again :-) Given one span I <- M -> J and another span J <- N -> K, the span composite has apex M \times_J N, and if you write that as a sum of its fibres over J, you get M \times_J N = \sum_j M_j \times N_j, and the (i,k)-entry in that span is \sum_j M_{ij} \times N_{jk}, which is the formula for the entries of the matrix product. (It seems Yoneda already knew about this, and the next person to provide insights was probably Bénabou, who studied it because the category of sets and spans is of course properly a bicategory, not an ordinary category, since pullbacks are only given up to isomorphism.)
So far this is matrix algebra.
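For readers who like to see this concretely: here is a minimal sketch in Python (the encoding of a span as a list of triples is my own illustration, not from the paper), checking that pullback-composition of spans of finite sets computes the matrix product of their fibre-count matrices.

```python
import random

# A span I <- M -> J of finite sets is stored as a list of triples
# (m, i, j): an element m of the apex together with its two feet.

def compose(span1, span2):
    """Pullback-composition: the apex of the composite consists of
    pairs (m, n) whose middle feet agree."""
    return [((m, n), i, k)
            for (m, i, j1) in span1
            for (n, j2, k) in span2
            if j1 == j2]

def matrix(span, I, J):
    """The doubly indexed family of fibre cardinalities |M_{ij}|."""
    return [[sum(1 for (_, i2, j2) in span if (i2, j2) == (i, j)) for j in J]
            for i in I]

I, J, K = [0, 1], [0, 1, 2], [0, 1]
span1 = [(m, random.choice(I), random.choice(J)) for m in range(6)]
span2 = [(n, random.choice(J), random.choice(K)) for n in range(5)]

A, B = matrix(span1, I, J), matrix(span2, J, K)
product = [[sum(A[i][j] * B[j][k] for j in J) for k in K] for i in I]

# |(M x_J N)_{ik}| = sum_j |M_{ij}| |N_{jk}|
assert matrix(compose(span1, span2), I, K) == product
```

The assertion at the end is exactly the fibrewise identity discussed above.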
It is very fruitful to consider an equivalent category to the category of finite sets and spans, namely the category whose objects are slices F/I of the category F of finite sets, and whose morphisms are /linear functors/, meaning functors that preserve finite sums. This is like abstract vector spaces instead of just matrix algebra. Indeed, F/I is the finite-sum completion of the discrete category I, just as the vector space with basis I is the linear-combination completion of the set I. It is not difficult to show that every linear functor is given precisely by a span (by upperstar-lowershriek along the span), just like every linear transformation of vector spaces with chosen bases is given by applying a matrix. The basic fact that composition of linear transformations is given by matrix multiplication now becomes the Beck-Chevalley condition about interchanging upperstar and lowershriek around a pullback square:
          u
     · -------> ·
     | _|       |
    p|          |q        =>      u_! p^* =~= q^* v_!
     |          |
     v          v
     · -------> ·
          v
(For finite sets I, the viewpoint of slices and linear functors is not very different from that of spans. But if you want to model infinite-dimensional vector spaces and profinite-dimensional vector spaces, the slice viewpoint is much cleaner, and there is a nice duality theory: the linear dual of F/I is F^I, the presheaf category, and the continuous linear dual of F^I is F/I. Not sure why I had to say this, but the conclusion is that there are several reasons to work with categories of slices rather than categories of spans. But for the following it does not make much of a difference.)
The story with finite sets generalises to finite groupoids. This was first worked out by Baez, Hoffnung and Walker under the name groupoidification. It also generalises to finite infinity-groupoids (=pi-finite spaces), but to do this it is necessary to take a more consistent homotopy viewpoint. And then of course in retrospect it is better to take the consistent homotopy viewpoint already for 1-groupoids. So: homotopy slices, homotopy pullbacks, homotopy fibres, homotopy quotients, homotopy sums (which means colimit over a groupoid just as an ordinary sum is a colimit over a set). The Baez-Hoffnung-Walker paper is both groundbreaking and correct, but for a more modern approach, see Gálvez-Kock-Tonks: /Homotopy linear algebra/. (John talked about groupoidification in Barcelona in 2008 and it was a big revelation for me, and I just needed groupoids for what I was doing at the time (and ever since). Another huge influence was the Baez-Dolan paper From finite sets to Feynman diagrams, which is strongly recommended reading.)
But I am losing myself in preliminaries. Now back to negative sets. Linear algebra with set-slices gives a nice objective model for (some basic aspects of) linear algebra over natural numbers, and with groupoids you can get nonnegative rational numbers. (Recall that the cardinality of a groupoid X is \sum_{x \in \pi_0 X} 1/|Aut(x)|.)
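As a quick numeric illustration (the function name is my own, not from the paper), groupoid cardinality is easy to compute from a list of automorphism-group orders, one per isomorphism class:

```python
from fractions import Fraction
from math import factorial

def groupoid_cardinality(aut_orders):
    """|X| = sum over iso classes [x] of 1/|Aut(x)|."""
    return sum(Fraction(1, a) for a in aut_orders)

# The groupoid of sets of size < 5 and bijections has one iso class
# per size k, with automorphism group of order k!, so its cardinality
# is a partial sum of the series for e.
card = groupoid_cardinality(factorial(k) for k in range(5))
assert card == Fraction(65, 24)  # 1 + 1 + 1/2 + 1/6 + 1/24
```

This is how non-integer (but nonnegative) rational cardinalities arise.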
But how to get negative numbers? The notion of negative sets alluded to in the beginning should form a category good enough to make sense of linear algebra over it, so that we can do some linear algebra with signs. Among the many, many things we would like to do are exterior powers and determinants, two notions that depend crucially on minus signs.
So: how to define some notion of negative sets -- and more generally signed sets -- that assemble into a category one can do linear algebra over? The summary above already gives a wish list for a good 'category of negative sets': namely we would like it to have pullbacks, sums, good notions of slices with upperstar, lowershriek and Beck-Chevalley. Technically a good way to get these properties and some other convenient ones is to ask for a locally cartesian closed category, for example a topos.
There are several reasons why this is difficult. If you try to take pairs of finite sets (A,B), thought of as A minus B, and divide out by the equation (A,B) ~ (A+C, B+C) you expect from the construction of the integers from the natural numbers, then you get a category that collapses to a point, at least if you want to maintain the distributive law. This is classical, and Schanuel already mentions it. What else? Well, homotopy theorists will tell you that the true integers are the sphere spectrum. This is because the group completion of the 1-groupoid B of finite sets and bijections is the infinity-groupoid underlying the sphere spectrum S. The sphere spectrum has Z-many components, all equivalent to each other by translation, and its first few homotopy groups are Z/2, Z/2, and Z/24. The group completion map sends a finite set n to the n-th connected component, and it sends a permutation n~n to its sign (an element in Z/2 = {+1,-1}). This is bad news from the viewpoint of combinatorics, where we would be sad to collapse permutations :-( Also, S does not serve to do slices and linear algebra, because it has no non-invertible arrows; it is just an infinity-groupoid. (Well, of course one can do linear algebra over S, it's the whole business of stable infinity-categories. I don't want to say anything bad about that.)
One may try to invert the sum operation in the category F of finite sets instead of inverting the disjoint-union operation in the groupoid B, but this will still only be an infinity-groupoid: inverting the sum operation will automatically invert all the arrows too: a group object in Cat is automatically a groupoid. (John Baez explained this in another talk in Barcelona, also in 2008, in a workshop we had on 2-groups, but the result goes much further back, at least to the 1972 thesis of Sinh, Grothendieck's PhD student in Vietnam, and later the first female mathematics professor in Vietnam.) I end up mentioning John all the time because he has been so influential for this whole project.
So, all-important as the sphere spectrum is, it is not the solution to the problem of finding a good negative-sets category for objective linear algebra.
(I seem to get lost in intro-material -- let me see if I can get to the point. Please skip the previous two paragraphs if you are in a hurry.)
Observation: it is not necessary (nor desirable) to have signs on all objects, because when we form a linear combination we are not (at the moment) interested in having signs on the vectors -- we don't need linear combinations with a negative number of vectors. We only need the signs on the objects when they serve as scalars.
So what is a scalar? We could say it's an element of the ground field. We could also say it's a linear endomorphism of the ground field. This turns out to be better. So a scalar should essentially be a span from 1 to 1.
Now for the definitions: let O(1) be the group with two elements, and let BO(1) be its classifying space, so a one-object groupoid with only two arrows, called even and odd. The terminology even and odd may not be ideal, but we want to avoid saying plus and minus, because they are not exactly the signs we are after yet.
We are interested in doing linear algebra over Grpd/BO(1). Here Grpd is the category of finite groupoids, and Grpd/BO(1) is the weak slice. So its objects are parity groupoids: groupoids in which every arrow has a parity, even or odd. It is important that we use the weak slice (of course -- everything should be homotopy!): this means that the morphisms in Grpd/BO(1) are triangles S -> T over BO(1) with a 2-cell in the triangle, a little homotopy. I say 'little' because its components can only be the identity or the non-identity arrow in BO(1).
Note that we are concerned with slices of the slice Grpd/BO(1). In case you think this is too confusing, it is good to observe that Grpd/BO(1) can equivalently be described as the category of groupoids with an involution. That's a nice elementary description, but it also ends up being confusing, because combinatorics is full of sign-reversing involutions, and these play a very different role (and will come in shortly). Somehow perhaps it is better not to unpack too much...
Now one can just develop linear algebra over Grpd/BO(1) and get a 'linear' category: the objects are slices of Grpd/BO(1), and the arrows are (internal) finite-homotopy-sum-preserving functors, which means that they are given by spans over BO(1). Over BO(1) in the homotopy sense, of course, so there is a little 2-cell involved, a natural transformation whose components are little arrows in BO(1), meaning even or odd.
One important object in Grpd/BO(1) is the point e : 1 -> BO(1). It is not the terminal object because it has two automorphisms: one given by the identity of 1 with even 2-cell, and one given by the identity of 1 with odd 2-cell. (The terminal object is of course the identity BO(1) -> BO(1), as in any slice category; it does not play any important role in the theory here.)
The important point is now that BO(1) (and hence Grpd/BO(1)) has a monoidal structure which is different from the cartesian monoidal structure. It is given by cartesian product of underlying groupoids, but with parity structure given by the monoidal structure on BO(1): parities multiply. (It can be interpreted as Day convolution, if you wish.) It induces a monoidal structure on the slices, and the neutral object is e. It is with respect to this monoidal structure we now talk about scalars: a /scalar/ is a linear functor from the unit to itself. So it is a span e <- S -> e together with a little 2-cell down to BO(1). From such a span there is a unique morphism of spans to the span given by pullback of e along e. This pullback is the loop space of BO(1), that is, the set of group elements of O(1), so the set {+1,-1}. So the underlying groupoid of the scalar splits into a plus part S_+ and a minus part S_-. Scalars have this splitting. General objects S do not, because being a scalar is more structure (since e is not terminal). Now we finally define the cardinality of a scalar to be |S_+| - |S_-|.
Now we should define the cardinality of a span, following the insight of Baez-Hoffnung-Walker. Write out a span A <- M -> B as the homotopy sum of its two-sided fibres. Since taking fibres is to pull back to 1, these two-sided fibres are now scalars, and we can take the cardinality of those to get a matrix of rational numbers. This is not yet quite the correct assignment, because at the objective level everything works with homotopy sums, whereas down in the world of numbers we use ordinary sums. It turns out the correct answer is to divide by the order of the automorphism groups on the codomain of the span, so the final formula for the matrix entries looks like this: |M_{ab}| / |b!|. Here b! is slick notation for the automorphism group of an element b in B.
The appearance of these symmetry factors is a fact of life in homotopy linear algebra (also without signs). It comes about because the formula for the matrix of a linear transformation is given by expressing the images of the basis vectors in the domain as linear combinations of the basis vectors in the codomain. But at the groupoid level, these have to be homotopy sums, which can be unpacked as ordinary sums with division by automorphism groups, so the factorials appear in the codomain and not in the domain for this reason.
Exercise: work out the cardinality of the identity span of a groupoid B. The diagonal entries are the loop spaces Omega_b B of B, which by themselves do not have cardinality 1. But the automorphism group b! acts canonically on the loop space and the quotient is 1 (and hence has cardinality 1), as required. (This was a hint for the exercise.)
(There is some work to do to check that composition of spans has two-sided-fibre scalars equivalent to the appropriate multiplication of scalars from the original two spans. The trouble is to check that it is not just an equivalence of groupoids (which is known from Homotopy Linear Algebra) but that it is an equivalence of scalars and hence preserves the sign splitting. In fact this is very much how it works at this level: you don't really need to think about signs -- they are just there in the background. What you need to think about is scalars, and keep track of some 2-cells.)
A crucial insight of Baez-Hoffnung-Walker is that all the cardinality assignments should assemble into a functor from the objective category to the category of vector spaces. So not just the elements of the slice should have a cardinality -- the slice itself should have a cardinality (in which the cardinalities of the 'vectors' can land). (This is one point where it is an advantage to have slices and linear functors instead of just groupoids and spans.)
So what is the vector space associated to a slice over a parity groupoid T? Here is a little surprise: it is not just the rational vector space spanned by \pi_0 T, as is the case in ordinary homotopy linear algebra. To explain the required correction, it is convenient to introduce the notion of orientation of a parity groupoid S. It is simply a morphism to e (whose domain 1 is the universal (double) cover of BO(1)). Since e is not terminal, this is extra structure. Now there is a little lemma:
LITTLE LEMMA: A parity groupoid is orientable iff it has no odd automorphisms.
This is because any odd automorphism will be an obstruction to naturality of the little 2-cell involved in the map to e.
A scalar structure is the data of TWO orientations. The sign splitting is then given by
plus for the components where the orientations agree and minus for the components where they disagree.
(There is a slight subtlety here: above, spans were described as homotopy-commutative squares down to BO(1), but in reality a span in Grpd/BO(1) rather has two triangles, one for each leg of the span. This is why a scalar is the data of two orientations. The square viewpoint is really only an equivalence class of pairs of triangles, but any two representatives are homotopy equivalent: for the sake of defining the linear functor, it is only the ratio between the two triangles that matters. The individual orientations are not a homotopy-invariant notion, but the ratio is.)
And now a medium lemma:
MEDIUM LEMMA: If s is a non-orientable point, then any fibre over it (as a scalar) has cardinality zero.
(Medium in difficulty, but conceptually I still find it striking.) The proof is just a calculation: the odd automorphism can be used to construct a sign-reversing involution (in the long tradition of combinatorics, formalised by Garsia and Milne in their bijective proof of the Rogers-Ramanujan identity). (Not that I know exactly what that RR-identity is, but I always find it fascinating when ideas go a long way back and connect with other areas of mathematics. In fact the whole idea of the objective method has a very long tradition in combinatorics, where it is well appreciated that algebraic identities are not fully understood until they have been given a bijective proof. So the best proof of an identity a = b is to find a set A counted by a and a set B counted by b and then establish a bijection A ≅ B. (There is a remarkable book by Petkovsek, Wilf and Zeilberger called "A=B" -- that's the title of the book. It is quite a mind-boggling book, about how to find such proofs semi-mechanically.))
Sorry, I digressed again.
Upshot: Whatever lies over a non-orientable point will cancel out in cardinality!
So if the coefficient is always zero, it does not deserve to be a basis element. The correct assignment of cardinality to a slice over a parity groupoid T is to use only the orientable locus of T:
the Q-vector space spanned by the isomorphism classes of orientable points of T.
So that's our proposal for objective linear algebra with signs. The objects do not have signs in themselves, but the scalars do.
In fact the cardinality functor is characterised by a universal property. It is the unique homotopy-sum-preserving functor to vector spaces under BO(1). The functor from BO(1) to vector spaces sends the point to Q and the nontrivial arrow to multiplication by -1. The functor from BO(1) to the objective category sends the point to e and the nontrivial arrow to the nontrivial automorphism of e.
(This fits into a very grown-up theorem by Harpaz, who shows that for any infinity-category C and a functor to an infinity-semiadditive infinity-category D, there is a unique finite-homotopy-sum-preserving functor from the category of C-decorated spans to D. In other words, the category of C-decorated spans is the free infinity-semiadditive infinity-category generated by C. All this was explained to us by Maxime Ramzi.)
Now for the applications, justifying that there is some sense to the definition.
What is the exterior power of a finite set?
If the finite set S has n elements then \wedge^k S should have {n \choose k} elements and then there should be some sign rules! Of course no ordinary finite set can possibly satisfy those sign rules, since there are no signs around. We argue that the answer is instead a parity groupoid. It is simply the parity groupoid S^k/k!. Here k! is the symmetric group on the set k, and the quotient is a weak quotient, of course. The detail is in the parity structure: the weak quotient has only arrows coming from k!, and each has a parity by the sign homomorphism k! -> O(1). So give the quotient that parity structure.
It looks like we are defining the symmetric power, not the exterior power, but look at its dimension: we have to identify the orientable points. A point is a k-tuple, and as soon as there is a repeated entry, that tuple has an odd automorphism, given by interchanging two such entries. So the orientable points are the tuples of /distinct/ points, and on those the group k! acts freely, and the quotient has {n \choose k} points.
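The dimension count can be checked by brute force; here is a throwaway sketch (function name mine), counting k!-orbits of distinct k-tuples:

```python
from itertools import product
from math import comb

def orientable_dimension(n, k):
    """Count the S_k-orbits of k-tuples from an n-set with all entries
    distinct.  (A tuple with a repeated entry has an odd automorphism --
    swap two equal entries -- so it is non-orientable and contributes 0.)
    Since S_k acts freely on distinct tuples, an orbit is just a subset."""
    distinct = [t for t in product(range(n), repeat=k) if len(set(t)) == k]
    orbits = {frozenset(t) for t in distinct}
    return len(orbits)

assert orientable_dimension(5, 2) == comb(5, 2)  # = 10
assert orientable_dimension(4, 3) == comb(4, 3)  # = 4
```

So the orientable locus of S^k/k! indeed has {n \choose k} points.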
And then one can prove that \wedge^k S has a universal property as recipient of alternating linear functors, once it is sorted out what an alternating linear functor means in this setting.
Now there are many things one may want to do with exterior powers. We only do one, namely determinants. We define the determinant of an endospan J <- A -> J to be the n-th exterior power of the span, assuming that J has n orientable components. We call it the top exterior power, but of course it is not the case that \wedge^p J is empty for p > n. It is only /non-orientable/, and therefore invisible in cardinality!
Now we can calculate the cardinality of the determinant of our span. The determinant as a parity groupoid is huge, but among its two-sided fibres only one corresponds to an orientable point, so the cardinality is just the cardinality of a scalar. (Usually one should divide by a factorial here, but a tuple of distinct elements has trivial automorphism group.) Now you just calculate that scalar. First you do it very quickly in terms of groupoids. But then you remember that it is not enough to know the groupoid, you need to know its structure as a scalar, so then you go through the calculation and keep track of all the small 2-cells to figure out the signs. The outcome is precisely the Leibniz expansion of the determinant! Or rather, it is an objective upgrade, because it is now an equivalence of scalars in .
The rather surprising side conclusion is that the usual Leibniz expansion of the determinant could have been made with all endomaps rather than just permutations, but that the noninvertible maps cancel out to give contribution zero. Quite a silly thing to do with numbers, but, as we see it, the correct thing to do at the objective level.
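Down in the world of numbers, of course, one sticks to the permutations. A plain numeric sketch of the Leibniz expansion (my own code, not from the paper), with the sign computed by inversion count:

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation (as a tuple), via the parity of its
    inversion count -- the shadow of the sign homomorphism k! -> O(1)."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """Leibniz expansion: sum over permutations only.  At the objective
    level the sum runs over all endomaps, with the noninvertible ones
    cancelling out in pairs via sign-reversing involutions."""
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

assert det_leibniz([[2, 1, 0], [1, 3, 1], [0, 1, 2]]) == 8
```

The cancellation of the noninvertible maps is invisible here: numerically they were never summed in the first place.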
We give an example of a determinant calculation, namely for the endospan underlying the category given by a split idempotent. So the feet of the span have two elements, and the apex has five. The determinant has nine components, but only one of them is orientable of course. It is quite funny to contemplate the other eight regions, and see how they neatly cancel out (as predicted by the 'medium lemma').
It was not my intention that it should be so long :-(
This is fascinating. Could we then formulate an analogue of Cramer's rule?
Yes, we can almost do matrix inversion with a version of Cramer's rule. First we can define the adjugate matrix -- or the cofactor matrix. This can be done in terms of the top-minus-first exterior power. Unfortunately this is a bit involved. And then we can prove an equivalence of groupoids expressing an equivalence of spans A \circ Adj(A) \simeq Det(A) \cdot Id.
On the left it is just span composition. On the right it is a scalar times the identity matrix.
This is essentially matrix inversion, except that usually one would 'divide' by Det(A) on both sides of the equation. Unfortunately we don't know how to divide by Det(A) :-(
Generally in objective algebra when you want to model division by a number, you have to set it up so that the division is weak quotient by a group action. Remarkably, it is often the case that nature has already set it up in this way, and in many cases one may even say that THE origin of the division is a group action. Unfortunately this division here does not seem to be one of those cases: we can't figure out which group should have order Det(A) or how it should act on A \circ Adj(A). It is quite annoying. Maybe we are doing something wrong.
Speaking of cofactor matrices, here is something funny. Usually the ij-th cofactor is defined by 'deleting the i-th row and the j-th column'. And then you think you have an (n-1)x(n-1) matrix -- which is obvious if you work with numbers. But now we are working with sets or groupoids, and the matrix is square, I times I. But if you remove i from one copy of I and remove j from the other copy of I, then you no longer have a square matrix, strictly speaking: now it is an (I-i) times (I-j) matrix. It is necessary to make an identification of these two sets! When we do this in ordinary linear algebra over numbers, we do it in a silly, unnatural way, by reindexing all the rows after i and reindexing all the columns after j. And then there is a crazy sign rule giving a sign (-1)^{i+j}. Where did that sign come from?
Imagine you have five pairs of socks, in red, blue, yellow, green, and early-spring-sunset-fuchsia. Then you lose one red sock and one green sock. What do you do? Well, according to the logic of classical cofactor matrices you re-pair the socks by moving one of the blue socks down to pair with the lonely red sock, and then you move one yellow sock down to pair with the blue sock that became lonely because of the first move, and then you fix the problem with the lonely yellow sock by pairing it with the green sock that remained from the original loss. And voilà! In just a few simple moves you have fixed the problem and restored a matching of your socks. Very nice! But wait a minute! Why didn't you just ask the left-over red and the left-over green socks to pair up with each other without disturbing all the other pairs? Very clever! It is the natural thing to do, and it is an easy example of something cool called residual bijection!
Residual bijection: given a bijection of finite sets A + C ≅ B + C, there is a canonical bijection A ≅ B.
I don't want to explain it, because it should really be done with a drawing. I don't know who first formalised residual bijection, probably it goes many decades back, but the earliest reference I know of is André Joyal's /Foncteurs analytiques et espèces de structures/ from 1984. Later with Street and Verity he also figured out that it constitutes in fact a canonical trace on the monoidal groupoid of finite sets and bijections.
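Even without a drawing, the construction is easy to program. A minimal sketch (my own encoding, with the bijection given as a dict): starting from a in A, apply f repeatedly, feeding the result back in each time it lands in C, until it escapes into B.

```python
def residual_bijection(f, A, B, C):
    """Given a bijection f : A + C -> B + C (as a dict), produce the
    canonical bijection A -> B.  The chase terminates because f is
    injective: the walk can never revisit an element of C."""
    def chase(a):
        x = f[a]
        while x in C:
            x = f[x]
        return x
    return {a: chase(a) for a in A}

# The sock example: the lonely red and green socks pair up directly,
# by chasing through the blue -> yellow re-pairing chain.
f = {'red': 'blue', 'blue': 'yellow', 'yellow': 'green'}
assert residual_bijection(f, {'red'}, {'green'}, {'blue', 'yellow'}) == {'red': 'green'}
```

Injectivity of f is exactly what makes the while-loop safe: each element of C is visited at most once.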
Coming back to cofactor matrices, in order to sort out how to get them from next-to-top exterior powers, we figured out it is better to use the natural way to re-pair rows and columns. Then the mysterious sign (-1)^{i+j} ends up as the sign of a permutation, and then we are in business.
(Unfortunately doing all this is a bit messier than we think it should be, and it held back the paper for three years, until we decided to cut it off at determinants. The cofactor matrices and Cramer's rule will be for another paper.)
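For reference, here is the classical rule under discussion as a plain numeric sketch (my own code), with the (-1)^{i+j} signs and the check A · Adj(A) = Det(A) · Id:

```python
def minor(A, i, j):
    """Delete row i and column j (the classical, 'unnatural' re-indexing)."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    """Cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def adjugate(A):
    """Adj(A)[i][j] = (-1)^{i+j} det(A with row j and column i deleted)."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1], [5, 3]]
d = det(A)  # = 1
assert matmul(A, adjugate(A)) == [[d, 0], [0, d]]
```

When Det(A) = 1, as here, the adjugate is literally the inverse, which is the safe case mentioned below.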
I apologise to the colour-blind for the above example.
Joachim Kock said:
This is essentially matrix inversion, except that usually one would 'divide' by Det(A) on both sides of the equation. Unfortunately we don't know how to divide by Det(A) :-(
Generally in objective algebra when you want to model division by a number, you have to set it up so that the division is weak quotient by a group action. Remarkably, it is often the case that nature has already set it up in this way, and in many cases one may even say that THE origin of the division is a group action. Unfortunately this division here does not seem to be one of those cases: we can't figure out which group should have order Det(A) or how it should act on A \circ Adj(A). It is quite annoying. Maybe we are doing something wrong.
I find this surprising: rather than a group, you have a groupoid-with-involution whose cardinality is the determinant. Is there no sense that an action of such a groupoid on another is comparable to a group action?
Or I suppose a more relevant first question is whether the groupoid constructed to present the determinant may have a meaningful action on the adjugate matrix
It is difficult to imagine any. For example, if the input endospan is discrete, then the groupoid underlying the determinant will be discrete again (but with signs). It is difficult to have it act on anything :-( I don't exclude that we are overlooking something essential here. But it could also be that Cramer's rule is simply not a very objective thing. Maybe it is just a hack you can do with numbers. Don't quote me on this one. At the moment all I want to say is that we have not understood it well enough.
Can we describe/understand the collection of endospans A such that Det(A) = 1?
At least, the updated Cramer formula above implies that such an A is "invertible", with inverse its adjugate Adj(A).
Yes, that's a safe inversion formula. We don't have any particular characterisation of those spans, but we also didn't really try to find one.
We also have some other work (in final phase) on matrix inversion using Möbius inversion instead. There we can invert matrices that have 1s on the diagonal.
This is all very cool, Joachim! I've been greatly enjoying your work on objective linear algebra and applications to Möbius theory for a while now. I was wondering what kind of applications you see on the horizon for this improved theory. I would expect objective homological algebra to be quite interesting -- would that be on your radar?
Thanks for your kind words -- very much interpreted as encouragement. Homological algebra is indeed on our radar. But we are stuck. New insight is required before this can get off the ground.
We have only looked at singular chains of simplicial sets, where the differential is something very combinatorial. As you can imagine, d\circ d = 0 is not really a zero but rather a big object that happens to have a cancellation so that the cardinality is zero. This means that d\circ d \circ d could suddenly be interesting, and it has /two/ cancellations depending on where you set the parentheses. We don't know if these two cancellations are equivalent in any sense -- or if it matters. But the real problem is to understand kernels. In objective linear algebra (at least in algebraic combinatorics) we are used to vector spaces having canonical bases we can work with. But even simple kernels such as x-y we cannot really grasp. Maybe it is a simple new insight we are missing, but something paradigmatically different from the simpleminded bases we are used to in slice categories. If anybody can crack that first enigma, maybe the next steps will be easier? I don't mean to be speculating. Just to note that there is so much unexplored territory out there -- and it is not clear how important or unimportant it is from the viewpoint of mainstream mathematics. I mean, homological algebra is something very practical, computational. Who cares if a small fragment of it can be done with coefficients in sets or groupoids? (Because of such existentialist doubts, encouragement is very well received.)
Joachim Kock said:
In objective linear algebra (at least in algebraic combinatorics) we are used to vector spaces having canonical bases we can work with.
Indeed, that bugged me a bit at first: the slice is like a vector space with a specified basis.
Is it possible to define an objective vector space "without the basis"? Naively, I would identify two slices if they are related by an invertible linear functor.
Joachim Kock said:
Maybe it is a simple new insight we are missing, but something paradigmatically different from the simpleminded bases we are used to in slice categories. If anybody can crack that first enigma, maybe the next steps will be easier?
This is wild speculation, but one of the most fascinating perspectives on homology I've heard is this talk by @Brandon Shapiro:
There kernels and cokernels get a very 'objective' interpretation. Basically a linear map is replaced by a span of sets, where the left map is epi and the right map is mono. Thus the apex is thought of as the image. Then a sequence of such spans is a chain when neighbouring spans are such that the following is a pullback:
(diagram not preserved in the archive)
and it is exact when that square is also a pushout.
There is also an accompanying paper, which is much more technical (it appears this one is being rewritten in two parts, I link to the version I read):
Joachim Kock said:
But the real problem is to understand kernels. In objective linear algebra (at least in algebraic combinatorics) we are used to vector spaces having canonical bases we can work with. But even simple kernels such as x-y we cannot really grasp.
I'd like to understand this better though :thinking: I guess you can still define kernels as equalizers with the zero map, but then you say the problem is computing them?
Does anyone dare to consider an example? I find it hard to see what to do here without looking at a simple example.
Sorry my replies about homology were a bit vague; I was only trying to say that we did look in these directions, without luck. I was writing from memory about some failed experiments from several years ago. Since a large proportion of my experiments fail, I cannot fully keep track of them. I searched a bit in my files but did not find anything. However, this particular difficulty with kernels could potentially be an instructive failure, so I promise to come back to it. (But right now I have visitors and other priorities. Please hang on.)
Peva Blanchard said:
Is it possible to define an objective vector space "without the basis"? Naively, I would identify Set/A and Set/B if they are related by an invertible linear functor.
One could say that an abstract slice is a category equivalent to a slice, but without specifying an equivalence. Then one could still define linear functors to be finite-sum preserving. Specifying an equivalence would then be like picking a basis. Not a fully satisfactory answer. It would be nice to have an intrinsic characterisation of such categories. I don't know if this is a difficult question, I just don't know the answer.
A bit like how a Kapranov–Voevodsky 2-vector space is something equivalent to Vect^n as a Vect-module category, for some n, or more generally Vect^G for some finite groupoid G. (cf https://golem.ph.utexas.edu/category/2008/10/morton_on_groupoids_and_2vecto.html)
@Joachim Kock When you say "a category equivalent to a slice" is that as a bare category, or one with extra structure?
David Michael Roberts said:
A bit like how a Kapranov–Voevodsky 2-vector space is something equivalent to Vect^n as a Vect-module category, for some n....
Note the rigidity here:
Any category equivalent to Vect^n has a canonical basis, in the sense that there's a unique set of isomorphism classes of objects such that any object is a coproduct of copies of (arbitrary) representatives of those classes.
This implies that the 2-group of autoequivalences of Vect^n has as its pi_0 just the symmetric group S_n, not something more like GL(n).
If you go to the infinite-dimensional 2-vector spaces studied here, then you can get interesting representations of Lie groups (and even Lie 2-groups) on them. But this happens not because the basis is no longer canonical, but because instead of a finite set of basis elements you get a measure space of them - where you have to interpret "basis" a lot more subtly, like how physicists say L^2(R) has a "basis" of delta functions.
Huh. Even if you don't have the n or the equivalence to hand you can (in principle) get those objects?
But how rigid are the categories that Joachim is talking about? And if they have more structure, that would make them more rigid, I guess
David Michael Roberts said, about how every category equivalent to Vect^n has a canonical 'basis' of objects:
Huh. Even if you don't have the n or the equivalence to hand you can (in principle) get those objects?
Yes, they're the indecomposable objects: the nonzero objects that aren't coproducts of other objects in a nontrivial way. Equivalently, they're the irreducible objects: the nonzero objects without nontrivial subobjects. Equivalently, they're the objects whose endomorphism ring is the ground field.
All this is a categorification of how every free commutative monoid is free on the set of elements that aren't sums of other elements in a nontrivial way.
If you draw the free commutative monoid on 2 generators, you'll see what's going on.
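A quick Python check of this in a finite window of N^2, viewed as the free commutative monoid on two generators: the only nonzero elements that are not sums of two nonzero elements are the generators themselves.

```python
# Elements of N^2 in a finite window.
window = [(a, b) for a in range(5) for b in range(5)]

def is_nontrivial_sum(v, elements):
    """Is v a sum of two nonzero elements of the monoid?"""
    return any(
        u != (0, 0) and u != v and (v[0] - u[0], v[1] - u[1]) in elements
        for u in elements
    )

irreducible = [v for v in window
               if v != (0, 0) and not is_nontrivial_sum(v, set(window))]
print(irreducible)  # [(0, 1), (1, 0)] -- exactly the two generators
```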
But how rigid are the categories that Joachim is talking about?
I haven't thought enough about that! The stuff I just mentioned is very reliant on the absence of negatives. For example, every invertible matrix of natural numbers is just a permutation matrix, but there are lots more invertible matrices of integers. This is why every free commutative monoid has a canonical 'basis', but this is not true for free abelian groups.
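The claim about invertible matrices of natural numbers can be checked by brute force for 2x2 matrices with small entries: a matrix has a natural-number inverse exactly when its determinant is +-1 and its adjugate divided by the determinant is entrywise nonnegative, and the survivors are the permutation matrices.

```python
from itertools import product

def is_nat_invertible(m):
    """Does this 2x2 natural-number matrix have a natural-number inverse?"""
    (a, b), (c, d) = m
    D = a * d - b * c
    if D not in (1, -1):
        return False
    inv = (d * D, -b * D, -c * D, a * D)   # adjugate / det, using D**2 == 1
    return all(x >= 0 for x in inv)

entries = range(4)
nat_inv = [((a, b), (c, d))
           for a, b, c, d in product(entries, repeat=4)
           if is_nat_invertible(((a, b), (c, d)))]
print(nat_inv)  # [((0, 1), (1, 0)), ((1, 0), (0, 1))] -- just permutation matrices
```

Over the integers the same search would also find matrices like ((1, 1), (0, 1)), whose inverse has a negative entry; this is the rigidity that disappears once negatives are allowed.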