You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
These comments seemed like the start of an interesting topic
John Baez said:
fosco said:
Alex Kreitzberg said:
This is an aside, but I find it a bit strange that the subject with the most tools for structural analogies is powerless to explain its own analogy with linear algebra.
we know too little category theory to explain the mysteries of linear algebra!
I was going to say something similar. It's not that the subject is powerless to do this: it's just that we haven't yet figured out the right way to do this. Math isn't done yet!
Fosco and Todd Trimble are working on a theory of categorified rigs, and that's probably part of what we need.
Tai-Danae Bradley commented - when discussing linear algebra as an analogy for category theory - "...I don't think it's more than an analogy which is kind of puzzling" from https://www.youtube.com/live/_LgWD3UTKfw?si=cj_qT9kOyYqomJM9&t=3142
And from Fosco's text Coend Calculus - "and yet the analogy between integral calculus and co/ends seems to be too elusive to justify."
It sounds like I read both of these sentiments too strongly? Or maybe there are new ideas for formalizing such analogies?
What's the state of the art with formalizing such analogies? Why is it so hard? Or is there anything fun to say about any of the above?
Look for a thread I opened a year or so ago, where I say that there is some interest in what an "eigenvalue" for a profunctor is. I have a definition for eigenvalues and eigenvectors which is not great, but it hints at the fact that some structure is present and that it makes sense.
I can only answer in a laconic one-sentence way at the moment, but I believe the problem lies in this analogy:
image.png
This analogy is a masterpiece: it's effective and inspiring, and it doesn't fail you if you think in those terms (the tiny objects of the presheaf category are the delta distributions, infinitely concentrated into a point/object; doesn't it make a ton of sense?)
...it doesn't fail you until you dare to ask the question: ok, but then why is it true? At that point you have to make sense of the fact that integrals with respect to a measure should be instances of a categorical concept; in some sense they are, but not in a way that relates to coend calculus as we know it (as I know it; even after all this time I don't believe I understand it enough).
Even more problematic is the fact that there is another compelling analogy that co/ends bear: quantifiers! So, in what sense is there a concept which specializes to quantifiers when you do logic, that categorifies to co/ends when you upgrade to non-thin categories, and that in one fell swoop describes (or is described by?!) "integrals with respect to a measure"?
The kind of generalization this problem needs is analogous to Lawvere's recognition that metric spaces are enriched categories --probably even more, the recognition that order, metric and topological structures can be fruitfully encompassed by the common concept of (T,V)-category.
And we don't have that unifying concept yet...
Let me tag two people who can enrich the conversation (pun intended):
(and happy new year if I don't happen to open Zulip before January 1)
I remember Baez saying to me earlier this year that there was a sense in which Category Theory was "linear algebra one level higher", and I feel like this has only gotten more true as I've investigated the subject.
A pithy remark might be that linear algebra studies a well-behaved but rich class of functions (linear maps), and category theory is "abstract function theory" to a large extent? I'm not sure how much further I could take that analogy, though.
I wonder if this paper has been mentioned yet: https://doi.org/10.1017/S0308210517000208
I hope I said that category theory with profunctors is linear algebra one level higher. This is what I actually believe:
"The invention of profunctors is to category theory as the invention of linear algebra is to set-based mathematics".
John Baez said:
At a very raw intuitive level I think of the presheaf category on a category as analogous to the free vector space on a set, with colimits taking the place of linear combinations, and the Yoneda embedding as analogous to the map sending elements of the set to the corresponding basis vectors. This can be elaborated, e.g. profunctors are like matrices, etc. I think of this whole body of math as the "rediscovery of linear algebra, one level up". It's very worth pondering why linear algebra was useful in the first place, and why this reincarnation is useful.
I was referencing this comment, which is indeed in the context of profunctors!
John Baez said:
The invention of profunctors is to category theory as the invention of linear algebra is to set-based mathematics
I wonder (I have wondered for a while) whether this analogy is ultimately related to, or even induced by, the self-dual properties of the cartesian bicategory of relations/profunctors, vs the self-dual properties of the abelian category of vector spaces.
What makes linear algebra "linear algebra" must be in some way related to freeness: modules over rings that are not fields display all sorts of new phenomena that linear algebra can safely ignore.
I love how there are very few Lawvere theories all of whose algebras are free, two being the Lawvere theory for sets and the Lawvere theory for vector spaces over a chosen field.
You seem to be hinting that this simple situation reappears in the theory of profunctors, which can be seen as cocontinuous functors between free cocompletions of categories.
Before digging deeper, I'd like to test my understanding of the analogy.
A set X can be made into a vector space over a field k by inheriting the vector space structure of k via k^X. So for example the set {1, 2, 3} gets sent to all linear combinations of the functions δ_i, where δ_i is the Kronecker delta. Functions between sets become linear maps between vector spaces via precomposition.
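The k^X construction above can be sketched concretely; this is an illustrative toy of my own (all function and variable names are mine, not from the discussion), with vectors in k^X as Python dicts and the Kronecker deltas as the basis:

```python
def delta(x, X):
    """Kronecker delta: the function X -> k that is 1 at x and 0 elsewhere."""
    return {y: 1 if y == x else 0 for y in X}

def add(v, w):
    """Pointwise sum in k^X."""
    return {x: v[x] + w[x] for x in v}

def precompose(f, w, X):
    """A function f: X -> Y (here a dict) induces the linear map
    k^Y -> k^X sending w to the composite w . f."""
    return {x: w[f[x]] for x in X}

X = {1, 2, 3}
Y = {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'b'}   # a function X -> Y

# Precomposing the delta at 'a' with f yields delta_1 + delta_2 in k^X:
assert precompose(f, delta('a', Y), X) == add(delta(1, X), delta(2, X))
```

Note this is the contravariant direction: f goes from X to Y, but the induced linear map goes from k^Y to k^X.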
So by analogy,
A category C can be made into its free cocompletion by inheriting the colimit structure of Set via the presheaf category [C^op, Set]. Explicitly, the objects of a given category are embedded into our presheaf category by sending them to their representable functors, the analogues of Kronecker deltas, whose colimits then, like a basis, express any presheaf by the co-Yoneda lemma. Functors between categories then lift to functors between the presheaf categories via precomposition.
In the case of finite sets, often a bijection is made with the natural numbers, which allows the deltas to be expressed as vectors with the position of the 1 indicating the element of our cardinal set.
A matrix viewed as a map out of a product of two finite sets generalizes to profunctors C^op × D → Set, which then have a notion of "matrix multiplication" given by a coend. Specializing to a representable hom functor and to a "one-dimensional matrix", or just a presheaf, gives us an explicitly solvable expression by the ninja Yoneda lemma, which Fosco shared above.
I'd appreciate feedback on this description I just gave.
As far as I can tell this is an accurate understanding of the analogy!
Profunctors between discrete categories really "feel" like matrices, because a functor X × Y → Set, for sets X and Y, is indeed an X-by-Y-sized table of sets, and profunctor composition is exactly matrix multiplication, up to iso and up to replacing sums with coproducts
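As a toy illustration of this point (my own sketch, taking the discrete categories to be plain finite sets), profunctor composition is matrix multiplication with Cartesian product for "times" and tagged disjoint union for "plus", and taking cardinalities recovers the usual product of matrices over the naturals:

```python
def compose(P, Q, X, Y, Z):
    """(P . Q)(x, z) = coproduct over y of P(x, y) x Q(y, z),
    tagged by y so that the union is genuinely disjoint."""
    return {(x, z): {(y, p, q) for y in Y
                               for p in P[x, y]
                               for q in Q[y, z]}
            for x in X for z in Z}

# Two "matrices of sets" between the sets X, Y, Z:
X, Y, Z = {0, 1}, {0, 1, 2}, {0}
P = {(x, y): set(range(x + y)) for x in X for y in Y}
Q = {(y, z): set(range(y)) for y in Y for z in Z}
R = compose(P, Q, X, Y, Z)

# Cardinalities multiply like ordinary matrices over N:
for x in X:
    for z in Z:
        assert len(R[x, z]) == sum(len(P[x, y]) * len(Q[y, z]) for y in Y)
```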
If you like a thin example, the same is true for profunctors valued in a quantale Q, where the Cartesian product is, instead, replaced with the monoid operation of Q
these are really matrices, over which you can do the usual linear algebra, with the only caveat that Set and Q are semirings, and the absence of additive inverses seems innocuous but is, instead, an important generalization much like passing from fields to rings
categorical probability people will show you that there is a certain spectrum of variation between probability monads:
Clearly, these monads are related, and thus their algebras relate.
The third is the monad whose Eilenberg–Moore category is that of semivector spaces over the semiring of nonnegative, finite reals. And indeed, working with unbounded distributions feels not probability-ish, but instead linear-algebra-ish.
continuing on the linear algebra analogy, in a different direction, Jacobs (https://arxiv.org/abs/1309.0844) will then show you that if you take a monad T, and the comonad it induces on its Eilenberg–Moore category, then that comonad has as coalgebras "the T-algebras with a choice of basis".
Yet another point: a -module (k a ring) is free if and only if there is a set and an isomorphism ; in turn, a category is equivalent to , i.e. it's (equivalent to) a free -algebra on a set if and only if:
On one side, the result is evident; on the other take as above: then .
The categorified general result "feels dirtier" than linear algebra, yet there are reasons to push it as the real analogue of the isomorphism ; this establishes a contravariant equivalence between the categories as above, and ...
Thanks for that characterization of presheaf categories on small discrete categories (= sets), @fosco! I repeatedly wonder about things like that. Now I'm trying to remember similar characterizations of presheaf categories on arbitrary small categories.
It would be nice if there were a characterization of presheaf categories with some number of conditions, and then adding one or two more conditions gave a characterization of presheaf categories on small discrete categories.
All of these are steps toward better understanding "categorified vector spaces" (as opposed to mere "categorified modules").
What I reported is just a particular instance of Theorem 2.4.1 in Marta Bunge's thesis http://www.tac.mta.ca/tac/reprints/articles/30/tr30.pdf
To me, linear algebra is a restriction of the logic you can study through (finite) sets and cartesian products.
Take a finite set . You can write .
Now, take another finite set . Then the Cartesian product expands as:
where .
From this perspective, a matrix can be interpreted as anything that looks like a doubly-indexed (finite) set: .
Examples include (finite) categories: , and also (finite) graphs. What categories add beyond graphs is composition: for categories you have a morphism
So, conversely, we could also say that linear algebra = a measured lens on graph theory in Set.
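As a small illustration of the "measured lens" slogan (my own toy example, not from the discussion): a finite graph is a doubly-indexed set of edges; taking cardinalities gives its adjacency matrix over the naturals, and matrix multiplication then counts paths:

```python
V = [0, 1, 2]
# edges[(i, j)] is the set of edges from i to j: a doubly-indexed set
edges = {(0, 1): {'a', 'b'}, (1, 2): {'c'}}

def adjacency(V, edges):
    """Cardinality of each edge set, as a matrix over the naturals."""
    return [[len(edges.get((i, j), set())) for j in V] for i in V]

def matmul(A, B):
    """Ordinary product of numeric matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = adjacency(V, edges)
# (M^2)[0][2] counts length-2 paths from 0 to 2: 'a' then 'c', 'b' then 'c'.
assert matmul(M, M)[0][2] == 2
```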
From this viewpoint, a functor
can be seen as a non-monotone, non-injective extraction of a submatrix from a matrix . For intuition, this is the kind of situation I have in mind:
Screenshot 2026-01-02 at 7.15.27 AM.png
So when you form
what you are really doing is pushing indices around: you take coefficients from , but only along the index pattern specified by the submatrix extraction. Concretely, you can think of this as multiplying a matrix by a vector of indices describing the extraction map, e.g.
If you take with coefficients in (say with rows summing to 1 or 0), then acts like a "filter-and-shuffle" operation on . If you then apply , you get an equation of the form
for some .
Since can be seen as a colimit operation, this is reminiscent of the equation
So why restrict ourselves to matrices in Set, and can we do better? My instinct is that composition in categories, together with naturality (which is what gives Yoneda), is what forces this kind of restriction: naturality collapses the extra "free" arithmetic structure you would otherwise have over the integers.
Related to this - are there any nice modern texts which explicitly lay out and work with the (somewhat mysterious) connection between Linear Algebra and Category Theory?
I strongly doubt it! We're figuring it out right now, here.
Again, I don't think this is a connection between linear algebra and category theory: I think it's a connection between linear algebra and Prof, the 2-category of profunctors.
I have promoted it over and over, but never written about it systematically, since experts act like everyone knows it already and nonexperts seem uninterested (or maybe baffled).
The closest thing I've written about systematically is the connection between linear algebra and spans of groupoids:
You can turn spans of groupoids into profunctors between groupoids and vice versa, though they're not equivalent.
Todd Trimble and I are now groupoidifying some of the representation theory of the symmetric group.
I suspect many of us think "maybe one day my kung fu will be strong enough to understand vector spaces"... ;-)
Matteo Capucci (he/him) said:
I wonder if this paper has been mentioned yet: https://doi.org/10.1017/S0308210517000208
This is the paper
It begins
They then generalise to ∞-groupoids, and note the extension to polynomial functors:
Finally, the theory of slices and linear functors is subsumed into the theory of polynomial functors, where a further right adjoint enters the picture, the right adjoint to pullback. The theory of polynomial functors over ∞-categories is developed in [17]; see [16] for the classical case.
John Baez said:
I love how there are very few Lawvere theories all of whose algebras are free, two being the Lawvere theory for sets and the Lawvere theory for vector spaces over a chosen field.
What do we know in this vein about ∞-algebraic theories?
Presumably at least ∞-groupoids and pointed ∞-groupoids.
[[Infinity-field]] [[modules]]?
Btw, when I just complained that few have taken up the cause of groupoidification, I was unjustly neglecting Joachim Kock and Andrew Tonks, who are doing it the right way!
Joachim and I have some further ideas we want to work on....
Yes, @David Corfield. Modules of the infinity-fields called the Morava K-theory spectra K(n) are all free in a certain precise sense: every module spectrum of K(n) is a wedge of suspensions of K(n).
Learning this fact has made me eager to finally understand [[Morava K-theories]] and the [[chromatic filtration]].
@John Baez said:
Again, I don't think this is a connection between linear algebra and category theory: I think it's a connection between linear algebra and Prof, the 2-category of profunctors.
Yes, I agree that profunctors have a role here, but I think we also need to put equal emphasis on how to interpret the coefficients.
More specifically, let C and D be small categories. For every profunctor between them, I see two complementary sources of equations:
First, observe that Set carries a weak-rig-like structure:
For any integer , choose a product operation and define, for any ,
Define and .
Then all rig equations hold except that the product-related ones only hold up to canonical isomorphism (hence "weak"):
Let me denote this weak structure by .
With this in mind, one can view a set-valued "matrix" as a functor .
If we work directly in , this is not a fair comparison with ordinary linear algebra, where one typically fixes a ring or field of coefficients for a given class of matrices.
So instead, fix a set A and consider the sub-collection of objects of Set consisting of the subsets of some power A^n for some n.
This yields a proper rig (not merely a weak one), because we can force products to land strictly in the expected power: if U ⊆ A^m and V ⊆ A^n, then we regard U × V as a subset of A^(m+n) via the canonical identification A^m × A^n ≅ A^(m+n). With this convention we get strict equalities:
Let me denote this rig by .
For every subcategory , define a -matrix in to be:
Examples:
If is the category whose objects are the finite ordinals , then this recovers ordinary finite matrices with coefficients in .
If is the category of finite sets, this matches the setting of my previous post.
If , then these are profunctors.
To connect -matrices with the usual arithmetic intuition from rings and fields, we would like rig-theoretic analogues of constructions such as kernels or null spaces.
In past work I developed a notion of null space in a rig setting (see Theorem 5.113) in the paper:
https://arxiv.org/pdf/1805.07004
Although the paper is motivated by an application to genetics, Section 5 is written in an application-agnostic way and is intended to stand on its own. The earlier sections mainly serve as motivation for why this rig-theoretic null space is a useful concept.
That's a very nice analysis, @Rémy Tuyéras. Of course we should also let ourselves take advantage of your analysis and replace Set by some other rig-like category. For example, we can look at V-enriched profunctors for some 2-rig V, where I'm (temporarily) defining a 2-rig to be a symmetric monoidal cocomplete category where the tensor product distributes over colimits.
Yes, absolutely, this would be even more complete
I was trying to work out the relevance of @John Baez's allusion to freeness above. Does this relate to what Jacobs describes in the paper @fosco mentioned earlier?
So, then, the fact that every algebra of the Lawvere theory for vector spaces over a chosen field is free means that any map may be understood in terms of bases, and so vector spaces provide a particularly apt location for linear algebra.
Take another monad on Set, such as the powerset monad. Its algebras are complete lattices. There's a corresponding coalgebra structure/basis for such an algebra if it is atomic. Things look like linear algebra when we consider the free algebras, the powersets P(X), and maps between these correspond to relations on sets.
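A concrete sketch of that last correspondence (my own toy code, names mine): a relation R ⊆ X × Y determines a union-preserving map P(X) → P(Y), which is fixed by its values on the singleton "basis":

```python
def rel_to_map(R):
    """Send a relation R ⊆ X × Y to the union-preserving map
    P(X) -> P(Y), S |-> {y : some x in S with (x, y) in R}."""
    return lambda S: {y for (x, y) in R if x in S}

X, Y = {1, 2}, {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b')}
f = rel_to_map(R)

# f is a map of free powerset algebras: it preserves unions...
assert f({1} | {2}) == f({1}) | f({2})
# ...so it is determined by its values on the singleton "basis":
assert f({1, 2}) == {'a', 'b'}
```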
In this vein, are profunctors appearing in this context as Kleisli maps for the presheaf 2-monad, there being a categorified version of the result above from Jacobs's paper for 2-monads on Cat?
Do we have interesting examples of 2-monads all of whose algebras are free?
David Corfield said:
Take another monad on Set, such as the powerset monad. Its algebras are complete lattices. There's a corresponding coalgebra structure/basis for such an algebra if it is atomic. Things look like linear algebra when we consider the free algebras, the powersets P(X), and maps between these correspond to relations on sets.
Yes, this is the idea. I would only add that the free algebra structure on may need to be extended to its underlying rig defined above. The reason is that, otherwise, we would still miss the coefficients forced by profunctor compositions (which generate tuples).
In practice, the case is straightforward. Choosing a basis allows us to establish pointwise relationships that can be used to solve linear equations.
For example, let
Consider the system:
This immediately implies that
Thus, even though there is no negation, we still obtain a mechanism reminiscent of Gaussian elimination. In Jacobs' paper, I believe this corresponds to working with the equaliser , where is the basis morphism
Concretely:
The equation
implies that can be expressed in terms of .
The equation
implies that can be expressed in terms of .
Together, these constraints are intended to characterize in terms of the image of , rather than as an absolute element.
If we now want to solve matrix equations of the form
the rig structure becomes necessary. The overall approach is similar, but the structure provides additional elements, extending the previous equations to string-like expressions.
For example:
In this case, the solution is
Finally, consider the non-free case. While linear algebra over the free-like field is standard, we also work with linear algebra over structures such as or . This situation can be recovered within the rig framework by introducing additional relations (i.e. quotients) on the string-like elements, for example via letter shuffles:
In this setting, one must repeatedly refer back to the underlying free structure in order to interpret solutions.
Would you be able to spell out (or give a reference to) this Gaussian-elimination-like algorithm? I would be happy to see it spelled out just for rigs first. Even knowing conditions on when such an algorithm works would be useful.
Jacques Carette said:
Would you be able to spell out (or give a reference to) this Gaussian-elimination-like algorithm? I would be happy to see it spelled out just for rigs first. Even knowing conditions on when such an algorithm works would be useful.
My best attempt at formalizing such a Gaussian-elimination-like algorithm is presented in the following paper:
https://arxiv.org/pdf/1805.07004
While the paper explores specific applications in genetics, readers who are primarily interested in the formal treatment may want to focus on the following parts:
Section 4 (Sections 4.1–4.5), which mainly introduces the relevant definitions and notation.
Section 5, which lays out the algorithmic steps (the theorem you are after is Theorem 5.113). In this section, elements such as are systematically converted into formal sums
where is a formal element in a finite semiring (specifically, a semiring with four elements), and then into vectors
with respect to a chosen basis. This representation makes it possible to decompose the overall complexity of the algorithm into simpler, more manageable steps.
Beyond Gaussian elimination itself, the paper also investigates several structural properties and theorems that may be useful for generalizing the framework. Overall, the presentation is intentionally top-down, aiming to clarify both the algorithmic core and the broader categorical setting in which it operates.
Another data point to consider for a broad category-theoretic account. According to [[Fourier-Mukai transform]]:
where X is a well-behaved scheme and D(X) its [[derived category]] of [[quasicoherent sheaves]].
Which leads on to [[geometric ∞-function theory]]:
"is represented by a matrix" is mathematical euphemism for 'can be written down in finite terms in a particular language'. What is the categorical equivalent of that?
Jacques Carette said:
"is represented by a matrix" is mathematical euphemism for 'can be written down in finite terms in a particular language'. What is the categorical equivalent of that?
Linear maps are represented by matrices through the Yoneda Lemma. For example, a ring is represented by/expressed as an image of the Yoneda embedding for some in the theory for -modules.
Now, the Yoneda lemma gives:
And then:
So a linear map is an element of , i.e. a matrix of elements in .
My impression was that you can always represent maps out of a coproduct and into a product by a matrix? I didn't realise this was Yoneda-specific.
Yes, a morphism from a coproduct ∐_{i ∈ I} x_i to a product ∏_{j ∈ J} y_j,
where I and J are arbitrary sets, is the same as a 'matrix' of morphisms f_{ji} : x_i → y_j
The situation is nicest in a category with [[biproducts]], since then every finite coproduct is also a finite product and the asymmetry of the above description dissolves: we often use ⊕ to denote biproducts, so we can write a morphism as ⊕_{i ∈ I} x_i → ⊕_{j ∈ J} y_j,
where now the index sets are finite.
Indeed, I remember this being emphasised in the abelian category section of my introductory course. What's especially nice about biproducts there is that morphism composition ends up corresponding to matrix multiplication!
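Here is a minimal sketch of that correspondence (illustrative, all names mine, with the component morphisms taken to be numeric matrices, i.e. linear maps in coordinates): composing morphisms between biproducts is matrix multiplication whose "times" is composition of components and whose "plus" is their sum:

```python
def mat_mul(A, B):
    """Ordinary product of numeric matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    """Entrywise sum of two numeric matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block_compose(G, F):
    """(G . F)_ki = sum over j of (G_kj composed with F_ji),
    where composition of components is mat_mul and sum is mat_add."""
    out = []
    for k in range(len(G)):
        row = []
        for i in range(len(F[0])):
            acc = mat_mul(G[k][0], F[0][i])
            for j in range(1, len(F)):
                acc = mat_add(acc, mat_mul(G[k][j], F[j][i]))
            row.append(acc)
        out.append(row)
    return out

# With 1x1 components, block composition reduces to the scalar matrix product:
F = [[[[1]], [[2]]],
     [[[3]], [[4]]]]
assert block_compose(F, F) == [[[[7]], [[10]]],
                               [[[15]], [[22]]]]
```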
Right, that's the most important point.
And the most pleasant case of all is a category with biproducts where every object is a (finite) biproduct of copies of the same object. That's what we get in the category of finite-dimensional vector spaces over a field, or finitely generated free modules over a commutative rig.
That makes sense - how about the category of finite-dimensional representations of a group? I guess in that case you have a known list of objects which everything else is a biproduct of, and Schur's lemma should guarantee that the matrices for different objects don't "mix"?
Actually, this reminds me of the fusion categories I (implicitly) came across in my "anyons and TQFTs" course. In that case we placed a lot of emphasis on studying various coherence matrices like the R and S matrices, which I suppose arise from this same biproduct idea.
Ruby Khondaker (she/her) said:
That makes sense - how about the category of finite-dimensional representations of a group? I guess in that case you have a known list of objects which everything else is a biproduct of, and Schur's lemma should guarantee that the matrices for different objects don't "mix"?
That's right, that's second in simplicity only to the case where this list has just one element.
More generally we can do this sort of thing with any [[semisimple category]]. That's an abelian k-linear category for some field k where every object is a biproduct of objects from some list of 'basis' objects x_i, for which hom(x_i, x_j) is 0 for i ≠ j and k for i = j.
People in homological algebra regard semisimple categories as completely boring, but that boringness means they're a tractable framework for doing interesting things!
(When you're walking down the sidewalk, you usually don't want the surface of the sidewalk to be interesting.)
Actually, this reminds me of the fusion categories I (implicitly) came across in my "anyons and TQFTs" course.
Yes, I think those are usually assumed to be semisimple categories.