Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: Linear Algebra = Category Theory


view this post on Zulip Alex Kreitzberg (Dec 30 2025 at 21:46):

These comments seemed like the start of an interesting topic

John Baez said:

fosco said:

Alex Kreitzberg said:

This is an aside, but I find it a bit strange that the subject with the most tools for structural analogies is powerless to explain its own analogy with linear algebra.

we know too little category theory to explain the mysteries of linear algebra!

I was going to say something similar. It's not that the subject is powerless to do this: it's just that we haven't yet figured out the right way to do this. Math isn't done yet!

Fosco and Todd Trimble are working on a theory of categorified rigs, and that's probably part of what we need.

Tai-Danae Bradley commented - when discussing linear algebra as an analogy for category theory - "...I don't think it's more than an analogy which is kind of puzzling" from https://www.youtube.com/live/_LgWD3UTKfw?si=cj_qT9kOyYqomJM9&t=3142

And from Fosco's text Coend Calculus - "and yet the analogy between integral calculus and co/ends seems to be too elusive to justify."

It sounds like I read both of these sentiments too strongly? Or maybe there are new ideas for formalizing such analogies?

What's the state of the art with formalizing such analogies? Why is it so hard? Or is there anything fun to say about any of the above?

view this post on Zulip fosco (Dec 30 2025 at 21:52):

Look for a thread I opened a year or so ago, where I say that there is some interest in what an "eigenvalue" for a profunctor is. I have a definition for eigenvalues and eigenvectors, which is not great but hints at the fact that some structure is present and that it makes sense.

view this post on Zulip fosco (Dec 30 2025 at 22:18):

I can only answer in a laconic one-sentence way at the moment, but I believe the problem lies in this analogy:
[image attachment]
This analogy is a masterpiece: it's effective and inspiring, and it doesn't fail you if you think in those terms (the tiny objects of $[C^{op}, \mathrm{Set}]$ are the delta distributions, infinitely concentrated into a point/object; doesn't it make a ton of sense?)

view this post on Zulip fosco (Dec 30 2025 at 22:20):

...it doesn't fail you until you dare to ask the question: ok, but then why is it true? At that point you have to make sense of the fact that integrals with respect to a measure should be instances of a categorical concept; in some sense they are, but not in a way that relates to coend calculus as we know it (as I know it; and even after all this time I don't believe I understand it enough)

view this post on Zulip fosco (Dec 30 2025 at 22:22):

Even more problematic is the fact that there is another compelling analogy that co/ends bear: quantifiers! So, in what sense is there a concept which specializes to $\forall$/$\exists$ when you do logic, that categorifies to co/ends when you upgrade to non-thin categories, and that in the same fell swoop describes (or is described by?!) "integrals with respect to a measure"?

view this post on Zulip fosco (Dec 30 2025 at 22:23):

The kind of generalization this problem needs is analogous to Lawvere's recognition that metric spaces are enriched categories; probably even more, the recognition that order, metric and topological structures can be fruitfully encompassed by the common concept of (T,V)-category.

And we don't have that unifying concept yet...

view this post on Zulip fosco (Dec 30 2025 at 22:25):

Let me tag two people who can enrich the conversation (pun intended):

view this post on Zulip fosco (Dec 30 2025 at 22:26):

(and Happy new year if I don't happen to open Zulip before January 1)

view this post on Zulip Ruby Khondaker (she/her) (Dec 30 2025 at 23:13):

I remember Baez saying to me earlier this year that there was a sense in which Category Theory was "linear algebra one level higher", and I feel like this has only gotten more true as I've investigated the subject.

A pithy remark might be that linear algebra studies a well-behaved but rich class of functions (linear maps), and category theory is "abstract function theory" to a large extent? I'm not sure how much further I could take that analogy, though.

view this post on Zulip Matteo Capucci (he/him) (Dec 31 2025 at 01:24):

I wonder if this paper has been mentioned yet: https://doi.org/10.1017/S0308210517000208

view this post on Zulip John Baez (Dec 31 2025 at 14:56):

I hope I said that category theory with profunctors is linear algebra one level higher. This is what I actually believe:

"The invention of profunctors is to category theory as the invention of linear algebra is to set-based mathematics".

view this post on Zulip Ruby Khondaker (she/her) (Dec 31 2025 at 15:39):

John Baez said:

At a very raw intuitive level I think of the presheaf category on a category as analogous to the free vector space on a set, with colimits taking the place of linear combinations, and the Yoneda embedding as analogous to the map sending elements of the set to the corresponding basis vectors. This can be elaborated, e.g. profunctors are like matrices, etc. I think of this whole body of math as the "rediscovery of linear algebra, one level up". It's very worth pondering why linear algebra was useful in the first place, and why this reincarnation is useful.

I was referencing this comment, which is indeed in the context of profunctors!

view this post on Zulip fosco (Dec 31 2025 at 16:00):

John Baez said:

The invention of profunctors is to category theory as the invention of linear algebra is to set-based mathematics

I wonder (I have wondered for a while) whether this analogy is ultimately related to, or even induced by, the self-dual properties of the cartesian bicategory of relations/profunctors, vs the self-dual properties of the abelian category of vector spaces.

What makes linear algebra "linear algebra" must be in some way related to freeness: modules over rings that are not fields display all sorts of new phenomena that linear algebra can safely ignore.

view this post on Zulip John Baez (Dec 31 2025 at 16:06):

I love how there are very few Lawvere theories all of whose algebras are free, two being the Lawvere theory for sets and the Lawvere theory for vector spaces over a chosen field.

You seem to be hinting that this simple situation reappears in the theory of profunctors, which can be seen as cocontinuous functors between free cocompletions of categories.

view this post on Zulip Alex Kreitzberg (Jan 02 2026 at 01:47):

Before digging deeper, I'd like to test my understanding of the analogy.

A set $X$ can be made into a vector space over $\mathbf{R}$ by inheriting the vector space structure of $\mathbf{R}$ via $\mathrm{hom}(X, \mathbf{R})$. So for example the set $\{x, y, z\}$ gets sent to all linear combinations of the functions $\delta_{x}, \delta_{y}, \delta_{z}$ where $\delta$ is the Kronecker delta. Functions between sets become linear maps between vector spaces via precomposition.

So by analogy,

A category $\mathcal{C}$ can be made into its free cocompletion by inheriting the colimit structure of $\mathrm{Set}$ via $\mathrm{hom}(\mathcal{C}^{\mathrm{op}}, \mathrm{Set})$. Explicitly, the objects of a given category $\mathcal{C}$ are embedded into our presheaf category by sending them to their representable functors, the analogs of Kronecker deltas, whose colimits then, like a basis, express any presheaf by the co-Yoneda lemma. Functors between categories then must lift to the relevant monoidal functors via precomposition.

In the case of finite sets often a bijection is made with the natural numbers $\{0, 1, \ldots, n - 1\}$, which allows the deltas to be expressed as vectors $[0, \ldots, 1, \ldots, 0]$ with the position of the $1$ indicating the element of our cardinal set.
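The free-vector-space construction above can be sketched in a few lines of Python; the helper names `delta` and `precompose` are mine, purely for illustration. A function between sets pulls back to a linear map between the function spaces:

```python
# The free vector space R^X on a finite set X, with vectors as dicts
# mapping each element of X to a real coefficient.

def delta(x, X):
    """Kronecker delta: the basis vector concentrated at x."""
    return {e: (1.0 if e == x else 0.0) for e in X}

def precompose(f, g):
    """Given f : X -> Y (as a dict) and g in R^Y, return g . f in R^X."""
    return {x: g[y] for x, y in f.items()}

X = ["x", "y", "z"]
Y = ["u", "v"]
f = {"x": "u", "y": "u", "z": "v"}   # a function X -> Y

g = delta("u", Y)                     # delta_u in R^Y
pulled = precompose(f, g)             # delta_u . f = delta_x + delta_y
```

Since `f` sends both `x` and `y` to `u`, the pullback of $\delta_u$ is $\delta_x + \delta_y$: precomposition is linear in the deltas.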

A matrix viewed as a map $X \times Y \rightarrow \mathbf{R}$ generalizes to profunctors $\mathcal{D}^{op} \times \mathcal{C} \rightarrow \mathrm{Set}$, which then have a notion of "matrix multiplication" given by a coend $\int^{c \in \mathcal{C}} F(x, c) \otimes G(c, y)$. Specializing $G$ to a representable hom functor and $F$ to a "one-dimensional matrix", or just a presheaf, gives us an explicitly solvable expression by the ninja Yoneda lemma, which Fosco shared above.

I'd appreciate feedback on this description I just gave.

view this post on Zulip Ruby Khondaker (she/her) (Jan 02 2026 at 06:33):

As far as I can tell this is an accurate understanding of the analogy!

view this post on Zulip fosco (Jan 02 2026 at 10:25):

Profunctors between discrete categories really "feel" like matrices, because a functor $p : X \times Y \to \mathrm{Set}$ is indeed a $|X| \times |Y|$-sized table of sets, and profunctor composition is exactly matrix multiplication, up to iso and up to replacing sums with coproducts.
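For finite discrete categories this can be made completely concrete. Here is a minimal Python sketch (my own throwaway encoding, not a standard library): a profunctor is a dict of finite sets, disjoint union is tagged by the middle index, and cartesian product plays the role of multiplication, so cardinalities multiply exactly like an ordinary matrix product:

```python
from itertools import product

def compose(p, q, X, Y, Z):
    """Composite profunctor between discrete categories:
    (p ; q)(x, z) = coproduct over y of p[x, y] x q[y, z],
    with the disjoint union tagged by the middle index y."""
    return {(x, z): {(y, a, b)
                     for y in Y
                     for a, b in product(p[(x, y)], q[(y, z)])}
            for x in X for z in Z}

X, Y, Z = ["x"], ["y1", "y2"], ["z"]
p = {("x", "y1"): {0, 1}, ("x", "y2"): {0}}
q = {("y1", "z"): {0, 1, 2}, ("y2", "z"): {0}}
r = compose(p, q, X, Y, Z)
# Cardinalities multiply like a matrix product: 2*3 + 1*1 = 7
```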

view this post on Zulip fosco (Jan 02 2026 at 10:28):

If you like a thin example, the same is true for profunctors valued in a quantale $Q$, where the Cartesian product is instead replaced with the monoid operation $\otimes$ of $Q$

view this post on Zulip fosco (Jan 02 2026 at 10:29):

These are really matrices, over which you can do the usual linear algebra, with the only caveat that $\mathrm{Set}$ and $Q$ are semirings; the absence of additive inverses seems innocuous but is instead an important generalization, much like passing from fields to rings.
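As an illustration of "the usual linear algebra over a semiring", here is matrix composition over the quantale $([0,\infty], \min, +)$ (the Lawvere metric-space quantale): join is $\min$, the tensor is $+$, so composition is min-plus matrix multiplication. The function name is mine:

```python
INF = float("inf")

def minplus_compose(M, N):
    """Matrix composition over the quantale ([0, inf], min, +):
    'sum' is min, 'product' is +. This is profunctor composition
    for quantale-valued (Lawvere metric) profunctors."""
    rows, mid, cols = len(M), len(N), len(N[0])
    return [[min(M[i][k] + N[k][j] for k in range(mid)) for j in range(cols)]
            for i in range(rows)]

M = [[0.0, 2.0], [INF, 1.0]]
N = [[3.0], [0.5]]
MN = minplus_compose(M, N)
# entry (0,0): min(0+3, 2+0.5) = 2.5 ; entry (1,0): min(inf+3, 1+0.5) = 1.5
```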

view this post on Zulip fosco (Jan 02 2026 at 10:35):

categorical probability people will show you that there is a certain spectrum of variation between probability monads:

  1. The usual monad of finite distributions sends a set $X$ to the set of distributions $DX = \{p : X \to [0,1] \mid \sum_x px = 1,\ p = 0 \text{ a.e.}\}$
  2. The monad of subdistributions sends a set $X$ to $D_\le X = \{p : X \to [0,1] \mid \sum_x px \le 1,\ p = 0 \text{ a.e.}\}$
  3. The monad of "unbounded distributions" sends $X$ to $D_\infty X = \{p : X \to [0,\infty) \mid \sum_x px < \infty,\ p = 0 \text{ a.e.}\}$ (I think I'm recalling it correctly, tell me if I'm wrong)

Clearly, these monads are related, and thus their algebras relate.

The third is the monad whose Eilenberg-Moore category is semivector spaces over the semiring of nonnegative, finite reals. And indeed, working with unbounded distribution feels not probability-ish, but instead linear algebra-ish.
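The three classes nest, which is easy to see concretely. A quick sketch for finitely supported $p$ represented as Python dicts (the predicate names are mine):

```python
TOL = 1e-9

def is_distribution(p):
    """Finitely supported p : X -> [0,1] with total mass exactly 1."""
    return all(0 <= v <= 1 for v in p.values()) and abs(sum(p.values()) - 1) < TOL

def is_subdistribution(p):
    """Total mass at most 1."""
    return all(0 <= v <= 1 for v in p.values()) and sum(p.values()) <= 1 + TOL

def is_unbounded_distribution(p):
    """Just a finitely supported vector over the rig [0, inf)."""
    return all(0 <= v < float("inf") for v in p.values())

p = {"a": 0.5, "b": 0.5}   # a distribution, hence also the other two
q = {"a": 0.25}            # a subdistribution, not a distribution
r = {"a": 7.0}             # unbounded only: pure "linear algebra" territory
```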

view this post on Zulip fosco (Jan 02 2026 at 10:39):

continuing on the linear algebra analogy, in a different direction, Jacobs https://arxiv.org/abs/1309.0844 will then show you that if you take a monad $T$, and the comonad $S^T = F^T U^T$ on $EM(T)$, then $S^T$ has as coalgebras "the $T$-algebras with a choice of basis".

view this post on Zulip fosco (Jan 02 2026 at 10:49):

Yet another point: a $k$-module $M$ ($k$ a ring) is free if and only if there is a set $S$ and an isomorphism $M \cong \bigoplus_{s \in S} k$; in turn, a category $\mathcal{C}$ is equivalent to $\mathrm{Set}^S$, i.e. it's (equivalent to) a free $PSh$-algebra on a set $S$, if and only if:

  1. it has finite limits and small coproducts;
  2. coproducts are disjoint and universal;
  3. $P = \mathrm{Sub}(1)$ (subobjects of the terminal object) is a small poset;
  4. every object $A$ of $\mathcal{C}$ is a coproduct of atoms of $P$.

On one side, the result is evident; on the other, take $\mathcal{C}$ as above: then $\mathcal{C} \cong \mathrm{Set}^{\mathrm{Atoms}(\mathrm{Sub}(1))}$.

The categorified general result "feels dirtier" than linear algebra, yet there are reasons to push it as the real analogue of the isomorphism $M \cong \bigoplus_{s \in S} k$; this establishes a contravariant equivalence between the categories $\mathcal{C}$ as above and $\mathrm{Set}$...

view this post on Zulip John Baez (Jan 02 2026 at 11:15):

Thanks for that characterization of presheaf categories on small discrete categories (= sets), @fosco! I repeatedly wonder about things like that. Now I'm trying to remember similar characterizations of presheaf categories on arbitrary small categories.

It would be nice if there were a characterization of presheaf categories with some number of conditions, and then adding one or two more conditions gave a characterization of presheaf categories on small discrete categories.

All of these are steps toward better understanding "categorified vector spaces" (as opposed to mere "categorified modules").

view this post on Zulip fosco (Jan 02 2026 at 11:32):

What I reported is just a particular instance of Theorem 2.4.1 in Marta Bunge's thesis http://www.tac.mta.ca/tac/reprints/articles/30/tr30.pdf

view this post on Zulip Rémy Tuyéras (Jan 02 2026 at 12:07):

To me, linear algebra is a restriction of the logic you can study through (finite) sets and cartesian products.

Take a finite set $X = \{x_1, \dots, x_n\}$. You can write $X = \sum_{i=1}^n x_i$.

Now, take another finite set $Y = \{y_1, \dots, y_m\} = \sum_{j=1}^m y_j$. Then the Cartesian product expands as:

$X \times Y = \sum_{i,j} x_i \cdot y_j$

where $x_i \cdot y_j = (x_i, y_j)$.

From this perspective, a matrix can be interpreted as anything that looks like a doubly-indexed (finite) set: $M = (X_{i,k})_{i,k}$.

Examples include (finite) categories: $C = (C(x,y))_{x,y \in \mathsf{Obj}(C)}$, and also (finite) graphs. What categories add beyond graphs is composition: for categories you have a morphism

$\circ : C \cdot C \to C$

So, conversely, we could also say that linear algebra = a measured lens on graph theory in $\mathbf{Set}$.

view this post on Zulip Rémy Tuyéras (Jan 02 2026 at 12:39):

From this viewpoint, a functor

$F : D(x,y) \to C(F(x), F(y))$

can be seen as a non-monotone, non-injective extraction of a submatrix DD from a matrix CC. For intuition, this is the kind of situation I have in mind:

[image attachment]

So when you form

$F(x) \times C(y, x),$

what you are really doing is pushing indices around: you take coefficients from $C$, but only along the index pattern specified by the submatrix extraction. Concretely, you can think of this as multiplying a matrix $M$ by a vector $J$ of indices describing the extraction map, e.g.

$(1, 2, 4, 5, 7, 8, 9, \dots)$.

If you take $M$ with coefficients in $\{0, 1\}$ (say with rows summing to 1 or 0), then $MJ$ acts like a "filter-and-shuffle" operation on $J$. If you then apply $\mathsf{max}$, you get an equation of the form

$\mathsf{max}(MJ) = J_{k}$

for some $k$.

Since $\mathsf{max}$ can be seen as a colimit operation, this is reminiscent of the equation

$\int^{X} F(x) \times C(y, x) = F(y).$

So why restrict ourselves to matrices in $\{0, 1\}$, and can we do better? My instinct is that composition in categories, together with naturality (which is what gives Yoneda), is what forces this kind of restriction: naturality collapses the extra "free" arithmetic structure you would otherwise have over the integers.
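Under one reading of this filter-and-shuffle picture, a tiny numeric example (assuming each row of $M$ has $\{0,1\}$ entries and selects at most one entry of $J$):

```python
def mat_vec(M, J):
    """Ordinary matrix-vector product over the integers."""
    return [sum(M[i][k] * J[k] for k in range(len(J))) for i in range(len(M))]

# Each row of M either selects one entry of J (a single 1) or yields 0.
M = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
J = [1, 2, 4, 5]
MJ = mat_vec(M, J)   # a filter-and-shuffle of J
top = max(MJ)        # equals some J_k, here J[3]
```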

view this post on Zulip Ruby Khondaker (she/her) (Jan 03 2026 at 06:50):

Related to this - are there any nice modern texts which explicitly lay out and work with the (somewhat mysterious) connection between Linear Algebra and Category Theory?

view this post on Zulip John Baez (Jan 03 2026 at 10:58):

I strongly doubt it! We're figuring it out right now, here.

Again, I don't think this is a connection between linear algebra and category theory: I think it's a connection between linear algebra and Prof, the 2-category of profunctors.

I have promoted it over and over, but never written about it systematically, since experts act like everyone knows it already and nonexperts seem uninterested (or maybe baffled).

The closest thing I've written about systematically is the connection between linear algebra and spans of groupoids:

You can turn spans of groupoids into profunctors between groupoids and vice versa, though they're not equivalent.

Todd Trimble and I are now groupoidifying some of the representation theory of the symmetric group.

view this post on Zulip fosco (Jan 03 2026 at 11:00):

I suspect many of us think "maybe one day my kung fu will be strong enough to understand vector spaces"... ;-)

view this post on Zulip David Corfield (Jan 03 2026 at 12:03):

Matteo Capucci (he/him) said:

I wonder if this paper has been mentioned yet: https://doi.org/10.1017/S0308210517000208

This is the paper

It begins

[image attachments]

They then generalise to $\infty$-groupoids, and note the extension to polynomial functors:

Finally, the theory of slices and linear functors is subsumed into the theory of polynomial functors, where a further right adjoint enters the picture, the right adjoint to pullback. The theory of polynomial functors over $\infty$-categories is developed in [17]; see [16] for the classical case.

view this post on Zulip David Corfield (Jan 03 2026 at 12:34):

John Baez said:

I love how there are very few Lawvere theories all of whose algebras are free, two being the Lawvere theory for sets and the Lawvere theory for vector spaces over a chosen field.

What do we know in this vein about $(\infty,1)$-algebraic theories?

Presumably at least $\infty$-groupoids and pointed $\infty$-groupoids.

view this post on Zulip David Corfield (Jan 03 2026 at 12:42):

[[Infinity-field]] [[modules]]?

view this post on Zulip John Baez (Jan 03 2026 at 13:17):

Btw, when I just complained that few have taken up the cause of groupoidification, I was unjustly neglecting Joachim Kock and Andrew Tonks, who are doing it the right way!

view this post on Zulip John Baez (Jan 03 2026 at 13:18):

Joachim and I have some further ideas we want to work on....

view this post on Zulip John Baez (Jan 03 2026 at 13:40):

Yes, @David Corfield. Modules of the infinity-fields called the Morava K-theory spectra K(n) are all free in a certain precise sense: every module spectrum of K(n) is a wedge of suspensions of K(n).

Learning this fact has made me eager to finally understand [[Morava K-theories]] and the [[chromatic filtration]].

view this post on Zulip Rémy Tuyéras (Jan 03 2026 at 14:00):

@John Baez said:

Again, I don't think this is a connection between linear algebra and category theory: I think it's a connection between linear algebra and Prof, the 2-category of profunctors.

Yes, I agree that profunctors have a role here, but I think we also need to put equal emphasis on how to interpret the coefficients.

More specifically, let $C$ and $D$ be small categories in $\mathbf{Cat}$. For every profunctor $M : C^{\mathsf{op}} \times D \to \mathbf{Set}$, I see two complementary sources of equations:

1) Sets

First, observe that $\mathbf{Set}$ carries a weak-rig-like structure:

Then all rig equations hold except that the product-related ones only hold up to canonical isomorphism (hence "weak"):

$X \cdot (Y \cdot Z) \cong (X \cdot Y) \cdot Z$

$1 \cdot X \cong X \cong X \cdot 1$

Let me denote this weak structure by $\mathbf{SetRig}$.

With this in mind, one can view a set-valued "matrix" as a functor $M : C^{\mathsf{op}} \times D \to \mathbf{SetRig}$.

2) Bounded coefficients

If we work directly in $\mathbf{SetRig}$, this is not a fair comparison with ordinary linear algebra, where one typically fixes a ring or field of coefficients for a given class of matrices.

So instead, fix a set $S$ and consider the sub-collection of objects of $\mathbf{SetRig}$ consisting of subsets of some power $S^{n} = \prod_{i=1}^n S$ for some $n \geq 0$.

This yields a proper rig (not merely weak), because we can force products to land strictly in the expected power: if $X \subseteq S^{p}$ and $Y \subseteq S^{q}$, then we regard $X \cdot Y$ as a subset of $S^{p+q}$ via the canonical identification $S^{p} \times S^{q} = S^{p+q}$. With this convention we get strict equalities:

$X \cdot (Y \cdot Z) = (X \cdot Y) \cdot Z$

$1 \cdot X = X = X \cdot 1$

Let me denote this rig by $\mathbf{Rig}(S)$.

3) Profunctors as matrices

For every subcategory $\mathbf{C} \subseteq \mathbf{Cat}$, define a $\mathbf{C}$-matrix in $\mathbf{Rig}(S)$ to be:

Examples:

4) Null spaces for rigs

To connect $\mathbf{Cat}$-matrices with the usual arithmetic intuition from rings and fields, we would like rig-theoretic analogues of constructions such as kernels or null spaces.

In past work I developed a notion of null space in a rig setting (see Theorem 5.113) in the paper:

https://arxiv.org/pdf/1805.07004

Although the paper is motivated by an application to genetics, Section 5 is written in an application-agnostic way and is intended to stand on its own. The earlier sections mainly serve as motivation for why this rig-theoretic null space is a useful concept.

view this post on Zulip John Baez (Jan 03 2026 at 14:15):

That's a very nice analysis, @Rémy Tuyéras. Of course we should also let ourselves take advantage of your analysis and replace $\mathbf{Set}$ by some other rig-like category. For example, we can look at $\mathbf{R}$-enriched profunctors for some 2-rig $\mathbf{R}$, where I'm (temporarily) defining a 2-rig to be a symmetric monoidal cocomplete category where the tensor product distributes over colimits.

view this post on Zulip Rémy Tuyéras (Jan 03 2026 at 14:20):

Yes, absolutely, this would be even more complete

view this post on Zulip David Corfield (Jan 05 2026 at 10:39):

I was trying to work out the relevance of @John Baez's allusion to freeness above. Does this relate to what Jacobs describes in the paper @fosco mentioned earlier?

[image attachment]

So, then, the fact that the Lawvere theory for vector spaces over a chosen field has free algebras means that any map may be understood in terms of bases, and so vector spaces provide a particularly apt location for linear algebra.

view this post on Zulip David Corfield (Jan 05 2026 at 11:01):

Take another monad on $\mathrm{Set}$, such as the powerset monad. Its algebras are complete lattices. There's a corresponding coalgebra structure/basis for such an algebra if it is atomic. Things look like linear algebra when we consider the free algebras, $2^X$, and maps between these correspond to relations on sets.
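A small sketch of that last correspondence, assuming we encode a relation as a set of pairs: the induced map $2^X \to 2^Y$ is determined by its values on singletons (the "basis" of the free algebra $2^X$) and preserves unions, i.e. it is "linear":

```python
def apply_relation(R, A):
    """The sup-preserving map 2^X -> 2^Y induced by a relation R
    (a set of (x, y) pairs): send A to the set of y related to some x in A."""
    return {y for (x, y) in R if x in A}

R = {("a", 1), ("a", 2), ("b", 2)}
A, B = {"a"}, {"b"}

# 'Linearity': the map preserves unions, f(A u B) = f(A) u f(B)
lhs = apply_relation(R, A | B)
rhs = apply_relation(R, A) | apply_relation(R, B)
```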

view this post on Zulip David Corfield (Jan 05 2026 at 12:26):

In this vein, are profunctors appearing in this context as Kleisli maps for the presheaf 2-monad, there being a categorified version of the result above from Jacobs's paper for 2-monads on $\mathrm{Cat}$?

Do we have interesting examples of 2-monads all of whose algebras are free?

view this post on Zulip Rémy Tuyéras (Jan 05 2026 at 12:40):

David Corfield said:

Take another monad on $\mathrm{Set}$, such as the powerset monad. Its algebras are complete lattices. There's a corresponding coalgebra structure/basis for such an algebra if it is atomic. Things look like linear algebra when we consider the free algebras, $2^X$, and maps between these correspond to relations on sets.

Yes, this is the idea. I would only add that the free algebra structure on $\mathbf{2}^X$ may need to be extended to its underlying rig $\mathbf{Rig}(X)$ defined above. The reason is that, otherwise, we would still miss the coefficients forced by profunctor compositions (which generate tuples).

1) The case $R = \mathbf{2}^X$

In practice, the case $R = \mathbf{2}^X$ is straightforward. Choosing a basis $X$ allows us to establish pointwise relationships that can be used to solve linear equations.

For example, let
$X = \{a, b, c, d, e, f, \dots\}.$

Consider the system:

$\{b,c\} + Z = \{a,b,c\},$
$Z + \{e\} = \{e,a\}.$

This immediately implies that
$Z = \{a\}.$

Thus, even though there is no negation, we still obtain a mechanism reminiscent of Gaussian elimination. In Jacobs' paper, I believe this corresponds to working with the equaliser $(\mu, \beta)$, where $\beta$ is the basis morphism

$\beta : R \to T(R).$

Concretely:

Together, these constraints are intended to characterize $\beta(Z)$ in terms of the image of $\beta$, rather than as an absolute element.
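The little system above can be checked, or solved outright, by brute force over the rig $\mathbf{2}^X$, with union as addition; a throwaway Python sketch:

```python
from itertools import chain, combinations

X = {"a", "b", "c", "d", "e", "f"}

def subsets(S):
    """All subsets of S, as sets."""
    S = list(S)
    return (set(c) for c in chain.from_iterable(
        combinations(S, r) for r in range(len(S) + 1)))

# Solve {b,c} + Z = {a,b,c} and Z + {e} = {e,a}, where + is union.
solutions = [Z for Z in subsets(X)
             if {"b", "c"} | Z == {"a", "b", "c"} and Z | {"e"} == {"e", "a"}]
```

Exhaustive search confirms the unique solution Z = {a}, even though union admits no subtraction.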

2) Matrix equations and the rig structure

If we now want to solve matrix equations of the form

$MV = U,$

the rig structure becomes necessary. The overall approach is similar, but the structure provides additional elements, extending the previous equations to string-like expressions.

For example:
$\{baa, cda\} + Z \cdot \{a\} = \{baa, cda, eaa\},$
$\{eaa\} + Z \cdot \{a\} = \{eaa, cda\}.$

In this case, the solution is
$Z = \{ea, cd\}.$
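This solution is easy to verify mechanically, taking the rig product to be elementwise string concatenation and addition to be union:

```python
def concat(A, B):
    """Product in the rig of sets of strings: elementwise concatenation."""
    return {a + b for a in A for b in B}

Z = {"ea", "cd"}
# Z . {a} = {eaa, cda}, so both equations of the system hold:
eq1 = ({"baa", "cda"} | concat(Z, {"a"})) == {"baa", "cda", "eaa"}
eq2 = ({"eaa"} | concat(Z, {"a"})) == {"eaa", "cda"}
```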

3) The non-free case

Finally, consider the non-free case. While linear algebra over the free-like field $\mathbb{Q}$ is standard, we also work with linear algebra over structures such as $\mathbb{R}$ or $\mathbb{Q}[i, \sqrt{2}]$. This situation can be recovered within the rig framework by introducing additional relations (i.e. quotients) on the string-like elements, for example via letter shuffles:

$\{baa, cda\} = \{b|aa, c|da\} = \{bda, caa, baa, cda\}.$

In this setting, one must repeatedly refer back to the underlying free structure in order to interpret solutions.

view this post on Zulip Jacques Carette (Jan 05 2026 at 14:26):

Would you be able to spell out (or give a reference to) this Gaussian-elimination-like algorithm? I would be happy to see it spelled out just for rigs first. Even knowing conditions on when such an algorithm works would be useful.

view this post on Zulip Rémy Tuyéras (Jan 05 2026 at 15:02):

Jacques Carette said:

Would you be able to spell out (or give a reference to) this Gaussian-elimination-like algorithm? I would be happy to see it spelled out just for rigs first. Even knowing conditions on when such an algorithm works would be useful.

My best attempt at formalizing such a Gaussian-elimination-like algorithm is presented in the following paper:
https://arxiv.org/pdf/1805.07004

While the paper explores specific applications in genetics, readers who are primarily interested in the formal treatment may want to focus on the following parts:

Beyond Gaussian elimination itself, the paper also investigates several structural properties and theorems that may be useful for generalizing the framework. Overall, the presentation is intentionally top-down, aiming to clarify both the algorithmic core and the broader categorical setting in which it operates.

view this post on Zulip David Corfield (Jan 05 2026 at 15:46):

Another data point to consider for a broad category-theoretic account. According to [[Fourier-Mukai transform]]:

[image attachment]

where $X$ is a well-behaved scheme and $D(X)$ its [[derived category]] of [[quasicoherent sheaves]].

Which leads on to [[geometric ∞-function theory]]:

[image attachment]

view this post on Zulip Jacques Carette (Jan 05 2026 at 16:33):

"is represented by a matrix" is mathematical euphemism for 'can be written down in finite terms in a particular language'. What is the categorical equivalent of that?

view this post on Zulip Rémy Tuyéras (Jan 05 2026 at 17:27):

Jacques Carette said:

"is represented by a matrix" is mathematical euphemism for 'can be written down in finite terms in a particular language'. What is the categorical equivalent of that?

Linear maps are represented by matrices through the Yoneda Lemma. For example, a ring $R$ is represented by/expressed as an image of the Yoneda embedding $Y(t,-) : \mathsf{Mod}_R \to \mathbf{Set}$ for some $t$ in the theory $\mathsf{Mod}_R$ for $R$-modules.

Now, the Yoneda lemma gives:
$[\mathsf{Mod}_R, \mathbf{Set}](R, M) \cong M$

And then:
$[\mathsf{Mod}_R, \mathbf{Set}](R^m, M) \cong [\mathsf{Mod}_R, \mathbf{Set}](\oplus^m_{i=1} R, M) \cong M^{m}$

So a linear map $R^m \to R^n$ is an element of $R^{n \times m}$, i.e. a matrix of elements in $R$.
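Concretely, one direction of this is the familiar fact that a matrix acts on column vectors and is determined by its action on the delta basis; a minimal sketch over the integers:

```python
def apply_matrix(M, v):
    """A linear map R^m -> R^n encoded as an n x m matrix of ring elements,
    acting on column vectors by the usual sum-of-products."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

M = [[1, 2],
     [3, 4],
     [5, 6]]                  # a linear map R^2 -> R^3

# Applying M to a basis vector reads off a column of the matrix:
e1 = [1, 0]
col1 = apply_matrix(M, e1)    # the first column of M
```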

view this post on Zulip Ruby Khondaker (she/her) (Jan 05 2026 at 17:29):

My impression was that you can always represent maps out of a coproduct and into a product by a matrix? I didn't realise this was Yoneda-specific.

view this post on Zulip John Baez (Jan 05 2026 at 17:33):

Yes, a morphism from a coproduct to a product

$f : \coprod_{i \in S} X_i \to \prod_{j \in T} Y_j,$

where $S$ and $T$ are arbitrary sets, is the same as a 'matrix' $(f_{ij})_{i \in S, j \in T}$ of morphisms

$f_{ij} : X_i \to Y_j$

The situation is nicest in a category with [[biproducts]], since then every finite coproduct is also a finite product and the asymmetry of the above description dissolves: we often use $\oplus$ to denote biproducts, so we can write

$f : \bigoplus_{i \in S} X_i \to \bigoplus_{j \in T} Y_j$

where now the index sets $S, T$ are finite.

view this post on Zulip Ruby Khondaker (she/her) (Jan 05 2026 at 17:34):

Indeed, I remember this being emphasised in the abelian category section of my introductory course. What's especially nice about biproducts there is that morphism composition ends up corresponding to matrix multiplication!

view this post on Zulip John Baez (Jan 05 2026 at 17:36):

Right, that's the most important point.

And the most pleasant case of all is a category with biproducts where every object is a (finite) biproduct of copies of the same object. That's what we get in the category of finite-dimensional vector spaces over a field, or finitely generated free modules over a commutative rig.

view this post on Zulip Ruby Khondaker (she/her) (Jan 05 2026 at 17:39):

That makes sense - how about the category of finite-dimensional representations of a group? I guess in that case you have a known list of objects which everything else is a biproduct of, and Schur's lemma should guarantee that the matrices for different objects don't "mix"?

Actually, this reminds me of the fusion categories I (implicitly) came across in my "anyons and TQFTs" course. In that case we placed a lot of emphasis on studying various coherence matrices like the R and S matrices, which I suppose arise from this same biproduct idea.

view this post on Zulip John Baez (Jan 05 2026 at 17:49):

Ruby Khondaker (she/her) said:

That makes sense - how about the category of finite-dimensional representations of a group? I guess in that case you have a known list of objects which everything else is a biproduct of, and Schur's lemma should guarantee that the matrices for different objects don't "mix"?

That's right, that's second in simplicity only to the case where this list has just one element.

More generally we can do this sort of thing with any [[semisimple category]]. That's an abelian $k$-linear category for some field $k$ where every object is a biproduct of some list of 'basis' objects $x_\alpha$ for which $\mathrm{hom}(x_\alpha, x_\beta)$ is $\{0\}$ for $\alpha \ne \beta$ and $k$ for $\alpha = \beta$.

People in homological algebra regard semisimple categories as completely boring, but that boringness means they're a tractable framework for doing interesting things!

(When you're walking down the sidewalk, you usually don't want the surface of the sidewalk to be interesting.)

Actually, this reminds me of the fusion categories I (implicitly) came across in my "anyons and TQFTs" course.

Yes, I think those are usually assumed to be semisimple categories.