You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Matrices, Markov kernels and profunctors are all examples of "linearization". We have some sort of rig or 2-rig, like the real numbers, the nonnegative real numbers or the category of sets, and we define a morphism from $X$ to $Y$ to be a map from $X \times Y$, or in the categorified case $X^{\mathrm{op}} \times Y$, to this rig or 2-rig.
We can do most of the same tricks in each case.
Relations are another example, since a relation from a set $X$ to a set $Y$ is the same as a function from $X \times Y$ to the boolean rig $\mathbb{B} = \{0, 1\}$.
Spans are another example, since a span from a set $X$ to a set $Y$ is the same as a function from $X \times Y$ to $\mathsf{Set}$.
So, whenever you learn a trick that works for one of these things - matrices, Markov kernels, relations, spans, profunctors - you should immediately try to use this trick for all the rest. I often forget to do this, but when I remember it usually works.
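The "same trick, different rig" idea can be made concrete: matrix multiplication written once, parameterized by a rig. This is a minimal illustrative sketch (the `Rig` class and `matmul` function are made up for this example, not from the discussion): over the real numbers it is ordinary matrix multiplication, and over the boolean rig the very same code composes relations.

```python
# Matrix multiplication parameterized by a rig (zero, one, add, mul).
# All names here are illustrative.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rig:
    zero: Any
    one: Any
    add: Callable[[Any, Any], Any]
    mul: Callable[[Any, Any], Any]

def matmul(R: Rig, A, B):
    """Multiply an m x k matrix A by a k x n matrix B, entries in the rig R."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[R.zero for _ in range(n)] for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for t in range(k):
                C[i][j] = R.add(C[i][j], R.mul(A[i][t], B[t][j]))
    return C

reals = Rig(0, 1, lambda a, b: a + b, lambda a, b: a * b)
bools = Rig(False, True, lambda a, b: a or b, lambda a, b: a and b)

# Over the reals: ordinary matrix multiplication.
print(matmul(reals, [[1, 2], [3, 4]], [[5], [6]]))  # [[17], [39]]

# Over the booleans: composition of relations (here, identity after swap).
R1 = [[True, False], [False, True]]   # relation from X to Y as a boolean matrix
R2 = [[False, True], [True, False]]   # relation from Y to Z
print(matmul(bools, R1, R2))
```

Markov kernels fit the same mold: entries in the nonnegative reals, with the extra condition that each column sums to 1.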
Can I ask, why is this called linearization? I don't see a connection to total orders or linear maps.
As in linear algebra and matrices, whether over a field, or just a rig like the nonnegative integers, or its categorification (FinSet or Set), or some more distant cousin. The unifying theme is that there is some kind of addition: $+$ or $\vee$ or $\sqcup$.
Linearization is replacing a set $X$ by the free vector space on $X$, or by the free commutative monoid on $X$, or by the category of sets over $X$, ..., where you have this new "addition" operation.
Oh ok. In my understanding of a finite-dimensional vector space, you have vectors which can be viewed as maps $v \colon n \to k$ where $k$ is a field, and here we are relaxing the requirement that the dimension $n$ be a natural number and relaxing the requirement that $k$ be a field. Then you can construct matrices out of vectors to get maps $m \times n \to k$. That makes sense. You end up with functions $X \times Y \to R$ where $X, Y$ are objects and $R$ is a rig. These things should behave like matrices in a lot of ways. That explains why matrix multiplication is equivalent to relational algebra joins. And the axioms of the rig get transferred to these matrix-like constructions, so that the sum of matrices is the elementwise sum in the rig.
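The claim that matrix multiplication is equivalent to relational joins can be checked directly on a small example. This is an illustrative sketch (the sets and relations are made up): composing two relations by a join on the shared column gives the same answer as multiplying their boolean matrices.

```python
# A relation from X to Y as a set of pairs, and the same relation as a
# boolean matrix; composition = join = boolean matrix multiplication.
X, Y, Z = [0, 1], [0, 1, 2], [0, 1]
R = {(0, 0), (0, 2), (1, 1)}   # relation from X to Y
S = {(0, 1), (1, 0), (2, 0)}   # relation from Y to Z

# Relational composition: a join on the shared Y column.
join = {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

# The same thing as boolean matrix multiplication.
MR = [[(x, y) in R for y in Y] for x in X]
MS = [[(y, z) in S for z in Z] for y in Y]
prod = [[any(MR[i][t] and MS[t][j] for t in range(len(Y)))
         for j in range(len(Z))] for i in range(len(X))]

matrix_as_relation = {(X[i], Z[j]) for i in range(len(X))
                      for j in range(len(Z)) if prod[i][j]}
assert join == matrix_as_relation
```

Here "sum" in the rig is `or` and "product" is `and`, so summing over the middle index is exactly asking "does there exist a $y$ relating $x$ to $z$?", which is what the join computes.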
@John Baez said:
So, whenever you learn a trick that works for one of these things - matrices, Markov kernels, relations, spans, profunctors - you should immediately try to use this trick for all the rest.
I'd love to hear more about this. Also I think of matrices and dynamical systems interchangeably. Am I getting something wrong? Thanks.
James Fairbanks said:
Can I ask, why is this called linearization? I don't see a connection to total orders or linear maps.
It sounds like you got it by now, but:
In the cases I'm talking about, we have a rig or 2-rig $R$, and for any set or category $X$ we can form the free $R$-module on $X$ by taking $R^X$. There's always a Yoneda-like embedding $i \colon X \to R^X$.
We can then describe any map of free $R$-modules $f \colon R^X \to R^Y$ using a "matrix" $F \colon Y \times X \to R$. Furthermore, we can compose these maps by matrix multiplication.
A popular example is $R = \mathbb{R}$, $X = n$, $Y = m$, where $n$ and $m$ are finite sets. Then $R^X = \mathbb{R}^n$ and $R^Y = \mathbb{R}^m$, and I'm saying we can describe a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$ using an $m \times n$ matrix of real numbers.
In this situation the Yoneda-like embedding $n \to \mathbb{R}^n$ is just the inclusion of the $n$-element set into the vector space $\mathbb{R}^n$ as the standard basis!
The same story works for Markov kernels, relations, spans of sets, and profunctors. In every case we are taking the category of sets (or 2-category of categories) and embedding it in a "linear" category (or 2-category), that is, one where we can add maps and compose maps by matrix multiplication. This lets us apply the ideas of linear algebra.
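The standard-basis example is small enough to compute by hand. A hedged sketch (function names are made up for illustration): an $m \times n$ matrix of reals determines a linear map, and applying that map to the basis vectors coming from the Yoneda-like embedding reads off the matrix's columns.

```python
def embed(i, n):
    """Yoneda-like embedding of the n-element set into R^n: the standard basis."""
    return [1.0 if j == i else 0.0 for j in range(n)]

def apply_matrix(F, v):
    """The linear map R^n -> R^m described by the m x n matrix F."""
    return [sum(F[i][j] * v[j] for j in range(len(v))) for i in range(len(F))]

F = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]   # a 2 x 3 matrix: a map R^3 -> R^2

# Applying the map to the j-th basis vector recovers the j-th column of F.
for j in range(3):
    assert apply_matrix(F, embed(j, 3)) == [F[0][j], F[1][j]]
```

This is why specifying a linear map on a basis suffices: the matrix *is* the restriction of the map to the image of the embedding.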
I only got the point of presheaf categories and profunctors when I realized that they were a way to take category theory and linearize it.
Thanks John, this is really beautiful. Is the free $R$-module on a set $X$ really $R^X$ when $X$ is infinite? I'm not sure if you need to take only the functions with finite support.
You're right, Jade: the free $R$-module on a set $X$ is $R^X$ only when $X$ is finite.
Category theory works a bit better: for example, the free cocompletion of a category $C$ is $\mathsf{Set}^{C^{\mathrm{op}}}$ whenever $C$ is small.
When a set $X$ is not finite, to get the free $R$-module on it we need to take the subset of $R^X$ consisting of finite linear combinations of elements of $X$.
When a category $C$ is not small, to get its free cocompletion we need to take the subcategory of $\mathsf{Set}^{C^{\mathrm{op}}}$ consisting of small colimits of representables.
So "small" is the big boys' version of "finite".
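The finite-support point can be made concrete: represent an element of the free $R$-module on a possibly infinite set as a dictionary with finitely many nonzero coefficients. A stdlib sketch with made-up names:

```python
# Elements of the free R-module on a set X as finitely supported
# dictionaries {x: coefficient}; here R is the integers.
from collections import defaultdict

def add(v, w):
    """Sum of two finite linear combinations."""
    out = defaultdict(int)
    for d in (v, w):
        for x, c in d.items():
            out[x] += c
    return {x: c for x, c in out.items() if c != 0}  # keep the support finite

def scale(r, v):
    """Scalar action of r on a finite linear combination."""
    return {x: r * c for x, c in v.items() if r * c != 0}

# Two elements of the free Z-module on the (infinite) set of all strings:
v = {"a": 2, "b": -1}
w = {"b": 1, "c": 3}
assert add(v, w) == {"a": 2, "c": 3}
assert scale(2, v) == {"a": 4, "b": -2}
```

Even though the underlying set (all strings) is infinite, every element touches only finitely many of them, which is exactly the "finite linear combinations" restriction above.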
By the way, what I just said about $\mathsf{Set}^{C^{\mathrm{op}}}$ should work for enriched presheaf categories $V^{C^{\mathrm{op}}}$ more generally, but I'm too lazy to remember/write down the fine print. Steve Lack has a paper about cocompletions of large enriched categories.
Also, it's good to imagine that the "op" is hiding in $R^{X^{\mathrm{op}}}$ even when $X$ is a set: it's just invisible then. This makes the analogy more perfect.
Of course a set is just a category with only identity arrows :smirk:
Here is a dumb "pattern-matching" inspired question. There is a slogan that "homology is the linearization of homotopy". Is there any way at all that this fits with this picture, or are the two just different uses of the term "linearization"?
It's pretty darn similar. When we take homotopy theory and "linearize" it to get homology theory, what we're doing is taking an infinity-groupoid and turning it into a strict stable infinity-groupoid, which is the same as a chain complex of abelian groups.
Another way to think of the exact same thing is that we're taking a simplicial set and freely turning it into a simplicial abelian group. (Simplicial sets are a model for infinity-groupoids, and the Dold-Kan theorem says simplicial abelian groups are the same as chain complexes of abelian groups.)
The second perspective is probably more helpful now. Taking a simplicial set and freely turning it into a simplicial abelian group has, as a special case, forming the free abelian group on a set $X$. This is just $\mathbb{Z}^X$ when $X$ is finite. And this process is precisely an example of the "linearization" idea I was just talking about!
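The slogan can be computed in its most down-to-earth form: take a space (here a hollow triangle, i.e. a circle), form free abelian groups on its vertices and edges, and read off Betti numbers from the rank of the boundary matrix. A pure-stdlib sketch; the example and function names are illustrative, not from the discussion.

```python
# Betti numbers of the hollow triangle (a circle) from its boundary matrix.
from fractions import Fraction

def rank(M):
    """Rank of an integer matrix via Gaussian elimination over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Boundary matrix d1: rows = vertices v0, v1, v2; columns = edges
# v0v1, v1v2, v0v2; each edge's boundary is (target) - (source).
d1 = [[-1,  0, -1],
      [ 1, -1,  0],
      [ 0,  1,  1]]

n_vertices, n_edges = 3, 3
b0 = n_vertices - rank(d1)       # number of connected components
b1 = n_edges - rank(d1)          # no 2-cells, so nothing to quotient by
print(b0, b1)                    # the circle has b0 = 1, b1 = 1
```

The vertices and edges play the role of the set $X$; the chain groups are the free abelian groups on them, and the boundary maps are exactly "matrices" in the sense of the earlier messages.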
Baez goes into a lot more detail about this stuff in his online course in chapter 4:
https://www.azimuthproject.org/azimuth/show/Applied+Category+Theory+Course#Chapter_4
My respect for matrices (as opposed to coordinate-free presentations) deepened after this discussion.