Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: applied category theory

Topic: "Linearization"


John Baez (May 28 2020 at 05:34):

Matrices, Markov kernels and profunctors are all examples of "linearization". We have some sort of rig or 2-rig, like the real numbers, the nonnegative real numbers, or sets, and we define a morphism from $X$ to $Y$ to be a map from $X \times Y$, or $X^{\mathrm{op}} \times Y$ in the categorified case, to this rig or 2-rig.

John Baez (May 28 2020 at 05:35):

We can do most of the same tricks in each case.

John Baez (May 28 2020 at 05:36):

Relations are another example, since a relation from a set $X$ to a set $Y$ is the same as a function from $X \times Y$ to the boolean rig $\{0,1\}$.
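To make the boolean case concrete, here is a small sketch (the sets, names, and dict encoding are all illustrative): a relation is stored as its boolean "matrix", and composing two relations is exactly matrix multiplication in the rig $(\{0,1\}, \vee, \wedge)$.

```python
# A relation R ⊆ X × Y as a boolean "matrix": a function X × Y -> {False, True}.
# Composing relations is matrix multiplication in the boolean rig,
# where + is "or" and · is "and".  Sets and names are illustrative.

X = ["a", "b"]
Y = [1, 2, 3]
Z = ["p", "q"]

R = {(x, y): (x == "a" and y != 3) for x in X for y in Y}   # relation X -> Y
S = {(y, z): (y == 1 and z == "p") for y in Y for z in Z}   # relation Y -> Z

# (S ∘ R)(x, z) = OR over y of ( R(x, y) AND S(y, z) )
SR = {(x, z): any(R[(x, y)] and S[(y, z)] for y in Y) for x in X for z in Z}
```

The same comprehension with `sum` and `*` in place of `any` and `and` would be ordinary matrix multiplication: only the rig changes.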

John Baez (May 28 2020 at 05:37):

Spans are another example, since a span from a set $X$ to a set $Y$ is the same as a function from $X \times Y$ to $\mathsf{Set}$.

John Baez (May 28 2020 at 05:41):

So, whenever you learn a trick that works for one of these things - matrices, Markov kernels, relations, spans, profunctors - you should immediately try to use this trick for all the rest. I often forget to do this, but when I remember it usually works.

James Fairbanks (May 28 2020 at 14:20):

Can I ask, why is this called linearization? I don't see a connection to total orders or linear maps.

Reid Barton (May 28 2020 at 14:24):

As in linear algebra and matrices, whether over a field, or just a rig like the nonnegative integers, or its categorification ($\mathsf{FinSet}$ or $\mathsf{Set}$), or some more distant cousin. The unifying theme is that there is some kind of addition: $+$ or $\amalg$ or $\vee$.

Reid Barton (May 28 2020 at 14:26):

Linearization is replacing a set $X$ by the free vector space on $X$, or by the free commutative monoid on $X$, or by the category of sets over $X$, ..., where you have this new "addition" operation.

James Fairbanks (May 28 2020 at 15:18):

Oh OK: in my understanding of a finite-dimensional vector space, you have vectors which can be viewed as maps $v: n \to K$ where $K$ is a field, and here we are relaxing the requirement that the dimension be a natural number and relaxing the requirement that $K$ be a field. Then you can construct matrices out of vectors to get $A: n \times m \to K$. That makes sense. You end up with functions $f: X \times Y \to K$ where $X, Y$ are objects and $K$ is a rig. These things should behave like matrices in a lot of ways. That explains why matrix multiplication is equivalent to relational algebra joins. And the axioms of the rig get transferred to these matrix-like constructions, so that the sum of matrices is the elementwise sum in the rig.
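That observation can be sketched directly: one matrix-multiplication routine, parametrized by the rig $(0, +, \cdot)$, does ordinary linear algebra over $\mathbb{R}$ and relational join over the booleans. This is a minimal illustration with made-up data, not any particular library's API.

```python
# Matrix multiplication over an arbitrary rig, treating a "matrix" as a
# function f : X × Y -> K stored in a dict.  The rig is passed in as
# (zero, add, mul); all concrete data below is illustrative.

def mat_mul(A, B, X, Y, Z, zero, add, mul):
    """(A·B)(x, z) = sum over y of A(x, y)·B(y, z), in the given rig."""
    C = {}
    for x in X:
        for z in Z:
            acc = zero
            for y in Y:
                acc = add(acc, mul(A[(x, y)], B[(y, z)]))
            C[(x, z)] = acc
    return C

X, Y, Z = range(2), range(2), range(2)

# Ordinary linear algebra: the rig (field) of real numbers.
A = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}
I = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}
real = mat_mul(A, I, X, Y, Z, 0.0, lambda a, b: a + b, lambda a, b: a * b)

# Relational join: the boolean rig, with + = or and · = and.
R = {(0, 0): True, (0, 1): False, (1, 0): False, (1, 1): True}
S = {(0, 0): False, (0, 1): True, (1, 0): True, (1, 1): False}
joined = mat_mul(R, S, X, Y, Z, False, lambda a, b: a or b, lambda a, b: a and b)
```

Swapping in `(0, max, +)` would give the tropical rig and shortest-path-style computations, with no change to `mat_mul` itself.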

Daniel Geisler (May 28 2020 at 16:17):

@John Baez said:

So, whenever you learn a trick that works for one of these things - matrices, Markov kernels, relations, spans, profunctors - you should immediately try to use this trick for all the rest.

I'd love to hear more about this. Also I think of matrices and dynamical systems interchangeably. Am I getting something wrong? Thanks.

John Baez (May 28 2020 at 18:42):

James Fairbanks said:

Can I ask, why is this called linearization? I don't see a connection to total orders or linear maps.

It sounds like you got it by now, but:

In the cases I'm talking about, we have a rig or 2-rig $V$, and for any set or category $X$ we can form the free $V$-module on $X$ by taking $V^{X^{\mathrm{op}}}$. There's always a Yoneda-like embedding

$X \to V^{X^{\mathrm{op}}}$

We can then describe any map of free VV-modules

$F: V^{X^{\mathrm{op}}} \to V^{Y^{\mathrm{op}}}$

using a "matrix"

$T: Y \times X^{\mathrm{op}} \to V$

Furthermore we can compose these maps by matrix multiplication.
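Spelled out, with $T: Y \times X^{\mathrm{op}} \to V$ and $T': Z \times Y^{\mathrm{op}} \to V$, this composition is the familiar formula, with sum and product taken in $V$ (and the sum replaced by a coend in the categorified case):

```latex
(T' \circ T)(z, x) \;=\; \sum_{y \in Y} T'(z, y) \, T(y, x)
```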

A popular example is $X = m$, $Y = n$, $V = \mathbb{R}$. Then

$V^{X^{\mathrm{op}}} = \mathbb{R}^m$

$V^{Y^{\mathrm{op}}} = \mathbb{R}^n$

and I'm saying we can describe a linear map from $\mathbb{R}^m$ to $\mathbb{R}^n$ using an $n \times m$ matrix of real numbers,

$T: n \times m \to \mathbb{R}$

In this situation the Yoneda-like embedding

$n \to \mathbb{R}^n$

is just the inclusion of the $n$-element set into the vector space $\mathbb{R}^n$ as the standard basis!

The same story works for Markov kernels, relations, spans of sets, and profunctors. In every case we are taking the category of sets (or 2-category of categories) and embedding it in a "linear" category (or 2-category) - that is, one where we can add maps, and we can compose maps by matrix multiplication. This lets us apply the ideas of linear algebra.

John Baez (May 28 2020 at 18:48):

I only got the point of presheaf categories and profunctors when I realized that they were a way to take category theory and linearize it.

Jade Master (May 28 2020 at 22:36):

Thanks John, this is really beautiful. Is the free $V$-module on a set $X$ really $V^X$ when $X$ is infinite? I'm not sure if you need to take only the functions with finite support.

John Baez (May 28 2020 at 22:51):

You're right, Jade: the free $V$-module on a set $X$ is only $V^X$ when $X$ is finite.

Category theory works a bit better: for example, the free cocompletion of a category $X$ is $\mathsf{Set}^{X^{\mathrm{op}}}$ whenever $X$ is small.

When a set $X$ is not finite, to get the free $V$-module on it we need to take the subset of $V^X$ consisting of finite linear combinations of elements of $X$.

When a category $X$ is not small, to get its free cocompletion we need to take the subcategory of $\mathsf{Set}^{X^{\mathrm{op}}}$ consisting of small colimits of representables.

So "small" is the big boys' version of "finite".
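The finite-support point can be sketched in a few lines (a sketch, assuming $V$ is the rig of natural numbers; the dict encoding is illustrative): an element of the free $V$-module on a possibly infinite $X$ records only its finitely many nonzero coefficients.

```python
# Free V-module on a (possibly infinite) set X, with V the rig of
# natural numbers: finitely supported functions X -> V, stored as
# dicts that omit zero entries.  A sketch; the encoding is illustrative.

def embed(x):
    """The Yoneda-like embedding: x goes to the basis vector at x."""
    return {x: 1}

def add_vec(v, w):
    """Pointwise sum, keeping the support finite by dropping zeros."""
    out = dict(v)
    for x, c in w.items():
        out[x] = out.get(x, 0) + c
    return {x: c for x, c in out.items() if c != 0}

# Even though X here is "all integers", each vector stores only its
# finite support, so the infinite ambient set costs nothing.
v = add_vec(embed(3), add_vec(embed(3), embed(7)))
```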

John Baez (May 28 2020 at 22:52):

By the way, what I just said about $\mathsf{Set}^{X^{\mathrm{op}}}$ should work for $V^{X^{\mathrm{op}}}$ more generally, but I'm too lazy to remember/write down the fine print. Steve Lack has a paper about cocompletions of large enriched categories.

John Baez (May 28 2020 at 22:53):

Also, it's good to imagine that the "op" is hiding in $V^X$ even when $X$ is a set: it's just invisible then. This makes the analogy more perfect.

Jade Master (May 28 2020 at 23:18):

Of course a set is just a category with only identity arrows :smirk:

Gershom (May 29 2020 at 07:04):

Here is a dumb "pattern-matching" inspired question. There is a slogan that "homology is the linearization of homotopy". Is there any way at all that this fits with this picture, or are the two just different uses of the term "linearization"?

John Baez (May 29 2020 at 19:42):

It's pretty darn similar. When we take homotopy theory and "linearize" it to get homology theory, what we're doing is taking an infinity-groupoid and turning it into a strict stable infinity-groupoid, which is the same as a chain complex of abelian groups.

Another way to think of the exact same thing is that we're taking a simplicial set and freely turning it into a simplicial abelian group. (Simplicial sets are a model for infinity-groupoids, and the Dold-Kan theorem says simplicial abelian groups are the same as chain complexes of abelian groups.)

The second perspective is probably more helpful now. Taking a simplicial set and freely turning it into a simplicial abelian group has, as a special case, forming the free abelian group on a set $S$. This is just $\mathbb{Z}^S$ when $S$ is finite. And this process $S \mapsto \mathbb{Z}^S$ is precisely an example of the "linearization" idea I was just talking about!
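As a tiny illustration of that last step (a sketch; the sets and the function `f` are made up), a function of finite sets $f: S \to T$ linearizes to a $\mathbb{Z}$-linear map $\mathbb{Z}^S \to \mathbb{Z}^T$ whose matrix has a 1 in position $(t, s)$ exactly when $f(s) = t$:

```python
# Linearization as a functor: a function f : S -> T of finite sets
# induces a Z-linear map Z^S -> Z^T sending the basis vector e_s to
# e_{f(s)}.  The sets and f below are illustrative.

S = ["a", "b", "c"]
T = ["x", "y"]
f = {"a": "x", "b": "x", "c": "y"}

def linearize(f, S, T):
    """The integer matrix of the induced map, indexed as M[(t, s)]."""
    return {(t, s): (1 if f[s] == t else 0) for t in T for s in S}

def apply_mat(M, v, S, T):
    """Apply the matrix to a vector v in Z^S (a dict s -> integer)."""
    return {t: sum(M[(t, s)] * v.get(s, 0) for s in S) for t in T}

M = linearize(f, S, T)
```

Applying `M` to the vector with coefficient 1 on every element of `S` counts the fibers of `f`, which is the sort of thing linearization remembers.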

Alex Kreitzberg (Jun 27 2020 at 02:37):

Baez goes into a lot more detail about this stuff in his online course in chapter 4:

https://www.azimuthproject.org/azimuth/show/Applied+Category+Theory+Course#Chapter_4

My respect for matrices (as opposed to coordinate free presentations) deepened after the discussion.