The category of linear relations is a handy category with lots of nice features and applications. In PROP form, it has as objects natural numbers (as always), morphisms $m \to n$ are subspaces of the vector space $k^m \oplus k^n$, and composition is done relation-style. That is, $S;T = \{(u, w) \mid \exists v.\ (u,v) \in S \text{ and } (v,w) \in T\}$. One can then show $S;T$ forms a subspace whenever $S$ and $T$ do, so it all fits very nicely together into a (symmetric monoidal) category.
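(A tiny worked example, purely for concreteness and with my own choice of numbers, in the case $m = n = 1$: take $S = \operatorname{Span}\{(1,2)\}$ and $T = \operatorname{Span}\{(1,3)\}$, both subspaces of $k \oplus k$. Then
$$S;T = \{(u,w) \mid \exists v.\ (u,v) \in S,\ (v,w) \in T\} = \operatorname{Span}\{(1,6)\},$$
since $(1,2) \in S$ and $(2,6) = 2\cdot(1,3) \in T$, and every element of $S;T$ is a scalar multiple of $(1,6)$.)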
I'm interested in doing some basic, concrete calculations from efficient representations of linear relations. I more or less know how to do many of these in the graphical linear algebra way, by drawing diagrams to represent linear relations then applying graphical rewrite rules, but I'd like to also get some competence working with these using vectors and/or matrices and Gauss-Jordan type tricks.
A basic example is composing linear relations from their basis representations. For example, suppose $S = \operatorname{Span}\{v_1, \ldots, v_j\}$ and $T = \operatorname{Span}\{w_1, \ldots, w_k\}$, how can I efficiently get a spanning set of vectors for $S;T$?
One can also ask the "perp" of this question. Suppose $S$ and $T$ are each described by a set of independent homogeneous linear equations, how can I compute the equations for $S;T$?
In other words, given spanning sets for $S^\perp$ and $T^\perp$, what is a spanning set for $(S;T)^\perp$?
I think there are many ways to do this, possibly by doing some stuff that might seem quite weird in an undergrad linear algebra course, like computing a pullback of matrices. My question is, are there easy, elementary tricks for doing these kinds of calculations?
Maybe some of you who have been working with these categories a lot can shed some light on this. @Pawel Sobocinski @Brendan Fong @Cole Comfort
A map in $\mathsf{Rel}$ can be (fully and faithfully) represented as a matrix over the Booleans. Therefore, given a prime $p$ and a map $n \to m$ in $\mathsf{LinRel}_{\mathbb{Z}_p}$, forgetting the linear structure and strictifying (via the forgetful functor to $\mathsf{Rel}$), this is just a matrix over the Booleans, because the set $\mathbb{Z}_p^n \oplus \mathbb{Z}_p^m$ has cardinality $p^{n+m}$. So if you can explain matrix multiplication to an undergraduate, you could get them to crunch numbers and compute composition in linear relations over a finite field... they might not understand what is going on, though. I'm not sure if this answers the question or not.
The key word here is efficiently computing compositions. I'd like to be able to do pen-and-paper calculations in small dimensions, and fast computer calculations in much larger ones.
The benefit of linear relations over generic relations is you only need a quadratic amount of data to represent a linear relation (e.g. as a spanning set), and Gauss-based calculations tend to be cubic. As you say, if you have a finite field, you can always compute the entire subset and do normal relation composition, but this gets expensive very quickly.
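To make the cost concrete, here is a brute-force sketch (my own illustration, not anyone's proposal) of "just enumerate the whole subset" over $\mathbb{Z}_p$; it is exponential in the number of wires, which is exactly the problem:

```python
from itertools import product

def subspace_points(basis, p, dim):
    """Brute-force enumeration of the span of `basis` inside Z_p^dim."""
    pts = set()
    for coeffs in product(range(p), repeat=len(basis)):
        pts.add(tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) % p
                      for i in range(dim)))
    return pts

def compose_naive(S_basis, T_basis, p, l, m, n):
    """S in Z_p^l (+) Z_p^m and T in Z_p^m (+) Z_p^n, each given by a spanning
    set of (l+m)- and (m+n)-tuples.  Returns S;T as an explicit set of
    (l+n)-tuples -- fine for tiny p and dimensions, hopeless beyond that."""
    S = subspace_points(S_basis, p, l + m)
    T = subspace_points(T_basis, p, m + n)
    return {s[:l] + t[m:] for s in S for t in T if s[l:] == t[:m]}

# e.g. over Z_2: S = Span{(1,1)}, T = Span{(1,1)}  gives  S;T = {(0,0), (1,1)}
print(compose_naive([(1, 1)], [(1, 1)], 2, 1, 1, 1))
```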
Maybe here's a tack we can take:
Q: Suppose I have a linear relation $S = \operatorname{Span}\{(v_1, w_1), \ldots, (v_j, w_j)\} \subseteq k^m \oplus k^n$, where $\operatorname{Span}$ is the linear span (not to be confused with "spans" in the CT sense). Can I represent this as a span of matrices $m \leftarrow j \rightarrow n$?
A first guess, based on the dimensions: the $v_i$ become the columns of the left leg $V$ and the $w_i$ become the columns of the right leg $W$. Is that right?
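(Spelling that guess out in my own notation: with $V = (v_1 \mid \cdots \mid v_j)$ and $W = (w_1 \mid \cdots \mid w_j)$, the claim is that
$$S \;=\; \{\, (Vx,\, Wx) \mid x \in k^{j} \,\} \;=\; \operatorname{Im}\begin{pmatrix} V \\ W \end{pmatrix}.)$$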
Quoting @Pawel Sobocinski https://graphicallinearalgebra.net/2016/02/03/28-subspaces-diagrammatically/#comment-3460 :
So the “Fundamental Theorem of Graphical Linear Algebra”, if there is such a thing, is that any diagram can be written in span form and in cospan form, i.e. as a span or a cospan of matrices. When you look at examples, to put a diagram into span form is to express the associated linear relation as a span of a list of basis vectors, and in cospan form as a solution of a list of homogeneous linear equations.
So Pawel just confirms what you are saying.
If I were trying to explain this to undergraduates, perhaps I would take the cospan approach, because computing the pushout corresponds to adding both sets of constraints together.
Perhaps this is the most amazing coincidence of terminology I have come upon, that both notions of spans coincide in this setting.
Yes, it's a very nice coincidence. However, I can't figure out if that makes the terminology clash better or worse. :)
Okay, I think I've found a suitably down-to-earth strategy for composing linear relations in span-form.
Take two linear relations $S : l \to m$ and $T : m \to n$, given as $S = \operatorname{Span}\{(v_1, v_1'), \ldots, (v_j, v_j')\}$ and $T = \operatorname{Span}\{(w_1, w_1'), \ldots, (w_k, w_k')\}$. We need to compute a basis for $S;T$.
Arrange the vectors into the columns of a matrix with $m + l + n$ rows, as follows:
$$\begin{pmatrix} -v_1' & \cdots & -v_j' & w_1 & \cdots & w_k \\ v_1 & \cdots & v_j & 0 & \cdots & 0 \\ 0 & \cdots & 0 & w_1' & \cdots & w_k' \end{pmatrix}$$
For convenience, I've put all of the middle components (the $-v_i'$'s and $w_i$'s, living in $k^m$) at the top, because we want to "zero these out". That is, we perform Gaussian elimination on the columns to obtain:
$$\begin{pmatrix} p_1 & \cdots & p_r & 0 & \cdots & 0 \\ * & \cdots & * & u_1 & \cdots & u_s \\ * & \cdots & * & u_1' & \cdots & u_s' \end{pmatrix}$$
where all of the $p_i$'s are non-zero (i.e. they contain a pivot).
Now, we throw away the first $r$ columns (and any all-zero columns), and use the rest to form:
$$S;T = \operatorname{Span}\{(u_1, u_1'), \ldots, (u_s, u_s')\}$$
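Here's a quick sketch of that recipe in Python, using exact rational arithmetic; the function and variable names are mine, and it's only meant as a sanity check, not a serious implementation:

```python
from fractions import Fraction

def compose_span(S_pairs, T_pairs, l, m, n):
    """S_pairs: list of (v, v') spanning S : l -> m  (v has l entries, v' has m).
       T_pairs: list of (w, w') spanning T : m -> n  (w has m entries, w' has n).
       Returns a spanning set of pairs (u, u') for S;T : l -> n."""
    # Build the (m + l + n)-row matrix: middle block on top, v' negated.
    cols = []
    for v, vp in S_pairs:
        cols.append([Fraction(-x) for x in vp] + [Fraction(x) for x in v] + [Fraction(0)] * n)
    for w, wp in T_pairs:
        cols.append([Fraction(x) for x in w] + [Fraction(0)] * l + [Fraction(x) for x in wp])

    # Column-reduce so that every non-pivot column is zero in the top m rows.
    pivots = []
    for row in range(m):
        piv = next((c for c in range(len(cols))
                    if c not in pivots and cols[c][row] != 0), None)
        if piv is None:
            continue
        pivots.append(piv)
        for c in range(len(cols)):
            if c != piv and cols[c][row] != 0:
                f = cols[c][row] / cols[piv][row]
                cols[c] = [a - f * b for a, b in zip(cols[c], cols[piv])]

    # Throw away pivot columns and all-zero columns; the rest span S;T.
    return [(col[m:m + l], col[m + l:]) for c, col in enumerate(cols)
            if c not in pivots and any(x != 0 for x in col)]

# Tiny check: S = Span{((1,), (2,))}, T = Span{((1,), (3,))}
# -> [([1/2], [3])], i.e. S;T = Span{(1, 6)}
print(compose_span([((1,), (2,))], [((1,), (3,))], 1, 1, 1))
```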
So, where the hell did that come from? (you might ask). Actually, re-reading the post you linked on @Pawel Sobocinski's blog made it click.
As usual, it's easier to treat everything as a state, by bending some wires. N.b. "caps" and "cups" don't coincide for fields other than $\mathbb{F}_2$, so we define them relative to the black dot, by convention.
This relation:
image.png
is spanned by the columns of:
$$\begin{pmatrix} v_1 & \cdots & v_j & 0 & \cdots & 0 \\ v_1' & \cdots & v_j' & 0 & \cdots & 0 \\ 0 & \cdots & 0 & w_1 & \cdots & w_k \\ 0 & \cdots & 0 & w_1' & \cdots & w_k' \end{pmatrix}$$
This relation:
image.png
subtracts the $v$-primes from the $w$'s and swaps the middle row to the top. Giving:
$$\begin{pmatrix} -v_1' & \cdots & -v_j' & w_1 & \cdots & w_k \\ v_1 & \cdots & v_j & 0 & \cdots & 0 \\ 0 & \cdots & 0 & w_1' & \cdots & w_k' \end{pmatrix}$$
Finally, the tricky bit is plugging a white dot into the first $m$ wires:
image.png
This is the "op" of the zero vector, which restricts to the largest subspace which is zero in the first rows. This is like a "partial kernel" of a matrix, which is exactly what the bit of Gauss-Jordan we did here:
is used to compute.
Any column with a pivot in the first $m$ rows can't be involved in any linear combination in the resulting relation, so we throw those away and keep the rest.
This gives the "wire-bent" version of the composition because:
image.png
This is super cool @Aleks Kissinger, but I guess there's a $-1$ missing? To turn the white cap into a black cap in your second-to-last step.
ah ha! I thought there was something funny going on with the signs. I'm used to thinking in $\mathbb{F}_2$, where plus = minus, and the two cups coincide.
(Note: the figures above have now been fixed to incorporate the $-1$ via an antipode.)
I guess in your construction you should just start with -S and it will all be good.
Rather than writing it again, I'm going to rewrite history above :)
Right :) the v primes should be negated.
okay done. does that look right to you?
I was somewhat inspired by the concrete construction of the pullback of matrices from Fabio Zanasi's thesis (p.74 of https://arxiv.org/pdf/1805.03032.pdf ).
There, you can see a minus sign appearing in the matrix whose kernel computes the pullback. I think it's there for the same reason.
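For anyone who wants to play with this, here's a small sketch (mine, using sympy) of that pullback-of-matrices construction, computed as the kernel of the block matrix $(A \mid -B)$, which is where the minus sign shows up:

```python
import sympy as sp

def pullback(A, B):
    """Pullback of the cospan A : X -> Z <- Y : B of matrices over Q,
    computed from a basis of Ker(A | -B).  Returns legs P : W -> X and
    Q : W -> Y satisfying A*P == B*Q."""
    null = sp.Matrix.hstack(A, -B).nullspace()
    K = sp.Matrix.hstack(*null) if null else sp.zeros(A.cols + B.cols, 0)
    return K[:A.cols, :], K[A.cols:, :]

# Tiny check: A = B = (1 2) as maps Q^2 -> Q^1.
A = sp.Matrix([[1, 2]])
P, Q = pullback(A, A)
assert A * P == A * Q
```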
Yeah, exactly! It's in a whole bunch of control theory literature also for this reason. In fact, there the symmetry between spans and cospans breaks: cospans describe all behaviour (as $\operatorname{Ker}(A \mid -B)$), spans only the controllable behaviours. This is explained in my LICS 2016 paper with @Brendan Fong and Paolo Rapisarda.
Hmm, not sure I get that. But I haven't read the paper, and don't really understand the connection to dynamical systems that well.
I guess both spans and cospans of matrices give a (somewhat redundant) description of a linear relation. Spans as $\{(Av, Bv) \mid v\}$ and cospans as $\{(v, w) \mid Av = Bw\}$.
It seems any linear relation can be written in either form, but maybe you care about this extra, redundant data. The extra data can be thought of as a particular choice of presentation (either as a spanning set or a system of homogeneous equations).
Yes. And the minimum size of the representation (which corresponds to the number of wires "in the middle") is the dimension (for the span case) and codimension (for the cospan case).
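(If it helps to see the two presentations side by side, here's a little sympy sketch, again my own illustration, that converts a spanning-set presentation into an equation presentation; the number of equations you get is exactly the codimension:)

```python
import sympy as sp

def span_to_equations(M):
    """Columns of M span a subspace V of Q^d.  Returns a matrix E with
    codim(V) independent rows such that V = Ker(E)."""
    left_null = M.T.nullspace()                 # vectors r with r^T M = 0
    if not left_null:
        return sp.zeros(0, M.rows)
    return sp.Matrix.vstack(*[r.T for r in left_null])

# The relation Span{(1,2)} in Q (+) Q is cut out by the single equation 2x - y = 0.
print(span_to_equations(sp.Matrix([[1], [2]])))   # -> Matrix([[-2, 1]])
```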
So, in terms of representing linear relations, they seem totally equivalent. What am I missing here? Does this talk about "behaviours" go outside of the LinRel world?
Perhaps this is related to the comment of Zanasi in his thesis?
Remark 3.18. It is worth mentioning that the same reasoning does not apply to pushouts: they exist in $\mathsf{Mat}_R$ for purely formal reasons, being the category self-dual, but cannot be generally calculated as in $\mathsf{Mod}_R$. This asymmetry actually plays a role in our developments: we shall return to it in Remark 3.54.
@Aleks Kissinger you're referring to my previous comment about control theory? In that case yes, it's not LinRel anymore.
okay, that makes more sense. So, why the minus sign in kernel-form and not image-form?
It's how you get from the span/cospan to the thing which has been rewired to have boundaries on one side only
In the case of cospan you get (A⊗B); black cap. For this entire thing to be in matrix form you want to turn this into a white cap. So you get (A⊗-B); white cap. Now this expression is just $\operatorname{Ker}(A \mid -B)$.
In the case of span black cup;(A⊗B). This thing after the black unit is already in matrix form, and this expression already gives you $\operatorname{Im}\binom{A}{B}$.
I don't know how to latex here.. but it should be clear
that makes sense. It's basically because matrices want to have white-dot mult and black-dot comult. The asymmetry comes from the fact that we are using matrices in both cases, rather than treating a span of matrices as a cospan of "op-matrices", i.e. relations of the form { (Mv,v) | v in V }.
ps. I think it's not showing the latex right because you have \begin{array} without \end{array} and some funny extra braces.
The fact that LinRel is equivalent to its op in (at least) 2 different ways is a bit mind-bending...
Pawel Sobocinski said:
I don't know how to latex here.. but it should be clear
You forgot \end{...}, and for some reason the extra curly braces around the contents of the matrix broke it! The bmatrix environment is another option if you don't mind square brackets around your matrices.
Right, thanks :)
@Aleks Kissinger Yes.. exactly. Matrices do a lot of work in traditional accounts, and the diagrams really help to explain it. Actually I just submitted a paper with João Paixão on Graphical Linear Algebra... we have a whole bunch of stuff in the pipeline that will hopefully result in a book, eventually. I should hit you up for book writing advice at some point!