Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: learning: questions

Topic: Strings and graphs


view this post on Zulip Nathaniel Virgo (May 05 2020 at 08:10):

By default, string diagrams for monoidal categories live in a kind of Galilean space-time: there's a "space" dimension (tensor product) and a "time" dimension (composition), with the main rules being that you can't cross strings over each other or bend them backwards in "time".

However, duals and symmetry and so on relax these constraints. If we have a category like Rel we have almost complete freedom to move the strings around however we want. In this case it seems as if we could just forget about the spatial nature of the diagram altogether, and just think of the strings as edges in a graph, so a string is just an abstract connection between a port on one morphism and a port on another. It seems like working in this representation might make some things a lot simpler when it can be done.

In other cases (e.g. if every object has a dual but there's no braiding) it seems like we still need a spatial diagram, but the distinction between space and time goes away, so that what matters is an embedding of a planar graph in a 2D plane, but not its orientation.

I'm wondering if this idea can be / has been formalised. In everything I've read so far, we always treat the space/time distinction explicitly, even when we have isomorphism classes that allow us to sort-of ignore it. I guess I'm asking if there's a way of working in the isomorphism classes directly instead.
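(A minimal sketch of that "strings as abstract edges" reading, in Python; the class and field names are made up for illustration, and type-compatibility is simplified to plain equality of object labels, ignoring duals:)

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Port:
    box: str    # which box the port belongs to
    name: str   # label of the port on that box
    obj: str    # the object labelling the string

@dataclass
class PortGraph:
    boxes: dict = field(default_factory=dict)   # box name -> morphism label
    wires: set = field(default_factory=set)     # unordered pairs of ports

    def add_wire(self, p: Port, q: Port) -> None:
        if p.obj != q.obj:
            raise TypeError("wired ports must carry the same object")
        self.wires.add(frozenset((p, q)))

# two boxes and one wire between them; there is no notion of
# left/right or past/future here, only which port connects to which
d = PortGraph(boxes={"f": "f : A -> B", "g": "g : B -> C"})
d.add_wire(Port("f", "out", "B"), Port("g", "in", "B"))
```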

view this post on Zulip sarahzrf (May 05 2020 at 08:12):

abstract index notation springs to mind

view this post on Zulip sarahzrf (May 05 2020 at 08:13):

$a^{ij}_{kl} b^l_i c_j$, say

view this post on Zulip sarahzrf (May 05 2020 at 08:14):

each morphism is canonically viewed as having domain 1 and everything bent around so that it has some outputs that are dualized or not

view this post on Zulip sarahzrf (May 05 2020 at 08:15):

then you use indices to label the outputs upper vs lower to indicate whether they're dualized or not

view this post on Zulip sarahzrf (May 05 2020 at 08:15):

double-use indicates that two things should be wired together, so one should be upper and one should be lower in order for that to typecheck
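(A sketch of that dictionary in symbols; the notation $\ulcorner f \urcorner$ for the "name" of a morphism is standard, but the index layout below is just illustrative. With duals, a morphism $f : X \to Y$ can be traded for a state $\ulcorner f \urcorner : I \to X^* \otimes Y$, written $f_x^{\;y}$ with the lower index marking the dualized leg; wiring two ports together is then repeating an index, once down and once up:

$$ (g \circ f)_x^{\;z} \;=\; f_x^{\;y}\, g_y^{\;z}. $$
)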

view this post on Zulip sarahzrf (May 05 2020 at 08:17):

of course, usually when people write expressions like this, they're thinking about indexing into a multidimensional array of numbers and double-usage indicates a summation... :)
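(In that "arrays of numbers" reading, the expression above is literally an Einstein-summation contraction; a quick check in Python with NumPy, with array shapes chosen arbitrarily:)

```python
import numpy as np

n = 3  # every index ranges over the same dimension, for simplicity

# a^{ij}_{kl}, b^l_i, c_j as plain multidimensional arrays
a = np.random.rand(n, n, n, n)   # axes ordered (i, j, k, l)
b = np.random.rand(n, n)         # axes ordered (l, i)
c = np.random.rand(n)            # axis j

# repeated indices (i, j, l) are summed over; k is the only free index,
# so the result is a 1-dimensional array indexed by k
result = np.einsum('ijkl,li,j->k', a, b, c)
print(result.shape)  # (3,)
```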

view this post on Zulip sarahzrf (May 05 2020 at 08:19):

but why restrict ourselves to the category where objects are natural numbers and morphisms $n \to m$ are $m \times n$ matrices?

view this post on Zulip Nathaniel Virgo (May 05 2020 at 08:22):

Right, and string diagrams arose as an evolution of abstract index notation in the first place (Penrose, 1971) image.png

view this post on Zulip Henry Story (May 05 2020 at 08:22):

Nathaniel Virgo said:

By default, string diagrams for monoidal categories live in a kind of Galilean space-time: there's a "space" dimension (tensor product) and a "time" dimension (composition), with the main rules being that you can't cross strings over each other or bend them backwards in "time".

However, duals and symmetry and so on relax these constraints. If we have a category like Rel we have almost complete freedom to move the strings around however we want. In this case it seems as if we could just forget about the spatial nature of the diagram altogether, and just think of the strings as edges in a graph, so a string is just an abstract connection between a port on one morphism and a port on another. It seems like working in this representation might make some things a lot simpler when it can be done.
...

Regarding $\mathbf{Rel}$, there is this excellent article Knowledge Representation in Bicategories of Relations by @Evan Patterson that explains $\mathbf{Rel}$ with string diagrams.
It turns out that one can represent first order logic as a functor from a bicategory of relations to $\mathbf{Rel}$. Now I have always had the feeling that Logic has an atemporal nature, so I wonder if your point supports that.

view this post on Zulip Nathaniel Virgo (May 05 2020 at 08:31):

Thinking out loud, there are two possible interpretations of this image:

image.png

In the first interpretation the strings are just another way of writing the abstract index expression on the right, so they're really just abstract connections between the ports. In this interpretation the cups, caps and crossings are just artefacts of how it's drawn on the page - they don't have any formal significance and the equalities are just syntactic in nature.

In the second interpretation, the first two expressions are diagrams in a monoidal category, the cups, caps and crossings are morphisms and natural transformations, and the equivalence between the diagrams is a non-trivial result. Category theory takes the second interpretation by default, but I'm wondering if it's possible, when our category has the right structure, to formally transition to the first, simpler, interpretation instead.
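(I can't tell exactly what's in the image, but the zigzag identity makes the contrast concrete. In the second reading the cup and cap are genuine morphisms $\eta_A : I \to A \otimes A^*$ and $\varepsilon_A : A^* \otimes A \to I$, and the equation

$$ (\mathrm{id}_A \otimes \varepsilon_A) \circ (\eta_A \otimes \mathrm{id}_A) \;=\; \mathrm{id}_A $$

(associators and unitors suppressed) is an axiom that has to be imposed. In the first reading the corresponding picture says no more than something like $v^a \delta_a^{\;b} = v^b$: the bent wire is the same wire, just drawn with a kink.)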

view this post on Zulip Nathaniel Virgo (May 05 2020 at 08:49):

(Penrose seems to use both interpretations in his paper. When he writes that diagram he's talking fairly informally about connecting wires to represent sums, but then later on he introduces cups and caps as specific diagram elements for the raising and lowering of indices.)

view this post on Zulip Jules Hedges (May 05 2020 at 10:04):

Nathaniel Virgo said:

I'm wondering if this idea can be / has been formalised. In everything I've read so far, we always treat the space/time distinction explicitly, even when we have isomorphism classes that allow us to sort-of ignore it. I guess I'm asking if there's a way of working in the isomorphism classes directly instead.

A compact closed category induces an operad that allows you to do this, if I understand it correctly. Morphisms in a category are oriented from a "past" to a "future", but general operads give you much more freedom, including to relax it to a single "outside" boundary. David Spivak implicitly does this quite a lot in his papers, drawing diagrams without a strict division into left and right boundaries.

view this post on Zulip Jules Hedges (May 05 2020 at 10:07):

If you have a "categorical systems theory" like open graphs then it's natural to expect compact closure, because the 2 dimensions really are interchangeable. If you have a "categorical process theory" where one of the dimensions really is time then it's more of a surprise to get compact closure, like what happens in categorical quantum mechanics. I believe this is the basic motivation for studying causality in CQM: to re-introduce a decent notion of time that got erased by compact closure.

view this post on Zulip Amar Hadzihasanovic (May 05 2020 at 10:15):

I have spent a long time thinking about this idea during the first couple of years of my PhD -- long story short, I ended up “on the opposite side”, and now think that it is not a good idea. (This is briefly recounted in the preface of my thesis if you're interested.)

For me the fundamental insight (but it came after a long time tinkering, so it may be one of those monad/burrito cases where “the metaphor helps you only after you've figured it out yourself”) was that “diagrams are like maps, not like territories”. If your “territory” has a certain symmetry, wanting your “maps” to have that symmetry will usually make them worse, not better. Like cardinal directions may have no bearing on the territory but you sure want “up” to be “north” in your map. Or in a more mathematical example, “all vectors are the same” in a vector space -- there's no preferred directions -- but you certainly don't want coordinate systems to have the same property.
This is not to say it can't be done, just that I don't think it brings any advantage and gives you a less flexible tool (for example, you may not be able to relate compact closed stuff to non-compact closed stuff)...

view this post on Zulip Jules Hedges (May 05 2020 at 10:20):

I'm much less of an expert but came to roughly the same conclusion: that structured monoidal categories (symmetric, traced, compact closed, hypergraph, ...) are basically a hack that lets you work in a whole bunch of different useful operads in a uniform and more technically tractable way

view this post on Zulip Jules Hedges (May 05 2020 at 10:37):

Jules Hedges said:

A compact closed category induces an operad that allows you to do this, if I understand it correctly. Morphisms in a category are oriented from a "past" to a "future", but general operads give you much more freedom, including to relax it to a single "outside" boundary. David Spivak implicitly does this quite a lot in his papers, drawing diagrams without a strict division into left and right boundaries.

On reflection this doesn't sound right. I think what I meant to say is that the theory of compact closed categories induces an operad, and an individual compact closed category induces an algebra of that operad.
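(For what it's worth, the "single outside boundary" part can be made precise without the operadic machinery: in a compact closed category $\mathcal{C}$, bending the inputs around gives a natural isomorphism

$$ \mathcal{C}(X_1 \otimes \cdots \otimes X_n,\; Y) \;\cong\; \mathcal{C}(I,\; X_1^{*} \otimes \cdots \otimes X_n^{*} \otimes Y), $$

so up to this isomorphism every morphism is a state with one composite boundary, and the past/future orientation is no longer part of the data.)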