You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive, refer to the same person.
By default, string diagrams for monoidal categories live in a kind of Galilean space-time: there's a "space" dimension (tensor product) and a "time" dimension (composition), with the main rules being that you can't cross strings over each other or bend them backwards in "time".
However, duals and symmetry and so on relax these constraints. If we have a category like Rel we have almost complete freedom to move the strings around however we want. In this case it seems as if we could just forget about the spatial nature of the diagram altogether, and just think of the strings as edges in a graph, so a string is just an abstract connection between a port on one morphism and a port on another. It seems like working in this representation might make some things a lot simpler when it can be done.
In other cases (e.g. if every object has a dual but there's no braiding) it seems like we still need a spatial diagram, but the distinction between space and time goes away, so that what matters is an embedding of a planar graph in the plane, but not its orientation.
I'm wondering if this idea can be / has been formalised. In everything I've read so far, we always treat the space/time distinction explicitly, even when we have isomorphism classes that allow us to sort-of ignore it. I guess I'm asking if there's a way of working in the isomorphism classes directly instead.
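For concreteness, here is a minimal sketch (all names hypothetical) of that "strings as abstract graph edges" representation: a diagram is just some labelled boxes with numbered ports, plus an undirected matching on those ports, with no orientation data at all.

```haskell
-- A minimal sketch (hypothetical names): a diagram as an abstract port
-- graph, with no space/time orientation on the wires.
data Node = Node
  { nodeLabel :: String  -- which generating morphism this box is
  , numPorts  :: Int     -- how many ports the box exposes
  } deriving Show

-- A port is (index of the node, index of the port on that node).
type Port = (Int, Int)

data Diagram = Diagram
  { nodes :: [Node]
  , wires :: [(Port, Port)]  -- each wire is an unordered connection
  } deriving Show            -- ports not wired form the open boundary

-- Example: two boxes r and s with one wire joining port 1 of r to
-- port 0 of s; read atemporally, this just says "these ports are glued".
example :: Diagram
example = Diagram [Node "r" 2, Node "s" 2] [((0, 1), (1, 0))]
```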
abstract index notation springs to mind
say each morphism is canonically viewed as having domain 1, with everything bent around so that it has some outputs that are dualized or not
then you use indices to label the outputs upper vs lower to indicate whether they're dualized or not
double-use indicates that two things should be wired together, so one should be upper and one should be lower in order for that to typecheck
of course, usually when people write expressions like this, they're thinking about indexing into a multidimensional array of numbers and double-usage indicates a summation... :)
but why restrict ourselves to the category where objects are natural numbers and morphisms are matrices?
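To spell that out with the standard example (nothing here beyond the conventions just described): for $f : A \to B$ and $g : B \to C$, write $f_a{}^b$ and $g_b{}^c$, where a lower index marks a dualized port (an input) and an upper index an ordinary output. Repeating $b$, once lower and once upper, wires those two ports together:

$$(g \circ f)_a{}^c \;=\; f_a{}^b \, g_b{}^c,$$

and in the matrices-and-summation reading this repeated index is just a sum, $(gf)_a{}^c = \sum_b f_a{}^b \, g_b{}^c$.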
Right, and string diagrams arose as an evolution of abstract index notation in the first place (Penrose, 1971). [image: excerpt from Penrose's paper]
Nathaniel Virgo said:
By default, string diagrams for monoidal categories live in a kind of Galilean space-time: there's a "space" dimension (tensor product) and a "time" dimension (composition), with the main rules being that you can't cross strings over each other or bend them backwards in "time".
However, duals and symmetry and so on relax these constraints. If we have a category like Rel we have almost complete freedom to move the strings around however we want. In this case it seems as if we could just forget about the spatial nature of the diagram altogether, and just think of the strings as edges in a graph, so a string is just an abstract connection between a port on one morphism and a port on another. It seems like working in this representation might make some things a lot simpler when it can be done.
...
Regarding Rel, there is this excellent article Knowledge Representation in Bicategories of Relations by @Evan Patterson that explains it with string diagrams.
It turns out that one can represent first order logic as a functor from a bicategory of relations to Rel. Now I have always had the feeling that logic has an atemporal nature, so I wonder if your point supports that.
Thinking out loud, there are two possible interpretations of this image:
In the first interpretation the strings are just another way of writing the abstract index expression on the right, so they're really just abstract connections between the ports. In this interpretation the cups, caps and crossings are just artefacts of how it's drawn on the page - they don't have any formal significance and the equalities are just syntactic in nature.
In the second interpretation, the first two expressions are diagrams in a monoidal category, the cups, caps and crossings are morphisms and natural transformations, and the equivalence between the diagrams is a non-trivial result. Category theory takes the second interpretation by default, but I'm wondering if it's possible, when our category has the right structure, to formally transition to the first, simpler, interpretation instead.
(Penrose seems to use both interpretations in his paper. When he writes that diagram he's talking fairly informally about connecting wires to represent sums, but then later on he introduces cups and caps as specific diagram elements for the raising and lowering of indices.)
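To make the contrast concrete (a sketch using the usual compact closed notation, not necessarily exactly what's in the image): take a cup $\eta_A : I \to A \otimes A^*$ and a cap $\epsilon_A : A^* \otimes A \to I$. Under the second interpretation the snake identity (unitors and associators suppressed)

$$(\mathrm{id}_A \otimes \epsilon_A) \circ (\eta_A \otimes \mathrm{id}_A) \;=\; \mathrm{id}_A$$

is a genuine axiom relating two different composites. Under the first interpretation both sides denote the same index expression $\delta_a{}^b$, so the "equation" is just two drawings of one and the same abstract connection.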
Nathaniel Virgo said:
I'm wondering if this idea can be / has been formalised. In everything I've read so far, we always treat the space/time distinction explicitly, even when we have isomorphism classes that allow us to sort-of ignore it. I guess I'm asking if there's a way of working in the isomorphism classes directly instead.
A compact closed category induces an operad that allows you to do this, if I understand it correctly. Morphisms in a category are oriented from a "past" to a "future", but general operads give you much more freedom, including relaxing it to a single "outside" boundary. David Spivak implicitly does this quite a lot in his papers, drawing diagrams without a strict division into left and right boundaries.
If you have a "categorical systems theory" like open graphs then it's natural to expect compact closure, because the 2 dimensions really are interchangeable. If you have a "categorical process theory" where one of the dimensions really is time, then it's more of a surprise to get compact closure... as happens in categorical quantum mechanics. I believe this is the basic motivation for studying causality in CQM: to re-introduce a decent notion of time that got erased by compact closure.
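To ground the compact closure being talked about here, a small self-contained sketch in Rel (hypothetical names, finite enumerable types only): every object is its own dual, cup and cap are just the equality relation bent around, and the snake identity holds by a direct check.

```haskell
-- A sketch of compact closed structure in Rel (hypothetical names).
type Rel a b = a -> b -> Bool

idR :: Eq a => Rel a a
idR = (==)

compR :: (Enum b, Bounded b) => Rel a b -> Rel b c -> Rel a c
compR f g a c = any (\b -> f a b && g b c) [minBound .. maxBound]

tensorR :: Rel a b -> Rel c d -> Rel (a, c) (b, d)
tensorR f g (a, c) (b, d) = f a b && g c d

-- In Rel every object is its own dual; cup and cap are equality.
cup :: Eq a => Rel () (a, a)   -- "unit"   eta : I -> A (x) A*
cup () (x, y) = x == y

cap :: Eq a => Rel (a, a) ()   -- "counit" eps : A* (x) A -> I
cap (x, y) () = x == y

-- Snake equation, with unitors/associators inlined by hand: the
-- composite (id (x) cap) . (cup (x) id) relates a to c iff a == c,
-- i.e. snake coincides with idR, as compact closure demands.
snake :: (Eq a, Enum a, Bounded a) => Rel a a
snake a c = any (\(x, y, z) -> cup () (x, y) && idR a z
                            && idR x c && cap (y, z) ())
                [ (x, y, z) | x <- us, y <- us, z <- us ]
  where us = [minBound .. maxBound]
```

None of the orientation data of the drawn diagram survives in this representation, which is exactly the freedom being discussed.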
I spent a long time thinking about this idea during the first couple of years of my PhD -- long story short, I ended up “on the opposite side”, and now think that it is not a good idea. (This is briefly recounted in the preface of my thesis if you're interested.)
For me the fundamental insight (but it came after a long time tinkering, so it may be one of those monad/burrito cases where “the metaphor helps you only after you've figured it out yourself”) was that “diagrams are like maps, not like territories”. If your “territory” has a certain symmetry, wanting your “maps” to have that symmetry will usually make them worse, not better. Like cardinal directions may have no bearing on the territory but you sure want “up” to be “north” in your map. Or in a more mathematical example, “all vectors are the same” in a vector space -- there's no preferred directions -- but you certainly don't want coordinate systems to have the same property.
This is not to say it can't be done, just that I don't think it brings any advantage, and it gives you a less flexible tool (for example, you may not be able to relate compact closed stuff to non-compact closed stuff)...
I'm much less of an expert but came to roughly the same conclusion: that structured monoidal categories (symmetric, traced, compact closed, hypergraph, ...) are basically a hack that lets you work in a whole bunch of different useful operads in a uniform and more technically tractable way
Jules Hedges said:
A compact closed category induces an operad that allows you to do this, if I understand it correctly. Morphisms in a category are oriented from a "past" to a "future", but general operads give you much more freedom, including relaxing it to a single "outside" boundary. David Spivak implicitly does this quite a lot in his papers, drawing diagrams without a strict division into left and right boundaries.
On reflection this doesn't sound right. I think what I meant to say is that the theory of compact closed categories induces an operad, and an individual compact closed category induces an algebra of that operad
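A hedged way to make that last formulation precise, roughly in the style of Spivak's wiring-diagram papers (a sketch, not a citation of a specific theorem): take the operad $\mathcal{O}$ whose $n$-ary operations

$$\mathcal{O}(X_1, \ldots, X_n; Y)$$

are the ways of wiring the ports of inner boundaries $X_1, \ldots, X_n$ to each other and to a single outer boundary $Y$. A particular compact closed category $\mathcal{C}$ then yields an algebra of $\mathcal{O}$ by sending an interface $X = (A_1, \ldots, A_k)$ to the hom-set $\mathcal{C}(I, A_1 \otimes \cdots \otimes A_k)$ (everything bent around to have domain $I$, as in the abstract index discussion above), with the wiring implemented by cups, caps and symmetries.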