You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
so string diagrams and such like are found everywhere within parts of category theory now (especially in e.g. applied category theory, quantum physics, etc.), and lots of people have written many times about all the advantages that come with using such a graphical language. i've always found the diagrams a bit confusing personally (and i'm not asking for people to try to "convert" me! :wink: ), but i was wondering if there are any aspects of string diagrams that are somehow negatives, be it in specific examples or more generally
i.e. if we write up a "pros and cons" list of using graphical notation over "classical" notation, is the "cons" column entirely empty? i've never seen anything written in it!
Perhaps one con, though it's more of a metatheoretic issue, is that graphical calculi are more difficult to design and extend than term calculi, especially when there are multiple operations that increase the dimensionality.
Though this doesn't pose any problems for using a graphical calculus.
Perhaps: "Graphical proofs are harder to LaTeX." :wink:
The big negative: "graphical proofs were hard before LaTeX".
And so, Frege's Begriffsschrift and Peirce's existential graphs and Feynman diagrams and Penrose's tensor diagrams did not appear in publications as much as they should have... so only in the late 1980s did people start realizing the unity of physics, topology, logic and computation.
Even now there's still work left to understand how circuit diagrams, chemical reaction networks, Petri nets and other kinds of string diagrams are connected! Each field of engineering and science seems to invent its own version of string diagrams, and because these diagrams aren't taught in a typical undergraduate math course each field thinks these diagrams are its own personal possession!
John Baez said:
Even now there's still work left to understand how circuit diagrams, chemical reaction networks, Petri nets and other kinds of string diagrams are connected! Each field of engineering and science seems to invent its own version of string diagrams, and because these diagrams aren't taught in a typical undergraduate math course each field thinks these diagrams are its own personal possession!
I think this is one of the sources of why I struggle with string diagrams: I can never remember what specific moves are allowed in any given context. They're so "physical" looking that I always get tricked into thinking that you can literally just treat them like strings, and untangle them however you want, but obviously different contexts mean different constraints on the types of moves allowed
I think you're worrying about it too much. The point is that you can treat them like strings and do what you want. Of course there are lots of special variants, but in practice you just need to remember whether you are dealing with diagrams in 3 dimensions, where you can't pass one string through another, or in 4 dimensions, where you can (actually you're passing one string around another in 4 dimensions).
In other words: you need to know whether you're dealing with a braided monoidal category or a symmetric monoidal category. In all the examples listed above they're symmetric monoidal categories.
It's really not much harder than remembering whether you're working with a commutative ring (where you can do ab = ba) or a noncommutative ring (where you can't).
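For reference, the distinction being described can be written as one equation (notation mine, a standard presentation rather than anything quoted above): in a braided monoidal category there is a braiding, but only in the symmetric case is it required to be its own inverse.

```latex
% Braiding \sigma_{A,B} : A \otimes B \to B \otimes A exists in both cases;
% the symmetric case additionally demands
\[
  \sigma_{B,A} \circ \sigma_{A,B} \;=\; \mathrm{id}_{A \otimes B}.
\]
% Diagrammatically: in 3 dimensions over- and under-crossings are different,
% while in 4 dimensions they can be deformed into each other.
```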
So I suggest just relaxing. In math, if you screw up, you rarely suffer very much.
What's really hard is knowing when you can change lanes in traffic.
Typesetting diagrams is still much harder than typesetting equations, although much easier than it used to be
Another disadvantage: computer representation of diagrams is a massive can of worms (aka a very interesting research topic), whereas computer representation of terms is totally obvious
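A minimal sketch of this point, using hypothetical toy types rather than any existing library: a term over an algebraic signature is just a tree, so its representation is essentially forced, whereas even a naive encoding of a string diagram already involves choices about boxes, ports and wiring.

```haskell
-- Terms over an algebraic signature: the "obvious" representation,
-- an ordinary recursive tree type.
data Term = Var String | Op String [Term]
  deriving (Show)

-- One naive encoding of a string diagram over a monoidal signature:
-- boxes with numbered input/output ports, plus wires between ports.
-- Even this simple choice raises questions: how to quotient by isotopy,
-- how to represent the boundary, how to compose two diagrams, and so on.
data Box = Box { boxName :: String, inputs :: Int, outputs :: Int }
  deriving (Show)

data Diagram = Diagram
  { boxes :: [Box]
  , wires :: [((Int, Int), (Int, Int))]  -- (source box, out port) to (target box, in port)
  } deriving (Show)
```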
Since we live in the real world and not an ideal world, the fact that some people don't like diagrams is a disadvantage of them
Jules Hedges said:
Typesetting diagrams is still much harder than typesetting equations, although much easier than it used to be
This isn't always true. Lots of book-keeping that is often very annoying to do algebraically, or with commutative diagrams, can be done with string diagrams.
And typesetting giant commutative diagrams is not an easy thing to do
Substitution in diagrams is much more subtle than substitution in terms
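To make the contrast concrete (again a hypothetical sketch; the toy term type is repeated so the snippet stands alone): substitution on terms is a one-line structural recursion, while the corresponding operation on diagrams means cutting a hole out of one port graph and splicing another diagram into it, respecting the wiring.

```haskell
-- Toy term type, repeated here so this snippet is self-contained.
data Term = Var String | Op String [Term]
  deriving (Show)

-- Substituting a term for a variable is a one-line structural recursion:
subst :: String -> Term -> Term -> Term
subst x t (Var y)   = if x == y then t else Var y
subst x t (Op f ts) = Op f (map (subst x t) ts)

-- Example: substituting g(z) for x in f(x, x) silently duplicates the
-- argument; a string diagram would have to show an explicit copy node.
example :: Term
example = subst "x" (Op "g" [Var "z"]) (Op "f" [Var "x", Var "x"])
```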
Jules Hedges said:
Since we live in the real world and not an ideal world, the fact that some people don't like diagrams is a disadvantage of them
Yes, that is a disadvantage of those people.
I would say that in many cases the choice to use diagrams is either "right" or "wrong" depending on what you're doing. For example, if you're doing calculations in a Lawvere theory then terms are "right" and diagrams are "wrong" (in the sense of being more complicated than necessary for the task), whereas if you're working in a monoidal theory then terms are "wrong" and diagrams are "right"
Cole Comfort said:
And typesetting giant commutative diagrams is not an easy thing to do
There are editors that make typesetting giant commutative diagrams easy :)
Nathanael Arkor said:
Cole Comfort said:
And typesetting giant commutative diagrams is not an easy thing to do
There are editors that make typesetting giant commutative diagrams easy :)
Even so, you sometimes have to massage the diagram before it is in a typesettable form. But with string diagrams you can just use isotopy without having to line everything up nicely.
That's an advantage I hadn't appreciated before.
Jules Hedges said:
I would say that in many cases the choice to use diagram is either "right" or "wrong" depending on what you're doing.
Yes, very much so. Often the best use of diagrams is to make an idea vivid. For example this picture stands for a certain integral over 12 real variables:
[Feynman diagram: quark-antiquark production with gluon radiation, produced by electron-positron annihilation]
and before Feynman invented his diagrams it was harder to keep in mind what the integral meant.
But when you do the integral you do it the usual way (these days with a computer if it's more complicated than this one).
Like Jules said, some calculations work well with diagrams, and some don't.
I think the most important use of really big diagrams is in electrical circuits:
Here the diagram is an actual functioning device!
It doesn't matter that these diagrams are too complicated to easily understand; people design them with the help of computers which are made of similar diagrams!
I would add that many of the uses of string diagrams in applied category theory or categorical quantum mechanics are done mathematical-physics-style, leaving things somewhat semi-formal (which is not surprising, considering that both John and Bob Coecke, two of the biggest early sponsors, are mathematical physicists). So someone who is already well acquainted with the underlying theory “knows” what's admissible and how things can be made precise, but it may be confusing for those who are more formally inclined; that seems to be Tim's case...
Some specific things that are repeated over and over in these “semi-formal” presentations are just plain wrong in the most obvious interpretation. I've been confused for a long time by some of them, and I keep seeing people confused by those same things when they approach string diagrams.
Probably also just a consequence of being a young field (on its formal side; informal diagrammatics have of course been around for ages) without any formal textbook-style references.
I don't want anything I say to be "just plain wrong" - so if there's something wrong, let me know.
The main reason I go for a semi-formal treatment is that the fully precise treatment, e.g. in Joyal and Street, is quite long, and it already exists - and almost nobody could understand the point of such a long complicated business if they didn't first see an informal treatment.
I'm sorry if I wasn't clear, but mine was not at all a criticism, I completely agree with that approach.
“Just plain wrong” is perhaps a bit harsh -- what I mean is that they have a literal interpretation which is wrong or nonsensical, and a non-literal interpretation which is useful but not immediate.
One example is the statement that “Associativity of composition becomes obvious/implicit in string diagrams”. I think that's as meaningful as saying “If we have a monoid, we can make associativity obvious by writing a string of elements of the monoid like this: m_1 m_2 ... m_n”
If anything that's saying that associativity is obvious in the string-of-characters representation of the elements of the free monoid on the underlying set. It has nothing at all to do with associativity in the monoid.
I think if you say "implicit", the statement is correct. If you say "obvious", it's incorrect.
Associativity is implicit in expressions like m_1 m_2 ... m_n.
In other words, expressions like this are only well-defined thanks to associativity.
Similarly various laws in a monoidal category are implicit in string diagram notation.
I think the correct statement is that “Because of associativity, every diagram of this form has a unique composite”.
The expression m_1 m_2 ... m_n makes sense anyway! It's a string of elements of a set.
Well, it's also a bunch of letters with little numbers underneath.
I don't know if I'm making myself clear. Saying that “By drawing a diagram like this, we make some equation implicit”, it makes it sound like: “Just because we can draw a diagram like this, it makes the equation true.”
And then people draw some other nonsensical diagrams and just because they can, they think it has a meaning.
I've seen it...
Okay; here in the US we have lots of people who think a pizza restaurant was where a bunch of Democratic politicians were secretly running a sex slave ring: there's really no limit to human stupidity.
To intelligent people, "X is implicit in Y" means that "if X were not true, it would not be reasonable to do Y".
Associativity is implicit in a notation for multiplication where we leave out parentheses, because in a nonassociative algebra the product of 3 things is undefined unless you include parentheses.
It would not be reasonable to leave out parentheses in a nonassociative algebra.
That's all "implicit" means.
Amar Hadzihasanovic said:
I don't know if I'm making myself clear. Saying that “By drawing a diagram like this, we make some equation implicit”, it makes it sound like: “Just because we can draw a diagram like this, it makes the equation true.”
Thinking about this is making my brain hurt
John Baez said:
It would not be reasonable to leave out parentheses in a nonassociative algebra.
Yes, I agree with everything you are saying. But that falls into what I was saying before: “if you are already acquainted with the theory, you know what makes sense and what doesn't”.
I think what's happening is confusion between a representation and the representation you use for talking about the representation. For example: let M be the monoid of strings with concatenation. The string ab·c has 3 symbols in it, even though the expression I just wrote it with uses 4 symbols. Similarly (a·b)·c = a·(b·c), even though I just wrote them differently to each other, because I put the bracket symbols in different places. [brain melts]
Saying "implicit" means you can go down a level in the representation
I think John gave a good definition of what implicit means, namely “whatever holds because otherwise I would be stupid to use this notation” ;)
Yes, but it's best if you actually list the laws that are implicit in the given notation. There is also usually a theorem involved.
For example, it's a nontrivial theorem that the associative law (xy)z = x(yz) implies the equality of all well-formed parenthesized products of an ordered list of n elements! Everyone should prove this sometime.
This theorem implies that you're allowed to leave out parentheses in a product of an ordered list of elements when the product is associative.
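For concreteness, here is the statement in question, written out in my own notation rather than quoted from anywhere above:

```latex
% Generalized associativity: if (xy)z = x(yz) holds for all x, y, z, then for
% every n, all well-formed parenthesizations of x_1 x_2 ... x_n are equal;
% in particular they all equal the left-normed product
\[
  (\cdots((x_1 x_2) x_3) \cdots)\, x_n .
\]
% The usual proof is by induction on n, applying the associative law to show
% that every parenthesized product equals the left-normed one.
```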
And there's definitely a grey area in what is implicit. Is the Frobenius law / spider fusion implicit in string diagrams? Is the bialgebra law implicit?
Well, now you're making Tim Hosgood feel he's right to be scared of string diagrams, which I really don't want.
:ghost:
I don't want him to feel scared, and I don't want him to feel he's right. :upside_down:
but yes, i think this is exactly what i struggle with: knowing what’s obvious and true vs what’s “obvious” and false
Sounds like math in general to me.
I don't think there's reason to be scared. But I can relate to some of the struggle, as I think different people are comfortable with different “styles”.
If I had a permanent job and enough time, I would write the book on “string diagrams for mathematicians who want to know exactly what's going on”...
I used to want to write a book on string diagrams, and part of it would probably explain them more carefully than I've ever had time to do, but most of it would use them to do a lot of interesting math, like group representation theory.
Unfortunately there are so many introductions to string diagrams now that it's becoming less exciting to write such a book.
John Baez said:
I used to want to write a book on string diagrams, and part of it would probably explain them more carefully than I've ever had time to do, but most of it would use them to do a lot of interesting math, like group representation theory.
Would it be more like string diagrams for the sake of string diagrams, or string diagrams for the working mathematician?
Definitely not string diagrams for the sake of string diagrams!
The point of them, as far as I'm concerned, is to do interesting things.
Yeah, I think there's quite some competition on the front of “string diagrams for exciting stuff”.
That's why I'd like to write the boring technical reference book, instead.
I would like to write about the representation theory of all the simple Lie algebras using string diagrams. Cvitanovic's free book has a lot of amazing material, but it doesn't use enough category theory for me, and it doesn't quite prove theorems:
Amar Hadzihasanovic said:
That's why I'd like to write the boring technical reference book, instead.
Like expanding Peter's survey of graphical calculi into a book?
I think at this point it is already kind of outdated
I wouldn't say it's "outdated". I'm sure it's incomplete, but I don't think the facts have passed their sell-by date.
It'll be outdated when someone writes something better in that direction.
Also just about different formalisations of what diagrams are, topological, combinatorial etc. And of course I've been focussed on “diagrams in higher dimensions” so that too.
Once you get into higher-dimensional diagrams the subject explodes in complexity.... now we're talking a series of volumes.
Anyway, I hope you write something like this someday, Amar.
Thank you, John. If I do it will mean that I have some job stability, so I'm looking forward to it.
John Baez said:
I wouldn't say it's "outdated". I'm sure it's incomplete, but I don't think the facts have passed their sell-by date.
Yeah, that is what I meant to say. It's still the canonical reference.
Imho, the only valid piece of criticism about string diagrams in this conversation is Jules': string diagrams are computationally harder than terms. All the other forms of criticism more or less depend on thinking about diagrams as another way to represent terms or morphisms in a category, but this doesn't necessarily have to be the case. I could make up a diagrammatic syntax, with diagrammatic rules, and neither mention nor ask how this is related to categories or to term syntax in any way. It would still be an acceptable kind of calculus with its own dignity.
The worst that could happen is that my calculus could be trivial, with everything equal to everything else. This is not worse than writing an inconsistent theory in logic.
"0 = 1 is implicit in your notation". :upside_down:
What I am trying to say is that string diagrams may come with all kinds of disadvantages if one thinks about them as an alternative way to represent stuff we'd usually represent with formulae or so
But this doesn't necessarily have to be the way to think about them. They are a reasoning device that isn't necessarily subordinated to other reasoning devices. I think we still have to get totally rid of the idea that "if you cannot represent mathematics using formulae made of terms, then you are not doing mathematics." This is what kept people skeptical of string diagrams in the first place
I once saw a professor do a long calculation where an integral sign slowly morphed into a letter. So if you're going to be suspicious of string diagrams - which you should be, a little - you have to also be suspicious of proofs using terms written out in strings of symbols in the usual way.
Symbols and our way of using them were developed to be effectively "discrete", making them more robust for reasoning than, say, some kinds of geometrical diagrams. You're not supposed to be able to change the meaning of an expression by writing it in a slightly different way (though my professor did).
People get nervous when they first encounter string diagrams because they don't look "discrete" in the same way. But what really matters about string diagrams is their equivalence classes under some relations like isotopy, and those equivalence classes are discrete.
In modern lingo, they are "topologically protected".
John Baez said:
In modern lingo, they are "topologically protected".
Oo, I never heard that term before
Course you can also present the free monoid on a set using string diagrams: you only get one string, nothing can go in parallel. Just nobody does it that way, because the isotopy classes can be described in a simpler way
Jules Hedges said:
John Baez said:
In modern lingo, they are "topologically protected".
Oo, I never heard that term before
People use it in condensed matter physics to describe properties of a system that are robust against small perturbations because they are topological. Topological quantum computing relies on this idea... or it will if it ever works.
One thing that gives me pause about graphical notations is that it seems harder to build abstractions in the notation, aside from liberal uses of the dotdotdot notation which I generally dislike. (Although I am known to resort to it out of convenience ...)
When I’m working in term languages, I’m pretty clear on how I can abstract out patterns to capture their most general form when I need to (thus to make it widely applicable) and how I can break down complex inputs, but it seems with graphical notations it’s harder to make diagrams that work over collections of objects, without relying on the reader to infer what is meant.
Maybe this is a solvable problem, though, and I just haven’t seen those solutions. Or possibly diagrams are just generally better for working with concrete shapes.
This might be one of the examples I was thinking of where graphical linear algebra starts reaching its limits of abstraction: https://graphicallinearalgebra.net/2017/09/12/eigenstuff-diagrammatically/
There is a formal theory of "!-boxes" to deal with "..." in diagrams. I think it was worked out because it was needed to formalise "..." in software, namely Quantomatic
Jules Hedges said:
And there's definitely a grey area in what is implicit. Is the Frobenius law / spider fusion implicit in string diagrams? Is the bialgebra law implicit?
I love using each of those things in string diagrams, but I would not consider them implicit to string diagrams qua string diagrams. That is, you can't perform any of those things with physical beads on physical strings in 4D. I would consider these rules as add-ons that would need to be commonly understood, either by familiarity with the particular flavor of string diagram or by explicit statements, before calculations are performed.
The gestalt I get from this thread is that graphical and term calculi are related, but different tools. Using a screwdriver as a hammer or vice versa might work clumsily. But if you have both tools available and know how to use them as intended, using the appropriate tool for the job will work much more gracefully.
I would argue that the Frobenius law is implicit, because it's a topological move. The special law is also a "topological" move where you create or pop a bubble
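For readers who haven't seen them written equationally, these are the two laws under discussion, in one common presentation (notation mine: multiplication m and comultiplication d on the same object A):

```latex
% Frobenius law:
\[
  (m \otimes \mathrm{id}_A)\circ(\mathrm{id}_A \otimes d)
  \;=\; d \circ m \;=\;
  (\mathrm{id}_A \otimes m)\circ(d \otimes \mathrm{id}_A).
\]
% Special ("bubble-popping") law:
\[
  m \circ d \;=\; \mathrm{id}_A .
\]
```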
The Frobenius law is not an ambient isotopy of diagrams rel boundary. It's a homotopy. So it's a more dramatic sort of move than, say, moving one portion of a diagram up or down relative to another.
And that makes sense, because "ambient isotopy rel boundary" is the sort of change to a diagram that doesn't change a morphism in a compact closed category - but not every object in a compact closed category is a Frobenius monoid!
So yeah: to really understand this stuff you need to know which kinds of ways of changing a string diagram are permitted for which kinds of objects in which kinds of categories.
I would suggest that people start by learning the rules for monoidal, braided monoidal and symmetric monoidal categories, and also for these 3 kinds of categories "with duals for objects". That's what Mike and I sketched in the Rosetta stone paper.
(A symmetric monoidal category with duals for objects is called a "compact closed category".)
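As a pointer to what "duals for objects" adds, here are the standard zigzag (snake) identities, suppressing unit isomorphisms; these are exactly the "straighten out a bent string" moves:

```latex
% Each object A has a dual A^* with unit \eta : I \to A \otimes A^*
% and counit \varepsilon : A^* \otimes A \to I satisfying
\[
  (\mathrm{id}_A \otimes \varepsilon)\circ(\eta \otimes \mathrm{id}_A) = \mathrm{id}_A,
  \qquad
  (\varepsilon \otimes \mathrm{id}_{A^*})\circ(\mathrm{id}_{A^*} \otimes \eta) = \mathrm{id}_{A^*}.
\]
```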
Then if you're not happy yet you can go to Selinger's paper and learn other examples. But he probably doesn't cover hypergraph categories, which is the world Jules seems to be living in. (In there, every object is a commutative special Frobenius monoid.)
I would also argue against the special law being implicit to string diagrams, per se. There are graphical calculi where having a bubble is different from not having a bubble. Similarly for the "extra" law of adding or removing extra strands of string that aren't connected to anything.
Those laws are not ambient isotopies rel boundary. In less technical terms: you're really changing the topology of the string diagram when you apply those laws!
("Rel boundary" means you don't get to move the very top and the very bottom of your string diagrams.)
I guess if "every wire has a (non-special) Frobenius algebra structure" you get... ambient homotopy equivalences rel boundary?
Something like that, but 1) I've never actually seen a theorem about this, and 2) I've never heard of "ambient homotopy equivalence" so I don't know what it means.
Let me explain ambient isotopy. "Ambient" refers to the space surrounding the diagram in question:
Suppose X and Y are smooth submanifolds of a smooth manifold M. Then we say X and Y are ambient isotopic if there is a smooth map F: [0,1] × M → M such that F(0,–) is the identity on M, each map F(t,–): M → M is a diffeomorphism, and F(1,–) carries X onto Y.
So, in intuitive terms, we're bending and twisting the whole space M (the "ambient" space) in such a way that we carry X to Y.
Now, a string diagram is not always a smooth submanifold, and it sits in a space M that has a boundary, and even corners, like M = [0,1] × [0,1]. But the concept of ambient isotopy can be generalized to cover this case, and we say "rel boundary" (short for "relative to the boundary") to require that F(t,–) is the identity on some open neighborhood of the boundary of M.
Actually I think if we want to handle string diagrams where all objects have a commutative Frobenius monoid structure we should do this. Instead of treating the strings in the string diagram as 1-dimensional curves, locally like (0,1), we should "thicken" them so they are locally like (0,1) × D, where D is the unit disc.
Then the Frobenius identities become a special case of ambient isotopy!
In fact now I remember my student Aaron Lauda has written a bit about this. He gave a talk "Frobenius algebras and thick tangles", which is mentioned on his webpage... but the talk slides are missing. This is based on his paper Frobenius algebras and planar open string topological field theories. That paper has a lot about the category theory of thick tangles, but it relies on other people's work about the topology.