You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Often, when undergraduate and graduate students in CS are assigned their first project involving string diagrams, or simply express curiosity about the subject, they find themselves wondering where to start. Together with @Fabio Zanasi, I wrote this introduction to provide a starting point (it grew out of notes designed to onboard students in our own research). The idea is to offer a glimpse into the state of research with string diagrams in 2023.
It is a first draft---we eagerly welcome comments and feedback to improve its content!
https://arxiv.org/abs/2305.08768
This is sure exciting! I'll give it a read at some point! In your opinion, for a CS student, what would be the expected/recommended background for understanding string diagrams (perhaps in terms of prior classes, exposure to ideas, etc.)? @Robin Piedeleu and @Fabio Zanasi ?
It really depends on the perspective under which string diagrams are introduced. The way we do it in this paper, it is useful to be familiar with how the syntax of a programming language is presented via BNF/grammars. Knowing what the 'operational semantics' and 'denotational semantics' of a PL are also helps. Finally, some exposure to the basic algebra of datatypes is useful (this may come from Haskell, for instance, but also from a basic course on groups/monoids, etc.).
I would say though that, at least for the first few pages, we try not to assume anything without at least reminding the reader first.
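To make the 'syntax via BNF' and 'denotational semantics' prerequisites concrete (an illustrative sketch, not taken from the paper; all names are made up): a BNF grammar corresponds directly to an inductive datatype, and a denotational semantics is a function defined by structural recursion over it. In Haskell:

```haskell
-- A toy grammar in BNF:   e ::= n | e + e | e * e
-- The same grammar rendered as an inductive (algebraic) datatype:
data Expr
  = Lit Int        -- n
  | Add Expr Expr  -- e + e
  | Mul Expr Expr  -- e * e

-- A denotational semantics: each expression denotes an integer,
-- defined by structural recursion on the syntax.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add x y) = eval x + eval y
eval (Mul x y) = eval x * eval y

-- Example: eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))) == 7
```

Roughly, the 'algebra of datatypes' point is that Expr is freely generated by its constructors; string-diagrammatic syntax is generated in a similarly free fashion from a signature.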
This looks great! I'll advertise it on Mathstodon.
An incredibly tiny comment: where you say "certain areas of computer science, such as quantum theory" it might be better to say "such as quantum computation" or something like that, because the current phrasing suggests that quantum theory is (merely?) an area of computer science, which will shock some physicists. I know that's not what you meant, but people have an amazing ability to misread things.
John Baez said:
An incredibly tiny comment: where you say "certain areas of computer science, such as quantum theory" it might be better to say "such as quantum computation" or something like that, because the current phrasing suggests that quantum theory is (merely?) an area of computer science, which will shock some physicists. I know that's not what you meant, but people have an amazing ability to misread things.
Fair point! We're going to release v2 soon and will amend this kind of thing.
"such as those involving quantum theory" would work.
Another equally tiny, nitpicky comment:
Having products is a property of a category. A monoidal product, on the other hand, is a structure over a category.
This is a great point to make. I think the correct phrasing is "structure on a category".
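To make the property-versus-structure distinction concrete (a standard illustration, not something from the paper): in $\mathbf{Set}$, any two products of $A$ and $B$ are canonically isomorphic, so 'having finite products' is a property. A monoidal product, by contrast, is a choice of data $(\otimes, I, \alpha, \lambda, \rho)$ subject to coherence axioms, and the same category can carry several different such choices, e.g.

$$ (\mathbf{Set}, \times, 1) \qquad \text{and} \qquad (\mathbf{Set}, \sqcup, \emptyset). $$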
An incredibly tiny comment: where you say "certain areas of computer science, such as quantum theory" it might be better to say "such as quantum computation" or something like that, because the current phrasing suggests that quantum theory is (merely?) an area of computer science, which will shock some physicists. I know that's not what you meant, but people have an amazing ability to misread things.
I have some memory of a (famous?) computer scientist remarking during a talk that mathematics is a branch of computer science
I'll try to dig up the story...
A couple of things I found missing while skimming through.
I don't know what examples of proofs using diagrammatic reasoning are in this paper, but if the hardest example is something like "left unitality + commutativity ⇒ right unitality", some people will wonder what's the point. After all, we usually just say $x \cdot e = e \cdot x = x$ and be done with it. Yes, you're doing it in greater generality. But there are some proofs where string diagrams really make things a lot easier.
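Spelled out with the axiom used at each step (a generic equational rendering, not necessarily the notation used in the paper):

$$ x \cdot e = e \cdot x \quad\text{(commutativity)}, \qquad e \cdot x = x \quad\text{(left unitality)}. $$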
Another thing you might add is a few examples of tools in computer science that are (secretly, or not so secretly) using string diagrams. When I advertised this paper, Jesus Lopez pointed out the example of UML (Unified Modeling Language, a popular diagram-based modeling environment).
Todd Schmid said:
I have some memory of a (famous?) computer scientist remarking during a talk that mathematics is a branch of computer science
Robert Harper?
https://twitter.com/lindsey/status/161997443616083968
But have not been able to find what talk that was.
heh, imo "a proof is when you program paper" is not a bad way to define constructive proofs to programmers, especially because it can be made rigorous via Curry-Howard etc
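For instance (a standard Curry-Howard illustration, not something from the paper; the function names are hypothetical): a constructive proof is a program whose type is the proposition it proves. In Haskell:

```haskell
-- Propositions as types, proofs as programs:
-- the proposition  A ∧ B ⇒ B ∧ A  becomes the type  (a, b) -> (b, a),
-- and a constructive proof is a total function inhabiting that type.
swapProof :: (a, b) -> (b, a)
swapProof (x, y) = (y, x)

-- Modus ponens, (A ⇒ B) ∧ A ⇒ B, is just function application:
modusPonens :: (a -> b, a) -> b
modusPonens (f, x) = f x
```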
I advertised this paper on Mastodon, and @David Michael Roberts made a good point. The first main page of the paper throws some expressions in Backus-Naur form at the reader. A lot of computer scientists are familiar with BNF, but 1) a lot of people, including most mathematicians, are not, and 2) as far as I can tell the rest of the paper doesn't use BNF. So this seems like a bit of unnecessary "gatekeeping", falsely making readers feel that if they don't understand this stuff they won't be able to read the rest of the paper.
Thanks for reporting this point. A couple of counter-points:
1) Overall I tend to agree with the replies that say that our intention, already manifested in the title, is to make an introduction primarily aimed at computer scientists. There are already quite a few entry points for mathematicians and physicists, and we were interested in the idea of proposing a different perspective.
2) Having said that, I don't find the incipit with the grammars so discouraging, even for people with a different background. It's a simple inductive definition in disguise. Importantly, we don't really assume it is understood: we explain in words what that expression means and what its purpose is.
3) Even though it's not used later, I believe it's important to draw a connection between BNF and string diagrams, because it highlights the importance of syntactic specifications. To a CS student, seeing it as a BNF will look more familiar than seeing it via, say, signatures and algebras, which is perhaps what would look more familiar to a mathematician.
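As a concrete rendering of point 3 (a generic illustration, not the paper's exact notation): the natural numbers as a BNF grammar, as terms over a signature, and as the inductive datatype a CS student would write are all the same definition in different clothes.

```haskell
-- BNF view:        n ::= zero | succ n
-- Signature view:  Σ = { zero (arity 0), succ (arity 1) }, with terms built inductively
-- Datatype view:
data Nat = Zero | Succ Nat

-- Structural recursion/induction then comes for free, e.g. addition:
add :: Nat -> Nat -> Nat
add Zero     n = n
add (Succ m) n = Succ (add m n)
```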
Any CS student who doesn't understand BNF has either just started or has been very ill-served by their university. Given the clearly expository nature of the paper, and the tendency of CT (as a subject and a community) towards obscurity and jargon, there is a glass house or black pot adjacent to any serious claim of gatekeeping that almost anyone other than an expositor like @John Baez could possibly make.
If BNF were used in the paper it would be fine to put it on the first page like the sign on Plato's Academy saying "let no one ignorant of geometry enter here". But I didn't see it used anywhere else.
I don't see that. We are starting from something very basic, and we also explain it. It's not like we begin with geometry of interaction just to be fancy. And I re-iterate that if one reads the first page in its entirety, it should be understandable even to someone who has not encountered grammars before. Especially if you compare (1) with the example of natural numbers given immediately below (I hope induction on natural numbers is considered universal enough, at least). I find any claim of gatekeeping unwarranted here.
My big mistake was making my second comment here: it added nothing new. Note to self: suggesting changes to a paper is probably best done privately - and if the author doesn't want to make those changes, then let it drop!
That syntax (in the form of BNF etc.) isn't usually visible in mathematics is an old tragedy: Gödel's critique of Principia Mathematica was that its syntax was a step back from Frege's (which, incidentally, was also partly diagrammatic). Even books like Enderton's classic on first-order logic define the syntax of FOL in a way that a computer scientist would find unusual nowadays. This phenomenon is especially surprising to me because so many early results in group theory were about syntax, and its invariance under substitutions of various kinds, etc. Anyway, here's Gödel:
"It is to be regretted that this first comprehensive and thorough-going presentation of a mathematical logic and the derivation of mathematics from it [is] so greatly lacking in formal precision in the foundations (contained in 1-21 of Principia) that it represents in this respect a considerable step backwards as compared with Frege. What is missing, above all, is a precise statement of the syntax of the formalism. Syntactical considerations are omitted even in cases where they are necessary for the cogency of the proofs".
Math should be more syntactic and will be more syntactic, whether people like it or not. People based math on abstract principles such as: existence, equality, true and false. But computers don't understand anything of that, and so we are now obligated to find saner foundations. At the same time, super abstract math also converges towards more syntactic stuff based on type theory, because it happens that this is what we need to prove difficult things. I can easily imagine that in 50 years, proofs of big theorems will consist of a sequence of 5000 rewriting rules relating two terms of the same type in some syntax. Well, I don't know, maybe not exactly; I want to say things on this subject, but every time I write something, I erase it because I think "Hmm no, it's not exactly that, in fact I don't understand anything about these matters."
Jean-Baptiste Vienney said:
Math should be more syntactic and will be more syntactic, whether people like it or not. People based math on abstract principles such as: existence, equality, true and false. But computers don't understand anything of that, and so we are now obligated to find saner foundations. At the same time, super abstract math also converges towards more syntactic stuff based on type theory, because it happens that this is what we need to prove difficult things. I can easily imagine that in 50 years, proofs of big theorems will consist of a sequence of 5000 rewriting rules relating two terms of the same type in some syntax. Well, I don't know, maybe not exactly; I want to say things on this subject, but every time I write something, I erase it because I think "Hmm no, it's not exactly that, in fact I don't understand anything about these matters."
TBH the stuff you imagine 50 years from now is my definition of 'the stuff of nightmares'.
For me it's very easy: You like 'math on paper'? Probably you are a semantics person. You like 'convincing computers that your proofs make sense'? Probably you are a syntax person. I've always considered logic to be like a language, with maths being the literature expressed in that language. Probably it is already evident from this intuitive perspective that I've always considered myself a very semantics-oriented person. Clearly I recognize the importance of syntax, even more now that we live in the age of 'nice automated theorem provers' (where by nice I mean 'the amount of tears you cry while using them has a vague upper bound'). Still, the idea of 'converging towards a more syntax-based mathematics' is repulsive to me, as it makes me feel like the human equivalent of a panda in a zoo. I'd really like to survive the singularity.
you're saying mathematicians aren't just machines for turning coffee into proofs faster than a computer can systematically enumerate all of them, Fabrizio? like a latter-day John Henry racing his hands against a steam engine? I'm joking of course
If a computer proves something and nobody is around to understand or use it, does it matter? The four-color theorem didn't change anything. Everyone knew it was true before and nobody learned much of anything from the proof except that it was so uninteresting a human could not be bothered to do it directly. If proof assistants become orders of magnitude more effective with their tactics, then great. Who really cares about the details of a 900 page proof of some theorem in general relativity versus its mere correctness and any key ideas that might be extracted? If a computer can prove something, surely it can summarize the proof as well. So we can concentrate not on proofs, but on ideas and applications. Wonderful!
If we all turn into critics, curators, or engineers instead of artists then great. If artistry isn't actually that special after all compared to the appreciation and use of it, then it will be a funny twist on the old conceit that the opposite was true.
It's a basic fact of mathematical exposition that if you don't need to use something later, or aren't going to explain what it is and why the reader needs to spend time thinking about it, then what is that thing actually contributing to the piece? If the article had added a single sentence reminding the reader what a BNF specification of a grammar does (inductively generate programs, in this case with reasoning about natural numbers, and loops or whatever), even though you assume every reader is familiar with it, I would not have blinked. It doesn't hurt adding reasonably basic stuff to an expository paper (to a point), in the same way that a colloquium goes over points most people in the audience are completely familiar with.
Remember that it is supposed to be a paper for computer scientists... I don't think that BNF is usually introduced by CS guys as a sequence: 1) blablabla, it does that; 2) definition; 3) examples. CS people don't seem to be offended when you introduce BNF only by showing one or two examples of syntax defined in the BNF way, without the canonical sequence above. They are not math people and it's ok for them to write things like this, as far as I understand. So, the criticisms here, for me, are just the reaction of math guys who want CS guys to write CS stuff like math guys, and CS guys don't really care about this speech because they just want math guys to let them do their stuff the way they are used to doing it. That's my understanding of what's going on here in this debate, from a "sociological perspective", as a person who likes CS and math at the same time. I think that it is important to take into account the background of each person to understand this interaction.
Because this is not really a debate about something, but more the result of putting different people together, introducing a subject they deal with differently, and seeing what happens.
I find it really funny.
Ah well. You know your audience and target. I was just thinking the audience could be even broader, with a very minor tweak.
David Michael Roberts said:
If the article had added a single sentence reminding the reader what a BNF specification of a grammar does (inductively generate programs, in this case with reasoning about natural numbers, and loops or whatever), even though you assume every reader is familiar with it, I would not have blinked.
You may find it's not exactly what you wanted, but I think we do have such a sentence, immediately below the equation:
Screenshot-2023-05-22-at-15.47.23.png
By the way, we are publishing v2 of the introduction (it should appear on the arXiv soon), with more examples of diagrammatic reasoning, incorporating other suggestions we received. We also tweaked this initial part, in the hope of making it even clearer. However, it is not a major revision: a deep dive into grammars would distract the reader from the main point we are trying to make there.
By the way, I think David and I would agree that a deep dive into grammars is not called for! In fact it's almost the opposite of what we were asking for. We'd like such a shallow dive that the reader barely sees the water. :upside_down: