Does anyone know of a graphical language to work with bimonoidal categories? Maybe something like what string diagrams are for monoidal categories
Yes! @Antonin Delpeuch and @Cole Comfort and I are working on a paper about exactly this right now
cool! do you have a rough preview you could share? I'm just interested in what the graphical language would look like, not so much in theorems and proofs
We are following http://www.cs.ox.ac.uk/people/samuel.staton/papers/popl2015.pdf and https://arxiv.org/abs/1906.05937
The rough idea is that we have a branching surface diagram that looks like a string diagram whose wires have been thickened into sheets (i.e. it's a "fat" string diagram), and then we draw string diagrams on that surface
oh, so 3d stuff, right?
I'll take a look at the papers, thanks!
In a free bimonoidal category you can put every object into a normal form (like polynomial normal form, or DNF in logic) where all the ⊗ goes to the inside and all the ⊕ goes to the outside, i.e. a sum of products. Each individual factor goes onto one sheet, and the collection of factors onto parallel sheets
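For concreteness, a small worked example of that normal form (my own example, not one taken from the paper):

$$(A \oplus B) \otimes (C \oplus D) \;\cong\; (A \otimes C) \oplus (A \otimes D) \oplus (B \otimes C) \oplus (B \otimes D)$$

so, if I'm describing the setup right, the left-hand side would be drawn after normalization as four parallel sheets, one per summand, with the factors appearing as wires on each sheet.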
Yes. There are also a couple of proposals in the literature for hacking it in 2 dimensions, but it "wants" to be 3-dimensional
A bimonoidal category is a tricategory with [bunch of restrictions]
Our paper came together way quicker than I expected (because Antonin put a bunch of work into it), now we're in the long process of making it readable, but it may even be out this year
cool, I'll monitor arXiv
The things we cite for hacking it in 2 dimensions are https://arxiv.org/abs/0903.5154 and [Roshan James and Amr Sabry, Information effects]
This year? I think it could be this month :)
You underestimate how busy I am. I can spend a few hours a week on this paper, and there's a whole bunch of stuff still to do
But the world needs string diagrams for bimonoidal categories right now! :-D
Yep, so I can see
Also if you are interested, linearly distributive categories also have 2 monoidal structures, except there is a different distributivity between them... And the morally correct graphical calculus turns out to be 3d surfaces, where the 2d proof nets are merely projections, losing some of the topological information needed to make equivalent diagrams isotopic in a certain sense.
https://arxiv.org/abs/1601.05372
The only other such type of distributivity that I know of is that of duoidal categories, so someone should find their proof surfaces as well.
Are the bimonoidal categories you're working with the same thing as the "ring categories" developed by Kelly and Laplaza, which the nLab wisely calls rig categories?
The main annoying thing about rig categories is the large number of coherence laws involving the "distributor" natural isomorphisms.
I've been working with a lot of things like rig categories lately, writing a paper with Todd Trimble and Joe Moeller on "Schur functors and symmetric functions" - basically, updating some old ideas in combinatorics and representation theory.
John Baez said:
Are the bimonoidal categories you're working with the same thing as the "ring categories" developed by Kelly and Laplaza, which the nLab wisely calls rig categories?
Yes
(Words like "rig" and "actegory" are funny but I avoid them in serious writing)
"Rig" seems no more funny than "ring". It must have been hilarious, the first ten times anyone said it, to call a mathematical structure a "ring". :ring:
I am not going to call a "rig" a "semiring", either, because this analogy is hopelessly broken:
group : semigroup :: ring : semiring
More importantly: so, in your work on rig categories, are you really dealing with that mass of coherence laws that Kelly and Laplaza came up with?
John Baez said:
"Rig" seems no more funny than "ring". It must have been hilarious, the first ten times anyone said it, to call a mathematical structure a "ring". :ring:
It was almost certainly in German. *quickly checks mathematical dictionary* So the German word for ring is Ring, and in standard German it seems to have all the same meanings as in English. So, yes
John Baez said:
More importantly: so, in your work on rig categories, are you really dealing with that mass of coherence laws that Kelly and Laplaza came up with?
We rely on the existing coherence theorem as a hammer; they figured out just how much strictness it's safe to assume. (I'm on the wrong laptop to give you the citation)
Okay, cool.
You probably know this, but I find this cool: if you assume everything about distributivity and associativity is strict you get dangerously close to commutativity, since
(A ⊕ B) ⊗ (C ⊕ D) = (A ⊗ C) ⊕ (A ⊗ D) ⊕ (B ⊗ C) ⊕ (B ⊗ D)
but
(A ⊕ B) ⊗ (C ⊕ D) = (A ⊗ C) ⊕ (B ⊗ C) ⊕ (A ⊗ D) ⊕ (B ⊗ D)
where only the terms in the middle have been commuted.
If we can subtract, we then get
(A ⊗ D) ⊕ (B ⊗ C) = (B ⊗ C) ⊕ (A ⊗ D)
and if we can set A = C = 1 and the unit law holds strictly we get
D ⊕ B = B ⊕ D
This is why in the definition of "ring" we don't need to assume addition is commutative.
But for a "rig" this argument fails: we don't get commutativity of addition for free.
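For comparison, the decategorified textbook version of the same argument, spelled out (standard ring theory, nothing beyond what's said above):

$$(1+1)(a+b) = (a+b) + (a+b) = a+b+a+b, \qquad (1+1)(a+b) = (1+1)a + (1+1)b = a+a+b+b,$$

so with additive inverses we can cancel the outer $a$ and $b$ and conclude $b+a = a+b$. In a rig there is no cancellation, so the argument stops there.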
I am one more person who desperately wants such a graphical calculus, indeed for rig categories. Amr Sabry and I continued the work you cite above that he'd done with Roshan, see Computing with Semirings and Weak Rig Groupoids at ESOP 2016. The 3 of us together have also published Embracing the Laws of Physics: Three Reversible Models of Computation where we also wanted these diagrams.
Thanks for these references, it's super useful :) We are on it!
You can see all the coherence conditions that Laplaza discovered in full (linear) glory in that first paper - and in the Agda code that accompanies the paper. We proved that the category of Types is indeed a symmetric rig groupoid. That was... a lot of work.
Yes there are a lot of coherence conditions, it's not exactly easy to work with…
Interestingly, some of the coherence conditions are 'fragile'. In particular, there are 2 proofs that (0 * 0 = 0), by eliminating on the left, or on the right, and these are supposed to be equivalent. If you pick the 'wrong' way to perform these proofs (by contradiction instead of passing on the 'other' impossibility along), that coherence does not hold!
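To illustrate the two proofs in more familiar clothing, here is a toy Haskell rendering of the idea, with Void standing in for 0; this is my own sketch, not the Agda development from the paper:

```haskell
import Data.Void (Void, absurd)

-- Two ways to collapse 0 × 0 onto 0: eliminate using the left component
-- or the right one.  Extensionally they agree, but as proof terms they are
-- built differently, and the coherence conditions care about which one you pick.
elimL :: (Void, Void) -> Void
elimL (v, _) = v

elimR :: (Void, Void) -> Void
elimR (_, v) = v

-- The other direction is forced: from 0 you can build anything.
intro :: Void -> (Void, Void)
intro = absurd
```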
A lot of the graphical calculi elide 0 and 1 from the pictures almost entirely, and this is a grave mistake, because it can lead to an unsound calculus. The 'survey' on graphical notations is quite careful about this, but not everyone is.
Our aim is to perform the Int construction in this setting. It is well-known that if you do it mindlessly, it doesn't work - in that it causes everything to collapse into being equivalent to the 1-point groupoid. Ultimately, we'd love to do it for both structures, but the Rig Completion paper already tells us we'll need to go 'up' in dimensions.
In what setting exactly do you mean to apply the construction?
To the symmetric rig groupoid of Types. Applying it to the additive structure ought to give something as close to a ring structure at the level of categories as one might hope to get. @Todd Trimble pointed us towards cobordisms as one place to look, but we couldn't get the details to work out [which says more about us than about the suggestion.]
Interesting!
Like how pregroup grammars give you left and right inverses for every type.
Yes, very much like that.
We have managed to get 'a' notion of multiplicative inverse, published in Fractional Types: Expressive and Safe Space Management for Ancilla Bits at RC 2020. Basically, an inhabitant of 1/T is a garbage collector for a specific value of T. This makes things like 'ancilla bits' used in quantum computing work properly (and gets rid of soundness problems in the implementation of certain current QPLs with respect to handling ancillas). They obey many of the nice laws of (positive) rational numbers.
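If it helps to see the "garbage collector" intuition in code, here is a very loose Haskell caricature of it (entirely my own paraphrase, not the construction or API from the paper, which lives in a reversible, dependently typed setting):

```haskell
-- An inhabitant of Frac t records which particular value of t it is
-- prepared to collect.
newtype Frac t = Frac t

-- Cancelling a value against its reciprocal only succeeds if they match;
-- in the reversible setting of the paper the mismatch case cannot arise.
cancel :: Eq t => t -> Frac t -> Maybe ()
cancel x (Frac y)
  | x == y    = Just ()
  | otherwise = Nothing
```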
The Ph.D. thesis of @Chao-Hong Chen will contain many more nice results along those lines. [He's supervised by Amr, I'm not involved other than as co-author of some of the earlier papers.]
That sounds super cool
I should read this paper because I am also interested in reversible computing. Normally I just approach dealing with ancilla bits by adding partiality... so for classical reversible computing this means working in finite sets and partial injections.
It is cool that this can be dealt with at the level of types.
I definitely recommend to everyone to start with the "Embracing the Laws" paper, as it is meant to be quite approachable, but it also deals with setting up terminology (and it has some nice results about Curry-Howard that more people ought to know...). Then this paper. Although it sounds like you (@Cole Comfort ) could dive right in.
Not just types: dependent types. Without them, things don't work. But with, it works out quite nicely.
Our paper about diagrams for bimonoidal categories is finally on the arXiv: https://arxiv.org/abs/2010.13361. Looking forward to your impressions, @marcosh and @Jacques Carette :)
Sample diagram: sheet_diagram.png
cool! thanks for the notification!
I definitely will give you my impressions (beyond the fact that it does indeed look quite cool), as soon as I've dealt with a number of urgent things on my schedule.
Antonin Delpeuch said:
Our paper about diagrams for bimonoidal categories is finally on the arXiv: https://arxiv.org/abs/2010.13361.
Cool paper and tool! A few questions and a comment to follow.
Can your results be adapted to accommodate the other styles of notation from Duncan or James & Sabry? Can you see any places where that sort of notation is either misleading or overly restrictive? I love how the 3D diagrams look (and reason), but I think the others are easier to read and write, in practice.
Once you start flattening these diagrams, they start to look like an extension of both ribbon diagrams (as in Selinger) and functorial boxes (as in Fritz & Perrone). Have you thought about possible relations?
Is there a nice graphical notation for the symmetry, or does it look just like any other map with the same boundary?
I'm a bit unsatisfied by the heavy reliance on normalization for the objects; in some sense there is no way to write down a non-normalized object such as a product of sums, much less distributivity.
I think you could build them in by allowing vertical, as well as horizontal, seams, to represent multiplication of sums. I've been thinking of these as "Ziploc diagrams". Applying distributivity is opening the bag. See the pictures below.
First the object, then the diagram from p. 9 of your paper. ziploc1.jpg ziploc2.jpg
Note that this makes the diagrams a bit more compositional; we can see the identity right in the middle of the diagram, in the same way we would ordinarily draw it. Plus, this opens up undotted sheet-crossing as the swap, rather than needing it for the identity.
That's a lot of great points!
Concerning the other notations, they are obviously practical too. I'd say one shortcoming of Duncan's notation is that it's not easy to represent a symmetry for one of the two tensors. For James & Sabry, I imagine it must be hard to represent morphisms involving the unit (since the unit is just whitespace, it would not be very clear to have this whitespace followed by a sum and something else).
Concerning the relation to ribbon diagrams and functorial boxes, I don't remember us thinking about that - it'd be fun if there could be a connection there!
For the symmetry, it's simply the crossing of sheets. You have quite a lot of them in the paper, for instance page 12. (The renderer tries to make sure that the wires of both sheets do not cross at the same horizontal positions in the swap, to make it clearer that they are unrelated).
For the reliance on normalization, I think that's an important feature of the syntax. For string diagrams in monoidal categories, you also collapse structure by forgetting the bracketing of your products, because it's irrelevant. That's what makes the syntax powerful - it handles this coherence for you. If you want to represent distributors explicitly (and also associators, unitors) you can just follow Vicary & Dunn's approach, by drawing all the coherence isomorphisms as cells in a monoidal 2-category (https://lmcs.episciences.org/5645). You could do that for quite a lot of types of categories (including monoidal categories themselves actually), and it works out of the box but doesn't feel as useful to me since it requires you to represent all the coherence morphisms explicitly (so it's not much easier than just working with good old morphism expressions).
I think it's really a design choice to make all objects normalised. It means we get simpler diagrams, at the expense of the tensor product of morphisms being complicated to compute. It doesn't have to be that way though
Personally I'm really terrible at 3d visualisation, so for me keeping the diagrams as simple as possible is perfect
It's true that normalization has a cost (for instance creating copies of morphisms which would not need to be duplicated otherwise). It would be great to have the choice indeed.
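To make the object-level normalization (and its duplication cost) concrete, here is a toy Haskell sketch of it; the data type and names are my own invention, not anything from the paper or its tool:

```haskell
-- Objects of a free bimonoidal category over generators of type a.
data Obj a
  = Gen a            -- a generating object
  | Zero             -- additive unit
  | One              -- multiplicative unit
  | Obj a :+: Obj a  -- the sum  ⊕
  | Obj a :*: Obj a  -- the tensor ⊗
  deriving Show

-- Normal form: a sum (outer list) of products (inner lists) of generators.
-- The empty outer list is 0, an empty inner list is 1.
type NF a = [[a]]

normalize :: Obj a -> NF a
normalize (Gen x)   = [[x]]
normalize Zero      = []
normalize One       = [[]]
normalize (s :+: t) = normalize s ++ normalize t
normalize (s :*: t) = [ p ++ q | p <- normalize s, q <- normalize t ]
```

For example, `normalize ((Gen 'a' :+: Gen 'b') :*: (Gen 'c' :+: Gen 'd'))` gives `["ac","ad","bc","bd"]`, and you can already see how the last clause duplicates material, which is exactly the copying cost mentioned above.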
Note that the notation from James & Sabry is likely not worth emulating, in that it doesn't have any proven "good properties". Roshan, Amr and I also sought a better notation for our joint paper Embracing the Laws of Physics: Three Reversible Models of Computation. The notation in the paper should be understood as aiding intuition only, unlike the work here, which has formal underpinnings.
It seems to me (though I haven't thought hard about it), that we can view an extension like the one I suggested as syntactic sugar on top of what you've already got. It's not so much either/or as either/both. I'm mostly interested from a user interaction perspective (where duplication is very costly), but the abstract syntax is much less useful without your normalization sitting under the hood.
@Spencer Breiner perhaps there is a way to make this work, but it seems really non-trivial to me. We heavily rely on Joyal & Street for the soundness and completeness theorem, and as soon as you allow vertical splits your sheet diagram is no longer an extrusion of a planar one, so you probably need to switch gears and prove it from scratch. But for me, even before considering isotopies, there are diagrams with vertical splits that I am not even sure how to interpret, really.
Certainly it would be some work to make precise, but do you have an example in mind where a problem might arise?
My intuition would be something like this:
1) First, "unzip" each vertical seam in the diagram, meaning copy everything on the "zipped" side of the vertical seam and between the nearest horizontal seams.
2) A not-too-sophisticated induction should show that we can do this in a way that terminates. A bit of cleverness ought to show that any unzipping strategy will terminate.
3) The order of the unzipping will matter, but only up to equivalence; I think this is the geometric content of Def'n 10.
4) Invoke your result to determine equivalence classes for the reduced maps.
I have thought a bit more and it seems more doable to me now… It would be great because it would even let us define the tensor product of two morphisms simply as the remaining horizontal composition. I don't have a scanner at hand but as soon as I get one (in the coming days) I'll draw examples of things where I am currently stuck.
@Spencer Breiner One problem I have is this: image.png
Unless you see a natural interpretation for the last slice, it seems that we cannot accept arbitrary planar compositions of ziplocs as slices. It makes the syntax a bit less compositional than one might hope (but perhaps it's not a blocker).
Spencer Breiner said:
Can your results be adapted to accommodate the other styles of notation from Duncan or James & Sabry? Can you see any places where that sort of notation is either misleading or overly restrictive? I love how the 3D diagrams look (and reason), but I think the others are easier to read and write, in practice.
I haven't followed this thread super closely, but I suddenly had the thought, maybe these can be made precise with careful application of string diagrams for 2-categories? Like, the 0-cells correspond to sheets/"worlds", and there are extra 1-cells corresponding to the "+" operator / dotted line
One hope I had with this project was that by recognizing our diagrams as a special class of other diagrams, we would be able to see bimonoidal categories as special cases of the corresponding structures. Because it feels easier to find analogies between structures when looking at them from a graphical perspective (instead of lists of axioms). But it hasn't really happened yet as far as I know. The tensor exchange rule reminded us of bialgebras, but we haven't investigated that much. Also the diagrams feel like they could live in a monoidal double category - not sure if there is anything interesting to extract from that.
Antonin Delpeuch said:
One hope I had with this project was that by recognizing our diagrams as a special class of other diagrams, we would be able to see bimonoidal categories as special cases of the corresponding structures. Because it feels easier to find analogies between structures when looking at them from a graphical perspective (instead of lists of axioms). But it hasn't really happened yet as far as I know. The tensor exchange rule reminded us of bialgebras, but we haven't investigated that much. Also the diagrams feel like they could live in a monoidal double category - not sure if there is anything interesting to extract from that.
Upon more reflection I think it is more like a bimonoid than a bialgebra, in the same sense as the distributivity of multiplication over addition in the Lawvere theory for rings. Hence the name....
Antonin Delpeuch said:
Spencer Breiner One problem I have is this: image.png
Unless you see a natural interpretation for the last slice, it seems that we cannot accept arbitrary planar compositions of ziplocs as slices. It makes the syntax a bit less compositional than one might hope (but perhaps it's not a blocker).
It looks to me like this should reduce to the expected normal form. We get the same answer whichever order we unzip, so that seems reasonable. What's interesting here is that this is a connected diagram that can't be represented as a single product. It's like a new piece of syntax that interpolates between ⊗ and ⊕. Very cool!
This diagram kind of looks like the distributor in a linearly distributive category. I wonder if there is any connection there.
Sorry for taking so long @Antonin Delpeuch to get to this. I managed to read the paper carefully over a week ago, but not to write up my comments until now.
Note: even though I'm going to make some critical comments below, I'm doing so because I believe that this endeavour is worthwhile, so that it is worth my time to write up my critique.
First and foremost, you should look at Cockett and Seely's Proof Theory of the Cut Rule, especially the diagrams in Figure 1 on p.13. Those dotted lines are very important! They were, of course, already present in Natural deduction and coherence for weakly distributive categories; the reason for them is well explained in section 2.3. The thing that "goes wrong" is well illustrated on p.242 (p.14 of the PDF). This is also the source of many complications in Laplaza's paper.
It seems to me that Definition 9 is inadequate, in that it does not actually normalize. Take an object built out of the units, just as one example: your definition says that it is normalized (where it is not). Similarly, none of the other expressions involving 0 and 1 are 'simplified' by your definition, even though later on (such as in Lemma 12) you assume they are.
In the proof of Lemma 12, you dismiss Laplaza's (X) axiom as straightforward. In my own work on proving that the Rig category of Types satisfies this, that particular axiom was in fact the hardest to prove correct! The issue is related to the 'dotted lines' I already mentioned above: there are genuinely different (constructive) proofs that 0 ⊗ 0 ≅ 0, and if you choose the wrong one, you will get an incoherent result. [It probably shows up elsewhere too, that's where it bit me in my Agda proof. I am aware that this one is 'just' for the symmetric case.]
I think that the issue stems from what has been mentioned above already: you move to a 'normal form' quickly, assuming that the normalization procedure produces nothing of interest -- and this is not the case. There is a reason why Laplaza's definitions are rather non-trivial: the weird issues are not with ⊗ and ⊕ and distributivity, but rather with the weird interactions of 0 and 1. Yes, there is also the issue of repeated factors - but Laplaza treats that well, as do you.
In other words: I don't think you've adequately defined the set of expressions over which your results hold. And I wonder if the set of expressions where it does hold, is actually the set where the most interesting behaviour happens.
Hi @Jacques Carette, thanks a lot for your feedback!
Concerning the two papers you mention, I am a bit confused: these are about linearly distributive categories, which indeed behave very differently from bimonoidal categories (the distributivity that holds between the two monoidal structures is not the same). I totally agree with you that in their case, the dotted lines linking the units to where they are introduced are very important. Are you trying to say that our graphical calculus should have those dotted lines as well? Can you point out a diagram of ours (or equivalence of diagrams) which is incorrect because of that?
Concerning Definition 9, you should keep in mind that we work in a semistrict setting anyway, so 0 ⊗ A = 0 holds on the nose (and similarly for the other unit relations).
I think the rest of your concerns arise from the same mixup between linear distributivity and classical distributivity. It would be worth adding a paragraph to the introduction to emphasize the difference.
I'm very happy if the basic problem is that I'm confused... (as writer, you still have to make sure that doesn't happen to your readers, but that's easier to do than fix broken math).
Bimonoidal (or rig) categories: A ⊗ (B ⊕ C) ≅ (A ⊗ B) ⊕ (A ⊗ C)
Linearly distributive categories: A ⊗ (B ⊕ C) → (A ⊗ B) ⊕ C
They are just very different beasts :)
Indeed! Oops. However, there are still issues, I think, with the units - not the ones due to linear distributivity, but the ones due to assuming semi-strictness.
I am all ears about these issues then!
I think the passage from non-strict to strict has lots of non-trivialities. That would prevent people from using your work in applications.
I would certainly not characterize the semi-strictification theorem as trivial, for sure. But it is not clear to me why relying on it would make it harder for people to use these diagrams.
For example, has 2 non-equivalent inhabitants. You rule out but not . At least, from my understanding.
Neither nor are semi-strict, as far as I understand.
Set is not even strict as a monoidal category, but that does not prevent people from using string diagrams for monoidal categories with it, right?
These string diagrams ignore all associators and unitors and we want them to do so, because it is convenient and safe.
Ignoring associators and unitors: agreed. That is indeed the whole point, as they just clutter things up.
Then the story is the same for bimonoidal categories: we want to strictify as much as we can without losing information, and that is what semi-strictification does.
Concerning 0 ⊕ 0, we do indeed have multiple sheet diagrams for that: one where the two empty sheets cross, and one where they do not (for the identity).
You just have to make sure that you don't also ignore things that do matter. Identifying A ⊕ 0 with A is safe. But identifying 0 ⊗ 0 with 0 is a real problem, and so are the absorptions 0 ⊗ A and A ⊗ 0.
No, in a free bimonoidal category there is a single isomorphism 0 ⊗ 0 ≅ 0: that is axiom (X).
(axiom (X) both in our paper and in Laplaza, we kept the same numbering)
What I meant was that it is easy to choose incoherent proofs of 0 ⊗ A ≅ 0 (and of the opposite handedness A ⊗ 0 ≅ 0) that do not witness axiom (X).
My first attempt at proving Types bimonoidal fell into that trap.
Sure, then in that case you do not have a bimonoidal category, so you cannot use our diagrams.
The problem is that of course Types is a bimonoidal category, I just chose the wrong witnesses (proof relevance!).
what is Types actually?
It's like Set but without assuming excluded middle (or choice, obviously; but also without assuming axiom K).
It's what Agda pronounces Set, but that's been regarded as an error.
Jacques Carette said:
It's what Agda pronounces Set, but that's been regarded as an error.
The cubical library renames it to Type everywhere :joy:
As it should!
Also, have you seen the thread https://twitter.com/davidad/status/1327406604476669956 ?
Nice! Yes I am also confused by the role played by symmetry here - it probably means that these diagrams need more dimensions than three to be comfortable.
Sorry if my feedback has been sub-optimally useful.
Jacques Carette said:
Also, have you seen the thread https://twitter.com/davidad/status/1327406604476669956 ?
Does he mean... for diagrams representing individual morphisms? Otherwise there are definitely diagrams I would want to draw in categories that would not fit in 1d
Commutative diagrams are actually diagrams in the 2-categorical nerve of a category
That's an interesting way to formulate it. Is that written formally/carefully somewhere?
Oh, I used “2-categorical nerve of a category” playfully, I don't think it's really used. It would come from treating the truncation functor which turns all 2-cells into identities as a kind of 'realisation of 2-categories in categories', so that the usual locally discrete inclusion Cat ↪ 2Cat, which is its right adjoint, could well be called a 'nerve'.
Then any “commutative n-gon” in a category C is a 2-cell in the 2-category NC, and various “pastings of commutative diagrams” are diagrams of 2-cells in NC. That doesn't exhaust all sorts of things that people call “commutative diagrams” (for example the ones with two parallel arrows which are not necessarily presumed equal) but covers most uses.
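Spelled out (a standard adjunction; I'm writing π for the truncation and N for the locally discrete inclusion, notation I'm choosing here rather than quoting):

$$2\mathrm{Cat}(\mathcal{A}, N C) \;\cong\; \mathrm{Cat}(\pi \mathcal{A}, C),$$

naturally in the 2-category $\mathcal{A}$ and the category $C$: a 2-functor into $NC$ must send every 2-cell to an identity, which is exactly the data of a functor out of $\pi\mathcal{A}$.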
This is all folklore-ish but I'm sure you can find it, for example, in Leinster's book on higher categories and higher operads.
To make the “nerve/realisation” language a bit more pertinent: this picture would come from seeing 2Cat as a reflective subcategory of the presheaf category PSh(Θ₂), where Θ₂ is Joyal's category Θ restricted to level 2.
There is a functor Θ₂ → Cat which induces a nerve-realisation pair between PSh(Θ₂) and Cat, which actually restricts to the adjunction between 2Cat and Cat.