You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Does anybody working on open Petri nets know how Baez, Master et al.'s theory relates to the stuff discussed in this video? https://www.youtube.com/watch?v=77KwKNSb7uc
Very good question. To my knowledge the PROP approach and the free monoidal categories approach are still unrelated atm.
(And yes, we'll have to eventually fix this)
Can someone summarize what "this stuff" is, so I don't have to watch a 50-minute video?
I saw this ages ago, but afaik it's a presentation of the brand of open Petri nets where composition is performed via transition synchronisation (instead of the other brand which prefers gluing along places). If I remember correctly, @Pawel Sobocinski also shows how their operational semantics (which markings can reach which other markings via which transitions) defines a monoidal functor to a category of open labelled transition systems (or a subcategory of spans of graphs).
In the case of elementary nets (Petri nets that can have at most one token in a place), there are only a finite number of possible markings (ways of placing tokens in places), so the reachability semantics factors through some category of finite-state transition systems whose labels are pairs of vectors of 0s and 1s. Each instance of the reachability problem, i.e. a pair of markings of the same net, gives an initial and final state, so we not only have a transition system but a nondeterministic finite-state automaton (NFA), which recognises a regular language (for this weird alphabet of pairs of vectors of 0s and 1s). A key point is that, for a closed net, the corresponding language is non-empty iff the final marking can be reached from the initial one. This gives a compositional approach to reachability: you can break down a given (closed) net into smaller open nets, map them functorially to NFA which encode the open reachability problem for each of them, compose them as NFA and compute whether the language recognised by the resulting NFA is empty. You can make this procedure more efficient by using language-equivalence to discard redundant internal states at each composition step. For well-chosen decompositions (although it is not clear what those are in general), this method seems to reduce the complexity of reachability-checking and has been shown to outperform several existing reachability tools.
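The emptiness check at the end of that pipeline is just graph reachability: an NFA's language is non-empty iff some accepting state is reachable from an initial state, regardless of the labels. A minimal sketch (encoding and names are illustrative, not from the paper):

```python
from collections import deque

def language_nonempty(initial, accepting, transitions):
    """Return True iff the NFA accepts at least one word.

    transitions: dict mapping a state to a list of (label, next_state)
    pairs; labels are irrelevant for emptiness, only connectivity matters.
    """
    seen, queue = set(initial), deque(initial)
    while queue:
        state = queue.popleft()
        if state in accepting:
            return True
        for _label, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy example: two markings, connected by one labelled step.
trans = {"m0": [("t1", "m1")], "m1": []}
print(language_nonempty({"m0"}, {"m1"}, trans))  # True
print(language_nonempty({"m0"}, {"m2"}, trans))  # False
```

The expensive part of the method is of course building and composing the NFAs, not this final check.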
As for Daniel's original question, like Fabrizio, I'm not sure what the relationship between the two forms of composition (by transition synchronisation or gluing along places) is. They seem to be just two different ways of composing Petri nets. This makes sense given that, purely syntactically, Petri nets look like hypergraphs, so we're free to compose them along hyperedges or vertices.
(This has also made me nostalgic for the first statebox workshop in Croatia! Seems like a lifetime ago in the current covid era)
My main goal with this work was to get a compositional account of Petri net semantics. Compositional here means exactly what Robin said: there is a monoidal functor from the "category of syntax" (where syntax here is Petri nets) to the "category of semantics" (the category of open transition systems). In a way this project was motivated by 90s and 00s Concurrency Theory -- the usual stereotype was that either you do process algebra, where you care about compositionality, or you do Petri nets, where you don't. I wanted to 1) show the Petri net people that compositionality can be useful; not only for model checking but also for other things and 2) show the process algebra people that syntax for concurrency can be more interesting than a Dijkstra parallel composition operator
Probably the best reference for all this is Owen Stephens' PhD thesis: https://www.ioc.ee/~pawel/theses/stephens.pdf
Philosophically, gluing along transitions has also always made much more sense to me. Places represent resources and so are local. I interact with the outside world by synchronising on publicly visible actions, and these actions have corresponding local effects on resources... I still think that some of the examples of this in action are quite compelling.
The most surprising thing was that this led directly to Graphical Linear Algebra... when you think about the algebra of transitions, a place can connect to several transitions (an additive structure) but a transition can connect to several places, consuming and producing multiple tokens (a copying structure). This was my first "prop" paper https://arxiv.org/abs/1303.3371.
Shortly after this, I spent a sabbatical in Lyon with Filippo, and Fabio Zanasi was just starting his PhD. We wanted to write down the equational theory of Petri nets but it seemed super messy.
So we thought: what if we forget the mess, write down the most aesthetically pleasing equations, and try to figure out what they mean? It turns out they meant LinRel_{Z_2} :laughing:
Going by vague memory here (I was in the room but I haven't rewatched the talk since!) - is this related to the 2 different ways of gluing graphs? Either you can glue graphs along nodes (which you get from arbitrary cospans of graphs, gluing by pushout), or you can glue open graphs along half-open edges, where you can simulate a half-open edge via dummy nodes of degree 1
Kinda. Except these are not graphs but hypergraphs.
By the way, you can also glue (hyper)graphs along edges using pushouts. For me, the really insightful part of Pawel's approach is that you can develop a purely syntactic calculus to describe the gluings
This is of great importance because computers are generally bad at computing pushouts, and much better at rewriting stuff by applying rewrite rules
Robin Piedeleu said:
I saw this ages ago, but afaik it's a presentation of the brand of open Petri nets where composition is performed via transition synchronisation (instead of the other brand which prefers gluing along places). If I remember correctly, Pawel Sobocinski also shows how their operational semantics (which markings can reach which other markings via which transitions) defines a monoidal functor to a category of open labelled transition systems (or a subcategory of spans of graphs).
In the case of elementary nets (Petri nets that can have at most one token in a place), there are only a finite number of possible markings (ways of placing tokens in places), so the reachability semantics factors through some category of finite-state transition systems whose labels are pairs of vectors of 0s and 1s. Each instance of the reachability problem, i.e. a pair of markings of the same net, gives an initial and final state, so we not only have a transition system but a nondeterministic finite-state automaton (NFA), which recognises a regular language (for this weird alphabet of pairs of vectors of 0s and 1s). A key point is that, for a closed net, the corresponding language is non-empty iff the final marking can be reached from the initial one. This gives a compositional approach to reachability: you can break down a given (closed) net into smaller open nets, map them functorially to NFA which encode the open reachability problem for each of them, compose them as NFA and compute whether the language recognised by the resulting NFA is empty. You can make this procedure more efficient by using language-equivalence to discard redundant internal states at each composition step. For well-chosen decompositions (although it is not clear what those are in general), this method seems to reduce the complexity of reachability-checking and has been shown to outperform several existing reachability tools.
Thanks Robin. What do you mean by language equivalence here?
I mean the relation that equates two NFA if they recognise the same language. And I am speaking here of NFA that come from an open elementary net equipped with an initial and final marking.
Any open elementary net can be seen as a labelled transition system with labels in an alphabet of pairs of vectors of 0s and 1s, with dimensions given by the number of dangling transitions on the left and on the right. Each such pair of vectors identifies a set of open transitions that can be fired simultaneously when transitioning from one state/marking to another.
If we further single out an initial marking and a final marking, we can treat the net as an NFA and consider the language it recognises: this language is a set of words over this alphabet---sequences of pairs of Boolean vectors of the right dimensions---such that, starting in the initial state, we can reach the final state by firing the open transitions identified by each letter, in the specified order.
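To make that concrete, here is a toy encoding (my own, not from the thesis) of a single-place elementary net with one dangling transition on each side: the left one produces the token, the right one consumes it. States are markings (0 or 1 token), letters are pairs of bits recording which boundary transitions fired.

```python
# States: 0 = place empty, 1 = place marked (elementary net: at most one token).
# Letters: pairs (l, r) of bits; l = left boundary transition fired,
# r = right boundary transition fired.
DELTA = {
    (0, (1, 0)): 1,  # left transition fires, producing the token
    (1, (0, 1)): 0,  # right transition fires, consuming the token
    (0, (0, 0)): 0,  # nothing fires
    (1, (0, 0)): 1,
}

def accepts(word, start=0, final=0):
    """True iff firing the letters of `word` in order takes `start` to `final`."""
    state = start
    for letter in word:
        if (state, letter) not in DELTA:
            return False  # this set of transitions is not firable in this marking
        state = DELTA[(state, letter)]
    return state == final

print(accepts([(1, 0), (0, 1)]))   # put then take: accepted
print(accepts([(0, 1)]))           # take from an empty place: rejected
print(accepts([(1, 0), (1, 0)]))   # put twice: rejected (at most one token)
```

With initial and final marking both empty, the recognised language is exactly the alternating put/take behaviours one would expect of a one-token buffer.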
Robin Piedeleu said:
I mean the relation that equates two NFA if they recognise the same language. And I am speaking here of NFA that come from an open elementary net equipped with an initial and final marking.
Any open elementary net can be seen as a labelled transition system with labels in an alphabet of pairs of vectors of 0s and 1s, with dimensions given by the number of dangling transitions on the left and on the right. Each such pair of vectors identifies a set of open transitions that can be fired simultaneously when transitioning from one state/marking to another.
If we further single out an initial marking and a final marking, we can treat the net as an NFA and consider the language it recognises: this language is a set of words over this alphabet---sequences of pairs of Boolean vectors of the right dimensions---such that, starting in the initial state, we can reach the final state by firing the open transitions identified by each letter, in the specified order.
Thanks Robin, that makes sense. I'm now wondering if there are any canonical ways to replace an NFA with a simpler language-equivalent one.
I guess in this case you can discard internal states and still get the same language...between the boundaries. But I'm not sure how to know when a state is internal.
Or what to replace it with.
Jade Master said:
I guess in this case you can discard internal states and still get the same language...between the boundaries. But I'm not sure how to know when a state is internal.
I wonder if this is what the operational semantics part is for? It kind of sounds like what compiler folks do when implementing optimizations, where they prove that the optimization does the same thing as the not-optimized version using operational semantics. A wild guess.
Jade Master said:
I guess in this case you can discard internal states and still get the same language...between the boundaries. But I'm not sure how to know when a state is internal.
I'm not sure that the internal/external distinction really applies here---all states are internal in some sense since we're not using them as interfaces for composition. Note also that the composition of NFA that we use is quite unusual: much like the composition of nets via transition synchronisation, we compose NFA by synchronising on the shared component of the pairs that make up their respective alphabets. If I remember correctly, it's something along the lines of: the composite A;B recognises the word (u_1,w_1)...(u_n,w_n) iff there exists a sequence v_1,...,v_n s.t. A recognises (u_1,v_1)...(u_n,v_n) and B recognises (v_1,w_1)...(v_n,w_n). You see that it's really an extended form of the usual relational composition.
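That synchronising composition is a product construction where the middle component of the labels must match. A sketch under the same toy encoding as above (transitions as triples, names illustrative):

```python
def compose(nfa_a, nfa_b):
    """Synchronising composition of NFAs whose letters are pairs.

    Each NFA is a set of transitions (state, (left, right), next_state).
    The composite has a step ((a, b), (u, w), (a2, b2)) iff there is some
    shared middle letter v with a -(u,v)-> a2 in A and b -(v,w)-> b2 in B.
    """
    composite = set()
    for (a, (u, v), a2) in nfa_a:
        for (b, (v2, w), b2) in nfa_b:
            if v == v2:  # synchronise on the shared boundary
                composite.add(((a, b), (u, w), (a2, b2)))
    return composite

A = {("a0", (1, 1), "a1")}   # reads 1 on the left, emits 1 on the shared boundary
B = {("b0", (1, 0), "b1")}   # consumes that 1, emits 0 on the right
print(compose(A, B))         # one step: (('a0', 'b0'), (1, 0), ('a1', 'b1'))
```

Hiding the middle component in the composite's labels is exactly what makes this an "extended relational composition": the existential over the shared letter is built into the construction.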
And, to answer your first question, there are canonical ways of replacing an NFA with a simpler language-equivalent one. Well, that's not entirely true: there is a canonical minimal (i.e. with the smallest number of states up to isomorphism) DFA that recognises a given language. Now, I'm not sure this is what @Pawel Sobocinski and his coauthors did but that would be one way of discarding useless states. Determinise and then minimise, although that does sound quite prohibitive in terms of complexity. There might be simpler heuristics that give you sufficient improvements.
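The determinisation step here is the standard subset construction; a compact sketch (encoding is my own):

```python
from collections import deque

def determinise(initials, accepting, transitions, alphabet):
    """Subset construction: NFA -> DFA.

    transitions: dict (state, letter) -> set of next states.
    Returns the DFA's states (frozensets of NFA states), its transition
    dict, and its set of accepting states.
    """
    start = frozenset(initials)
    dfa_trans, dfa_accepting = {}, set()
    queue, seen = deque([start]), {start}
    while queue:
        subset = queue.popleft()
        if subset & set(accepting):
            dfa_accepting.add(subset)
        for letter in alphabet:
            target = frozenset(s2 for s in subset
                               for s2 in transitions.get((s, letter), ()))
            dfa_trans[(subset, letter)] = target
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen, dfa_trans, dfa_accepting
```

The exponential blow-up Robin alludes to is visible in the state type: DFA states are sets of NFA states, so in the worst case there are 2^n of them.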
Thanks Robin. I am also curious about the mechanism that Sobocinski and Stephens used for this simplification. Maybe a technique from here https://en.wikipedia.org/wiki/DFA_minimization ...
I just checked their paper and it seems like they do use minimisation as I suggested earlier. They say on p.27 that they've adapted a minimisation algorithm from this paper.
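For reference, the classic partition-refinement idea behind DFA minimisation (a generic Moore-style textbook sketch, not the adapted algorithm from their paper): start by separating accepting from non-accepting states, then repeatedly split any block whose states disagree on which block some letter sends them to.

```python
def minimise(states, alphabet, delta, accepting):
    """Moore-style partition refinement for a complete DFA.

    delta: dict (state, letter) -> state (must be total).
    Returns the final partition: a list of blocks of pairwise
    language-equivalent states.
    """
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states by which block each letter sends them to.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition) if delta[(s, a)] in b)
                    for a in sorted(alphabet)
                )
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# Toy DFA in which states 1 and 2 recognise the same residual language.
delta = {(0, "a"): 1, (1, "a"): 3, (2, "a"): 3, (3, "a"): 3}
print(minimise({0, 1, 2, 3}, {"a"}, delta, {3}))  # merges 1 and 2
```

Hopcroft's variant of this refinement runs in O(n log n); whether that, or language equivalence checked some other way, is what pays off in the reachability setting presumably depends on how the intermediate NFAs blow up.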