Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: open Petri nets


view this post on Zulip Daniel Plácido (Jul 08 2021 at 15:01):

Does anybody working on open Petri nets know how Baez, Master et al.'s theory relates to the stuff talked about in this video? https://www.youtube.com/watch?v=77KwKNSb7uc

view this post on Zulip Fabrizio Genovese (Jul 08 2021 at 19:32):

Very good question. To my knowledge the PROP approach and the free monoidal categories approach are still unrelated atm.

view this post on Zulip Fabrizio Genovese (Jul 08 2021 at 19:33):

(And yes, we'll have to eventually fix this)

view this post on Zulip John Baez (Jul 09 2021 at 00:00):

Can someone summarize what "this stuff" is, so I don't have to watch a 50-minute video?

view this post on Zulip Robin Piedeleu (Jul 09 2021 at 08:39):

I saw this ages ago, but afaik it's a presentation of the brand of open Petri nets where composition is performed via transition synchronisation (instead of the other brand which prefers gluing along places). If I remember correctly, @Pawel Sobocinski also shows how their operational semantics (which markings can reach which other markings via which transitions) defines a monoidal functor to a category of open labelled transition systems (or a subcategory of spans of graphs).

In the case of elementary nets (Petri nets that can have at most one token in a place), there are only finitely many possible markings (ways of placing tokens in places), so the reachability semantics factors through some category of finite-state transition systems whose labels are pairs of vectors of 0s and 1s. Each instance of the reachability problem, i.e. a pair of markings $(a,b)$ of the same net, gives an initial and final state, so we not only have a transition system but a nondeterministic finite-state automaton (NFA), which recognises a regular language (for this weird alphabet of pairs of vectors of 0s and 1s). A key point is that, for a closed net, the corresponding language is non-empty iff one can reach $b$ from $a$. This gives a compositional approach to reachability: you can break down a given (closed) net into smaller open nets, map them functorially to NFAs which encode the open reachability problem for each of them, compose them as NFAs, and check whether the language recognised by the resulting NFA is empty. You can make this procedure more efficient by using language-equivalence to discard redundant internal states at each composition step. For well-chosen decompositions (although it is not clear what those are in general), this method seems to reduce the complexity of reachability-checking and has been shown to outperform several existing reachability tools.
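The compositional recipe sketched above can be illustrated in a few lines of Python. This is a toy encoding of the idea, not the implementation from the paper; the names `NFA`, `compose`, and `is_empty` are illustrative. It models an NFA over an alphabet of (left, right) bit-vector pairs, composes two NFAs by synchronising on the shared boundary component, and checks language emptiness by plain state reachability.

```python
from collections import deque

class NFA:
    """Toy NFA over an alphabet of (left, right) bit-vector label pairs.

    `trans` maps a state to a list of (label, next_state) pairs, where a
    label is a pair (x, y) of tuples of 0s and 1s describing which open
    transitions on each boundary fire in that step.
    """
    def __init__(self, trans, init, final):
        self.trans, self.init, self.final = trans, init, final

    def is_empty(self):
        """True iff the recognised language is empty, i.e. the final
        state is unreachable from the initial one."""
        seen, todo = {self.init}, deque([self.init])
        while todo:
            s = todo.popleft()
            if s == self.final:
                return False
            for _, t in self.trans.get(s, []):
                if t not in seen:
                    seen.add(t)
                    todo.append(t)
        return True

def compose(p, q):
    """Synchronised composition: P;Q fires (x, z) exactly when P fires
    (x, y) and Q fires (y, z) for some shared boundary vector y."""
    trans = {}
    for s, moves_p in p.trans.items():
        for t, moves_q in q.trans.items():
            out = []
            for (x, y), s2 in moves_p:
                for (y2, z), t2 in moves_q:
                    if y == y2:  # synchronise on the shared boundary
                        out.append(((x, z), (s2, t2)))
            if out:
                trans[(s, t)] = out
    return NFA(trans, (p.init, q.init), (p.final, q.final))
```

Composing two automata that agree on the shared boundary vector yields a non-empty language; if the boundary vectors never match, the synchronised composite recognises nothing, which is exactly the emptiness check driving the reachability procedure.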

view this post on Zulip Robin Piedeleu (Jul 09 2021 at 08:42):

As for Daniel's original question, like Fabrizio, I'm not sure what the relationship between the two forms of composition (by transition synchronisation or gluing along places) is. They seem to be just two different ways of composing Petri nets. This makes sense given that, purely syntactically, Petri nets look like hypergraphs, so we're free to compose them along hyperedges or vertices.

view this post on Zulip Robin Piedeleu (Jul 09 2021 at 08:48):

(This has also made me nostalgic for the first statebox workshop in Croatia! Seems like a lifetime ago in the current covid era)

view this post on Zulip Pawel Sobocinski (Jul 09 2021 at 08:55):

My main goal with this work was to get a compositional account of Petri net semantics. Compositional here means exactly what Robin said: there is a monoidal functor from the "category of syntax" (where syntax here is Petri nets) to the "category of semantics" (the category of open transition systems). In a way this project was motivated by 90s and 00s Concurrency Theory -- the usual stereotype was that either you do process algebra, where you care about compositionality, or you do Petri nets, where you don't. I wanted to 1) show the Petri net people that compositionality can be useful; not only for model checking but also for other things and 2) show the process algebra people that syntax for concurrency can be more interesting than a Dijkstra parallel composition operator

view this post on Zulip Pawel Sobocinski (Jul 09 2021 at 08:56):

Probably the best reference for all this is Owen Stephens' PhD thesis: https://www.ioc.ee/~pawel/theses/stephens.pdf

view this post on Zulip Pawel Sobocinski (Jul 09 2021 at 08:59):

Philosophically, gluing along transitions has also always made much more sense to me. Places represent resources and so are local. I interact with the outside world by synchronising on publicly visible actions, and these actions have corresponding local effect on resources... I still think that some of the examples of this in action are quite compelling.

view this post on Zulip Pawel Sobocinski (Jul 09 2021 at 09:01):

The most surprising thing was that this led directly to Graphical Linear Algebra... when you think about the algebra of transitions, a place can connect to several transitions (an additive structure) but a transition can connect to several places, consuming and producing multiple tokens (a copying structure). This was my first "prop" paper https://arxiv.org/abs/1303.3371.

view this post on Zulip Pawel Sobocinski (Jul 09 2021 at 09:03):

Shortly after this, I spent a sabbatical in Lyon with Filippo and Fabio Zanasi was just starting his PhD. We wanted to write down the equational theory of Petri nets but it seemed super messy.

So we thought: what if we forget the mess, write down the most aesthetically pleasing equations, and try to figure out what they mean? It turns out they meant LinRel_{Z_2} :laughing:

view this post on Zulip Jules Hedges (Jul 09 2021 at 11:21):

Going by vague memory here (I was in the room but I haven't rewatched the talk since!) - is this related to the 2 different ways of gluing graphs? Either you can glue graphs along nodes (which you get from arbitrary cospans of graphs, gluing by pushout), or you can glue open graphs along half-open edges, where you can simulate a half-open edge via dummy nodes of degree 1

view this post on Zulip Pawel Sobocinski (Jul 09 2021 at 12:19):

Kinda. Except these are not graphs but hypergraphs.

view this post on Zulip Fabrizio Genovese (Jul 09 2021 at 12:49):

By the way, you can also glue (hyper)graphs along edges using pushouts. For me, the really insightful part of Pawel's approach is that you can develop a purely syntactic calculus to describe the gluings

view this post on Zulip Fabrizio Genovese (Jul 09 2021 at 12:49):

This is of great importance because computers are generally bad at computing pushouts, and much better at rewriting stuff by applying rewriting rules

view this post on Zulip Jade Master (Jul 12 2021 at 22:59):

Robin Piedeleu said:

I saw this ages ago, but afaik it's a presentation of the brand of open Petri nets where composition is performed via transition synchronisation (instead of the other brand which prefers gluing along places). If I remember correctly, Pawel Sobocinski also shows how their operational semantics (which markings can reach which other markings via which transitions) defines a monoidal functor to a category of open labelled transition systems (or a subcategory of spans of graphs).

In the case of elementary nets (Petri nets that can have at most one token in a place), there are only finitely many possible markings (ways of placing tokens in places), so the reachability semantics factors through some category of finite-state transition systems whose labels are pairs of vectors of 0s and 1s. Each instance of the reachability problem, i.e. a pair of markings $(a,b)$ of the same net, gives an initial and final state, so we not only have a transition system but a nondeterministic finite-state automaton (NFA), which recognises a regular language (for this weird alphabet of pairs of vectors of 0s and 1s). A key point is that, for a closed net, the corresponding language is non-empty iff one can reach $b$ from $a$. This gives a compositional approach to reachability: you can break down a given (closed) net into smaller open nets, map them functorially to NFAs which encode the open reachability problem for each of them, compose them as NFAs, and check whether the language recognised by the resulting NFA is empty. You can make this procedure more efficient by using language-equivalence to discard redundant internal states at each composition step. For well-chosen decompositions (although it is not clear what those are in general), this method seems to reduce the complexity of reachability-checking and has been shown to outperform several existing reachability tools.

Thanks Robin. What do you mean by language equivalence here?

view this post on Zulip Robin Piedeleu (Jul 13 2021 at 10:07):

I mean the relation that equates two NFA if they recognise the same language. And I am speaking here of NFA that come from an open elementary net equipped with an initial and final marking.

Any open elementary net $m \rightarrow n$ can be seen as a labelled transition system with labels in the alphabet $\Sigma = \{(x,y) \mid x \in \mathbb{B}^m,\ y \in \mathbb{B}^n\}$, i.e. pairs of vectors of 0s and 1s whose dimensions are given by the number of dangling transitions on the left and on the right. Each such pair of vectors identifies a set of open transitions that can be fired simultaneously when transitioning from one state/marking to another.

If we further single out an initial marking $a$ and a final marking $b$, we can treat the net as an NFA and consider the language it recognises: this language is a set of words over $\Sigma$---or sequences $(x_0,y_0)(x_1,y_1)\dots(x_k,y_k)$ of Boolean vectors of the right dimension---such that, starting in state $a$, we can reach state $b$ by firing the open transitions identified by each $(x_i,y_i)$, in the specified order.
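To make the firing rule behind these markings concrete, here is a minimal Python sketch of a single step of an elementary net. The representation is my own toy choice, not notation from the thread: a marking is a frozenset of marked places, and a transition is a (pre, post) pair of frozensets of places.

```python
def step(marking, transition):
    """Fire one transition of an elementary net, if enabled.

    `marking` is a frozenset of marked places; `transition` is a pair
    (pre, post) of frozensets of places.  The transition is enabled iff
    every pre-place is marked and no post-place outside pre already
    holds a token (elementary nets allow at most one token per place).
    Returns the new marking, or None if the transition is not enabled.
    """
    pre, post = transition
    if not pre <= marking:          # a pre-place is missing its token
        return None
    if (post - pre) & marking:      # would put a second token in a place
        return None
    return (marking - pre) | post
```

Since each place holds at most one token, a marking is exactly a Boolean vector over the places, which is why the reachability semantics lands in finite-state transition systems labelled by vectors of 0s and 1s.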

view this post on Zulip Jade Master (Jul 13 2021 at 20:22):

Robin Piedeleu said:

I mean the relation that equates two NFA if they recognise the same language. And I am speaking here of NFA that come from an open elementary net equipped with an initial and final marking.

Any open elementary net $m \rightarrow n$ can be seen as a labelled transition system with labels in the alphabet $\Sigma = \{(x,y) \mid x \in \mathbb{B}^m,\ y \in \mathbb{B}^n\}$, i.e. pairs of vectors of 0s and 1s whose dimensions are given by the number of dangling transitions on the left and on the right. Each such pair of vectors identifies a set of open transitions that can be fired simultaneously when transitioning from one state/marking to another.

If we further single out an initial marking $a$ and a final marking $b$, we can treat the net as an NFA and consider the language it recognises: this language is a set of words over $\Sigma$---or sequences $(x_0,y_0)(x_1,y_1)\dots(x_k,y_k)$ of Boolean vectors of the right dimension---such that, starting in state $a$, we can reach state $b$ by firing the open transitions identified by each $(x_i,y_i)$, in the specified order.

Thanks Robin that makes sense. I'm now wondering if there are any canonical ways to replace an NFA with a simpler language equivalent one.

view this post on Zulip Jade Master (Jul 13 2021 at 20:24):

I guess in this case you can discard internal states and still get the same language...between the boundaries. But I'm not sure how to know when a state is internal.

view this post on Zulip Jade Master (Jul 13 2021 at 20:25):

Or what to replace it with.

view this post on Zulip dan pittman (Jul 13 2021 at 21:07):

Jade Master said:

I guess in this case you can discard internal states and still get the same language...between the boundaries. But I'm not sure how to know when a state is internal.

I wonder if this is what the operational semantics part is for? It kind of sounds like what compiler folks do when implementing optimizations, where they prove that the optimization does the same thing as the not-optimized version using operational semantics. A wild guess.

view this post on Zulip Robin Piedeleu (Jul 13 2021 at 22:00):

Jade Master said:

I guess in this case you can discard internal states and still get the same language...between the boundaries. But I'm not sure how to know when a state is internal.

I'm not sure that the internal/external distinction really applies here---all states are internal in some sense, since we're not using them as interfaces for composition. Note also that the composition of NFAs that we use is quite unusual: much like the composition of nets via transition synchronisation, we compose NFAs by synchronising on the shared component of the pairs that make up their respective alphabets. If I remember correctly, it's something along the lines of: $P;Q$ recognises $(x_0,z_0)\dots(x_k,z_k)$ iff there exist $y_0,\dots,y_k$ s.t. $P$ recognises $(x_0,y_0)\dots(x_k,y_k)$ and $Q$ recognises $(y_0,z_0)\dots(y_k,z_k)$. You see that it's really an extended form of the usual relational composition.

And, to answer your first question, there are canonical ways of replacing an NFA with a simpler language-equivalent one. Well, that's not entirely true: there is a canonical minimal (i.e. with the smallest number of states up to isomorphism) DFA that recognises a given language. Now, I'm not sure this is what @Pawel Sobocinski and his coauthors did, but one way of discarding useless states would be to determinise and then minimise, although that does sound quite prohibitive in terms of complexity. There might be simpler heuristics that give you sufficient improvements.
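The determinise-then-minimise route can be sketched as follows. This is a toy subset construction plus Moore-style partition refinement; the names `determinise` and `minimise` are illustrative, and this is not claimed to be the algorithm the paper actually adapted.

```python
def determinise(trans, init, alphabet):
    """Subset construction: NFA -> DFA over the same alphabet.

    `trans` maps (state, label) to a set of successor states; the result
    maps each reachable subset of NFA states to its row of successors.
    """
    start = frozenset([init])
    dfa, todo = {}, [start]
    while todo:
        s = todo.pop()
        if s in dfa:
            continue
        dfa[s] = {}
        for a in alphabet:
            t = frozenset(q for p in s for q in trans.get((p, a), ()))
            dfa[s][a] = t
            if t not in dfa:
                todo.append(t)
    return dfa, start


def minimise(dfa, accepting):
    """Moore-style partition refinement.

    Returns a map from DFA states to equivalence-class ids; two states
    land in the same class iff they recognise the same residual language.
    """
    part = {s: int(s in accepting) for s in dfa}
    n = len(set(part.values()))
    while True:
        # refine by (own class, classes of successors under each label)
        sig = {s: (part[s],) + tuple(part[dfa[s][a]] for a in sorted(dfa[s]))
               for s in dfa}
        ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        part = {s: ids[sig[s]] for s in dfa}
        m = len(set(part.values()))
        if m == n:          # no class split: partition is stable
            return part
        n = m
```

For instance, in a cyclic DFA where two accepting states loop into each other, both recognise the same residual language and collapse to one class. The determinisation step is where the exponential blow-up Robin mentions can occur, which is why cheaper heuristic reductions may be preferable in practice.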

view this post on Zulip Jade Master (Jul 14 2021 at 17:29):

Thanks Robin. I am also curious about the mechanism that Sobocinski and Stephens used for this simplification. Maybe a technique from here https://en.wikipedia.org/wiki/DFA_minimization ...

view this post on Zulip Robin Piedeleu (Jul 14 2021 at 18:03):

I just checked their paper and it seems like they do use minimisation as I suggested earlier. They say on p.27 that they've adapted a minimisation algorithm from this paper.