You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Is there a notion of input and output for petri nets?
Analogously, a discrete dynamical system is a set of states S with an update rule u: S -> S. But an input/output discrete dynamical system would additionally have a set of inputs A, a set of outputs B, an update u: A x S -> S, and a readout r: S -> B.
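If it helps to fix ideas, here is a minimal sketch of that kind of machine; the names and the example are made up, just for illustration.

```python
# A minimal input/output discrete dynamical system (Moore-style): a state set
# with an update that also consumes an input, plus a readout of the current state.

def run(update, readout, initial_state, inputs):
    """Feed a list of inputs through the machine, returning the readout after each step."""
    state = initial_state
    outputs = []
    for a in inputs:
        state = update(a, state)        # update: A x S -> S
        outputs.append(readout(state))  # readout: S -> B
    return outputs

# Example: states are integers mod 5, inputs are +1/-1, the output is the parity.
update = lambda a, s: (s + a) % 5
readout = lambda s: s % 2
print(run(update, readout, 0, [1, 1, 1, -1]))  # [1, 0, 1, 0]
```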
There's not a notion for Petri nets per se, but this is probably just what open Petri nets give you!
You may have another idea in mind, and there are various imaginable ideas, but our "input" and "output" are just functions i: X -> S and o: Y -> S, where our Petri net is P and S is its set of 'places'.
Hmmm...I think of open petri nets as "resource sharing" which feels distinct from inputs which parameterize the dynamics
In the reachability semantics discussion in our paper, we consider starting off an open Petri net with some marking of the inputs, running it, and seeing if it can wind up with a marking that comes from a marking of the outputs.
Marking of the inputs means some dots in the places?
Note that a marking of the inputs is an element of N[X]; we can use i to carry this to a marking of the places of our Petri net.
Here X is the set of inputs in our open Petri net (see above for my notation).
got it
So we can start with a marking of the inputs, map it to a marking of the places of our Petri net, "run" the Petri net and get other markings, and then ask if any of these other markings come from a given marking of the outputs (via o).
The yes-or-no answer to this question gives a relation, the "reachability relation", between N[X] and N[Y].
So this is one thing you can do....
One difference between this and what you're talking about is that we don't have a deterministic dynamics: we have a "possibilistic" dynamics.
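To make the "possibilistic" point concrete, here is a brute-force sketch in my own toy encoding (nothing from the paper): a marking is a dict from place names to token counts, a transition is a pair of such dicts, and i, o are dicts describing functions X -> places and Y -> places. The search is depth-bounded so it always terminates.

```python
def fire(marking, transition):
    """Fire one transition if it is enabled, returning the new marking (or None)."""
    ins, outs = transition
    if any(marking.get(p, 0) < n for p, n in ins.items()):
        return None
    new = dict(marking)
    for p, n in ins.items():
        new[p] = new.get(p, 0) - n
    for p, n in outs.items():
        new[p] = new.get(p, 0) + n
    return new

def key(marking):
    """Canonical hashable form of a marking (drop places with zero tokens)."""
    return tuple(sorted((p, n) for p, n in marking.items() if n))

def reachable(marking, transitions, max_steps=10):
    """All markings reachable by firing at most max_steps transitions, breadth-first."""
    seen = {key(marking): marking}
    frontier = [marking]
    for _ in range(max_steps):
        nxt = []
        for m in frontier:
            for t in transitions:
                m2 = fire(m, t)
                if m2 is not None and key(m2) not in seen:
                    seen[key(m2)] = m2
                    nxt.append(m2)
        frontier = nxt
    return list(seen.values())

def pushforward(f, marking):
    """Carry a marking along a function given as a dict, e.g. i: X -> places."""
    out = {}
    for x, n in marking.items():
        out[f[x]] = out.get(f[x], 0) + n
    return out

def reaches(transitions, i, o, x_marking, y_marking, max_steps=10):
    """The reachability relation: does i_*(x_marking) reach o_*(y_marking)?"""
    target = key(pushforward(o, y_marking))
    start = pushforward(i, x_marking)
    return any(key(m) == target for m in reachable(start, transitions, max_steps))

# Tiny example: one transition a -> b; X = {x} maps to a, Y = {y} maps to b.
print(reaches([({'a': 1}, {'b': 1})], {'x': 'a'}, {'y': 'b'}, {'x': 1}, {'y': 1}))  # True
```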
I just want to make sure I understand this! This is an example of a marking of the inputs: image.png
from your talk
But that picture doesn't show any "inputs".
I'm imagining the inputs are the two places on the left
Okay, so let's say X has two elements and they're getting mapped into the 2 places on the left in a bijective way.
We can do that.
and let's say the set of outputs has one element getting mapped to the 1 place on the right
In my notation it's Y that's mapped by o, but fine.
I'm not trying to be a notation boss, honest.
Hmm... I'm running this in my head and I'm worried that it never ends up with only markings in the output
This one might not.
I'm happy to use your notation!
Actually I think this one can get to a marking with just one dot at right...
Really? I'm not seeing it
I keep ending up with a lone dot in the bottom left place
Umm, in my slides I "run" this Petri net for a while:
Let's see what it does... I think it's the same one: on the first slide I think.
Yeah, eventually I get to a marking with two dots at right.
I'm actually confused about how you get from slide 5 to slide 6
I expected the transition to eat the marking in the top left
Whoops, maybe I made a mistake! Let me look.
Ha! That was a mistake! Thanks for catching it.
I'll fix my slides. :blushing:
Hmm, I don't like the look of that "blushing" emoticon.
It looks too bug-eyed.
haha, it's actually one of my favorites
I guess writing code in TikZ made it hard for me to pay attention to whether my tokens were doing the right things.
I think if we start with three in the upper left then it will have the behavior that ends up with 2 on the right
Okay. Now I'm tempted to start fixing my TikZ, but I guess the wrong version is immortalized on YouTube.
Anyway, I can sort of half-listen to you while I start fixing this TikZ.
Were you getting at some point, about reachability?
I'm still musing if this input/output scheme matches up with what I want. But first I'm making sure I understand it
Okay.
I bet there are a number of schemes.
In particular the demand that all tokens wind up at places mapped to by outputs is rather restrictive. One could also consider allowing "leftover junk" at other places, at the end. But then you have to ask if starting with "leftover junk" at other places at the beginning is allowed.
I like time-symmetric schemes since I'm a physicist at heart.
I think that matches the analogy with discrete dynamical systems, where starting and ending with leftover junk corresponds to the internal state
Those dynamical systems you're talking about... are they called "Moore machines" or "Mealy machines"?
moore machine
Okay, good.
Thinking out loud: Can I turn a petri net into a moore machine by taking states to be markings of the petri net?
Eugenio Moggi was trying to create a decorated cospan category of "open Moore machines".
We found that some obvious-seeming way to do it gave rise to a setup different from some "usual" way to do things.
I guess composition worked differently.
oh interesting! I've been thinking about this as well and had convinced myself it's not possible without some extra structure
I think his Moore machines had a bit more stuff... maybe a "read-in" as well as a "read-out"????
Or maybe some "start" and "end" states???
neat!
is there a place to read more?
I did a quick internet search but didn't find anything
I don't know where. Moggi would know.
There's a big literature on automata, and he knows it... I don't. He was just showing me stuff on the blackboard, and we were trying to work stuff out, and we gave up because decorated cospans wasn't matching the "usual" approach, even though he admitted that decorated cospans was giving something reasonable.
If you really want to work on this stuff, I could ask him.
That would be great!
Umm, okay.
Thank you so much! But please only if it's comfortable and convenient for you :smile:
Just a small aside: the input/output terminology may be misleading. Gluing along places is an asynchronous operation, meaning that it does not have directionality per se. So suppose you compose nets N and M, where the output of N matches the input of M. You would then expect that tokens in the output of N get consumed by M, but N could just as well "take them back" (e.g. if the output place of N is also the input of one of its transitions). For this reason it's probably better to say left and right ports
Also, there is some work about composing automata using operads and lenses. Maybe this is closer to what you want: https://www.youtube.com/watch?v=dEDtaJhgQOY
Fabrizio Genovese said:
Just a small aside: the input/output terminology may be misleading. Gluing along places is an asynchronous operation, meaning that it does not have directionality per se. So suppose you compose nets N and M, where the output of N matches the input of M. You would then expect that tokens in the output of N get consumed by M, but N could just as well "take them back" (e.g. if the output place of N is also the input of one of its transitions). For this reason it's probably better to say left and right ports
Also, this sort of behavior is what makes it difficult to compositionally evaluate the reachability relation starting from the reachability relations of the components. That is, when you compose along places you can, for instance, create new loops in the composed net, which make the reachability relation richer
Another notion of composition which is somehow friendlier wrt reachability is composing along transitions, as it's done here: https://arxiv.org/abs/1307.0204
In this case, the point of view is very different: composition is observational and synchronous, so you can interpret left and right ports not as carrying tokens around, but as carrying observational data. In this respect, ports connect things at a "message level": When something on the port happens, it instructs transitions connected to that port to fire. The problem with this approach is that now nets are not sharing resources anymore (which is exactly why the impact on the reachability relation is somehow friendlier to manage)
I agree with Fabrizio on this. I know that I struggled for a while with open Petri nets until I realized that the input/output structure of the decorated/structured cospans need not be related to where the tokens go when the net fires. My initial impression was that the inputs needed to be the "reagent" places and the outputs the "product" places, but that isn't a requirement. That is why I've come to think of decorated cospans as a model of model fusion rather than a model of the dynamics. A morphism in FinSet cospans is a recipe for gluing models together on variables. The way you do that gluing introduces left and right ports, but those don't need to map onto the understanding of a Petri net as a chemical reaction with reagents and products as inputs and outputs.
Precisely. The way I think about cospans is as a tool that allows you to "punch connections into a structure" that allow you to compose things, but this happens in a way that is often unrelated to what the underlying structure does. This is, imho, the inevitable price you have to pay if you want a compositional framework at this level of generality
The result is that you have a compositional way of gluing things, but not necessarily a compositional way to compose their properties. This is captured by the idea that the functors decorating cospans are lax monoidal and not strict. That is, the best you can say often is "the behavior of the composition is bigger than the behavior of the parts"
For a lot of things, that will have to be good enough, because the behavior of a system is more complex than the behaviors of the parts. FinSet cospans capture a "systems are networks" perspective, and networks do exhibit emergent complexity. If everything were describable by strict functors, there would be no emergent complexity and the CT model of networks as decorated FinSet cospans would be a bad abstraction. I think we can do a lot with just having descriptions of complex systems that are compositional. Instead of representing a network by an edge list, we can represent it as a formula built by composing and tensoring smaller pieces;
even if you can't functorially compute the solution from the formula, you can say a lot about the structure of the problem. That is the focus of my upcoming research projects, if anyone wants to collaborate.
I agree, for a lot of things this perspective is unavoidable. Still, the way I see it is that "when you have a strict functor then you can claim to really have captured what is going on". To give an example, Quantum Mechanics is full of emergent behavior (think about entangled states) but all the monoidal functors in CQM are strict. This means, in my opinion, that we are able to completely classify the emergent behavior in this field. All in all there is a difference between "emergent behavior which is accounted for in your model" and "emergent behavior that we don't know exactly where it comes from".
Yeah, I think that difference is important. It is almost like "complex behavior" (the behavior of a complex, i.e. multiple things put together, is more complicated than the behavior of the parts) vs. "emergent behavior" (when you complex things together, it gets complicated in an unexplained way). We can probably think of that as unexplained variance in a statistical model. As you increase the complexity (degrees of freedom) of a statistical model, you explain more of the variance in the data. But there is always some unexplained variability that is the randomness inherent to the underlying phenomena.
The freedom the decorated/structured cospan approach gives you in terms of choosing the inputs/outputs is interesting/under appreciated/potentially confusing. When talking to certain audiences, I would pivot and talk about 'boundary states' as the union of the images of input/output cospan legs, e.g. https://www.mdpi.com/1099-4300/18/4/140/htm
Then if you want to glue along part of the boundary instead of the whole thing, you basically have to introduce the subset of the boundary shared with the other system and you more or less get back into this whole input/output or left/right interface setup. One way I thought about this stuff (inspired by the chemistry/markov semantics) is that boundary states don't obey the 'closed' system dynamics, they have some other stuff going on. If your open reaction network or Markov process is in a steady state, there can be some extra stuff flowing in/out beyond the closed system dynamics at the boundary states. In chemistry/physics this extra stuff can come from a 'reservoir,' which basically is a system that can give/take particles/probability as needed to maintain the steady state. Alternatively, you can look at having some other open chemical reaction network/markov process provide/receive the extra stuff, i.e. composing and then looking at steady states.
One interesting thing about the structured setups is that if you have some gizmo, i.e. directed graphs, Petri nets, maybe with some labels, etc., then all the left adjoints, say from Set into the category of gizmos, give you different possible 'interface' types.
This is all very cool! Recently, I've been wondering about systems that have both this asynchronous "gluing along boundary states" composition and a synchronous input/output composition in the style of these open dynamical systems https://youtu.be/8T-Km3taNko. Dynamical systems defined by indexed vector fields have both types of composition and I was wondering about other systems that do. My original question was how to think about Petri nets with this second type of composition!
Maybe the "message level" connections @Fabrizio Genovese pointed to will fit the bill
Sophie Libkind said:
This is all very cool! Recently, I've been wondering about systems that have both this asynchronous "gluing along boundary states" composition and a synchronous input/output composition in the style of these open dynamical systems https://youtu.be/8T-Km3taNko. Dynamical systems defined by indexed vector fields have both types of composition and I was wondering about other systems that do. My original question was how to think about Petri nets with this second type of composition!
This is one of the main things I've worked on in the last year or so. Precisely how to do both things in the same formalism
Both types of composition in the same formalism?
There are nice formalisms to do these things separately (unfortunately they are both called "open Petri nets", causing some confusion), but doing both at the same time is tricky
I've run into the same linguistic problem with dynamical systems
Yes. I want one categorical model where I can connect both synchronously and asynchronously
So the stage I'm at now is that I disregarded ports altogether and studied the morphisms in the category of nets with a semantics attached to them which give me this kind of gluing
what makes doing both at the same time tricky?
Now I should get back to ports and try to have them back in the picture
So basically gluing along places amounts to computing a pushout, while gluing along transitions amounts to doing a double pushout
And I still don't know precisely how could I have ports that allow me to do both at the same time :slight_smile:
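For anyone following along, here is a toy illustration of the place-gluing half: the pushout of finite sets P1 <- A -> P2, computed with union-find. The sets and functions below are made up, and this is only the set-level part of the construction, not Fabrizio's full setup.

```python
# A pushout of finite sets P1 <- A -> P2: take the disjoint union of P1 and P2
# and glue f(a) with g(a) for every a in A, via union-find.

def pushout(P1, P2, A, f, g):
    """f: A -> P1 and g: A -> P2 as dicts. Returns a dict sending each element
    of the disjoint union to the representative of its equivalence class."""
    elems = [(1, p) for p in P1] + [(2, p) for p in P2]
    parent = {e: e for e in elems}

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]  # path halving
            e = parent[e]
        return e

    def union(e1, e2):
        parent[find(e1)] = find(e2)

    for a in A:
        union((1, f[a]), (2, g[a]))
    return {e: find(e) for e in elems}

# Gluing the 'output' place q of one net to the 'input' place r of another:
print(pushout({'p', 'q'}, {'r', 's'}, {'a'}, {'a': 'q'}, {'a': 'r'}))
# q and r end up in the same class; p and s stay separate.
```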
a transition here is a transition of the petri net?
Yup
Gluing along transitions = synchronizing
Let me see if i understand this...
I'm confused!
What does it mean to synchronize along transitions
?
It means that you wanna say "when this transition fires, this other transition has to fire as well"
So you can imagine that transitions can "exchange messages between each other" that signal when a transition is firing
and you want to synchronize the firings of different transitions
For instance you could have a petri net with two transitions A -> B and C -> D. If you synchronize them then you obtain a petri net with one transition A,C -> B, D, in which the two "single transitions" are conflated into one
So the Petri net you get is the one that models the idea that "every time the first transition fires the second has to fire as well, and vice-versa"
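A tiny sketch of that example, in a made-up encoding: a transition is a pair of multisets over places, written as dicts; synchronizing just adds the input and output multisets, without touching any places.

```python
# Synchronizing two transitions without merging any places: the synchronized
# transition consumes the sum of both input multisets and produces the sum of
# both output multisets.

def synchronize(t1, t2):
    def add(m1, m2):
        out = dict(m1)
        for p, n in m2.items():
            out[p] = out.get(p, 0) + n
        return out
    (in1, out1), (in2, out2) = t1, t2
    return (add(in1, in2), add(out1, out2))

t1 = ({'A': 1}, {'B': 1})   # A -> B
t2 = ({'C': 1}, {'D': 1})   # C -> D
print(synchronize(t1, t2))  # ({'A': 1, 'C': 1}, {'B': 1, 'D': 1}), i.e. A,C -> B,D
```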
Do you want to do this without conflating the places?
Yes, transitions should be able to sync even without sharing resources
This is literally the modelling of something happening at "event level"
You are saying "every time this event happens, this other event has to happen as well"
I'm pretty new to petri nets and so far I've mostly used them as petri nets with rates to get a differential equation. Do these concepts translate?
I don't know about PT nets with rates, we always use "discrete nets" with tokens
for us firing is a discrete event, so it makes sense to think about them in this way
that makes sense
When you think about discrete nets with tokens, what category are you using?
The category I "really" work in is the category of free symmetric strict monoidal categories
since every Petri nets presents a free SMC
And gluings amount to quotienting object/morphism generators in these categories
You can find a reasonably clean explanation of how we use PT nets here: https://arxiv.org/abs/1906.07629
The relevant chapters start from chapter 4, I think
Thanks!
So the most important difference from other formalisms is that we don't use these nets for chemistry, but for orchestrating computation. So we need to be able to distinguish single tokens. This is called the "single token philosophy". The consequence is that we don't use the nets to present free commutative strict monoidal categories like the UCR group does, but we use them to present free symmetric strict monoidal categories
Which means, I believe, that you need pre-nets.
I should make Jade rewrite our open Petri net paper for pre-nets. She did a bunch of work on those in Generalized Petri nets. She could easily do open generalized Petri nets, which would include open pre-nets. But the fun part would be getting a functorial semantics for them.
John Baez said:
Which means, I believe, that you need pre-nets.
Those won't cut it. I wrote a paper about why: https://arxiv.org/abs/1904.12974
I've read the generalized Petri nets paper by Jade, it's very nice!
Sophie Libkind said:
Thinking out loud: Can I turn a petri net into a moore machine by taking states to be markings of the petri net?
There's probably some good ways to do that! Don't forget that thought!
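One naive way to make that thought concrete (just the obvious construction, certainly not Moggi's setup): take markings as states, take the transitions themselves as the inputs, let the update fire a transition when it's enabled and do nothing otherwise, and read out the marking itself.

```python
# A naive Petri-net-to-Moore-machine construction: states are markings, inputs
# are (attempted firings of) transitions, and the readout is the marking itself.

def petri_to_moore(transitions):
    def update(t_index, marking):
        ins, outs = transitions[t_index]
        if any(marking.get(p, 0) < n for p, n in ins.items()):
            return marking                       # transition not enabled: stay put
        new = dict(marking)
        for p, n in ins.items():
            new[p] = new.get(p, 0) - n
        for p, n in outs.items():
            new[p] = new.get(p, 0) + n
        return new
    readout = lambda marking: marking
    return update, readout

# Example: one transition a -> b; feeding input 0 twice fires it only once.
update, readout = petri_to_moore([({'a': 1}, {'b': 1})])
m = update(0, update(0, {'a': 1}))
print(readout(m))  # {'a': 0, 'b': 1}
```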
Fabrizio Genovese said:
Fabrizio Genovese said:
Just a small aside: the input/output terminology may be misleading. Gluing along places is an asynchronous operation, meaning that it does not have directionality per se. So suppose you compose nets N and M, where the output of N matches the input of M. You would then expect that tokens in the output of N get consumed by M, but N could just as well "take them back" (e.g. if the output place of N is also the input of one of its transitions). For this reason it's probably better to say left and right ports
Also, this sort of behavior is what makes it difficult to compositionally evaluate the reachability relation starting from the reachability relations of the components. That is, when you compose along places you can, for instance, create new loops in the composed net, which make the reachability relation richer
oh, this is really interesting
(tangentially: personally i would almost define "compositionality" as the opposite of "emergent-ness" ("emergency"?), so something lax sounds to me like it's non-compositional)
the thing about creating loops from gluing together reminds me of something, though...
oh, i think i know what it is—thinking about string diagrams in compact closed categories & how they appear to have "loops" and "backwards arrows" but formally everything can be sliced up just like normal in an ordinary monoidal category string diagram
sarahzrf said:
(tangentially: personally i would almost define "compositionality" as the opposite of "emergent-ness" ("emergency"?), so something lax sounds to me like it's non-compositional)
I totally agree with you on this, but this point of view is controversial. I imagine for many people a lax functor is still compositional.
In the case of Petri nets, the drawback is evident in applications: if the functor is not strict, you can't use the component nets to simplify the model-checking of the composite net.
BTW, we know reachability is a superexponential problem, so there will always be corner cases. Still, we can develop compositional model checking techniques to reduce the corner cases to a minimum.
compact closed categories and Petri nets are related, but in a bit more strange way: https://arxiv.org/abs/1805.05988
The results in this paper were then further generalized by @Jade Master in https://arxiv.org/abs/1904.09091
oh hey i have that second paper open in a tab right now :)
Speaking of laxness and compositionality: Jade and I came up with a reachability semantics for open Petri nets that's a lax monoidal double functor in our paper Open Petri nets, and the laxness comes from the ability of tokens to make "zigzags" like this:
But in my talk this week I described a different reachability semantics that's a strong monoidal double functor, not lax.
So, it's a fun and slightly subtle business.
You mean it's lax both as a functor and as a monoidal functor, right? A "lax monoidal lax double functor"? Unless I'm confused, this example was one of my motivations for #practice: applied ct > measuring non-compositionality. Missed the seminar, now I have some serious motivation to catch up to find out what the trick is
I'm talking about laxness of functoriality, not monoidality.
The lax functoriality of Jade's & my original reachability semantics is explained in this blog article starting at "It's easy to see why":
https://johncarlosbaez.wordpress.com/2018/08/18/open-petri-nets-part-2/
I didn't discuss this at all in my talk; instead I presented another reachability semantics that's strongly, not laxly functorial. But this other semantics is much more "bulky" - it's not doing any "black-boxing", it's keeping track of reachability relations involving not just input and output places, but all places.
@Fabrizio Genovese Do you know anything about the complexity of the reachability problem for integer nets?
Nope, but I'd guess it's lower than the one for standard nets
Since you can produce/cancel stuff everywhere, you have more strategies to travel from one place to another
so it's at most as complex as the standard case
An idea I had quite a while ago was to take Jade's Lawvere theory setup for generalized nets and bump it up to 2-theories so you can plug in the SMC and cartesian cat pseudomonads. The hope would be to represent individual token philosophy.
god dammit is there anyone on this server who hasn't apparently had that idea
It's the natural next step.
alright this is true
I don't think it would work
I already tried to tame the combinatorial nightmare of symmetries not composing as we would like them to using bicategories
And well, it doesn't work. It was a more naive setting than the one you are suggesting tho
@Joe Moeller I like that idea.
Also @sarahzrf. Maybe the generating data would be like a graph in the Kleisli category of the 2-monad
In the case of SMC's you would have a category of transitions, a category of places, and two functors from the transitions to the free smc on the places.
Hm. This isn't quite what you said but maybe it's fun to think about too.
Right. The problem is that if you "2" the Lawvere theory you also gotta "2" the category of models that it lands in.
So that requires you to have more sophisticated generating data like what I described.
I mean, you can still take models in Set, because it should be generating a smc internal to Set, so just an smc.
that sounds slightly off to me
if SMC is the lawvere 2-theory, then we want nice pseudofunctors SMC → Cat to be SMCs, right?
so a model in Set should be a nice pseudofunctor SMC → Set
but that's not an internal SMC, i think? it's just a commutative monoid again, because the higher structure necessarily trivializes into equalities
Oh duh. Yes, you take models in Cat to get actual general SMCs.
A note on the application of Petri nets to compartmental models in epidemiology, such as the commonly cited SIR model:
...these are simplifying models, which are predicated on the assumption that the epidemic process is Markovian. While this is a plausible assumption for the reaction Susceptible+Infected --> Infected+Infected, it is not plausible for the reaction Infected --> Recovered. The distribution of the infection period is hardly exponential. The simplest model for the infection period would be that it is a constant K. That would lead to a rate equation in which the derivative of the Recovered population equals the rate of new infections from K days back.
While network models capture contact more accurately, the assumption that the underlying stochastic transmission and recovery processes are memoryless (Keeling and Eames 2005; Volz 2008; House and Keeling 2011) remains restrictive. Of course, memoryless processes are mathematically more tractable and relatively simple to analyse when compared to models where the inter-event times are chosen from distributions other than the exponential. However, when compared to data, these assumptions are often violated. For example, diseases can exhibit unique and non-Markovian behaviour in terms of the strength and duration of infection. In this respect, the distribution of the infectious period is usually better approximated by some peaked distribution with a well defined mean, see e.g. Bailey (1954), Gough (1977), Wearing et al. (2005) and references therein.
The Markov assumption involved in SIR could be a workable approximation, as the infectious period is small in comparison to the duration of the epidemic. But it does suggest that the application of SIR etc. deserves to be reviewed in each empirical context, as there is a built-in error term here.
Of course there is another simplifying assumption in these models, which is that the population is well-mixed. But that is commonly acknowledged. This one deserves to be included in the footnotes as well. The wikipedia article on compartmental models in epidemiology, for example, makes no mention of it. I will add something there...
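A toy numerical comparison of the two assumptions (plain Euler stepping, made-up parameters; the "fixed period" branch implements the constant infectious period K described above, with recoveries at time t equal to the new infections from K days earlier):

```python
# Markovian SIR (exponential infectious period with mean K) vs. a fixed
# infectious period K. Populations are fractions of a well-mixed population.

beta, K, dt, T = 0.3, 10.0, 0.01, 160.0
steps, lag = int(T / dt), int(K / dt)

def run_sir(fixed_period):
    S, I, R = 0.99, 0.01, 0.0
    incidence = [0.0] * steps          # history of new infections per unit time
    history = []
    for n in range(steps):
        inc = beta * S * I             # new infections at this step
        incidence[n] = inc
        if fixed_period:
            rec = incidence[n - lag] if n >= lag else 0.0  # recover K days later
        else:
            rec = I / K                # exponential recovery at rate 1/K
        S += dt * (-inc)
        I += dt * (inc - rec)
        R += dt * rec
        history.append((S, I, R))
    return history

markov, delayed = run_sir(False), run_sir(True)
print("final recovered fraction, Markovian:  ", round(markov[-1][2], 3))
print("final recovered fraction, fixed delay:", round(delayed[-1][2], 3))
```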
Interesting..... I've been wondering whether Petri nets are actually used in real-life epidemiology, or if that's just a lie that keeps circulating. (From what little I've seen of epidemiology it seems to be mostly statistics)
It looks like both compartmental models and agent-based models are being used these days.
Agent-based models are a stochastic refinement, which in principle will track the stochastic behaviors of each individual. As such, they are data-hungry.
Yes, compartmental models are the main tool for understanding the dynamics at the macro scale. ABMs are useful for understanding the effects of intervention strategies, like modeling the movement of people through a hospital floor and deciding how to place PPE or hand hygiene resources to maximize effectiveness.
That makes sense.
Thanks!
Abstract. Continuous time deterministic epidemic models are traditionally formulated as systems of ordinary differential equations for the numbers of individuals in various disease states, with the sojourn time in a state being exponentially distributed. Time delays are introduced to model constant sojourn times in a state, for example, the infective or immune state. Models then become delay-differential and/or integral equations. For a review of some epidemic models with delay see van den Driessche [228]. More generally, an arbitrarily distributed sojourn time in a state, for example, the infective or immune state, is used by some authors (see [69] and the references therein).
I see there's a whole bunch of literature on non-Markovian stochastic Petri nets.
Here are a bunch of different implementations of SIR models https://github.com/epirecipes/sir-julia
When things get fancy enough it may cease to be useful to call them "Petri nets", but they can still be useful.
The maintainer of that repo was at a university in the UK until recently; now he works at Microsoft studying infectious disease prediction.
One of our goals in using open Petri nets is to expand the complexity of the models while ensuring correctness and the ability to estimate parameters. Because as model complexity increases, the data requirements for calibration also increase and we want to find ways to mitigate that.
@James Fairbanks Is there a good place to read more about the project(s) you are referring to?
I'm glad Petri nets are being useful to you, James!
The AlgebraicJulia repo is where we are currently coordinating activity
Thanks James!
I'm looking for a solid, modern textbook in mathematical epidemiology. I see that Springer has a bunch of books on the subject. If anyone has a favorite to recommend... Thanks
I don't have one. Sounds like a nice question. (You could ask Carl Bergstrom on Twitter.)
Regarding delay differential equations: you may be able to sequence several Markov flows in series to approximate the delay. Each added stage increases the shape parameter of the gamma distribution, which approximates a delayed delta function for N large enough.
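A quick sanity check of that idea (the usual "linear chain trick": N exponential stages, each with mean K/N, give an Erlang/gamma-distributed total sojourn time with mean K and variance K^2/N, which concentrates at K as N grows):

```python
import random

# Sum of N exponential stages with rate N/K is Erlang(N, N/K): mean K, variance K^2/N.
def erlang_sample(N, K):
    return sum(random.expovariate(N / K) for _ in range(N))

K, samples = 10.0, 20000
for N in (1, 4, 16, 64):
    draws = [erlang_sample(N, K) for _ in range(samples)]
    mean = sum(draws) / samples
    var = sum((d - mean) ** 2 for d in draws) / samples
    print(f"N={N:3d}  mean~{mean:5.2f}  variance~{var:6.2f}  (theoretical variance {K**2/N:.2f})")
```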
Speaking from 10,000 feet away (having never done such simulations):
I would think that, given that you are running a simulation, using delayed differential equations wouldn't be worse than using non-delayed equations, given that you have access to the whole history of the simulation.
There's also the theoretical question of putting a bound on how much error would be caused by simplifying to the Markovian equations.
Searching around, I'm having trouble finding any clear answers to how long the covid-19 "infectious period" actually is. Ultimately we'd want to know its distribution, and use that as a windowing function when looking back in time during the simulation. Even that would be simplified, as it ignores the fact that infectiousness is a matter of degree.
One could use simulations to get a handle on the error bound. I.e., just compare the global solutions produced by refined models of the infectious period with the solutions produced by the simple ODEs.
If there were a sensitivity here then we could regress against empirical data to estimate the infectivity parameters.
My WAG is that there isn't a significant sensitivity here, that these variations would just fall into the pot of noise that's already there. Partly this is based on the fact that you don't hear a lot of discussion about this aspect of the simulation. But clearly these are not truly satisfying answers.
Given the latest information that the virus is airborne, and therefore air circulation and dispersion are important factors, a rethinking in terms of fluid dynamics and filtration may be forthcoming. Clean-room thinking
Let me know if I'm missing something here, but it sounds like that's still consistent with a compartmental, reaction-network based framework. It would be a change to our understanding of the physical mechanisms by which the reactions are "implemented."
The idea of compartments clearly remains. And the aggregate law which says that the derivative of Infected is proportional to the product of Infected and Susceptible still looks valid, given a set of general conditions for quality of air filtration, etc.
So you have a "stochastic Petri net," with a coefficient for each process. These coefficients show up in the ODEs for the rate equations. The coefficients themselves are complex functions of natural parameters like virus transmissibility, incubation period, as well as social parameters like the degree of distancing; now we add into that mix parameters like average quality of air filtration.
Not sure what the goal is. If it can be shown that, by increasing social distance by another meter, viral transmission can be reduced by orders of magnitude via 3D dispersion, that's what will be important to model. I would like to concentrate on the applied aspects and potentially determine which factors make a difference to stopping the spread of the disease.
That would be a great goal for you to pursue, especially given your expertise in the physical sciences.
But we’re talking about different branches of epidemiology. One is the investigation of local physical mechanisms of transmission. The other is “macro-epidemiology,” which studies the movement of aggregate quantities, abstracted from physical detail.
When a baseball player hits a home run, the event can be understood on multiple levels: the sportscaster’s description; the physicist speaking in terms of center of mass of the ball, collisions, parabolic trajectories; the more detailed view of the materials scientist.
Regarding goals. As science, the first goal is to understand the reality. This includes modeling, and reconciliation with empirical data.
Compartmental models are an integral part of the textbook introductions to mathematical epidemiology, e.g. https://www.springer.com/us/book/9781489976116
There’s more, though, as there is a clear need for macro-epidemiological models to inform the trade-offs faced by public policy planners. It’s all over the map that SEIR models are a key part of the practitioner’s toolbox for COVID-19 modeling.
More generally speaking, applied reaction networks are of particular interest to the ACT community, as they give a clear example of a mathematical concept with categorical properties and substantive empirical applications.
Culturally, the aim here is to create an interdisciplinary community drawing from a diverse range of mathematical fields and application domains, with the hope of promoting new collaborations, connections and discoveries. It’s an exploratory quest.
Diffusion is compartmental modeling on an infinitesimal scale, whereby each tiny compartment is connected to an adjacent compartment by a random walk. This is a basic interpretation based on the understanding that an airborne virus will travel by convection and turbulent diffusion. I mention this because it is an application of compartmental models, but extended to a continuum limit.
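A minimal sketch of that continuum-limit picture (made-up parameters, absorbing ends): a chain of compartments, each exchanging material with its neighbours, is exactly an explicit finite-difference scheme for the heat equation u_t = D u_xx.

```python
# Diffusion as a chain of compartments: each interior compartment exchanges
# material with its two neighbours; the update is the explicit scheme for
# u_t = D u_xx with the ends held at zero.

def diffuse(u, D, dx, dt, steps):
    u = list(u)
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = u[i] + dt * D * (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
        u = new
    return u

# All the material starts in the middle compartment and spreads outward.
u0 = [0.0] * 21
u0[10] = 1.0
print([round(x, 3) for x in diffuse(u0, D=1.0, dx=1.0, dt=0.1, steps=50)])
```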
Nice observation
Notes on aerosol transmission https://twitter.com/Bob_Wachter/status/1283970874631000065 "Aerosol scientists like Milton think about mechanisms/particles differently than MDs."
1/ Covid (@UCSF) Chronicles, Day 121 Grand rounds today, here: https://tinyurl.com/y5ut4435. As I said in my intro (@ 1:00), we covered perhaps the most important topics in the world today: how SARS-Co-V-2 spreads & what can be done to prevent it.
- Bob Wachter (@Bob_Wachter)