It seems one popular way to talk about "operational semantics" is to look at the final coalgebras of an endofunctor. I guess a simple example from Turi's thesis Functorial Operational Semantics goes like this. You have a set $A$ of "actions", and you look at the endofunctor given by
$$X \mapsto \mathcal{P}(A \times X)$$
where $\mathcal{P}$ means "power set".
Turi calls $X$ a set of "programs" (though I might call an element of $X$ not just a program but a program in some state of being executed). Then being a coalgebra for this endofunctor just means we have a map
$$X \to \mathcal{P}(A \times X).$$
This sends each program $x$ to some set of pairs $(a, x')$. What does it mean that a pair $(a, x')$ is in this set? This is the case iff when we run the program $x$ for one "step", it can take the action $a$ and turn into the program $x'$.
The use of the power set here is to handle "nondeterminism": when we run our program for a step, there's a whole set of allowed action-program pairs $(a, x')$. If we replaced $\mathcal{P}$ with the identity functor on $\mathsf{Set}$, we'd instead be talking about a deterministic program.
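Here's a tiny Haskell sketch of the two shapes of coalgebra just described (names made up, and lists standing in for the power set):

```haskell
-- Nondeterministic: each program/state maps to a *set* of (action, next state)
-- pairs; a list is a cheap stand-in for the power set here.
type Nondet a x = x -> [(a, x)]

-- Deterministic: replace the power set by the identity functor, so each state
-- yields exactly one (action, next state) pair.
type Det a x = x -> (a, x)

-- Toy example: a counter that can either tick up or reset.
data Action = Tick | Reset deriving Show

counter :: Nondet Action Int
counter n = [(Tick, n + 1), (Reset, 0)]

main :: IO ()
main = print (counter 3)  -- [(Tick,4),(Reset,0)]
```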
Anyway, my question is: have people looked at this same setup where we replace sets by measurable spaces and $\mathcal{P}$ by the [[Giry monad]] $G$, where $G(X)$ is the space of probability measures on $X$?
This would let us handle "probabilistic" programs, which are different from nondeterministic ones: when we run a probabilistic program for one step, we get a probability measure describing the probability of what happens.
This seems like an obvious idea so I imagine someone has tried it.
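To illustrate what I mean, here's a rough Haskell sketch, with finitely supported distributions standing in for the Giry monad (so no measure theory, and all names are made up):

```haskell
-- Finitely supported "distributions": a list of outcomes with weights
-- (which should sum to 1). This is only a stand-in for the Giry monad.
newtype Dist x = Dist [(x, Double)] deriving Show

-- A probabilistic program: one step yields a distribution over
-- (action, next state) pairs.
type Prob a x = x -> Dist (a, x)

data Action = Emit | Halt deriving Show

coin :: Prob Action Int
coin n = Dist [((Emit, n + 1), 0.5), ((Halt, n), 0.5)]

main :: IO ()
main = print (coin 0)
```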
Two things:
The polynomial approach to this would make the update map a polynomial morphism, or more generally a polynomial morphism into an arbitrary polynomial $p$. If we upgrade to a category of dependent stochastic lenses (polynomial morphisms are deterministic dependent lenses), as @Toby Smithe writes about in his thesis: https://arxiv.org/pdf/2212.12538.pdf, then we would have stochastic programs.
But notice that stochastic lenses are deterministic on the base map and only stochastic on the fiber map. So each state would deterministically output an action, and then update stochastically based on the result of that action.
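For a concrete picture of that shape (my own names, just a sketch of the idea, not Toby's actual definitions):

```haskell
-- Finitely supported distributions, standing in for a probability monad.
newtype Dist s = Dist [(s, Double)] deriving Show

-- The lens-like shape described above: a deterministic "base map" reading an
-- action off the state, and a stochastic "fiber map" updating the state from
-- the result of that action.
data StochLens s action result = StochLens
  { readout :: s -> action                -- deterministic
  , update  :: s -> result -> Dist s      -- stochastic
  }

-- Toy example: a thermostat that asks for heating when cold, and whose state
-- then updates noisily given the observed temperature change.
thermostat :: StochLens Int Bool Int
thermostat = StochLens
  { readout = \temp -> temp < 20
  , update  = \temp dT -> Dist [(temp + dT, 0.9), (temp, 0.1)]
  }

main :: IO ()
main = print (update thermostat 18 2)
```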
I'm curious what led you to be interested in probabilistic operational semantics?
Hi John, Owen (& thanks for tagging me, Owen, else I wouldn't have seen this).
I am not an expert in coalgebra but I believe the answer to your question, John, is "yes": coalgebra is a rich field and indeed probabilistic systems and their logic have been studied. For some references, you can look at Fig 4.2 of Bart Jacobs' Introduction to Coalgebra book. For some actual examples of modelling such systems, Alexandra Silva has quite a few papers; or here is a paper formalizing some aspects of Markov decision processes (MDPs) using coalgebraic semantics in Isabelle/HOL, or here is a paper using coalgebra to study policy improvement in MDPs. Helle Hvid Hansen (an author on the latter) works in coalgebra, and there is an abstract of the latter presented at Women in Logic 2019 available here.
I'm not entirely sure what the question may have been precisely, so this answer is a bit scattered; but hopefully there are some useful pointers. I should add that not all the probabilistic systems studied using coalgebra make use of the Giry monad: sometimes people just work with finitary distributions or other monads. And I should also say that, following the approach in my thesis, one need not be restricted to discrete-time systems: it's possible to consider Markov processes coalgebraically, too. But I'm not aware of much work on that.
Let's ping @Sergey Goncharov, @Ralph Sarkis, and @Valeria Vignudelli just in case they're not reading already.
To add to the already given answers, from what I understand, people consider that the Giry monad isn't expressive enough. I think it's because it only models non-determinism coming from the program itself, as opposed to the environment (e.g., a scheduler). So they study coalgebras for more complicated endofunctors. For a typical work in this area, Falk Bartels's thesis has a syntactic format for probabilistic operational semantics based on Segala systems, which are coalgebras for a functor like $X \mapsto \mathcal{P}_f(A \times \mathcal{D}X)$ (I think), where $\mathcal{P}_f$ denotes finite powerset and $\mathcal{D}$ finitely supported distributions.
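Here's a rough Haskell sketch of that combined shape (my own encoding of a "simple Segala"-style system; whether this is exactly the functor in Bartels's thesis I'm not sure): each state nondeterministically offers (action, distribution) pairs, and the distribution is over next states.

```haskell
-- Finitely supported distributions, a stand-in for the monad D above.
newtype Dist x = Dist [(x, Double)] deriving Show

-- Nondeterministic choice of an action together with a probabilistic
-- choice of next state: lists stand in for finite powerset.
type Segala a x = x -> [(a, Dist x)]

data Action = Send | Recv deriving Show

example :: Segala Action Int
example n =
  [ (Send, Dist [(n + 1, 0.7), (n, 0.3)])
  , (Recv, Dist [(n - 1, 1.0)])
  ]

main :: IO ()
main = print (example 5)
```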
Here are two other papers on this. One dealing with probabilities only (the finite distribution monad on $\mathsf{Set}$ and the Giry monad on $\mathsf{Meas}$) and one dealing with both probabilistic and nondeterministic choices (but not the Giry monad).
Thanks very much, everyone!
@Toby Smithe
I'm not entirely sure what the question may have been precisely....
It was simply:
Anyway, my question is: have people looked at this same setup [studying operational semantics using coalgebras of the functor $\mathcal{P}(A \times -)$] where we replace sets by measurable spaces and $\mathcal{P}$ by the [[Giry monad]] $G$, where $G(X)$ is the space of probability measures on $X$?
I'm starting to think about this and I didn't want to be ignorant of previous related work! So thanks: your pointers, and everyone else's, are very helpful.
Owen Lynch said:
- Kris Brown and David Spivak have worked on these kinds of state machines in a general setting, abstracting over the monad, but the Giry monad is one of the intended applications: https://arxiv.org/abs/2304.14950.
That's really interesting, especially since 1) I'm getting into these generalized state machines in my project on agent-based models, and 2) this project involves a team of people who will meet at the ICMS for 6 weeks next summer, and this team includes.... Kris Brown!
I've posted more about this here:
Very basic stuff....
Owen Lynch said:
I'm curious what led you to be interested in probabilistic operational semantics?
I guess I've sort of answered that by now, but it's because I'm trying to find a very general framework for agent-based models, for the ICMS project we're carrying out in Edinburgh next year.
In the context of abstract GSOS, a paper I know that systematically approaches the issue of generalizing the behavior functor beyond the standard one is this one: https://arxiv.org/abs/1410.0893. The systems involved combine nondeterministic transitions and "quantitative" transitions, where the "quantity" can in particular be a "probability". I believe this only covers discrete variants of the Giry monad.
One cool paper where coalgebras of this type appear is Jacobs and Staton's De Finetti's construction as a categorical limit. (Actually I guess they don't have quite this type - they are coalgebras in a Markov category, which means the morphisms are stochastic maps rather than deterministic ones as in your case. Sorry for the misinformation.)
One thing to note is that with a coalgebra of the form $X \to G(A \times X)$ the state itself updates stochastically. So the state can change without having an immediately observable effect. This may or may not be desirable.
If it's not desirable then I work out an alternative in my ACT2023 paper called "unifilar machines." The idea is that a unifilar machine first chooses an output randomly, and then updates its state deterministically, as a function of its previous state and its randomly chosen output. This has the nice consequence that the states of the terminal unifilar machine can be identified with infinite sequences of random variables.
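Here's a quick Haskell sketch of that shape (my own names and a finite-distribution stand-in, not the definitions from the paper): the output is chosen randomly, and the state then updates deterministically from the old state and the chosen output.

```haskell
newtype Dist o = Dist [(o, Double)] deriving Show   -- finitely supported distributions

data Unifilar s o = Unifilar
  { emit   :: s -> Dist o     -- random choice of output
  , update :: s -> o -> s     -- deterministic state update
  }

-- Toy example: a biased coin whose state counts the heads seen so far.
coinMachine :: Unifilar Int Bool
coinMachine = Unifilar
  { emit   = \_ -> Dist [(True, 0.6), (False, 0.4)]
  , update = \n heads -> if heads then n + 1 else n
  }

main :: IO ()
main = print (emit coinMachine 0)
```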
John Baez said:
Owen Lynch said:
I'm curious what led you to be interested in probabilistic operational semantics?
I guess I've sort of answered that by now, but it's because I'm trying to find a very general framework for agent-based models, for the ICMS project we're carrying out in Edinburgh next year.
I have some thoughts about this sort of agent based modeling, and to what extent it can be considered agent based modeling vs. "agent-flavored probabilistic computing". Specifically, when your model is such a general program, there are very few things that you can productively say about it!
I wonder if another approach based on clock processes/semi-Markov processes might be more "domain-specific", and also include things like the continuous-time stochastic semantics of a Petri net à la Quantum Techniques for Stochastic Mechanics.
Owen Lynch said:
John Baez said:
Owen Lynch said:
I'm curious what led you to be interested in probabilistic operational semantics?
I guess I've sort of answered that by now, but it's because I'm trying to find a very general framework for agent-based models, for the ICMS project we're carrying out in Edinburgh next year.
I have some thoughts about this sort of agent based modeling, and to what extent it can be considered agent based modeling vs. "agent-flavored probabilistic computing". Specifically, when your model is such a general program, there are very few things that you can productively say about it!
I don't claim to have said anything specific to agent-based modeling in this thread. If you look at my blog article, you'll see I list a bunch of decisions that need to be made to specify an agent-based model. Then I focus on one specific decision: a choice of how we describe 'dynamics'. And I only consider the case of a single agent, to trivialize many of the other decisions. So this blog article should be the first of dozens.
Ah, I should have read the accompanying blog post!
John Baez said:
I've posted more about this here:
Very basic stuff....
really excited to see how these ideas develop!
and in defense of coalgebras satisfying equations: it seems quite a bit like what happens in the semantics of modal logics. to give semantics for a particular modal logic, we need to restrict from all Kripke frames/powerset coalgebras to just those satisfying certain conditions (like transitivity and reflexivity of the relation for $\mathsf{S4}$, or symmetry for $\mathsf{B}$, and so on and on).
what makes the above useful for logic is that those conditions (sometimes called "coequations") can also be phrased as the validity of some axiom schema, e.g. $\Box p \to \Box\Box p$ for transitivity.
and maybe it's also worth comparing to the dual picture? often we don't want to consider all algebras of some endofunctor, just those satisfying certain conditions. the endofunctor is more like a generalized signature/language, and we need to use some 'logical' data (the equational theory) to carve out the intended models from the space of all structures of that signature. we could use a different signature, perhaps, so all algebras are the intended ones (as in an 'unbiased' presentation of algebraic theories), but sometimes that's hard!
Thanks, @Evan Washington. I'm still very confused about the role of imposing equations on coalgebras of endofunctors in operational semantics. I'm happy that two people (Tobias Fritz and Omar Antolin) on my blog suggested a way of avoiding equations in the particular example where I seemed to need them. But the trick used there imposes certain costs, so I'll have to keep an eye on this issue. Your suggestion to compare the dual picture might be helpful eventually. I'm not feeling quite smart enough yet to hold the "galaxy-brained" picture unifying modal logic and operational semantics in my head.
Sorry I must have missed this: what kind of equations are you thinking of, @John Baez?
Evan Washington raised the topic of coalgebras (and algebras) obeying extra equations because he read my blog post based on the discussion we've been having here:
In that blog post, I tried to put a vector field on a manifold by making that manifold into a coalgebra of a certain endofunctor on the category of manifolds. I failed, because only coalgebras obeying a certain equation were vector fields. Then @Tobias Fritz and Omar Antolín-Camarena succeeded - by switching to a different coalgebra on a different category.
Oh, ok, sorry. One thing that comes to mind is that the particular endofunctor you're considering is copointed, i.e., it comes with a natural transformation to the identity functor. So, just like algebra structures for pointed endofunctors are commonly required to admit the point as a section, it doesn't seem too odd to require coalgebras for copointed endofunctors to be sections of the copoint. I know that the endofunctors on which we take coalgebras in operational semantics are often copointed, but not whether their coalgebras are commonly required to be sections of the copoint. Maybe the experts do?
In any case, I had never thought of the coalgebraic take on differential equations as continuous-time operational semantics, it's really nice!
In the blog post, there is an example working with the category of smooth manifolds. We consider the endofunctor $T$ that sends any smooth manifold to its tangent bundle. A coalgebra for $T$ is then a manifold $M$ together with a morphism $v \colon M \to TM$ that sends each state $x \in M$ to some tangent vector $v(x)$. The idea, I think, is to describe how a given state $x$ changes over a very short period of time by specifying a tangent vector that gives the speed and direction of change at $x$.
However, as discussed in the blog article, for this interpretation to make sense, we need $v(x)$ to be not just any tangent vector in $TM$ but specifically a tangent vector in the tangent space $T_x M$ at $x$.
I don't know enough about smooth manifolds to be able to properly think about this specific example in detail. But I was reflecting if perhaps it's telling us something simple (and maybe important). Namely this: there can be different kinds of states for an agent, and different kinds of data that we can associate with a given state - and the kind of data that it makes sense to associate with a state can depend on the kind of the state.
In the manifold example, each point on the manifold corresponds to a different "kind" of state: there is data associated with a given point (namely the tangent vectors at that point) that can't be sensibly associated with other points. But we could probably imagine a more discrete example, too.
Assume we have the endofunctor $X \mapsto X \times A$, the product of the identity functor with the functor which is constant at the set $A$. Then a coalgebra for this endofunctor is a set $X$ together with a function $c \colon X \to X \times A$. So a state $x$ is sent to a pair $(x', a)$, where we think of $x'$ as the next state and $a$ as an observation made in state $x$.
Imagine now that there are two kinds of states, and two kinds of observations. So maybe we have a function $X \to 2$ (where $2$ is a set with two elements) assigning to each state its kind, and a function $A \to 2$ that tells us the kind of an observation. The idea is that we can only ever observe $a$ from state $x$ if the kind of $a$ matches the kind of $x$. But of course an arbitrary function $c \colon X \to X \times A$ won't necessarily obey this restriction.
I wonder if there is a way you can set up this idea of "kinds" of states and observations (by using an appropriate category and endofunctor) so that every coalgebra automatically only gives observations of the kind that correspond to the kind of the current state.
There is a comment on the blog article made by Omar Antolín-Camarena that suggests using a slice category. I think this relates closely to the idea I describe above.
I think that the move from "arbitrary coalgebras" to "kind-respecting coalgebras" can be described as moving to a slice category. I start by working with $\mathsf{Set}$ to illustrate the idea. In $\mathsf{Set}$, pick an object $K$ that will be used to classify the different "kinds" of the elements of sets. Then, form the slice category $\mathsf{Set}/K$, where an object is a set $X$ together with a function $X \to K$. The objects of $\mathsf{Set}/K$ can be thought of as sets where each element has a specific kind in $K$. A morphism in $\mathsf{Set}/K$ is a function that "respects kinds". So, if we think of a morphism as describing an observation, an observation of a given kind can only be made from a state having the same kind.
An endofunctor of $\mathsf{Set}/K$ sends an object of $\mathsf{Set}/K$ to some other object of $\mathsf{Set}/K$. Any coalgebra of this endofunctor will involve a function that respects kinds.
I hope that the example with the manifold can be worked out using a slice category as sketched above. In this case, the "kind classifying" morphism for the manifold $M$ is just the identity $1_M \colon M \to M$, taking $K = M$. The idea is that each point is its own unique kind of thing. And a natural "kind classifying" morphism for the tangent bundle manifold $TM$ is the projection $\pi \colon TM \to M$ which sends a tangent vector at $x$ to $x$. So the "kind" of a tangent vector in the tangent space at $x$ is just $x$.
I suspect that the equation $\pi \circ v = 1_M$ (discussed in the blog) ends up being exactly the condition we need for $v \colon M \to TM$ to specify a morphism from $(M, 1_M)$ to $(TM, \pi)$ in the slice category $\mathsf{Man}/M$, where $\mathsf{Man}$ is the category of smooth manifolds. I am hoping that we can think of $(TM, \pi)$ as the output of some endofunctor of $\mathsf{Man}/M$ when we provide the input $(M, 1_M)$. If so, then I think $v$ really would specify a coalgebra in $\mathsf{Man}/M$.
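Here's a small Haskell sketch of the discrete "kinds" idea from a few messages up (my own encoding, not from the blog): the kind of the current state determines which observations are even expressible, so the kind-respecting condition is enforced by typing rather than by an extra equation. For simplicity I also keep the next state over the same kind, which is stricter than the setup above, but it shows the mechanism.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

data Kind = Red | Blue

-- States indexed by their kind.
data State (k :: Kind) where
  RedState  :: Int    -> State 'Red
  BlueState :: String -> State 'Blue

-- Observations indexed by the same kind.
data Obs (k :: Kind) where
  RedObs  :: Bool   -> Obs 'Red
  BlueObs :: Double -> Obs 'Blue

-- A kind-respecting coalgebra: the observation automatically lives over the
-- same kind as the input state, so no side condition is needed.
step :: State k -> (State k, Obs k)
step (RedState n)  = (RedState (n + 1), RedObs (even n))
step (BlueState s) = (BlueState s,      BlueObs (fromIntegral (length s)))

main :: IO ()
main = case step (RedState 3) of
  (RedState n, RedObs b) -> print (n, b)
```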
Tom Hirschowitz said:
One thing that comes to mind is that the particular endofunctor you're considering is copointed, i.e., it comes with a natural transformation to the identity functor. So, just like algebra structures for pointed endofunctors are commonly required to admit the point as a section, it doesn't seem too odd to require coalgebras for copointed endofunctors to be sections of the copoint.
Thanks! I didn't know that algebras of pointed endofunctors are commonly required to admit the point as a section. Is there a really simple example of why this is a good idea?
Not an example, but abstractly I would say a pointed endofunctor is halfway between a plain endofunctor and a monad. An algebra for an endofunctor has no conditions; an algebra for a monad has a unit and associativity condition; so in between, an algebra for a pointed endofunctor has a unit condition but no associativity.
David Egolf said:
I don't know enough about smooth manifolds to be able to properly think about this specific example in detail. But I was reflecting if perhaps it's telling us something simple (and maybe important). Namely this: there can be different kinds of states for an agent, and different kinds of data that we can associate with a given state - and the kind of data that it makes sense to associate with a state can depend on the kind of the state.
It's too late at night here for me to think very hard, but I think this is a nice observation. I have lately been worrying a lot about a special feature of vector fields on manifolds, namely that they generate dynamics not in discrete time steps but in continuous time - unlike all the other examples of coalgebras I listed in my blog article. But you are pointing out another way in which this example differs from all the other examples of 'coalgebraic dynamics'.
(Btw, on my blog @Tobias Fritz pointed out that all my examples of coalgebras giving discrete-time dynamics only work properly because the endofunctors involved are actually monads: this is what allows one to iterate the time evolution! The case of vector fields on manifolds also arises from an endofunctor that's a monad, but what we can do with this fact seems to be different.)
David Egolf said:
There is a comment on the blog article made by Omar Antolín-Camarena that suggests using a slice category. I think this relates closely to the idea I describe above.
I think that the move from "arbitrary coalgebras" to "kind-respecting coalgebras" can be described as moving to a slice category.
Again, I'm too tired to say anything intelligent, but: this sounds really cool. As soon as I heard of Omar's slice category trick I started thinking about "dependent types". I don't really know much about dependent types but I've heard they're connected to slice categories. While that may sound fancy, in this example it seems to amount to a fairly simple (yet important) observation: the space of tangent vectors at a point of a manifold depends on the point, so "the space of vectors tangent to the point $x$" is what type theorists would call a dependent type.
Yes, I absolutely agree: if the copointing of your copointed endofunctor is a fibration (a.k.a. display map, a.k.a. dependent projection), then a coalgebra for it satisfying the unitality law can be described as having a "dependently typed structure map" with the equation wrapped up in the typing. Syntactically, an endofunctor like this takes any type $X$ to a type dependent on that same type $X$:
$$x : X \vdash F(x) \ \mathsf{type}$$
and a pointed-coalgebra for it is a type $X$ equipped with a dependently typed coalgebra structure map $f : \prod_{x:X} F(x)$. Then a terminal such coalgebra is a sort of coinductive type whose destructor is a dependently typed function.
@Astra Kolomatskaia and I are using this sort of dependent coinduction to define semisimplicial types.
Thanks, Mike! I gave up and went to sleep before reading your previous comment, which helps a lot, and now this latest one helps more.
It's amazing how all this fits together, or more precisely that I ran into it thinking about the tangent bundle of a manifold as a way of describing the dynamics of an agent. I've been thinking about vector fields on manifolds as dynamical systems for about a million years but this way of thinking about them is new to me.
John Baez said:
(Btw, on my blog Tobias Fritz pointed out that all my examples of coalgebras giving discrete-time dynamics only work properly because the endofunctors involved are actually monads: this is what allows one to iterate the time evolution! The case of vector fields on manifolds also arises from an endofunctor that's a monad, but what we can do with this fact seems to be different.)
But it's still of the same flavour, no? What we can do with the monad is still describing iterated dynamics. More concretely, given (copointed) coalgebras $v, w \colon M \to TM$, which would normally be called vector fields, their Kleisli composition is exactly the sum of the vector fields $v + w$. Now if you think of a vector field as dynamics in an infinitesimal time step, then the sum of vector fields is exactly what you get when composing/iterating these dynamics.
BTW in this case, making sense of general Kleisli morphisms is a bit more tricky than in the other cases, but still meaningful. One can think of it as discrete dynamics followed by infinitesimal dynamics, and like this even general Kleisli composition makes sense. (Here I'm not talking about the slice, just the tangent bundle monad on the category of manifolds.)
In fact this particular endofunctor on $\mathsf{Man}/M$, sending each manifold over the manifold $M$ to its tangent bundle, is not only pointed but a monad. As @Tobias Fritz pointed out, this allows us to "compose" coalgebras of this endofunctor: they are endomorphisms in its Kleisli category, so we can compose them in the Kleisli category. A way of making $M$ into a coalgebra of this endofunctor is a vector field on $M$, and it turns out that composing two such coalgebras is just adding vector fields.
Or, spoken like a true category theorist: adding vector fields on $M$ is just composing coalgebras of the tangent bundle endofunctor on the slice category $\mathsf{Man}/M$. :upside_down:
BTW^2, perhaps non-endomorphism Kleisli maps are of some interest in general? I imagine situations where an agent changes their set of internal states, for example by splitting into two agents, dying etc :see_no_evil:
I'm definitely interested in the question of what the non-endomorphism Kleisli maps are good for in this context, and maybe that's it.
Yes, I guess that's what they're good for!
John Baez said:
Or, spoken like a true category theorist: adding vector fields on $M$ is just composing coalgebras of the tangent bundle endofunctor on the slice category $\mathsf{Man}/M$. :upside_down:
For reference, this is the subject of Section 3.2 in Cockett and Cruttwell's 'Differential structure, tangent structure, and SDG', and also appears earlier in Rosický's 'Abstract tangent functors'
In particular, they address the situation @Tobias Fritz is talking about
Thanks, @Matteo Capucci (he/him)! I often learn things by thinking about stuff, getting ideas, getting stuck, and hoping people tell me things. For example if a month ago someone had just come out and told me that adding vector fields on $M$ is composing coalgebras of the tangent bundle endofunctor on the slice category $\mathsf{Man}/M$, I would have said "how boring!" But now it's interesting because I'm trying to do something with it.
Indeed :)
On discrete vs. continuous time in coalgebras, I keep puzzling over this. Maybe it's worth mentioning the following in case people have any thoughts about it. I've been meaning to ask about this for ages and I hope it's not a distraction. Basically what I'm wondering is whether there's a formal way to talk about the semantics of these continuous-time examples (e.g. to make formal the idea that a coalgebra generates a continuous-time dynamical system), and the below is my unfinished attempt to do that.
In the discrete-time examples, like a coalgebra $c \colon X \to A \times X$, we can take a coalgebra and compose it with itself $n$ times to get a map $X \to A^n \times X$. So it's sort of like a dynamical system in discrete time, except that there's an 'effect' that occurs on every time step, in this case spitting out an element of $A$.
To make that a bit more formal, consider the following category:
an object is an object $X$ of our base category $\mathcal{C}$
a morphism $X \to Y$ consists of (i) an endofunctor $F \colon \mathcal{C} \to \mathcal{C}$ together with (ii) a map $f \colon X \to F Y$
composition of $(F, f) \colon X \to Y$ and $(G, g) \colon Y \to Z$ is given by $(F \circ G,\; Fg \circ f)$. So we compose the two functors in reverse to get $F \circ G$, and we use the functor $F$ to compose the two morphisms, giving $Fg \circ f \colon X \to FGZ$.
Then a coalgebra $c \colon X \to FX$ is an endomorphism $X \to X$ in this category, and it generates a functor into it from $\mathbf{B}\mathbb{N}$, the free category generated by a single endomorphism. This is one sense in which a discrete-time coalgebra gives a discrete-time dynamical system.
(If we have the extra data making $F$ into a monad then we can also say that a map $X \to FX$ induces a dynamical system in its Kleisli category, but the above construction doesn't need $F$ to be a monad.)
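(As a concrete illustration of the discrete-time part, here's a little Haskell sketch, with made-up names: for a plain Functor we can still "compose a coalgebra with itself", we just land in $F(F(X))$ rather than back in $F(X)$.)

```haskell
-- Composing a coalgebra c : X -> F X with itself needs F to be a Functor
-- but not a monad: the n-fold composite lands in F^n X.

twice :: Functor f => (x -> f x) -> x -> f (f x)
twice c = fmap c . c

thrice :: Functor f => (x -> f x) -> x -> f (f (f x))
thrice c = fmap (twice c) . c

-- Example with F = (,) String, i.e. a coalgebra X -> String x X:
-- each step emits a label alongside the next state.
step :: Int -> (String, Int)
step n = ("tick " ++ show n, n + 1)

main :: IO ()
main = print (twice step 0)   -- ("tick 0",("tick 1",2))
```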
In an example like the tangent bundle functor we also get a dynamical system, but it's not a discrete time one. A map $v \colon M \to TM$ is a set of differential equations, and if we solve them we get a family of smooth maps $\phi_t \colon M \to M$, indexed by $t \in \mathbb{R}_{\ge 0}$, where each one maps an initial state to the final state after integrating for time $t$. This can be seen as a functor from $\mathbf{B}\mathbb{R}_{\ge 0}$ into the category above, where $\mathbf{B}\mathbb{R}_{\ge 0}$ means the monoid of nonnegative real numbers, seen as a one-object category.
Unlike in the discrete time case, the morphisms in the image of this functor are not just powers of the generator $(T, v)$. (In fact they are all of the form $(\mathrm{Id}, \phi_t)$ for some map $\phi_t \colon M \to M$, but I think there are other continuous-time examples where the functor components will not be the identity.) They are somehow a bit "like" powers of $(T, v)$ though, and I'd be tempted to write them as something like $(T, v)^t$.
So there seems to be some sense in which the discrete-time examples 'generate' a functor out of $\mathbf{B}\mathbb{N}$ and the continuous-time ones 'generate' functors out of $\mathbf{B}\mathbb{R}_{\ge 0}$. But in the continuous-time case I don't really know how to think about the relationship between the generator and the dynamical system it generates.
If we knew how to do that it seems like it would be quite useful, because then we'd have a formal way to think about the semantics of these continuous-time examples, rather than an informal one.
As an example that doesn't land in identity functors, consider a coalgebra , so a dynamical system with an output. If your category of manifolds is nice enough it seems like should be a map , mapping an initial state in to the final state at time , together with the trajectory of the output variable. This is a map where is the functor .
Nathaniel Virgo said:
To make that a bit more formal, consider the following category:
an object is an object $X$ of our base category $\mathcal{C}$
a morphism $X \to Y$ consists of (i) an endofunctor $F \colon \mathcal{C} \to \mathcal{C}$ together with (ii) a map $f \colon X \to F Y$
composition of $(F, f) \colon X \to Y$ and $(G, g) \colon Y \to Z$ is given by $(F \circ G,\; Fg \circ f)$. So we compose the two functors in reverse to get $F \circ G$, and we use the functor $F$ to compose the two morphisms, giving $Fg \circ f \colon X \to FGZ$.
Very nice! I think this really wants to be a 2-category: for every two morphisms $(F, f), (G, g) \colon X \to Y$, we can consider 2-morphisms given by natural transformations $\alpha \colon F \to G$ such that $\alpha_Y \circ f = g$. In particular, this means that whenever $F \cong G$ as functors, then every morphism $(F, f)$ is isomorphic to a morphism of the form $(G, g)$. This is surely something that one would like to have.
Nathaniel Virgo said:
In an example like the tangent bundle functor we also get a dynamical system, but it's not a discrete time one. A map $v \colon M \to TM$ is a set of differential equations, and if we solve them we get a family of smooth maps $\phi_t \colon M \to M$, indexed by $t \in \mathbb{R}_{\ge 0}$, where each one maps an initial state to the final state after integrating for time $t$. This can be seen as a functor from $\mathbf{B}\mathbb{R}_{\ge 0}$ into the category above, where $\mathbf{B}\mathbb{R}_{\ge 0}$ means the monoid of nonnegative real numbers, seen as a one-object category.
That's not quite right: a differential equation can "blow up" in finite time, so you can't define $\phi_t$ for all $t$ in general. That's why Wikipedia defines a dynamical system as a partial map $\mathbb{R} \times M \to M$. On the other hand, continuous-time dynamics is necessarily reversible, so you can actually consider it as a partial map whose domain is a neighbourhood of $\{0\} \times M$. And then, as per this Azimuth comment, the fact that every differential equation has a unique solution for every initial condition implies that there is an equivalence of categories between the copointed coalgebras of $T$ (usually called vector fields) and continuous-time dynamical systems in the Wikipedia sense. This is how we can formulate the correspondence between differential equations and dynamical systems in categorical terms.
Nathaniel Virgo said:
As an example that doesn't land in identity functors, consider a coalgebra , so a dynamical system with an output. If your category of manifolds is nice enough it seems like should be a map , mapping an initial state in to the final state at time , together with the trajectory of the output variable.
How exactly is this supposed to work? For example when is just a singleton, your is an element of , or in other words just a single tangent vector on . This doesn't generate any sort of dynamics, no? Speaking in terms of , the problem is that the only kind of maps that can be exponentiated are endomorphisms, but your isn't an endomorphism of or .
You could take it to be a copointed map instead, together with a specified condition , and then solving the dynamics gives the desired map . But then again this works only provided that the differential equation doesn't blow up...
Continuous-time things are hard!
Continuous-time things are hard!
Continuous time is a gremlin introduced to put physicists off track for decades / centuries :sweat_smile:
(ignore me - I'm just a finitist who is still debugging code way past his bedtime and half unconscious :sweat_smile: )
Tobias Fritz said:
Nathaniel Virgo said:
To make that a bit more formal, consider the following category:
an object is an object $X$ of our base category $\mathcal{C}$
a morphism $X \to Y$ consists of (i) an endofunctor $F \colon \mathcal{C} \to \mathcal{C}$ together with (ii) a map $f \colon X \to F Y$
composition of $(F, f) \colon X \to Y$ and $(G, g) \colon Y \to Z$ is given by $(F \circ G,\; Fg \circ f)$. So we compose the two functors in reverse to get $F \circ G$, and we use the functor $F$ to compose the two morphisms, giving $Fg \circ f \colon X \to FGZ$.
Very nice! I think this really wants to be a 2-category: for every two morphisms $(F, f), (G, g) \colon X \to Y$, we can consider 2-morphisms given by natural transformations $\alpha \colon F \to G$ such that $\alpha_Y \circ f = g$. In particular, this means that whenever $F \cong G$ as functors, then every morphism $(F, f)$ is isomorphic to a morphism of the form $(G, g)$. This is surely something that one would like to have.
This is indeed $\mathbf{Para}$ applied to the evaluation action of $[\mathcal{C}, \mathcal{C}]$ on $\mathcal{C}$, hence a bicategory!
Eric Forgy said:
Continuous time is a gremlin introduced to put physicists off track for decades / centuries :sweat_smile:
Perhaps, but if so then a very useful gremlin! More like a kobold: "mischievous household spirit who usually helps with chores and gives other valuable services [..] He is temperamental and becomes outraged when he is not properly fed". Sounds like a good metaphor for struggles with the continuum in general!
(Can't believe I hadn't seen that "Struggles with the Continuum". Thank you! :blush: )
Matteo Capucci (he/him) said:
This is indeed $\mathbf{Para}$ applied to the evaluation action of $[\mathcal{C}, \mathcal{C}]$ on $\mathcal{C}$, hence a bicategory!
Neat! It sounds like you're using a more general definition of $\mathbf{Para}$ than the nLab definition, namely one that starts with an [[actegory]], right? Where is that spelled out?
Indeed! That's in this paper
The issues you raised about continuous vs. discrete time dynamics, @Nathaniel Virgo, are exactly the things I'm concerned with now! Thanks for explaining them.
I'm too busy to say anything interesting about the math right now, so I'd like to say this.
Tobias Fritz said:
How exactly is this supposed to work? For example when is just a singleton, your is an element of , or in other words just a single tangent vector on . This doesn't generate any sort of dynamics, no? Speaking in terms of , the problem is that the only kind of maps that can be exponentiated are endomorphisms, but your isn't an endomorphism of or .
Ah sorry, what I wrote didn't make sense. I think what I should have written was a map $M \to TM \times A$: given a state in $M$ you get a direction in $TM$ and also an output in $A$. Then I think what I said makes sense - at least morally, the resulting dynamical system should be a family of maps of the kind I described. (Ignoring issues like blowing up and so on - I guess I'm imagining some fantasy category of compact manifolds where that can't happen, for now. Maybe it should be otherwise.)
I'm using that notation for whatever process would take such a map and give you a family of maps indexed by $t$, or whatever the corresponding type is. I don't really know what that process would look like and it surely needs a lot of structure to be assumed. Maybe it's significant that a map $M \to TM \times A$ is an endomorphism in the category I described, I don't know.
Tobias Fritz said:
Very nice! I think this really wants to be a 2-category: for every two morphisms $(F, f), (G, g) \colon X \to Y$, we can consider 2-morphisms given by natural transformations $\alpha \colon F \to G$ such that $\alpha_Y \circ f = g$. In particular, this means that whenever $F \cong G$ as functors, then every morphism $(F, f)$ is isomorphic to a morphism of the form $(G, g)$. This is surely something that one would like to have.
Maybe it really wants to be a double category. The tight morphisms would be maps in $\mathcal{C}$, and a square with loose sides $(F, f)$ and $(G, g)$ would be a natural transformation $\alpha \colon F \to G$ such that the evident square of maps
commutes. (I'd guess you can also get that from the Para construction?)
Oh yes, indeed!
Nathaniel Virgo said:
(I'd guess you can also get that from the Para construction?)
Very much indeed :) I updated the nLab page with some of the most recent folklore about Para, including this thing
Nathaniel Virgo said:
To make that a bit more formal, consider the following category...
Wow, this is so cool! It's awesome that we can compose a coalgebra with itself even when the endofunctor of the coalgebra isn't part of a monad.
Intuitively, in the case when our endofunctor is $A \times (-)$, I think this composition involves keeping track of the state updates of our agent, and also keeping a list of all the actions that the agent takes. If we also had a monoid structure on $A$, so that we have a multiplication $A \times A \to A$, then we could collapse this list of actions down to a single "net" action. (And in this case we could make a monad having endofunctor $A \times (-)$.) But I love how we don't need to specify a monoid structure on $A$ to compose a coalgebra with itself!
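Here's a little Haskell sketch of that (names made up): iterating a coalgebra $c \colon X \to A \times X$ naturally hands you a list of actions, and only if $A$ happens to be a monoid can we collapse that list into a single "net" action.

```haskell
import Data.Monoid (Sum (..))

-- Iterate a coalgebra c : X -> (A, X) n times, collecting the actions in a
-- list; no monoid structure on A is needed for this.
iterateSteps :: Int -> (x -> (a, x)) -> x -> ([a], x)
iterateSteps 0 _ x = ([], x)
iterateSteps n c x =
  let (a, x')   = c x
      (as, x'') = iterateSteps (n - 1) c x'
  in  (a : as, x'')

-- Toy coalgebra: the state is a counter, the action says how far it moved.
move :: Int -> (Sum Int, Int)
move n = (Sum 1, n + 1)

main :: IO ()
main = do
  let (actions, final) = iterateSteps 3 move 0
  print (actions, final)          -- the list of actions, no monoid needed
  print (mconcat actions, final)  -- collapsed "net" action, using the monoid on A
```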
Matteo Capucci (he/him) said:
This is indeed $\mathbf{Para}$ applied to the evaluation action of $[\mathcal{C}, \mathcal{C}]$ on $\mathcal{C}$, hence a bicategory!
Very nice! Thanks, everyone, for bringing it to this point. I think this is a very nice way to think about general forms of discrete-time dynamics. I should blog about this, just to explain it to myself and other people.
Continuous-time dynamics still seems a bit more mysterious. We often discretize time when numerically solving ordinary differential equations (ODE). But still, many modelers want to specify models by giving ODE, and we shouldn't deny them that right. If we always solved these ODE using Euler's method, we could think of an ODE as simply a summary of a family of discrete-time systems with all possible choices of time step $\Delta t$. But it's not quite that simple. Still, something like this might still be a good attitude: we can choose ODE as our "syntax", but then map them to discrete-time systems by various functors, following the pattern of "functorial semantics".
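Just to make the "ODE as syntax, discrete-time system as semantics" idea concrete, here's a sketch (all names are mine, and a serious treatment would use a better integrator than Euler): a vector field plus a choice of time step $\Delta t$ gives a discrete-time update map.

```haskell
type VectorField  = [Double] -> [Double]   -- the "syntax": x' = v(x)
type DiscreteStep = [Double] -> [Double]   -- the "semantics": one update step

-- Euler's method: for each time step dt we get a different discrete-time system.
euler :: Double -> VectorField -> DiscreteStep
euler dt v x = zipWith (\xi vi -> xi + dt * vi) x (v x)

-- Example ODE: a harmonic oscillator, state = [position, velocity].
oscillator :: VectorField
oscillator [p, q] = [q, -p]
oscillator _      = error "state should be [position, velocity]"

main :: IO ()
main = print (take 5 (iterate (euler 0.1 oscillator) [1, 0]))
```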
I had an idea for a way to talk about both discrete and continuous dynamical systems.
We need the following data:
- a category $\mathcal{C}$
- an object $X$ of $\mathcal{C}$, which we think of as our object of states
- an endofunctor $F \colon \mathcal{C} \to \mathcal{C}$
And we require that $\mathrm{Hom}(X, FX)$ has the structure of a monoid. (By $\mathrm{Hom}(X, FX)$ I mean the set of morphisms from $X$ to $FX$ in $\mathcal{C}$.)
For example, I think this monoid structure can be obtained in the case when we have a monad structure on $F$. The unit of the monoid is the monad unit $X \to FX$, and composition of elements of $\mathrm{Hom}(X, FX)$ is done by viewing these as endomorphisms of $X$ in the Kleisli category for our monad.
Given this setup, we need two more pieces of data:
- a monoid, which we think of as a model for "passage of time" (we might set it to $\mathbb{N}$ to model the discrete case)
- a monoid homomorphism from this time monoid to $\mathrm{Hom}(X, FX)$
Note that if the time monoid is $\mathbb{N}$, then a monoid homomorphism $\mathbb{N} \to \mathrm{Hom}(X, FX)$ is entirely described by what it does to $1$. (What it does to $0$ is forced: $0$ must map to the unit of the monoid $\mathrm{Hom}(X, FX)$.) But if the time monoid is $\mathbb{R}$, it might take a lot more data to describe the homomorphism.
Basically, the idea is to get a lot of coalgebras that can vary depending on the amount of passage of time they represent: the coalgebra assigned to a time $t$ is supposed to advance the system by $t$ units of time.
So, in the discrete case where the time monoid is $\mathbb{N}$, the homomorphism is described by where it sends $1$, which I think describes "time advances by one step". Using this approach, all we need is a single coalgebra when working with discrete time. But we need a lot more (related) coalgebras to handle continuous time.
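Here's a rough Haskell sketch of this "advance the system by an amount of time" idea (names mine, and the monoid-homomorphism laws are only checked by eye): in the discrete case the whole family is generated by a single step, while in the continuous case we need the whole family at once.

```haskell
-- The laws we'd want of a time action:
--   advance 0       == id
--   advance (s + t) == advance s . advance t

-- Discrete time: everything is generated by the single step `advance 1`.
advanceDiscrete :: Int -> (Int -> Int)
advanceDiscrete n = \x -> x + n            -- the n-fold iterate of x -> x + 1

-- Continuous time, e.g. exponential decay: no single generating step.
advanceContinuous :: Double -> (Double -> Double)
advanceContinuous t = \x -> x * exp (-t)

main :: IO ()
main = do
  print (advanceDiscrete 3 10)                              -- 13
  print (advanceContinuous 1.0 (advanceContinuous 2.0 8))   -- agrees (up to rounding)
  print (advanceContinuous 3.0 8)                           -- with advancing by 3.0 at once
```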
Sorry I'm late -- I'd think of Prakash Panangaden's book "Labelled Markov Processes" as a key reference for coalgebras for the Giry monad, and I don't think anyone mentioned it yet. Doberkat also had a book around a similar time. There was a lot of activity in the years leading up to those books, some of it discussed above already.
(Sorry, this is mainly jumping back to @John Baez's original question about a reference, not about continuous time, although I think the same people did look at that e.g. here)
One subtlety is bisimulation, because you can't necessarily copy the notions of coalgebraic bisimulation from the category of sets. But this was sorted and those books discuss bisimulation.
David Egolf said:
Given this setup, we need two more pieces of data:
- a monoid, which we think of as a model for "passage of time" (we might set it to $\mathbb{N}$ to model the discrete case)
- a monoid homomorphism from this time monoid to $\mathrm{Hom}(X, FX)$
In the case where $F$ is a monad and the monoid on $\mathrm{Hom}(X, FX)$ comes from that, I think this is the same thing as a functor from the one-object category on the time monoid into the Kleisli category of $F$. It's also quite similar to my idea of a functor out of $\mathbf{B}\mathbb{R}_{\ge 0}$, though not quite the same. These things all have the same property that in the discrete case you only need to specify one morphism but in the continuous time case you have to specify a whole compatible family of them.
Nathaniel Virgo said:
In the case where $F$ is a monad and the monoid on $\mathrm{Hom}(X, FX)$ comes from that, I think this is the same thing as a functor from the one-object category on the time monoid into the Kleisli category of $F$. It's also quite similar to my idea of a functor out of $\mathbf{B}\mathbb{R}_{\ge 0}$, though not quite the same. These things all have the same property that in the discrete case you only need to specify one morphism but in the continuous time case you have to specify a whole compatible family of them.
Thanks for explaining this! I like the idea of a functor from a monoid into the category you described. I think that's a bit more general than a functor into the Kleisli category, which is what I was considering. (I think it's more general because we don't need $F$ to be part of a monad in this case.)
John Baez said:
Continuous-time dynamics still seems a bit more mysterious. We often discretize time when numerically solving ordinary differential equations (ODE). But still, many modelers want to specify models by giving ODE, and we shouldn't deny them that right. If we always solved these ODE using Euler's method, we could think of an ODE as simply a summary of a family of discrete-time systems with all possible choices of time step $\Delta t$. But it's not quite that simple. Still, something like this might still be a good attitude: we can choose ODE as our "syntax", but then map them to discrete-time systems by various functors, following the pattern of "functorial semantics".
I'm legally obliged to point to @David Jaz's book on categorical systems theory (CST), which I think ties together many of these talking points (David himself wanted to chip in on the conversation at some point).
Using that framework one can unify coalgebraic systems theory, ODEs, decorated/structured cospans, behavioural systems à la Schultz-Spivak, and much more. There isn't yet a systematic treatment of 'time' in this framework, though CST does allow for a somewhat unified view on it, since trajectories of all these kinds of systems are picked out by 'corepresentable behaviour functors', meaning in all examples I know of there is a 'walking trajectory' or 'clock system' whose maps into other systems correspond with trajectories. In fact, you get quite a lot of flexibility in what you want to consider a trajectory, so you're not locked into any particular choice. You can as easily talk about periodic trajectories, 'bounded' trajectories, branched trajectories, and so on. Moreover, each 'doctrine of systems' usually has a very clear notion of what the walking trajectory should be.
TL;DR: in my opinion and experience, CST is really effective at accommodating, explaining and systematizing concepts relative to systems, all without having to commit to a specific notion of system. The downside is that it is very new and not as mature and well-known as other frameworks for doing systems theory.
CST very much follows the 'functorial semantics' philosophy, by the way, and it's easy, within the framework, to describe how various theories map to each other.
@Nathaniel Virgo In CS, one common construction that feels a bit like the one you described is taking the morphism from the considered coalgebra into the final coalgebra. By analogy: does the tangent bundle copointed endofunctor admit a final copointed coalgebra, and what does it look like? In a similar vein, what is the cofree comonad on the tangent bundle copointed endofunctor?
Matteo Capucci (he/him) said:
I'm legally obliged to point to David Jaz's book on categorical systems theory (CST), which I think ties together many of these talking points (David himself wanted to chip in on the conversation at some point).
He chipped in on my blog.
TL;DR: in my opinion and experience, CST is really effective at accommodating, explaining and systematizing concepts relative to systems, all without having to commit to a specific notion of system. The downside is that it is very new and not as mature and well-known as other frameworks for doing systems theory.
I don't mind it being new. I want to see if I can use it to develop a good general concept of 'agent-based model' which can:
1) become the basis of software that can easily handle the various styles of agent-based model that people most often use in epidemiology,
2) make it easy to compose models, including 'hybrid' models: models with different parts in different styles, e.g. finite-state machines coupled to systems described by ordinary differential equations
3) make it easy to 'nest' models, e.g. models of agents that are made of parts including 'sub-agents'.
I'm talking about this once a week with @Nathaniel Osgood, @Xiaoyan Li, William Waites, @Evan Patterson, @Sean Wu and soon also @Kris Brown. We need to get some of these ideas converted into working software by around mid-June, so we probably can't afford to be too grandiose right away. It'll be better to get some code that can handle a few agent-based models and can be expanded later, than to tackle something too ambitious and not have clear results by mid-June.
I'd be happy to pull other people into this project - like you and David Jaz Myers, if you're interested - and others. And ultimately I want this project to shift focus from epidemiology to climate change.
However, there's the danger that having too many smart people with too many good ideas will make it harder to get something finished quickly! Right now it seems to be working pretty well for me to talk to people here on Zulip, separately from the weekly meetings I mentioned.
But I do expect this project will continue beyond June 2024, and expand to include some other people who have good ideas and are interested in joining.
David Egolf said:
Given this setup, we need two more pieces of data:
- a monoid, which we think of as a model for "passage of time" (we might set it to $\mathbb{N}$ to model the discrete case)
- a monoid homomorphism from this time monoid to $\mathrm{Hom}(X, FX)$
Note that if the time monoid is $\mathbb{N}$, then a monoid homomorphism $\mathbb{N} \to \mathrm{Hom}(X, FX)$ is entirely described by what it does to $1$. (What it does to $0$ is forced: $0$ must map to the unit of the monoid $\mathrm{Hom}(X, FX)$.) But if the time monoid is $\mathbb{R}$, it might take a lot more data to describe the homomorphism.
Thanks!
I think this could be a practical way to describe the two most important forms of "the passage of time" - discrete time given by $\mathbb{N}$ and continuous time described by $[0, \infty)$. (You said $\mathbb{R}$, but many stochastic systems cannot be run backwards in time, so we often want to use its submonoid $[0, \infty)$.)
@Matteo Capucci (he/him) mentioned another more general approach to describing the passage of time:
CST does allow for a somewhat unified view on it, since trajectories of all these kinds of systems are picked out by 'corepresentable behaviour functors', meaning in all examples I know of there is a 'walking trajectory' or 'clock system' whose maps into other systems correspond with trajectories.
I'll think about this too, e.g. how easy it is to implement in code. But right now I'm getting sufficiently happy with "the passage of time" that I want to worry about other issues, like composing and "nesting" systems.
I summarized a bit of the above discussion here: