You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Based on this post, I'd like to implement the stuff @David Jaz discussed in the MIT seminar. https://categorytheory.zulipchat.com/#narrow/stream/229156-practice.3A-applied.20ct/topic/petri.20nets/near/193605154
If anyone has ideas or references that would be a good place to start I'd really appreciate it.
I still need to watch that seminar. But I can guess roughly what you have in mind, and in my opinion it's one of the most exciting things in all of ACT
I imagine a 21st century version of the programming language Dynamo for systems dynamics: https://en.wikipedia.org/wiki/DYNAMO_(programming_language)
Yeah, if we had an ACT approach to dynamical systems that yielded useful technology for engineers and scientists, that would be awesome. The kinds of knowledge representation and computation you could do in Catlab + dynamical systems would be a huge "killer app" for ACT.
@Evan Patterson https://github.com/epatters/Catlab.jl/issues/154
Or if you want to really get people's attention, make a better Simulink
This table of constructions tho. If you could build software for engineers that was able to manipulate all of these systems in the appropriate way. image.png
Yes. I think this is realistically feasible
I too found both @David Jaz 's talks extremely elegant and well thought out. I was super excited watching them, such a beautiful formalism!
Yeah, Simulink is a modeling tool that is specialized for signal flow graphs. You can shove a lot of stuff into a signal flow graph if you are willing to hack it in there. What if the tool knew the CT representations of the modeling language?
(Spoken from my comfortable position of not really a programmer)
Yeah I think it is feasible (spoken as someone who writes scientific software semi-professionally)
James Fairbanks said:
If anyone has ideas or references that would be a good place to start I'd really appreciate it.
David Spivak has a paper with some of the beginnings of the formalism here: https://arxiv.org/abs/1908.02202
Arnold and Avez (1968), as quoted in E. Atlee Jackson, Perspectives of Nonlinear Dynamics, Vol. 1, p. 51:
Let $M$ be a smooth manifold, $\mu$ a measure on $M$ defined by a continuous positive density, $f^t \colon M \to M$ a one-parameter group of measure-preserving diffeomorphisms. The collection $(M, \mu, f^t)$ is called a classical dynamical system.
I am intrigued by the concept of implementing open dynamical systems, but I don't understand the idea well enough to do so. So suppose I take an automaton with states $S$, an update function $S \times I \to S$, and a readout function $S \to O$. What is composition here? If I follow what I think of as the lens composition of sys1 and sys2, the output of sys1 becomes the state of sys2, and the updated state of sys2 becomes the input used to update sys1. This doesn't really make sense to me. I could imagine horizontally composing the lenses in parallel for independent systems, but the actual composition of lenses doesn't match anything that I can intuitively identify in a dynamical system.
@Sophie Libkind and I were talking about composing automata just yesterday here on Zulip.
I told her I'd ask Moggi to remind me about what he knows about that.
I think the basic idea is that we need a bit more structure for our automata than you described. Let me explain....
A Moore machine has a set of states $S$, an update function $u \colon S \times I \to S$, and a read-out function $r \colon S \to O$, but also a bit more...
It also has a "start state". But I think it'd be more symmetrical to have not just a single start state but a bunch, which can be described using a "read-in function" $i \colon A \to S$ from some set $A$.
So, you can "read-in", then keep updating, and then "read-out".
Then I can imagine composing such machines if the read-out set of the first one is the read-in set of the second one. But the result is not another of the same sort of machine.
Hmm, I need to ask Moggi what he was telling me. It worked better than this.
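Here's a quick sketch (my own, in Python for concreteness - not anything from Moggi or the talk) of the shape being described: a Moore machine with a read-in function in place of a single start state, and the naive "hook read-out to read-in" composition, which indeed gives a two-stage process rather than another machine of the same sort.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Moore:
    update: Callable[[Any, Any], Any]   # u : S x I -> S
    readout: Callable[[Any], Any]       # r : S -> O
    readin: Callable[[Any], Any]        # i : A -> S  (generalized start states)

def run(m: Moore, a, inputs):
    """Read in, keep updating, then read out."""
    s = m.readin(a)
    for x in inputs:
        s = m.update(s, x)
    return m.readout(s)

def then(m1: Moore, m2: Moore):
    """Hook machines together when m1's read-out set is m2's read-in set.
    Note: the result is a two-stage *process*, not another Moore machine -
    the hand-off through the interface happens only once."""
    def composite(a, inputs1, inputs2):
        return run(m2, run(m1, a, inputs1), inputs2)
    return composite

# Toy machines: a running-total counter feeding a doubler.
counter = Moore(update=lambda s, x: s + x, readout=lambda s: s, readin=lambda a: a)
doubler = Moore(update=lambda s, x: s, readout=lambda s: 2 * s, readin=lambda a: a)
print(then(counter, doubler)(0, [1, 2, 3], []))  # -> 12
```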
James, this sounds like an exciting idea. I need to watch the talk, but my initial impression is that this seems doable and very much worth doing.
Yeah, I think it will connect Catlab to a proper CAS for describing the smooth functions, for things like continuous-time, continuous-space systems
On the differential equation front, what I imagine one would build is a categorically flavored frontend to off-the-shelf ODE solvers. ODE solvers basically take in a function $f$ such that $\dot{x} = f(x)$, plus initial conditions, and find a single trajectory. Some will also take in constraint equations $g(x, \dot{x}) = 0$ in addition. I can perhaps begin to see how you could compositionally talk about building up the total system $f$ out of the $f$'s of the subpieces, basically using functional-programming-style combinators. I don't really understand if or how this relates to the ideas presented in the talk.
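To make the combinator idea concrete, here's a toy Python sketch (my own, not the talk's formalism): building a total vector field out of sub-vector fields, with a hypothetical linear coupling term as the "wiring", then handing the result to a crude solver.

```python
import numpy as np

def parallel(f, g, n):
    """Juxtapose two vector fields; the state splits as (x[:n], x[n:])."""
    return lambda x: np.concatenate([f(x[:n]), g(x[n:])])

def couple(f, g, n, k=1.0):
    """Add a hypothetical linear coupling through the first coordinate
    of each subsystem (my own toy choice of wiring)."""
    def h(x):
        dx = parallel(f, g, n)(x)
        dx[0] += k * (x[n] - x[0])
        dx[n] += k * (x[0] - x[n])
        return dx
    return h

def euler(f, x0, dt=0.01, steps=1000):
    """Crude fixed-step integrator standing in for a real ODE solver."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * f(x)
    return x

decay = lambda x: -x                       # xdot = -x on each subsystem
x_final = euler(couple(decay, decay, 1), [1.0, 2.0])
print(x_final)  # both coordinates decay toward 0, pulled together by the coupling
```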
In 1986 I had several conversations with Stephen Wolfram. I was working on extending tetration to the complex numbers. Wolfram was interested in how I moved from a discrete dynamical system to a continuous one. I was fascinated by Wolfram's typification of PDE and iterated functions as the two mathematical systems capable of serving as a foundation for physics. Wolfram was interested in unifying physics by unifying the underlying mathematics. Also he wanted to unify chaotic and non-chaotic phenomena.
I have been interested in the idea of a universal dynamical system: a single system complex enough to emulate any other basic dynamical system, but simple otherwise. That is why I focus on iterated smooth functions and their Taylor series, the most general "data" that make sense to me. I hope to be able to show the unity of maps and flows. In A New Kind of Science, Wolfram showed that iterated smooth functions were capable of the same behaviors as cellular automata.
@John Baez Conal Elliott's concat library has an implementation of a symmetric monoidal category of Mealy machines.
Here’s what the update function and the inputs and outputs look like.
```hs
-- State transition function
type X s a b = a :* s -> b :* s

-- Combine Mealy machines
op2 :: forall con a b c d e f. CartCon con =>
       (forall s t. (con s, con t) => X s a b -> X t c d -> X (s :* t) e f)
    -> (Mealy con a b -> Mealy con c d -> Mealy con e f)
op2 op (Mealy (f :: a :* s -> b :* s) s0) (Mealy (g :: c :* t -> d :* t) t0) =
  Mealy (f `op` g) (s0,t0) <+ inOp @(:*) @(Sat con) @s @t
```
Would the resulting machine from e to f count as the same sort of machine given that there’s an operation that combines the state transition functions from X s a b and X t c d together to act over a pair of starting states? And does such a pairing construction equal the bunch of starting states you’re describing?
@James Fairbanks I was very excited about the potential for combining the formalism with Julia’s excellent support for neural ODEs
https://julialang.org/blog/2019/01/fluxdiffeq/
Do you have thoughts in this direction?
(By the way, @Faez Shakil, you can wrap Haskell code between ```hs
and ```
on separate lines to make it a little easier to read.)
(Thanks @Reid Barton, much prettier now)
Also, is anyone thinking about how modern Reinforcement Learning ties into all of this? I.e., is a Markov Decision Process straightforwardly reducible to a Mealy machine with some hand-waviness about function approximation, or would a compositional explanation of it also take into account the dynamics of gradient-based updates to said function approximators and how to put them together while still preserving the semantics of the learning problem?
Conal Elliott fanboy here. Thanks for pointing out that section of concat. https://github.com/conal/concat/blob/838b5a866c9932dc85996ba1391e72177df94cd7/examples/src/ConCat/Synchronous.hs#L58 It appears he is defining composition of two machines as a new machine with the product of the states, where the output of machine 1 is fed as the input of machine 2. This is a reasonable definition, I think, and makes sense. It doesn't obviously match the form given for the open dynamical system.
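For concreteness, here's that composition in a Python rendering (my own sketch of the idea, not concat's actual code): the composite state is the product of the two states, and each step threads machine 1's output in as machine 2's input.

```python
def mealy_compose(f, s0, g, t0):
    """f : (a, s) -> (b, s'),  g : (b, t) -> (c, t').
    The composite runs on the product of the state spaces, feeding
    machine 1's output in as machine 2's input on every step."""
    def h(a, st):
        s, t = st
        b, s2 = f(a, s)
        c, t2 = g(b, t)
        return c, (s2, t2)
    return h, (s0, t0)

# Example: a running-sum machine followed by a one-step delay machine.
summer = lambda a, s: (a + s, a + s)   # output and new state = running total
delay  = lambda b, t: (t, b)           # output previous input, store current
h, st = mealy_compose(summer, 0, delay, 0)
outs = []
for a in [1, 2, 3]:
    o, st = h(a, st)
    outs.append(o)
print(outs)  # [0, 1, 3]
```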
@Faez Shakil yes I think that the Julia ODE and AutoDiff Ecosystems are the most exciting place to be doing this kind of work. Combining it with Catlab is definitely the way to go.
We have been using open Petri nets to combine systems and then giving them mass-action kinetics to model epidemics and biochemistry. If you could do that for arbitrary dynamical systems via the readout and update functions, that would be awesome.
Petri.jl and SemanticModels.PetriCospans are the relevant Julia modules.
I agree that a lot of ML esp. active and reinforcement learning looks like dynamical systems.
Philip Zucker said:
Conal Elliott fanboy here. Thanks for pointing out that section of concat.
It appears he is defining composition of two machines as a new machine with the product of the states where the output of machine 1 is fed as the input of machine 2. This is a reasonable definition I think and makes sense. It doesn't obviously match the form given for the open dynamical system.
Hello! That happens to be my primary identity these days as well! There really should be a separate thread to gush about concat somewhere - have you been using it for real work™?
So my impression was that op2 is just the function defining sequential composition. There are also instances for Monoidal Sums and Products, as well as Braiding of both Sums and Products. Doesn't that capture almost all of the ways that you can compose open dynamical systems (boxes inside boxes etc wired together under the rules given in the talk)?
I haven't ever actually used concat. It does seem like a reasonable way to compose stateful machines. Maybe this is an exact representation of what is presented in the talk, but I don't personally understand the translation. I don't see explicit readout and update functions. I don't really see a Lens structure. I don't think the talk was about just the idea of having an open dynamical system that can be plugged together. I thought he was getting at some specific organization or formalism that I don't understand
I'm not stating facts, what I'm doing is trying to say what things I understand and what things I don't and hoping someone might provide helpful clarification, because this is a topic that is interesting to me.
I'm not sure that open dynamical systems are a mature subject. Consider the following graph, [tetration.png]. Tests of an open dynamical system could be its ability to transform multiplication into exponentiation, exponentiation into tetration, or Lie algebras into Lie groups. The limited information published on tetration indicates the limits of current open dynamical systems due to chaos. My research is extending the Ackermann function or hyperoperators from the natural numbers to invertible matrices, so I have to use a mature version of dynamical systems. https://www.overleaf.com/read/zjwkzgftsqkm
A simple yet complete(?) description of open dynamical systems takes a time input as a matrix function, and manifolds described by matrix functions as an input and an output. Quantum mechanics considers matrix multiplication as capable of transferring the universe from one instant to the next. Matrix multiplication is composition of matrices, so matrix multiplication of possibly infinite matrices could be important. A comment was made that dynamics is like ML. I would say that ML is like dynamics, as that is what ML must emulate.
(https://categorytheory.zulipchat.com/user_uploads/21317/WcKza1nthyKLduSrtD9Vj7Hf/tetration.png)
A simple open dynamical system is and . But that is just the first term of the Taylor series of an open dynamical system.
Since I'm professionally a programmer, I am learning Julia to implement an open dynamical system. Place your orders for features or let me know what is flawed in my model.
See http://tetration.org/Combinatorics/Julia/index.html
Cool. What features of an open dynamical system are you hoping to simulate?
I'm hoping that I have all the features down. This isn't meant to be a partial simulation of a dynamical system; it is the Taylor series of a dynamical system of smooth functions. So it is even more general than the classical dynamical system, because it has no constraint to be measure-preserving.
The following constraints were added to enable my assertions to be clearly proven. The benefit of this dynamical system is that all computations can be done in rational arithmetic. I have two different dynamical systems, OGF and EGF, with the EGF version converted to Julia.
What's "open" about this? What you're writing about here sounds like dynamical systems.
i cant read this at all image.png
what do you mean by constructing an iterated function at a fixed point, and what does it mean that a composition can do it?
Yes, mathematical language is not being used in the way we usually do, here.
Dynamical system means many things (but not everything); here's a pretty general treatment:
https://en.wikipedia.org/wiki/Dynamical_system_(definition)
Open dynamical systems are what we category theorists are interested in. These are more general than dynamical systems. Roughly, they allow "inputs" and "outputs" that can influence the dynamics of the system. I could point you to some formal definitions, but they'd be tentative: open dynamical systems have not been hammered out for the last 50 years the way dynamical systems have! Doing it right seems to require category theory. That's why we're interested.
Anyway, I think you'll have trouble getting people here interested in your work on "implementing open dynamical systems" until you really tackle open dynamical systems.
reading up this thread is making me wanna work on some of this stuff once i have some more free time in a couple weeks...
well honestly what i want to do is implement some kind of modal type theory that lets you do a more thorough "compiling to categories", like ive probably rambled about here before
in parallel: i'm wondering, what relation do open dynamical systems bear to frp?
or perhaps DCTP, if you wanna get all conal about it? :-)
i bet it's a close one!
@John Baez i found a paper you coauthored that formulates open dynamical systems using decorated cospans—can it be done using structured cospans instead?
well: mostly i'm wondering what the category would be that the cospan is in if you did it that way
That's one of the few decorated cospan categories that I don't know how to also do using structured cospans!
The problem, briefly, is that there appears to be no "free dynamical system on a finite set" - no left adjoint from the category of finite sets to the category of finite sets $X$ for which $\mathbb{R}^X$ is equipped with a vector field.
Kenny Courser and Christina Vasilakopoulou and I are writing a paper "Structured vs Decorated Cospans", about when you can switch viewpoints, and sometime I need to figure out exactly why you can't in this case. I've been putting it off.
It's possible I'm not being creative enough about what to take as the category of finite sets $X$ for which $\mathbb{R}^X$ is equipped with a vector field - the category you're mostly wondering about. There's a sort of "obvious" choice, and I'm pretty sure that one doesn't give you the necessary left adjoint.
what would the morphisms in that category be, though?
just functions between finite sets?
oh wait... functions between finite sets, that induce the right vector field?
what relation does this stuff bear to open dynamical systems btw? https://arxiv.org/abs/1405.6881
:eyes: image.png
sarahzrf said:
what would the morphisms in that category be, though?
just functions between finite sets?
Then it'd just be equivalent to the category of finite sets: structure not preserved by morphisms is ignorable "fluff" that doesn't affect the category (up to equivalence).
oh wait... functions between finite sets, that induce the right vector field?
Yes, that's what I'd call the obvious choice.
sarahzrf said:
what relation does this stuff bear to open dynamical systems btw? https://arxiv.org/abs/1405.6881
Fun question! There should be a functor from this category to open dynamical systems. But it's a bit subtle, because signal flow diagrams give you higher-order linear systems of ODE, not first-order nonlinear systems of ODE. Any higher-order linear system of ODE can be rewritten as a first-order system by introducing extra variables. I've never tried to work this out as a kind of functor, though!
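The order-reduction trick is standard; here's a tiny sketch of it (the function name is mine): $x'' + a x' + b x = 0$ becomes the first-order system $X' = AX$ with $X = (x, x')$.

```python
import numpy as np

def first_order_form(a, b):
    """Rewrite x'' + a x' + b x = 0 as X' = A X with X = (x, x')."""
    return np.array([[0.0, 1.0],
                     [-b, -a]])

A = first_order_form(0.0, 1.0)   # the harmonic oscillator x'' = -x
print(np.linalg.eigvals(A))      # eigenvalues +/- i, i.e. pure oscillation
```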
I'm glad you're getting interested in this stuff. You're asking good questions.
I was wondering about this recently. It would be really nice if you could just add nonlinear elements to the same kinds of networks in that paper, to get nonlinear ODEs. On one level it seems like that should correspond to replacing linear relations with some other category of 'nice' relations that includes the graphs of nonlinear functions as well as linear ones. It seems like getting all the details to work out might be hard, but do you think something like that might be possible?
I guess conceptually it would work like this: consider a set of variables and functions of them over time, e.g. x(t), y(t), z(t). Consider a category whose objects are indexed by sets of variables, and are to be thought of as the set of all 'nice' functions of those variables over time. Then morphisms would just be 'nice' relations between those. (The work is in figuring out what nice means.)
So, e.g. a diode can be modeled as a morphism given by a relation like this (a nonlinear relation in which time doesn't play a role)
and a capacitor would be a morphism something like this (a linear relation in which time enters through the derivative)
Since they're relations you should just be able to draw cups and caps and have them behave like signal flow diagrams, just like in the control systems paper. So if they're electrical components like in this example, the strings would just behave like wires in a circuit diagram.
Does this work? Or am I reinventing something? It would be great to read about it if I am.
(Hmm, I guess the electronics example adds an extra complication, because the cups and caps and copy/add etc. have to treat current and voltage differently from each other. But that can be ignored if we're just interested in dynamical systems. At least I think it can. I can explain the issue in more detail if someone's interested.)
As a non-electronics example, here's what the logistic growth equation, $\dot{x} = rx(1 - x/K)$, ought to look like:
image.png
The black dots and the cup constrain all their inputs and outputs to be equal at all times, the nodes labelled with a function constrain their output to be that function of their inputs at every time, and $d/dt$ is the relation that holds between $x$ and $y$ iff $y(t) = dx/dt$ at every time $t$.
I guess you could draw the d/dt as an integral pointing the other way - then it would look more like the signal is flowing around the loop. But that's just a style thing.
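As a sanity check on what the diagram is supposed to mean (assuming the standard logistic form $\dot{x} = rx(1-x/K)$), the known closed-form solution should satisfy the "$d/dt$" relation at every time; here's a quick numerical spot-check, with parameter values picked arbitrarily:

```python
import math

r, K, x0 = 0.5, 10.0, 1.0

def x(t):
    """Closed-form solution of the logistic equation with x(0) = x0."""
    return K * x0 * math.exp(r * t) / (K + x0 * (math.exp(r * t) - 1))

def xdot(t, h=1e-6):
    """Central finite difference standing in for the d/dt relation."""
    return (x(t + h) - x(t - h)) / (2 * h)

for t in [0.0, 1.0, 5.0]:
    lhs, rhs = xdot(t), r * x(t) * (1 - x(t) / K)
    assert abs(lhs - rhs) < 1e-5, (t, lhs, rhs)
print("logistic relation holds at sampled times")
```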
Just an observation: one possible way to make this precise may be using non-standard analysis. There you work with non-standard reals/complex numbers, meaning that infinitesimals are numbers like any other. So you could have boxes standing for "divide by dx", and basically work with calculus Leibniz-style
We used something similar to extend categorical quantum mechanics to infinite dimensions. I don't know if this technique would actually buy you something in this setting, but looking at that diagram you posted the first thing I'd like to do is making dy/dx into a box like any other
Differential geometry needs a better PR department, everyone's first thought is to jump to nonstandard analysis when differential geometry can do the job just as well
I think it's more a philosophical thing for me. I really don't like the idea of infinity as "something you can get arbitrarily close to but never reach". It has a very clear geometric intuition, but I come from algebra, so...
I wonder if we really need either in this case, though. If we're restricted to smooth functions of $t$, then "$x$ is related to $y$ iff $y(t) = dx/dt$ for all $t$" is already a well-defined relation on that set. (If we're considering a more general class of functions, maybe "$x$ is differentiable and $y(t) = dx/dt$ for all $t$" would do the job.) It might have looked suspicious to write $d/dt$ as if it were an operator, but here it's just a label for that relation. I may well be missing some subtlety here, though.
It seems like the trickier thing is to decide what class of relations to allow in general - we want to be more general than linear relations, but "any relations at all" seems like it would lead to weirdness.
The short answer, @Nathaniel Virgo , is that a lot of this stuff in that paper, Categories in control, can be generalized to nonlinear situations, but doing so requires some choices. The people who are experts at making such choices are differential geometers and analysts - mathematicians who study analysis.
We hadn't wanted to study nonlinear control theory in our paper, because there are whole books, very wonderful books, on linear control theory, and it's very useful. When you're trying to control a system you can often arrange to keep it in a regime where a linear approximation is good. An example is the classic problem of balancing a long stick on your finger: as long as it's almost vertical you can approximate $\sin\theta$ by $\theta$, and that's what they do when writing programs for robots that do this trick.
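Spelled out (my notation: $\theta$ measured from the vertical, $\ell$ the stick length), the small-angle linearization is:

```latex
\ddot{\theta} \;=\; \frac{g}{\ell}\,\sin\theta
\;\approx\; \frac{g}{\ell}\,\theta ,
\qquad \text{since } \sin\theta = \theta - \tfrac{\theta^{3}}{6} + \cdots
```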
The math of linear control theory uses the Laplace transform: this allows you to reduce systems of linear ordinary differential equations to systems of linear equations over a larger field. So, linear control theory is applied linear algebra... but in the category of linear relations over a field that's not just the real or complex numbers. The challenge of understanding this category gave us a nice math paper.
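A one-line example of what the Laplace transform does here (standard, with zero initial conditions): the ODE $\dot{x}(t) = a\,x(t) + u(t)$ becomes

```latex
s\,X(s) = a\,X(s) + U(s)
\quad\Longrightarrow\quad
X(s) = \frac{1}{s-a}\,U(s),
```

so solving the ODE turns into linear algebra over the field of rational functions of $s$.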
The nonlinear generalization will be a bit more complicated, it will involve making some choices (which would best be done after reading up on nonlinear control theory and seeing what the experts in that subject do!), and it won't use the Laplace transform so it won't lead to such an elegant mathematical subject (which is, by the way, the reason control theory puts a lot of focus on the linear case: you can use the elegant math to do a lot of amazing things). Still, it's definitely worth doing!
I'll mention something @sarahzrf was already talking about: in A compositional framework for reaction networks, @Blake Pollard and I came up with a nice categorical framework for a special class of nonlinear ordinary differential equations, namely polynomial-coefficient first-order ODE. This is not general enough for nonlinear control theory, but it's good enough for the "rate equation of a reaction network".
The limitation to polynomials offers some extra power (as these limitations tend to do); for example we used the Tarski-Seidenberg theorem, which says that relations definable using polynomials and inequalities are closed under composition.
These are called semialgebraic relations, and they're a nice manageable class of relations that lies between "linear relations" and "all relations".
I guess one question is, what's the relationship between the relation-based view of dynamical systems and the open dynamical systems view? An open dynamical system imposes a relation between its inputs and outputs (as functions of time), so they connect in that way, but how do the cospan-based ways of sticking things together relate to just composing relations as morphisms in a symmetric monoidal category? I keep getting puzzled by versions of that question.
seems like a lot of this has to do with relations vs spans vs cospans
something ive seen that's relevant in some work @Jade Master is doing w/ objects like this is that if u have a cospan u can get a span instead by taking the pullback, if pullbacks exist in your category
wait no that's not quite right, what's the construction i'm thinking of...
ack right it was the comma category, which was fairly specific to the fact that we were working in Cat...
i'll take tentative formal definitions for 10 please john.
I think I have some picture of how spans and relations compare, for finite linear relations at least, which is an extremely concrete and computable arena. A linear dynamical system has behaviors that are constrained by its equations of motion to lie in a linear subspace of all behaviors. Open linear dynamical systems have behavior on "input" and "output" ports that lives in linear subspaces. So you need a way to describe linear subspaces. Two usual ways are as a linear combination of generators (the range of a matrix) or as the space obeying a set of constraints (the nullspace of a matrix).

The apex of a span corresponds to the vector space of generators. The two spaces on the ends of the span are the two spaces of ports. Composing two spans by the pullback construction amounts to finding a new minimal set of generators for which the interior port is consistently solved out. Alternatively, using the other representation, the bottom of the cospan corresponds to the vector space of constraints, and the pushout construction finds a new minimal set of constraints that similarly behave consistently on the interior port.

I don't think one necessarily has to parametrize or think about linear relations this way, but it does seem nice, and all of these things are calculable using standard linear algebra packages. I had a more blobby picture of linear relations http://www.philipzucker.com/linear-relation-algebra-of-circuits-with-hmatrix/ - just a thing not composed out of linear maps - until @James Fairbanks and @Evan Patterson explained the pullback thing in a way I could understand http://www.philipzucker.com/computational-category-theory-in-python-ii-numpy-for-finvect/ .
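A small numpy sketch of that generators picture (my own code, hypothetical names): a linear relation between $\mathbb{R}^m$ and $\mathbb{R}^n$ is the column space of an $(m+n) \times k$ matrix of generators, and composition solves out the shared interior port via a nullspace computation.

```python
import numpy as np

def nullspace(M, tol=1e-10):
    """Columns spanning ker M, via the SVD."""
    _, s, Vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return Vt[rank:].T

def compose(G1, m, G2, n):
    """Compose relations given by generators.
    G1 : (m+n) x k1 generators of R1 in R^m x R^n,
    G2 : (n+p) x k2 generators of R2 in R^n x R^p."""
    Y1 = G1[m:, :]            # interior-port part of R1's generators
    Y2 = G2[:n, :]            # interior-port part of R2's generators
    N = nullspace(np.hstack([Y1, -Y2]))   # match up on the interior port
    alpha, beta = N[:G1.shape[1], :], N[G1.shape[1]:, :]
    return np.vstack([G1[:m, :] @ alpha, G2[n:, :] @ beta])

# Example: the graph of "times 2" composed with the graph of "times 3"
# should be the graph of "times 6".
G_times2 = np.array([[1.0], [2.0]])   # generators of {(x, 2x)}
G_times3 = np.array([[1.0], [3.0]])   # generators of {(y, 3y)}
G = compose(G_times2, 1, G_times3, 1)
print(G[1, 0] / G[0, 0])   # -> 6.0
```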
If something seems off about this picture I'd be delighted to know. I'm trying to interpret language more abstract than I'm comfortable with.
I don't think I understand really what structured or decorated does on top of this
Open linear dynamical systems have behavior on "input" and "output" ports that live in linear subspaces.
I'm not sure I agree, but maybe I just don't understand. What if our open dynamical system is described by
where $f$ describes our system, $I$ is the "input", and $O$ describes the "output". $I$ and $O$ are arbitrary smooth functions of time, specified by the "outside world".
Would you say that $I$ and $O$ live in linear subspaces? Do you count this as an open linear dynamical system or not? Most things I consider open linear dynamical systems, like signal-flow graphs or circuits made of resistors, allow behavior of this sort.
I use structured or decorated cospans to describe systems of this sort (and many others).
[it occurred to me earlier that i think one conceptual stumbling block that made physics inscrutable to me for a long time might have been the fact that it's all mostly formulated as closed systems, which makes it rly hard to apply intuition about like "what do the laws say would happen if i poked it?"]
Good point. Physicists do know how to calculate what systems would do if you poked them, since experiments are all about 'poking' things in various ways. So, for example, it's typical to study what happens if an 'external force' is applied to a system of particles, or an 'external magnetic field' - the buzzword is 'external'. But a lot of the most formal formalisms of physics are for closed systems.
This is something I'm trying to put an end to. I'm all for 'opening things up'.
Engineers already use formalisms like signal-flow diagrams for studying open systems, because they really can't afford to focus on closed systems. For an engineer, a system that doesn't interact with an unpredictable external environment is completely useless!
I feel like that system could be included in my description. My mental model is to discretize and finitize everything, although I do understand that Fourier/Laplace methods could supply a different perspective and lift this restriction. Approximate df/dt by a finite difference and only talk about a finite time horizon, so that time only has N slices. Then the combined system of O, f, I is 3N numbers and the equation of motion is an (N-1) x 3N finite difference matrix. If O and I are truly allowed to be arbitrary and are the only ports by which you can view the system, then the subspace they live in is the entire space of dimension 2N, so these equations of motion are not that interesting. If you somehow have a mathematical peephole that you can open up to see f (which I've vacillated on whether that should be OK or not), then the subspace being talked about is 2N+1 dimensional.
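A small instance of that discretization (my own toy code): N time slices, with df/dt replaced by a forward difference, giving an (N-1) x N matrix D with D f approximating f'.

```python
import numpy as np

def forward_difference(N, dt):
    """(N-1) x N forward-difference matrix: (D f)[i] = (f[i+1] - f[i]) / dt."""
    D = np.zeros((N - 1, N))
    for i in range(N - 1):
        D[i, i], D[i, i + 1] = -1.0 / dt, 1.0 / dt
    return D

N, dt = 5, 0.1
t = np.arange(N) * dt
f = 3.0 * t + 1.0                      # a linear signal, so D f is exactly 3
print(forward_difference(N, dt) @ f)   # -> [3. 3. 3. 3.]
```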
Discretizing is perfectly fine, though we don't want to be forced to do that when studying open systems.
Okay, so I see that O and I are allowed to be arbitrary. I don't know what you mean by saying "these equations of motion are not that interesting", though. It's true that when we blackbox the open system - that is, extract the relation between inputs and outputs - we get the "always true" relation, which is not that interesting. (I study blackboxing in most of my papers on categories of open systems.) But there's more to a system than its inputs and outputs: there is also its "internal state", here given by $f$.
Anyway, it seems our viewpoints are compatible except that you may be more inclined to treat an open system as only providing a relation between its inputs and outputs, while I develop categories where morphisms are open systems and extract the relation between inputs and outputs by applying a functor ("blackboxing") that goes to a category of relations.
It has been convenient to start from the blackbox from an implementation perspective. It allows you to eagerly project out interior information. Unfortunately, from a usage perspective, this interior information is often exactly what you're interested in. One can thread the information out through output ports, but this is ungainly.
Is this blackboxing where decorated or structured cospans come in, or is that a separate thing?
Decorated/structured cospans allow you to construct categories where the "body" of a system has more structure than its "interfaces" (its input and output).
Like this:
The input X and the output Z are just finite sets: X = {1,2,3} and Z = {6}.
But the "body" of this system (not a technical term) is something more complicated: it's a Petri net.
That's the yellow and aqua stuff, and the edges between them.
This thing is a morphism in a structured cospan category.
What this means, in practice, is that we can take one of these things:
and another one, whose input Y equals the output of the first one:
and compose them by gluing them together, getting this:
That's an example of what we do with structured cospan categories.
So for a circuit example, is it something like labelling variables in a circuit using FinSet, but then having the linear relations connecting the variables hovering over the thing?
The objects in a category of open circuits could be finite sets.
Instead of taking the morphisms to be mere linear relations connecting variables, I'd prefer to have them be actual circuits, like this:
We can then describe the "behavior" of circuits by using various functors to other categories.
For example, a category where a morphism is a linear relation involving inputs or outputs, or a category where a morphism is a linear relation involving all wires in the circuit, even the "internal" ones.
What is an "actual circuit" then? A physical circuit I hold in my hand? A graph with labelled nodes and edges?
The drawing?
I ask because I was willing to call linear relations one representation of the actual circuit.
Some mathematical representation of an actual circuit. The simplest one would be a graph with labelled nodes and edges - that's been good enough for me so far.
Well, the linear relation says what the circuit does - you could say it "is" the circuit, but there are lots of famous examples of different bunches of wires and resistors that give the same linear relation.
These examples are important in circuit theory, and we can't talk about them if we say the circuit is the linear relation.
So linear relations are great, but we also need other categories, that give other views on what circuits "are".
Structured and decorated cospan categories are supposed to be general frameworks that can handle lots of these different categories.
In my differential equation sketch above, image.png
I guess one could say the diagram is the circuit, so I guess circuits would be free symmetric monoidal categories with a specific set of generators, and then you'd have a functor from there to Rel, or to some specific category of nice relations. Is that roughly how it would work?
Yes.
I've moved my material from the Open Dynamical System topic to my website at http://iteratedfunctions.com/index.php?title=Open_Dynamical_Systems . The problem is that the material is not currently appropriate for the Zulip Category Theory forum, because of concerns that it is only relevant to dynamical systems and not open dynamical systems. I've spent decades deriving the most general dynamical systems I could, so I'm surprised that open dynamical systems are even more general than what I've been working on. Anyhow, dynamical systems are important in their own right, but I'd appreciate any insights on how to get to an open dynamical system. Feel free to drop on by. - Daniel
Okay!
John Baez
What's "open" about this? What you're writing about here sounds like dynamical systems.
What is open is the iterator , but I'm building up to explain how to generalize it to invertible matrices. Perhaps some challenge problems would be nice: things you think an open dynamical system can do that my model can't. My model is connected to the definition of quantum mechanics using matrix multiplication to advance in time from one instant to the next, so it should be valid for any system described by QM, including the Universe.
I'm not gearing up to discuss the classical dynamical system, because the core of my work focuses on providing a solid foundation for extending tetration and the higher hyperoperators to , and . I questioned whether my derivation of hyperoperators was compositional and thus open.
sarahzrf : what do you mean by constructing an iterated function at a fixed point, and what does it mean that a composition can do it?
When dealing with dynamical systems it is typical, without loss of generality, to move a fixed point to the origin. Going back to Ernst Schröder's originating paper on iterated functions, taking the dynamics at a fixed point provides great simplification.
I'm building a case to view iterated functions as the iterated composition of a function. Also, I want to build towards showing that because the composition of entire functions is convergent, iterated entire functions are convergent. Since  is entire, it serves as the foundation for building a countably infinite hierarchy of entire functions. So  is finite where .
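The "iterated functions as iterated composition" idea, in a minimal sketch (discrete iteration only; the function chosen here is just for illustration):

```python
# Minimal sketch: the iterate f^n as n-fold composition of f, with f^0 = id.
def iterate(f, n):
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

double = lambda x: 2 * x
assert iterate(double, 0)(5) == 5    # f^0 is the identity
assert iterate(double, 3)(5) == 40   # 5 -> 10 -> 20 -> 40
# the composition law f^(m+n) = f^m ∘ f^n:
assert iterate(double, 5)(3) == iterate(double, 2)(iterate(double, 3)(3))
```

This is exactly the sense in which the iterates form the morphisms of a one-object category: composition of morphisms is addition of exponents.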
It just occurred to me that one of my uses of matrices is for handling an open ended number of inputs and outputs. Does that make an open dynamical system?
Consider . Then  is a category, as  and, as an iterated function, it satisfies composition.
This is great if , but in the classical dynamical system .
My question is: with two inputs and one output, how is the identity function applied to ?
i don't follow—what are the objects and morphisms of the category?
and is that set supposed to be a set of states x, f(x), etc, or a set of functions id, f, f ∘ f, etc?
(my impression is that you mean the latter, but idk if you have a particular x in mind or something)
sarahzrf: i don't follow—what are the objects and morphisms of the category?
The objects are and the morphisms are .
sarahzrf: and is that set supposed to be a set of states x, f(x), etc, or a set of functions id, f, f ∘ f, etc?
I was trying to point out that when satisfies .
the objects are x? do you mean the objects are the points of the state space, or do you mean that there's one object, which is called x?
The objects I feel comfortable working with are matrices, and my interpretation is based on the physics canon. So  is the matrix representation of the initial state of the system and  is the next instant.
that sounds like an answer to a different question from the one i meant to ask
OK, one object called .
alright, so the morphisms are the iterations of f? and they are all endomorphisms of x?
it sounds to me like f is already an endomorphism of the state space, probably in the category of smooth manifolds or something, and you're taking the subcategory generated by that one morphism :thinking:
er, that one morphism and its inverse, if you're including negative powers—so we need it to be an automorphism
Yes. I am talking roughly about what @Jade Master discussed in the paper on the definition of dynamical systems, showing its connection with automorphisms.
i'm not sure which discussion you're referencing, but i think it's tangential anyway :)
i'm not really sure what you mean about
Daniel Geisler said:
My question is: with two inputs and one output, how is the identity function applied to ?
can you rephrase the question?
by "two inputs" do you mean x and t? but if so, how is that different from the discrete case?
I'm building from the discrete to the continuous case.
OK I get the need for , but what about ? What does even mean?
@sarahzrf @Jade Master's article on dynamical systems is at https://jadeedenstarmaster.wordpress.com/2019/03/31/dynamical-systems-with-category-theory-yes/ .
Definition: A dynamical system or flow on a manifold $M$ is a group homomorphism $\phi : \mathbb{R} \to \mathrm{Diff}(M)$,
where $\mathrm{Diff}(M)$ is the group of diffeomorphisms from $M$ to itself. The idea is that your system of ordinary differential equations is given by some vector field on $M$.
I think maybe you're saying that you have a category with one object given by your manifold and with morphisms given by some set of automorphisms of that manifold
In which case your question will probably be answered by figuring out exactly what you mean by x...
If it's the whole manifold then Id is the identity function on that manifold
So id(f^t(x)) = f^t(x)
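The homomorphism law in that definition can be sanity-checked numerically for the simple linear flow $\phi^t(x) = e^{at}x$ on $\mathbb{R}$ (a made-up example, not anyone's model here):

```python
import math

# Sketch: a one-parameter flow on M = R given by phi^t(x) = exp(a*t) * x.
# The group homomorphism law says phi^(s+t) = phi^s ∘ phi^t, and phi^0 = id.
a = 0.7  # an arbitrary rate constant, chosen for illustration

def phi(t):
    return lambda x: math.exp(a * t) * x

s, t, x = 0.3, 1.1, 2.0
lhs = phi(s + t)(x)
rhs = phi(s)(phi(t)(x))
assert abs(lhs - rhs) < 1e-9   # flowing for s+t = flowing for t, then s
assert phi(0)(x) == x          # phi^0 is the identity diffeomorphism
```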
Jade Master
In which case your question will probably be answered by figuring out exactly what you mean by x...
Yes, exactly! The symmetry of  comes from the symmetry of . Consider  where . The Taylor series of  can simplify based on its symmetry at , and this results in the Classification of Fixed Points.
okay, here's a thought: maybe open dynamical systems should be spans, not cospans
i'm remembering that time when @John Baez and @Sophie Libkind were talking past each other regarding pullbacks vs pushouts, & my take was that state spaces are often constructed by a contravariant construction, so the issue was that the construction was either a pushout or a pullback depending on whether you applied that contravariant thing first
in the case of something like a petri net, taking the state space is passing to finite multisets of places, right? that's nearly like homming into N (and it is homming into N if your set of places is finite, as is typical aiui)
EDIT: i know you can also make it covariant but that's not how i wanna think about it rn
altho i do wonder what the relationship is
but dynamical systems are usually formulated where you already have the state space in front of you, instead of the outline that gets contravariantly turned into a state space
if you have an open petri net as a structured cospan, passing to markings gives you a span—the space of markings on the whole net has projections that forget everything but how the in ports are marked, and everything but how the out ports are marked
so how about an open dynamical system is a span of smooth manifolds or something, where the apex is the state space of the system and we have projections that give the components of the state that are externally visible?
(what i wrote there ofc does not account for the dynamics, but)
after some double checking, since i do not actually know diff geo: how about we use spans of submersions, and then we have nice pullbacks for composition of systems
that would seem to also line up nicely with the interpretation of the legs as projections of some kind
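In miniature, with finite sets standing in for manifolds (all names hypothetical), span composition by pullback looks like this:

```python
# Hypothetical sketch: compose spans X <- A -> Y and Y <- B -> Z by pullback.
# The composite apex is {(a, b) : r(a) == l(b)} — the pairs of states that
# agree on the shared interface Y.

def compose_spans(A, r, B, l):
    """r : A -> Y is the right leg of the first span,
    l : B -> Y is the left leg of the second span."""
    return {(a, b) for a in A for b in B if r[a] == l[b]}

# Two "state spaces" projecting onto the shared interface Y = {0, 1}:
A = {"a0", "a1", "a2"}
r = {"a0": 0, "a1": 1, "a2": 1}
B = {"b0", "b1"}
l = {"b0": 1, "b1": 0}

apex = compose_spans(A, r, B, l)
print(sorted(apex))  # [('a0', 'b1'), ('a1', 'b0'), ('a2', 'b0')]
```

For manifolds one would want the legs to be (surjective) submersions so that this pullback again lands in smooth manifolds, which is the point of the suggestion above.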
@sarahzrf Wow, what a gift!!!
I know very little of category theory, but is unique in my knowledge of math because what was a process becomes an object. We go from iteration as a verb to a noun.
Moving from the abstract to the concrete see
http://tetration.org/Combinatorics/Julia/index.html
http://tetration.org/Combinatorics/CIGF/index.html .
The first page has the code for a general dynamical machine written in Julia and the second page explains the mathematics.
An example of a dynamical machine using rational coefficients is where .
processes becoming objects is pretty ubiquitous in math
if anything, that's what codifying the notion of a function is about, and functions are one of the most pervasive and central objects in all of math
oh, this looks similar to what i was suggesting https://arxiv.org/abs/1710.11392
oh wow i went back to the topic i was referencing & took another look & a bunch of what i said above was basically already said there, or even referenced as being known for a while, huh :sweat_smile:
Sarah wrote:
after some double checking, since i do not actually know diff geo: how about we use spans of submersions, and then we have nice pullbacks for composition of systems
Yes, that's what David Weisbart and Adam Yassine and I do in our work on open systems in classical mechanics... which unfortunately is taking a long time to be finished.
Oh, I see now that you've referred to a paper by Adam. I'll warn you that this paper has a lot of mistakes.
However, the idea of composing spans of submersions (I think he uses surjective submersions) is good and will remain!
The main problem was the treatment of composing the Hamiltonians or Lagrangians needed to describe dynamics.
So yes, "open state spaces" seem to be nicely handled using spans.
Composing open dynamical systems seems hard, though, if your concept of dynamical system is just a set of states $X$ with a map $f : X \to X$, or a one-parameter group of maps $f^t : X \to X$.
I've decided that in some sense this may be why physicists describe dynamics in fancier ways, using Hamiltonians and Lagrangians. These allow "compositionality".
extension vs intension :sunglasses:
just remembered that i think i actually said something wayback in the seminar thread for your structured cospans talk about how i'd had this sense about openness having this tension with extensionality...
Are there two connected meanings for open dynamical systems? We aren't primarily talking about dissipative structures with no fixed boundary for entropy to flow through, are we?
Nope, basically by "open dynamical system" we mean a system with ports
I don't know if you followed the cospans zoom chat but imagine something similar but with dynamical systems instead of petri nets
So yes, it's not just about composing things, but about having "ports" that allow you to compose things as if they were lego bricks
Thank God, just checking.
More technically, what you want is that your dynamical systems become the morphisms of some category, possibly a monoidal one, so that you can compose them in parallel or in sequence
That is, "Open dynamical systems" means that you can basically do string diagrams where your systems are the boxes, and you can wire them together
Yes, what I thought.
I'm not following very well, but this suggests to me that you could say ODS are cospans, but their behaviours (states) are spans, since taking behaviours is usually a contravariant functor.
This fits very well with what you (@sarahzrf ) said:
so how about an open dynamical system is a span of smooth manifolds or something, where the apex is the state space of the system and we have projections that give the components of the state that are externally visible?
By the way, in 20 minutes (at least if I got my time zones right) @Sophie Libkind is doing a seminar on open dynamical systems on #MIT Categories Seminar
Matteo Capucci said:
I'm not following very well, but this suggests me you could say ODS are cospans, but their behaviours (states) are spans, since taking behaviours is usually a contravariant functor.
This fits very well with what you (sarahzrf) said:
so how about an open dynamical system is a span of smooth manifolds or something, where the apex is the state space of the system and we have projections that give the components of the state that are externally visible?
this is a good story for many kinds of open systems—the problem is that when you're talking about things at the level of generality where you're using the name "dynamical systems", my impression is that you're usually already talking about the state space directly as the definition of the object, not about some kind of underlying phenomenon giving rise to it
like, the wikipedia page for "dynamical system" suggests a smooth manifold of states equipped w/ a flow
If you told a typical mathematician you were studying dynamical systems they'd think you were studying a smooth manifold with a flow.
Or you might say "discrete-time dynamical systems" and they'd think you were studying a smooth manifold with a smooth map from it to itself... or a topological space with a continuous map to itself. Both these are huge areas of study. I have a book about continuous maps from the interval to itself.
Or you might say measure-preserving dynamical system and they'd think you were studying a measure space with a measure-preserving map from it to itself. That's what people in "ergodic theory" think about - another huge area of study.
If you said you just meant any set with a map from it to itself they'd probably raise one eyebrow and wonder what was the point of that.
hmm, the points would be the fixed points, right? :upside_down:
as an object in a topos of monoid actions, of course
Yes, that's true.
Actually there is quite a lot of fun to be had studying a finite set with a randomly chosen map from it to itself. Tom Leinster and I thought about that. Like, what's the expected number of orbits?
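Out of curiosity, that expectation is easy to estimate by Monte Carlo, reading "orbits" as connected components of the functional graph (a rough sketch, not the actual Baez/Leinster computation):

```python
import random

# Estimate the expected number of connected components ("orbits") of the
# functional graph of a uniformly random map f : {0..n-1} -> {0..n-1}.

def n_components(f, n):
    seen = [0] * n          # 0 = unvisited, -1 = on current path, else comp id
    comp = 0
    for start in range(n):
        if seen[start]:
            continue
        path, x = [], start
        while not seen[x]:
            seen[x] = -1    # walk forward until we hit something visited
            path.append(x)
            x = f[x]
        label = seen[x] if seen[x] > 0 else None
        if label is None:   # we hit our own path: a brand-new component
            comp += 1
            label = comp
        for y in path:      # relabel the whole walk with its component
            seen[y] = label
    return comp

random.seed(0)
n, trials = 50, 2000
avg = sum(n_components([random.randrange(n) for _ in range(n)], n)
          for _ in range(trials)) / trials
print(round(avg, 2))  # grows slowly with n, on the order of (1/2)·log n
```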
Yeah, as we know, traditional mathematicians like their theories raw, though at the expense of generalization. An approach like Myers' is more appealing to me.
@John Baez said
a finite set with a randomly chosen map from it to itself. ... Like, what's the expected number of orbits.
I can't give an answer to the question, but Combstruct, a project developed with Philippe Flajolet's assistance, can. A related paper, A Calculus for the Random Generation of Labelled Combinatorial Structures, can too. A bonus is that this article is the main work on total partitions! :heart:
I submit that it is not a random accident that total partitions and their related classification systems have something deep to say about how physics works. The main classification system I'm aware of is the immune system.