You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
We can discuss the talk here.
Everyone can see slides of my talk Structured cospans and double categories here:
http://math.ucr.edu/home/baez/structured/
See you on Wednesday April 1st at 5 pm UTC, which is 10 am in California, 1 pm on the east coast of the United States, or 6 pm in the UK. It will be held online via Zoom, here:
https://ucr.zoom.us/j/607160601
unless I give up due to some technical disaster and switch to YouTube.
Looking forward to it! And we can safely assume that, as long as Zoom works, this will always be the room?
@Alexander Kurz Here
(And the link is: https://ucr.zoom.us/j/607160601)
I'm afraid I won't manage on time, unfortunately. Can't wait for the recording!
Sorry, Paolo.
Okay, I'm here folks, in case anyone wants to chat. (I'll be distracted at times getting set up.)
My talk will focus on less technical aspects because I want everyone to understand that structured cospans are a very simple tool.
But for those of you who like the nuance of higher categories, there's plenty of fun along those lines too.
For example, just as a category is "just" a monad in the bicategory Span(Set), I believe a strict double category is a monad in the bicategory Span(Cat).
But we'll be using "pseudo" double categories, which are pseudomonads in the same bicategory Span(Cat).
This means that "horizontal composition" is associative only up to an associator, etc.
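To unpack "span" a little for anyone new to it: in FinSet you compose spans by pullback. Here's a toy Python sketch (the encoding and names are my own, just for illustration):

```python
# A span A <- S -> B between finite sets, recorded as a list of
# (left leg, right leg) pairs, one per element of the apex S.

def compose_spans(span1, span2):
    """Compose A <- S -> B with B <- T -> C by pullback over B.

    The pullback apex is {(s, t) : the B-legs of s and t agree}.
    """
    return [(a, c) for (a, b1) in span1 for (b2, c) in span2 if b1 == b2]

# A relation is a span with no repeated (left, right) pairs; composing
# relations *as spans* can create duplicate witnesses, which is exactly
# why span composition and relation composition differ.
lt = [(1, 2), (1, 3), (2, 3)]   # "less than" on {1, 2, 3}
print(compose_spans(lt, lt))    # [(1, 3)]
```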
Well, you see, X is just Y in the category of Z
But in the case of pseudo double categories, this is very useful.
(And that's beautiful)
Weren't we going to use this chat?
It seems we're chatting in Zoom for now, and when the video part is done, we'll come here.
Ok I'll ask a question here and catch up on the answer later: Has anyone figured out structured corelations?
I didn't even know there was something called "span"
Should've guessed
So a relation is a span such that the pairing of the two legs is monic (so it's a subobject of a product). Then a corelation is the dual notion: a cospan such that the copairing is epic.
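In FinSet the monic/epic conditions are easy to check concretely. A toy sketch (the encoding is made up, just to illustrate: "monic pairing" means no repeated pairs, "epic copairing" means jointly surjective):

```python
# A relation A <- R -> B in FinSet: a subset of A x B (monic pairing).
def is_relation(pairs):
    """The pairing is monic iff no (left, right) pair repeats."""
    return len(pairs) == len(set(pairs))

# A corelation A -> N <- B in FinSet: a cospan whose copairing is epic,
# i.e. every element of the apex N is hit by some element of A or B.
def is_corelation(A, B, N, f, g):
    """Check that [f, g] : A + B -> N is jointly surjective."""
    return {f[a] for a in A} | {g[b] for b in B} == set(N)

# Example: glue everything in {0, 1} and {'x'} to a single point '*'.
A, B, N = [0, 1], ['x'], ['*']
f = {0: '*', 1: '*'}
g = {'x': '*'}
print(is_corelation(A, B, N, f, g))  # True
```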
They're sometimes called "rooves" by homotopy theorists.
rooves? what
plural of roof
ahh thanks
wow, the plural of roof is roofs
Yeah I guess you see them called that when describing the derived category
I think the plural is "roof"
:-)
reef :p
Rooves is an old secondary form, and it still appears occasionally by analogy with other irregular plurals such as hooves, but it is not common enough to be considered standard.
@Christian Williams I know what a corelation is... I'm asking about whether anyone smooshed together decorated corelations with structured cospans
I was telling everyone.
ah, sorry
Short answer, I don't think so. Maybe we can ask @John Baez right now.
(I think he's coming here after the zoom call, and since there's lots of people and I need to leave now, I'm hoping I can come back later and find an answer)
Question 1: (Plain) Petri nets present SMC. Is there a version of this fact for Petri nets with rates? Do you get like SMC enriched in ?
Question 2: How do you 'close' an open Petri net? That is, if I have an open Petri net, how do I plug in the inputs/outputs with, say, numbers (so as to 'assign an initial value to the I/O places')?
I noticed during the talk that it started off very close to something I know, namely Leinster's category of T-spans, for a monad T. It's all upside down, but that's just a choice. Apart from that, T-spans are spans of the form . Leinster uses these to define T-operads, as just monoids in the category of T-spans. We can conceive of them as operads whose 'entry arity' is described by T. Intuitively, it seems like structured cospans would let one describe operads with many inputs and many outputs (as opposed to only one output in Leinster's case), with various shapes of arity for the input and for the output
I believe forgetting the arities, operads with many outputs are called polycategories
Has anyone tried looking at things in this direction?
Matteo Capucci said:
Question 1: (Plain) Petri nets present SMC. Is there a version of this fact for Petri nets with rates? Do you get like SMC enriched in ?
Great question. That sounds right... I haven't worked with Petri nets with rates yet; would composition correspond to adding rates or multiplying?
I'm not sure it would be that immediate actually.
The category you generate comes from the token game for a Petri net. So this is like a discrete semantics for the net. The category is token markings for the objects and executions of transitions for the morphisms.
When you have rates, you think of this like a continuous semantics for the net. I'm not sure what you would have the objects and morphisms be.
Yes! We should work out what a monad in is (for, say, open Petri nets).
If I want structured spans, do they fall out easily from structured cospans by taking the opposite of something?
Hi, folks! I've got my coffee refreshed now...
I was wondering about a "general representation theorem" in the sense that if you pick appropriate categories for your structured cospans you get different open systems. Should there be some higher category of these open systems such that you can have morphisms between them based on like 2-functors between different cospans or the like?
Matteo Capucci said:
Question 1: (Plain) Petri nets present SMC. Is there a version of this fact for Petri nets with rates? Do you get like SMC enriched in ?
Great question.
A Petri net gives a commutative monoidal category, which is a shockingly simple sort of symmetric monoidal category.
If each transition in our Petri net is mapped to a number in , what do we get for the morphisms in that CMC?
If the transition is mapped to a number and is mapped to , what do we want to do with or ?
Ideally we'd do something that was useful in applications.
We could wildly guess that should be mapped to or something, but I'd rather do something useful.
Jules wrote: "I'm asking about whether anyone smooshed together decorated corelations with structured cospans."
Nobody has done so.
Decorated corelations are where Brendan's work really shines, in my opinion (and his I guess).
For example he proves that every hypergraph category is a decorated corelation category.
Yes, I know... but when you're on "Press Enter to Send" it's hard to edit one's mistakes and when you're editing your previous message it's hard to get out of "Press Enter to Send"! :dizzy:
You can press "shift+enter" to put line breaks in your message.
So understanding "structured corelations", or why they don't exist, would be interesting.
(But when you're editing a previous comment that box goes away.)
Let's talk about math, eh?
Another interesting thing: Kenny and Christina have theorems for conditions under which decorated and structured formalisms agree.
But there are conditions on these theorems.
And there's one case of a very useful decorated cospan category that doesn't seem to be a structured cospan category.
Namely, the category of open dynamical systems that I called "Dynam" in my talk.
This was built using decorated cospans by Blake and me in Compositional framework for open reaction networks.
I don't see how to do it using structured cospans and I'm betting it's not possible.
So there is a lot of fun subtlety in the decorated vs. structured business.
Thibaut Benjamin said:
I noticed during the talk that is started off very close to something I know, which are Leinster's category of T-spans, for a monad T. It's all upside down, but that's just a choice. Apart from that, T-spans are spans of the form . Leinster uses these to define T-operads, as just monoids in the category of T-spans.
Yes, that's interesting. Has someone looked at things like ? I seem to recall someone building something like "-props" this way, though they didn't call them that.
In the theorem that converts decorated cospans to structured cospans, what are the conditions that this fails?
Yes, a polycategory is a monad in a two-sided Kleisli category.
The initial data is a "mixed distributive law" of a monad over a comonad .
The "free symmetric monoidal category" pseudomonad is also a pseudocomonad. This stuff was Garner's thesis.
Rongmin Lu said:
John Baez said:
And there's one case of a very useful decorated cospan category that doesn't seem to be a structured cospan category.
So I seem to recall that the motivation for structured cospans was that decorated cospans are a bit restrictive. But here, you seem to be indicating that decorated cospans are not a specialisation of structured cospans. Am I getting this right?
The motivation was not that they were too restrictive.
The original motivation was that the concept of "isomorphism of decorated cospans" was too restrictive - it didn't match what you'd intuitively want.
Thus, decorated cospan categories typically have "too many morphisms" - morphisms that "should be equal but are not".
We fixed this using structured cospans.
Then Kenny and Christina generalized decorated cospans to fix it in that framework too!
Then they proved that the two approaches agree if 1) we "fix" decorated cospans, and 2) some assumptions hold.
Christian Williams said:
In the theorem that converts decorated cospans to structured cospans, what are the conditions that this fails?
Believe it or not, I haven't had time to think about this.
Do you have an example in Dynam where dynamical systems which "should be equal but are not"?
Although maybe this has the same answer as Christian's question :)
let me try to state some basics and ask if this is correct: ok so the general setup is "take your category of 'closed' things, now a structured cospan over that is an 'open' thing"? and within that a vertical morphism is a remapping of the open bits (inputs and outputs) and a two-morphism is a remapping of "everything at once"?
Sophie Libkind said:
Do you have an example in Dynam where dynamical systems which "should be equal but are not"?
Problem 1, "morphisms that should be equal but are not", is quite general in decorated cospan categories; this is different from problem 2, "not every decorated cospan category matches a structured cospan category even after we fix problem 1".
Dynam might be a great example of the second problem.
so if i have some category of "stuff without inputs and outputs" then in general i would hope to be able to "hit it with structured cospans" to get my hands on the open version?
John Baez said:
Yes, that's interesting. Has someone looked at things like ? I seem to recall someone building something like "-props" this way, though they didn't call them that.
Not to my knowledge. I wanted to point out that approach to "T-operads" because it felt so similar, and the mental picture I have is almost the same. Also the points of studying these are very close: in both cases, it is about composing things while putting some arities on the side. So I guess there may be a general setting that generalizes both approaches
@Sophie Libkind - if you want to see an example of problem 1 (see above for what I mean by that), I can easily show you one, but Dynam is not the easiest place to see it.
I would like to see one!
In fact I think problem 1 (the "too many morphisms" problem) does not occur in Dynam.
@Gershom, yeah. Critically, the functor is being thought of as encoding exactly what a "boundary" should consist of.
So if I'm working in a decorated cospan category and notice that it seems like I have too many morphisms, that's evidence that maybe I should be working in a structured cospan category instead?
Okay, @Sophie Libkind. Since pictures are really helpful for understanding problem 1, please take a look at page 11 of these slides by Kenny: http://math.ucr.edu/home/baez/SYCO4/courser_syco4.pdf
He's doing the example of "open graphs".
Okay
The punchline is that if two open graphs differ only in the names of some edges, they are not isomorphic in Brendan's original decorated cospan formalism.
The isomorphisms let us rename vertices, but "the decorations just go along for the ride" - so we can't rename edges with an isomorphism in his original approach.
I see!
is this like your example with the symmetric petri net?
This is clearly a defect, and in fact it results in there being a large set of morphisms between any two objects - a proper class, basically!
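To see problem 1 concretely, here's a toy Python sketch (the encoding is hypothetical, not Brendan's actual formalism): two open graphs that differ only in an edge name fail to be isomorphic, because an isomorphism can only rename apex elements (vertices) while the decoration is compared on the nose.

```python
# Toy "open graph as decorated cospan": a vertex set plus a decoration,
# namely a set of named edges (name, source, target).

def iso_as_decorated_cospans(g1, g2, vertex_bijection):
    """A candidate iso acts on vertices only; it must carry the
    decoration of g1 *exactly* onto that of g2 (edge names untouched)."""
    relabeled = {(e, vertex_bijection[s], vertex_bijection[t])
                 for (e, s, t) in g1['edges']}
    return relabeled == g2['edges']

g1 = {'vertices': {0, 1}, 'edges': {('e', 0, 1)}}
g2 = {'vertices': {0, 1}, 'edges': {('f', 0, 1)}}  # same graph, edge renamed

print(iso_as_decorated_cospans(g1, g2, {0: 0, 1: 1}))  # False: 'e' != 'f'
print(iso_as_decorated_cospans(g1, g1, {0: 0, 1: 1}))  # True
```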
Matteo Capucci said:
Question 2: How do you 'close' an open Petri net? That is, if I have an open Petri net, how do I plug in the inputs/outputs with, say, numbers (so as to 'assign an initial value to the I/O places')?
@John Baez - an amazingly simple question that I never considered.
Also @Sophie Libkind asked:
If I want structured spans, do they fall out easily from structured cospans by taking the opposite of something?
Also, in Leinster's approach, is a monad, and the structure of a monad is used to define the composition, but the monad appears only on one side . A monad can always be decomposed as the composite of two adjoint functors. In your approach it feels like taking half of the monad and putting it on the other leg. This reminds me a lot of the definition of adjoint functors that can be transferred from source to target (except here it is from one leg of a cospan to the other)! I have to re-read Leinster and try to see if that makes sense
Sophie Libkind said:
is this like your example with the symmetric petri net?
Umm, not quite.
Matteo Capucci said:
Question 2: How do you 'close' an open Petri net? That is, if I have an open Petri net, how do I plug in the inputs/outputs with, say, numbers (so as to 'assign an initial value to the I/O places')?
One thing you can do, which may not be what you want, is simply take the underlying closed Petri net of an open Petri net: that is, given , just take .
Christian Williams said:
Matteo Capucci said:
Question 2: How do you 'close' an open Petri net? That is, if I have an open Petri net, how do I plug in the inputs/outputs with, say, numbers (so as to 'assign an initial value to the I/O places')?
John Baez - an amazingly simple question that I never considered.
Also Sophie Libkind asked:
If I want structured spans, do they fall out easily from structured cospans by taking the opposite of something?
turn it into an endomorphism of the empty set.
Well I think the key part of the question is how to add initial values for Petri nets with rates.
Endomorphisms of the monoidal unit are always interesting. In this case, it's the collection of closed nets.
Also @Sophie Libkind asked:
If I want structured spans, do they fall out easily from structured cospans by taking the opposite of something?
I guess given a left adjoint we get a right adjoint , and the original structured cospans are "structured spans" in this new setting!
so we take op of everything!
:+1:
I've never felt the need for "structured spans", but I probably should.
I'm using spans to describe open systems in classical mechanics, but unfortunately they are more complicated than mere "structured spans" or "decorated spans".
I started wanting spans when I was thinking about how to generalize the Dynam decorated cospan category to be over general Riemannian manifolds instead of Euclidean spaces with maps induced by maps of finite sets
@Rongmin Lu - I think the buzzword to look under is "T-multicategory".
@Christian Williams I think that should come out of what are the nets you compose on either side, the ones which are one-sided.
@Sophie Libkind - cool!
That's really interesting! Now I see why you might be interested in super-technical questions about Dynam.
Leinster's book Higher Operads, Higher Categories is where I learnt this. Leinster uses this for defining globular operads
Thanks!
That paper is sort of a subobject of the book.
Gershom said:
I was wondering about a "general respresentation theorem" in the sense that if you pick appropriate categories for your structured cospans you get different open systems. Should there be some higher category of these open systems such that you can have morphisms between them based on like 2-functors between different cospans or the like?
So you're talking about going up a level to the world of initial data, the 2-category of finite-colimit preserving functors? In the paper, I think they formalize as a (double?) functor . Then if you take the "double Grothendieck" of that, I think you could have different kinds of open systems interacting. Is this what you mean?
@Sophie Libkind - I think you'd be really interested in what David Weisbart and Adam Yassine and I are doing on open systems in classical mechanics. We're doing Hamiltonian and Lagrangian systems, not general dynamical systems (= manifolds with vector fields), so we don't need to fight for priority.
That sounds really cool! How do I learn more?
Hmm. David and Adam have a paper on this. It seems they haven't put it on the arXiv yet for some reason.
This paper is full of serious mistakes: https://arxiv.org/abs/1710.11392
Adam quit being my grad student and started working with David Weisbart, another prof at UCR.
They fixed that paper, which required some very nice brand new ideas.
So now they have a fixed one, but I believe David wants to polish it more.
One technical thing is that the category of manifolds doesn't have pullbacks, so composing spans is more delicate.
The more physically interesting thing is how you combine Hamiltonians or Lagrangians when you compose open systems. This cannot be done in the "decorated span" approach.
Wow! This sounds very cool. I don't know much about the physics perspective but am excited to learn
Anyway, I could pester David to release a paper, or just talk to you, at a rate proportional to how serious you are about pursuing open dynamical systems.
@Jade Master was interested in studying open dynamical systems, but we haven't really done it.
Btw, I think Dynam is a case where the "too many morphisms" problem does not arise.
The reason is that a vector field on is just equipping with extra "structure", not extra "stuff" (in the technical sense of "stuff").
Whereas equipping a set with a bunch of edges to get a graph is equipping it with extra "stuff" - that's where the problem starts.
the stuff, structure, and properties was one of the first things that got me really jazzed about category theory!
Hi yes I was interested in this but have gotten seriously distracted. Here's a blog post I wrote about what the category of dynamical systems is like
https://jadeedenstarmaster.wordpress.com/2019/03/31/dynamical-systems-with-category-theory-yes/
Thanks Jade!
I was thinking about dynamical systems as manifolds with an action of R... this is the integrated version of the vector field approach.
You're welcome :)
One moment...I have more to say but I g2g
I'd love to talk more :)
The R action sounds really interesting and much more general than the vector field approach
Christian Williams said:
"In the paper, I think they formalize as a (double?) functor ."
In our paper, Kenny and I show how a morphism in , i.e. a finitely cocontinuous functor between categories with finite colimits, gives a symmetric monoidal double category of structured cospans.
Then we show how a square in commuting up to natural isomorphism gives a map between such symmetric monoidal double categories. This result is very useful - @Jade Master and I used it in our work on open Petri nets.
Clearly these are just steps toward the final result you're dreaming of. I guess that'd be something like a 2-functor from the "arrow 2-category of " to a 2-category of symmetric monoidal double categories!
We got tired before reaching the final result.
the idea that you want your stuff to be open in order to facilitate compositionality—
mmm, a lot of things bubbling in my head :thinking:
so, for one thing, ive definitely seen the phenomenon before—it crops up a ton in PL in the guise of having to extend a definition from closed terms to open terms in order to be able to do induction, because closed terms contain open terms
but another thing it's making me think about is, like...
well—i can't quite articulate what the general idea is that i have in mind, but one relevant thing is game semantics
John Baez said:
Yes, that's interesting. Has someone looked at things like ? I seem to recall someone building something like "-props" this way, though they didn't call them that.
@Christian Williams basically answered this, but I just wanted to add some extra emphasis. Whenever you have a distributive law between a pseudomonad and a pseudocomonad on a bicategory, you can build a two-sided Kleisli bicategory. When "=" is the free symmetric strict monoidal category monad on , one choice of distributive law gives you polycategories. But another choice of distributive law should give you properads, or props, etc.; all sorts of categorical structures that have multiple inputs and also multiple outputs. There should be a whole theory of "generalized polycategories" (or "generalized props", I'm not sure which term is best) analogous to the theory of generalized multicategories, and in particular that works with virtual double categories rather than bicategories (thereby explaining in what sense can "equal" ). However, to my knowledge no one has developed this idea further than Garner's paper that shows how to get polycategories.
so i had a conversation a little while ago where i argued that the point of game semantics is to describe what happens at a boundary, without regard to what kind of process or object gives rise to the interaction, but also that something you have to do when working with game semantics is shift where boundaries lie
(I keep mentioning generalized polycategories in hopes that someday someone will take them further!)
and for a while now ive had this mental itch about the idea of stuff like... say you work in Set, in classical foundations; then the world is maximally "closed" and "extensional"—a function's domain is part of its data, and it is defined on exactly that domain, and it has a specific value that it takes for each of the inputs in its domain, and there is no question of inquiring any other information—total contrast to, say, the idea in programming that a function consists of some kind of logic that gives rise to the outputs you get when you apply it
whereas if you deal directly in syntax, then the world is maximally "open" and "intensional"—a function really is its logic, and if you wanted to, you could plug in anything you like, change the rules of your logic or operational semantics (depending on what kind of thing you're doing) and use it in all kinds of ways that are accounted for by the intension of the term, but which are lost if you take its graph in the most basic kind of semantics
and, hrmmm,,,, i dunno, something about the notion of wanting to make your systems open so that they are compositional...
I vaguely remember there's some good stuff about openness and compositionality in the introduction of Brendan Fong's thesis. (And also some pretty similar ideas in the introduction of my thesis, which came out the same year)
hm
(The interesting bit of my thesis introduction got turned into a blog post, https://julesh.com/2017/04/22/on-compositionality/)
haha i saw the definition you gave of compositionality at the beginning, almost tabbed back to start complaining that i didnt think it was quite the point, then realized that you were probably about to start making the same complaints :sweat_smile:
tbh id say that compositionality has a lot to do with homomorphisms
Yes, a better definition is that behaviour is functorial, I wasn't thinking about that when I wrote it. But then you have problems with all the places that compositionality appears without having category theory around, which is most of them
heh
I think this is enough for the compositionality ~ openness idea, maybe
anyway, like—one thing ive been harping on lately is the idea that a lot of semantic constructions in logic and/or PL are basically trying to keep enough intension around to be able to talk about properties that would naïvely get destroyed when passing to semantics, without having to have the burden of total intensionality
Nice.
(e.g., this thread https://twitter.com/sarah_zrf/status/1237496653520797697)
oooookay i think ive started to figure out where i initially wanted to go with this :sob:
Mike Shulman said:
(I keep mentioning generalized polycategories in hopes that someday someone will take them further!)
@Mike Shulman: maybe this is going off on a tangent, but do you have a picture for how a theory of generalised polycategories/etc. in a virtual double category framework would relate to operadic categories, which were also developed to describe similar sorts of structures?
so i brought this stuff up, and the game semantics thing, because one of the big themes in it is like... making something interact with stuff that isn't originally part of the universe it lives in, but which it can still deal with as long as you have enough of the intension left to know what it would mean in this new context
example: you can define what PL people call System T (i think "System T" is something different to logicians though?), it's a simply-typed lambda calculus with natural numbers and functions and structural recursion over the natural numbers. it is total—every well-typed term you can write in it terminates.
if i write a term in system T like λ(f : nat → nat). f 3
, that's something that exists within system T, and which i can only apply to other system T terms, which are all computable (total, even)
but you can give system T naïve set-theoretic semantics—interpret the type nat
as , and each type t_1 → t_2
as the set of functions between the interpretations
and then you can interpret terms as giving rise to elements of the sets associated to their types
so the term above, λ(f : nat → nat). f 3
, gives rise to a whole function
and you can apply it to all kinds of things that can't be defined in system T
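in Python terms (just an analogy for the naive set semantics, not a formal model):

```python
# The System T term  λ(f : nat → nat). f 3  denotes, under naive
# set-theoretic semantics, the honest function below — which we can
# then apply to *any* function on the naturals, not just T-definable ones.

apply_to_three = lambda f: f(3)

print(apply_to_three(lambda n: n + 1))   # 4  (successor: definable in T)
print(apply_to_three(lambda n: n ** n))  # 27 (also definable in T)
# ...but nothing in the semantics restricts us to arguments that
# have a System T derivation at all.
```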
well, i guess ive kinda contradicted myself with this example since i suggested earlier that naïve set-theoretic semantics is maximally closed and syntax is maximally open...
:cry:
uh, well. anyway.
i think what i wanted to bring up / ask about was like... how this kind of phenomenon plays into open system stuff
e.g., i'm looking at a petri net and thinking, "sure you can compose this with other petri nets, but what if you want to have some external mechanism adding things?"
seems like openness opens the door to that question & also opens the door for answers to it—what formalisms are there for that?
@Nathanael Arkor reference?
Yes, that's a fun question. You could make up lots of other "machines" that can hook up to an open Petri net and feed in stuff or remove it. You could try to create a kind of framework for describing all possible such machines, or at least a large class.
I haven't tried that!
I thought about a question that might be the same question, if you have an open system , what's a context/hole that you can plug the system into?
You mean, what's the fully general kind of contexts that it can be plugged into?
A generic but usually not adequate answer is: a pair consisting of a state and a costate
@Mike Shulman: Operadic categories and Duoidal Deligne's conjecture (https://arxiv.org/abs/1404.3886) is the one I'm thinking of, but Koszul duality in operadic categories (https://arxiv.org/abs/1812.02935) also goes into some detail
anyway i chewed on some similar stuff a little while ago
to do with like
gaps and plugging in and formal duality
I might actually have an axiomatic answer to that question completely by accident, as a side effect of something I did with open games. Coming up with any interesting examples is a whole other thing though
and i keep. having this nagging feeling. of analogy to möbius inversions.
@Jules Hedges - okay, that's a pretty easy answer: or a morphism and a morphism . We can compose these with and get a morphism .
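A minimal sketch of that state/costate composition, using matrices as a stand-in category (all names hypothetical): an open system is a matrix X → Y, a state is a column vector I → X, a costate is a row vector Y → I, and plugging the system into the context yields a scalar I → I, i.e. a closed system.

```python
# Close an open system M : X -> Y against a context (state s, costate c)
# by composing  c ∘ M ∘ s : I -> I  via plain matrix multiplication.

def close(costate, matrix, state):
    mid = [sum(matrix[i][j] * state[j] for j in range(len(state)))
           for i in range(len(matrix))]          # M . s : I -> Y
    return sum(costate[i] * mid[i] for i in range(len(costate)))  # c . (M . s)

M = [[1, 2],
     [3, 4]]           # open system with X = Y = 2-dimensional
s = [1, 0]             # state: pick the first basis vector of X
c = [0, 1]             # costate: read off the second coordinate of Y
print(close(c, M, s))  # 3
```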
i feel kind of silly but i swear im not joking :sob:
But I thought you were looking for a much more cosmic answer, @Jules Hedges. Like: we have an open system, which is a morphism in some category, but we want to know what morphisms in other categories we might meaningfully compose it with. That's harder!
errr, not the "möbius inversion formula" which google pulls up if you search "möbius inversions", i mean like a plane being turned inside out as a kind of möbius transformation
anyway i bet the answer to the question is something to do with profunctors
the most obvious case seems to be if you have a functor from open petri nets to some other category, like you gave an example of, and then you compose the image of a petri net w/ a morphism there
but that's just a representable profunctor, im pretty sure
or, uh, something
I might be going off on a total tangent here. In https://arxiv.org/abs/1904.11287 I defined a thing called a "monoidal category with contexts". If is your category of systems, for objects you have a set of contexts into which you can plug a morphism of type . It wants to have type
Being a functor tells you how you can take a system and consider it part of the context of another system
Its interaction with the monoidal product is less obvious; it's not just a monoidal profunctor. I work it out in that paper
In any case, I haven't come up with any example outside of game theory. Maybe it's a useful definition, maybe not
This sounds closer to the "more cosmic" version of the question you raised.
someone read this and tell me the answer, please https://twitter.com/sarah_zrf/status/1230320321871450112
Oh yeah, and I proved that if your category of systems is traced (which at least includes all the compact closed examples), defining the contexts for to be the morphisms satisfies the conditions I wrote
ooooh this looks VERY similar in vibe to the kind of thing im talking about
look at this too https://twitter.com/sarah_zrf/status/1230509512769011719
picking this stuff up again: seems to me like, for Set-valued functors at least, coends would be more naturally phrased as being of things C × C^op → Set, because the thing on the "left" is the start of the gap and the thing on the "right" is the end of the gap https://twitter.com/sarah_zrf/status/1230320321871450112
Cool! Let's talk about this... but tomorrow, I need to go to sleep first
man they werent kidding when they said that the notion of "inside" and "outside" is actually really tricky, gg camille jordan @_@
It is really tricky indeed. But lately I'm finding the notion of input and output, or of "boundary" altogether, more problematic :slight_smile:
So I guess something I really am wrestling with at the moment is that we are trying to develop tools to "compose things together" and to "open systems" that are very general, and often abstract over the underlying systems themselves
At Statebox we needed a way to glue Petri nets together, and I tried different methods for a couple of years, and basically no method ended up doing what we really needed. Then I finally realized that, for our applications, ports were actually a hindrance and not a feature (!!!)
._.
See it this way: a Petri net is not an open object per se. It doesn't know anything about ways to merge and compose with other nets. That's a notion that you basically impose on it; it doesn't come naturally
So in the end I realized that everything that I needed was taking coequalizers on the places and transitions of the net. So I was able to restate things in terms of functorial mappings between free SMCs and to classify the functors that gave me exactly the glue behavior I wanted
But this has nothing to do with ports. You can definitely plug ports on top of this and impose the composition to follow these rules I was talking about above, but that's an arbitrary step which is "extrinsic" from the net itself
In other words, it's you who decides which places/transitions are open and which are closed. The net doesn't know anything about this, and if you are able to merge places/transitions, you are able to merge all of them, not only the open ones.
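Fabrizio's coequalizer gluing can be sketched concretely. The encoding below is a toy of my own (places, transitions, and pre/post arc lists, with union-find computing the quotient), not the Statebox implementation:

```python
# Toy sketch (encoding mine, not Statebox's): a net is (places, transitions,
# pre, post); "gluing along places" is a coequalizer, i.e. quotienting the
# place set by an equivalence and transporting the arcs.

def glue_places(net, pairs):
    """Quotient the net's places by the equivalence generated by `pairs`."""
    places, transitions, pre, post = net
    parent = {p: p for p in places}          # union-find forest
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p
    for a, b in pairs:
        parent[find(a)] = find(b)
    new_places = sorted({find(p) for p in places})
    new_pre  = {t: sorted(find(p) for p in pre[t])  for t in transitions}
    new_post = {t: sorted(find(p) for p in post[t]) for t in transitions}
    return new_places, transitions, new_pre, new_post

# Glue two disjoint one-transition nets  a -t1-> b  and  c -t2-> d
# by identifying b with c, yielding  a -t1-> bc -t2-> d:
net = (["a", "b", "c", "d"], ["t1", "t2"],
       {"t1": ["a"], "t2": ["c"]}, {"t1": ["b"], "t2": ["d"]})
glued = glue_places(net, [("b", "c")])
```

Note that, exactly as said above, nothing in this restricts which places may be merged: ports would be an extra discipline imposed on top.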
It's still quite difficult for me to explain clearly what I mean I guess... xD
oh, i mean sure that's true
One thing you can do is just say "The category of Petri nets has finite colimits, therefore we are able to glue them together".
That's more or less what I ended up doing. Precisely, I classified the colimits that give me gluings that model asynchronous behavior (glue along places) and synchronous behavior (glue along transitions). What complicates things a bit is that not every colimit works. When you consider nets in "context" (that is, with a functor to a semantics attached to them), things become a bit more complicated
But the point is that when you then put ports on the net, and define the behavior in terms of such colimits, the behavior of the ports is less imposed and more organic. I guess this turns out to be similar, more or less, to what the structured cospans approach gives you
I don't understand "synchronizing along transitions" very well yet, alas.
I'm sorta hoping @Jade Master will think about those and enhance our open Petri net stuff to include those. It's probably quite nice.
So my interpretation of synchronizing along transition is the following (and it's weird): Imagine you have the following net:
O -> | -> O -> | -> O
(I hope my poor ascii drawing skills are understandable enough). Now I want to synchronize the two transitions. This intuitively means "Each time the one on the left fires, the one on the right fires instantly after". The synchronized net is then
O -> | -> O O
The place in the middle becomes detached from the main net because no token really stays there anymore; it gets consumed immediately after being produced
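For the chain case in the example above, the fusion can be sketched with a tiny function (encoding and names mine; the general case is the double-pushout rewrite Fabrizio mentions below):

```python
# Toy sketch: synchronize two chained transitions tl and tr by replacing
# them with one fused transition that consumes pre(tl) and produces
# post(tr); the middle place becomes detached, as in the example above.

def synchronize(net, tl, tr):
    places, transitions, pre, post = net
    fused = tl + ";" + tr
    keep = [t for t in transitions if t not in (tl, tr)]
    new_pre  = {t: pre[t]  for t in keep}
    new_post = {t: post[t] for t in keep}
    new_pre[fused], new_post[fused] = pre[tl], post[tr]
    return places, keep + [fused], new_pre, new_post

# O -> | -> O -> | -> O   becomes   O -> | -> O   with p1 detached:
net = (["p0", "p1", "p2"], ["t0", "t1"],
       {"t0": ["p0"], "t1": ["p1"]}, {"t0": ["p1"], "t1": ["p2"]})
sync = synchronize(net, "t0", "t1")
```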
So basically this amounts to going back to the work of Montanari in 1990 and starting to think again about functors that are not "transition-preserving", that is, functors that map a transition of a net to a sequence of transitions of another net. The transitions in the first net represent the synchronization of the transitions in the second.
Filippo Bonchi, Pawel Sobocinski and others gave a port-based, observational-behavior version of this in some papers around 2013, I believe. But really, allowing this more general class of functors encompasses both place and transition gluings quite easily. The catch is that you get "too much stuff", so it's a matter of classifying the functors that actually correspond to place and transition gluings. Both can be characterized as equalizers of some other stuff, which would be a nightmare to define here
But really, this idea of synchronization becomes clear if you imagine a "message layer" on top of the net, that is, you can attach little message wires between transitions that say things like "this firing event happened". This may be pointless from a chemistry point of view but it's badly needed if you want to use Petri nets in programming
Hmm, your example is (as you warned me) a bit weird because it seems to involve "breaking" things, not just gluing them.
In the end the real insight for me is that transition gluings are synchronous events. You can just "conflate" multiple firings into one, saying "this chain of firings happens instantly". On the contrary, place gluings are asynchronous: when you glue places you are conflating "bags of resources", but said resources do not have to be consumed instantly. This is also the main reason why gluing along places is more problematic wrt the reachability relation of the net, while transition gluing is usually a bit friendlier
John Baez said:
Hmm, your example is (as you warned me) a bit weird because it seems to involve "breaking" things, not just gluing them.
Indeed when you spell things out this gluing amounts to a double pushout rewrite in the category of free symmetric strict monoidal categories and functors between them, so there is definitely an "erasing" part going on
You first introduce a new transition representing the synchronization of the two, and then erase the old ones. This seems absurd when you consider the net as is, but when you consider it along with a semantics it makes more sense, since this new transition is mapped exactly to the composite of the images of the two old generators in the semantics
I guess this is related to what @sarahzrf was saying. I always work with nets connected to some sort of semantics (usually types and functions in some programming language), so for me a transition always corresponds to "doing a thing" in this target category, and all the net operations I do need to preserve this assignment of meaning
Anyway it's super late here and I'm going to sleep. All this stuff is explained in a new version of a paper I put on the arXiv some time ago. I wanted to submit it somewhere before updating the arXiv version but then the apocalypse happened, so I'm a bit confused wrt what to do :confused:
You could submit it to the journal Mathematical Structures in Computer Science; they seem pretty good. Or are all journals beneath contempt for computer scientists?
yes
Okay. There will also be a proceedings for ACT2020... but that conference is probably less prestigious than older ones. Are the older ones too stodgy to switch to meeting online?
i only posted that as a joke ;p
although now that i think about it, i feel like ive heard of there being a stereotype of cs people always going for conferences and not journals, so maybe it wasnt entirely wrong
Fabrizio must fit that stereotype at least a little, since he doesn't know what to do with a paper now that coronavirus has struck! As a mathematician my reaction would be "publish it, of course" - the journals haven't shut down.
afaik most conferences are proceeding just going virtual
I'm not sure it's a stereotype rather than publishing in conferences being the de facto norm for CS
for whatever reason, there's a culture of conference publications in CS
oh oops
Stereotypes can be true.
what's a presheaf on petri nets like?
i'm imagining, like...
a presheaf on petri nets should correspond to a "type of object that can act as a sink for tokens"
actually no, not just a sink i guess, but something that "drives the output end"
yoneda-embedding a set S of places gives you the presheaf where the type of object is "petri nets with output places S"
hmm, presheaves have always for a while felt rather coinductive to me
I've thought a bit about this question of which contexts I can plug an open Petri net or other open gizmo into. It is definitely the case that for a closed Petri net, you get to decide which places to think of as 'inputs' or 'outputs', so you have a lot of freedom if you start with a closed Petri net or two and ask whether you can 'open' them in a way that they can be composed. Similarly, in hypergraph categories, or whenever you can duplicate inputs/outputs, the interfaces seem to lack a little luster, as you can just copy (or delete?) things as needed to get interfaces to match up.
I guess I was coming more from the perspective of: given some 'typed' hole as an interface, I'd like to 'query' my Petri net library for something to stick in there, but in these setups the collection of candidates is 'big.' You can add some conditions on the legs of your cospans, but it is a bit unsatisfying. I'm not sure if it was a motivation, but I gather it is related to the stuff Pawel's group was doing at ACT Leiden with bicategories for open systems and, later, the stuff David and Brendan were doing with po-categories, where you have some posetal structure on the morphisms and you can just ask for one of the best ones, whichever end that is for you.
@Fabrizio Genovese I like this idea about working with free symmetric monoidal categories to do gluings rather than pushouts. Symmetric monoidal categories are what Petri nets are really trying to get at in some sense so you can make things work better by just working with what you're morally describing.
@sarahzrf I bet presheaves on Petri nets are probably pretty cool. @John Baez wrote this blog post about the depleted version of presheaves on Petri nets: https://johncarlosbaez.wordpress.com/2019/10/06/quantales-from-petri-nets/
In this post a depleted presheaf on the free category on a Petri net is a downset: a set of markings which is closed under reachability.
I assume that "presheaf on the free category of a petri net" is what you meant by "presheaf on a petri net"
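The "closed under reachability" condition is easy to see on a toy net (the encoding of markings as sorted tuples is my own): start from a set of markings and close it under firing.

```python
# Toy check of "downset = set of markings closed under reachability":
# markings are multisets of places, encoded as sorted tuples (encoding mine).

def fire(marking, pre, post):
    """Fire a transition consuming `pre` and producing `post`, if enabled."""
    m = list(marking)
    for p in pre:
        if p not in m:
            return None          # transition not enabled at this marking
        m.remove(p)
    return tuple(sorted(m + list(post)))

def reachability_closure(markings, transitions):
    """Smallest set containing `markings` and closed under firing."""
    closed = set(markings)
    frontier = list(markings)
    while frontier:
        m = frontier.pop()
        for pre, post in transitions:
            m2 = fire(m, pre, post)
            if m2 is not None and m2 not in closed:
                closed.add(m2)
                frontier.append(m2)
    return closed

# Net with one transition a -> b, starting from the marking {a, a}:
D = reachability_closure({("a", "a")}, [(["a"], ["b"])])
```

Here `D` ends up containing {a,a}, {a,b} and {b,b}: everything reachable from the starting marking, as the downset picture predicts.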
@Sophie Libkind The R-action is more general but the way I was thinking about it was probably too ambitious. Being able to glue R-actions together like this would mean being able to do things like combine two "2-body problems" together to get a "3-body problem". This is a lot to ask for because the dynamics of the 3-body problem is much more complicated than that of the 2-body problems.
When I naively tried to go ahead and make structured cospans of dynamical systems, I ran into an issue.
ahhh actually i meant presheaves on the category of open petri nets john talked about in this talk
To compose two open dynamical systems you need a span from the output of the first system to the input of the second system. To compose them you want to take their pushout. Finding such a span in Dynam to take a pushout over seems very rare/difficult. It would mean finding a dynamical system which simulates the behavior of both the left and the right open system. If you could find something like this you would be good, but I don't think that is possible.
Oh okay Sarah
like... say we have some notion of "device that can be hooked up to a petri net's output places to drive them"
then this defines a presheaf—
F(S) for a place set S is the set of devices that can be hooked up to the set of output places S
Right. There's a lot of them. Let me just watch that part real quick :nervous:
& then we can hook up a petri net with output places S to such a thing to get the action on morphisms
say the elements of F(∅) can be seen as fully-autonomously-functioning systems
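sarahzrf's F can be illustrated with a deliberately silly stand-in for "devices" (everything here is a made-up toy of mine, not a claim about the actual category of open nets): take a device on S to be an assignment of a role to each place, so that a map of place sets acts contravariantly by precomposition.

```python
# Toy illustration (mine) of why "devices hookable to output places S" is
# contravariant: a device on S assigns a role to each port, and a
# relabelling f : S -> T pulls a device on T back to a device on S.

from itertools import product

def F(S):
    """All devices on the finite place set S (here: port-role assignments)."""
    roles = ["sink", "source"]
    return [dict(zip(S, r)) for r in product(roles, repeat=len(S))]

def F_mor(f):
    """Contravariant action: f : S -> T (a dict) sends devices on T to devices on S."""
    return lambda device: {s: device[f[s]] for s in f}

S, T = ["p", "q"], ["x"]
f = {"p": "x", "q": "x"}          # collapse both places onto one port
d = {"x": "sink"}                 # a device on T
pulled = F_mor(f)(d)              # the induced device on S
```

In this toy F(∅) has exactly one element, the device that does nothing, which loosely matches the "fully autonomous" reading above.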
@Jade Master i mean a category of structured cospans whose morphisms are petri nets
im not too picky beyond that
@Jade Master In 1986 I had several long discussions with Stephen Wolfram who commented that unifying physics could require the underlying mathematics to be unified. I was working on extending tetration to the complex numbers. Wolfram was interested in generalizing my work to fractionally iterated dynamics. [BPIF.pdf] I'm writing up my research that deals with a combinatorial analysis of smooth iterated functions. (https://categorytheory.zulipchat.com/user_uploads/21317/NXh9pqM9V6ykkx-TelXJ-OYM/BPIF.pdf)
Hm interesting. What connection do you think this has with dynamical systems?
@Jade Master I have a very general extension of iterated functions, more general than dynamical systems in physics. I'm using combinatorics to express the Taylor series of a general dynamical system.
Oh. I see how discrete dynamical systems are iterated functions but I'm not sure how continuous ones would be.
@Jade Master Yes, that is my unique research. Given a map, adding a symmetry results in a flow. I'm claiming all maps are also flows, except at superattracting fixed points. Consider how extending tetration to the complex numbers raised the same sort of issues as going from discrete dynamics to continuous dynamics.
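For what it's worth, the simplest instance of "a map is also a flow" is linear (the example is mine, not from the research above): f(x) = 2x is the time-1 slice of the flow phi(t, x) = 2^t x, which also yields fractional iterates such as a half iterate.

```python
# Example (mine): embedding the map f(x) = 2x in the continuous flow
# phi(t, x) = 2^t x.  phi(1, -) recovers f, and phi(0.5, -) is a
# "half iterate": applying it twice gives f.

def phi(t, x):
    return (2.0 ** t) * x

def f(x):
    return 2.0 * x

def half(x):
    return phi(0.5, x)
```

Superattracting fixed points, where the derivative vanishes, are exactly where this kind of embedding breaks down, as noted above.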
I'm afraid this is far from my expertise.
John Baez said:
You could submit it to the journal Mathematical Structures in Computer Science; they seem pretty good. Or are all journals beneath contempt for computer scientists?
Do you remember that tweet of mine when I said that I was appalled by the latex class the journal used, that wasted 4 hours of my time with no success whatsoever? Well... :slight_smile:
sarahzrf said:
F(S) for a place set S is the set of devices that can be hooked up to the set of output places S
In my experience, the difficult part of using (pre)sheaves is not defining what they do on objects (that's usually intuitive) but what they do on morphisms. They need to be functors after all, and in many contexts the recipe "I attach to this thing the set of things that..." doesn't say clearly what the functions between these sets should do, or how they relate to the underlying topology.
I'm working with @Matteo Capucci on stuff that can be extended to sheaves of petri nets, but the focus is the opposite. It's much more fun to have petri nets as the codomain of your presheaf. This represents something akin to "attaching a computation device to this point in space". This is also a context where you really need open nets to make sense of how morphisms in the underlying topology are transported to the net layer.
We're going a bit slow with this lately, we'll probably open a topic in #applied category theory about this
well yes, note that i said "can be hooked up", and then added
& then we can hook up a petri net with output places S to such a thing to get the action on morphisms
Fabrizio Genovese said:
Do you remember that tweet of mine when I said that I was appalled by the latex class the journal used, that wasted 4 hours of my time with no success whatsoever? Well... :)
No, I don't remember that. When @Joe Moeller published there he found out that the latex class used by the journal doesn't work; I believe he emailed an editor and found out that you shouldn't bother trying to use it.
I can't even remember how I dealt with this issue when I published in this journal... but whatever I did, it apparently wasn't painful enough for me to remember it!
The University of California has a deal with this journal that lets us publish papers there open-access for free. That's worth something.
Yes, I wrestled with the MSCS class for a bit, but then I told the editor I was having problems and he said to just send it as is, and they took care of it. So I didn't end up having to do anything.
Also yes, UC and CUP have a deal so UC authors get open access fees covered. As far as I can tell, I was the second paper published open access in MSCS at all!
https://www.cambridge.org/core/journals/mathematical-structures-in-computer-science/open-access
Joe Moeller said:
Also yes, UC and CUP have a deal so UC authors get open access fees covered. As far as I can tell, I was the second paper published open access in MSCS at all!
https://www.cambridge.org/core/journals/mathematical-structures-in-computer-science/open-access
Then I'll probably mail them and do the same! Thanks!
Jade Master said:
Sophie Libkind The R-action is more general but the way I was thinking about it was probably too ambitious. Being able to glue R-actions together like this would mean being able to do things like combine two "2-body problems" together to get a "3-body problem". This is a lot to ask for because the dynamics of the 3-body problem is much more complicated than that of the 2-body problems.
This reminds me of something I've been running up against. If I only know the behaviors of two systems, then if I try to compose them by identifying part of their state spaces then I really don't know what the total system is doing; the interaction could completely change the behavior of the individual systems! On the other hand, if I know something about the mechanism of why the behaviors arise (say I have a vector field on the state space), then if I compose dynamical systems I can "add" the effects of the two systems on the shared states.
Jade Master said:
To compose two open dynamical systems you need a span from the output of the first system to the input of the second system. To compose them you want to take their pushout. Finding such a span in Dynam to take a pushout over seems very rare/difficult. It would mean finding a dynamical system which simulates the behavior of both the left and the right open system. If you could find something like this you would be good, but I don't think that is possible.
What are you taking to be Dynam here?
@Sophie Libkind wrote:
This reminds me of something I've been running up against. If I only know the behaviors of two systems, then if I try to compose them by identifying part of their state spaces then I really don't know what the total system is doing; the interaction could completely change the behavior of the individual systems!
So you're trying to compose systems by, for starters, doing a pushout of their state spaces? It's interesting that in physics people never try to compose systems this way (as far as I know). They typically use a pullback. So I have almost no intuition for how I'd do a pushout... or under what circumstances I'd want to. Do you have examples in mind?
In a sense, the formalisms of classical mechanics (the Hamiltonian and Lagrangian formalisms) can be seen as providing rules for making it pretty easy to compose physical systems. But these rules are largely left informal. In my work with Adam Yassine and David Weisbart we are trying to formalize these.
i wonder if there's a miscommunication here somewhere? like
often state spaces arise from contravariant constructions such as homming from some sort of base object into a fixed codomain, so you might do a pushout on that base object and that corresponds to a pullback of the state spaces or something
e.g., you're doing pushouts of petri nets, but i imagine the spaces of markings are getting pullback'd
or, say, dont you get quantum state spaces from classical ones in a way like this?
@John Baez I am thinking of pullbacks (as in the Dynam category you define in the Rx network paper)! I am interested in where else we might find this kind of construction
And also how to think about composition in this "sharing reasources/states" way vs. an input/output approach
sarahzrf said:
often state spaces arise from contravariant constructions such as homming from some sort of base object into a fixed codomain, so you might do a pushout on that base object and that corresponds to a pullback of the state spaces or something
Yes! Perhaps that was the miscommunication, since I'm often thinking of state spaces as Euclidean spaces induced by FinSet
@Sophie Libkind - okay; when you write:
if I try to compose them by identifying part of their state spaces then I really don't know what the total system is doing
that sounds like a pushout. To do a pushout of spaces $$X$$ and $$Y$$ we take their disjoint union and then identify some points of $$X$$ with some points of $$Y$$. To do a pullback we take the product and then take a subspace of that.
I was imagining that if I have state spaces $$X$$ and $$Y$$ which are identified along $$Z$$ via maps $$Z \to X$$ and $$Z \to Y$$, then the total system is the pullback $$X \times_Z Y$$
Another way of saying my previous comment is: if I have a vector field on $$X$$ and on $$Y$$ (Riemannian spaces) individually, then I can pull them back to a vector field on $$X \times_Z Y$$ by "adding along $$Z$$". However if I only have behaviors on $$X$$ and $$Y$$ then it is not clear how to pull back the behaviors to $$X \times_Z Y$$.
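In finite toy form (all notation and data mine), the composite's states are exactly the pairs agreeing on the shared part, i.e. the pullback:

```python
# Toy sketch (notation mine): the composite state space as a pullback.
# States of the composite are pairs (x, y) agreeing on the shared part Z.

def pullback(X, Y, f, g):
    """X x_Z Y for f : X -> Z and g : Y -> Z."""
    return [(x, y) for x in X for y in Y if f(x) == g(y)]

# Two systems each carrying a shared coordinate z as their first component:
X = [(z, a) for z in (0, 1) for a in ("l0", "l1")]
Y = [(z, b) for z in (0, 1) for b in ("r0",)]
composite = pullback(X, Y, lambda s: s[0], lambda s: s[0])
```

A vector field on the composite can then be formed by adding the two systems' contributions to the shared coordinate, which is the "adding along Z" step; mere behaviors give you nothing to add.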
Okay, so we're doing pullbacks, not "identifying parts of state spaces". Then yes, I agree this is a fun thing to think about. It would be cool if one could prove there's no good solution to this problem unless one uses some of the tricks physicists use - namely, to not consider all dynamical systems, but only certain specially nice ones.
i still think "identifying parts" is going on, it's just not that the state spaces are what are having parts identified
is there a good name for the thing that gets turned into a state space thru a contravariant construction?
"object space", maybe?
perhaps that's biased toward the classical mech case
I like classical mechanics. So yes, if you take two gizmos made of rocks and springs and identify a rock in one with a rock in another, we need to do a pullback of their state spaces to get the new state space.
I guess it's because the state space of a finite set $$S$$ of points sitting in a space $$X$$ is $$X^S$$, which is contravariant in $$S$$.
I don't know much about classical mechanics, but can you give an example or idea of a rule that makes physical systems easy to compose?
that's what i said, john :p
Well, one rule is "add the Hamiltonians and then add in some interaction terms to get the Hamiltonian of the composite system".
I know it's what you said, Sarah.
I do this thing where if I don't completely understand what someone said at first, I try to say it in a way I understand.
In these systems is it typically a vector field that's driving the dynamics?
yeah ok i live in a glass house here
@Sophie Libkind In classical mechanics the dynamics is given by a vector field, but the vector field is in turn determined by a function on the state space, the "Hamiltonian".
And then there's another approach, the "Lagrangian" approach, which works a bit differently, but similarly.
Oh interesting! I would like to learn more about this. Do you have a recommendation for where to start?
I've also been wondering about circuits. From what I can tell many of the equations like Kirchhoff's and Ohm's laws govern the behavior of the circuit, but don't explicitly give how the dynamics flow in the way that a vector field would. Do you know if there are more explicit ways of describing a circuit?
interesting
i think one of the things about circuits is that youve squished your system down to being largely 1-dimensional, so you dont need vector fields as much
...is the comforting lie i believe as someone who has never had to do enough electronics to deal with the realities of EM phenomena :upside_down:
but if you idealize your circuit as really lying within 1-dimensional wires, you can largely make do with scalars if you squint
uh, i guess that's not quite what you asked, i'll stop since this is far from my specialty :zip_it:
I guess I am imagining that if you have say a NOT gate at a steady state. Then you flip the input. The current and voltage must flow to a new steady state and I am wondering what governs the dynamics of that flowing
This is certainly not my specialty either :)
i mean, you can use ohm and kirchhoff along with some other ones to get diffeqs to solve for continuous dynamics of a circuit involving, say, capacitors
but you're asking about if there's formulations of that which make the dynamic quality more evident, right?
Well, the state space of a circuit with $$n$$ wires is $$\mathbb{R}^{2n}$$.
sure, that's 2n scalars ;)
But dynamics for the circuit is given by a vector field on $$\mathbb{R}^{2n}$$
actually, here's a question
when i was mucking around with kirchhoff's laws a bit a while back, i was thinking there seemed to be some arbitrary choices of orientation involved in order to make current direction a signed quantity
seemed more natural to regard stuff like that as an abstract vector space associated to the wire, or something
is there a nice formalism for that kind of thing in the context of stuff like kirchhoff's current law?
Yes.
There's a huge pile of stuff about electrical circuits from a categorical point of view here:
It's probably rather tiring to read it all, but it's full of fun stuff.
I've read some of it and certainly had a lot of fun!
54 pages :scream:
As far as I can tell it takes a behavioral perspective, and I think this prevents me from doing the pullback to compose systems that I can do with physical systems
Is there some analogous hamiltonian or lagrangian formalism for circuits?
sarahzrf said:
is there a good name for the thing that gets turned into a state space thru a contravariant construction?
Maybe ‘effect’.
heuh
not sure i see it—what example do you have in mind?
John Baez said:
Okay, so we're doing pullbacks, not "identifying parts of state spaces". Then yes, I agree this is a fun thing to think about. It would be cool if one could prove there's no good solution to this problem unless one uses some of the tricks physicists use - namely, to not consider all dynamical systems, but only certain specially nice ones.
I am curious about such tricks. Is it about dynamical systems that are nice individually, or is it about two systems relating nicely, like a transversality condition or something?
sarahzrf said:
not sure i see it—what example do you have in mind?
I was thinking of quantum mechanics where ‘kets’ are states and ‘bras’ are effects.
But thinking a bit more about it, I think it was a bad answer and a misguided analogy, as this takes place inside the state space, without answering the question what it is the state space of.
yeah, in QM the relevant thing would be the space of classical states i believe
well, configurations, perhaps, not states?
For circuits made of capacitors and inductors there's an equivalent Hamiltonian and Lagrangian formalism, since capacitors are analogous to "mass" in classical mechanics and inductors are analogous to "springs". But resistors are analogous to "friction", and systems with friction don't have a good Hamiltonian formalism.
But our framework for circuits does have a lot of pullbacks lurking in it. Admittedly, we emphasize the pushouts - that's how you compose decorated cospans. But if you prefer pullbacks, just work in the opposite categories! They're actually fairly natural in here.
John Baez said:
For circuits made of capacitors and inductors there's an equivalent Hamiltonian and Lagrangian formalism, since capacitors are analogous to "mass" in classical mechanics and inductors are analogous to "springs". But resistors are analogous to "friction", and systems with friction don't have a good Hamiltonian formalism.
I don't really know what I'm saying, but could Poisson geometry be of any use?
John Baez said:
For circuits made of capacitors and inductors there's an equivalent Hamiltonian and Lagrangian formalism, since capacitors are analogous to "mass" in classical mechanics and inductors are analogous to "springs". But resistors are analogous to "friction", and systems with friction don't have a good Hamiltonian formalism.
This surprises me a bit: I've always considered capacitors and inductors as sort of duals of each other, especially considering that in the AC regime capacitors shift frequencies 90 degrees forward and inductors 90 degrees backwards (or it was the opposite, I can't recall). I can understand more or less the intuitive idea of capacitors = mass, but inductors = springs doesn't really click for me. Also, I've always drawn relationships between circuits and hydrodynamics: difference of potential is difference of potential energy (how high your water tank is), resistance is friction, intensity is flow, a capacitor is like a dam, etc etc etc. Is the spring interpretation of inductors related to this as well?
@Sophie Libkind I was talking about the category of dynamical systems I talked about in this blog post: https://jadeedenstarmaster.wordpress.com/2019/03/31/dynamical-systems-with-category-theory-yes/
It is the category where objects are actions of $$\mathbb{R}$$ on a diffeological space (a generalization of manifolds with better categorical properties; in particular, diffeological spaces have internal homs and are self-enriched). Let $$\mathsf{Diff}$$ be the category of diffeological spaces. Then it turns out that an $$\mathbb{R}$$-action is the same as a $$\mathsf{Diff}$$-enriched functor $$\mathbb{R} \to \mathsf{Diff}$$, where $$\mathbb{R}$$ is regarded as a one-object $$\mathsf{Diff}$$-category.
Then morphisms will be $$\mathsf{Diff}$$-enriched natural transformations. It turns out that this is the same thing as an equivariant map.
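Functoriality of such an enriched functor is exactly the flow law, which is easy to sanity-check on a concrete flow (example mine):

```python
# A concrete R-action (example mine): the flow phi(t, x) = e^t x of the ODE
# x' = x.  The action axioms phi(0, x) = x and phi(s + t, x) =
# phi(s, phi(t, x)) are the functor laws for the one-object category R.

import math

def phi(t, x):
    return math.exp(t) * x

identity_gap = abs(phi(0.0, 2.0) - 2.0)
composition_gap = abs(phi(0.3 + 0.7, 2.0) - phi(0.3, phi(0.7, 2.0)))
```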
It's true. When I was thinking about pushouts of these sort of dynamical systems I was trying to compose behaviors rather than sources of those behaviors. There's no natural way to do that like "adding vector fields". That's why I didn't get very far.
It seems like you ran into the same problem when trying to do the same thing but with pullbacks instead of pushouts.
@Daniel Geisler might be interested in the category I wrote about in this blog post. It seems closer to the category of dynamical systems that they are looking for maybe.
Matteo Capucci said:
John Baez said:
For circuits made of capacitors and inductors there's an equivalent Hamiltonian and Lagrangian formalism, since capacitors are analogous to "mass" in classical mechanics and inductors are analogous to "springs". But resistors are analogous to "friction", and systems with friction don't have a good Hamiltonian formalism.
I don't really know what I'm saying, but could Poisson geometry be of any use?
Hamiltonian mechanics has the law of conservation of energy built in, so it's not good for handling friction (unless you want to keep track of the position of every molecule, in which case it becomes clear that friction is just transferring energy to molecular motion). Symplectic geometry, and more generally Poisson geometry, are forms of geometry designed for Hamiltonian physics. So they are not good for handling friction.
Attempts to handle friction in a nice geometrical way include the GENERIC formalism, where you have a manifold equipped with both a Poisson structure and a Riemannian metric, and dynamics is described using two functions: a Hamiltonian (describing the energy) and an entropy function.
I keep meaning to study this more.
John wrote:
For circuits made of capacitors and inductors there's an equivalent Hamiltonian and Lagrangian formalism, since capacitors are analogous to "mass" in classical mechanics and inductors are analogous to "springs". But resistors are analogous to "friction", and systems with friction don't have a good Hamiltonian formalism.
Fabrizio wrote:
I can understand more or less the intuitive idea of capacitors = mass, but inductors = springs doesn't really click for me.
Good, because I got it backwards! Capacitors are like springs: the electrons bunch up against the capacitor and push back. Inductors are like mass: a current creates a magnetic field that creates an electric field that pushes the electrons along, giving the current "inertia". And resistors are like friction. You can see this from the equation describing the position $$x$$ of a mass $$m$$ attached to a spring with spring constant $$k$$ experiencing friction with damping constant $$c$$, which is
$$m \ddot{x} + c \dot{x} + k x = 0$$
This is just like the equation describing the charge $$Q$$ on a loop of wire containing an inductor of inductance $$L$$, a resistor of resistance $$R$$ and a capacitor of capacitance $$C$$:
$$L \ddot{Q} + R \dot{Q} + \frac{Q}{C} = 0$$
except note that we define the current $$I = \dot{Q}$$, and capacitance is analogous to $$1/k$$.
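The analogy can be checked numerically: with m = L, c = R and k = 1/C, the two equations are literally the same ODE (the parameters and the crude Euler integrator below are mine):

```python
# Numerical check (parameters mine): m x'' + c x' + k x = 0 and
# L Q'' + R Q' + Q/C = 0 coincide under m <-> L, c <-> R, k <-> 1/C.

def simulate(a, b, c_coeff, x0, v0, dt=1e-3, steps=2000):
    """Euler-integrate a x'' + b x' + c_coeff x = 0; return the final x."""
    x, v = x0, v0
    for _ in range(steps):
        acc = -(b * v + c_coeff * x) / a
        x, v = x + dt * v, v + dt * acc
    return x

m, c, k = 2.0, 0.5, 3.0           # mass, damping, spring constant
L, R, inv_C = m, c, k             # the matched circuit: capacitance C = 1/k
x_final = simulate(m, c, k, 1.0, 0.0)       # displacement of the mass
q_final = simulate(L, R, inv_C, 1.0, 0.0)   # charge on the capacitor
```

Both runs trace out the same damped oscillation, which is the mass=inductor, spring=capacitor, friction=resistor dictionary in action.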
Also, I've always drawn relationships between circuits and hydrodynamics...
The analogy between mechanics, electronics and hydraulics is famous, and it's part of a bigger set of analogies:
https://arxiv.org/abs/1504.05625
Wouldn't friction in an open system be equivalent to a dynamical system that is not measure preserving?
I think the word "equivalent" is a bit too strong, but it's true that time evolution is measure-preserving in the symplectic approach to Hamiltonian mechanics - this is Liouville's theorem - and no longer measure-preserving when we generalize classical mechanics to systems with friction.
Joachim Kock said:
John Baez said:
Okay, so we're doing pullbacks, not "identifying parts of state spaces". Then yes, I agree this is a fun thing to think about. It would be cool if one could prove there's no good solution to this problem unless one uses some of the tricks physicists use - namely, to not consider all dynamical systems, but only certain specially nice ones.
I am curious about such tricks. Is it about dynamical systems that are nice individually, or is it about two systems relating nicely, like a transversality condition or something?
Physicists focus on Hamiltonian and Lagrangian systems, and focus on the many ways in which they are nice individually: e.g. they automatically have a conserved quantity called 'energy', and more generally have a conserved quantity for each 1-parameter group of symmetries (Noether's theorem).
But physicists describe complicated multi-part systems by describing each part separately and then 'gluing together' the descriptions in a way that's not been fully formalized: part of the point of all the homework problems in physics classes is to pick up the knack for doing this. E.g. "you have a ball rolling down an inclined plane, which sits on a wheeled cart..."
With Weisbart and Yassine I'm working out how this 'gluing together' works. Part of it involves composing spans of manifolds, or symplectic manifolds, and this requires a suitable transversality condition. But the more 'physicsy' part is to get the Lagrangian or Hamiltonian for the whole system from the Lagrangians or Hamiltonians of the parts.
Before Stephen Wolfram wanted a New Kind of Science, he wanted a unified science, discrete and continuous, with different descriptions for different types of systems, that could model both chaotic and non-chaotic systems. The result of our conversations is my work in dynamics, using combinatorics and its Taylor series. The combinatorial take was inspired by @John Baez and Flajolet's work. The advantage of this view of a dynamical system is that it can be hierarchically stacked by associating it with addition, multiplication, exponentiation and the hyperoperators.
John Baez said:
With Weisbart and Yassine I'm working out how this 'gluing together' works. Part of it involves composing spans of manifolds, or symplectic manifolds, and this requires a suitable transversality condition. But the more 'physicsy' part is to get the Lagrangian or Hamiltonian for the whole system from the Lagrangians or Hamiltonians of the parts.
Thanks! This is very helpful. I have found a paper by Yassine on this, which I will look into. (I am interested generally in conditions sometimes required for composition, such as transversality, orthogonality, disjointness, independence, and the algebraic structure of such conditions. That I am interested does not mean I understand it (yet). Tricks from physics are always a good source of insight.)
That paper is full of serious mistakes, which the forthcoming papers by Weisbart, Yassine and myself will correct.
As far as transversality goes, the trick is simply to look at spans whose legs are submersions.
Fair enough.
Iterated Entire Function Theorem. The Taylor series of an iterated entire function $f^t$ can be constructed given a fixed point $f(0) = 0$ and $f'(0) = \lambda$.
Proof:
Assume the function $f$ is an entire function. Assume a fixed point at zero. As entire functions are closed under composition, the Taylor series of $f^t$ can be constructed within some radius of convergence if and only if $D^n f^t(0)$ can be constructed for every $n$.
Prove by strong induction.
Basis Steps:
Case $n = 0$. By definition $f^t(0) = 0$, so $D^0 f^t(0)$ can be constructed.
Case $n = 1$. Let $D f^t(0) = \lambda^t$, so $D^1 f^t(0)$ can be constructed.
Case $n > 1$. Assume that $D^k f^t(0)$ can be constructed for all $k$ where $0 \leq k < n$. (Induction Hypothesis)
Induction Step:
$$D^n f^{t+1}(0) = \sum_{k=1}^{n} D^k f(0)\, B_{n,k}\!\big(D f^t(0), \ldots, D^{n-k+1} f^t(0)\big). \quad \text{(Dynamical Recurrence Equation)}$$
Using the Dynamical Recurrence Equation, $D^n f^{t+1}(0) = \lambda\, D^n f^t(0)$ plus terms depending only on $\lambda$, $D^k f(0)$, and $D^k f^t(0)$ for $k < n$. By the strong induction hypothesis, those lower-order terms can be constructed. Therefore the Dynamical Recurrence Equation reduces to a geometric progression in $\lambda$ that can be represented by a summation.
This completes the induction step, showing that $D^n f^t(0)$ can be constructed for all whole numbers $n$.
Dynamical Equation
The Taylor series for $f^t$ is
$$f^t(x) = \sum_{n=1}^{\infty} \frac{D^n f^t(0)}{n!}\, x^n.$$
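A small sanity check of the geometric-progression step (a sketch with a hypothetical quadratic map; $\lambda$ and $a$ are illustrative values): for $f(x) = \lambda x + a x^2$, the second coefficient of $f^{\circ t}$ satisfies the recurrence $c_{t+1} = \lambda c_t + a \lambda^{2t}$, which sums to the closed form $c_t = a \lambda^{t-1}(\lambda^t - 1)/(\lambda - 1)$.

```python
from fractions import Fraction

lam, a = Fraction(1, 2), Fraction(3)   # illustrative multiplier and quadratic term

def compose(p, q):
    """Coefficients [linear, quadratic] of p(q(x)), truncated past x^2."""
    return [p[0] * q[0], p[0] * q[1] + p[1] * q[0] ** 2]

f = [lam, a]          # f(x) = lam*x + a*x^2, fixed point at 0
iterate = f[:]        # f itself, t = 1
for t in range(2, 6):
    iterate = compose(f, iterate)   # now iterate = f composed t times
    # geometric-progression closed form for the quadratic coefficient:
    closed = a * lam ** (t - 1) * (lam ** t - 1) / (lam - 1)
    assert iterate[1] == closed
```

Exact rational arithmetic makes the comparison an equality rather than a floating-point approximation; the formula requires $\lambda \neq 1$.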
Daniel Geisler said:
Iterated Entire Function Theorem. The Taylor series of an iterated entire function $f^t$ can be constructed given a fixed point $f(0) = 0$ and $f'(0) = \lambda$.
Here is a combinatorial reformulation, using the Faà di Bruno bialgebra of surjections.
We consider formal power series without constant term. The coefficients of a substitution $f \circ g$ are given by the Faà di Bruno formula and involve the Bell numbers, which in turn count partitions of given shapes. For higher iterations of substitution of power series, I think it is fruitful to pass to the coalgebraic side of the duality. Let me briefly recall how that goes.
For $k \geq 1$, let $A_k$ be the linear form on the vector space of power series that returns the $k$th coefficient. The Faà di Bruno bialgebra is the polynomial ring spanned by the $A_k$ with comultiplication dual to substitution. Precisely, $A_n(f \circ g) = \sum A_{(1)}(g) \cdot A_{(2)}(f)$, where $g$ is the inner series. The point is now that the Faà di Bruno bialgebra is the incidence bialgebra of the lattice of partitions modulo 'type equivalence' (Doubilet), or even better: it is the incidence bialgebra of the category of finite sets and surjections (Joyal). In this interpretation, $A_n$ is the isoclass of the surjection $n \twoheadrightarrow 1$. So, for example: $$\Delta(A_3) = A_3 \otimes A_1 + 3\, A_2 A_1 \otimes A_2 + A_1^3 \otimes A_3$$ because the possible factorisations of $3 \twoheadrightarrow 1$ are: $3 \twoheadrightarrow 1 \twoheadrightarrow 1$ (one such); $3 \twoheadrightarrow 2 \twoheadrightarrow 1$ (three such); and $3 \twoheadrightarrow 3 \twoheadrightarrow 1$ (one such).
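The formula can be checked mechanically (a sketch; the coefficient values are illustrative): compose two truncated power series directly and compare the third EGF coefficient with the three terms of $\Delta(A_3)$.

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q, n):
    """Product of polynomials (ordinary coefficient lists), truncated mod x^n."""
    r = [Fraction(0)] * n
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < n:
                r[i + j] += pi * qj
    return r

def poly_compose(p, q, n):
    """p(q(x)) mod x^n; p, q are ordinary coefficient lists starting at x^0."""
    result = [Fraction(0)] * n
    power = [Fraction(1)] + [Fraction(0)] * (n - 1)  # q^0 = 1
    for c in p:
        result = [r + c * w for r, w in zip(result, power)]
        power = poly_mul(power, q, n)
    return result

# EGF coefficients f_1, f_2, f_3 and g_1, g_2, g_3 (illustrative values)
f1, f2, f3 = Fraction(1), Fraction(2), Fraction(3)
g1, g2, g3 = Fraction(1), Fraction(1), Fraction(2)

# ordinary coefficients: nth ordinary coefficient = nth EGF coefficient / n!
f = [Fraction(0), f1, f2 / 2, f3 / 6]
g = [Fraction(0), g1, g2 / 2, g3 / 6]

h = poly_compose(f, g, 4)      # h = f(g(x)), i.e. g substituted into f
h3 = h[3] * factorial(3)       # third EGF coefficient of the substitution

# Faa di Bruno: matches Delta(A_3) = A_3 (x) A_1 + 3 A_2 A_1 (x) A_2 + A_1^3 (x) A_3
assert h3 == g3 * f1 + 3 * g1 * g2 * f2 + g1**3 * f3
```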
Now we are in position to deal with the higher iterations $f^{\circ t}$. Its coefficients are given by iterating the Faà di Bruno comultiplication. Precisely, the $n$th coefficient of $f^{\circ 3}$ is given by $\Delta^2(A_n)$ evaluated on $f \otimes f \otimes f$. This is about counting composable strings of surjections $n \twoheadrightarrow j \twoheadrightarrow k \twoheadrightarrow 1$, and then multiplying the coefficients corresponding to the surjections in the string. For example $$\Delta^2(A_3) = A_3 \otimes A_1 \otimes A_1 + 3\, A_2 A_1 \otimes A_2 \otimes A_1 + A_1^3 \otimes A_3 \otimes A_1 + 3\, A_2 A_1 \otimes A_1^2 \otimes A_2 + 3\, A_1^3 \otimes A_2 A_1 \otimes A_2 + A_1^3 \otimes A_1^3 \otimes A_3,$$ because these are the possible factorisations of $3 \twoheadrightarrow 1$ into three surjections. So the third coefficient of $f^{\circ 3}$ is given by multiplying the tensor factors in each term and summing.
(Fine print: we are talking coefficients of $\frac{x^n}{n!}$ (like in exponential generating functions), and when counting surjections (or composable strings), we are counting isoclasses, not individual surjections.)
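The six-term expression can likewise be checked by brute force (a sketch with illustrative coefficients): compose $f$ with itself three times and compare the third EGF coefficient with the sum over the six composable strings of surjections.

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q, n):
    """Product of polynomials (ordinary coefficient lists), truncated mod x^n."""
    r = [Fraction(0)] * n
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < n:
                r[i + j] += pi * qj
    return r

def poly_compose(p, q, n):
    """p(q(x)) mod x^n; p, q are ordinary coefficient lists starting at x^0."""
    result = [Fraction(0)] * n
    power = [Fraction(1)] + [Fraction(0)] * (n - 1)  # q^0 = 1
    for c in p:
        result = [r + c * w for r, w in zip(result, power)]
        power = poly_mul(power, q, n)
    return result

a1, a2, a3 = Fraction(1), Fraction(1), Fraction(1)   # EGF coefficients of f (illustrative)
f = [Fraction(0), a1, a2 / 2, a3 / 6]                # ordinary coefficients

fff = poly_compose(f, poly_compose(f, f, 4), 4)      # f composed with itself 3 times
h3 = fff[3] * factorial(3)                           # third EGF coefficient

# sum over the six composable strings 3 -> j -> k -> 1 from Delta^2(A_3):
expected = (a3 * a1 * a1
            + 3 * a2 * a1 * a2 * a1
            + a1**3 * a3 * a1
            + 3 * a2 * a1 * a1**2 * a2
            + 3 * a1**3 * a2 * a1 * a2
            + a1**6 * a3)
assert h3 == expected
```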
(For anyone interested in the Faà di Bruno bialgebra, I think it is very nicely explained in the survey [Figueroa--Gracia-Bondía: Combinatorial Hopf algebras in quantum field theory, I, Rev. Math. Phys. 2005]. The part about FdB can be read without background in QFT.)
I just recently heard about Faà di Bruno. Where should I read about it?
Why are you posting your question here? Anyway, Wikipedia ain't bad, particularly this version of Faà di Bruno's formula:
https://en.wikipedia.org/wiki/Fa%C3%A0_di_Bruno%27s_formula#Combinatorial_form
Only because it was just mentioned.
Oh, I see. I usually don't see posts on a topic right above the latest post.
I often narrow down to the particular stream.
@Joachim Kock this is wonderful. It is the first time I have seen someone else use Faà di Bruno with $f^{\circ t}$. I've seen the Combinatorial Hopf article but I need to upgrade my math education.