You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
How to understand "structural stability" mentioned by Rene Thom?
What are the books that I can read about it?
Better not assume too much background
Where does Rene Thom mention that?
In his book Structural Stability and Morphogenesis.
Did you try the Wikipedia article on structural stability? Wikipedia is always a decent place to start for math concepts, and if they have an article it usually has references for more details.
Well for Anosov systems at least I like to think of structural stability as meaning you can couple the system to another one (as a thermostat/thermometer) and still get the results that vary analytically with the coupling strength. I mention this in a couple of places here: https://arxiv.org/abs/2104.00753
@Peiyuan Zhu do you have examples of problems you are interested in solving or are you just doing general research?
I’m just trying to understand what his theory is about, so no specific problems at the moment.
I recommend looking at structural stability first in low dimensional dynamics. Fixed points, limit cycles, Herman rings, strange attractors would be a good start. Plus most of them can be visualized with fractals. Also learning the details of some of the basic bifurcations would be helpful. It has been decades since I read Structural Stability and Morphogenesis, but I remember it as an excellent book. This book is an example of the benefits of reading the works of the masters. I can answer some questions in low dimensional dynamics.
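If it helps to experiment, here is a minimal Python sketch (an illustration only; the logistic map is just an assumed example, not something from Thom's book) showing the simplest of those behaviours: for one parameter value the orbit settles onto a fixed point, and for another onto a period-2 limit cycle, the most basic bifurcation.

```python
# Minimal illustration: iterate the logistic map f(x) = r*x*(1 - x)
# and watch the long-run behaviour change with the parameter r.
def logistic(r, x):
    return r * x * (1 - x)

def long_run(r, x0=0.2, transient=1000, keep=6):
    """Iterate the map, discard a transient, and return the last few states."""
    x = x0
    for _ in range(transient):
        x = logistic(r, x)
    tail = []
    for _ in range(keep):
        x = logistic(r, x)
        tail.append(round(x, 6))
    return tail

print(long_run(2.8))  # settles onto a single fixed point
print(long_run(3.2))  # settles onto a period-2 cycle: the first bifurcation
```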
How should I understand this paragraph about preparation? What is it that's being composed, and with what? What does it have to do with the event? Does this composition have to happen on some space?
It seems like he calls it a displacement. What does he mean by "displacement"? These explanations are too vague for me to comprehend.
Then he seems to attempt to modify dynamical systems to make them qualitative. Can anyone help me with understanding what he means here?
I think that's the function assigning a state to a point.
Daniel Geisler said:
I recommend looking at structural stability first in low dimensional dynamics. Fixed points, limit cycles, Herman rings, strange attractors would be a good start. Plus most of them can be visualized with fractals. Also learning the details of some of the basic bifurcations would be helpful. It has been decades since I read Structural Stability and Morphogenesis, but I remember it as an excellent book. This book is an example of the benefits of reading the works of the masters. I can answer some questions in low dimensional dynamics.
Are most results on structural stability targeting specific models in low dimensions, or are there general results that work for, say, arbitrary systems? If so, would what you listed still be all that's considered "topological invariants"?
@Peiyuan Zhu great question, I have been giving thought to this subject and would love to hear from someone with a strong math background. My understanding is that only in the simplest cases, as in one and two dimensions, is the list of topological invariants fairly well understood. In higher dimensions there are knot invariants and combinatorial complexity to contend with. I don't think it was that long ago that the Hénon map was proven to have a strange attractor.
Is the whole point of considering topological invariants the idea that information comes from dynamics that are substrate-independent and stable under perturbations from the environment?
I imagine you're talking about the same text rather than about topology in general. This is my first time hearing a space described as a "substrate" or as having an "environment". To me, the point of considering topological invariants is that they are things which we can actually compute about a space which tell us something meaningful about it, such as how it relates (or cannot be related) to other spaces.
Peiyuan Zhu said:
Is the whole point of considering topological invariants the idea that information comes from dynamics that are substrate-independent and stable under perturbations from the environment?
For the simplest example using the location of fixed points, the "dynamics" of the system becomes vastly simpler. Computer algebra systems can model the dynamics there with many fewer symbolic terms. More important is the Lyapunov multiplier, the value of the function's first derivative at a fixed point. The multiplier is the dominant driver in the dynamics of a system. It determines the type of fixed point.
If the absolute value of the multiplier is greater than unity, the fixed point is a repellor and thus unstable; when it is less than unity, an attractor exists. Both satisfy the Schroeder functional equation, while when the multiplier is unity the system satisfies the Abel functional equation and the fixed point is meta-stable. These are different symmetry conditions at the fixed point. Other cases exist: when the Lyapunov multiplier is a root of unity, you end up with a limit cycle.
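To make the role of the multiplier concrete, here is a small sketch (an illustration only, again using the logistic map as an assumed example) that computes $f'(x^*)$ at a fixed point and classifies the point by its absolute value, as described above.

```python
# Sketch: classify a fixed point of f(x) = r*x*(1 - x) by the multiplier f'(x*).
def f_prime(r, x):
    return r * (1 - 2 * x)  # derivative of the logistic map

def classify_fixed_point(r):
    x_star = 1 - 1 / r       # the nonzero fixed point of the logistic map
    m = f_prime(r, x_star)   # the Lyapunov multiplier at that fixed point
    if abs(m) < 1:
        kind = "attractor (stable)"
    elif abs(m) > 1:
        kind = "repellor (unstable)"
    else:
        kind = "neutral (|multiplier| = 1)"
    return x_star, m, kind

for r in (2.8, 3.2):
    print(r, classify_fixed_point(r))  # stable for r = 2.8, unstable for r = 3.2
```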
I would be remiss if I didn't touch on the application of CT to dynamics, particularly epidemiology.
Ok here's a very specific question. Can anyone help me with understanding what the author is trying to accomplish here?
In my understanding the author presented a map that doesn't map into its own domain and found an interval-valued inverse so that the fixed point lies in that interval, but the author didn't explain where the inverse comes from, or why it is interesting to find the fixed point of a system that doesn't map to itself.
Maybe someone else can answer it, and I'm not saying it's a bad question, but "can anyone help me with understanding what the author is trying to accomplish here?" isn't what I'd call a specific math question. For example if you asked it on MathOverflow they would shut down that question.
I think it would be good to look at MathOverflow to see examples of what they consider to be specific math questions. I'm not saying you should only ask such questions - MathOverflow is quite strict! But I think it would help you a lot to practice asking such questions, because once you learn how to do it, you'll be able to get a lot of help from mathematicians.
How about the question "how is this set-valued inverse constructed from the solution of that quadratic equation?"
The interval where and that contains the fixed point
The plain solution of the quadratic equation is without that term, but I don't understand why that term is added to get this interval.
Maybe someone can answer that after reading the page you copied here.
John Baez said:
What are the questions that mathematicians ask themselves when they read a mathematical paper?
First I want to make sure I understand the definitions of all the key terms, i.e. the words that appear in the main theorems. By "understand them" I mean:
1) I know them by heart - if someone wakes me up in the middle of the night I can say these definitions.
2) I can explain why the definitions are interesting.
3) I can give examples of things obeying those definitions and things that almost obey them but not quite.
Second I try to understand what the main theorems say. This means:
1) I know examples of things obeying the hypotheses of the theorem (and thus the conclusions).
2) I know examples of things that obey all but one of the hypotheses, and not the conclusions.
3) I can explain why the theorem is interesting. This includes being able to say why the theorem is believable... or, alternatively, why it's shocking. It also includes having some idea of what the theorem is used for.
Also, I was trying to follow what you wrote here about "explaining why the definitions are interesting". If I understand you correctly, this is a legit question?
It's a very important question to ask yourself, but it's sort of a personal question, because everyone has their own concept of "interesting".
So, you'd probably get in trouble asking this question on Mathoverflow.
You could ask that sort of question here, like "Why is the definition of structural stability interesting?" And different people might say completely different things.
I only mention MathOverflow because it's the easiest way to understand a kind of "standard" approach to asking questions on mathematics. I'm not saying it's always the best approach!!!! However, if you don't know this approach, you can't really do mathematics. It's just like if you don't know what a D dominant seventh chord is, you can't really communicate with classical musicians.
Peiyuan Zhu said:
The plain solution of the quadratic equation is without that term, but I don't understand why that term is added to get this interval.
So, we have $y = f(x)$. Given some output $y$ from $f$, we want to figure out what values of $x$ could have been provided. That is, we want to find all the $x$ that satisfy $f(x) = y$. This gives us a quadratic equation in $x$, which we wish to solve for $x$. Rearranging, we can write the two solutions for $x$ explicitly.
Hope this helps!
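For a concrete illustration, here is how that computation goes for a hypothetical quadratic map of the form $f(x) = a\,x(1-x)$ (an assumption purely for illustration; the map in the book may differ):

```latex
% Illustration only, assuming f(x) = a x (1 - x):
y = a x (1 - x)
\;\Longrightarrow\;
a x^{2} - a x + y = 0
\;\Longrightarrow\;
x = \frac{1}{2}\left(1 \pm \sqrt{1 - \frac{4y}{a}}\right)
```

So under that assumption the set-valued inverse returns both roots whenever $1 - 4y/a \ge 0$, and the empty set otherwise.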
David Egolf said:
Peiyuan Zhu said:
The plain solution of the quadratic equation is without that term, but I don't understand why that term is added to get this interval.
So, we have $y = f(x)$. Given some output $y$ from $f$, we want to figure out what values of $x$ could have been provided. That is, we want to find all the $x$ that satisfy $f(x) = y$. This gives us a quadratic equation in $x$, which we wish to solve for $x$. Rearranging, we can write the two solutions for $x$ explicitly.
Hope this helps!
Would you be able to see how this fits into the fixed point iterations that he's talking about? So it sounds like the idea is that one thinks about what could've produced a value, so one turns a mathematical function into a mathematical relation, and gets the set-valued inverse of that function, which is really a relation. It seems like the whole issue of fixed points is just an additional issue that the author is interested in, am I right?
John Baez said:
It's a very important question to ask yourself, but it's sort of a personal question, because everyone has their own concept of "interesting".
So, you'd probably get in trouble asking this question on Mathoverflow.
You could ask that sort of question here, like "Why is the definition of structural stability interesting?" And different people might say completely different things.
I only mention MathOverflow because it's the easiest way to understand a kind of "standard" approach to asking questions on mathematics. I'm not saying it's always the best approach!!!! However, if you don't know this approach, you can't really do mathematics. It's just like if you don't know what a D dominant seventh chord is, you can't really communicate with classical musicians.
I see, that makes sense! But do you think there's going to be a close-to-unique answer if I ask "why does the author think this is interesting?" Then we're constrained by the context in which the author is giving this discussion.
In terms of what the author is trying to accomplish, it seems to me like they are interested in talking about dynamical systems where multiple past states can lead to the same future state. For example, there will be a whole bunch of states that eventually lead to equilibrium, obtained by repeatedly applying $f^{-1}$ to an equilibrium state.
It sounds a bit strange to me that the author first made a map that doesn't map to itself and then made this adjustment. I always thought dynamical systems map straight back to themselves. That's the definition that I know of.
Peiyuan Zhu said:
Would you be able to see how this fits into the fixed point iterations that he's talking about? So it sounds like the idea is that one thinks about what could've produced a value, so one turns a mathematical function into a mathematical relation, and gets the set-valued inverse of that function, which is really a relation. It seems like the whole issue of fixed points is just an additional issue that the author is interested in, am I right?
I do think the emphasis of the author in this section is in talking about what states could have led to a given state, and it seems likely to me that they are using fixed points as a relevant example to illustrate the idea.
In general if you have a function $f : X \to Y$ you can construct a new function $f^{-1} : Y \to P(X)$, where $P(X)$ is the set containing all the subsets of $X$, and where $f^{-1}(y) = \{x \in X \mid f(x) = y\}$.
Peiyuan Zhu said:
It sounds a bit strange to me that the author first made a map that doesn't map to itself and then made this adjustment. I always thought dynamical systems map straight back to themselves. That's the definition that I know of.
Yeah, that seems weird to me too. Someone more familiar with this kind of thing might have a better idea of what the author intends here.
David Egolf said:
Peiyuan Zhu said:
It sounds a bit strange to me that the author first made a map that doesn't map to itself and then made this adjustment. I always thought dynamical systems map straight back to themselves. That's the definition that I know of.
Yeah, that seems weird to me too. Someone more familiar with this kind of thing might have a better idea of what the author intends here.
My guess is that perhaps the author is trying to give a new view of dynamical systems, where the map doesn't send states back into the same state space. In other words, the system explores new states. So the author begins with a vaguely defined general state space, selects parts of it as the environment, and then selects parts of the environment that are viable or invariant for the system. I think the emphasis of this book is on complex adaptive systems, so I guess this is reasonable if partial specification is the entire purpose of this theory.
David Egolf said:
Peiyuan Zhu said:
Would you be able to see how this fits into the fixed point iterations that he's talking about? So it sounds like the idea is that one thinks about what could've produced a value, so one turns a mathematical function into a mathematical relation, and gets the set-valued inverse of that function, which is really a relation. It seems like the whole issue of fixed points is just an additional issue that the author is interested in, am I right?
I do think the emphasis of the author in this section is in talking about what states could have led to a given state, and it seems likely to me that they are using fixed points as a relevant example to illustrate the idea.
In general if you have a function $f : X \to Y$ you can construct a new function $f^{-1} : Y \to P(X)$, where $P(X)$ is the set containing all the subsets of $X$, and where $f^{-1}(y) = \{x \in X \mid f(x) = y\}$.
So it sounds like an inverse problem. Given a mapping $f$, since one cannot find the fixed point by inverting $f$ to get $f^{-1}$, as it would map outside the domain $[0,1]$, you define an inverse in which the fixed point lies.
I don't think the fixed points are being found by applying this generalized inverse. I think it's more that it's being used to find the set of states that would lead to a fixed point. I'm thinking of $f^{-1}$ as being used to step backwards in time. If multiple past states could have led to the present state (or some state of interest), then there will be multiple elements in the set returned by $f^{-1}$.
This situation is sort of like asking "what's the inverse of the function ?" This function doesn't have an inverse, because it sends multiple inputs to the same output. However, you can specify a set-valued generalized inverse that returns the set of inputs that could have led to a given output. For example, to use your notation, .
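Here is a tiny executable version of that idea (squaring on a small finite set is just a stand-in example; the function under discussion may be different):

```python
# Sketch: the set-valued inverse of a function f : X -> Y sends y to the
# set of all x in X with f(x) = y (possibly empty, possibly several elements).
def set_valued_inverse(f, X):
    def f_inv(y):
        return {x for x in X if f(x) == y}
    return f_inv

X = {-2, -1, 0, 1, 2}
square_inv = set_valued_inverse(lambda x: x * x, X)
print(square_inv(4))  # {-2, 2}: two different inputs lead to the same output
print(square_inv(3))  # set(): no preimages at all
```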
Now you're making me more curious. The author pretty much explains what the problem is. He basically says "yes, usually a dynamical system is a map from a set to itself, but our function does not always map [0,1] to itself, so we replace it by a partial function that is undefined wherever the value would leave [0,1]". And then he starts studying the dynamics of this partial function.
We can still talk about inverse images for a partial function, and I think that's what $f^{-1}$ means: $f^{-1}$ of any point (or set) is the set of elements of [0,1] that the partial function maps to that point (or set).
John Baez said:
Now you're making me more curious. The author pretty much explains what the problem is. He basically says "yes, usually a dynamical system is a map from a set to itself, but our function does not always map [0,1] to itself, so we replace it by a partial function that is undefined wherever the value would leave [0,1]". And then he starts studying the dynamics of this partial function.
From the little bit of reading I've done on dynamical systems, I guess it just feels weird to have a function being used to describe a dynamical system that you can't apply endlessly. I'm used to being able to apply $f$ over and over again, and there never being a reason that I have to stop. But in this example, sometimes $f(x)$ can fall outside the domain of the function $f$, so $f(f(x))$ might be undefined. I suppose this is fine, but it definitely expands my notion of what a dynamical system is.
Is that not like a coalgebra that has exit points? A functor $F(X) = 1 + X$ has a coalgebra $X \to F(X)$. It can exit at $1$, making that the coalgebra of infinite and finite streams.
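For what it's worth, here is a small sketch of that idea (an illustration with a made-up step function): a coalgebra for $F(X) = 1 + X$ is a step map that either returns the next state or signals the exit, and unfolding it from a starting state gives a trajectory that is either finite or goes on forever.

```python
# Sketch: a coalgebra for F(X) = 1 + X is a step function returning either
# the next state or None (the "exit" into the 1 summand).
def trajectory(step, x, max_len=50):
    """Unfold the coalgebra from x, stopping at the exit or after max_len steps."""
    out = [x]
    for _ in range(max_len):
        nxt = step(x)
        if nxt is None:      # landed in the 1 summand: the trajectory is finite
            return out
        x = nxt
        out.append(x)
    return out               # trajectory still going; truncated for printing

# A partial logistic-type step, undefined once the value would leave [0, 1].
def step(x, r=4.5):
    y = r * x * (1 - x)
    return y if 0 <= y <= 1 else None

print(trajectory(step, 0.3))  # a finite trajectory that eventually exits
```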
David Egolf said:
I don't think the fixed points are being found by applying this generalized inverse. I think it's more that it's being used to find the set of states that would lead to a fixed point. I'm thinking of $f^{-1}$ as being used to step backwards in time. If multiple past states could have led to the present state (or some state of interest), then there will be multiple elements in the set returned by $f^{-1}$.
I see. So the question is which states can converge to fixed points. And because we cannot use $f$ to step backward in time, we instead use $f^{-1}$ to step backward in time.
John Baez said:
Now you're making me more curious. The author pretty much explains what the problem is. He basically says "yes, usually a dynamical system is a map from a set to itself, but our function does not always map [0,1] to itself, so we replace it by a partial function that is undefined wherever the value would leave [0,1]". And then he starts studying the dynamics of this partial function.
Is this kind of like saying vague questions are important :p
In case this is relevant, "Universal Coalgebra, a Theory of Systems" https://www.sciencedirect.com/science/article/pii/S0304397500000566
David Egolf said:
From the little bit of reading I've done on dynamical systems, I guess it just feels weird to have a function being used to describe a dynamical system that you can't apply endlessly. I'm used to being able to apply $f$ over and over again, and there never being a reason that I have to stop.
Yes, it's unusual to consider a partial function as a dynamical system, but the category of functions is a subcategory of the category of partial functions, which in turn is a subcategory of the category of relations, so it's not insane to generalize dynamical systems this way - just as in differentiable dynamical systems we restrict ourselves to a smaller category: the category where morphisms are differentiable maps.
In particular, all the powers $f^n$ are perfectly well-defined as partial functions.
Also by the way, you can always turn a partial function $f : X \to Y$ into a function $\tilde{f} : X \to Y + 1$, where the element of the 1-element set means "undefined".
This is a widely used trick.
This tilde thing is a functor, so you get $\widetilde{g \circ f} = \tilde{g} \circ \tilde{f}$.
So, you can think of partially defined dynamical systems with state space $X$ as ordinary dynamical systems on a state space $X + 1$ with one extra element, which means "undefined" - or more dramatically, "dead". :skull_and_crossbones:
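A quick sketch of that trick (an illustration only, using None as the extra "dead" point): every partial map becomes a total map on the enlarged state space, and once the orbit hits the dead state it stays there.

```python
# Sketch: totalize a partial map by adding one extra absorbing value (None),
# so a partially defined dynamical system becomes an ordinary one on X + 1.
DEAD = None

def totalize(partial_f):
    """partial_f returns DEAD where it is undefined; the result is total on X + 1."""
    def f_tilde(x):
        if x is DEAD:
            return DEAD          # once dead, always dead
        return partial_f(x)
    return f_tilde

# The partial logistic-type map again, undefined once it would leave [0, 1].
def partial_step(x, r=4.5):
    y = r * x * (1 - x)
    return y if 0 <= y <= 1 else DEAD

step = totalize(partial_step)

x = 0.3
for _ in range(20):
    x = step(x)      # every iterate is defined; this orbit just dies eventually
print(x)             # DEAD (i.e. None) for this starting point
```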
Isn't this an important view for climate models? It isn't sensible to assume you have a full parameterization of climate dynamics. Maybe it's reasonable to consider when you'll be dead. Or in other words, some essential scarcity constraints are violated.
So I'm still trying to think about how this would relate to Dempster-Shafer theory about upper and lower probabilities, or whether there is a generalization that allows one to talk about both, or some commonality between them. Is there a situation where "probabilities" are undefined so we only have upper and lower bounds? In any case, a belief assignment is a partial specification of probabilities. I think this is something very reasonable to consider, especially since the computational aspect of Dempster-Shafer theory seems unexplored.
I'll have to think about this a bit more. Maybe there are individual dynamics underlying Dempster-Shafer theory that will make sense.
I guess the hypothesis can be: every probability assignment on a frame of discernment is a partially specified local dynamic. And Dempster-Shafer theory is about piecing these local dynamics together to get some global dynamics.
There seems to be some stuff on coalgebras and Dempster-Shafer
e.g "Shallow Models for Non-Iterative Modal Logics"
https://arxiv.org/pdf/0802.0116.pdf
Also this on argumentation:
https://www.cambridge.org/core/journals/mathematical-structures-in-computer-science/article/abs/categorical-approach-to-the-semantics-of-argumentation/C99B0CF320991281D6EAE3A3DD250662
Sadly I won't have time to follow...
John Baez said:
David Egolf said:
From the little bit of reading I've done on dynamical systems, I guess it just feels weird to have a function being used to describe a dynamical system that you can't apply endlessly. I'm used to being able to apply $f$ over and over again, and there never being a reason that I have to stop.
Yes, it's unusual to consider a partial function as a dynamical system, but the category of functions is a subcategory of the category of partial functions, which in turn is a subcategory of the category of relations, so it's not insane to generalize dynamical systems this way - just as in differentiable dynamical systems we restrict ourselves to a smaller category: the category where morphisms are differentiable maps.
Is there anything you recommend for studying the geometry of this kind of system? It looks like the author tries to circumvent differential geometry to arrive at novel notions of stability, so I feel like maybe there's something there. I hope this question is much more specific.
Henry Story said:
There seems to be some stuff on coalgebras and Dempster-Shafer
e.g "Shallow Models for Non-Iterative Modal Logics"
https://arxiv.org/pdf/0802.0116.pdf
Also this on argumentation:
https://www.cambridge.org/core/journals/mathematical-structures-in-computer-science/article/abs/categorical-approach-to-the-semantics-of-argumentation/C99B0CF320991281D6EAE3A3DD250662
Sadly I won't have time to follow...
@Peiyuan Zhu you got lucky, there was a connection to category theory after all!
The categorical point of view on dynamical systems consists of taking an endomorphism $f : X \to X$ of an object $X$ in some category, and then deriving invariants of that pair $(X, f)$ using the structure of that category (or passing to another category and doing so there). As an example, in the category of topological spaces, a dynamical system is a space $X$ equipped with a map $f : X \to X$. We can take the image factorization of this, and consider the properties of that subspace (or similarly for iterates $f^n$). We can pull back subspaces or coverings of $X$ along $f$ and characterize what happens; this is how topological entropy is defined. We can consider the mapping cylinder of $f$ as a space and examine its properties. All of these constructions can be expressed categorically using certain features of the category of topological spaces, and that allows us to generalize them to any category with those same ingredients.
A version of the example above, where we take the map instead to be a partial endomorphism of $[0,1]$ or of $\mathbb{R}$, is most easily presented in the category of partial maps between topological spaces, and some of the above constructions can surely be applied there. Another option is to really think of this as a "relative dynamical system" consisting of a map $f : [0,1] \to \mathbb{R}$ and a further constraining map such as the inclusion $[0,1] \hookrightarrow \mathbb{R}$. Such a pair might be an endomorphism of $[0,1]$ in a category where morphisms are cospans of maps, and different constructions are possible in that set-up.
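Here is a toy finite version of the "images of the iterates" construction mentioned above (an illustration with a made-up map on a six-element set, nothing specific to topological spaces): the images $f^n(X)$ shrink and then stabilize at the eventual image, an invariant of the pair $(X, f)$.

```python
# Sketch: for an endomorphism f of a finite set X, compute the images f^n(X).
# They eventually stabilize; the stable set is the "eventual image" of (X, f).
def iterate_images(f, X, steps):
    current = set(X)
    images = [current]
    for _ in range(steps):
        current = {f(x) for x in current}
        images.append(current)
    return images

# Example: transient states 0, 1, 4, 5 feed into the 2-cycle {2, 3}.
f = {0: 1, 1: 2, 2: 3, 3: 2, 4: 3, 5: 0}.get
for n, image in enumerate(iterate_images(f, range(6), steps=5)):
    print(n, sorted(image))   # the images shrink to {2, 3} and stay there
```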
John Baez said:
Yes, it's unusual to consider a partial function as a dynamical system, but the category of functions is a subcategory of the category of partial functions, which in turn is a subcategory of the category of relations, so it's not insane to generalize dynamical systems this way - just as in differentiable dynamical systems we restrict ourselves to a smaller category: the category where morphisms are differentiable maps.
@Peiyuan Zhu wrote:
Is there anything you recommend for studying the geometry of this kind of system?
Not especially, I'd just make sure to understand the category of sets and partial functions and how it's equivalent to the category of pointed sets - but this is not 'geometry', for that you'd have to think about manifolds and smooth partial functions.
I hope this question is much more specific.
Not really: you still haven't asked what I'd consider a specific math question.
Looking at this https://ncatlab.org/nlab/show/Rel
"Synthetic differential geometry" sounds like the way to go in this case? At the very least I would like to see the old theories as some limiting case.
https://ncatlab.org/nlab/show/synthetic+differential+geometry
Does anybody have a reference about how to talk about Riemannian geometry synthetically?
I found that: Differential Geometry in Toposes. There are a few pages on Riemannian geometry.
I admit that I find it surprising to talk about dynamical systems, then about Rel, and then about synthetic differential geometry and Riemannian geometry. Do you really know what you want to do? If you are able to study dynamical systems in synthetic Riemannian geometry and show how Rel is a degenerate case of this, congrats. But I guess that you are going a little too far in trying to understand advanced concepts before understanding the basic story. The good approach to any subject is first understanding the basics and then using more advanced concepts when you understand why and how they are useful. Trying to put all the complicated words together without knowing why will hardly lead to valuable insights. You need to know the reasons for what you do at each step. It's great that you try to understand all of this, but don't go too fast, and try to understand things deeply if you want to do good work. But I don't know, maybe you have a clear plan in the end :sweat_smile:
Was reading this article: http://pespmc1.vub.ac.be/ATTRACTO.html What is the reason that attractors are a better concept than equilibria or fixed points? Is this the question of whether the uncertainty of a system can be reduced to zero? What are the real-world situations in which this type of situation occurs?
Maybe it’s helpful to consider just thermodynamic systems