Addition, multiplication and exponentiation are all dynamical systems. As they build on each other, shouldn't they be considered examples of open dynamical systems? Aren't others and I really interested in a solid theory of open dynamical systems composed of other open dynamical systems? But can people find further useful generalizations of dynamics without taking on issues like chaos and tetration? I do understand that CT has provided unique insights into chaotic systems.
Let's use addition as a basic dynamical system. Can anyone provide a description of an open and closed version of the addition dynamical system? Don't you need an open dynamical system based on addition to get to the multiplication dynamical system?
As they build on each other, shouldn't they be considered examples of open dynamical systems?
i dont see how that follows
Nor I. "Open dynamical system" means something, which I have attempted to explain, and it's not "builds on each other".
Can we consider addition as a dynamical system? If so, what does "closed" and "open" addition look like?
i guess we probably can? do you mean something like ?
I am not going to play this game: I don't want to consider addition as a dynamical system. But Sarah is willing.
Sorry, my bad. I fold.
no, i think i kinda see the idea—in the naturals, at least, each operator arises by iterating the previous one, so each one is a dynamical system
i just don't see what it has to do with open dynamical systems
You are doing a good job of breathing life into Daniel's idea, Sarah. "Can we see X as Y?" is the sort of question I discourage my grad students from asking, because it puts all the burden on the other person to make sense of what's going on.
You want to talk about open dynamical systems? Mathematics has a history of working to find the simplest of a particular type of model. So I thought, isn't addition a good shot at being a simple dynamical system? I meant to play a game. I seem to have your input that my entire line of thinking is crap. That's OK because my focus is as an activist. Why come to a website if you don't listen to what people communicate with you?
can i preemptively suggest deescalating this
it might be a good idea to step back and try to talk about, like, how one should engage on a platform like this, or something, and probably in off-topic or something rather than the open dynamical systems topic
Sarah's right. I wasn't trying to say Daniel's "entire line of thinking is crap", I was trying to say that it was expressed in a way that made me feel he was leaving it to us to do all the work of making it precise. But what I should have done, and will now do, is just bow out of this conversation!
Hello, my name is Daniel and I'm an open dynamical system... :earth_americas:
I refer to my personal TOE as All is Arithmetic. I wasn't being capricious in referring to addition as a dynamical system. More strictly, it is the successor function, or shift operator, that is the most basic dynamical process and the only one with no fixed point except at infinity. Then come addition, multiplication, exponentiation, tetration, pentation, ... on up to the higher hyperoperators. My specialization is the hyperoperators.
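A minimal sketch of that ladder in Python, for natural-number arguments only (the name hyper and the base cases below are just my rendering of the standard hyperoperator recursion; this says nothing yet about extending to real or complex heights):

def hyper(n, a, b):
    """Hyperoperator H_n(a, b) on naturals: each level iterates the one below."""
    if n == 0:
        return b + 1                 # level 0: successor
    if n == 1:
        return a + b                 # addition = iterated successor
    if n == 2:
        return a * b                 # multiplication = iterated addition
    if b == 0:
        return 1                     # empty iteration at level >= 3
    return hyper(n - 1, a, hyper(n, a, b - 1))

# hyper(3, 2, 4) == 16  (exponentiation: 2**4)
# hyper(4, 2, 3) == 16  (tetration: 2**(2**2))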
Just as I'm interested in the hyperoperator functions, I'm interested in generalized numbers like and . The following provides Julia code where one can see how functions like the can take an iterator in .
As I mentioned before, the model of where is the most general dynamical system I can think of. In the Eighties I was a follower of Stephen Wolfram's work. While he really likes cellular automata, he attempted an inventory of fundamental models of computation including iterated smooth functions. The hyperoperators, pg. 46, are one of these systems.
dynamical systems are commonly formulated on arbitrary smooth manifolds—that's generalizing in a direction orthogonal to you
i don't know whether dynamical systems on manifolds can have t be a complex number or a matrix, but even if it can only be real, that's still a huge generalization
Do you program Julia? http://tetration.org/Combinatorics/Julia/
i don't know julia
What do you know?
off the top of my head: python, ruby, haskell, javascript, java
i mean i can probably read julia
i just have no experience using it
How about Mathematica?
none
I'll check out Python's math packages.
i dont believe in proprietary PLs :halo:
its ok i can check out the julia i can probably figure it out
what are you trying to show me
where
what does that mean?
how are you defining it, i mean
That's the page I showed you. http://tetration.org/Combinatorics/Julia/
i mean, that seems to be the result of some kind of automated computer algebra to find an expression for it
but i'm asking what the definition is that gives rise to that expression
http://tetration.org/Combinatorics/CIGF/
this is pretty impenetrable
can you explain the concepts at all?
(i'm also having trouble seeing where the definition is...)
Absolutely
Have you seen this? It summarizes the core of my work.
i think you linked it before
but i can't follow it at all
it doesn't use mathematical language in standard ways
I take the Taylor series for to where I can then apply Faa Di Bruno's formula. I end up with a system of difference equations.
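For reference, the set-partition form of Faa Di Bruno's formula that this step rests on (a standard statement; $\Pi_n$ here denotes the set of partitions of $\{1,\dots,n\}$):

$$\frac{d^n}{dx^n}\, g\big(f(x)\big) \;=\; \sum_{\pi \in \Pi_n} g^{(|\pi|)}\big(f(x)\big) \prod_{B \in \pi} f^{(|B|)}(x),$$

where $|\pi|$ is the number of blocks of $\pi$ and $|B|$ is the size of a block.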
what does t - 1 mean for GL(n)
OK, good to know the news even if it's not happy news. Tomorrow is the talk on Open Dynamical Systems.
?
what news? what talk? .-.
Sophie Libkind @ MIT Categories Seminar - Unifying Open Dynamical Systems: An Algebra of Resource Sharing Machines
Thu Apr 30 2020, 9:00 - 10:00
what's the non-happy news?
That I'm not a great communicator. But then I only talked to Wolfram after fifteen years of research, and at fifty years I'm having my second round of communication here. The paper did make it through the Annals of Mathematics without getting too beat up, although they didn't publish it. :slight_smile:
without getting too beat up?
what did they say?
Actually lots of cool stuff. While I discussed my research, they wanted a more modern context like discussing germs. I would need to address current hot topics in dynamics. They did find one error that I rebutted.
oh, catalan numbers [image]
check this out https://gist.github.com/sarahzrf/08ddbe732965dc4b97691843500f5d8d
So you are hip to set partitions which are just another way to think about functional composition . What combinatorial structure is associated with iterated functions? The answer is total partitions. I wrote a program that iterated through the total partitions. Using Faa Di Bruno's formula I move from combinatorics to algebra. The algebra works out perfectly.
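A tiny illustration of the set-partition side in Python (my own helper function, not the Mathematica program mentioned below): it enumerates the partitions of {1, ..., n} that index the terms of Faa Di Bruno's formula, and the counts come out as the Bell numbers 1, 2, 5, 15, 52, ...

def set_partitions(elements):
    """Yield every partition of the list `elements` as a list of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):              # put `first` into an existing block
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        yield [[first]] + partition                  # or into a new block of its own

for n in range(1, 6):
    print(n, len(list(set_partitions(list(range(1, n + 1))))))   # 1, 2, 5, 15, 52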
I believe that while cellular automata provide a "vertical" catalog, hyperoperators are the only "horizontal" catalog. This aligns with physics' hierarchical nature. What other model of open dynamical systems can start with nothing and go on to produce the processes required for life?
Why are there no competing computational hierarchies? Maybe because the repeated iteration process tends to blur the origin. Consider and and their iteration and - tetration and iterated sin. I hypothesize that this pair converges at the higher levels. So something like is dominated by the value of the index (9) of the hyperoperator.
My odd approach to dynamics comes directly from conversations with Stephen Wolfram in 1986.
I've been working on the problem of extending tetration to the complex numbers for a very long time. Due to lack of interest by others, I had to learn to find defects in my own work. I realized that my work had to be consistent with complex dynamics. Since I wanted to be able to extend the infinite number of hyperoperators, I took up Wolfram's idea of generalizing my work in tetration to that of dynamics.
My approach is weird because, except for Wolfram in 1986, I've gone fifty years doing research without having an interested party to communicate with.
Personally, I hate sounding like a mathematical country bumpkin and hope to become an effective communicator ASAP.
Using elementary mathematics I can explain a great deal of Carleson and Gamelin's Complex Dynamics chapter on the Classification of Fixed Points.
When I talk about , I'm talking about an object that has a Taylor series with a specific combinatorial property. Let's say I give you ((1,2),(3),(4)). You can take and pick through the combinatorial results to find the part isomorphic to ((1,2),(3),(4)).
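Concretely, on the usual set-partition reading, ((1,2),(3),(4)) has three blocks of sizes 2, 1, 1, so the term of the fourth derivative it picks out is

$$g'''\big(f(x)\big)\, f''(x)\, f'(x)\, f'(x).$$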
My more important result is that just as is isomorphic to the set partitions, is isomorphic to the total partitions.
Now consider (((1,2),3),4) from the total partitions. The following two objects are isomorphic:
[image: index_gr_2.gif]
The first object is the unlabeled version of the next dozen labeled partitions.
[image: Schroeder4-5.gif]
As a proof of concept I wrote a Mathematica program to enumerate the total partitions and then derive their isomorphic portion of the associated derivative, arriving at .
Hi! This sounds really cool. I haven't yet looked closely enough at the connection to Bell polynomials to understand that. But do I understand correctly that part of @Daniel Geisler's contribution is to find a uniform way to extend the function , where , to some nice function , for some large class of s?
If so, I'm curious what the precise versions of "uniform", "nice" and "large" are in that work. I tried to look for precise theorem statements but got confused before I found an answer.
Here are three test cases that might help me understand: if , then what is ?
If , then what is ? And if , then what is ?
In particular, it seems that extending the first case would require arbitrarily choosing one of the two roots of : is a multiplication by , or by , or a different function altogether?
In the second case, does specify a certain branch for square root?
And I'm curious how the image of changes in the third case, since, assuming everything in sight is holomorphic, the zero of would persist for a while before disappearing from 's image.
1. $f(z)=-z$; so the Lyapunov multiplier $\lambda=-1$ and the Linearization Theorem gives $G: (-1)^z$.
2 ;
3. The "classical" tetration problem. As per Ernst Schroeder, I always set a dynamical system's fixed point to the origin. See page 9 of my 1990 paper.
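To spell out the first answer in the standard linearization language (nothing here is specific to my formalism): if $f(0)=0$ with multiplier $\lambda=f'(0)$ and $0<|\lambda|\neq 1$, the Linearization Theorem gives a local conjugacy $\Phi$ with $\Phi(f(z)) = \lambda\,\Phi(z)$, and one sets

$$f^t(z) \;=\; \Phi^{-1}\big(\lambda^t\, \Phi(z)\big).$$

For a map that is already linear, $f(z)=\lambda z$, no conjugacy is needed and $f^t(z)=\lambda^t z$ directly; so $f(z)=-z$ gives $f^t(z)=(-1)^t z$ (the $G$ above) once a branch of $(-1)^t$, say $e^{i\pi t}$, is chosen.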
The weak part of everyone's attempts to extend tetration to the real and complex numbers has been the need to prove convergence. My approach is to work from and take advantage of entire functions for their convergence under composition and so iteration, particularly . Since the entire chain of hyperoperators is built from iteration and composition, its convergence holds.
A second go at convergence - Faa Di Bruno's formula is the heart of my derivation of Taylor series of dynamical systems. So the derivatives' coefficients are comprised of a finite number of additions, multiplications and exponentiations of integers.
nice - taking the Taylor series of $f^t(x)$ by using Faa Di Bruno's formula. Can use only elementary mathematics.
so is the claim that one gets entire functions, i.e. for each complex t, is entire? Or perhaps holomorphic in an open neighborhood of its fixed points? Or something else (e.g. perhaps a formal power series)?
$f(z)=-z$; so the Lyapunov multiplier $\lambda=-1$ and Linearization Theorem gives $G: (-1)^z$.
Do you mean that ? If so, I'm confused because doesn't specify a unique entire function. In particular, it might reasonably mean or . Does your formalism pick one of these two? Or perhaps it returns a set of entire functions instead of one entire function?
((Whereas the question is about uniqueness as we vary , I have a dual question for the example about existence as we vary : is it the case that is not entire in the variable?))
By the way: I don't mean to come off as unsupportively interrogative: your work seems powerful, and my way of learning new math is to ask questions! Along these lines, thank you for taking the time to adopt this notation: it is clarifying for me because it helps me keep track of what is defined already (e.g. for natural) vs what the new construction is (i.e. for complex). Now that I'm beginning to understand this distinction, I'm happy to adopt your notation: .
Thanks!
@Sam Tenka (naive student)
I'm honored you are interested in my work and that you are sharing your difficult questions.
My claim is just that the composition of entire functions is entire, so their iteration is also entire. So I can get nice results on convergence by staying in the realm of entire functions. My work doesn't require functions to be entire in order to be relevant, but it sure is nice for quickly proving convergence.
You asked a few questions about very specific situations, so my answers are based both on my work and on preexisting knowledge. Don't you mean to ask or ? My formalism doesn't distinguish.
If memory serves me correctly my research is consistent with the iterations of . At the Taylor series is simply , then the Taylor series go to . Since these are just polynomials, they are also entire and converge.
yep! I meant , not :slight_smile:
If your formalism doesn't distinguish between the two signs, does this mean that it gives us a sometimes-large set of possible definitions for "", instead of a uniquely determined function?
And for : I think I understand that for natural , we get . But what still confuses me is: for complex, how can be entire with respect to ? For example, requires branch cuts.
@Sam Tenka (naive student)
If your formalism doesn't distinguish between the two signs, does this mean that it gives us a sometimes-large set of possible definitions for "f^t(n)", instead of a uniquely determined function?
Thank you!!! :octopus: Let . This is why I said as per Ernst Schroeder that I take a fixed point as the origin. But there are often a countable infinity of fixed points, as with transcendental functions.
OK, in taking , the Taylor series of doesn't have a term.
I guess my question is: what does your formalism say is equal to? It seems to me that this wouldn't be an entire function.
I think something that would help me be less confused is to prove that functions are entire by means other than saying they are compositions of entire functions. This fact is of course true for finitary compositions, but (please correct me if I misunderstand!) it seems that the way it gets applied in your work is to infinitary compositions. What I mean is that:
is a "composition" on the left hand side of entire functions such as addition and multiplication, but clearly not extendible to an entire function. This is because the "composition" involves an analytic operation, namely a limit (and then some analytic continuation to make the domain all of ).
@Sam Tenka
Yikes, you caught me!!! :thinking: Thanks for raising the issue of limits not being of finite construction. But then ALL of my work is built on fixed points. I don't feel this is a problem with tetration, as it is easy to go from the fixed point to .
Sam, does it help to say I only mean "finite composition"? Can you share where in my work I used infinitary compositions?
A second proof of convergence is that I use Faa Di Bruno's formula, which is combinatorial and without infinite processes. But this line of attack isn't so strong at proving the convergence of for .
Feel free to read and comment on Extension of the Hyperoperators which has the majority of my research in it.
do you have an answer for the question given, though?
or does "yikes, you caught me" mean that you don't?
like, can you give a direct reply to this?
Sam Tenka said:
I guess my question is: what does your formalism say is equal to? It seems to me that this wouldn't be an entire function.
"Yikes" means I suffer from not having anyone to communicate with, not that a fatal flaw has been found. I am writing a reply for all questions, so ask away.
Guys, :working_on_it: means working on it.
i didn't see you post that, sorry
@sarahzrf said,
I guess my question is: what does your formalism say
is equal to? It seems to me that this wouldn't be an entire function.
OK, this is an example of a question I answered before. So .
In answering questions I notice there is more than a single approach, and that often it is best to use standard accepted techniques rather than my own proofs or "formalism".
that's not the question being asked, though
if , then what is your formula for that gives an entire function?
@Sam Tenka & @sarahzrf Good catch, the square root function is not entire!
So where is my work now? Well, there are two parts, computational and analytical. My use of entire functions was an easy fix for proving convergence. By the way, convergence is the issue everyone has with extending tetration to the complex numbers. My own research is more aggressive in taking on all the hyperoperators. The Tetration Forum does take on extending tetration, pentation and hexation.
Just because the square root function is not entire doesn't mean the computational part is flawed (I think), but the analysis needs to be fixed. Back to my Moiré pattern proof.
I have a counter-counter example. Consider . But .
The problem is that this shouldn't be defined as a composition. If that isn't clear, I need to communicate my use of the term better.
i don't understand what you're saying...
I have an issue with @Sam Tenka's use of "composition" in quotes. It indicates using the term composition in a non-standard manner. I mean to use the term composition as it is commonly understood in mathematics.
We are discussing analysis, but even if I have a flawed proof, I am pursuing three different proofs of convergence.
i don't understand what you mean by a counter-counter example though
isn't this another counterexample?
if you pick , then what entire function is ?
@sarahzrf
sarahzrf: isn't this another counterexample?
sarahzrf: if you pick , then what entire function is ?
I need to clarify that while , my meaning is .
ok
i still don't understand your "counter-counter example", though—what is f? what is it supposed to be a counterexample to?
This seems to be a communication problem. Does denote the -th power of , or the -th composition of with itself?
Does denote a function with for all or with for all ?
composition
@sarahzrf My counter-counter example is about @Sam Tenka's counter example. As I have learned from you, it is my responsibility to anticipate and answer all reasonable questions in an article for publication. But to do that I need a period of mathematical education, which I'm getting here. I only have two years of high school and two years of college. For example I often ask questions I know the answer to, but people are getting the idea that I want others to do my work. Not true and now I can adjust my communications so as to cause less frustration.
The point I'm making in my "counter-counter" example is that when I talk about composition, I'm talking about it as it is commonly understood. Do you understand my concern when @Sam Tenka has to place composition in quotes as he pushes the boundary of what is understood as composition? I just need to communicate that I am referring to composition as it is commonly understood. No problem, I'm just learning that I need to communicate more effectively.
Thanks to @Sam Tenka I have learned that can be expressed in an infinite number of ways and that providing the inverse gives the needed extra information. The function can equal and the other inverse hyperoperators.
that still hasn't cleared up my confusion... as far as i can tell, you still have a problem: you haven't shown what should be when , and i dont understand how your claimed countercounterexample fixes that
Let's consider @Sam Tenka's comment about the iteration of . The context is proving convergence, but for , we have producing square roots, cube roots and so on. But the iteration of square roots, cube roots and so on is convergent as .
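For instance, with the principal branch the repeated square root of any nonzero $z$ converges:

$$z^{1/2^k} = \exp\!\big(2^{-k}\,\mathrm{Log}\,z\big) \;\longrightarrow\; \exp(0) = 1 \quad\text{as } k \to \infty,$$

and likewise for repeated cube roots and higher roots.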
So I just need to write a proof and consider the complex powers also.
OK, I find the next part cool. My "formalization" has three main parts: symmetry, computation and convergence. So it handles the symmetry of , right off the bat.
that still doesn't answer the question i posted! all i'm asking is: what are you claiming should be when ?
for is with two fold symmetry.
Is this an entire function from ?
yeah—there's no standard well-known meaning for which is an entire function , so if you do mean an entire function, that's not a full answer
The square root function is not entire.
i think that's all the counterexample was, then—so how do you have a counterexample to the counterexample?
additionally, what does two-fold symmetry mean? :sweat_smile:
nvm i found two-fold symmetry on google
Daniel Geisler said:
The square root function is not entire.
Okay. I think I'm beginning to understand. Now that we're on the same page here, my main confusion is:
Earlier, we agreed that a major result of your work is that there exists a "uniform" way to extend (for each in some "large" class of entire functions) the iterate defined by to a "nice" function .
Above, the quoted words are my own, and they are imprecise. You helped us by specifying what they mean; I'll paraphrase, so please correct me if I err:
The square root example we've been discussing shows that the statement as I interpreted above fails. Perhaps what you meant is something else, then?
I've been editing this message a lot. I think I'm done editing!
Daniel Geisler said:
I have an issue with Sam Tenka's use of "composition" in quotes. It indicates using the term composition in a non-standard manner. I mean to use the term composition as it is commonly understood in mathematics.
We are discussing analysis, but even if I have a flawed proof, I am pursuing three different proofs of convergence.
My bad for implying that you were using composition in a non-standard and erroneous way! At that time, this non-standard use was my best guess about how to interpret some of the statements I encountered. For example, your April write up states as a theorem that
The iterated function for entire function is convergent at finite points in the complex plane.
I continue to struggle to parse this, and earlier I guessed that this meant that you have been able to define as a function that is analytic at all non-infinite points on the Riemann sphere, i.e. all points of . Moreover, since nearby you mention that the composition of entire functions is entire, I thought that this compositionality might be part of the argument to prove the theorem. However, we're on the same page that this would require a non-standard notion of composition that must be reasoned about more carefully. I then used that guess to inform the questions I could ask to learn more.
Perhaps it would be better to directly ask: what does that theorem mean? Is there a typo?
A relevant link: Finding such that given . FYI - I'm user37691 and have two entries. Note my two solutions. Terry Tao weighs in.
@Sam Tenka said:
- "uniform" way --- you wrote an algorithm for producing 's Taylor series given 's
All of my computer work has been to verify mathematical results. The one exception is that I spent many hundreds of hours viewing fractals.
A sign of mathematical mastery is writing multiple independent proofs of the same thing.
Taking the derivatives of exponential towers. Example: . Then , and finally . My work comes from generalizing earlier results several times.
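A representative computation of the kind I mean, just for illustration with the two smallest towers:

$$\frac{d}{dx}\,x^{x} = x^{x}\,(\ln x + 1), \qquad \frac{d}{dx}\,x^{x^{x}} = x^{x^{x}}\Big(x^{x}(\ln x + 1)\ln x + x^{x-1}\Big).$$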
I have experimentally replicated the following:
R. Aldrovandi and L. P. Freitas, Continuous iteration of dynamical maps, J. Math. Phys. 39, 5324 (1998).
Total partitions, a combinatorial model of , the combinatorial representation of unlabeled total partitions
Proof-of-concept Mathematica software that builds total partitions and then uses Faa Di Bruno's formula to evaluate them. Where Schroeder's functional equation holds, . Abel's functional equation (written out after the code below) is notable because it uses all rational arithmetic and .
Extension () of Exponential Generating Functions and Ordinary Generating Functions
Terse Mathematica code - finding f such that f(f(x)) = g(x), given g:
f[0] = 0;  (* assume f fixes the origin *)
max = 3;   (* number of Taylor coefficients of f to solve for *)
(* Differentiate f(f(z)) == g(z) up to max times, evaluate at z -> 0,
   and solve the resulting system for the derivatives of f at 0. *)
Solve[Table[D[g[z] == f[f[z]], {z, i}] /. z -> 0, {i, max}],
 Table[D[f[z], {z, i}] /. z -> 0, {i, max}]]
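For reference, Abel's functional equation mentioned above and the iterate it yields (Schroeder's equation and its iterate are written out earlier in the thread):

$$\alpha\big(f(z)\big) = \alpha(z) + 1 \;\;\Longrightarrow\;\; f^t(z) = \alpha^{-1}\big(\alpha(z) + t\big).$$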
The next part has to do with proving convergence. My personal interest is proving that the extension of the hyperoperators is convergent. Two facts dominate: is entire, and entire functions remain entire under composition.
- "large" class of s --- presumably, can be any entire function, although I didn't feel that this was really made precise.
Yes, I meant was convergent in both variables; I don't know enough to comment on two-variable entire functions.
Moiré Pattern
Consider two overlapping grids, one with finite values, the other with infinite values. I argue that such a Moiré pattern is inconsistent, as a finite value and an infinite value cannot lie arbitrarily close to each other.
- "nice" resulting --- you mentioned that one obtains Taylor series, but of course formal power series are not the same thing as functions. I guessed that you meant " is entire in both variables".
Correct
If I'm understanding the most recent post correctly, we agree that one of your results states that, for any function that is entire: is entire both in the variable and in the variable.
But isn't this statement false, given that square root example --- , ?
One might imagine that some marvelous function that's not square root may serve as for the case. But I think there is a topological obstruction, assuming that 's zeros are where we expect. Indeed, Hartogs's theorem tells us that is continuous. Let's restrict 's domain to everything but in order to consider as a map from the punctured plane to itself, for each . The upshot of puncturing is that we get a winding number for each . Since the various are homotopic, must be a constant! But in the special case that is a natural number, a direct inspection shows that , which is non-constant. This is a contradiction, so I think that no such exists.
that is, we assume that for each real-valued , only has zeros at . The argument still works even if we have this constraint on zeros for just , i.e. interpolating between second and fourth powers.
we puncture at , so . This is where we use the hypothesis ()
just counts how many times the output loops around the origin for each time the input loops around the origin. We may define it in symbols by asking that
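In symbols, a standard way to put it (using the unit circle as a convenient loop around the puncture):

$$\mathrm{wind}(t) \;=\; \deg\Big(\theta \mapsto \tfrac{f^t(e^{i\theta})}{|f^t(e^{i\theta})|}\Big), \qquad \theta \in [0, 2\pi],$$

the degree of that circle map, i.e. how many times the image loop wraps around $0$.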
Entire functions are not special in my research, they are just a useful tool to prove convergence. I agree the square root function is not entire, but as I mentioned, I don't need entire functions to prove that the square root function converges under iteration.
Most of this topic is rather over my head...
Yet I genuinely hope somebody in here will find some useful insight in the latest vids by Grant Sanderson (3blue1brown):
Lockdown Math #8, The power tower puzzle
Lockdown Math #9, Intuition of i to the power of i
Note that there is a small bit in the end of #9 about power tower of i
and in vid #8, around minute 42, a part on complex power towers begins, apparently
I knew what was in 1970 and began generalizing the question to tetration the next year.
i'm sure euler knew that i^i is real :p
But did Euler have the solid video gear and Internet connection?
interesting topic @Daniel Geisler! And @Sam Tenka, I like this proof that no such can exist! If we want to continue looking for some sort of anyway, though, it makes me wonder how we should proceed.
In particular, I’m wondering about the symmetries of the functions (with respect to their arguments): how do you deal with noninvertibility while requiring the existence of ? In the case , we of course would hope that . But if there is a that works how we expect it to wrt composition, and given that , we should have , but also ofc, by ’s symmetry, . How do we resolve this?
The only way I can imagine currently (which lets us keep the algebraic structure of the iteration) is by expanding the domain of the function somehow: , in what you might think of as a sort of “pre-emptive Riemann surface” style. We then wouldn’t actually have ; let’s construct an example of what might be. So, maybe, let’s keep the argument around, and say , our “new” on , is where we demand each , so our argument space is sequences of complex numbers: and we simply keep all of our previous arguments around. (We need to check it’s nonempty, though, but I think some lacunary stuff might help us there.) Then we look at iterated of , and recover by projection onto the first component , and is given just by dropping the first element of the list.
I don’t know if this is useful for extending the iteration parameter to , but it at least lets us extend it to , so perhaps something similar could be done! (perhaps we could have be , or some other sort of “generalized sequence”?)
Or is there another, better way around the impossibility of defining ? (Other than using some , I mean; obviously there might be better realizations of it than the one I hastily constructed here :P)
(For example, do we want to require only that forward composition respects “time addition”, i.e. only demand when or something?)
a category theorist will instantly recognize this as an element of something categorical—in this case you could call it the limit of the functor where I’m considering to be a one-object category, and the value of each irreducible arrow in under is . I’m sure a full-fledged category theorist would recognize a better construction of this space, given the constraint on , and given that we want an induced by to act on this space. Something about the poset being the loop space(?) of the free category on a graph with a single self-loop, and also starting with a single self-loop graph over on the other side of things, corresponding to on ...
Also @Sam Tenka, how do you make the nice red asterisks? :grinning_face_with_smiling_eyes:
@T Murrills said:
I like this proof that no such can exist! If we want to continue looking for some sort of anyway, though, it makes me wonder how we should proceed.
Let me clarify my take that can exist. Its convergence just can't come from being entire.
Let , then simplifies to the Lagrange inversion theorem.
oh, okay; so is not really a function on , but only a formal power series?
@T Murrills, I'll get back to your question as soon as I think it out.
@Sam Tenka thanks for getting me to look at issues I missed. I'll document them on my website as well as your role in bringing them up.
any luck?