Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: deprecated: mathematics

Topic: evolutionary game


view this post on Zulip Peiyuan Zhu (Nov 07 2022 at 20:32):

image.png
I was reading Marc Harper's paper on inference as replicator dynamics. I don't understand why the second equality holds. It suggests $\sum_{i=1}^n x_i f_i(x)=\frac{1}{n}\sum_{i=1}^n f_i(x)$, which can't be right in general, because for instance $0.3\times 4+0.7\times 2=1.2+1.4=2.6\ne\frac{1}{2}(4+2)=3$. What am I missing? The original paper is here https://arxiv-benchmark.informatik.uni-freiburg.de/data/benchmark/pdf/0911/0911.1763.pdf

view this post on Zulip Peiyuan Zhu (Nov 07 2022 at 20:35):

Ok, never mind, I found the answer in the paper. Average fitness is defined as $\bar{f}(x)=\sum_{i=1}^n x_i f_i(x)$. Interesting.
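For concreteness, a minimal sketch (my own code, not from the paper) contrasting the population-weighted average fitness $\bar{f}(x)=\sum_i x_i f_i(x)$ with the unweighted mean, using the numbers from the example above:

```python
# Population-weighted vs. unweighted average fitness, with the
# illustrative numbers from the post above (x = (0.3, 0.7), f = (4, 2)).

def average_fitness(x, f):
    """Population-weighted average fitness: sum_i x_i * f_i."""
    return sum(xi * fi for xi, fi in zip(x, f))

x = [0.3, 0.7]   # population proportions
f = [4.0, 2.0]   # fitness of each type

weighted = average_fitness(x, f)   # 0.3*4 + 0.7*2 = 2.6
unweighted = sum(f) / len(f)       # (4 + 2)/2 = 3.0

print(weighted, unweighted)  # 2.6 3.0 -- the two notions differ
```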

view this post on Zulip John Baez (Nov 07 2022 at 21:19):

Yeah, if 99 people have fitness 0 and 1 has fitness 1 the average fitness is 1/100, not 50/100.

view this post on Zulip John Baez (Nov 07 2022 at 21:20):

By the way, this is not "John Baez's paper with Marc Harper" - you can see the author is Marc Harper.

view this post on Zulip Peiyuan Zhu (Nov 07 2022 at 21:58):

Oh my bad :sweat_smile:

view this post on Zulip Peiyuan Zhu (Nov 07 2022 at 22:19):

What is the rationale behind this? It sounds reasonable to me, but I failed to explain it myself

view this post on Zulip John Baez (Nov 07 2022 at 22:22):

So if you had 100 friends, and 99 of them earned $0/year, and 1 of them earned $1,000,000/year, what would you say their average income was?

view this post on Zulip Peiyuan Zhu (Nov 07 2022 at 22:40):

I see, it has to be weighted by population, not by categories. So $10,000.

view this post on Zulip John Baez (Nov 07 2022 at 22:46):

Right. And if 99 people have fitness 0 and 1 has fitness 1 the average fitness is 1/100. This is because there are 100 people and they only have one child who survives.

view this post on Zulip Peiyuan Zhu (Nov 28 2022 at 18:21):

@John Baez I'm thinking of extending this replicator dynamics to a bigger space, e.g. probability assignments over the space of possible events $2^\Theta$ instead of only the atomic events $\{\{\theta_1\},\{\theta_2\},\cdots,\{\theta_n\}\}$, while relaxing some basic assumptions of probability theory, e.g. the law of excluded middle: requiring only $0 \le P(\{\theta_{i_1},\theta_{i_2},\cdots,\theta_{i_k}\})+P(\{\theta_{i_1},\theta_{i_2},\cdots,\theta_{i_k}\}^c) \le 1$ instead of equality with $1$, for all $\{i_1,\cdots,i_k\} \subset [n]$, as is usually assumed in probability theory, to account for underparametrization. In this way the Bayesian dynamics / geometry become a limiting case of this dynamics / geometry in a bigger space. Are there any references (maybe in geometry or dynamical systems?) that you would recommend for looking into this problem? I think this problem can be part of your initiative on uncertainty assessment for climate.

view this post on Zulip Peiyuan Zhu (Nov 28 2022 at 18:34):

The problem seems more "topological" than it is "geometrical". Doesn't look like classical differential geometry?

view this post on Zulip Peiyuan Zhu (Nov 28 2022 at 18:36):

I saw several links on piecewise-linear manifolds, but I'm not sure if they're relevant. A question could be: what kind of piecewise-linear manifold can recover a Riemannian metric as a special / limiting case? It looks like a method used a lot in quantum gravity.

view this post on Zulip John Baez (Nov 30 2022 at 15:59):

Sorry, I have no advice for you: your thoughts are too fragmentary for me to grasp them.

view this post on Zulip Peiyuan Zhu (Nov 30 2022 at 17:14):

The question is how to generalize inference dynamics & geometry without the restriction to probabilities of singleton events, and without the assumption of probability theory $P(A)+P(A^c)=1$, as in Marc Harper's paper.

view this post on Zulip Peiyuan Zhu (Nov 30 2022 at 17:17):

As an example, for hypothesis space $\Theta=\{\theta_1,\theta_2,\theta_3\}$ we're interested in studying the dynamics on $\mathcal{P}(\Theta)=\{\{\theta_1\},\{\theta_2\},\{\theta_3\},\{\theta_1,\theta_2\},\{\theta_2,\theta_3\},\{\theta_1,\theta_3\},\{\theta_1,\theta_2,\theta_3\}\}$, but Marc Harper only studied dynamics on the atomic events $\{\{\theta_1\},\{\theta_2\},\{\theta_3\}\}$. If we assume the law of excluded middle $P(A)+P(A^c)=1$ for $A\subset\Theta$, then the higher-order events don't really need to be considered, as their probabilities can be deduced. The question is what happens geometrically if we take out this assumption.

view this post on Zulip John Baez (Nov 30 2022 at 17:26):

Okay. But that's not really a math question. Generalizing information geometry to some weird version of probability theory where we drop the assumption $P(A) + P(A^c) = 1$ is an open-ended research project, not a "question".

view this post on Zulip Peiyuan Zhu (Nov 30 2022 at 17:29):

I would like to know if there are existing tools in mathematics (e.g. topology, category theory, etc.) that can handle this open-ended question. Dempster-Shafer theory is a generalization of Bayesian inference, so I think that since Bayesian inference dynamics has a good geometric interpretation, so should Dempster-Shafer theory.

view this post on Zulip John Baez (Nov 30 2022 at 17:29):

If you don't mind some advice: I think it would be very good for you to learn how to ask the usual sort of math question, where you either ask if some clearly well-formed statement is true, or ask how to prove some such statement. For example, "are all functions in $L^2[0,1]$ also in $L^1[0,1]$?", where the answer is "yes".

view this post on Zulip Peiyuan Zhu (Nov 30 2022 at 17:31):

Hmm, I think the point here is that I'm trying to find a math framework, if any exists; if not, invent one. Maybe the first step is to define the problem more precisely?

view this post on Zulip John Baez (Nov 30 2022 at 17:32):

Yes. I have no idea what you're trying to do, so I can't help you do it. And to be very honest, I'm actually afraid you don't know what you're trying to do, either.

view this post on Zulip John Baez (Nov 30 2022 at 17:34):

It might help if I knew Dempster-Shafer theory - maybe then I could guess what you're trying to do. But I don't know that stuff.

view this post on Zulip Peiyuan Zhu (Nov 30 2022 at 17:36):

By what criterion is "knowing what one's trying to do" satisfied? I want to know if the stability of the inference dynamics of a well-known generalization of Bayesian inference can be studied in differential geometry, as it is studied for Bayesian inference by Marc Harper. But I think your advice means I should write up a formalism that embodies both Bayesian inference and Dempster-Shafer inference, so it becomes a pure math problem.

view this post on Zulip Peiyuan Zhu (Nov 30 2022 at 17:40):

I get the sense that knowing what to do means you already have an answer to a question. But then I guess you don't need to ask any questions.

view this post on Zulip John Baez (Nov 30 2022 at 18:09):

Maybe someone who was an expert on Dempster-Shafer theory and information geometry could take a vague question like

I want to know if stability of the inference dynamics of a well-known generalization of Bayesian inference can be studied in differential geometry, like it is studied in Bayesian inference by Marc Harper.

and tell you something interesting about it. But I can't.

view this post on Zulip John Baez (Nov 30 2022 at 18:11):

You often seem to ask very open-ended questions. They make me feel you're asking someone else to do your research for you.

view this post on Zulip John Baez (Nov 30 2022 at 18:11):

Mathematicians can usually help more when you ask more precise questions - the sort of question you're able to ask when you have a specific plan, and you need to know if some particular statement is true.

view this post on Zulip David Egolf (Nov 30 2022 at 18:20):

John Baez said:

Mathematicians can usually help more when you ask more precise questions - the sort of question you're able to ask when you have a specific plan, and you need to know if some particular statement is true.

I believe this! But this mode of interaction with mathematicians does require figuring out helpful precise questions to ask. And I think learning how to ask such questions takes a lot of time, experience, and work. I wish I knew how to do this better myself.

For what it's worth, @Peiyuan Zhu, my theory is that learning how to ask precise questions (and develop a precise research plan) can be accomplished by (1) getting a solid basic foundation in the area you want to study (doing lots of exercises, and asking questions about those) and (2) reading and understanding in detail papers that people have written in related areas (and talking to people about these). I am still working on doing this myself, but my hope is that once (1) and (2) are accomplished it becomes easier to ask questions that are both interesting and sufficiently precise. Maybe such questions can be generated by modifying questions already asked and answered in previously published papers, at least to start with.

I'm sure many people here can offer a more insightful perspective on this process than me, though.

view this post on Zulip Morgan Rogers (he/him) (Dec 01 2022 at 14:33):

John Baez said:

Maybe someone who was an expert on Dempster-Shafer theory and information geometry could take a vague question like

I want to know if stability of the inference dynamics of a well-known generalization of Bayesian inference can be studied in differential geometry, like it is studied in Bayesian inference by Marc Harper.

and tell you something interesting about it. But I can't.

@Peiyuan Zhu a viable method is to distill the essential details of the thing you want to ask about into a summary that gives others the context needed to understand your question. For instance, rather than pasting several pages of a book or saying a name like "Dempster-Shafer theory" (which I also do not know the content of), give a paragraph summary of what you have understood or a specific example and point to the thing you don't understand. The longer the summary, the smaller the chance of engagement, but it will at least significantly lower the effort required by someone trying to engage with your question.

For this specific topic, you only have a small chance of getting a satisfying answer: either someone has tried it somewhere and someone reading this topic has seen that work and can point you to it, or no one here knows (which is likely: Dempster-Shafer theory isn't directly categorical, and is deep enough into probability theory that even the categorical probability people here may not have seen it) in which case you'll just have to try it for yourself and find out or ask somewhere else. This is a space for discussing category theory, we don't know everything!

view this post on Zulip Peiyuan Zhu (Dec 01 2022 at 17:16):

Morgan Rogers (he/him) said:

John Baez said:

Maybe someone who was an expert on Dempster-Shafer theory and information geometry could take a vague question like

I want to know if stability of the inference dynamics of a well-known generalization of Bayesian inference can be studied in differential geometry, like it is studied in Bayesian inference by Marc Harper.

and tell you something interesting about it. But I can't.

Peiyuan Zhu a viable method is to distill the essential details of the thing you want to ask about into a summary that gives others the context needed to understand your question. For instance, rather than pasting several pages of a book or saying a name like "Dempster-Shafer theory" (which I also do not know the content of), give a paragraph summary of what you have understood or a specific example and point to the thing you don't understand. The longer the summary, the smaller the chance of engagement, but it will at least significantly lower the effort required by someone trying to engage with your question.

For this specific topic, you only have a small chance of getting a satisfying answer: either someone has tried it somewhere and someone reading this topic has seen that work and can point you to it, or no one here knows (which is likely: Dempster-Shafer theory isn't directly categorical, and is deep enough into probability theory that even the categorical probability people here may not have seen it) in which case you'll just have to try it for yourself and find out or ask somewhere else. This is a space for discussing category theory, we don't know everything!

I like the comment "Dempster-Shafer theory isn't directly categorical": I can see that the way it is used has deep categorical structures, but it isn't immediate how it can be categorified -- this says that some modification of the theory, or a different perspective, is needed.

view this post on Zulip Peiyuan Zhu (Dec 01 2022 at 17:21):

I typeset a research proposal explaining this in more detail. Would it be suitable to post it here? Shall I move this to the #practice: our work or #practice: our papers channels?

view this post on Zulip Jean-Baptiste Vienney (Dec 02 2022 at 11:37):

@Peiyuan Zhu do you know an advisor / professor who knows you better and could give you advice depending on your specific situation? If you want to do research on this subject, you have to take into account which person you can do it with, etc. There are aspects which are not strictly mathematical, and we don't have all the information to help you with that.

view this post on Zulip Peiyuan Zhu (Dec 03 2022 at 01:47):

I've been looking for some people to critique this recently. I've just heard back from several of them, and I'm ready to meet with some of them next week.

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 01:00):

I'm trying this evolutionary dynamical system on a simple coin tossing model to make sure I understand the concepts.

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 01:00):

A coin tossing model: $\theta_0$ is a fair coin, $\theta_1$ only has heads $H$.

Suppose we have prior $p(\theta_0)=0.2,\ p(\theta_1)=0.8$.
Suppose we observe heads $H$.

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 01:02):

Marc Harper's paper https://arxiv.org/pdf/0911.1763.pdf suggests that we can analyze such an inference problem by solving a replicator dynamic as follows:
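As a sketch of my understanding (my own code, not the paper's): the discrete-time replicator map with fitness $f_i = P(H\mid\theta_i)$ reproduces Bayes' rule for the coin example above, assuming $P(H\mid\theta_0)=0.5$ for the fair coin and $P(H\mid\theta_1)=1$ for the two-headed one:

```python
# Discrete replicator map x_i -> x_i * f_i / sum_j x_j f_j.
# With f_i = P(H | theta_i), one step is exactly a Bayesian update.

def replicator_step(x, f):
    """One step of the discrete replicator dynamic."""
    mean = sum(xi * fi for xi, fi in zip(x, f))  # mean fitness = P(H)
    return [xi * fi / mean for xi, fi in zip(x, f)]

prior = [0.2, 0.8]        # p(theta_0), p(theta_1)
likelihood = [0.5, 1.0]   # P(H | theta_0), P(H | theta_1)

posterior = replicator_step(prior, likelihood)
print(posterior)  # [1/9, 8/9] -- the Bayes posterior after observing H
```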

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 20:44):

With this evolutionary dynamics, I investigated two questions according to the paper.

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 20:44):

First question: Is the posterior a fixed point?

Answer: No

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 20:45):

Second question: Does posterior minimize KL-divergence near the fixed point?

Answer: No

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 20:46):

However, in my reading of the paper, at least one of the above two questions should give answer "yes". What am I missing in my understanding of the paper?

view this post on Zulip Morgan Rogers (he/him) (Feb 18 2023 at 21:57):

Did you post this on Stack Exchange? I suspect there are more people who would be able to answer your question there (although I appreciate that you have taken some of the earlier advice on asking questions on board!)

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 22:19):

Do you mean math stack-exchange https://math.stackexchange.com?

view this post on Zulip Peiyuan Zhu (Feb 18 2023 at 22:23):

I just made posts here https://mathoverflow.net/questions/441129/bayesian-inference-as-replicator-dynamics and here https://math.stackexchange.com/questions/4641999/bayesian-inference-as-replicator-dynamics but nobody has replied so far.

view this post on Zulip Jean-Baptiste Vienney (Feb 18 2023 at 22:24):

Morgan Rogers (he/him) said:

Did you post this on Stack Exchange? I suspect there are more people who would be able to answer your question there (although I appreciate that you have taken some of the earlier advice on asking questions on board!)

I agree with Morgan, your question is much clearer than the previous ones!

view this post on Zulip Peiyuan Zhu (Feb 19 2023 at 21:18):

Third Question: Is there a more general formula than the one calculated above?

Answer: Yes, this is a logistic equation
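For the two-type case with constant fitnesses $a,b$ (my own check of this claim), the replicator equation indeed reduces to a logistic equation: with $x_2 = 1 - x_1$,

$$\dot{x}_1 = x_1\bigl(a - (a x_1 + b x_2)\bigr) = x_1\bigl(a - a x_1 - b(1 - x_1)\bigr) = (a - b)\,x_1(1 - x_1),$$

i.e. logistic growth of $x_1$ with rate $a-b$.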

view this post on Zulip Notification Bot (Feb 21 2023 at 00:14):

This topic was moved here from #learning: questions > evolutionary game by Matteo Capucci (he/him).

view this post on Zulip Peiyuan Zhu (Feb 21 2023 at 01:21):

Some potential problems

[1] The result only holds for higher-dimensional dynamics.

[2] The result only holds for discrete replicator dynamics.

view this post on Zulip Peiyuan Zhu (Feb 21 2023 at 21:54):

Peiyuan Zhu said:

Some potential problems

[1] The result only holds for higher-dimensional dynamics.

[2] The result only holds for discrete replicator dynamics.

Response to potential problems:

[1] The high-dimensional replicator dynamics would experience the same problem of a degenerate solution.

[2] The KL-divergence result is indeed for continuous time replicator dynamics.

view this post on Zulip Peiyuan Zhu (Feb 24 2023 at 00:21):

For $n=3$, there isn't a rest point on the simplex.

view this post on Zulip Peiyuan Zhu (Feb 24 2023 at 00:28):

And maybe it's because I didn't understand this proof.
image.png

I think the definition of a Lyapunov function here is a function that is decreasing near an equilibrium point; the replicator equation is substituted in, etc. But it doesn't say anything about the situation where an equilibrium doesn't exist, is that it?

view this post on Zulip John Baez (Feb 24 2023 at 00:49):

If there's no rest point there's no ESS (evolutionarily stable state) so the theorem implies that for no point is the Kullback-Leibler divergence a local Lyapunov function.

view this post on Zulip Peiyuan Zhu (Feb 24 2023 at 00:57):

So when the paper says "The replicator equation can now be understood as modeling the informational dynamics of the population distribution, moving in the direction of maximal local increase of potential with respect to the Fisher information, and ultimately converging to a minimal potential information state if a stabilizing state (ESS) exists in the interior of the state space", it means this holds only if an ESS exists. But if an ESS normally doesn't exist for Bayesian inference, this paper isn't fair to its title in calling the replicator equation an inference dynamic. Am I correct?

view this post on Zulip John Baez (Feb 24 2023 at 00:59):

That question is too vague and subjective to answer. Focus on the theorems, not whether it's "fair" to title the paper a certain way.

view this post on Zulip Peiyuan Zhu (Feb 24 2023 at 01:05):

By "fair" I mean that the solution of the replicator is in one-to-one correspondence with the Bayesian posterior. So can I understand the above sentence from the paper as "the rest point of the replicator is the Bayesian posterior obtained by minimizing the Lyapunov function if and only if the rest point is an ESS"? If I want to verify this statement with a numerical example, I would need to find an inference problem that has an ESS first. The paper doesn't say anything about when the inference problem has an ESS, so I can only try fairly arbitrary fitness functions by myself, am I correct?

view this post on Zulip John Baez (Feb 24 2023 at 01:08):

The theorem says exactly what it says: a state $\hat{x}$ is an interior ESS for the replicator equation if and only if $D_{\mathrm{KL}}(\hat{x}|x)$ is a local Lyapunov function.

view this post on Zulip John Baez (Feb 24 2023 at 01:12):

If you want a fun example of this theorem, pick a replicator equation that has an interior ESS.
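Following this suggestion, here is a toy sketch (my own construction, not from the paper). The anti-coordination payoff matrix below has the barycenter $(1/3,1/3,1/3)$ as an interior ESS, and $D_{\mathrm{KL}}(\hat{x}|x(t))$ decreases along Euler-integrated replicator trajectories, as the theorem predicts:

```python
import math

# Payoff matrix with 0 on the diagonal and 1 off it: the barycenter
# (1/3, 1/3, 1/3) is an interior ESS.  We integrate the replicator
# equation dx_i/dt = x_i (f_i - fbar) with Euler steps and track the
# KL divergence D(xhat || x), which should act as a Lyapunov function.

A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
xhat = [1/3, 1/3, 1/3]

def fitness(x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

x = [0.6, 0.3, 0.1]          # arbitrary interior initial state
dt = 0.01
kl_values = [kl(xhat, x)]
for _ in range(1000):
    f = fitness(x)
    fbar = sum(xi * fi for xi, fi in zip(x, f))
    x = [xi + dt * xi * (fi - fbar) for xi, fi in zip(x, f)]
    kl_values.append(kl(xhat, x))

print(kl_values[0], kl_values[-1])  # KL shrinks toward 0
```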

view this post on Zulip Peiyuan Zhu (Feb 24 2023 at 01:25):

I see, now it makes sense. I tried two examples already, but an ESS doesn't exist in either case. In 2d an interior ESS certainly doesn't exist. There are quite a lot of choices for 3d, but the previous example I tried above doesn't seem to have an interior ESS either. There are so many cases where an ESS doesn't exist; I'll keep trying. At least from the above example I know that the replicator divides the simplex into $2\times 2\times 2=8$ possible regions with varying signs of the derivatives. The case where an interior ESS exists should be exactly the case where the three lines cross at one point on the simplex. This is an extremely small fraction of all legitimate inference problems.

view this post on Zulip Peiyuan Zhu (Feb 24 2023 at 01:50):

It'd be interesting to see which inference conditions correspond to the existence of an ESS.

view this post on Zulip Peiyuan Zhu (Mar 13 2023 at 23:27):

Ok, it looks like I still couldn't find an interior ESS.

view this post on Zulip Peiyuan Zhu (Mar 13 2023 at 23:27):

Coin tossing model, observing $H$.

Two possible coins, $p(H\mid\theta_i)=a,b$:
$\dot{x}_1=x_1(a-ax_1-bx_2)$
$\dot{x}_2=x_2(b-ax_1-bx_2)$
There's no interior ESS.

Three possible coins, $p(H\mid\theta_i)=a,b,c$:
$\dot{x}_1=x_1(a-ax_1-bx_2-cx_3)$
$\dot{x}_2=x_2(b-ax_1-bx_2-cx_3)$
$\dot{x}_3=x_3(c-ax_1-bx_2-cx_3)$
There's no interior ESS.
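A quick numerical look at the two-coin system above (my own sketch, with illustrative values $a=0.5$, $b=1.0$, i.e. a fair coin versus a two-headed one): the Euler-integrated trajectory runs to the vertex $x_2=1$ rather than settling at an interior rest point.

```python
# Two-type replicator system from above, with illustrative values:
#   dx1/dt = x1 (a - a*x1 - b*x2)
#   dx2/dt = x2 (b - a*x1 - b*x2)
# The mean fitness is a*x1 + b*x2, so for a != b the flow heads to a
# vertex of the simplex: there is no interior rest point.

a, b = 0.5, 1.0
x1, x2 = 0.8, 0.2            # prior-like initial state
dt = 0.01
for _ in range(2000):        # Euler integration up to t = 20
    mean = a * x1 + b * x2
    x1 += dt * x1 * (a - mean)
    x2 += dt * x2 * (b - mean)

print(x1, x2)  # x2 -> 1: the two-headed coin takes over
```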

view this post on Zulip Peiyuan Zhu (Apr 04 2023 at 07:22):

2E58DC6B-90D3-4D25-A556-A7EAE7C6B729.png

view this post on Zulip Peiyuan Zhu (Apr 04 2023 at 07:22):

So I’m still having trouble seeing this analogy.

view this post on Zulip Peiyuan Zhu (Apr 04 2023 at 07:23):

New evidence doesn't depend on the prior probability at all, but in a fitness landscape it does seem to depend on the population state. The analogy doesn't hold.

view this post on Zulip Peiyuan Zhu (Apr 04 2023 at 07:29):

"Bayesian inference is a special case, formally, of the discrete replicator dynamic, since the fitness landscape in each coordinate may depend on the entire population distribution rather than only on the proportion of the i-type." But the fitness landscape in Bayesian inference doesn't seem to depend even on the proportion of the i-type.

view this post on Zulip Peiyuan Zhu (Apr 04 2023 at 07:32):

Unless the priors themselves are the parameters, which isn't the standard Bayesian setting that he laid out.

view this post on Zulip Peiyuan Zhu (Apr 04 2023 at 07:32):

$P(E\mid H_i)$ doesn't depend on $P(H_i)$.