Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: deprecated: mathematics

Topic: Euler characteristic and entropy


view this post on Zulip Matteo Capucci (he/him) (Sep 21 2021 at 09:11):

Is there a connection between the two?
I was reading @John Baez's notes on the Fisher information metric and it struck me how similar

H(p) = -\sum_{i = 0}^\infty p_i \ln(p_i)

and

\chi(X) = \sum_{i=0}^\infty (-1)^i \, \mathrm{rk}(H^i(X, \mathbb R))

are.
I know it's a very vague similarity, but I'm intrigued by the fact that the rank of a vector space can be seen as a kind of logarithm: it tells you which exponent you have to raise \mathbb R to.
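As a quick concrete check of the two sums (a minimal sketch; the distribution and the 2-sphere's Betti numbers below are just illustrative choices, not from the discussion):

```python
import numpy as np

# Shannon entropy H(p) = -sum_i p_i ln(p_i) for a small distribution.
p = np.array([0.5, 0.25, 0.25])
H = -np.sum(p * np.log(p))                            # ≈ 1.04

# Euler characteristic as the alternating sum of ranks of cohomology groups,
# e.g. for the 2-sphere: rk(H^0) = 1, rk(H^1) = 0, rk(H^2) = 1.
ranks = [1, 0, 1]
chi = sum((-1)**i * r for i, r in enumerate(ranks))   # = 2

print(H, chi)
```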

I'm probably day-dreaming, but it'd be cool if it turns out I'm not (if someone figured out a connection already)

view this post on Zulip Jens Hemelaer (Sep 21 2021 at 09:28):

Interesting! This works well if you take cohomology with coefficients over \mathbb{F}_q. Then you get
|\mathbb{F}_q^n| = q^n
so the dimension is the base q logarithm of this.
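A quick numeric check of that observation (a sketch; q = 5 and n = 3 are arbitrary choices):

```python
import math

q, n = 5, 3
cardinality = q ** n                  # |F_q^n| = q^n = 125
print(math.log(cardinality, q))       # ≈ 3.0: the dimension recovered as a base-q logarithm
```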

view this post on Zulip Javier Prieto (Sep 21 2021 at 11:23):

If you insist on identifying the rank of the group with the logarithm you're gonna have to deal with negative probabilities. Not impossible, but an extra headache.

view this post on Zulip Ivan Di Liberti (Sep 21 2021 at 11:53):

I think there is a kinda positive answer to this question, but I am not an expert, so I will just link stuff. The general motto is that entropy measures (average?) diversity, and that the Euler characteristic can capture maximal diversity (?). I myself do not find the connection that well-drawn, but it is definitely there and @Tom Leinster probably knows it.

Check out:
https://www.maths.ed.ac.uk/~tl/qm/qm_talk.pdf
https://www.maths.ed.ac.uk/~tl/turin.pdf
https://arxiv.org/pdf/1711.00802.pdf

view this post on Zulip Matteo Capucci (he/him) (Sep 21 2021 at 14:25):

Oooh nice @Ivan Di Liberti! It seems the connection is intuited, then

view this post on Zulip John Baez (Sep 21 2021 at 19:30):

Yes, I'd say Tom Leinster, with his very general theory of magnitude that captures entropy, the Euler characteristic and many other things, is probably the go-to guy for this. Not everything he's done is in his book Entropy and Diversity: The Axiomatic Approach, but it's a good source for this material - and it's free!

view this post on Zulip Matteo Capucci (he/him) (Sep 22 2021 at 08:12):

So let's see if I can summarize the link between \chi and H:

  1. H is the logarithm of a quantity called the diversity D, which, in a sense, measures the 'spread' of a probability distribution
  2. Maximizing entropy is the same as maximizing diversity, because \exp is monotone increasing. It is often of interest to look at entropy-maximising distributions, since they're the ones that 'assume the least about the world', afaiu. On the other hand, it is also of interest to maximise diversity, e.g. to judge how 'imbalanced' an ecological community is, or indeed to determine the potential biodiversity.
  3. It turns out that the maximum diversity coincides (right?) with the magnitude of a certain matrix
  4. The magnitude of a matrix (= the sum of the entries of its inverse) is a particular instance of magnitude for enriched categories
  5. So is the Euler characteristic (see the numerical sketch below).
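A minimal numerical illustration of this chain in the simplest case, an n-point discrete space (n = 4 is an arbitrary choice): the magnitude of the identity matrix is n, which is both the maximum of the diversity \exp(H(p)) (attained at the uniform distribution) and the Euler characteristic of the discrete space.

```python
import numpy as np

n = 4
Z = np.eye(n)                          # similarity matrix of an n-point discrete space

# Item 4: magnitude of a matrix = sum of the entries of its inverse.
magnitude = np.linalg.inv(Z).sum()     # = n = 4

# Items 1-3: diversity = exp(entropy); it is maximised by the uniform
# distribution, and the maximum value equals the magnitude.
p_uniform = np.full(n, 1 / n)
max_diversity = np.exp(-np.sum(p_uniform * np.log(p_uniform)))   # = n = 4

# Item 5: the Euler characteristic of the discrete n-point space is also n
# (n connected components, no higher cohomology).
euler_char = n

print(magnitude, max_diversity, euler_char)   # all equal 4 (up to rounding)
```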

view this post on Zulip Matteo Capucci (he/him) (Sep 22 2021 at 08:14):

This is nice, but I was hoping for a more direct link. In particular I was a bit disappointed by the fact that magnitude is related to entropy only as the 'maximum attainable value of its exponential', and there's no analogue of 'non-maximal entropy' in the theory of magnitude.
I might be wrong here though!

view this post on Zulip Simon Willerton (Sep 22 2021 at 10:04):

@Matteo Capucci (he/him) Entropy is (classically) defined for probability distributions and magnitude is defined for metric spaces. Leinster-Cobbold diversity (or similarity-sensitive diversity) of order 1, D_1, is defined for probability distributions on metric spaces. This diversity generalizes the exponential of classical entropy H (reducing to it in the case that all distances are infinite). So it is \log(D_1) which is the analogue of 'non-maximal entropy' in the theory of magnitude.

Provided the metric space is sufficiently nice (positive semidefinite with a non-negative weighting), the maximum diversity over all probability measures is the magnitude.

To put it another way: suppose you have a finite set X with a (nice) metric d and a probability distribution p. Then if p' denotes a maximizing probability distribution on X we have:

D_1(X(\infty, p)) = \exp(H(p)); \qquad D_1(X(d, p')) = \mathrm{Mag}(d).
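A minimal numerical sketch of both equalities (assuming the similarity-matrix convention Z_ij = e^{-d(i,j)} and an arbitrary toy metric, neither of which is spelled out in the discussion):

```python
import numpy as np

# Toy 3-point metric space (distances chosen arbitrarily, just for illustration).
d = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])
Z = np.exp(-d)                          # similarity matrix Z_ij = exp(-d(i, j))

def D1(Z, p):
    """Leinster-Cobbold diversity of order 1: exp(-sum_i p_i log (Zp)_i)."""
    Zp = Z @ p
    mask = p > 0                        # convention: ignore points of zero probability
    return np.exp(-np.sum(p[mask] * np.log(Zp[mask])))

p = np.array([0.5, 0.3, 0.2])           # an arbitrary probability distribution

# All distances infinite => Z is the identity => D_1(X(infty, p)) = exp(H(p)).
H = -np.sum(p * np.log(p))
print(D1(np.eye(3), p), np.exp(H))      # these agree

# Magnitude = sum of the entries of Z^{-1}.  For a 'nice' Z the maximising
# distribution p' is the normalised weighting w / sum(w), where Z w = 1.
w = np.linalg.solve(Z, np.ones(3))
print(D1(Z, w / w.sum()), w.sum())      # D_1(X(d, p')) and Mag(d) agree
```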

view this post on Zulip Matteo Capucci (he/him) (Sep 22 2021 at 10:29):

Amazing! Thanks a lot for the clarification