Is there a connection between the two?
I was reading @John Baez's notes on the Fisher information metric and it struck me how similar
$$H(p) = -\sum_n p_n \log p_n$$
and
$$\chi(X) = \sum_n (-1)^n \operatorname{rank} H_n(X)$$
are.
I know it's a very vague similarity, but I'm intrigued by the fact that the rank of a vector space can be seen as a kind of logarithm: it tells you the exponent you have to raise the base to in order to get the whole space (spelled out below for finite fields).
I'm probably day-dreaming, but it'd be cool if it turns out I'm not (if someone figured out a connection already)
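To make the 'rank as logarithm' slogan concrete in the one case where it is literally a logarithm (a small aside, assuming a finite base field): if $V$ is a vector space of dimension $n$ over the field $\mathbb{F}_q$ with $q$ elements, then
$$|V| = q^n, \qquad \text{so} \qquad \operatorname{rank} V = n = \log_q |V|.$$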
Interesting! This works well if you take cohomology with coefficients over a finite field $\mathbb{F}_q$. Then you get
$$|H^n(X;\mathbb{F}_q)| = q^{\dim H^n(X;\mathbb{F}_q)},$$
so the dimension is the base-$q$ logarithm of this.
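For instance (a standard computation, added here for concreteness): for the circle $S^1$ with coefficients in $\mathbb{F}_2$,
$$H^0(S^1;\mathbb{F}_2) \cong H^1(S^1;\mathbb{F}_2) \cong \mathbb{F}_2, \qquad \chi(S^1) = \log_2 2 - \log_2 2 = 0,$$
which matches the usual Euler characteristic of the circle.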
If you insist on identifying the rank of the group with the logarithm you're gonna have to deal with negative probabilities. Not impossible, but an extra headache.
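To make the term-by-term matching explicit (one possible way to line the formulas up, not the only one):
$$H = -\sum_n p_n \log p_n, \qquad \chi = -\sum_n (-1)^n \big({-\operatorname{rank} H_n(X)}\big),$$
so the formal dictionary reads $p_n \leftrightarrow (-1)^n$ and $\log p_n \leftrightarrow -\operatorname{rank} H_n(X)$; the 'probabilities' $(-1)^n$ are negative for odd $n$.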
I think there is a kinda positive answer to this question, but I am not an expert, so I will just link stuff. The general motto is that entropy measures (average?) diversity, and that the Euler characteristic can capture maximal diversity (?). I myself do not find the connection that well-drawn, but it is definitely there and @Tom Leinster probably knows it.
Check out:
https://www.maths.ed.ac.uk/~tl/qm/qm_talk.pdf
https://www.maths.ed.ac.uk/~tl/turin.pdf
https://arxiv.org/pdf/1711.00802.pdf
Oooh nice @Ivan Di Liberti! It seems the connection has been intuited, then.
Yes, I'd say Tom Leinster, with his very general theory of magnitude that captures entropy, the Euler characteristic and many other things, is probably the go-to guy for this. Not everything he's done is in his book Entropy and Diversity: The Axiomatic Approach, but it's a good source for this material - and it's free!
So let's see if I can summarize the link between entropy and magnitude: the magnitude of a (nice) metric space is the maximum attainable value of the exponential of a (suitably generalized) entropy.
This is nice, but I was hoping for a more direct link. In particular I was a bit disappointed by the fact that magnitude is related to entropy only as the 'maximum attainable value of its exponential', and there's no analogue of 'non-maximal entropy' in the theory of magnitude.
I might be wrong here though!
@Matteo Capucci (he/him) Entropy is (classically) defined for probability distributions and magnitude is defined for metric spaces. Leinster-Cobbold diversity (or similarity-sensitive diversity) of order 1, $D_1^Z(p)$, is defined for probability distributions on metric spaces. This diversity generalizes the exponential of the classical entropy $H$ (reducing to it in the case that all distances are infinite). So it is $\log D_1^Z(p)$ which is the analogue of 'non-maximal entropy' in the theory of magnitude.
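For reference, here is the order-1 similarity-sensitive diversity written out (a sketch following Leinster-Cobbold; the notation $D_1^Z$ is just one common choice). For a finite metric space $\{x_1, \dots, x_n\}$ with a probability distribution $p$, form the similarity matrix $Z$ and set
$$Z_{ij} = e^{-d(x_i, x_j)}, \qquad D_1^Z(p) = \prod_{i \,:\, p_i > 0} (Zp)_i^{-p_i}.$$
When all distances are infinite, $Z$ is the identity, so $(Zp)_i = p_i$ and $D_1^Z(p) = \prod_i p_i^{-p_i} = e^{H(p)}$, recovering the exponential of classical entropy.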
Provided the metric space is sufficiently nice (positive semidefinite, with a non-negative weighting), the maximum diversity over all probability measures is the magnitude.
To put it another way: suppose you have a finite set $X$ with a (nice) metric and a probability distribution $p$. Then if $p_{\max}$ denotes a maximizing probability distribution on $X$, we have
$$D_1^Z(p) \le D_1^Z(p_{\max}) = |X|,$$
where $|X|$ denotes the magnitude of $X$.
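As a quick sanity check (my own worked example, not from the thread): take $X = \{a, b\}$ with $d(a, b) = d$. Then
$$Z = \begin{pmatrix} 1 & e^{-d} \\ e^{-d} & 1 \end{pmatrix}, \qquad p_{\max} = \left(\tfrac{1}{2}, \tfrac{1}{2}\right), \qquad D_1^Z(p_{\max}) = \left(\frac{1 + e^{-d}}{2}\right)^{-1} = \frac{2}{1 + e^{-d}} = |X|,$$
and as $d \to \infty$ this tends to $2 = e^{\log 2}$, the exponential of the entropy of the uniform distribution, as expected.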
Amazing! Thanks a lot for the clarification