Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: categorical probability

Topic: uniform distributions in categorical probability?


Martti Karvonen (May 27 2021 at 13:45):

Is there an existing axiomatization of uniform distributions in Markov categories (in which they exist)? I notice Bart Jacobs denotes them with the "ground" symbol (the upside-down version of which he uses for deletion) and mentions some basic properties, but I'd expect that there's more to say about the matter. For instance, if G is a group (more generally, a Hopf algebra in a Markov cat), then multiplying the identity on G with the uniform distribution should be equal to discarding the input and just outputting the uniform distribution.
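This invariance property is easy to check concretely. A minimal sketch in FinStoch terms (my own illustration, not from the thread), using the cyclic group Z/5 as an arbitrary example: multiplying an input x with an independent uniform sample U gives the uniform distribution regardless of x, which is exactly "discard the input and output the uniform distribution".

```python
from fractions import Fraction

# Hypothetical finite-group check: in FinStoch, the kernel x |-> x * U with
# U uniform equals the kernel that discards x and outputs U.
n = 5  # the cyclic group Z/5, chosen arbitrarily
uniform = {g: Fraction(1, n) for g in range(n)}

def push_multiply(x, dist):
    """Distribution of x * U (group operation = addition mod n) when U ~ dist."""
    out = {g: Fraction(0) for g in range(n)}
    for u, p in dist.items():
        out[(x + u) % n] += p
    return out

# For every input x, translating the uniform distribution by x leaves it fixed.
for x in range(n):
    assert push_multiply(x, uniform) == uniform
print("x * U is uniform for every input x")
```

The exact rational arithmetic (`Fraction`) avoids any floating-point equality issues, so the check is literal equality of distributions.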

Jules Hedges (May 27 2021 at 14:20):

The first thing that came to mind is that a uniform distribution $I \to X$ should not be changed by any iso $X \to X$
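For finite sets this condition pins the uniform distribution down uniquely. A quick numerical sketch of the invariance under deterministic isos (my own check, not from the thread), where the deterministic isos of a finite set are the permutation matrices:

```python
import itertools
import numpy as np

# The uniform distribution on an n-element set is fixed by every
# permutation matrix (the deterministic isos in FinStoch).
n = 4
uniform = np.full(n, 1 / n)

for perm in itertools.permutations(range(n)):
    P = np.eye(n)[list(perm)]  # permutation matrix for this iso
    assert np.allclose(P @ uniform, uniform)

# Uniqueness sketch: averaging any permutation-invariant distribution over
# all permutations returns it unchanged, and that average is always uniform.
print("uniform is fixed by all", len(list(itertools.permutations(range(n)))), "permutations")
```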

Martti Karvonen (May 27 2021 at 14:25):

True in FinStoch, but in the abstract case I don't have an intuition whether this should hold for all isos or only the deterministic ones.

Tobias Fritz (May 27 2021 at 20:44):

Picturing classical and quantum Bayesian inference is one paper that comes to mind. Here, Coecke and Spekkens have axiomatized the category of finite sets with $\mathbb{R}_+$-valued matrices as what we would now call a hypergraph category (plus a dagger). The uniform distributions are encoded as scalar multiples of the counits. So as long as you're dealing with discrete probability only, then Markov categories may not be the right framework, and the Coecke-Spekkens approach may indeed be the way to go. (Though it may be worth noting that their treatment of conditionals suffers from inconsistencies arising from zero probabilities.) Markov categories on the other hand are intended primarily for measure-theoretic probability, where one can't expect a sensible concept of uniform distribution, and in particular there isn't any probability measure that would be invariant under all deterministic isos. (BTW, isos are necessarily deterministic in most Markov categories of interest, unless you have negative probabilities?)

On the other hand, there are also good reasons to argue that even in the discrete case, one shouldn't single out any particular distributions as uniform. For example, the distribution $(\frac{1}{2}, \frac{1}{2}, 0)$ on a three-element set is essentially isomorphic to $(\frac{1}{2}, \frac{1}{2})$ on a two-element set. So calling the latter uniform but not the former would do some violence to the spirit of probability, although to what extent this is a problem will of course depend on your context.
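The "essentially isomorphic" here can be made concrete (my own numerics, not from the thread): there are deterministic maps in both directions that push one distribution to the other and compose to the identity almost surely, yet only one of the two looks uniform on its carrier set.

```python
import numpy as np

# (1/2, 1/2, 0) on {0,1,2} vs. (1/2, 1/2) on {0,1}.
p3 = np.array([0.5, 0.5, 0.0])   # "non-uniform" on a 3-element set
p2 = np.array([0.5, 0.5])        # "uniform" on a 2-element set

# Deterministic map {0,1,2} -> {0,1}: 0|->0, 1|->1, 2|->0
# (the value on the null point 2 is irrelevant almost surely).
F = np.array([[1, 0, 1],
              [0, 1, 0]])
# Deterministic inclusion {0,1} -> {0,1,2}.
G = np.array([[1, 0],
              [0, 1],
              [0, 0]])

assert np.allclose(F @ p3, p2)   # pushforward of p3 is p2
assert np.allclose(G @ p2, p3)   # and back again
# F @ G is the identity on {0,1}; G @ F is the identity on the support of p3.
```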

Tobias Fritz (May 27 2021 at 21:02):

The fact that multiplying with the uniform distribution on a group amounts to discarding the input corresponds to this equation:
haar_integral.png

Tobias Fritz (May 27 2021 at 21:03):

(To be read from bottom to top, with the red structure the Hopf algebra multiplication, and the green the Hopf algebra counit respectively the uniform distribution = Haar integral.) This sort of thing is naturally part of the theory of interacting Frobenius algebras.
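A possible symbolic reading of the diagram (my own transcription, under the conventions just stated: $\mu$ is the Hopf algebra multiplication, $\epsilon \colon G \to I$ the counit/discard, and $u \colon I \to G$ the uniform distribution, i.e. the Haar integral):

```latex
% Left invariance of the Haar integral u on a Hopf algebra G:
% multiplying the input with a Haar sample equals
% discarding the input and emitting a Haar sample.
\mu \circ (\mathrm{id}_G \otimes u) \;=\; u \circ \epsilon
```

Both sides are morphisms $G \to G$ (using the unitor $G \otimes I \cong G$ implicitly on the left).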

Martti Karvonen (May 28 2021 at 12:36):

Tobias Fritz said:

The fact that multiplying with the uniform distribution on a group amounts to discarding the input corresponds to this equation:
haar_integral.png

Yeah this is the equation I had doodled before asking - I guess I'd actually like to derive it from some more "basic" properties of uniform distributions. However, you're probably right that Markov cats are not the right tool as I mostly care about discrete probability this time. Thanks for the answer!

Cole Comfort (May 28 2021 at 13:00):

What is really cool is that they show that this equation is actually a pullback of matrices.

Martti Karvonen (May 28 2021 at 13:13):

Now that you mention it, you've piqued my interest: if one works with quasiprobabilities of some sort or another, how do the corresponding Markov categories change categorically? You already mentioned the possibility of isos that are not deterministic; is there anything else that stands out at the categorical level?

Tobias Fritz (May 28 2021 at 14:45):

Ha, interesting question! The Markov category of stochastic matrices with possibly negative entries behaves quite differently from its nonnegative counterpart $\mathsf{FinStoch}$, and basically everything breaks down. It doesn't have conditionals. It fails the causality axiom, which says that if
causality1.png

Tobias Fritz (May 28 2021 at 14:45):

then also
causality2.png

Tobias Fritz (May 28 2021 at 14:46):

and it fails the positivity axiom, which says that if a composite $gf$ is deterministic, then also
positivity.png

Tobias Fritz (May 28 2021 at 14:47):

However, these failures are not independent: the existence of conditionals implies causality, which in turn implies positivity (where the last implication was proven only recently by @Dario Stein). So all of these failures come down to the failure of positivity, and that's why I had called that condition the "positivity axiom".
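The earlier parenthetical about non-deterministic isos can also be seen in a toy instance (my own example, not from the thread): a signed stochastic matrix, i.e. one whose columns sum to 1 but which may have negative entries, can have negative entries and still be invertible with a stochastic inverse.

```python
import numpy as np

# A signed stochastic matrix (columns sum to 1) with negative entries
# whose inverse is again stochastic -- in fact genuinely nonnegative.
# So M is an iso in the signed category but not deterministic
# (deterministic morphisms here are the 0/1-valued matrices).
M = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
assert np.allclose(M.sum(axis=0), 1.0)       # signed stochastic

M_inv = np.linalg.inv(M)                     # = (1/3) * [[2, 1], [1, 2]]
assert np.allclose(M_inv.sum(axis=0), 1.0)   # also stochastic
assert np.all(M_inv >= 0)                    # the inverse is even nonnegative
assert np.allclose(M @ M_inv, np.eye(2))
print("M is a non-deterministic iso of signed stochastic matrices")
```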

Eigil Rischel (Jun 03 2021 at 10:32):

This from @Tom Leinster may be of relevance: https://golem.ph.utexas.edu/category/2020/11/the_uniform_measure.html