You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Is there an existing axiomatization of uniform distributions in Markov categories (in which they exist)? I notice Bart Jacobs denotes them with the "ground" symbol (the upside-down version of which he uses for deletion) and mentions some basic properties, but I'd expect that there's more to say about the matter. For instance, if G is a group (more generally, a Hopf algebra in a Markov category), then multiplying the identity on G with the uniform distribution should be equal to discarding the input and just outputting the uniform distribution.
The first thing that came to mind is that a uniform distribution should not be changed by any iso
True in FinStoch, but in the abstract case I don't have an intuition for whether this should hold for all isos or only the deterministic ones.
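For what it's worth, the FinStoch case can be checked by brute force. Here's a small Python sketch (the `pushforward` helper is my own illustrative name): the deterministic isos of a finite set are exactly the permutations, and the uniform distribution is fixed by all of them.

```python
from itertools import permutations

def pushforward(perm, dist):
    """Push a distribution on {0,...,n-1} forward along a permutation."""
    out = [0.0] * len(dist)
    for i, p in enumerate(dist):
        out[perm[i]] += p
    return out

n = 4
uniform = [1.0 / n] * n

# The uniform distribution is invariant under every deterministic iso
# (= permutation) of the underlying finite set.
assert all(pushforward(perm, uniform) == uniform
           for perm in permutations(range(n)))
```

(This only checks invariance, of course, not that uniform is characterized by it.)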
Picturing classical and quantum Bayesian inference is one paper that comes to mind. Here, Coecke and Spekkens have axiomatized the category of finite sets with matrices as what we would now call a hypergraph category (plus a dagger). The uniform distributions are encoded as scalar multiples of the counits. So as long as you're dealing with discrete probability only, Markov categories may not be the right framework, and the Coecke-Spekkens approach may indeed be the way to go. (Though it may be worth noting that their treatment of conditionals suffers from inconsistencies arising from zero probabilities.) Markov categories, on the other hand, are intended primarily for measure-theoretic probability, where one can't expect a sensible concept of uniform distribution; in particular, there isn't any probability measure that would be invariant under all deterministic isos. (BTW, isos are necessarily deterministic in most Markov categories of interest, unless you have negative probabilities?)
On the other hand, there are also good reasons to argue that even in the discrete case, one shouldn't single out any particular distributions as uniform. For example, a distribution on a three-element set can be essentially isomorphic to one on a two-element set. So calling the latter uniform but not the former would do some violence to the spirit of probability, although to what extent this is a problem will of course depend on your context.
The fact that multiplying with the uniform distribution on a group amounts to discarding the input is expressed by this equation:
haar_integral.png
(To be read from bottom to top, with the red structure the Hopf algebra multiplication, and the green structures the Hopf algebra counit and the uniform distribution (= Haar integral), respectively.) This sort of thing is naturally part of the theory of interacting Frobenius algebras.
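In FinStoch over a finite group, this equation is just the absorption property of convolution. A quick Python sketch for the cyclic group Z/n (the `convolve` helper is my own name for the group multiplication applied to independent inputs):

```python
def convolve(p, q, n):
    """Distribution of x + y (mod n) for independent x ~ p, y ~ q on Z/n."""
    out = [0.0] * n
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[(i + j) % n] += pi * qj
    return out

n = 5
uniform = [1.0 / n] * n
p = [0.5, 0.2, 0.1, 0.1, 0.1]  # arbitrary input distribution

# Multiplying any input with the uniform distribution discards the
# input: the output is uniform again, as in the diagram.
result = convolve(p, uniform, n)
assert all(abs(r - 1.0 / n) < 1e-12 for r in result)
```

The same computation goes through for any finite group, since each output entry is a sum of p's entries times 1/n.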
Tobias Fritz said:
The fact that multiplying with the uniform distribution on a group amounts to discarding the input is expressed by this equation:
haar_integral.png
Yeah, this is the equation I had doodled before asking - I guess I'd actually like to derive it from some more "basic" properties of uniform distributions. However, you're probably right that Markov cats are not the right tool, as I mostly care about discrete probability this time. Thanks for the answer!
What is really cool is that they show that this equation is actually a pullback of matrices.
Now that you mention it, you've piqued my interest: if one works with quasiprobabilities of some sort or another, how do the corresponding Markov categories change categorically? You already mentioned the possibility of isos that are not deterministic; is there anything else that stands out at the categorical level?
Ha, interesting question! The Markov category of stochastic matrices with possibly negative entries behaves quite differently from its nonnegative counterpart, and basically everything breaks down. It doesn't have conditionals. It fails the causality axiom, which says that if
causality1.png
then also
causality2.png
and it fails the positivity axiom, which says that if a composite is deterministic, then also
positivity.png
However, these failures are not independent: having conditionals implies causality, which implies positivity (where the last implication was proven only recently by @Dario Stein). So it's really only the failure of positivity, and that's why I called that condition the "positivity axiom".
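To make the non-deterministic isos concrete, here's a Python sketch with a signed stochastic 2x2 matrix (my own illustrative choice): its columns sum to 1 but one entry is negative, and its inverse is an honest stochastic matrix with no 0/1 entries, i.e. an iso that is not deterministic.

```python
# A signed stochastic matrix: columns sum to 1, but with a negative entry.
M = [[2.0, -1.0],
     [-1.0, 2.0]]

# Its inverse is genuinely random (no 0/1 entries), hence not deterministic.
Minv = [[2/3, 1/3],
        [1/3, 2/3]]

def matmul(A, B):
    """Plain matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Check that M and Minv are mutually inverse...
I = matmul(M, Minv)
assert all(abs(I[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# ...and that both are (signed) stochastic: every column sums to 1.
assert all(abs(sum(col) - 1.0) < 1e-12 for col in zip(*M))
assert all(abs(sum(col) - 1.0) < 1e-12 for col in zip(*Minv))
```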
This from @Tom Leinster may be of relevance: https://golem.ph.utexas.edu/category/2020/11/the_uniform_measure.html