You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
If we have a monoidal monad $P$, it comes equipped with lax morphisms $\nabla : PX \otimes PY \to P(X \otimes Y)$ which, in the case of probability theory, we interpret as the embedding of a pair of distributions, one on $X$ and one on $Y$, into a joint distribution on $X \otimes Y$.
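For the finite distribution monad this structure map is easy to write down concretely. Here's a minimal sketch (the function name `nabla` and the dict encoding of distributions are my own illustrative choices, not from any library):

```python
# Sketch of the lax structure map nabla : PX x PY -> P(X x Y)
# for the finite distribution monad, where a distribution is
# modelled as a dict mapping outcomes to probabilities.

def nabla(p, q):
    """Embed a pair of distributions into their joint (product) distribution."""
    return {(x, y): px * qy for x, px in p.items() for y, qy in q.items()}

p = {"heads": 0.5, "tails": 0.5}   # a distribution on X
q = {"a": 0.25, "b": 0.75}         # a distribution on Y
joint = nabla(p, q)                # a distribution on X x Y
# joint[("heads", "a")] == 0.125
```

Each joint probability is the product of the two input probabilities, which is exactly the "independent joint" interpretation from probability theory.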
I'm reading about Markov categories which seem to give a synthetic view on probability theory and I'm wondering: can I see the above embedding purely in terms of equations in a Markov category? I can't seem to see how it could be done
In a Markov category you also have "marginalisation" maps, and marginalisation is a one-sided inverse to $\nabla$: if you start with a pair of distributions, stick them together and then marginalise them apart again, you get back what you started with, but the other way round isn't true. Is that what you need?
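Concretely, for finite distributions (continuing a dict encoding; the helper names are illustrative): sticking two distributions together and marginalising recovers them exactly, while marginalising a correlated joint and re-tensoring does not recover the joint.

```python
def stick_together(p, q):
    """Form the independent joint of two distributions (dicts outcome -> prob)."""
    return {(x, y): px * qy for x, px in p.items() for y, qy in q.items()}

def marginals(j):
    """Marginalise a joint distribution on X x Y back to a pair of distributions."""
    px, qy = {}, {}
    for (x, y), pr in j.items():
        px[x] = px.get(x, 0.0) + pr
        qy[y] = qy.get(y, 0.0) + pr
    return px, qy

p = {"h": 0.5, "t": 0.5}
q = {"a": 0.25, "b": 0.75}   # dyadic probabilities keep float arithmetic exact
assert marginals(stick_together(p, q)) == (p, q)   # one-sided inverse

# The other way round fails: a correlated joint is not recovered.
corr = {("h", "a"): 0.5, ("t", "b"): 0.5}
mp, mq = marginals(corr)
assert stick_together(mp, mq) != corr   # correlation information is lost
```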
Hm, I guess it's confusing since you're in an arbitrary Markov category but you're referring to objects as $PX$, i.e. using the monad $P$
Oh yeah. Right. Your $\nabla$ vanishes into the monoidal structure of the Markov category; you see it directly e.g. if you have a state $p : I \to X$ and a state $q : I \to Y$ you tensor them together into a state $p \otimes q : I \to X \otimes Y$. I think the internal way to say it would be that if you have a pair of morphisms $f : A \to PX$ and $g : B \to PY$ then their tensor in the Kleisli category, $A \otimes B \to P(X \otimes Y)$,
equals $\nabla \circ (f \otimes g)$
Right, I guess the confusing bit is that in the base category, if I have two objects $X$ and $Y$ I can do two things: either first take their monoidal product and then consider $P(X \otimes Y)$, or I can take their distributions separately and then take their cartesian product $PX \times PY$. And then there's a lax morphism $PX \times PY \to P(X \otimes Y)$ between them.
But in an arbitrary Markov category if I have two objects $X$ and $Y$ I can just take their tensor product $X \otimes Y$. And I guess the other analogue doesn't exist? This might or might not be related to your last message.
One thing we can say is that in a Markov category, a distribution on $X$ is a morphism $I \to X$, where $I$ is the monoidal unit. If we have a distribution $p$ on $X$ and $q$ on $Y$, we can form a distribution on $X \otimes Y$ given by $p \otimes q$, which has marginals on $X$ and $Y$ given by $p$ and $q$, and $X$ and $Y$ are independent. In string diagrams it's just $p$ and $q$ drawn side by side.
Of course this just corresponds to one element of $PX \times PY$ being mapped into $P(X \otimes Y)$. If you want a morphism corresponding to the mapping itself then I think you have to use the language of the representable Markov categories paper, which lets you talk about things like $PX$ ("the object of distributions over $X$") as an object in the Markov category, if the category is such that they exist.
I see. I understand the case with single distributions.
So you're saying to think of a distribution on $X$ as a map $I \to X$ in a Markov category. I guess one could get close to something I wanted by thinking of the set of maps $I \to X$ and $I \to Y$ and relating it to the set of maps $I \to X \otimes Y$.
But somehow that still doesn't feel satisfying. I should probably read the paper you linked, but skimming it for a bit I'm not sure where to start... is there a particular theorem there that relates to what I said?
As @Nathaniel Virgo already mentioned, the formation of product distributions, and more generally products of Markov kernels, corresponds in the Markov category picture simply to their monoidal product. You can think of this quite explicitly in terms of the relevant string diagram: putting two morphisms side by side makes them intuitively "independent" because the diagram is disconnected, and this independence is exactly what formalizes stochastic independence. (Compare with tensor products in $\mathsf{FinStoch}$, where you'd also multiply the entries of two matrices pointwise in order to get their tensor product, and note that that corresponds to the multiplication of probabilities in the definition of stochastic independence.)
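The finite stochastic picture can be checked by hand: the monoidal product of two stochastic matrices is their Kronecker product, whose entries are products of probabilities. A small sketch using plain lists of rows (each row summing to 1; the helper name `kron` is my own, nothing library-specific):

```python
def kron(f, g):
    """Kronecker (tensor) product of two row-stochastic matrices,
    given as lists of rows. Each entry of the result is a product
    f[i][j] * g[k][l]: the probability of two independent transitions."""
    return [[fij * gkl for fij in frow for gkl in grow]
            for frow in f for grow in g]

f = [[0.5, 0.5], [1.0, 0.0]]   # a 2x2 stochastic matrix
g = [[0.25, 0.75]]             # a 1x2 stochastic matrix (a distribution)
h = kron(f, g)
# Every row of the Kronecker product is again a probability distribution.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in h)
```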
There's more on the monad side of this stuff in Bimonoidal Structure of Probability Monads, while our more recent paper on representable Markov categories doesn't go into much detail on these aspects.
Is this going in the direction that you had in mind?
Bruno Gavranovic said:
I should probably read the paper you linked, but skimming it for a bit I'm not sure where to start... is there a particular theorem there that relates to what I said?
I meant that since a representable Markov category has an object $PX$ of probability distributions over $X$ for every object $X$, there should be a morphism of type $PX \otimes PY \to P(X \otimes Y)$ that embeds $PX$ and $PY$ into $P(X \otimes Y)$ in the same way as the lax monoidal structure of a probability monad. This morphism is constructed in the proof of proposition 3.12 in the paper.
Bruno Gavranovic said:
So you're saying to think of a distribution on $X$ as a map $I \to X$ in a Markov category. I guess one could get close to something I wanted by thinking of the set of maps $I \to X$ and $I \to Y$ and relating it to the set of maps $I \to X \otimes Y$.
I was thinking about this last night. It generalises slightly if we consider a map $A \to X$ in the Markov category as a family of distributions on $X$ parametrised by $A$. Then there's a map from the sets of maps $A \to X$ and $B \to Y$ to the set of maps $A \otimes B \to X \otimes Y$, i.e. parametrised families of independent distributions. This map is actually just the action of the functor $\otimes$ on morphisms. (I guess that's just what Tobias Fritz said above.)
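In kernel form (encoding a parametrised family of finite distributions as a Python function from parameters to probability dicts) this pairing of morphisms can be sketched as follows; `tensor_kernels` and `coin` are illustrative names of my own:

```python
def tensor_kernels(f, g):
    """Pair two kernels f : A -> PX and g : B -> PY into the kernel
    A x B -> P(X x Y) sending (a, b) to the independent joint of
    f(a) and g(b), i.e. the tensor of the two morphisms."""
    def fg(a, b):
        p, q = f(a), g(b)
        return {(x, y): px * qy for x, px in p.items() for y, qy in q.items()}
    return fg

def coin(bias):
    """A parametrised family of distributions: a coin with the given bias."""
    return {"h": bias, "t": 1 - bias}

two_coins = tensor_kernels(coin, coin)
# two_coins(0.5, 0.25)[("h", "h")] == 0.125
```

Specialising the parameter objects to the monoidal unit (zero-argument kernels) recovers the earlier case of a plain pair of distributions.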
Hi, just wanted to say, thanks for all the answers!
I've been thinking about this but I'm not sure whether this answers my question; there's some confusion in my mind and I'm not even sure what the confusion is.
But somehow I've started doing other stuff now so I might come back to this question sometime when I feel more ready :grinning: