I have been thinking about how to characterize conditionals categorically. Here is a formulation different (I think) from @Tobias Fritz's.
Let $\mathcal{C}$ be a category in which the objects are standard measurable spaces sharing the same sample space, and the morphisms are the identity maps from finer spaces to coarser ones.
Let $\mu$ be a cone from the singleton set $1$ to $\mathcal{C}$.
Then there exists another category $\mathcal{C}^{\mathrm{op}}$, with the same objects as $\mathcal{C}$ and morphisms pointing the other way, along with a contravariant cone $\mu^*$ making a compatibility diagram commute, where $\iota$ denotes the natural transformation whose component at every object is the identity map.
I think that this universal property entirely characterizes conditionals.
Nice idea! I need to think about it a little more, and I have to go soon, but for now let me just comment that your terminology and notation are a bit non-standard, which may impede communication with other category theorists.
This is not to criticize the idea, which I think goes in the right direction. (More on that later.) It's normal for your parlance to be a bit non-standard at the beginning, especially if you haven't talked to category theorists much before. I'm pointing it out early so that it doesn't become entrenched.
Now how about that definition of conditionals itself? Let me first try to formulate it in my own words, and then see whether this is indeed what you have in mind.
Let $\mathsf{Meas}$ be the category of measurable spaces and measurable maps, and let $\mathsf{Stoch}$ be the Kleisli category of the Giry monad on $\mathsf{Meas}$, so that its morphisms are Markov kernels between measurable spaces. Then we have a canonical functor $F \colon \mathsf{Meas} \to \mathsf{Stoch}$. It turns every measurable map into the corresponding (deterministic) Markov kernel. It's worth noting that this functor is not faithful.
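To spell that out: if I write $\Sigma_B$ for the $\sigma$-algebra of $B$ (that letter is just my shorthand here), the functor sends $f$ to the Dirac kernel

$$ F(f)(S \mid a) \;=\; \delta_{f(a)}(S) \;=\; \mathbf{1}\big[f(a) \in S\big] \qquad \text{for } a \in A,\ S \in \Sigma_B. $$

Non-faithfulness can occur when $B$ does not separate points: if $f(a) \neq f'(a)$ for some $a$ but no set in $\Sigma_B$ distinguishes the two points, then $\delta_{f(a)} = \delta_{f'(a)}$, so distinct maps can induce the same kernel.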
Suppose that we have a measurable map $f \colon A \to B$ in $\mathsf{Meas}$. Denoting the singleton measurable space by $1$, a morphism $\mu \colon 1 \to A$ in $\mathsf{Stoch}$ is exactly a probability measure on $A$, so that $(A, \mu)$ is a probability space. The pushforward of this measure is the composite $1 \xrightarrow{\mu} A \xrightarrow{F(f)} B$ in $\mathsf{Stoch}$. If we denote it by $\nu$, then we can consider both measures as the two components of a cone from $1$ to the diagram $A \xrightarrow{F(f)} B$ in $\mathsf{Stoch}$.
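Spelled out on sets, the pushforward and the cone condition read

$$ \nu(S) \;=\; \mu\big(f^{-1}(S)\big) \quad \text{for measurable } S \subseteq B, \qquad \text{i.e. } F(f) \circ \mu = \nu \ \text{ in } \mathsf{Stoch}. $$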
Now a conditional is a section $g$ of $F(f)$ in $\mathsf{Stoch}$, meaning that $F(f) \circ g = \mathrm{id}_B$, which in addition satisfies $g \circ \nu = \mu$. In other words, $g$ must be a section such that $(\mu, \nu)$ also forms a cone with respect to the diagram $B \xrightarrow{g} A$.
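Here is a minimal worked example in the discrete case, where kernels are just conditional probability tables: take $A = \{1, 2, 3\}$ with $\mu = (\tfrac14, \tfrac14, \tfrac12)$, let $B = \{\mathrm{odd}, \mathrm{even}\}$, and let $f$ be the parity map, so that $\nu = (\tfrac34, \tfrac14)$. Then the section is given by the elementary formula

$$ g(a \mid b) \;=\; \frac{\mu(a)\,\mathbf{1}[f(a) = b]}{\nu(b)}, \qquad \text{so} \quad g(\cdot \mid \mathrm{odd}) = (\tfrac13, 0, \tfrac23), \quad g(\cdot \mid \mathrm{even}) = (0, 1, 0), $$

and one can check directly that $\sum_b g(a \mid b)\,\nu(b) = \mu(a)$, which is $g \circ \nu = \mu$, and that $\sum_{a \,:\, f(a) = b'} g(a \mid b) = \mathbf{1}[b' = b]$, which is $F(f) \circ g = \mathrm{id}_B$.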
Finally, it's worth noting that in some situations, such as in the theory of stochastic processes, we have more than just two measurable spaces, but rather a whole diagram, like a chain $A \to B \to C \to \cdots$. In this case, the definition of conditional can be generalized straightforwardly by using the formulation in terms of sections and cones.
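For instance, for a chain $A \xrightarrow{f} B \xrightarrow{h} C$ (writing $\mu_A$, $\mu_B$, $\mu_C$ for the components of a cone from $1$, so that $\mu_B = F(f)\mu_A$ and $\mu_C = F(h)\mu_B$), a conditional would be a family of sections

$$ g_1 \colon B \to A, \quad g_2 \colon C \to B, \qquad F(f)\,g_1 = \mathrm{id}_B, \quad F(h)\,g_2 = \mathrm{id}_C, \quad g_1\,\mu_B = \mu_A, \quad g_2\,\mu_C = \mu_B, $$

which assemble into exactly the kind of cone pointing the other way that you described in your first message.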
Is this an accurate representation of what you have in mind?
Thank you for the considered response. That is indeed what I had in mind.
The diagrams here, including the constant diagram at $1$, are actually functors from a small indexing category into $\mathsf{Stoch}$; and a cone such as $\mu$ is a natural transformation between them.
It seems to me that there should be such a thing as a "contravariant" natural transformation, from a (covariant) functor to a contravariant functor. Then the contravariant cones above, such as $\mu^*$, would be just such a thing.
Here is something interesting:
If $A$ and $B$ are objects in $\mathsf{Meas}$ and the morphism $A \to B$ is the pushforward of the identity map, then $A$ must have the same sample space as $B$ and a finer $\sigma$-algebra.
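This is just the elementary fact that, writing $\mathcal{F}$ and $\mathcal{G}$ for the two $\sigma$-algebras on the common sample space $\Omega$, the identity is measurable as a map $(\Omega, \mathcal{F}) \to (\Omega, \mathcal{G})$ precisely when $\mathcal{G} \subseteq \mathcal{F}$, and in that case the pushforward measure is just the restriction:

$$ (\mathrm{id}_\Omega)_* \mu \;=\; \mu|_{\mathcal{G}} \qquad \text{whenever } \mathcal{G} \subseteq \mathcal{F}. $$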
As you mentioned above, any measure $\mu \colon 1 \to A$ can be turned into a cone over the diagram $A \to B$.
A conditional expectation then makes a further diagram commute, in which one of the morphisms is the pushforward of the copy map $a \mapsto (a, a)$ and another is the identity morphism.
One of the two equalities expressed by that diagram is the "conditional determinism" property of conditional expectations, and the other is the projection property.
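Concretely, my guess is that in kernel language, with your letters $g$, $f$, $\mu$, $\nu$ from above, these two equalities amount to $g$ being concentrated on the fibres of $f$, and to averaging $g$ against $\nu$ recovering $\mu$:

$$ g\big(f^{-1}(\{b\}) \mid b\big) = 1 \ \text{ for } \nu\text{-almost every } b, \qquad \int_B g(S \mid b)\,\nu(\mathrm{d}b) = \mu(S) \ \text{ for every measurable } S \subseteq A. $$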
You're very welcome! It would be nice if we had some others chime in as well.
Concerning natural transformations from a covariant functor to a contravariant one, and things like that: the important thing is to explain what you mean by any terminology that isn't standard (not textbook material). So by "contravariant natural transformation", I assume that you mean this, but it's still a bit of a guess.
I can't quite parse your second diagram. What is that map?
Perhaps what you have in mind is this diagram? While this is not a commutative diagram, because the pentagon does not commute, it is a partially commutative diagram in the sense that it does commute when you start at the apex $1$. And this is precisely the definition of conditioning (in the form of Bayesian inversion) given by Cho and Jacobs, as in their equation (5) on p. 10. (They write $\omega$ instead of $\mu$, $c$ instead of $f$, and $c^\dagger_\omega$ instead of $g$.)
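In formulas, and in our letters rather than theirs, their condition amounts to saying that the two ways of building a joint distribution on $A \times B$ agree:

$$ g(\mathrm{d}a \mid b)\,\nu(\mathrm{d}b) \;=\; F(f)(\mathrm{d}b \mid a)\,\mu(\mathrm{d}a) \qquad \text{as measures on } A \times B, $$

and since $F(f)$ is deterministic here, the right-hand side is just the pushforward of $\mu$ along $a \mapsto (a, f(a))$.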
(One more silly terminology thing: I'm pretty sure that you don't intend $g$ to be the conditional expectation, since that would be a map from functions on one space to functions on the other. What you seem to be referring to is called a regular conditional probability.)
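For what it's worth, the two notions are related by integration: a regular conditional probability $g$ induces the conditional expectation via

$$ \mathbb{E}[\,\varphi \mid f = b\,] \;=\; \int_A \varphi(a)\; g(\mathrm{d}a \mid b) \qquad \text{for bounded measurable } \varphi \colon A \to \mathbb{R}, $$

so the former is a morphism $B \to A$ in $\mathsf{Stoch}$, while the latter sends functions on $A$ to functions on $B$.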