I've been working with generative testing in Haskell a bit, and there are some strange patterns that keep cropping up that I think might be accounted for using category theory. It seems to me that there is such a thing as a category of "sets equipped with probability distributions". An object of this category consists of a set and a probability function: a set $X$ together with a function $p : X \to [0,1]$ such that $\sum_{x \in X} p(x) = 1$.
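(For concreteness, here is a minimal Haskell sketch of such an object; `Dist` and the helper names are made up for illustration, not taken from any library:)

```haskell
-- A finitely-supported distribution: value/probability pairs summing to 1.
newtype Dist a = Dist { unDist :: [(a, Rational)] }
  deriving Show

-- The probability function p : X -> [0,1] of an object (X, p),
-- summing over repeated entries in the representation.
prob :: Eq a => Dist a -> a -> Rational
prob (Dist xs) x = sum [ w | (x', w) <- xs, x' == x ]

-- Example: the uniform distribution on a finite, non-empty set of values.
uniform :: [a] -> Dist a
uniform xs = Dist [ (x, 1 / fromIntegral (length xs)) | x <- xs ]
```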
A morphism in this category is then a function between the underlying sets such that:
There is an obvious identity function, and associativity feels like it should work, although I haven't yet proved it works (if you can see how it already falls apart here, please let me know!).
To take a simple example, we might consider uniform distributions on the integers and booleans, and a morphism between them is the function which determines whether an integer is even.
Building further on the house of cards, it seems to me that this category is monoidal in a couple of interesting ways.
Given two sets with probability distributions, we have a probability distribution on the cartesian product of the two sets, such that the probability of any pair is the product of the probabilities assigned to its elements (i.e. the joint probability of independent events). The singleton set has an obvious probability distribution that will have no effect when combined in this manner with the probability distribution of any other set. Given an allocation of privileged probability distributions to sets, we end up with a lax monoidal functor from a subcategory of Set to this category of distributions, cohering the cartesian product structure in Set with this monoidal structure of joint probabilities and cartesian products.
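(A sketch of that product, reusing the illustrative `Dist` type from above:)

```haskell
-- Joint distribution of independent events: the probability of a pair is
-- the product of the probabilities of its components.
productDist :: Dist a -> Dist b -> Dist (a, b)
productDist (Dist xs) (Dist ys) =
  Dist [ ((x, y), wx * wy) | (x, wx) <- xs, (y, wy) <- ys ]

-- The unit: the unique distribution on the singleton set, which changes
-- nothing when paired with any other distribution.
unitDist :: Dist ()
unitDist = Dist [((), 1)]
```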
The disjoint union of two sets can also be assigned a probability distribution, in which the probability of any element is its original probability scaled by the size of its containing set as a fraction of the size of the disjoint union. Here I run into some difficulty because the unit would presumably be the empty set, but there's no way to construct a sensible probability function on it (i.e. there is no way to make it integrate to 1). I tried tweaking the concept to assign probability to subsets instead of the elements of the set, which gives me a way to get a probability function for the empty set, but that creates a big muddle out of everything else. Do you think there's a way to salvage the idea?
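(Here is a sketch of that rescaled disjoint union in the same toy representation, with the support size standing in for the size of the set; note the problem described above: if both inputs are empty the result sums to 0, not 1.)

```haskell
-- Rescaled disjoint union: each summand is weighted by its size as a
-- fraction of the size of the disjoint union.
unionDist :: Dist a -> Dist b -> Dist (Either a b)
unionDist (Dist xs) (Dist ys) =
  Dist $ [ (Left x,  wx * nx / (nx + ny)) | (x, wx) <- xs ]
      ++ [ (Right y, wy * ny / (nx + ny)) | (y, wy) <- ys ]
  where
    nx = fromIntegral (length xs)
    ny = fromIntegral (length ys)
```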
That's as far as I got before I thought I'd ask here. Is this idea something that is already well studied, either abstractly or in the specific context of generative testing?
One thing that people do is to work with the "subdistribution monad", i.e. sum bounded by 1, instead of equal to 1.
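(A sketch in the same toy representation as above, with made-up names: weights sum to at most 1, the missing mass reads as the probability of failing to produce a sample, and the empty set now carries a perfectly good subdistribution that can serve as a unit.)

```haskell
-- A subdistribution: weights sum to at most 1.
newtype SubDist a = SubDist [(a, Rational)]

-- The missing mass, i.e. the probability that sampling fails.
failureMass :: SubDist a -> Rational
failureMass (SubDist xs) = 1 - sum (map snd xs)

-- The empty set gets the empty subdistribution.
emptySub :: SubDist a
emptySub = SubDist []
```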
I initially assumed you were talking about probability-preserving functions, but I realize I do not understand your notation. You wrote a condition that says "...such that: ..." but in the "..." you just wrote a set without any condition. What is your condition?
In my response that follows, I will assume you meant to say something like $q(y) = \sum_{x \in f^{-1}(y)} p(x)$ for a morphism $f : (X, p) \to (Y, q)$ (that's what it means to be measure-preserving for finite sets). Some of these ideas are described in a very readable paper by Baez, Fritz, and Leinster https://arxiv.org/abs/1106.1791 (that's where I first learned about this stuff). In particular, they describe a "disjoint union" structure similar to yours in terms of what they call "convex combinations." They call this category FinProb. I don't think they talk about the tensor product, but this is discussed in other papers, such as Fritz https://arxiv.org/abs/1908.07021.
Some points:
sorry about that. the condition I meant to describe in plain english is that the probability assigned to any element of the codomain is the sum of the probabilities assigned to the elements of its preimage
Perfect, then that's exactly the same as the condition I wrote, so I believe what I said applies.
Thanks. I have to go to bed for now, but I'll go through what you said tomorrow.
Cool. Feel free to ask questions or if something I said needs clarification. Btw, I like the idea of rescaling the probability measures by the size of the set. Even if it doesn't define a monoidal product on the category, it might still define an associative product... (I need to think about it).
Based on some rough calculations, it seems that your disjoint union assignment is not a functor. It is associative on objects, but look at what happens if you take two probability-preserving functions $f : (X, p) \to (Y, q)$ and $f' : (X', p') \to (Y', q')$. You need the function $f \amalg f'$ ($\amalg$ is my notation for disjoint union) to send the probability measure $\frac{|X|}{|X| + |X'|} p + \frac{|X'|}{|X| + |X'|} p'$ on $X \amalg X'$ to $\frac{|Y|}{|Y| + |Y'|} q + \frac{|Y'|}{|Y| + |Y'|} q'$ on $Y \amalg Y'$. I don't think this will work in general even if $f$ and $f'$ were probability-preserving, unless $\frac{|X|}{|X| + |X'|} = \frac{|Y|}{|Y| + |Y'|}$, i.e. $\frac{|X|}{|Y|} = \frac{|X'|}{|Y'|}$ (which, now that I think about it, makes intuitive sense). To see this, use the probability-preserving condition to see what the probabilities of $f(x)$ and $f'(x')$ are.
Start with the category of conditional distributions (i.e. the Kleisli category of the finite-support probability monad on Set) and then take its co-slice category under the 1-element set. Objects are sets $X$ together with a probability distribution $p : 1 \to X$ (since Kleisli morphisms $1 \to X$ are the same as distributions on $X$), and morphisms are conditional distributions $f : X \to Y$ satisfying the condition that $f \circ p = q$. I believe you have the sub-category of this where you require morphisms to be deterministic, i.e. each $f$ factors through the Dirac unit.
If that's right then you probably get a symmetric monoidal structure by general abstract nonsense, since conditional distributions has a symmetric monoidal structure (among other things) and it most likely passes through those 2 constructions
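(For what it's worth, a sketch of that finite-support probability monad in Haskell, again reusing the illustrative `Dist` type from earlier; Kleisli arrows `a -> Dist b` are the conditional distributions.)

```haskell
instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, w) | (x, w) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]          -- the Dirac (deterministic) unit
  Dist fs <*> Dist xs =
    Dist [ (f x, wf * wx) | (f, wf) <- fs, (x, wx) <- xs ]

instance Monad Dist where
  Dist xs >>= k =
    Dist [ (y, wx * wy) | (x, wx) <- xs, (y, wy) <- unDist (k x) ]

-- Kleisli composition of conditional distributions
-- (this is just Control.Monad.(>=>) specialised to Dist).
composeK :: (a -> Dist b) -> (b -> Dist c) -> a -> Dist c
composeK f g a = f a >>= g
```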
@Jules Hedges , I think @Asad Saeeduddin is trying to work with two monoidal structures. One is the one I think you're describing (which I think @Asad Saeeduddin already understands---this is the product of probability distributions). But there's another one he was trying to formulate, which involves appropriately rescaled disjoint unions. The category of conditionals does have a disjoint union, but this is not what I believe @Asad Saeeduddin was attempting to construct. The general abstract nonsense argument doesn't apply in this case because he is explicitly using the probability measures in his definition of the "rescaled disjoint union" (it does not come from a monoidal product in the category of conditionals).
@Jules Hedges Out of curiosity, what happens to the disjoint union in the coslice category? Maybe you can elaborate on the abstract nonsense, because I think one needs some additional assumptions to apply it. Let's call $\otimes$ the monoidal product on the category of conditionals (it can be either cartesian product or disjoint union). Given objects $(X, p)$ and $(Y, q)$ in the coslice (objects in your coslice cat), how is their product defined? One requires a map $1 \to 1 \otimes 1$ satisfying certain conditions in order to define the composite $1 \to 1 \otimes 1 \xrightarrow{p \otimes q} X \otimes Y$ to get an object. In the case of cartesian product, you have such a map and it's unique. In the case of disjoint union, such a map corresponds to a probability measure on a 2-element set. I believe there is no such probability measure that will turn this into a monoidal product in the coslice category. There are two choices to get an associative product (the two Dirac distributions), but it will not be symmetric with either of those choices. There is only one choice that will give a symmetric product (the uniform distribution), but it will not be associative (hmm... reminds me of some operations in algebraic topology!). One of the interesting things is that @Asad Saeeduddin suggested an idea that's at least associative and symmetric on objects. However, it is only a partially defined operation.
It's hard to conceptualise the disjoint union of two probability distributions. An element of the sample space is either an event in A or an event in B. The question is what should the probability of such an event be, and how can we interpret it probabilistically?
You could say "flip a coin, then sample from A if it's heads and from B if it's tails." Then for an event $a$ in A we would have $P(a) = \frac{1}{2} P_A(a)$, and for an event $b$ in B we'd have $P(b) = \frac{1}{2} P_B(b)$. But I think you perceived that this is a little arbitrary - why choose an unbiased coin instead of a biased one? A general (possibly biased) coin would give $P(a) = \lambda P_A(a)$ and $P(b) = (1 - \lambda) P_B(b)$, for some $\lambda \in [0,1]$, and the question is how to choose $\lambda$.
From what I understood above, you want to use a biased coin where the bias is proportional to the sample size, so $\lambda = \frac{|A|}{|A| + |B|}$. This may make sense in some cases, but it's also somewhat arbitrary - why should we be 3/2 times more likely to roll a D6 than a D4, and how should we handle infinite supports?
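(In code, again with the toy `Dist` type from earlier, the whole family of choices might look like this, with the size-proportional bias as just one choice of `lambda`:)

```haskell
-- Mixture with an explicit bias: sample from the left with probability
-- lambda, from the right with probability 1 - lambda.
mixWith :: Rational -> Dist a -> Dist b -> Dist (Either a b)
mixWith lambda (Dist xs) (Dist ys) =
  Dist $ [ (Left x,  lambda * w)       | (x, w) <- xs ]
      ++ [ (Right y, (1 - lambda) * w) | (y, w) <- ys ]

-- The size-proportional choice discussed above: lambda = |A| / (|A| + |B|).
mixBySize :: Dist a -> Dist b -> Dist (Either a b)
mixBySize a@(Dist xs) b@(Dist ys) =
  mixWith (fromIntegral (length xs) / fromIntegral (length xs + length ys)) a b
```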
However, if you instead choose a category where the objects are spaces of probability distributions, then this arbitrariness can be avoided: you just have one distribution in your space for every possible value of $\lambda$.
This might be a good time to mention this great talk that @Sam Tenka (naive student) gave, which I found online and have watched several times. He defines a category where the objects are spaces of probability distributions, in such a way that what I just described is the coproduct. The category also has a Cartesian product, which is the space of all distributions of the form $P_A(a) P_B(b)$ (i.e. where they're independent), and a tensor product, which I believe is the space of all possible joint distributions $P(a, b)$. (My description here is kind of rough and possibly not quite right - it's better to watch the talk for the details.) It comes from thinking about probability distributions not as normalised measures but as points in convex sets, which live in a general category of convex spaces. It's an amazingly cool idea.
@Nathaniel Virgo Although these resources don't describe everything Sam does in his talk, you might find Chapter 1 in @Paolo Perrone's PhD thesis (http://www.paoloperrone.org/phdthesis.pdf) a nice read. Particularly, the relationship between affine maps of convex spaces and stochastic maps is discussed. There's also some overlap in @Tobias Fritz' convex spaces paper (https://arxiv.org/abs/0903.5522). Though I do agree, I definitely learned things in Sam's talk that I did not learn elsewhere.
@Arthur Parzygnat Thanks for investigating this further. I don't quite understand your argument about the functor. In particular, I don't understand how the sum of probability functions arises. An element in the codomain of $f \amalg f'$ is either an element of $Y$ or an element of $Y'$. If it is some $y \in Y$, we define its probability to be $\frac{|Y|}{|Y| + |Y'|} q(y)$. If on the other hand it is some $y' \in Y'$, we define its probability to be $\frac{|Y'|}{|Y| + |Y'|} q'(y')$. The requirement on $f \amalg f'$, if I'm thinking correctly, is to independently ensure that:
$\frac{|Y|}{|Y| + |Y'|} q(y) = \sum_{x \in f^{-1}(y)} \frac{|X|}{|X| + |X'|} p(x)$ for every $y \in Y$,
and that:
$\frac{|Y'|}{|Y| + |Y'|} q'(y') = \sum_{x' \in f'^{-1}(y')} \frac{|X'|}{|X| + |X'|} p'(x')$ for every $y' \in Y'$.
Am I misunderstanding?
Take the general setup again: two probability-preserving functions $f : (X, p) \to (Y, q)$ and $f' : (X', p') \to (Y', q')$.
I think the requirement for probability-preservation is (let me just do it for $f$)
$\frac{|Y|}{|Y| + |Y'|} q(y) = \sum_{x \in f^{-1}(y)} \frac{|X|}{|X| + |X'|} p(x)$,
but as you can see, the left-hand side is
$\frac{|Y|}{|Y| + |Y'|} \sum_{x \in f^{-1}(y)} p(x)$ (because $f$ is probability-preserving),
which only equals the right-hand side if the relationship I provided earlier holds.
Then there should be a function from to , but the probability distributions you defined on these two sets are such that there are no probability-preserving functions between them at all.
Because on the left you have probabilities like , and on the right probabilities like and and .
Right, and as you can also check, @Reid Barton's example does not satisfy the ratio equality (the size ratio between source and target must be the same for both functions).
ok, i'll set aside the disjoint union operation for now.
If you're not working with finite sets, then you have to be careful about what you mean by uniform distribution. There is no uniform probability measure on the integers. And if you're working with infinite sets, you're probably using the language of measurable spaces (and then, yes, you are assigning probabilities to subsets and this leads one to the study of measure spaces).
What I'm working with is inductive data (i.e. what I'm loosely interpreting as sets are actually types). So they need not be finite, but the way in which they are infinite is somewhat restricted. I'm not sure whether you find this relevant to anything though
Can you provide a formal definition of inductive data and types in this context?
The empty set is a perfectly fine set and has a natural measure on it (zero), but no probability measure, so there is no unit for it in your category. @Exequiel Rivas provided a reasonable solution: use sub-probability measures.
I haven't managed to understand much about measures yet, but based on my cursory reading: wouldn't $\mu$ with $\mu(\emptyset) = 1$ be a valid probability measure for $(\emptyset, \Sigma)$ ($\Sigma$ being the $\sigma$-algebra on the sample space $\emptyset$)?
No, because $\mu(\emptyset) = 0$ is an axiom of a measure.
Could the unit just be a set with one element?
Btw, the reason it is an axiom is because the union of the empty set with itself is still the empty set. If $\mu(\emptyset) = c$ then $c = \mu(\emptyset) = \mu(\emptyset \cup \emptyset) = \mu(\emptyset) + \mu(\emptyset) = 2c$, which would only allow $c = 0$ or $c = \infty$; in the latter case every set has infinite measure, and you generally don't want to do that.
@Oliver Shetler There is some explanation here: https://ncatlab.org/nlab/show/inductive+type if you're interested. I'm not an expert in type theory either, so I wouldn't want to try and explain it
Could the unit just be a set with one element?
Which operation are you considering the unit of?
@Arthur Parzygnat
Is your functor well-defined?
Good point, it is not. I think I had the functor backwards: there is at least a functor that merely forgets the extra probabilistic equipment, no?
Which operation are you considering the unit of?
Oops, I was thinking of products but I see you were talking about disjoint unions.
The sub-probability measures fix seems semantically sound, though. If the empty set canonically represents an impossible outcome, then the union of two impossible outcomes also ought to be impossible. That's also how Coecke and Kissinger's quantum string diagrams handle it.
Hmm. I hadn't thought of this before, but in the context of generative testing it is actually the case that sampling may fail entirely, and a generator that always fails acts as the unit of the append operation I am trying to reconcile with a lax monoidal functor structure. This is perhaps another pointer that I really should be working with sub-probability measures.
Assuming both a category of sets with full probability measures and a category of sets with sub-probability measures exist, is there some relationship between the two?
One of the problems I'm trying to solve in the applied domain is that generators that can fail (which it seems correspond well to sub-probability measures) are inefficient, since the probability of getting a successful result tends to diminish as generators are compounded. So we end up with generators that we have to run several times in order to get a successful sample.
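(To make the inefficiency concrete, here's a rough sketch with a made-up generator type — not any real library's API, and assuming the `random` package for `randomRIO`: composing generators that can fail multiplies their success probabilities, and the usual fix of retrying costs an expected 1/p attempts.)

```haskell
import System.Random (randomRIO)

-- A generator that may fail to produce a value.
type Gen a = IO (Maybe a)

-- Succeed with probability p (a stand-in for a "sub-probability" generator).
withSuccessProb :: Double -> a -> Gen a
withSuccessProb p x = do
  r <- randomRIO (0, 1)
  pure (if r < p then Just x else Nothing)

-- Combining two generators: both must succeed, so success probabilities
-- multiply and the failure rate compounds as generators are composed.
both :: Gen a -> Gen b -> Gen (a, b)
both ga gb = do
  ma <- ga
  mb <- gb
  pure ((,) <$> ma <*> mb)

-- The usual workaround: retry until a sample succeeds,
-- which takes an expected 1/p attempts.
retryUntilJust :: Gen a -> IO a
retryUntilJust g = g >>= maybe (retryUntilJust g) pure
```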
If there is some systematic way of assigning probability distributions and restricting functions between them such that a proper probability distribution can be assigned to the disjoint union of any two underlying sets (and a probability preserving function to the product of any two probability preserving functions), this would be very useful in the context of a generative testing library.
@Nathaniel Virgo thank you. the talk is very interesting, but I'm having some trouble imagining a continuous "mixing" function that applies to my use case
How would this solve the generator problem you mentioned above? By reducing the need for generator composition?
Well if the probability distributions are total, it does not matter how much we compose the generators. Sampling it cannot fail.
I wish I knew more about the context here. It's a little hard for me to grasp the problem you are trying to solve with disjoint unions. Can you give a concrete example of a case where this problem comes up?