You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Recently, I found a formula that clarifies the link between stochastic and logical independence, but I'm still struggling to escape the measure theory aspect. Ideally, my goal would be to abstract both concepts of independence in a single setting.
For anyone reading this: I am looking for insights or references that could help recast the content below in terms of category theory. Any direction would be welcome.
For a given set of axioms $T$ (say written in a logic which uses propositional logic as a basis), define $\mathrm{Mod}(T)$ to be its class of models.
One says that a set of axioms $A$ in this logic is independent from another set of axioms $B$ in the same logic if we can find two models $M_1$ and $M_2$ such that:
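(For concreteness, one standard way to state the two conditions — which may differ in detail from the original formulation — is that $A$ should neither prove nor refute $B$:

$$M_1 \in \mathrm{Mod}(A) \cap \mathrm{Mod}(B) \qquad\text{and}\qquad M_2 \in \mathrm{Mod}(A) \setminus \mathrm{Mod}(B).$$

In words: $B$ holds in some model of $A$ and fails in some other model of $A$.)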
Let $(\Omega, \mathcal{F})$ be a measurable space equipped with a probability measure $\mathbb{P}$.
For every pair of events $A$ and $B$, we define $d(A, B) := \mathbb{P}(A \cap B) - \mathbb{P}(A)\,\mathbb{P}(B)$. We say that the events $A$ and $B$ are independent if the formula $d(A, B) = 0$ holds.
Proposition. The formula $d(A, B) = \mathbb{P}(A \cap B)\,\mathbb{P}(A^{c} \cap B^{c}) - \mathbb{P}(A \cap B^{c})\,\mathbb{P}(A^{c} \cap B)$ holds.
Proof. Write $\mathbb{P}(A) = \mathbb{P}(A \cap B) + \mathbb{P}(A \cap B^{c})$ and $\mathbb{P}(B) = \mathbb{P}(A \cap B) + \mathbb{P}(A^{c} \cap B)$ and insert these expressions in the expression for $d(A, B)$. Expand, then factor out $\mathbb{P}(A \cap B)$ and finally observe that $1 - \mathbb{P}(A \cap B) - \mathbb{P}(A \cap B^{c}) - \mathbb{P}(A^{c} \cap B) = \mathbb{P}(A^{c} \cap B^{c})$.
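Here is a quick numerical sanity check of that identity (pure Python; the four cell probabilities and the symbol $d$ are just the notation introduced above):

```python
import random

# Check: P(A∩B) - P(A)P(B) == P(A∩B)P(Ac∩Bc) - P(A∩Bc)P(Ac∩B)
# on random probability tables over the four "truth table" cells of A and B.
for _ in range(1000):
    cells = [random.random() for _ in range(4)]
    total = sum(cells)
    # P(A∩B), P(A∩Bc), P(Ac∩B), P(Ac∩Bc)
    p_ab, p_ab_c, p_ac_b, p_ac_bc = [c / total for c in cells]
    p_a = p_ab + p_ab_c
    p_b = p_ab + p_ac_b
    d = p_ab - p_a * p_b
    rhs = p_ab * p_ac_bc - p_ab_c * p_ac_b
    assert abs(d - rhs) < 1e-12
```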
Now, our goal is to reformulate the expression above. For this, we can use the product probability measure $\mathbb{P} \otimes \mathbb{P}$ on $\Omega \times \Omega$, which is generated by the images $(\mathbb{P} \otimes \mathbb{P})(E \times F) = \mathbb{P}(E)\,\mathbb{P}(F)$ for every pair of events $E, F$. Also, for every pair of events $E, F$ in $\mathcal{F}$, let us denote $E \otimes F := E \times F \subseteq \Omega \times \Omega$.
Our new notations reduce the expression of $d(A, B)$ to this:

$$d(A, B) \;=\; \mathbb{P}(A \cap B) \;-\; (\mathbb{P} \otimes \mathbb{P})(A \otimes B).$$
In other words, two events $A$ and $B$ are independent if the probability of finding a configuration in $\Omega$ that validates the event $A \cap B$ is equal to the probability of finding a configuration in $\Omega \times \Omega$ that validates the event $A \otimes B$.
There seems to be an analogy between stochastic and logical independence here. I tried to come up with an algebraic definition of stochastic independence by requiring a certain bijection, but this hasn't helped me escape the probability measure context.
Does anyone have intuitions on this or know some references that have explored this?
How about $\sigma$-algebras instead of measure theory?
Every [[sigma algebra]] is a boolean algebra with countable joins, and in which every non-bottom element is a member of a maximal filter closed under countable meets... IIRC that's the same as "complete". Books on boolean algebras should have this, and it looks like the connection you are making, at least from a distance. I believe you could phrase $\mathbb{P}$ just as a monotonic (along inclusions) map from the lattice of subsets to the total order on $[0,1]$. In logic the appropriate codomain is then just $\{0,1\}$ I guess. A rewrite in terms of monotonic maps of locales should be immediate, and that at least escapes (explicit) measure theory to something category-adjacent.
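Spelled out (the symbols here are mine, just to make the comparison visible), the suggestion is that a probability measure and a logical valuation are both monotone maps out of the same lattice, only with different codomains:

$$\mathbb{P} : (\Sigma, \subseteq) \longrightarrow ([0,1], \leq) \qquad\text{vs.}\qquad v : (\Sigma, \subseteq) \longrightarrow (\{0,1\}, \leq).$$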
Unfortunately a measure is not just a map of lattices, since it maps antichains of the algebra to sums in R. That's the fundamental quirk that messes up measure theory (and makes it useful, really)
Thanks, Matteo. If you impose inclusion-exclusion as an extra assumption, which already holds for the [edit: finite] rank functions of modular lattices, is that insufficient?
(just to state things explicitly: every distributive lattice is modular, rank functions are monotonic maps from graded posets to $\mathbb{N}$, which morally at least seems generalizable, IE for two lattice elements $a, b$ is $\rho(a \vee b) = \rho(a) + \rho(b) - \rho(a \wedge b)$ where $\rho$ is a rank function, and IE has higher-order expressions as well, with alternating signs like Euler's Characteristic, as well as a generalization due to Rota for arbitrary posets.)
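To make those formulas explicit (standard statements, with $\rho$ the rank function as above and $\mu$ a measure; Rota's poset version via the Möbius function is not reproduced here):

$$\rho(a \vee b) + \rho(a \wedge b) \;=\; \rho(a) + \rho(b) \qquad\text{(two-element IE, characteristic of modular lattices)},$$

$$\mu\Big(\bigcup_{i=1}^{n} A_i\Big) \;=\; \sum_{\emptyset \neq S \subseteq \{1,\dots,n\}} (-1)^{|S|+1}\, \mu\Big(\bigcap_{i \in S} A_i\Big) \qquad\text{(higher-order, alternating signs)}.$$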
Oh sorry that's only the finite case.
I have not understood this paper by Peter May, so I may be missing something badly, but it discusses inclusion-exclusion in the deepest and broadest context of which I am aware, and seems broadly applicable. And I believe IE is sufficient to get antichains to map to sums, so even if the lattice is not finite modular and you don't have IE for a rank function for free, maybe the category you want to generalize to is triangulated, and that could help.
I might be misunderstanding this discussion, but if not, one nice angle is effect algebras. These sort-of generalize Boolean algebras by thinking in terms of partial operations.
On a finite set $X$, a probability measure is the same thing as an effect algebra homomorphism $2^{X} \to [0,1]$. Here $2^{X}$ is regarded with the partial monoid structure given by disjoint union, and $[0,1]$ is regarded with the partial monoid structure given by addition where it is possible.
It's not as ad hoc as it might sound because these things fully embed in presheaves on finite Boolean algebras.
For the infinitary case, there are $\sigma$-effect algebras which have the analogous correspondence, since every $\sigma$-algebra is also a $\sigma$-effect algebra.
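To make the finite correspondence concrete, here is a small illustration (the set, the weights, and the helper names are mine):

```python
from itertools import chain, combinations

# A probability measure on a finite set X, seen as a map 2^X -> [0,1].
X = frozenset({"a", "b", "c"})
weights = {"a": 0.5, "b": 0.3, "c": 0.2}      # point masses summing to 1

def measure(U):
    return sum(weights[x] for x in U)

def subsets(S):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(sorted(S), r) for r in range(len(S) + 1))]

# Effect algebra homomorphism conditions: the top element goes to 1, and the
# partial sum on 2^X (union of *disjoint* subsets) goes to the partial sum on [0,1].
assert abs(measure(X) - 1.0) < 1e-9
for U in subsets(X):
    for V in subsets(X):
        if U.isdisjoint(V):
            assert abs(measure(U | V) - (measure(U) + measure(V))) < 1e-9
```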
Eric M Downes said:
Thanks, Matteo. If you impose inclusion-exclusion as an extra assumption, which already holds for the [edit: finite] rank functions of modular lattices, is that insufficient?
(just to state things explicitly: every distributive lattice is modular, rank functions are monotonic maps from graded posets to $\mathbb{N}$, which morally at least seems generalizable, IE for two lattice elements $a, b$ is $\rho(a \vee b) = \rho(a) + \rho(b) - \rho(a \wedge b)$ where $\rho$ is a rank function, and IE has higher-order expressions as well, with alternating signs like Euler's Characteristic, as well as a generalization due to Rota for arbitrary posets.)
The main problem is that $([0,1], +)$ is not a lattice since $+$ is not idempotent. This problem aside, asking explicitly for an inclusion-exclusion relation does work (you can concoct a suitable version for the countable case) but then you're just asking explicitly for it, not hiding it in the underlying structure :(
(I think you need the codomain to be an ordered ring, not a lattice for monotone maps from a lattice to express independence (as in, have the possibility of being homomorphic on meet), but maybe I misunderstand what Remy is trying to do here, and admittedly I have not looked into this in detail.)
Regarding asking for IE explicitly -- yeah totally it's not elegant, but the purpose was to allow us to abstract away from measure theory, and I was suggesting doing that by using lattices + IE as the latter plays well with lattices, and is an example of a kind of categorical trace discussed in the May paper. (The effect algebras Sam Staton suggested seem like they are nearly lattices, and quite interesting.)
@Eric M Downes @Matteo Capucci (he/him) @Sam Staton Thanks for the feedback! It will take me some time to go through it all (got a pretty busy week :grimacing:), but that's the kind of feedback I was looking for!
This Saturday, I will be flying to Tokyo to give a talk at the FHE & MPC Tokyo Residency, organized by the Ethereum community. Ethereum is also hosting the EDCON 2024 in Tokyo during that time.
At the residency, I will present my paper on Constructing a Fully Homomorphic Encryption Scheme with the Yoneda Lemma. The audience will have diverse math backgrounds, though most will not be familiar with category theory.
In a few lines: asymmetric cryptography has been relying on the kind of commutativity expressed by the Yoneda Lemma to produce cryptosystems with sound correctness. For example:
In ElGamal, you take a (commutative) group $G$, choose a secret key $s$ and publish the power $g^{s}$ for some publicly known $g \in G$. To encrypt a message $m \in G$, choose a homomorphism of the form $x \mapsto x^{r}$ (for a random exponent $r$) and send the ciphertext $(g^{r},\, m \cdot (g^{s})^{r})$.
How do you decrypt this ciphertext? If you have the secret key $s$, you can compute $m \cdot (g^{s})^{r} \cdot \big((g^{r})^{s}\big)^{-1} = m$.
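Here is a minimal toy sketch of that recipe in Python (the modulus, base and message are illustrative only; this is textbook ElGamal, not the construction from the paper):

```python
import random

# Toy ElGamal over the multiplicative group mod a (small, insecure) prime.
p = 4294967291          # largest prime below 2^32, for illustration only
g = 5                   # publicly known base element

def keygen():
    s = random.randrange(2, p - 1)                   # secret key
    return s, pow(g, s, p)                           # (secret, public = g^s)

def encrypt(pub, m):
    r = random.randrange(2, p - 1)                   # ephemeral exponent
    return pow(g, r, p), (m * pow(pub, r, p)) % p    # (g^r, m * (g^s)^r)

def decrypt(s, ct):
    c1, c2 = ct
    # The commutativity (g^r)^s = (g^s)^r lets the receiver cancel the mask;
    # c1^(p-1-s) is the inverse of c1^s by Fermat's little theorem.
    return (c2 * pow(c1, p - 1 - s, p)) % p

s, pub = keygen()
ct = encrypt(pub, 123456789)
assert decrypt(s, ct) == 123456789
```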
With lattice-based frameworks, you take an $R$-module $M$, choose a secret key $s \in M$ and publish a vector of the form $b = As + e$ for some publicly known $R$-matrix $A$ and (forgettable) small random vector $e$. To encrypt a message $\mu$, choose a homomorphism of the form $x \mapsto r^{\top} x$ (for a small random vector $r$) and send the ciphertext $(r^{\top} A,\; r^{\top} b + \mu)$, with $\mu$ encoded so that the small noise can later be rounded away.
How do you decrypt this ciphertext? If you have the secret key $s$, you can compute $(r^{\top} b + \mu) - (r^{\top} A)\, s = \mu + r^{\top} e \approx \mu$.
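And a matching toy sketch for the lattice-style recipe (a Regev-style bit encryption; the parameters, the $\{-1,0,1\}$ noise and the $\lfloor q/2 \rfloor$ encoding are common illustrative choices, not the ones from the paper):

```python
import random

# Toy Regev-style encryption of one bit; insecure parameters, illustration only.
q, n, m = 3329, 8, 16       # modulus, secret dimension, number of samples

def small():                # "forgettable" noise in {-1, 0, 1}
    return random.choice([-1, 0, 1])

def keygen():
    s = [random.randrange(q) for _ in range(n)]                       # secret key
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]   # public matrix
    e = [small() for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]  # b = A s + e
    return s, (A, b)

def encrypt(pub, bit):
    A, b = pub
    r = [random.randrange(2) for _ in range(m)]                       # random 0/1 combination
    c1 = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]     # r^T A
    c2 = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q         # r^T b + encoded bit
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    # The commutativity r^T (A s) = (r^T A) s cancels everything except the
    # small term r^T e plus the encoded bit, which rounding recovers.
    d = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0

s, pub = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pub, bit)) == bit
```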
Conclusion: the main ingredients of the decryption algorithms are the commutativity equations: $(g^{r})^{s} = (g^{s})^{r}$ in the ElGamal case and $(r^{\top} A)\, s = r^{\top} (A s)$ in the lattice-based case.
Now where is Yoneda? Well, for a given limit sketch $T$, the Yoneda Lemma gives us an isomorphism

$$\mathrm{Nat}\big(\mathcal{C}(c,-),\, F\big) \;\cong\; F(c),$$

representing any morphism $\alpha : \mathcal{C}(c,-) \Rightarrow F$ as some map "evaluate $F$ at $x$" for some representative $x \in F(c)$. And for every morphism $g : c \to d$, it also gives us the naturality property

$$\alpha_d(g) \;=\; F(g)(x),$$
which is basically recovering the commutative ingredient needed to decrypt in ElGamal and lattice-based systems. After noticing that, you can write down what you need to recover most FHE cryptosystems using the Yoneda Lemma. As a direct application, you can choose arbitrary theories and see what kind of cryptosystem you get.
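For reference, the naturality square in question, stated here just for a category $\mathcal{C}$ and a set-valued functor $F$ (the limit-sketch refinement used in the paper is not reproduced):

$$\begin{array}{ccc}
\mathcal{C}(c, c) & \xrightarrow{\;\alpha_c\;} & F(c) \\
\downarrow & & \downarrow \\
\mathcal{C}(c, d) & \xrightarrow{\;\alpha_d\;} & F(d)
\end{array}$$

where the left map is $g \circ -$ and the right map is $F(g)$; commutativity says $\alpha_d(g) = F(g)\big(\alpha_c(\mathrm{id}_c)\big)$. Reading $\alpha$ as "use the secret representative" and $g \circ -$, $F(g)$ as "apply the public operation", the two paths agreeing has exactly the shape of $(g^{r})^{s} = (g^{s})^{r}$.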