You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Ok, here I'd like to talk about categorical cryptography, which is something we've been working on quite a lot in recent weeks. This may be of interest in particular for @Stelios Tsampas and @Amar Hadzihasanovic, with whom I have already spoken about this.
Modern cryptography has always relied heavily on complexity theory. This is because cryptographic protocols make use of functions that are easy to compute in one direction but difficult in the other, that is, finding inverses is computationally hard. Endeavors in modelling cryptography using category theory have most often relied on the idea of using CT to talk about complexity/computability, and then stepping to cryptography from there.
I don't think this approach is right, or particularly insightful. This is for two reasons:
Now, the security in this algorithm comes from the fact that the group $G$ has a hard discrete logarithm problem, but the computational argument is not important here. What is important is that:
Observation 1: Computing $g^a$ and publishing it, and publishing $a$ and publicly computing $g^a$, are different things.
So, what really makes things work is that even if $(g^b)^a$ and $(g^a)^b$ are the same thing mathematically, they aren't from a process-theoretic point of view: the first one is an action performed by Bob followed by an action performed by Alice, while the second one is the opposite.
Also, $g^{ab}$ is even more intriguing at a protocol level, since no one really computes it: no one has access to both $a$ and $b$!
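For concreteness, the exchange above can be sketched in a few lines; a toy sketch with deliberately insecure parameters, just to make the process-theoretic point visible:

```python
# Toy Diffie-Hellman with deliberately insecure parameters.
p = 23   # small prime modulus (insecure, illustration only)
g = 5    # generator of the multiplicative group mod p

a = 6    # Alice's secret exponent
b = 15   # Bob's secret exponent

A = pow(g, a, p)   # Alice publishes g^a
B = pow(g, b, p)   # Bob publishes g^b

# Each party exponentiates the other's public value locally.
# Nobody ever computes g^(ab) "from scratch": nobody holds both a and b.
shared_alice = pow(B, a, p)   # (g^b)^a, computed by Alice
shared_bob = pow(A, b, p)     # (g^a)^b, computed by Bob

assert shared_alice == shared_bob   # same value, different processes
```

The two shared-secret lines are equal as values but are different processes, which is exactly the distinction the observation is about.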
As you can see, we don't need to account for complexity theory explicitly to understand what's going on here. We just need a framework that allows us to express who performs which operation, and who has access to which piece of information.
I'm trying to model these things categorically, doing some tentative work in the context of 2-PROPs. It is likely we'll have to work with a foliation of Set over something, but things are still murky. Ironically, I have a graphical language that works very well to talk about these things, but I'm lacking an underlying category right now. Whoever is interested, just message me!
Ever seen this one https://arxiv.org/abs/1301.3393 ? @Mike Stay pointed it out to me once. I think the one-time pad is the essence of cryptography and I think it's interesting the analogy between entangled qubits as a resource and having shared secret keys as a resource.
Thanks for the write-up! I think I understand where you're going with this and I can definitely try (and ultimately fail) to come up with something useful. Right now I wanna touch on point 1.
Fabrizio Genovese said:
I don't think this approach is right, or particularly insightful. This is for two reasons:
- Computability/complexity are particularly hard to model categorically. In particular, I have yet to find an application of CT to computability/complexity that really makes me say "yes, CT saved the day here".
This is an all too familiar pitfall. Naive categorification has a tendency to hide internals/constructions, which can be especially detrimental when you want to reason about said internals/constructions! It happened to me when I was first considering operational semantics from a CT point of view ~2 years ago: I was interested in fully abstract compilers, which are compilers that satisfy a certain security property, so initially I considered taking CCCs for languages (objects as types, terms as morphisms) and functors for compilers. This was fun and all, gave me some insight, but it was far too general to lead to anything useful. Lesson learned!
Yeah, the Lambek/Scott construction takes β-equivalence classes of terms as morphisms, completely ignoring the process of computation. It's my contention that 1) Kolmogorov or Chaitin (prefix-free) complexity and 2) runtime complexity are related to the 1- and 2-dimensional structures in rewriting systems/operational semantics.
Mike Stay said:
Yeah, the Lambek/Scott construction takes β-equivalence classes of terms as morphisms, completely ignoring the process of computation.
Well said! Thankfully, Turi and Plotkin's mathematical operational semantics (http://homepages.inf.ed.ac.uk/gdp/publications/Math_Op_Sem.pdf) came to my rescue :).
Overall, I'm very happy with the Rosetta Stone paper I wrote with @John Baez, but there is one part of the analogy that bothers me. A Feynman diagram A->B describes a set of particles A scattering into a set of particles B over time. A function A->B can be composed lexically, i.e. in space, with a function B->C, but it does not describe the process of computation.
The shortest implementation of a function A->B can be thought of as a critical path in space, while the fastest algorithm can be thought of as a critical path in time. There should be zeta functions associated with both. John and I touched on the former in Algorithmic Thermodynamics. I speculated on the latter in my thesis, but haven't done anything with it yet.
If I were to apply this hard-earned lesson to Fabrizio's post, I would be reluctant to simply start drawing arrows between Bob and Alice for stuff that they do or do not know. From my understanding, cryptography is mainly about observations (or observables). At least that's what I can infer from point 2. and my impression of cryptography. The study of observables is the focal point of the study of coalgebras. This is more gut feeling than proper argumentation, but I would be keen to look in that direction for cryptography.
Philip Zucker said:
Ever seen this one https://arxiv.org/abs/1301.3393 ? Mike Stay pointed it out to me once. I think the one-time pad is the essence of cryptography and I think it's interesting the analogy between entangled qubits as a resource and having shared secret keys as a resource.
I did not know that paper, thanks. I read it but there are some things I don't agree with. In particular, the paper makes a critical distinction between public and private information. In the framework I have in mind public just means "known by all actors". In particular, actors can be thought of as atoms generating a lattice, and one should be able to consider all possible sets of actors. This is particularly useful if one wants to model stuff like multisigs.
Also, there is no real machinery to manage actors. The line dividing "alice" and "bob" in the paper does not exist. This is more or less fine for simple stuff, but it becomes quickly unmanageable if we have to consider many actors interacting in a protocol.
I really want a framework where actors can be explicitly considered. I think this makes all the difference, since any cryptographic protocol speaks of actors explicitly, to the point that names denoting some cryptographic roles are now universally known (Alice, Bob, Mallory, Viktor, Peggy, ...). I guess the region-based 2-categorical framework presented in the paper can be more or less adapted to this, but I'll have to look into it more.
Stelios Tsampas said:
If I were to apply this hard-earned lesson to Fabrizio's post, I would be reluctant to simply start drawing arrows between Bob and Alice for stuff that they do or do not know. From my understanding, cryptography is mainly about observations (or observables). At least that's what I can infer from point 2. and my impression of cryptography. The study of observables is the focal point of the study of coalgebras. This is more gut feeling than proper argumentation, but I would be keen to look in that direction for cryptography.
So, my idea is not to draw arrows between them. It is more akin to tagging objects with sets of actors, denoting who has access to that information. Then morphisms can either be fully local (they carry information owned by $X$ to information owned by $X$) or not. Cryptography is implementing morphisms carrying information from $A$ to $B$ using only morphisms carrying information from $X$ to $\top$ and from $\top$ to $X$, where $A$, $B$, $X$ denote any sets of actors and $\top$ denotes the top element in the lattice (hence public information).
The way I am shaping things now is with 2-morphisms that allow you to rewrite these tags. Then the idea is that a protocol is secure if you cannot rewrite it in a way such that information tagged with $X$ can be publicly transmitted.
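A minimal sketch of the lattice idea, with actors as atoms and sets of actors as visibility levels (all names and values here are illustrative, not from any existing library):

```python
# Visibility levels as sets of actors, ordered by inclusion.
ACTORS = {"alice", "bob", "carol"}
TOP = frozenset(ACTORS)   # top of the lattice: public information

def join(u, v):
    """Least upper bound: visible to anyone who can see u or v."""
    return frozenset(u) | frozenset(v)

def leq(u, v):
    """u <= v in the lattice: everything u-visible is v-visible."""
    return frozenset(u) <= frozenset(v)

# A resource is a value tagged with a visibility level:
secret = ("0x2a", frozenset({"alice"}))

# "Publishing to bob" raises the tag without touching the value:
published = (secret[0], join(secret[1], {"bob"}))

assert leq(secret[1], published[1])   # visibility only increases
assert published[0] == secret[0]      # underlying value untouched
```

Multisig-style situations are then just tags that are non-singleton sets of actors.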
Philip Zucker said:
Ever seen this one https://arxiv.org/abs/1301.3393 ? Mike Stay pointed it out to me once. I think the one-time pad is the essence of cryptography and I think it's interesting the analogy between entangled qubits as a resource and having shared secret keys as a resource.
Thanks for the shout out! This paper pretty much exists solely to point out that FTL communication using entanglement can't work because the message is encrypted. It uses information-theoretically-secure cryptography, which has little in common with practical cryptography.
So, for instance, denote with $\exp$ the function evaluating $g^a$ for each $a$. Then $\exp_A$ is the local version of $\exp$, where $A$ performs such operation.
The hard-log problem gives you the rewrite in one direction, but not the opposite. This does allow me to rewrite things so that I can prove that Diffie-Hellman reduces to something incredibly similar to the diagram DH reduces to in the paper, but it doesn't allow me to rewrite things so that I can make private information public. If your group sucks, then you also have the opposite direction in the equation above, and you can make $A$'s private information public.
Notice that I am not reasoning about DH in the abstract here: I can perform these reductions with an actual implementation of DH. I think this is important if we want to use graphical languages as a practical tool in cryptography.
Mike Stay said:
Philip Zucker said:
Ever seen this one https://arxiv.org/abs/1301.3393 ? Mike Stay pointed it out to me once. I think the one-time pad is the essence of cryptography and I think it's interesting the analogy between entangled qubits as a resource and having shared secret keys as a resource.
Thanks for the shout out! This paper pretty much exists solely to point out that FTL communication using entanglement can't work because the message is encrypted. It uses information-theoretically-secure cryptography, which has little in common with practical cryptography.
I agree this is not practical cryptography atm, but the idea of using 2-categories is also the conclusion I reached. It would be nice if we were able to work out a categorical framework for cryptography that could actually be of practical use.
Fabrizio Genovese said:
Stelios Tsampas said:
If I were to apply this hard-earned lesson to Fabrizio's post, I would be reluctant to simply start drawing arrows between Bob and Alice for stuff that they do or do not know. From my understanding, cryptography is mainly about observations (or observables). At least that's what I can infer from point 2. and my impression of cryptography. The study of observables is the focal point of the study of coalgebras. This is more gut feeling than proper argumentation, but I would be keen to look in that direction for cryptography.
So, my idea is not to draw arrows between them. It is more akin to tagging objects with sets of actors, denoting who has access to that information. Then morphisms can either be fully local (they carry information owned by $X$ to information owned by $X$) or not. Cryptography is implementing morphisms carrying information from $A$ to $B$ using only morphisms carrying information from $X$ to $\top$ and from $\top$ to $X$, where $A$, $B$, $X$ denote any sets of actors and $\top$ denotes the top element in the lattice (hence public information).
The way I am shaping things now is with 2-morphisms that allow you to rewrite these tags. Then the idea is that a protocol is secure if you cannot rewrite it in a way such that information tagged with $X$ can be publicly transmitted.
That reminds me more of information flow than encryption. Have you looked into information flow and its lattices?
Yes, so what I am trying to do is exactly studying cryptography from an information flow perspective, that is, having a categorical framework where you can prove that a crypto algorithm does not violate your information flow policies.
This seems very natural to me, since this is what cryptography tries to achieve.
Another difference is that I don't want to just distinguish between "levels of security". I want to talk about "levels of security for who". That is, I want to distinguish a secure resource for A from a secure resource for B.
I might be missing something @Fabrizio Genovese, but you are getting into threat modeling here more than cryptography. "I want to talk about "levels of security for who"": even assuming perfect cryptography, this is still difficult both to do and to formalize.
Not exactly. Cryptographic protocols can be seen as the interaction of two things: cryptographic primitives, which are functions/algorithms coming with some guarantees (e.g. being one-way), and a message layer. So, a cryptographic protocol can be roughly described as "a procedure to apply cryptographic primitives to exchanged information, in order to achieve something" (key distribution, secure channels, you name it). What I want to do is model a way to infer properties of the protocol compositionally from the properties of the primitives, over which you want to abstract. That is, for example, you want to give a process-theoretic description of what being one-way means, without having to explicitly talk about complexity. Then, when you use one-way functions in a cryptographic protocol, you can assume they are one-way to prove properties about the protocol.
One-way, a.k.a. not a section?
This, done the right way, can be powerful. For instance, look at Schnorr signatures. You can prove that if you use the same nonce more than once, the scheme is not secure. This is because, if $s_1$ and $s_2$ are signatures with nonces $k_1, k_2$ respectively, for a private key $x$ and hashes $e_1, e_2$ (I'm following Wikipedia notation here, so $s_i = k_i - x e_i$), then when $k_1 = k_2$ you have: $x = (s_1 - s_2)(e_2 - e_1)^{-1}$.
I am pretty sure you can deduce this in a process-theoretic way by noticing that, given the subtraction function, if I know one of the arguments of the domain and the value in the codomain, I also know the other argument of the domain.
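The nonce-reuse leak can be checked numerically; this is a toy sketch with made-up parameters, using the $s = k - xe$ convention from Wikipedia:

```python
# Toy Schnorr nonce-reuse: with s = k - x*e (mod q), reusing the
# nonce k across two signatures leaks the private key x.
q = 101    # toy prime group order (insecure, illustration only)
x = 42     # private key
k = 57     # nonce, reused across both signatures (the bug)

e1, e2 = 11, 73                # two (made-up) hash values
s1 = (k - x * e1) % q
s2 = (k - x * e2) % q

# From s1 - s2 = x*(e2 - e1), anyone can solve for x:
recovered = (s1 - s2) * pow(e2 - e1, -1, q) % q
assert recovered == x
```

Note that the attacker only ever uses published values ($s_1, s_2, e_1, e_2$) plus subtraction and division mod $q$, which is exactly the "subtraction reveals the other argument" point above.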
Stelios Tsampas said:
One-way, a.k.a. not a section?
I don't know what section means in this context. I mean one way as in "G has the hard-log problem".
W.r.t. information flow, Sabelfeld's "Language-Based Information-Flow Security" introduces a "security type system" that ensures well-typed terms respect information flow. Invoking Lambek-Scott (ooooohmmmm), one can hope for a CCC where the security properties are internalized. Again, not sure if useful :/.
Fabrizio Genovese said:
Stelios Tsampas said:
One-way, a.k.a. not a section?
I don't know what section means in this context. I mean one way as in "G has the hard-log problem".
Oh, just not invertible.
Stelios Tsampas said:
Fabrizio Genovese said:
Stelios Tsampas said:
One-way, a.k.a. not a section?
I don't know what section means in this context. I mean one way as in "G has the hard-log problem".
Oh, just not invertible.
Yes, even if I don't like the wording "non-invertible" much, because it means different things in crypto and in other fields of math. It may very well be that a mathematical inverse function exists, but computing it is very hard.
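A small sketch of this point: modular exponentiation does have a mathematical inverse (the discrete logarithm exists), but computing it naively costs time linear in the group order, while the forward direction is fast. Toy parameters, purely illustrative:

```python
# Forward direction: O(log a) multiplications via square-and-multiply.
p, g = 100003, 2   # toy prime and base (insecure, illustration only)
a = 7777
A = pow(g, a, p)

def discrete_log(target, base, modulus):
    """The mathematical inverse exists, but brute force costs O(p)."""
    x, acc = 0, 1
    while acc != target:
        acc = acc * base % modulus
        x += 1
    return x

dlog = discrete_log(A, g, p)
assert pow(g, dlog, p) == A   # an exponent producing A was recovered
```

At real-world sizes (2048-bit moduli) the forward direction stays cheap while this brute-force inverse becomes astronomically expensive, which is the crypto sense of "non-invertible".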
So, if we can reason in this setting, we can give security arguments as long as the implementation of the primitives and the messaging layer used on an actual computer is good (verifying which requires operational semantics or similar stuff, which I'd consider outside the scope of this idea).
But, literally, if you read the description of any cryptographic protocol it goes like this: "A takes this, then does this, then sends it to B; B does this, then sends it back to A, which in turn does this and that, and that's it". It really feels to me like a sequence of sending, receiving, applying, evaluating, and as such there has to be a compositional way to express it.
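That reading suggests a protocol really is just data: a sequence of "who does what" steps. A hedged sketch (all names are mine, purely illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Compute:
    actor: str
    operation: str       # name of the primitive applied locally

@dataclass(frozen=True)
class Send:
    sender: str
    receiver: str        # "public" = top of the actor lattice
    payload: str

# Diffie-Hellman as a sequence of steps:
diffie_hellman = [
    Compute("alice", "g^a"), Send("alice", "public", "g^a"),
    Compute("bob", "g^b"),   Send("bob", "public", "g^b"),
    Compute("alice", "(g^b)^a"),
    Compute("bob", "(g^a)^b"),
]

# Sequential composition of protocols is list concatenation, and we
# can ask simple information-flow questions of the description:
made_public = [s.payload for s in diffie_hellman
               if isinstance(s, Send) and s.receiver == "public"]
assert made_public == ["g^a", "g^b"]
```

Nothing here is semantic yet: the point is only that the "A does this, then sends it to B" reading composes by concatenation.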
I hear you. So, those primitives you speak of, they're basically a set of operators of a certain arity, right?
And you combine them to produce encrypted messages or decrypt encrypted messages?
Hah, that's a tongue twister!
Precisely
But obviously this has no real utility if you don't also track the information flow at the same time
If you scroll up, I made this example with Diffie-Hellman, where $(g^a)^b$ and $(g^b)^a$ are the same thing mathematically but are interpreted differently in terms of information flow
Then maybe it's worth thinking about an algebraic (possibly mixed coalgebraic) approach?
I don't know much about coalgebra, but I'm happy to learn
My approach is definitely algebraic, I'm trying to put together the category of sets and a lattice of actors in various ways right now (string diagrams, functors, products, double categories, ...)
But I'm still at that stage where everything I throw at the problem bounces back in my face >.<
Yeah I am also interested in coalgebra if anybody has resources.
Giorgos Bakirtzis said:
Yeah I am also interested in coalgebra if anybody has resources.
That's the best book I can recommend: http://www.cs.ru.nl/B.Jacobs/CLG/JacobsCoalgebraIntro.pdf. It becomes progressively harder and you have to be patient with it, but I consider it an absolutely essential read. And it is not just about coalgebras, but also about monads, comonads, liftings, etc.
So, before I dive into this with the risk of disappearing forever, why do you think coalgebra could be useful in this setting?
As my knowledge on the subject of cryptography is limited, it's mostly a hunch: what you have been suggesting is that there are two observable effects/behaviors: messages and their computation time. So one can imagine transition systems, or coalgebras for a certain functor, that, given a string to encrypt/decrypt, produce both a symbol (or smth similar) and a "computation time taken" at each transition step. The state space of this coalgebra can be the initial algebra of the algebraic operation (signature) functor, i.e. the set of sequences of cryptographic operations.
Via finality on the behavior functor, this induces an equivalence relation on the set of sequences of cryptographic operations called bisimilarity, which captures the kind of behavioral equivalence you are looking for: two sequences are bisimilar, or observationally equivalent, if, for each string, they produce the same string in the same amount of time.
Done naively, this bisimilarity might be stronger than you want, in that it would require the sequences of operations to compute the same character in the same time at each given step, so you have to be smarter about it. But that's a common problem in such scenarios.
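Here is one very naive, purely illustrative way to make the trace/bisimilarity idea concrete; the fine-grained equivalence it computes is exactly the "too strong" one mentioned above:

```python
# States are sequences of (operation, cost) pairs; the behaviour of a
# state on an input string is its trace of (symbol, time) observations.
def run(ops, message):
    """Unfold the 'coalgebra': one (symbol, cost) observation per step."""
    trace = []
    for ch in message:
        for op, cost in ops:
            ch = op(ch)
            trace.append((ch, cost))
    return trace

def observationally_equal(ops1, ops2, tests):
    """Trace equivalence on a family of test inputs."""
    return all(run(ops1, m) == run(ops2, m) for m in tests)

# Two implementations of a Caesar shift by 3 over lowercase letters:
shift_by_3 = [(lambda c: chr((ord(c) - 97 + 3) % 26 + 97), 1)]
three_shifts_by_1 = [(lambda c: chr((ord(c) - 97 + 1) % 26 + 97), 1)] * 3

# Same input/output function, but different traces, hence *not*
# observationally equal in this (deliberately fine-grained) sense:
assert run(shift_by_3, "a")[-1] == ("d", 1)
assert run(three_shifts_by_1, "a")[-1] == ("d", 1)
assert not observationally_equal(shift_by_3, three_shifts_by_1, ["abc"])
```

Coarsening the equivalence (e.g. comparing only final outputs and total time) is the "being smarter about it" step.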
Sadly the above are hard to grasp if you know little about coalgebras :/.
"The state space of this coalgebra can be the initial algebra of the algebraic operation (signature) functor, i.e. a sequence of cryptographic operations." Can you expand on this?
I am not sure I understand it fully (mainly due to me not being familiar with coalgebra), but doesn't it feel a bit too low-level?
Yes. The algebraic signature of crypto operations induces an endofunctor $\Sigma \colon \mathbf{Set} \to \mathbf{Set}$. It has an initial algebra $\mu\Sigma$, where $\mu\Sigma$ is the set of sequences of operations. For a given behavior functor $B$, whose behavior is that of strings and a computation time, you want to have a coalgebra $\mu\Sigma \to B(\mu\Sigma)$ that specifies how a sequence of crypto operations generates encrypted/decrypted strings.
If there is any sense in the world (I'm joking, this is just another hunch), you probably want this coalgebra to be inductively defined on the structure induced by the signature $\Sigma$. Which brings you to bialgebras and Turi's Mathematical Operational Semantics, which is a flavor of semantics done categorically.
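A sketch of the initial-algebra side: terms over a hypothetical signature of crypto operations, with no semantics attached yet (the operation names are made up for illustration):

```python
# Terms freely generated by a (made-up) signature: no semantics, just
# a record of which operations are applied, in which order.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Exp:          # Exp(base, exponent) stands for base^exponent
    base: object
    exponent: object

# An element of the initial algebra: the term (g^a)^b
term = Exp(Exp(Var("g"), Var("a")), Var("b"))
assert term.base == Exp(Var("g"), Var("a"))   # structural, not numeric
```

Equality here is purely structural: $(g^a)^b$ and $(g^b)^a$ are different terms, even though any semantics in a commutative group would identify them; that identification is exactly the coalgebra/semantics' job.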
So, one problem that I see with this is that some crypto operations do not really lift to endofunctors, right? For instance, the operation $\mathbb{N} \times G \to G$ evaluating $(a, g)$ to $g^a$ makes sense cryptographically only if $G$ is a group, $\mathbb{N}$ is the natural numbers, and $G$ has the hard-log problem
This is pretty low level, but if you're looking to apply category theory to get something useful out of, this might not be a bad thing ;).
Oh wait, now I understand the initial algebra step, I guess
So yes, $\mu\Sigma$ is basically free if you wish, and just keeps track of which operations are being carried out, without endowing them with any semantic meaning
Yeah, well-said! It's freely generated.
Up to this point, this makes sense. I am not sure about $B$. Why do you care about computation time? I'd say it is more important to keep track of who performs what operation...
I thought it was you who cared about computation time :P.
D:
The immediate consequence is that it's not enough for two sequences of crypto operations to do the same thing mathematically; they must also do it in the same time...
...to be considered observationally equivalent, that is.
Yes, so: Let's substitute "time" with "agent". I think this goes more in the direction of what I am imagining, but on top of that now you want some sort of reduction rules
Saying "it's not the same" is not enough; what you want is a "directional thing", I believe. That is, you want to say "doing this thing → doing this other thing"
Yeah, the reduction rules are basically your "semantics". You just need to give a distributive law of (the free monad of) $\Sigma$ over (the cofree comonad of) $B$.
You want this because you want to implement some sort of procedure to do reductions, so that you can rewrite stuff and prove things. For instance, you can prove that Diffie-Hellman is correct by showing that the sequence of operations specified by the protocol reduces to something that is akin to "calculate this secret, copy it, send it to A and B"
Fabrizio Genovese said:
Saying "it's not the same" is not enough; what you want is a "directional thing", I believe. That is, you want to say "doing this thing → doing this other thing"
Oh wait, I misunderstood what you said.
It's funny because I have this very fact more or less proven on my blackboard using a graphical calculus, but I have no clue of where this calculus lives. xD
Can you give me a simple example of a reduction?
I want to make sure we're on the same page.
The most basic reduction I use is the following, for $A$ and $B$ in my lattice of actors:
Now clearly this is written in my handwavy notation that makes no sense, so let me explain:
On the left it says: "A owns X, it publishes it to B and then B applies f"
On the right it says "A owns X, it applies f and publishes it to B"
The $\trianglelefteq$ can be interpreted as "it is less secure than".
"Publishes to B" may seem convoluted, but if B is the top element of your lattice, this just means "A makes X public"
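An attempt to typeset the reduction in LaTeX. The names $p^{A \to B}$ for publication and $f_A$, $f_B$ for $f$ performed locally by an actor are my guesses at the handwavy notation, not established ones:

```latex
% Left: A owns X, publishes it to B, then B applies f.
% Right: A applies f locally, then publishes the result to B.
% The relation \trianglelefteq reads "is less secure than".
p^{A \to B}_{X} \,;\, f_{B}
  \;\trianglelefteq\;
f_{A} \,;\, p^{A \to B}_{Y}
\qquad (f \colon X \to Y)
```

On the left, $B$ sees the raw $X$ before $f$ is applied; on the right, $B$ only ever sees $f$'s output, which is why the left side is the less secure one.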
So, I require this to hold for any $f$. If $f$ is non-invertible in a cryptographic sense, then I know that the $\trianglelefteq$ is never an equality, and really it's a $\triangleleft$.
You may also see that $f$ on the left and $f$ on the right do not have matching types, which is absurd. This is because atm I am using actors to annotate objects to denote ownership of resources, which I don't know how to capture precisely categorically. One consequence of this is that you may have $X_A$ and $X_B$ for the same $X$, denoting different actors having access to the same type of resource (e.g. some elliptic group)
The fact that the publishing morphism's codomain is different than its domain is very bizarre, wow.
Hehe, see last comment. The idea there is that in "publishing something" you are not changing the underlying value: you are not touching the underlying resource of type $X$, you are just giving it to someone else.
Alright, lemme digest this...
So really the neat way to see this is that what I call $f$ and what I call $p$ actually interact on different levels. $f$ is an action "performed locally", if you want: it's whatever the agent does on its own pc. $p$ here acts at the message level, so:
$f$: computation non-trivial, at the message level nothing happens
$p$: computation trivial, at the message level something happens
By the way, why does the assertion hold? Is it provable or is it an arbitrary statement? In other words, why is this less secure exactly?
I am not sure that "less secure" is the right interpretation, but the idea is: if I give you $a$ and you know $g$, you can compute $g^a$ yourself. If I give you $g^a$ and you know $g$, you can't infer $a$. At least if $G$ is a decent elliptic group.
Aha, OK!
So the $\trianglelefteq$ is justified in the sense that, on the right, $B$ doesn't have access to the element of $X$ you applied $f$ to.
Funny thing: if you have access to an inverse $f^{-1}$, then you can invert this:
Fabrizio Genovese said:
I am not sure that "less secure" is the right interpretation, but the idea is: if I give you $a$ and you know $g$, you can compute $g^a$ yourself. If I give you $g^a$ and you know $g$, you can't infer $a$. At least if $G$ is a decent elliptic group.
So $a$ is the secret and $g^a$ is the public key?
So you see, you can slide the publishing thing left-to-right, but if you can invert $f$ then you can cheat and, by sliding, get the identity on what you do before publishing
This definitely seems to me like a 2-category, with the $\trianglelefteq$s as the 2-cells. And the 2-category is roughly the product of your poset of visibilities and the category where the computations take place. But normally in a product of two categories the morphisms from one category commute with the morphisms from the other, whereas in this case they don't necessarily commute, but there do exist 2-cells corresponding to your "most basic reduction".
It would be nice to have a neat 2-categorical criterion for a 2-category to fit this description, but I haven't come up with anything.
That's exactly my first approach. I tried to use products and the problem is exactly the one you point out, you can basically "slide ownership around too much."
Today I tried with Double categories, trying to differentiate computation/messaging as horizontal/vertical morphisms, but it doesn't seem to work either
By the way, the coalgebra I was talking about would take something like "op1 op2 op3..." and, given a string, produce a trace like $(\sigma_1, t_1), (\sigma_2, t_2), \ldots$, pairing an output symbol with a computation time at each step.
Meaning that the semantics, and the coalgebra, determine how computations are made. Now, for your preorder, maybe the hom-sets of the Kleisli category of the behavior functor (assuming $B$ is also a monad) are order-enriched?
Fabrizio Genovese said:
Today I tried with Double categories, trying to differentiate computation/messaging as horizontal/vertical morphisms, but it doesn't seem to work either
Yeah, I was thinking that too but you seem to want to be able to compose the vertical and horizontal morphisms in a way that you can't in double categories (if I understand the definition right).
It does seem like there are some nice things going on in the 2-category approach. Like, if $\mathcal{D}$ is the 2-category I described above, then I think there is an oplax functor from the product category to $\mathcal{D}$ which is the identity on 0-cells and takes a 1-cell in the product category to the 1-cell in $\mathcal{D}$ that first does the computation and then reduces the privacy.
So, wait. is a 2-category having:
That's what I was thinking
While is defined as:
Yeah, I don't really have a precise definition of it. But certainly given a morphism $f$ there'll be a corresponding 1-cell, and given $A \leq B$ in the poset there'll be a 1-cell (what you called $p$). And then you can also compose these 1-cells; maybe we want to say that every 1-cell can be obtained by composing these basic 1-cells. Maybe we even want to say that a 1-cell is freely generated by these basic 1-cells in some sensible way.
And you have a unique 2-cell between two 1-cells $g$, $h$ iff $g$ is "less secure than" $h$. E.g. in your example there's a unique 2-cell corresponding to your basic reduction.
That sounds exactly like the lax Gray product $P \otimes \mathcal{C}$, where the poset $P$ and the computation category $\mathcal{C}$ are both seen as 2-categories.
When applied to (1-)categories, it produces exactly the 2-category whose 1-cells are generated by those of the form $(f, \mathrm{id})$ and those of the form $(\mathrm{id}, g)$, for $f$ in one factor and $g$ in the other, while the only non-trivial 2-generators are the "interchangers" $(f, \mathrm{id});(\mathrm{id}, g) \Rightarrow (\mathrm{id}, g);(f, \mathrm{id})$.
This may actually work. I will run some tests tomorrow :)
I have no idea how lax Gray products interact with monoidal structures on either factor, though!
Yes, that's the other thing. If this works, then we have to slap a tensor on it somehow.
I guess that's why I didn't think of it immediately when we were chatting the other day -- I've been working on lax Gray products for years but never mixed up with SMCs :)
Amar Hadzihasanovic said:
That sounds exactly like the lax Gray product $P \otimes \mathcal{C}$, where the poset $P$ and the computation category $\mathcal{C}$ are both seen as 2-categories.
Ooh that's awesome!
I've thought a bit about a nice way of getting a calculus like Fabrizio has in mind, I will share it.
The main step is that, instead of having a “monoidal 2-category” (3-category with one 0-cell) we work with a 3-category, that is, with many 0-cells. In string diagrams this is “coloured regions of the plane”.
The 0-cells correspond to the different visibility levels $U$. An endo-1-cell of $U$ corresponds to a system with visibility $U$. But we also have 1-cells $U \to V$ when $U \leq V$. In diagrams, this is like a boundary between the $U$-coloured region and the $V$-coloured region. "Passing through the boundary" corresponds to changing the visibility of a system.
So, we start with a poset $P$ of agents/visibility levels, and with a strict monoidal category $\mathcal{C}$ corresponding to our computations. If we treat $\mathcal{C}$ as a 2-category with a single 0-cell, what we build will "contain" (admit a functor from) the lax Gray product $P \otimes \mathcal{C}$, but also more stuff.
I will call it for now...
At the level of 0-cells and 1-cells, this is exactly like the lax Gray product of $P$ and $\mathcal{C}$:
I'm following up to here. What about 2-cells? (Yesterday I took a look at the Gray product page on the nLab but it was quite scary. Is there an intuitive way to understand what it does?)
The twist is in the 2-cells:
The idea is that, for every chain $U_1 \leq \dots \leq U_n$ in $P$, every morphism/2-cell $f$ in $\mathcal{C}$, and every way of splitting the inputs and outputs of $f$ into $n$ possibly empty segments, we have a 2-cell, which I will still call $f$ with notational overload.
So in the new thing I have a version of $f$ for every way of giving different, increasing visibility levels to its inputs and outputs.
I don't understand the notation for $f$ in the last equation, but what you say seems to be exactly what we want
I am using comma for “horizontal composition”...
Oh, wait. Yes, I forgot that now denotes a 1-cell
(Later I will post some pictures)
I use the notational overload to also identify versions of $f$ which differ in trivial ways
Like, having $U \leq U$ in the chain should "do nothing"
Yes
So I'm following what you say, but I am in that unfortunate position of not understanding the underlying categorical machinery. Is there something I can do to fix that?
Wait, here come the pictures...
As I said, I tried to look up the Gray product on the nLab but it didn't give me any real intuition for this :slight_smile:
image-517f8372-f44a-4686-8d2f-def5275b5281.jpg
This is how $f$ looks, for example
image-82a58107-5414-44e4-b821-0eeb8a2188ca.jpg
In particular we have these cells corresponding to identities in $\mathcal{C}$...
Oh and I also want to have this relation:
Yes, these should be the one we use to say "A passes a resource to B"
Well here I am thinking of visibility levels, so yeah, it should be "visibility is increased from $U$ to $V$" or "decreased from $V$ to $U$"
image-6fb78ae5-0466-45a8-910a-5a8217d0926f.jpg
Anyway this should also hold
(And the obvious version generalised to longer chains)
(And their horizontal-reflections duals)
So finally, I think your “less secure than” relations are pretty much all captured by this, and its “multi-colour” version:
image-168db302-a1ce-437d-bcab-fa7b97d1f08a.jpg
Vertical composition of 2-cells is as in $\mathcal{C}$, and that I think should conclude the description...
This seems to be exactly what we want!
This is how you derive your main axiom:
(I think it's coming but Zulip doesn't seem to give any visual clue as to whether it's actually uploading pictures... ;) )
I'll try again
Lol, no worries
Boom! This is actually great.
So, question: In my formalism, in the beginning, I used coloured wires to denote agents/levels. Since wires seem to live in regions, could we avoid depicting the regions and colour the wires instead?
I'm basically asking if, doing so, we get an equivalent calculus. It would make drawing diagrams easier, most likely
I think it seems plausible, yes... because is a poset, if anything.
(Time for lunch...)
Anyway you should look at my thesis, chapter 2, if you want to see more of lax Gray products in diagrams.
I will do! Lunch for me as well. Later this evening I'll try to play with this and see if I can prove stuff about crypto protocols. I think we're on the right track here.
One thing that may be problematic in these semantics is that the planarity does not allow some things that we may plausibly do, for example having operations with outputs of incomparable visibilities (in whatever diagram we form, the outputs must come in a linear order of visibility).
Mhhh, I think we have a new problem
Lol, I was about to write this
Perhaps the right framework would be something like “monoidal 2-categories” where you can have overlapping regions, like in this paper:
https://arxiv.org/abs/1609.07775
(In this case, in fact a monoidal 3-category...)
So, let me give an example. You'd like to do (I'm again using my horrible notation):
This is computing , publishing it, and using it to compute
But now we also know that . So the thing above should reduce to:
Actually let me take a picture, it is easier.
The horizontal thing denotes publication, or change of level if you want
Where I write I mean , because no one has access to that information, unless they both have access to the stuff both and see
This way of doing things may be totally wrong, I am open to alternatives as long as they make sense cryptographically :slight_smile:
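As a reader's aside: the Diffie–Hellman step being discussed here can be sketched in a few lines, with deliberately toy (insecure) parameters. The point is not the cryptography but the process-theoretic symmetry: Alice's computation (g^y)^x and Bob's computation (g^x)^y yield the same value even though they are different processes.

```python
# Toy Diffie-Hellman sketch (parameters are NOT secure, illustration only).
p, g = 23, 5        # small prime modulus and generator
x, y = 6, 15        # Alice's and Bob's private exponents

gx = pow(g, x, p)   # Alice computes g^x and publishes it
gy = pow(g, y, p)   # Bob computes g^y and publishes it

shared_alice = pow(gy, x, p)  # Alice computes (g^y)^x
shared_bob = pow(gx, y, p)    # Bob computes (g^x)^y

# Same value, reached by two different processes; no single agent
# ever computes g^(xy) "directly" from x and y together.
assert shared_alice == shared_bob == 2
```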
I think this is your thing in diagrams for monoidal 2-categories...
With branching surfaces.
I am genuinely amazed :D So is this an equation that we have to impose?
Or the fact that is enough to have this?
Fabrizio Genovese said:
Or the fact that is enough to have this?
I think it is enough, yes. I'm not as used to this monoidal/branching stuff so I would have to think carefully of how things are “presented” now, but I believe it would follow from the appropriate version of the axioms I was considering earlier...
One can always “make things less visible” without losing any security, so in fact we can “carry everything to the left” to visibility and apply the equation there; we never have to make inputs less secure, which would introduce inequalities, at any point.
(Oh, I wrote + mistakenly in the diagram, but of course I meant multiplication).
True that
I am also not used to the surface diagrams business
In particular we'll start dealing pretty quickly with complex multiagent stuff
I have no clue how complicated the surfaces will become. Maybe we should start using homotopy.io to draw this stuff?
Or we can prove that there is an equivalent calculus using coloured wires like you do, and have the surface picture as the more direct representation of the algebraic/categorical framework.
Yes, that would be optimal
Jamie Vicary and I treated DH in our paper:
https://arxiv.org/abs/1301.3393
Because we were dealing with an analogy w/ quantum systems, the crypto had to be information-theoretically secure, so what's drawn is a variant where you only have black-box access to an exponentiator D, not the "public" info g, gˣ, or gʸ.
Screen-Shot-2020-03-26-at-12.19.04-PM.png
Yes, we discussed this a few days ago if you recall. But one key difference is that you don't have a clear separation between different agents, only between public and private, whereas in the model we are thinking about here "public" just means "known to every agent"
Each wire is a different agent. The purple "public" regions are just places where there are lots of agents to interact with that share that information:
Screen-Shot-2020-03-26-at-12.41.40-PM.png Screen-Shot-2020-03-26-at-12.41.48-PM.png Screen-Shot-2020-03-26-at-12.41.52-PM.png
But I agree that there's plenty of room for innovation here, labeled agents, etc. And I like the idea of separate surfaces.
Yes, but you don't really give a way to turn "many agents" into "public". Here "public" and "agent" are really very different things, dimensionally different even :D
But yes, reading the paper many things coincide with what I worked out on my own before our graphical calculus had any categorical foundation
For instance, the correctness of DH looks very similar to yours: You want to push all the wires on one side, apply some properties, and end up with the whole operation performed once + a copy morphism
A big difference is that because public and agents for us are all wires, though, we cannot really just have equalities, but implications
You want to be able to reduce things only in one direction
@Amar Hadzihasanovic , I think I have the complete depiction of Diffie-Hellman
Do these surfaces make sense in our formalism? On the left we have the protocol, what both agents do and how they exchange information. On the right I have the reduction. To "slide wires through" I need that sort of "cupola within a cylinder" thing, which took 1hr to draw xD
There is some ambiguity with the dots, but I hope it is more or less clear what's going on....
I'll think about it tomorrow...
hi :wave: I got on the zulip ship, didn't have much time before to do this. This weekend I'm going to catch up on the discussion here and then going to try a swap protocol.
Hey @Amar Hadzihasanovic , I don't know if this helps wrt splitting the surface, but if our actors/levels are arranged in a lattice, then and define monoidal structures on it (and even a bimonoidal structure if the lattice is distributive). In particular, we get that , which should allow us to split regions in as many copies as we want to. I am not familiar with the Gray product (even if I'm trying to read your thesis right now), so I don't know if this is enough to ensure that the diagram above exists.
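As a reader's aside: the lattice remark above can be checked concretely under one hypothetical choice of lattice, namely sets of agents ordered by inclusion, with join as union and meet as intersection. This lattice is distributive, so both operations give monoid structures and the distributivity law holds, matching the "bimonoidal" claim. The choice of carrier is my own, not from the thread.

```python
# Hypothetical lattice of visibility levels: subsets of a set of agents.
from itertools import product

agents = ("Alice", "Bob", "Carol")
# All 8 subsets of the 3 agents, as frozensets.
levels = [frozenset(a for a, keep in zip(agents, bits) if keep)
          for bits in product((False, True), repeat=3)]

join = frozenset.union          # least upper bound
meet = frozenset.intersection   # greatest lower bound

for a, b, c in product(levels, repeat=3):
    # Both operations are associative: two monoidal structures on the poset.
    assert join(join(a, b), c) == join(a, join(b, c))
    assert meet(meet(a, b), c) == meet(a, meet(b, c))
    # Distributivity, which makes the structure bimonoidal:
    assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
```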
@Fabrizio Genovese did this work lead up somewhere? I was just thinking about how expressing cryptographic protocols would look like with diagrams and found this old thread.
Unfortunately, this work has been buried by the mists of time. Maybe one day...
This looks interesting. They claim to have made a sort of ur-homomorphic-cryptosystem using Yoneda objects.
https://arxiv.org/abs/2401.13255
they even have software!
https://github.com/remytuyeras/aces