Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: deprecated: logic

Topic: Exponential Modalities in Linear Logic / Type Theory


view this post on Zulip Andrew Cann (Jul 12 2020 at 13:18):

Hi people. I've got a question about linear logic / type theory:

Linear logic defines two "exponential" type constructors, the "of course" modality !P and the "why not" modality ?P. These can sort-of be defined as:

!P ≅ 1 & P & (P ⊗ P) & (P ⊗ P ⊗ P) & ...
?P ≅ ⊥ + P + (P ⅋ P) + (P ⅋ P ⅋ P) + ...

Except that each (P ⊗ P ⊗ ... ) and (P ⅋ P ⅋ ... ) is unordered. These have some nice properties where they convert the additive connectives to the multiplicative connectives, hence the term "exponential":

!⊤ = 1
!(A & B) = !A ⊗ !B

?0 = ⊥
?(A + B) = ?A ⅋ ?B

It seems though that you could define two other type constructors, let's denote them ¡P and ¿P, such that:

¡P ≅ 1 + P + (P ⊗ P) + (P ⊗ P ⊗ P) + ...
¿P ≅ ⊥ & P & (P ⅋ P) & (P ⅋ P ⅋ P) & ...

These would also have exponential-ish properties, similar to those for !P and ?P:

¡0 = 1
¡(A + B) = ¡A ⊗ ¡B

¿⊤ = ⊥
¿(A & B) = ¿A ⅋ ¿B

¡P can be thought of as a multiset/bag of values of type P; it's also a monad. I'm not sure how to think of ¿P yet but it's definitely a comonad. Since ¡1 is isomorphic to Nat, if you have type-dependency, this could give you a way to define inductive types.
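A rough sketch of the multiset-monad reading (my illustration, not from the thread): model a bag as a `collections.Counter`, whose equality already ignores order. `unit` and `join` below are the monad structure, and the result of `join` is independent of the order in which the inner bags are presented — which is the commutativity being claimed.

```python
from collections import Counter

def unit(x):
    """P -> ¡P : the one-element bag."""
    return Counter([x])

def join(bags):
    """¡¡P -> ¡P : multiset union.  The outer bag is presented here as a
    list, but the result doesn't depend on that list's order -- exactly
    the commutativity that makes ¡ a commutative monad."""
    out = Counter()
    for b in bags:
        out.update(b)
    return out
```

A bag of units, e.g. `Counter({(): 3})`, is determined by its size alone, which is the ¡1 ≅ Nat observation.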

So I'm wondering why I've never seen these mentioned at all in the literature. Have I just not come across them? Or do they just not turn out to be very interesting on further inspection?

view this post on Zulip Eduardo Ochs (Jul 12 2020 at 16:37):

@Andrew Cann, take a look at page 5 here...
http://www.math.mcgill.ca/rags/fock/fock.pdf

view this post on Zulip Eduardo Ochs (Jul 12 2020 at 17:29):

Ooops, sorry! I was in a hurry and I didn't see that you were asking about ¡P and ¿P...

view this post on Zulip Valeria de Paiva (Jul 12 2020 at 18:04):

Andrew Cann said:

So I'm wondering why I've never seen these mentioned at all in the literature. Have I just not come across them? Or do they just not turn out to be very interesting on further inspection?

I have not seen any investigation of these exponentials you described, but I'm sure you've heard that there are too many possibilities of exponentials, so this is not surprising. Exponentials are not uniquely defined at all. but I cannot see the rationale for these exponentials myself. do you have one, apart from trying out symmetry?

view this post on Zulip JS PL (he/him) (Jul 13 2020 at 06:49):

So in the category of sets and relations, REL, since & = + and ⊗ = ⅋, it follows that ¡P = !P and ¿P = ?P.

view this post on Zulip JS PL (he/him) (Jul 13 2020 at 06:53):

The construction you provided of !P often (not always!) results in the cofree cocommutative comonoid construction for ⊗. So the same is true for ¡P. In general, ¡P is the free commutative monoid construction. For example, in the category of vector spaces, ¡V is better known as the free symmetric algebra Sym(V) over V.

view this post on Zulip JS PL (he/him) (Jul 13 2020 at 06:55):

Same for the ¿P construction: this should be the general construction of the cofree cocommutative comonoid for ⅋.

view this post on Zulip JS PL (he/him) (Jul 13 2020 at 07:00):

So ¡ and ¿ provide exponentials on the opposite category.

view this post on Zulip Andrew Cann (Jul 13 2020 at 12:20):

@Eduardo Ochs

¡ and ¿ are just names I made up. If they already have a common meaning that's just an unsurprising coincidence. The paper you linked me seems to be talking about the same thing but it is a bit over my head :grimacing:

@Valeria de Paiva

I have not seen any investigation of these exponentials you described, but I'm sure you've heard that there are too many possibilities of exponentials, so this is not surprising.

I'm aware that there's lots of different ways to define the usual ! and ? exponentials, and that there's a lattice of exponentials generated by ! and ?. The exponentials I'm talking about are a bit different from ! and ? though, since they convert between the additive and multiplicative connectives differently. eg:

!⊤ = 1
?0 = ⊥
¡0 = 1
¿⊤ = ⊥

In terms of the above, there seem to be only four possible exponential operators. (Or maybe it's more correct to say that there are four different families of exponential operators, though there might be multiple ways to instantiate each family in any given model of linear logic?) Or have I misunderstood your comment?

but I cannot see the rationale for these exponentials myself. do you have one, apart from trying out symmetry?

Partly just trying out symmetry. Mainly though because I'm interested in programming language design and ¡P seems like it could be practically useful. Having unordered multisets as a primitive type would mean having commutativity built into the language, which would be good for modelling race-free concurrency. For instance, you could allow a bunch of threads to atomically increment some global counter while still having the language enforce race-freedom by construction. Since W types can be defined in terms of Nat it also gives a way to build recursive/inductive types without having to axiomatize them.
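The atomic-counter intuition can be sketched directly (my example, not from the thread): many threads each apply a *commutative* update (+1) to a shared counter, so the final value depends only on the multiset of increments performed, never on the order the scheduler ran them in.

```python
import threading

counter = 0
lock = threading.Lock()

def bump():
    global counter
    with lock:          # the lock gives atomicity; commutativity of +1
        counter += 1    # gives order-independence of the final result

threads = [threading.Thread(target=bump) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whatever interleaving occurs, `counter` ends at 100 — a determinate result from a nondeterministic schedule, which is the race-freedom property being described.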

@JS Pacaud Lemay

Hmmm. I admit I don't know how to make use of that information. I'm not actually very well-versed on category theory, despite being on this zulip. I'm only here because I don't know where else to ask people these kinds of questions about logic and type theory :confused:. Do you have any suggestions?

view this post on Zulip Andrew Cann (Jul 13 2020 at 12:23):

Right now the main thing I'm interested in is how to define the introduction rules for ¿-types, since that would also tell me how to define the eliminator for ¡. Though if this whole avenue of investigation is likely to be a dead end that would also be useful to know.

view this post on Zulip Andrew Cann (Jul 13 2020 at 12:23):

Thanks for the replies by the way everyone! :D

view this post on Zulip JS PL (he/him) (Jul 13 2020 at 13:37):

Ah gotcha! I admit I'm a category theorist who happens to work with categorical semantics of (differential) linear logic, so my linear logic/programming perspective isn't great!
But whenever I have a linear logic question I just email Robert Seely. :sweat_smile:

view this post on Zulip JS PL (he/him) (Jul 13 2020 at 13:41):

If I had to take a stab at it, I believe the rules for ¡ and ¿ would be the dual rules for ! and ?.

¡P ⊦ Q          ¡P ⊦ Q       ¡P ⊦ Q       P ⊦ ¡Q
----------      -------      -------      --------
¡P, ¡P ⊦ Q      ⊦ Q          P ⊦ Q        ¡P ⊦ ¡Q

P ⊦ ¿Q          P ⊦ ¿Q       P ⊦ ¿Q       ¿P ⊦ Q
-----------     -------      -------      --------
P ⊦ ¿Q, ¿Q      P ⊦          P ⊦ Q        ¿P ⊦ ¿Q

give or take some extra context Γ. I derived these from the categorical semantics of ¡ and ¿.

view this post on Zulip Andrew Cann (Jul 13 2020 at 14:17):

Thanks! Those are the rules I had except I hadn't figured out the two on the right yet. They look correct though, so thank you very much for helping me :). Unfortunately though that confirms a fear I had - the elimination rule for ¡P only allows you to eliminate into another ¡Q, i.e. once you're in the ¡-monad there's no way to "get out" again. As far as I can tell this makes it impossible to define the usual eliminator for Nat (if Nat is defined to be ¡1) to define, for instance, an is_even function is_even : Nat -> 1 + 1.
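The eliminator being asked for can be sketched concretely (my illustration, not derivable from the weak ¿-rule above): fold a bag of units with negation/xor. Since xor is commutative and associative, the answer cannot depend on any ordering of the bag — yet the rule `¿P ⊦ Q / ¿P ⊦ ¿Q` never lets the result land in plain 1 + 1.

```python
from collections import Counter
from functools import reduce

def is_even(bag_of_units):
    """¡1 -> 1+1, modelled as a Counter of () -> bool.
    Flipping a boolean once per element is an order-independent fold."""
    n = bag_of_units[()]
    return reduce(lambda acc, _: not acc, range(n), True)
```

For example, `is_even(Counter({(): 4}))` is true and `is_even(Counter({(): 3}))` is false, without ever inspecting an ordering.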

I have the dual problem with !-types actually. A language like Rust allows you to take some linear piece of data and temporarily "lend it out" through a copyable reference. The references have a lifetime attached to their type which allows the type-checker to ensure that all the references have been released before you re-take unique ownership of the data. With the rules of linear logic though a term of type !P can only be constructed using other terms of type !Q, so there doesn't seem to be any way to model this. That is, there's no way to "get into" the !-comonad - you have to start there.

I don't suppose you're aware of any work-arounds for this? Are there any generalisations of linear logic (eg. with an extra axiom) that allow you to take some non-! variable A, prove that you can use A to generate infinite copies of B, then use that to construct an A ⅋ !B? Or is this already possible using some trickery I'm unaware of?

view this post on Zulip JS PL (he/him) (Jul 13 2020 at 14:40):

So in differential linear logic, you have the following extra rules for ! called codereliction, cocontraction, and coweakening (so the first three rules of ¡ I wrote above):

!P ⊦ Q       !P, !P ⊦ Q      !P ⊦ Q
-------      -----------     -------
P ⊦ Q        !P ⊦ Q          ⊦ Q

and similarly for ? we get:

P ⊦ ?Q          P ⊦ ?Q       P ⊦ ?Q
-----------     -------      -------
P ⊦ ?Q, ?Q      P ⊦          P ⊦ Q

These extra axioms might help do what you want to construct. Then you can ask for the dual rules for ¡ and ¿, and hopefully you get what you want. (There are lots of models where all these axioms hold.)

view this post on Zulip Dan Doel (Jul 13 2020 at 14:41):

@Andrew Cann Some types are coalgebras of !, which gives you T -> !T. For instance, pure 'data' (not using any functions internally) would generally behave this way, because you can do matching on one to build up two equivalent values (although perhaps in a real language you would want to not have to do that). This is explained in e.g. Filinski's Linear Continuations.

view this post on Zulip Dan Doel (Jul 13 2020 at 14:52):

One of the examples is λ b. if b then (f true, g true) else (f false, g false), where the boolean stuff is syntax for 1 + 1. This is essentially the same as λ b. (f b, g b). Presumably the same would be possible for the natural numbers.
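Dan Doel's example transliterated into runnable form (the consumers `f` and `g` are arbitrary placeholders of mine): a boolean is pure data, so it can be "copied" by matching on it and rebuilding two values rather than by aliasing.

```python
def consume_twice(f, g, b):
    # λ b. if b then (f true, g true) else (f false, g false)
    if b:
        return (f(True), g(True))
    return (f(False), g(False))

def consume_twice_shared(f, g, b):
    # ...which is essentially the same as λ b. (f b, g b)
    return (f(b), g(b))
```

Both definitions agree on every boolean input, which is the sense in which matching "duplicates" the data.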

view this post on Zulip Andrew Cann (Jul 13 2020 at 14:59):

@JS Pacaud Lemay

I haven't looked into differential linear logic at all, but those rules seem to break everything :grimacing:. If !P ⊦ Q implies P ⊦ Q then doesn't the entire theory become non-linear? And if !P ⊦ Q implies ⊦ Q then can't that be used to prove anything? eg. !0 ⊦ 0 so ⊦ 0?

@Dan Doel

I'm aware that pure-positive/data types are copyable (in the sense that you can derive T -> !T) but it would be cool to be able to do this with negative types, or types whose definition is private (not that linear logic has a notion of "private", but most languages do via the module system).


Come to think of it. Linear logic has the rule:

!Γ ⊦ B, ?Δ
-----------
!Γ ⊦ !B, ?Δ

Which means you also have:

!Γ, A ⊦ A, B, ?Δ
------------------
!Γ, A ⊦ A, !B, ?Δ

Which means you also have:

!Γ, A ⊦ A ⅋ B, ?Δ
------------------
!Γ, A ⊦ A ⅋ !B, ?Δ

Which seems to be the rule I'm looking for - if you can use A to derive any number of B then you can use it to derive A ⅋ !B. Huh...

view this post on Zulip JS PL (he/him) (Jul 13 2020 at 15:12):

@Andrew Cann : Yes, in differential linear logic & = + and 0 = 1 (so in category theory terms, you have biproducts and zero objects), and so in particular this gives you zero maps. And I believe the answer to your question about why things don't become non-linear is that there is no "copromotion" rule for !. The idea behind !P ⊦ Q implies P ⊦ Q is more that for every proof you have a linear version of that proof. So I don't think those extra rules break everything... But differential linear logic might not be what you're looking for!

view this post on Zulip Valeria de Paiva (Jul 14 2020 at 00:02):

Andrew Cann said:

but I cannot see the rationale for these exponentials myself. do you have one, apart from trying out symmetry?

Partly just trying out symmetry. Mainly though because I'm interested in programming language design and ¡P seems like it could be practically useful. Having unordered multisets as a primitive type would mean having commutativity built into the language, which would be good for modelling race-free concurrency.

I don't see why the commutativity based on + would be better than the one based on tensor or on the product, maybe you think this would imply race-freeness? this does not seem to me clear, but I'd like to understand why you think so.

I'm a great believer in multiplicative disjunction (or par) in intuitionistic linear logic and of ?'s that are not defined by negation from the exponential "!", especially for program language design, so certainly would like to understand your intuition better. I wonder if that is at all similar to Aschieri and Genco's
Par Means Parallel: Multiplicative Linear Logic Proofs as Concurrent Functional Programs
Proceedings of POPL 2020 https://dmg.tuwien.ac.at/aschieri/popl-clinear.pdf

view this post on Zulip Andrew Cann (Jul 14 2020 at 14:57):

@Valeria de Paiva

From my thinking the concept of commutativity/unorderedness only applies to the exponential modalities. The binary connectives (+, &, ⊗, ⅋) are unordered in the sense that A + B is isomorphic to B + A and so forth, but you still need to distinguish between the left and the right element because they have different types. If I wanted a type of unordered pairs of ints for instance, where (1, 2) is indistinguishable from (2, 1), I couldn't use Int ⊗ Int for this. Being able to match on an A ⊗ B requires you to be able to handle the A and the B differently. You can't special-case that for when A and B happen to be the same type because that would require decidable type equality and it also breaks when you have generic types / type-variables.
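The point in miniature (my illustration): an unordered pair of ints is a two-element multiset, so (1, 2) and (2, 1) denote the same value, while an ordinary pair (Int ⊗ Int) keeps the ordering.

```python
from collections import Counter

# Two-element multisets: equality ignores the order of insertion.
unordered_a = Counter([1, 2])
unordered_b = Counter([2, 1])

# Ordinary pairs: equality respects position.
ordered_a = (1, 2)
ordered_b = (2, 1)
```

Here `unordered_a == unordered_b` holds while `ordered_a != ordered_b`, which is exactly the distinction between ¡-style bags and ⊗-style tuples.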

So if I want unordered pairs but I only have ⊗, I need to axiomatize them as a new primitive type. But then I still can't define, and would need to separately axiomatize, unordered triples, unordered 5-tuples, and in general unordered p-tuples for every prime p. I could have a form of ¡ indexed by a natural number expressing the arity, but I'd rather avoid meta-level natural numbers for now, and it's also unnecessary if I have dependent types and can express the arity separately with something like ∑(x: ¡P).(count(x) = n)

Does this make sense?

and of ?'s that are not defined by negation from the exponential "!", especially for program language design

Interesting. I'm still having trouble wrapping my head around what ? means from a programming-language design point of view, aside from it being the negation of !. Do you have some hints here?

I haven't read "par means parallel", though mainly because my initial reaction to the title was "Huh? Par doesn't mean parallel. The tensor product is parallel, the whole point of par is to be not-parallel." That's a dumb reason though so I should just get around to reading it.

view this post on Zulip Valeria de Paiva (Jul 14 2020 at 16:14):

Andrew Cann said:

Valeria de Paiva

The binary connectives (+, &, ⊗, ⅋) are unordered in the sense that A + B is isomorphic to B + A and so forth, but you still need to distinguish between the left and the right element because they have different types. If I wanted a type of unordered pairs of ints for instance, where (1, 2) is indistinguishable from (2, 1), I couldn't use Int ⊗ Int for this. Being able to match on an A ⊗ B requires you to be able to handle the A and the B differently.
Does this make sense?

well, I don't quite follow you. yes, you need natural isomorphisms to make A+B the same as B+A, etc. and for some things you want this identification, for others you don't, I agree. it's even worse with A×(B+C) ≅ A×B + A×C imo. but then I don't see how the exponentials/modalities will help with these issues.

Now the issue of the multiplicative disjunction and its modality "?" seems to me orthogonal to the conversation above.

Tensor seems like parallelism of assumptions/contexts, while "par" seems like parallelism of threads. Each thread is a linear function, they evolve independently, hence we can look at them as parallel disjunctions of executing threads.
What I wanted to model with !,? is the duality of production and consumption of resources, so the rules for a comonoidal monad (as above) make sense, but they do not enforce a total duality between !A and ?A.

this does not preclude your problems with Nat. those are serious as there's cat theory showing that we can only do NNOs for cartesian categories, I think (see https://www.mscs.dal.ca/~pare/PareRomanNNO1.pdf)

view this post on Zulip Chris Barrett (Oct 29 2020 at 11:44):

For those interested, it looks like a paper on this topic has just been uploaded to arxiv: https://arxiv.org/pdf/2010.13926.pdf

view this post on Zulip Andrew Cann (Nov 07 2020 at 06:21):

@Chris Barrett: Oh wow! Thanks for posting this!

I like that someone else is looking into these exponentials. I've been playing with them again recently and I'm convinced they're important, partly for the same reasons mentioned in the paper.

That said, I'm sad that their version of ¿ (which they call ¡) introduces non-determinism.

¡A (my ¡, their ¿) is the type of multisets of type A. Multisets are inherently unordered, so the eliminator for it (ie. its dual type ¿(¬A)) should respect that lack of ordering. That is, it should process the As it receives commutatively, rather than being able to distinguish between {1, 1, 2} and {1, 2, 1} since they're the same multiset. @JS Pacaud Lemay gave an elimination rule above which satisfies this:

¿P ⊦ Q
-------
¿P ⊦ ¿Q

But unfortunately it's only commutative because it only allows you to eliminate a multiset into more multisets. eg. it can let you map a multiset of ints into a multiset of strings by calling "print" on each int, but it can't let you xor a multiset of bools into a single bool.

I have a few issues with linear logic's promotion rule and this highlights one of them. The intro and elim rules for a type should be the exact inverse of each other. That is, they should allow you to "get out" exactly what you "put in". But if you turn the rules for ! and ? around to make ¡ and ¿ you can see that this doesn't work (at least for ¡ and ¿). For instance, a multiset of units (¡1) is defined entirely by the number of elements put into it. So it should be equivalent to Nat. But since there's no way to check if that number of elements is even, some information has been lost.


I have a bunch of scattered thoughts around this topic and I'm hoping someone will review them for me and give me feedback.

Firstly I want to motivate the ¡/¿ coexponentials a bit more. I've designed a programming-language syntax for multiplicative-additive linear logic. It captures all of MALL with nothing extraneous (like extra rules that weaken the system or -o types distinct from ⅋ types). One nice thing about it is that pattern syntax is identical to term syntax - every syntactic construct is usable in both positive (term) and negative (pattern) position. However the meaning of the syntax differs between the two positions. Specifically: let's define a "semi-dual" operation on types which only inverts the additive connectives. That is, writing ~A to mean the semi-dual of A, you have:

~0 = ⊤
~⊤ = 0
~(A + B) = ~A & ~B
~(A & B) = ~A + ~B
~⊥ = ⊥
~1 = 1
~(A ⊗ B) = ~A ⊗ ~B
~(A ⅋ B) = ~A ⅋ ~B

In my syntax you match on a term of type A with the syntax of a term of type ~A. In terms of the underlying logic a type still needs to be cut against its full dual, but when moving between the term and pattern fragments you only syntactically take the semi-dual.
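The semi-dual table can be written as a tiny recursive function (the representation is mine: units are strings, connectives are `("op", left, right)` tuples), which also makes it easy to check that ~ is an involution.

```python
# Additive units and connectives swap with their duals; everything
# else (⊥, 1, ⊗, ⅋) is fixed, per the ~ table above.
ADD_DUALS = {"0": "⊤", "⊤": "0", "+": "&", "&": "+"}

def semi_dual(t):
    """Invert only the additive connectives of a type."""
    if isinstance(t, str):
        return ADD_DUALS.get(t, t)      # ⊥ and 1 map to themselves
    op, a, b = t
    return (ADD_DUALS.get(op, op), semi_dual(a), semi_dual(b))
```

For instance `semi_dual(("+", "1", "⊥"))` gives `("&", "1", "⊥")`, and applying `semi_dual` twice returns any type unchanged.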

A full description of the syntax might help (this is a simplified version of my hobby language syntax for explanation purposes):

## Terms:

for terms:
    a : A
    b : B
    n : ¬A
and patterns
    c : A
    m : ¬A

you have the terms:

    [] : ⊤

    [left = a, right = b] : A & B

    .left a : A + B

    .right b : A + B

    () : 1

    (a, b) : A ⊗ B

    connect a = n : ⊥

    c => b : A ⅋ B

    let m = a; b : B

## Patterns:

for patterns:
    a : A
    b : B
    n : ¬A
and terms
    c : A
    m : ¬A

you have the patterns:

    [] : ⊤

    [left = a, right = b] : A & B

    .left a : A + B

    .right b : A + B

    connect a = n : 1

    c => b : A ⊗ B

    () : ⊥

    (a, b) : A ⅋ B

    let m = a; b : B

Notice that the term and pattern fragments are identical except that the syntax for the multiplicative fragment gets reversed. In other words, you match on a set of branches by choosing a branch and a sum by providing a set of branches, but you match on a tuple by destructuring it and a function by calling it with an argument and binding the result.

This all works well and has some very pretty symmetry, but what happens when you try to add the exponentials? You can define the term syntax for !A as &a where a is a term that only uses other !-typed variables, but to match on a !A you want to be able to destructure it into a set of variables of types A and !A. In other words, you want to syntactically match it against its semi-dual, which happens to be ¡(~A). And this also works in reverse - if you have ¡A types then the syntax for matching on them is &a, except where a is a pattern that only binds more ¡-types (this follows directly from the elimination rule for ¡ that I gave above).

So if I want to continue expanding my language to full linear logic with exponentials, and if I want to keep the nice property that term and pattern syntax is identical, then I end up needing to add ¡ and ¿ types to complement the ! and ? types, otherwise the syntax becomes lop-sided. And since LL is kinda useless for programming without ¡ and ¿ anyway (as pointed out in the paper) that seems like a good motivation to try and understand them, rather than abandon this exploration path in language-design-space.

That said, I want to talk about generators, inductive types and fixpoints in LL, but I'll start a new topic for that...

view this post on Zulip Chris Barrett (Nov 07 2020 at 14:24):

Andrew Cann said:

That said, I'm sad that their version of ¿ (which they call ¡) introduces non-determinism.

It looks like their system has a "strong" version of the co-exponentials and that the non-determinism comes from the multiplexing version of que. It's hard for me to tell exactly what's happening as they don't seem to list their cut-elimination rules.

I'd be interested to see a system for just the "weak" co-exponentials, although they claim in the conclusion there is a problem with the commuting conversions for cut-elimination here. I've played around a bit with making a deep inference formulation of the system though and I think that should work fine. I can't see that non-determinism would arise in this setting, at least

view this post on Zulip JS PL (he/him) (Nov 07 2020 at 15:41):

@Chris Barrett : when you say strong exponential, do you mean the one where they assume !A ≅ 1 × A × (!A ⊗ !A)?

view this post on Zulip Chris Barrett (Nov 09 2020 at 12:10):

@JS Pacaud Lemay I believe that's how they define their "strong exponential", but really I'm thinking more about the formulation of the rule for ! listed in section 2.1 of their paper. It looks unusual to me - I've never seen this presentation before and it doesn't have the subformula property (and thus neither do cut-eliminated proofs). Calling the usual LL exponentials ! and ? and the new "co-exponentials" co-! and co-? , they have a strong rule for both ! and co-!