You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Hello people. I'm working on a type theory for a programming language I want to make based on linear logic and dependent types. As part of that, I'm considering adding an exponential modality similar to ! / "of course", except without dereliction or promotion and with another rule instead, which here I'll call "read". My problem is that I'm failing to figure out how to formulate this rule, so I'm hoping someone can help me...
I'll describe my problem in more detail: the ! modality has these three properties (in addition to promotion):

!A |- 1
!A |- !A × !A
!A |- A
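For intuition, these three properties can be sketched in Rust, whose move semantics give an affine (use-at-most-once) discipline. The names `Bang`, `discard`, `duplicate`, and `extract` are mine, not part of the calculus, and this only illustrates the shape of the rules, not a real !-modality:

```rust
// A toy model of !A: a value that may be freely dropped, copied, and opened.
// Rust's moves are affine rather than linear, so this is only a sketch.
struct Bang<A>(A);

// !A |- 1   (weakening): a Bang may be discarded.
fn discard<A>(_x: Bang<A>) {}

// !A |- !A × !A   (contraction): a Bang may be duplicated.
fn duplicate<A: Clone>(x: Bang<A>) -> (Bang<A>, Bang<A>) {
    (Bang(x.0.clone()), Bang(x.0))
}

// !A |- A   (dereliction): a Bang may be opened to yield its contents.
fn extract<A>(x: Bang<A>) -> A {
    x.0
}
```

The `Clone` bound on `duplicate` is where this sketch cheats: in the real modality, duplicability comes from the ! itself, not from the underlying type.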
However these can be formulated as right-rules for ? as follows:
G |- D
----
G |- D, ?A
G |- D, ?A, ?A
----
G |- D, ?A
G |- D, A
----
G |- D, ?A
The advantage of this formulation is that each rule has a single instance of ?A on the right-hand side of the conclusion, which I think is necessary for cut-elimination and proof search.
I'd like a type, which I'll call Read(A), which has these 3 properties instead:

Read(A) |- 1
Read(A) |- Read(A) × Read(A)
A |- A × Read(A)
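Under the same caveats as any Rust sketch of a linear calculus, the three Read properties can be drafted as an interface. The representation `ReadRef<A>` and the function names are invented for illustration; the key difference from the !-style interface is that the introduction rule returns the original A alongside the new Read(A):

```rust
// A toy model of Read(A): freely droppable and copyable, and introduced
// without consuming the A it was read from. Invented names; sketch only.
#[derive(Clone)]
struct ReadRef<A: Clone>(A);

// Read(A) |- 1   (weakening)
fn discard_read<A: Clone>(_x: ReadRef<A>) {}

// Read(A) |- Read(A) × Read(A)   (contraction)
fn dup_read<A: Clone>(x: ReadRef<A>) -> (ReadRef<A>, ReadRef<A>) {
    (x.clone(), x)
}

// A |- A × Read(A)   ("read"): the A is handed back, not consumed.
fn read<A: Clone>(a: A) -> (A, ReadRef<A>) {
    let r = ReadRef(a.clone());
    (a, r)
}
```

Again the `Clone` bound is the cheat: it stands in for the "only the positive/data component is readable" restriction that the real rule is meant to enforce.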
But I'd like to formulate the read rule similarly to weakening and contraction, such that it has a conclusion with a single instance of Read(A) or ¬Read(A) to the right of the turnstile. I can't see a way to do this.
My aim here is to end up with the following properties for Read:
Read(0) = 0
Read(1) = 1
Read(A + B) = Read(A) + Read(B)
Read(A × B) = Read(A) × Read(B)
Read(!A) = !A
Read(Read(A)) = Read(A)
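The intended equations can be stated operationally as a recursive normalisation function over a small type syntax. The AST and the function `read_of` are my own illustration of the equations (no atomic types, quantifiers, or the other connectives), not part of the proposed calculus:

```rust
// A fragment of the type language, just enough to state the Read equations.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Zero,                      // 0
    One,                       // 1
    Sum(Box<Ty>, Box<Ty>),     // A + B
    Prod(Box<Ty>, Box<Ty>),    // A × B
    Bang(Box<Ty>),             // !A
    Read(Box<Ty>),             // Read(A)
}

// Compute the normal form of Read(t), pushing Read through each
// connective exactly as the listed equations dictate.
fn read_of(t: Ty) -> Ty {
    match t {
        Ty::Zero => Ty::Zero,                    // Read(0) = 0
        Ty::One => Ty::One,                      // Read(1) = 1
        Ty::Sum(a, b) =>                         // Read(A + B) = Read(A) + Read(B)
            Ty::Sum(Box::new(read_of(*a)), Box::new(read_of(*b))),
        Ty::Prod(a, b) =>                        // Read(A × B) = Read(A) × Read(B)
            Ty::Prod(Box::new(read_of(*a)), Box::new(read_of(*b))),
        Ty::Bang(a) => Ty::Bang(a),              // Read(!A) = !A
        Ty::Read(a) => read_of(*a),              // Read(Read(A)) = Read(A)
    }
}
```

On this fragment `read_of` always eliminates the Read constructor entirely, which matches the observation below that every copyable type X should satisfy Read(X) = X.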
It seems like these properties should be necessary consequences of the read rule. For instance, to prove Read(0) |- 0 I should be able to use the fact that the only way to introduce Read(0) is via the rule 0 |- 0 × Read(0), implying there was a proof of 0 at some point. And since 0 |- 0 × 0, I should be able to "look back" to where there was a proof of 0, duplicate that proof, and bring the duplicate forward to where it's needed. The same logic should apply to any type X which satisfies X |- X × X, all of which should have the property that Read(X) = X.
Is this reasoning sound? Or is there a way to formulate the read rule to make it sound and so that I still have cut-elimination?
Thank you to anyone who can help me clarify my thoughts on this or who can link me to research exploring a similar idea :)
To explain my motivation a bit: the type !A is a strictly positive / "pure-data" type in the sense that it can be freely copied and destroyed. The only way to create a !A is if it's possible to create an A out of strictly-positive pure data, but once you have an !A you can freely extract as many As out of it as you like. In this sense, !A can be read as "I have a copyable A". And !A behaves like a comonad.
I'd like a similar kind of type which behaves more like a monad (though not quite like one) and can be read as "I have a reference to an A". Read(A), like !A, can be freely copied and destroyed. The difference is that you can always create a Read(A) out of an A (though without consuming the A, which you still have to consume in parallel), but when you have a Read(A) you can only extract the positive/data component of the A out of it.
I can then add dependent types to my linear-logic language by having the type of a term under a context G be checked under the context Read(G). I might also redefine the promotion rule for ! as:

Read(G) |- A
----
Read(G) |- !A

since Read behaves better than ! w.r.t. context structure, in the sense that we have Read(A × B) = Read(A) × Read(B).
Andrew Cann said:

Hello people. I'm working on a type theory for a programming language I want to make based on linear logic and dependent types. As part of that, I'm considering adding an exponential modality similar to ! / "of course" except without dereliction or promotion and with another rule instead which here I'll call "read". My problem is that I'm failing to figure out how to formulate this rule, so I'm hoping someone can help me...

[...] The advantage of this formulation is that each rule has a single instance of ?A on the right-hand side of the conclusion, which I think is necessary for cut-elimination and proof search.
Well, the traditional wisdom on that has been that you do want Natural Deduction (ND) instead of sequent calculus for Curry-Howard, as you want to make more proofs equivalent. The sequent calculus is better for proof search, but it's too verbose: too many essentially identical proofs are considered different proofs. However, there are serious problems when trying to make ND multi-conclusion; there are some books on the issues and possible solutions, starting with Kneale and Kneale (the math historians), CSLI's Ungar in the 80s, and many others. So the idea of concentrating on ?A instead of !A does not seem to me very promising, but of course you may be able to make progress on all of these different issues. But then what it would make sense to ask is: what is the categorical semantics of this not-quite-monad "Read"? Because references (meaning "pointers") have also always been difficult to model. So, I'm always willing to discuss making LL into programming languages, I'm doing it and talking about it all the time and pleased to do so with new ideas, but here I am not seeing a "path to the sea", perhaps yet.
Is this reasoning sound?
Looking at this again, it's obviously not. I'm missing an axiom.
@Valeria de Paiva:
Well, the traditional wisdom on that has been that you do want Natural Deduction (ND) instead of sequent calculus for Curry-Howard
The language I've got so far uses a natural-deduction style and it captures all of linear logic, plus another two modalities which I discussed here.

The basic idea is that there are two kinds of judgements, which I'll call "terms" and "patterns". A term judgement is of the form G |- A, where G is a context (a multi-set of formulas) and A is a single formula. This is the same as the standard linear logic sequent it resembles. A pattern judgement is of the form G |- A -| D, where D is another context. This is equivalent to the linear logic sequent G |- A, tensor(D), where tensor(D) means to combine all the formulas in D into a tensor.
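The tensor(D) operation in this translation is just a fold over the context. A concrete sketch, using an invented mini-AST for formulas (the names `Formula` and `tensor_all` are mine), with 1 as the empty tensor:

```rust
// A context is a multi-set of formulas; tensor(D) combines them all into
// one formula, with 1 as the unit for the empty context. Illustration only.
#[derive(Clone, Debug, PartialEq)]
enum Formula {
    One,
    Atom(&'static str),
    Tensor(Box<Formula>, Box<Formula>),
}

fn tensor_all(ctx: Vec<Formula>) -> Formula {
    ctx.into_iter()
        .reduce(|acc, f| Formula::Tensor(Box::new(acc), Box::new(f)))
        .unwrap_or(Formula::One)
}
```

So the pattern judgement `G |- A -| D` with empty D collapses to the term judgement `G |- A` (up to the unit 1), which is why the two judgement forms fit together.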
From here you can derive term and pattern rules for all the types in linear logic, and this is what you get:
0 type (which I write @[]):
(no term or pattern rule)

⊤ type (which I write #[]):

----
G |- [] : #[]

----
G |- [] : #[] -| D

1 type (which I write #()):

----
. |- () : #()

G0 |- x : A -| .
G1 |- y : Not(A) -| D
----
G0, G1 |- connect x = y : #() -| D

⊥ type (which I write @()):

G0 |- x : A
G1 |- y : Not(A)
----
G0, G1 |- connect x = y : @()

----
. |- () : @() -| .
⊕ type (which I write @[left: A, .. B]):

G |- a : A
----
G |- @left a : @[left: A, .. B]

G |- a : A -| D
----
G |- @left a : @[left: A, .. B] -| D

& type (which I write #[left: A, .. B]):

G |- a : A
G |- b : B
----
G |- [left = a, .. b] : #[left: A, .. B]

G |- a : A -| D
G |- b : B -| D
----
G |- [left = a, .. b] : #[left: A, .. B] -| D

⊗ type (which I write #(left: A, .. B)):

G0 |- a : A
G1 |- b : B
----
G0, G1 |- (left = a, .. b) : #(left: A, .. B)

G0 |- a : A
G1 |- b : B -| D
----
G0, G1 |- (left = a) => b : #(left: A, .. B) -| D

⅋ type (which I write @(left: A, .. B)):

G0 |- a : A -| C
G1, C |- b : B
----
G0, G1 |- (left = a) => b : @(left: A, .. B)

G0 |- a : A -| D0
G1 |- b : B -| D1
----
G0, G1 |- (left = a, .. b) : @(left: A, .. B) -| D0, D1
! type (which I write Copy(A)):

Copy(G) |- a : A
----
Copy(G) |- copy a : Copy(A)

Copy(G0) |- a : A -| C
Copy(G1), C |- x : Bundle(B)
Copy(G2) |- y : Copy(Not(B)) -| D
----
Copy(G0, G1, G2) |- srv(x = y) a : Copy(A) -| D

? type (which I write Bundle(A)):

G0 |- x : T
G1 |- y : Not(T)
----
G0, G1 |- x <|> y : Bundle(A)

G |- a : A
----
G |- just a : Bundle(A)

G0 |- x : Bundle(A) -| C
G1, C |- y : Bundle(A)
----
G0, G1 |- <.. x> => y : Bundle(A)

----
. |- <> : Bundle(A) -| .

G |- a : A -| D
----
G |- <a> : Bundle(A) -| D

G0 |- x : Bundle(A) -| D0
G1 |- y : Bundle(A) -| D1
----
G0, G1 |- <.. x, .. y> : Bundle(A) -| D0, D1
Set(A) types (multi-sets of values):

----
. |- <> : Set(A)

G |- a : A
----
G |- <a> : Set(A)

G0 |- x : Set(A)
G1 |- y : Set(A)
----
G0, G1 |- <.. x, .. y> : Set(A)

G0 |- x : T -| .
G1 |- y : Not(T) -| D
----
G0, G1 |- x <|> y : Set(A) -| D

G |- a : A -| D
----
G |- just a : Set(A) -| D

G0 |- x : Set(A)
G1 |- y : Set(A) -| D
----
G0, G1 |- <.. x> => y : Set(A) -| D

Srv(A) types (the dual of Set types):

C |- a : A
. |- x : Set(Not(B)) -| C
G |- y : Srv(B)
----
G |- srv(x = y) a : Srv(A)

. |- a : A -| Set(C)
----
. |- copy a : Srv(A) -| Set(C)
If I've done everything correctly then these rules should be equivalent to the original linear logic rules.
An interesting thing about this is that the term/pattern rules for any type mirror the pattern/term rules of its "semi-dual" (where I define semi-dual to mean the type with all the multiplicative components inverted). For instance, if you take the term rule for par types and replace all the pattern premises with terms and all the term premises with patterns, you get the pattern rule for tensor types, and vice-versa. I don't know if this has any deep significance, but it makes for a nice language, since it means I can make pattern syntax identical to term syntax (e.g. any syntactic form that's usable in one position is usable in the other).
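The semi-dual operation described above, for the core connectives, can be written out concretely: swap each multiplicative for its counterpart (1 with ⊥, ⊗ with ⅋) and leave the additive structure alone. The AST and the function `semi_dual` are my own illustration of that definition:

```rust
// Semi-dual: invert the multiplicative components while leaving the
// additive structure untouched. Invented illustration, core fragment only.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    One,                      // 1  (multiplicative)
    Bot,                      // ⊥  (multiplicative)
    Tensor(Box<Ty>, Box<Ty>), // ⊗  (multiplicative)
    Par(Box<Ty>, Box<Ty>),    // ⅋  (multiplicative)
    Plus(Box<Ty>, Box<Ty>),   // ⊕  (additive: unchanged)
    With(Box<Ty>, Box<Ty>),   // &  (additive: unchanged)
}

fn semi_dual(t: Ty) -> Ty {
    match t {
        Ty::One => Ty::Bot,
        Ty::Bot => Ty::One,
        Ty::Tensor(a, b) => Ty::Par(Box::new(semi_dual(*a)), Box::new(semi_dual(*b))),
        Ty::Par(a, b) => Ty::Tensor(Box::new(semi_dual(*a)), Box::new(semi_dual(*b))),
        Ty::Plus(a, b) => Ty::Plus(Box::new(semi_dual(*a)), Box::new(semi_dual(*b))),
        Ty::With(a, b) => Ty::With(Box::new(semi_dual(*a)), Box::new(semi_dual(*b))),
    }
}
```

Unlike full linear negation, this operation fixes the additives, and like negation it is an involution: applying it twice gives back the original type.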
references (meaning "pointers") have also always been difficult to model.
I think I'm nearly there... The idea I'm going for is to add another four modalities, similar to the existing four but with opposite "monadness" (sort of). The (co)weakening and (co)contraction rules then move to the new modalities, and the rules for the original four modalities then get reformulated in terms of the new ones:
Read(G) |- A
----
Read(G) |- Copy(A)
G |- CoRead(Bundle(A)), D
----
G |- Bundle(A), D
G |- Extend(Set(A)), D
----
G |- Set(A), D
CoExtend(B) |- A
----
CoExtend(B) |- Srv(A)
The problem I'm having is how to formulate the rules for these new modalities. (Co)weakening and (co)contraction are easy, but then I need another few axioms which give me the equivalences for Read that I listed in the OP, as well as similar equivalences for the other types.