Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: type theory & logic

Topic: Type Theory based on Dependent Inductive/Coinductive Types


view this post on Zulip Cody Roux (Jun 26 2020 at 18:51):

Has anyone internalized (no pun intended) this paper: https://arxiv.org/abs/1605.02206? Basically it's a type theory that defines function types as a particular inductive type, and has normalization as a by-product. I've been thinking this is possible for a while now, so I'm excited to understand what makes it tick, and whether this general approach can be applied to other type theories.

view this post on Zulip Dan Doel (Jun 26 2020 at 18:55):

I haven't seen that paper, but I'm familiar with another one that can define functions. It's more like System F_ω, though.

view this post on Zulip Dan Doel (Jun 26 2020 at 19:01):

I guess the setting I'm familiar with is rather different, though. It doesn't attempt to have strong normalization, I think, and defining functions as 'data' would be kind of wrong. They are codata (that gives the correct evaluation strategy).

view this post on Zulip Cody Roux (Jun 26 2020 at 19:03):

Co-inductive, sure. What's the other paper?

view this post on Zulip Dan Doel (Jun 26 2020 at 19:03):

Compiling with Classical Connectives.

view this post on Zulip Dan Doel (Jun 26 2020 at 19:04):

It is also different from this paper in that it has multiple things on the right of the turnstile, and (co)constructors can have arbitrarily many things to the left and right of the turnstile.

view this post on Zulip Cody Roux (Jun 26 2020 at 19:06):

Ugh classical logic. Thanks for the reference though! I really do enjoy this "function spaces are nothing special" philosophy.

view this post on Zulip Dan Doel (Jun 26 2020 at 19:07):

It is not just classical logic.

view this post on Zulip Dan Doel (Jun 26 2020 at 19:07):

It is more like 'classical' linear logic, but it isn't linear, it has polarity shifts instead.

view this post on Zulip Dan Doel (Jun 26 2020 at 19:08):

And the polarity shifts correspond to evaluation strategies that make the reduction behavior unambiguous.

view this post on Zulip Dan Doel (Jun 26 2020 at 19:13):

So, e.g. A + B is not the same as A ⅋ B, because the evaluation strategy is different, and A and B cannot even be the same in either case, because + requires A and B to be positive, while for ⅋ they must be negative.

view this post on Zulip Dan Doel (Jun 26 2020 at 19:19):

And ⅋ is defined in a similar way to functions if you use the calculus for arbitrary (co)data definitions. It is the codata type with a destructor like A ⅋ B ⊢ A, B, I think. Functions are like A, A → B ⊢ B, but you could use a ¬ codata type to mediate the difference.

view this post on Zulip Dan Doel (Jun 26 2020 at 19:30):

I guess I should say (since it's probably not obvious), the similarity between the two is that ⅋ is a codata type with one destructor with two parameters, both negative. → is a codata type with one destructor with two parameters, one positive and one negative.
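
Very roughly, here is how those two shapes might look in Haskell, using continuations for the negative parameters (an illustrative CPS-style encoding of my own, not the paper's calculus):

```haskell
-- Illustrative CPS-style encoding (not the paper's calculus): negative
-- parameters become continuations answering in some fixed result type r.

type Not r a = a -> r  -- a continuation consuming an A

-- A ⅋ B: one destructor with two parameters, both negative.
newtype Par r a b = Par { par :: Not r a -> Not r b -> r }

-- A → B: one destructor with two parameters, one positive (the argument)
-- and one negative (the return continuation).
newtype Fun r a b = Fun { app :: a -> Not r b -> r }
```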

view this post on Zulip Jacques Carette (Jun 26 2020 at 20:09):

I think the original question is still a good one: has anyone formalized either of those approaches?

view this post on Zulip Kenji Maillard (Jun 26 2020 at 20:34):

I saw a few formalizations in Coq of μμ̃ that are related to this point of view (the development of this paper for instance -- μ̃ gives you let-binding as well as pattern-matching, whereas μ provides some way to sequence contexts, as well as copattern-matching).
I have yet to read the paper above in detail, but I think one annoying point is the treatment of universes. Sure, you can provide inductive-recursive models of universes, but I don't think there is any such "symmetric" presentation of type theory that incorporates universes well (I would love to be proven wrong, though).

view this post on Zulip Dan Doel (Jun 26 2020 at 20:37):

Yeah, I'm not really sure how to handle dependent types, because they're kind of inherently biased toward terms.

view this post on Zulip Dan Doel (Jun 26 2020 at 20:43):

Whereas the μμ̃ stuff wants to put terms, coterms and cuts on more equal footing.

view this post on Zulip Dan Doel (Jun 26 2020 at 20:44):

(Assuming it's like the paper I mentioned above, which does have those operations.)

view this post on Zulip Cody Roux (Jun 26 2020 at 22:03):

Dan Doel said:

I guess I should say (since it's probably not obvious), the similarity between the two is that ⅋ is a codata type with one destructor with two parameters, both negative. → is a codata type with one destructor with two parameters, one positive and one negative.

That's cool, but I have very bad intuition about this. Is there a "standard" PL construct of type A ⅋ B?

view this post on Zulip Dan Doel (Jun 27 2020 at 00:23):

I don't know. It probably won't look very nice, because the way a lot of stuff is presented means that use of ⅋ is necessarily obscured. It is a type that is natural when you are able to manipulate continuations as easily as values, but most things are designed so that continuations are implicit, to limit what you can do with them.

A value p : A ⅋ B is something you can supply with two continuations, so in some ways you can think of it as something where you can do match p with { L a -> ... ; R b -> ... }. In a linear setting, I think the expectation you should have is that both branches run (instead of A + B, where only one runs). In a non-linear setting, this is relaxed, but multiple branches may still run, and some branches may run multiple times, I think (contrary to A + B, where again one branch runs exactly once).

So, A ⅋ B is kind of like the "devil's bargain" interpretation of excluded middle. But the important part is that the calculus is structured so that this computational behavior is not conflated with the type A + B. The latter is intrinsically call-by-value, so its values are fully reduced and actually look like L x or R y, and it makes no sense for the "devil's bargain" to be a value of that type. By contrast, A ⅋ B is inherently call-by-name, so its values can have non-trivial computational behavior.

Some of this can be seen in A → B = ¬ A ⅋ B. To call the function I think we must do match f with { L k → jump k x ; R b → b }. The first branch is, I think, run as many times as the argument is referred to directly in the body of the function. This may actually be 0 times, even, but I think it may be multiple times as well, although it isn't doing any significant computation here (because ¬ just turns a call-by-value type into call-by-name). The second branch is evaluated with the result of the function. But actually B is call-by-name because ⅋ requires its sub-expressions to be so, so how many times we go through the second branch might depend on how many times we use the name of this overall expression. So in some sense, even "standard" call-by-name scenarios are doing something a lot like the bargain, jumping back to the same place over and over. It just isn't presented that way.
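
To make the "supply two continuations" reading and the A → B = ¬ A ⅋ B decomposition concrete, here is a small Haskell sketch (again my own encoding, not the paper's):

```haskell
-- Continuing the illustrative CPS encoding: matching on p : A ⅋ B means
-- handing it both branches as continuations; the value itself decides how
-- (and how often) each branch gets invoked.

type Not r a = a -> r
newtype Par r a b = Par { par :: Not r a -> Not r b -> r }

-- A → B read as ¬A ⅋ B: the L branch receives a continuation k expecting
-- the argument, the R branch receives the eventual result.
type Arrow r a b = Par r (Not r a) b

-- Calling f: the L branch "jumps" the argument x to k; the R branch is
-- the return continuation.
call :: Arrow r a b -> a -> Not r b -> r
call f x ret = par f (\k -> k x) ret
```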

view this post on Zulip Dan Doel (Jun 27 2020 at 00:37):

In the calculus from the paper I mentioned, function values look like λ[x,α]. c, where x is the argument, and α is the return continuation, and c says how they should be used together. So, like, the identity function is λ[x,α]. &lt;x|α&gt;. Function continuations are like [x,α], which is like 'pushing x onto the stack α.' And values of A ⅋ B are like λ[α,β]. c, while continuations are like [α,β]. Values of A + B are like L x and R y, and continuations are like case { L x. c1 ; R y. c2 }. So there is more obvious distinction in the syntax than I used above, but I'm not sure how to maintain that in a world where you are required to only talk about terms.
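
A toy abstract syntax for these forms might look like the following Haskell datatypes (the constructor names and the μ/μ̃ binders are my guesses at the general shape, not the paper's exact grammar):

```haskell
-- Toy abstract syntax in the spirit of the calculus described above.
-- Constructor names are my own; only the shapes follow the discussion.

type Name = String

data Term
  = Var Name
  | Mu Name Command            -- μα. c : a term binding its own continuation
  | Lam Name Name Command      -- λ[x,α]. c : function value
  | ParVal Name Name Command   -- λ[α,β]. c : value of A ⅋ B
  | InL Term                   -- L x : value of A + B
  | InR Term                   -- R y : value of A + B

data CoTerm
  = CoVar Name
  | MuTilde Name Command               -- μ̃x. c : a coterm binding a term variable
  | Push Term CoTerm                   -- [x,α] : push x onto the stack α
  | ParCont CoTerm CoTerm              -- [α,β] : continuation of A ⅋ B
  | Case Name Command Name Command     -- case { L x. c1 ; R y. c2 }

-- A command (cut) pairs a producer with a consumer: ⟨e|α⟩.
data Command = Cut Term CoTerm

-- The identity function λ[x,α]. ⟨x|α⟩ from above:
identityFn :: Term
identityFn = Lam "x" "a" (Cut (Var "x") (CoVar "a"))
```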

view this post on Zulip Cody Roux (Jun 27 2020 at 03:43):

Cool, thanks for the clarification Dan! This makes some sense.

view this post on Zulip James Wood (Jun 27 2020 at 10:57):

Is it fair to say that a function in this kind of calculus can be made to go in two ways: either providing some input or requesting some output? I think this is how I'd make sense of A ⊸ B ≅ B^⊥ ⊸ A^⊥ (both of them being equivalent to A^⊥ ⅋ B).

view this post on Zulip Dan Doel (Jun 27 2020 at 16:44):

I'm not sure what you mean exactly. I guess I wouldn't say that for linear logic, because the idea is those aren't 'two modes' at all, they're just two equivalent ways of looking at the same thing, which is that there is an 'input pipe' for an A and an 'output pipe' for a B, and things may happen in any order/simultaneously up to data dependencies. Unless you mean that 'an input pipe for an input pipe is an output pipe'. But it might be better to think of putting an adapter on the output of a function than the function having multiple modes for that.

In Compiling with Classical Connectives it is more complicated, because negation mediates between evaluation strategies, and so there are multiple negations. If A is call-by-value, then ¬A is call-by-name, but if B is call-by-name, then ¬B is not even well formed. Instead you have ⊖B, which is call-by-value.

This has connections with different ways of viewing continuations. If A is call-by-value, then its values are like trees built out of 'constructors'. Then values of ¬A are, I think, suspended computations waiting on an A value to proceed. Continuations/covalues of ¬A are just wrapped values of A.

If B is call-by-name, then its values are already suspended computations, and its continuations are like trees of 'observations' or instructions to perform on a corresponding computation. So the roles of value and continuation are kind of swapped from above. Then a value of ⊖B is like a captured tree/stack representation of a continuation. And a continuation of ⊖B is a suspended computation waiting to be matched up with one of these captured stacks.

So, then, if you think about all this, you should hopefully be able to work out that ¬⊖B ≅ B and ⊖¬A ≅ A, and that this has something to do with inserting/removing some wrappers that kind of make my description of the 'trees' and such overly naive. :slight_smile: And I think functions still don't run in 'two modes' per se, it's probably more like snapping adapters on either side.

(There are even more complications that I haven't mentioned yet, but I think they're not actually relevant for this example.)
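
One way to see the 'inserting/removing wrappers' point in Haskell: represent a positive type by its values and a negative type by its continuations, since those are the concrete sides. Then both isomorphisms are literally just unwrapping (my own loose analogy, not the paper's semantics):

```haskell
-- Loose analogy (mine, not the paper's semantics): represent a positive
-- type by its values and a negative type by its continuations/stacks,
-- i.e. whichever side is the concrete well-founded tree.

-- If A is positive with values `a`, then ¬A is negative and its
-- continuations are just wrapped A values:
newtype NotCont a = NotCont a      -- continuation of ¬A ≅ value of A

-- If B is negative with continuations `b`, then ⊖B is positive and its
-- values are just wrapped (captured) B continuations:
newtype MinusVal b = MinusVal b    -- value of ⊖B ≅ continuation of B

-- ⊖¬A ≅ A: a value of ⊖¬A is a captured ¬A continuation, i.e. a
-- doubly wrapped A value.
fromMinusNot :: MinusVal (NotCont a) -> a
fromMinusNot (MinusVal (NotCont a)) = a

-- ¬⊖B ≅ B: a continuation of ¬⊖B is a wrapped value of ⊖B, i.e. a
-- doubly wrapped B continuation.
fromNotMinus :: NotCont (MinusVal b) -> b
fromNotMinus (NotCont (MinusVal b)) = b
```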

view this post on Zulip Dan Doel (Jun 27 2020 at 17:00):

This kind of shows you the bias toward values of a lot of type theory. The call-by-value stuff is inductive, and is thought of as concrete(-ish), well-founded trees. But that is describing its values. Coinductive stuff has concrete trees, too, but as the continuations, which you can't really access in type theory. But you can define an inductive type that is essentially the same as ⊖B. So you might not be really losing out on anything in absolute terms, but it is somewhat obscured.
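
For a concrete (made-up) instance: take B to be a stream type. Its continuations are finite towers of observations, and that shape is directly definable as an ordinary inductive type, which is roughly what a ⊖B-like type would capture:

```haskell
-- Hypothetical concrete instance of "an inductive type essentially the
-- same as ⊖B", with B = Stream a.

data Stream a = Cons a (Stream a)   -- codata in spirit

-- A finite observation on a stream: take some number of tails, then look
-- at the head. This inductive type plays the role of ⊖(Stream a)'s values
-- (captured stacks of instructions).
data StreamObs = Tail StreamObs | Head

-- Cutting a captured stack against a suspended computation:
observe :: StreamObs -> Stream a -> a
observe Head     (Cons x _)  = x
observe (Tail o) (Cons _ xs) = observe o xs
```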

view this post on Zulip Dan Doel (Jun 27 2020 at 17:23):

Hmm, actually, maybe my description wasn't overly naive after all. I guess the point of ¬ and ⊖ is that they dualize things in a way that matches up the 'simple' parts. So, a ¬A continuation is concrete presuming that A values are concrete, and a ⊖B value is concrete presuming that a B continuation is concrete. And A → B continuations are concrete because A values are concrete, and B continuations are concrete (push a concrete value on a concrete stack). It is probably the 'additional complications' that make the description naive.

view this post on Zulip Dan Doel (Jun 27 2020 at 18:08):

@Cody Roux I think one way to think about this, based on my rambling above, is that the calculus actually explicitly talks about (suspended) computations and concrete trees at every type. When you do this, functions are definable because it is just one type where the values are suspended computations, and the continuations are concrete trees.

But in a lot of type theory, they only want to talk about values, and moreover, they almost exclusively want to talk about types where the values are the concrete trees (bias toward induction). Then functions are the one exception, and all of the 'suspended computation' stuff gets dumped into its values. Then all of the difficulty of normalization, extensionality, etc. gets presented as being about functions, because that is the one 'escape hatch' type used to embed all the computational behavior. When other M-types are included, they also inherit some difficulties, because of the value bias (their values are suspended computations, which want extensionality, rather than trees, which are content with intensionality).

view this post on Zulip Dan Doel (Jun 27 2020 at 18:39):

In CwCC, computations ⟨e | T:s | α⟩ : Γ ⊢ Δ are their own thing to talk about. e is a term of type T, s is the evaluation strategy of type T, α is a coterm of type T, and Γ and Δ classify the free variables used in e and α. Then normalization is clearly about that, not about function values.
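
Schematically (a simplified rendering of mine, not CwCC's actual definitions), the strategy-annotated cut could be written as:

```haskell
-- Simplified sketch of a strategy-annotated cut ⟨e | T:s | α⟩: a producer
-- and a consumer meet at a common type T whose strategy s decides which
-- side drives reduction. Names and the type grammar are mine.

data Strategy = CBV | CBN   -- call-by-value vs call-by-name

data Ty
  = Base          -- some positive base type, for the sketch
  | Plus Ty Ty    -- A + B, positive
  | Par Ty Ty     -- A ⅋ B, negative
  | Arrow Ty Ty   -- A → B, negative

strategyOf :: Ty -> Strategy
strategyOf Base        = CBV
strategyOf (Plus _ _)  = CBV
strategyOf (Par _ _)   = CBN
strategyOf (Arrow _ _) = CBN

-- ⟨e | T:s | α⟩, with terms and coterms left abstract:
data Cut term coterm = Cut term Ty Strategy coterm
```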