You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
For example, why do we care if every term of ℕ is syntactically either of the form 0 or succ(n) for some n, when we can easily prove by induction that every n : ℕ is propositionally equal to 0 or to succ(m) for some m? I know that this property is important for computational reasons, but I don't know what these reasons are. (cross-posted to the HoTT Zulip)
To quote folklore, canonicity allows equality of terms, and hence type checking, to be decidable; however, there are practical type theories that lack decidability of type checking, such as NuPRL and other extensional type theories. Those have their own problems, such as not being able to normalize under binders, but naively it doesn't seem like those problems are directly due to lack of canonicity. Spending a lot of time programming in Coq makes it obvious why axioms are 'less good' than theorems, but I'm hoping people can speak to other reasons.
What is the difference between √2 and 1.41421356237..., where by "1.41421356237..." I refer to the output of a program that accepts a natural number n and outputs the first n digits of √2 (up to a sufficiently large value of n, at which point the computer crashes).
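A minimal sketch of the kind of program being described (the code is mine, not from the thread): given n, compute the first n digits of √2 exactly, using arbitrary-precision integer arithmetic rather than floating point.

```haskell
-- Integer square root by Newton's method on arbitrary-precision Integers.
isqrt :: Integer -> Integer
isqrt 0 = 0
isqrt n = go n
  where
    go x =
      let x' = (x + n `div` x) `div` 2
      in  if x' >= x then x else go x'

-- First n decimal digits of sqrt 2 (including the leading "1"), obtained as
-- floor(sqrt 2 * 10^(n-1)) with a decimal point inserted after the first digit.
sqrt2Digits :: Int -> String
sqrt2Digits n =
  let k = isqrt (2 * 10 ^ (2 * (n - 1)))
      s = show k
  in  head s : '.' : tail s

main :: IO ()
main = putStrLn (sqrt2Digits 12)  -- prints "1.41421356237"
```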
There are many features of the string "1.41421356237..." that make it more tangible or concrete somehow than √2.
For example, "1.414" is obviously greater than 1.4 and obviously less than 1.5. This can be seen at a moment's glance. It is not obvious without doing some math that is greater than 1.4.
I hope you would agree that "What is the value of the expression …" (for some complicated arithmetic expression) is a sensible question whose answer is not immediately obvious, and "What is the value of the expression 139394" is perhaps sensible, but its answer is immediately obvious - i.e., it is 139394.
That is, I personally draw some kind of philosophical distinction between expressions and values. An expression denotes a value but it is not itself necessarily a value; we can compute an expression to determine its value. In the case of real numbers we cannot compute values in a finite amount of time, but I can have an algorithm which, given a rational ε > 0, gives me an open interval (a, b) with a and b rational, √2 ∈ (a, b), and b - a < ε.
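A sketch of the kind of interval algorithm being alluded to (code mine; it simply bisects starting from the interval (1, 2)): given a rational ε > 0, it returns rationals a < √2 < b with b - a < ε.

```haskell
-- Given a rational eps > 0, return rationals (a, b) with a < sqrt 2 < b and
-- b - a < eps, by repeatedly bisecting the starting interval (1, 2).
approxSqrt2 :: Rational -> (Rational, Rational)
approxSqrt2 eps = go 1 2
  where
    go a b
      | b - a < eps    = (a, b)
      | mid * mid <= 2 = go mid b   -- mid^2 < 2 (never equal: sqrt 2 is irrational)
      | otherwise      = go a mid   -- mid^2 > 2, so sqrt 2 lies in (a, mid)
      where
        mid = (a + b) / 2
```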
In both classical and constructive mathematics, there is a distinction we can draw between expressions and values. The difference between classical and constructive mathematics is that classical mathematics admits expressions to which we cannot associate a value in practice (even though platonically they denote values), whereas in constructive mathematics we can compute a value from every expression.
If you think of a closed term of type ℕ as an expression denoting a value, and a numeral as a value properly speaking, then canonicity implies that "every expression which denotes a natural number can be reduced to the value of that natural number", which can be interpreted as a criterion for the constructivity of the system. In classical mathematics we can give abstract descriptions of natural numbers and prove they are unique, but not necessarily denote them by numerals, so this is one sense in which classical mathematics fails to be constructive.
For example if you take the result here -
https://homotopytypetheory.org/2022/06/09/the-brunerie-number-is-2/
when we say "we have computed this number and it is -2", what that means is that we proved that the group π₄(S³) is cyclic of order n for some n, and from that we can talk about "the n such that π₄(S³) is cyclic of order n", and that is an expression. Normalizing that expression yields a value, which in this case is -2. In classical mathematics it would not follow immediately that we can know the value of that n just because we know "there exists n such that π₄(S³) is cyclic of order n", but because of canonicity we know that by repeatedly beta-reducing the expression "the n such that π₄(S³) is cyclic of order n" we will eventually get a numeral.
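As a toy illustration of this "closed expressions compute to numerals" reading of canonicity (the little expression language and evaluator below are made up for illustration, not taken from the thread):

```haskell
-- Closed expressions denoting natural numbers.
data Expr = Zero | Suc Expr | Plus Expr Expr

-- Values: the numerals zero, suc zero, suc (suc zero), ..., represented here
-- by how many successors they consist of.
newtype Numeral = Numeral Integer
  deriving Show

-- Every closed expression can be computed down to a numeral; this is the
-- analogue of canonicity for this toy language.
eval :: Expr -> Numeral
eval Zero       = Numeral 0
eval (Suc e)    = case eval e of Numeral n -> Numeral (n + 1)
eval (Plus a b) = case (eval a, eval b) of
                    (Numeral m, Numeral n) -> Numeral (m + n)
```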
Ryan Wisnesky said:
To quote folklore, canonicity allows equality of terms, and hence type checking, to be decidable...
Technically speaking that's normalization, not canonicity. Canonicity (for natural numbers) says that every term of type Nat that's "closed" (i.e. defined in the empty context) is definitionally equal to a numeral. Normalization says that in any context, any term of any type is definitionally equal to a special kind of term called a normal form, such that if two terms are definitionally equal then their normal forms are identical.
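Spelled out schematically (this is just a restatement of the two properties above, in my notation, writing ≡ for definitional equality and nf(t) for the normal form of t):

$$\text{Canonicity:}\quad \vdash t : \mathbb{N} \;\Longrightarrow\; \vdash t \equiv \mathsf{succ}^{\,n}(\mathsf{zero}) : \mathbb{N} \ \text{ for some } n.$$

$$\text{Normalization:}\quad \Gamma \vdash t : A \;\Longrightarrow\; \Gamma \vdash t \equiv \mathrm{nf}(t) : A, \qquad\text{and}\qquad t \equiv u \;\Longrightarrow\; \mathrm{nf}(t) = \mathrm{nf}(u).$$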
Normalization is an important ingredient in a typechecking algorithm for dependent type theory, since it gives us an algorithm to test whether two terms are definitionally equal: compute their normal forms and check whether they're identical. I'm not an expert on NuPRL, but I'm dubious of any claim that anyone's actually ever implemented any type theory that doesn't have decidable type-checking; it seems to me it's more likely that they specified a fragment of the theory that does have decidable typechecking and implemented that.
We usually think of normalization as a stronger property than canonicity, because as long as the normal forms of type Nat are the numerals, normalization implies canonicity. However, there are type theories with normalization but not canonicity, e.g. just add any axiom to a more familiar type theory. Since adding an axiom is equivalent to working in a nonempty context, it doesn't break normalization, but it sure can break canonicity.
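A small illustration of that last point in Lean 4 syntax (the example is mine, not from the thread): a closed natural-number term computes to a numeral, but a well-typed term built from a postulated axiom is stuck in a non-numeral normal form, so canonicity fails while normalization and type-checking are untouched.

```lean
-- A closed term of type Nat reduces all the way to a numeral.
#reduce 2 + 2          -- 4

-- Postulating an axiom is like working in a context with an extra variable.
axiom b : Bool

-- This definition type-checks and its body is already in normal form, but it
-- is not a numeral: the `cond` is stuck on the axiom `b`.
noncomputable def stuck : Nat := cond b 0 1
```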
My short answer to the original question is that only with canonicity does it make sense to think of a type theory as being a programming language. If you write a function in, say, C++, of type int, and run it, you expect to actually get out a concrete integer.
In general, many type theories do not satisfy canonicity, such as a type theory with function extensionality or univalence assumed as axioms. Furthermore, there exist type theories, such as objective type theory, for which it is simply impossible to even express the notion of canonicity - because the theories don't even have a judgmental equality.
More important in these cases would be homotopy canonicity, where every term in the empty context is propositionally equal to a canonical term. Kapulkin and Sattler have proven, for example, that the type theory in the 2013 HoTT textbook satisfies homotopy canonicity, despite it not satisfying canonicity. Homotopy canonicity suffices for most purposes, because one can still get [types or terms indexed by] canonical terms via transport and application (ap) across propositional equalities to the canonical terms in the natural numbers.
Canonicity itself is relevant for a different reason: having every term in the empty context judgmentally reduce down to a canonical term makes the resulting syntax and proofs simpler to deal with in the language - you don't have to constantly transport or ap across propositional equalities in the natural numbers to get [something indexed by] the canonical term if your theory has judgmental equality and canonicity; everything just reduces.
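A concrete little Lean 4 example of that convenience (mine, not from the thread): when closed terms compute judgmentally, propositional equalities hold by `rfl`, whereas a postulated axiom forces you back onto explicit propositional lemmas.

```lean
-- With judgmental computation, closed arithmetic holds by `rfl`:
example : 2 + 2 = 4 := rfl

-- A postulated natural number behaves like a variable in the context.
axiom k : Nat

-- `k + 0` still reduces, because `Nat.add` recurses on its second argument:
example : k + 0 = k := rfl

-- ... but `0 + k` does not reduce, so we need a propositional lemma and, in
-- more complicated situations, transports along it:
example : 0 + k = k := Nat.zero_add k
```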
It's the same reason why somebody might want to define function extensionality or univalent universes using equivalences with judgmental left-inverse and right-inverse equalities. One doesn't have to deal with either transport or ap across the left-inverse and right-inverse family of propositional equalities for weak equivalences, or the coherence datum of weak equivalences, because taking the inverse function of the equivalence evaluated at the term just reduces to the original term, making many proofs easier.
It's not really clear to me that homotopy canonicity is useful for anything in its own right. It's interesting metatheoretically as an argument that some axiom like funext or univalence is "morally constructive", but to do anything with canonicity in practice one needs a reduction algorithm.
Joshua Meyers said:
For example, why do we care if every term of ℕ is syntactically either of the form 0 or succ(n) for some n
By the way, just as a point of terminology, I would tend to use "syntactically" to refer to an equality that is finer than definitional equality, such as mere α-equivalence, or maybe even strict identity of terms with named variables. Canonicity says that every term is "definitionally" or "judgmentally" equal to a numeral, but not "syntactically" in this sense, e.g. I would say that a term like 2 + 2 is syntactically distinct from the numeral 4 even though they are definitionally/judgmentally equal.
Mike Shulman said:
Ryan Wisnesky said:
To quote folklore, canonicity allows equality of terms, and hence type checking, to be decidable...
Technically speaking that's normalization, not canonicity. Canonicity (for natural numbers) says that every term of type Nat that's "closed" (i.e. defined in the empty context) is definitionally equal to a numeral. Normalization says that in any context, any term of any type is definitionally equal to a special kind of term called a normal form, such that if two terms are definitionally equal then their normal forms are identical.
Normalization is an important ingredient in a typechecking algorithm for dependent type theory, since it gives us an algorithm to test whether two terms are definitionally equal: compute their normal forms and check whether they're identical. I'm not an expert on NuPRL, but I'm dubious of any claim that anyone's actually ever implemented any type theory that doesn't have decidable type-checking; it seems to me it's more likely that they specified a fragment of the theory that does have decidable typechecking and implemented that.
We usually think of normalization as a stronger property than canonicity, because as long as the normal forms of type Nat are the numerals, normalization implies canonicity. However, there are type theories with normalization but not canonicity, e.g. just add any axiom to a more familiar type theory. Since adding an axiom is equivalent to working in a nonempty context, it doesn't break normalization, but it sure can break canonicity.
My first thought is that normalization seems too strong -- if what we really want is for testing whether two terms are judgmentally equal to be efficiently computable, normalization is clearly sufficient but is it necessary?
A second thought: if we have some crazy non-canonical term, isn't it easy to tell that it is not judgementally equal to, say, 2? I mean, perhaps we could do some work to evaluate it and ultimately get 2, but in that case, it would be equal to 2 propositionally, not judgmentally.
I want to learn more about these matters. What should I read? (Also see here for the kind of references I am looking for.)
Mike Shulman said:
My short answer to the original question is that only with canonicity does it make sense to think of a type theory as being a programming language. If you write a function in, say, C++, of type int, and run it, you expect to actually get out a concrete integer.
Right, but why should we necessarily care about type theory being a programming language if we are using it for mathematical foundations or formal verification? If I want to construct and reason about a function I don't see why it should have to output a concrete integer.
"I'm dubious of any claim that anyone's actually ever implemented any type theory that doesn't have decidable type-checking"
Why is decidable type-checking important? In ordinary (proof-irrelevant) math we will often write things like "consider the group (G, ∗)" --- to check that this is well-typed we must check whether the given structure satisfies the group axioms, which I don't think is decidable. The reader is expected to check the group axioms, and a conscientious author will give them hints to do so if it is not straightforward -- they might include a parenthetical such as "(to check associativity consider _____)", or a citation such as "this group has been described in more detail in \cite{...}". Why would we be so averse to setting a similar expectation for a compiler?
If we have some crazy non-canonical term, isn't it easy to tell that it is not definitionally equal to, say, 2? I mean, perhaps we could do some work to evaluate it and ultimately get 2, but in that case, it would be equal to 2 judgmentally, not definitionally.
I don't know what you mean here. There are a lot of people who use "judgemental equality" and "definitional equality" synonymously so if you are distinguishing between those you have to clarify
Sorry! Just fixed
Joshua Meyers said:
Mike Shulman said:
My short answer to the original question is that only with canonicity does it make sense to think of a type theory as being a programming language. If you write a function in, say, C++, of type int, and run it, you expect to actually get out a concrete integer.
Right, but why should we necessarily care about type theory being a programming language if we are using it for mathematical foundations or formal verification? If I want to construct and reason about a function I don't see why it should have to output a concrete integer.
This is just fundamentally what constructive mathematics is about imo. Not all your math has to be constructive, but to the extent that you want to do any kind of constructive mathematics whatsoever, you have to have the ability to write programs which can compute and evaluate. If you genuinely don't care about constructivity whatsoever that's also fine, but even systems based on classical logic have directed rewriting automation tools which repeatedly rewrite and simplify according to a set of formulas, for example replacing a term by its definition, making chains of additions left-associative, and so on; some kind of computation, in this sense of automated directed rewriting, is still necessary.
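A toy example of that kind of directed rewriting (the code is mine): repeatedly apply the rewrite rule a + (b + c) ⟶ (a + b) + c until every chain of additions is left-associated.

```haskell
-- Arithmetic expressions.
data E = Lit Integer | Add E E
  deriving Show

-- Apply the rule a + (b + c) ~> (a + b) + c at the root as long as it fires,
-- and recurse into subterms, so the result has no right-nested additions.
leftAssoc :: E -> E
leftAssoc (Add a (Add b c)) = leftAssoc (Add (Add a b) c)
leftAssoc (Add a b)         = Add (leftAssoc a) (leftAssoc b)
leftAssoc e                 = e
```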
Joshua Meyers said:
"I'm dubious of any claim that anyone's actually ever implemented any type theory that doesn't have decidable type-checking"
Why is decidable type-checking important? In ordinary (proof-irrelevant) math we will often write things like "consider the group (G, ∗)" --- to check that this is well-typed we must check whether the given structure satisfies the group axioms, which I don't think is decidable. The reader is expected to check the group axioms, and a conscientious author will give them hints to do so if it is not straightforward -- they might include a parenthetical such as "(to check associativity consider _____)", or a citation such as "this group has been described in more detail in \cite{...}". Why would we be so averse to setting a similar expectation for a compiler?
It sounds like you're saying we should call a powerful automated theorem prover to resolve typing issues, which has been done, for example, here: https://arxiv.org/abs/2305.15382
But more generally: it should be obvious that this is a hard problem, right? You're essentially suggesting "why don't we just let the computer take care of the easy parts, like checking the group axioms?" This is what we've been trying to do with interactive theorem provers for decades, and we're getting there slowly, but "easy" is in the eye of the beholder.
Checking that a given group structure satisfies the axioms is not part of type-checking, that's proof search. Type-checking is (among other things) checking that a given proof that a given group structure satisfies the axioms is correct.
The whole point of formal verification is to, well, verify the correctness of a proof. And that's part of typechecking, so you really have to have a typechecking algorithm. The only wiggle room I can see is that maybe it's only semidecidable, i.e. your algorithm might spin forever on some inputs without deciding whether the proof is correct or not. But the more common that is in practice, the less useful your proof assistant will be.
It's true that in theory, one might have an equality-checking algorithm that doesn't proceed by normalization. In fact, technically this is the case in many real-world proof assistants, which use a type-directed equality-checking algorithm that sort of normalizes each term bit by bit, checking equality as it goes, rather than normalizing both terms all the way and then afterwards checking whether they're equal. However, in practice I'm not aware of any equality-checking algorithm that isn't essentially a normalization algorithm at heart.
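For flavor, here is the naive "normalize both sides and compare" approach in miniature, for the untyped lambda calculus with de Bruijn indices (the sketch is mine; as noted above, real proof assistants use smarter, type-directed algorithms):

```haskell
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- shift d c t: add d to every variable in t whose index is >= the cutoff c.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- subst j s t: replace variable j by s in t, removing the binder for j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)
  | k == j    = s
  | k >  j    = Var (k - 1)
  | otherwise = Var k
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- Full beta-normalization (may loop forever on terms with no normal form).
normalize :: Term -> Term
normalize (Var k)   = Var k
normalize (Lam b)   = Lam (normalize b)
normalize (App f a) =
  case normalize f of
    Lam b -> normalize (subst 0 a b)   -- a beta redex: contract it
    f'    -> App f' (normalize a)      -- a stuck (neutral) application

-- Definitional (here: beta) equality, decided by comparing normal forms;
-- de Bruijn indices make alpha-equivalence literal equality.
defEq :: Term -> Term -> Bool
defEq t u = normalize t == normalize u
```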
Joshua Meyers said:
Right, but why should we necessarily care about type theory being a programming language if we are using it for mathematical foundations or formal verification? If I want to construct and reason about a function I don't see why it should have to output a concrete integer.
You didn't specify in your original question what your context was. Being a programming language is indeed not directly important for foundations or verification. In particular, therefore, people are free to assume axioms (like excluded middle and choice) when doing foundations and verification, which break canonicity. But importantly, they don't break normalization or decidability of typechecking. So from that pure perspective, canonicity is perhaps not important on its own, except as a signal that a theory being developed seems computational and it's worth the effort of trying to find a normalization algorithm.
Joshua Meyers said:
A second thought: if we have some crazy non-canonical term, isn't it easy to tell that it is not judgementally equal to, say, 2? I mean, perhaps we could do some work to evaluate it and ultimately get 2, but in that case, it would be equal to 2 propositionally, not judgmentally.
I still can't figure out what you mean here. If you just have some random type theory, then no, it's not easy to check whether two terms are judgmentally equal, since judgmental equality is defined as a transitive closure, so they could in principle be connected by some long string of equalities that go through a bunch of other apparently unrelated terms. Having a normalization/canonicity algorithm solves this problem by saying there is a specific sequence of equalities that's enough to check.
Mike Shulman said:
Joshua Meyers said:
Right, but why should we necessarily care about type theory being a programming language if we are using it for mathematical foundations or formal verification? If I want to construct and reason about a function I don't see why it should have to output a concrete integer.
You didn't specify in your original question what your context was. Being a programming language is indeed not directly important for foundations or verification. In particular, therefore, people are free to assume axioms (like excluded middle and choice) when doing foundations and verification, which break canonicity. But importantly, they don't break normalization or decidability of typechecking. So from that pure perspective, canonicity is perhaps not important on its own, except as a signal that a theory being developed seems computational and it's worth the effort of trying to find a normalization algorithm.
Why wouldn't these axioms break normalization? Maybe it would help if I could see a definition of "normal form" in a given context.
Adding an axiom of type A, say, doesn't break normalization because it's essentially equivalent to just always working in an extended context with an extra variable of type A, and normalization is a statement about all contexts.
In particular, a variable appearing in the context is always a normal form in that context.
Intuitively, a normal form is a term that contains no "redexes", subterms to which reduction rules could be applied, such as (λx. t) s, which reduces to t[s/x]. Formally, normal forms are defined inductively and mutually with a subclass of themselves called "neutral terms" rather than defined by this "negative" property.
In particular, if we have an axiom or variable of type Bool, i.e. some b : Bool in the context, then a term like "if b then s else t" is a normal form (as long as the branches s and t are): the "if" is only a redex if its argument is syntactically of the form true or false.
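Concretely, the usual mutually inductive grammar of normal and neutral terms for a toy language with booleans looks something like this (names mine); it makes the "stuck if" case explicit:

```haskell
-- Neutral terms: a variable (or axiom) blocking further reduction,
-- possibly under a stack of eliminators.
data Neutral
  = NVar String                  -- a variable or postulated axiom
  | NApp Neutral Normal          -- a stuck application
  | NIf  Neutral Normal Normal   -- a stuck `if`: the scrutinee is neutral,
                                 -- not syntactically `true` or `false`

-- Normal forms: either neutral, or a constructor/lambda whose parts are normal.
data Normal
  = Neu  Neutral
  | NLam String Normal
  | NTrue
  | NFalse
```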
Mike Shulman said:
Intuitively, a normal form is a term that contains no "redexes", subterms to which reduction rules could be applied, such as (λx. t) s, which reduces to t[s/x]. Formally, normal forms are defined inductively and mutually with a subclass of themselves called "neutral terms" rather than defined by this "negative" property.
Do you know a good reference to see how normal forms are defined for some type theory?
here is a recent result that gives decidability for the lambda calculus corresponding to bi-cartesian closed categories, proceeding via normal forms https://arxiv.org/abs/1610.01213
Here's an expository note describing a "normalization by evaluation" algorithm, including the definition of normal and neutral forms, and its connection to category-theoretic "gluing": https://arxiv.org/abs/1809.08646
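For a sense of the shape of such an algorithm, here is a compact normalization-by-evaluation sketch for closed untyped lambda terms (my code, following the general pattern rather than the note's exact definitions): syntax is evaluated into a semantic domain that uses actual Haskell functions, and semantic values are then "quoted" back into normal forms.

```haskell
data Term = Var Int | Lam Term | App Term Term   -- de Bruijn indices
  deriving Show

data Value   = VLam (Value -> Value)             -- semantic functions
             | VNeutral Neutral
data Neutral = NVar Int | NApp Neutral Value     -- de Bruijn levels

type Env = [Value]   -- the i-th entry interprets de Bruijn index i

eval :: Env -> Term -> Value
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam (\v -> eval (v : env) b)
eval env (App f a) = apply (eval env f) (eval env a)

apply :: Value -> Value -> Value
apply (VLam f)     a = f a                       -- beta happens in the semantics
apply (VNeutral n) a = VNeutral (NApp n a)       -- stuck application

-- Read a value back into a beta-normal term; d counts the binders passed so
-- far and converts levels back into indices.
quote :: Int -> Value -> Term
quote d (VLam f)     = Lam (quote (d + 1) (f (VNeutral (NVar d))))
quote d (VNeutral n) = quoteNe n
  where
    quoteNe (NVar lvl) = Var (d - lvl - 1)
    quoteNe (NApp m v) = App (quoteNe m) (quote d v)

normalize :: Term -> Term
normalize t = quote 0 (eval [] t)   -- closed terms only
```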
Mike Shulman said:
I'm dubious of any claim that anyone's actually ever implemented any type theory that doesn't have decidable type-checking[.]
This would require checking, but I think the PML prover has semi-decidable type checking.
Quoting from Rodolphe Lepigre's home page:
Type checking (verifying that a given term inhabits a given type) and type inference (finding a type that is inhabited by a given term) tend to be undecidable in Curry-style languages like System F or PML₂. As a consequence, these system are sometimes considered impractical, although practicality and decidability are two different problems. The main issue with Curry-style languages is that their type systems are generally not syntax-directed, meaning that they cannot be easily implemented. In particular, there is no canonical way of deciding what typing rule should be applied first when attempting to prove a typing judgment. To solve this problem, we designed (with Christophe Raffalli) a framework based on subtyping [LepRaf2019]. The main, innovating idea is to use a ternary relation t ∈ A ⊆ B instead of the usual binary relation A ⊆ B. We interpret the former as the implication “if the term t has type A, then it also has type B”, while the latter is interpreted as the inclusion “every element of type A is an element of type B”. In the obtained system, only one typing rule applies for every term constructor, and only one subtyping rule applies for every pair of types (up to commutation). In particular, the connectives that do not have algorithmic contents (those that are not reflected in the syntax of the terms) are handled using subtyping exclusively. Such connectives include the quantifiers, but also the equality types of PML₂ and the least and greatest fixpoint constructors used by inductive and coinductive types.
It's not entirely clear whether this means that type-checking in PML is syntax-directed or decidable, so I'm relying on memories of discussions with Rodolphe here. Maybe @Pierre Hyvernat would know better?
Also, @Greta Coraglia and @Jacopo Emmenegger might be interested in this, since the ternary judgement t ∈ A ⊆ B looks related to their work on the semantics of subtyping.
I had a longish conversation about decidability of typechecking with various folks on the nCafe here. The upshot was that I ended up getting convinced that, as I mentioned above, semidecidability is enough for an implementation. Although I do think that, as I also said above, in practice you want things to terminate as much as possible.
Personally, I think I still prefer a proof assistant that distinguishes between a "core language" that has decidable typechecking and all the extra bells and whistles that might not. I like knowing that if I wanted to, I could write out a term in complete explicit gory detail and be certain that it would typecheck in finite time.
Although, of course, as was also pointed out in that thread, in practice decidability isn't much use without efficiency. Knowing that something will theoretically terminate isn't much help if that termination requires more time or space than is available in the observable universe.
Sure, I didn't mean to claim any preference whatsoever, only to point out a proof assistant with semidecidable typechecking.