The HoTT book defines the law of excluded middle as follows (see page 113):

$$\mathsf{LEM} :\equiv \prod_{A:\mathcal{U}} \bigl(\mathsf{isProp}(A) \to (A + \neg A)\bigr)$$
This means: every mere proposition is decidable. However, this is stronger than what I personally consider LEM to be, namely the statement that every mere proposition is merely true or false:

$$\mathsf{LEM}' :\equiv \prod_{A:\mathcal{U}} \bigl(\mathsf{isProp}(A) \to \lVert A + \neg A\rVert\bigr)$$
I haven't looked in detail at items 3 and 4 yet, but I can tell you that LEM and LEM' are equivalent. It would be a nice exercise to prove this yourself. (Hint: prove that $A + \neg A$ is a proposition when $A$ is.)
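For reference, here is the shape of the argument (standard HoTT-book reasoning, not spelled out in the message): the nontrivial direction is $\mathsf{LEM}' \to \mathsf{LEM}$, and it follows from the hint via the recursion principle of propositional truncation.

```latex
% Recursion principle of the propositional truncation:
% for any mere proposition B, a map C -> B induces ||C|| -> B.
\mathsf{rec}_{\lVert\cdot\rVert} : \mathsf{isProp}(B) \to (C \to B) \to \lVert C\rVert \to B
% Taking B :\equiv C :\equiv A + \neg A, the hint that A + \neg A is a
% proposition when A is lets the identity map be un-truncated:
\lVert A + \neg A\rVert \to (A + \neg A)
% so LEM' implies LEM; the converse direction is just post-composition
% with the truncation map |-|.
```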
RE Item 3: What if for every $x$ there are multiple (possibly infinitely many) $y$ such that $P(x, y)$ holds? You're not always in a position where you can/want to make a choice. I'll try to think of some concrete examples.
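To see why the number of witnesses matters (a standard comparison, not in the original message): with untruncated $\Sigma$, "choice" is simply provable, because the hypothesis already hands you a choice function, whereas with truncation it becomes the genuine axiom of choice.

```latex
% Provable ("type-theoretic choice"): an untruncated Sigma carries its witness.
\Bigl(\prod_{x:X}\sum_{y:Y} P(x,y)\Bigr) \;\to\; \sum_{f:X\to Y}\prod_{x:X} P(x, f(x))
% A genuine axiom (AC): merely-existing witnesses need not assemble
% into a choice function.
\Bigl(\prod_{x:X}\bigl\lVert \sum_{y:Y} P(x,y)\bigr\rVert\Bigr) \;\to\; \Bigl\lVert \sum_{f:X\to Y}\prod_{x:X} P(x, f(x))\Bigr\rVert
```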
RE Item 4: I'm not an expert on Cubical Agda, but it does support univalence and (most?) higher inductive types, certainly spheres. Since the witness in question is unique in each case, you can actually prove the formulation with an untruncated $\Sigma$, and Cubical Agda should be able to calculate it in each case (given enough time and memory). I don't think this has led to completely new results, though; someone else will know a lot more about that topic.
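The general principle implicitly used here is unique choice (standard in HoTT, not spelled out in the message): when the type of witnesses is a mere proposition, the truncation can be stripped off.

```latex
% Unique choice: if the total type of witnesses is a mere proposition
% (e.g. because the witness is unique and each P(y) is a proposition), then
\mathsf{isProp}\Bigl(\sum_{y:Y} P(y)\Bigr) \;\to\; \Bigl(\bigl\lVert \textstyle\sum_{y:Y} P(y)\bigr\rVert \to \sum_{y:Y} P(y)\Bigr)
```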
In item 2, it sounds like you're talking about something more like (for example) Lean's type theory, which has a separate `Prop` universe with no computational content, and instead calls what HoTT calls a mere proposition (at the `Type` level) a `subsingleton`, and propositional truncation `trunc` (constructed as the quotient by the always-true relation).
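A minimal Lean 4 sketch of that point (core Lean only, no mathlib; `myTrunc` is a hypothetical reimplementation shown just to make the construction explicit, though Lean's own `Trunc` is built the same way):

```lean
universe u

-- `Prop` is Lean's proof-irrelevant universe; `Subsingleton α` says any two
-- inhabitants of `α` are equal, playing the role of a "mere proposition".
example (p : Prop) : Subsingleton p := ⟨fun _ _ => rfl⟩

-- Truncation in `Type`: the quotient of `α` by the always-true relation.
def myTrunc (α : Sort u) : Sort u :=
  Quot (fun _ _ : α => True)

def myTrunc.mk {α : Sort u} (a : α) : myTrunc α :=
  Quot.mk _ a

-- Any two elements of the truncation are equal, so `myTrunc α` is a
-- subsingleton no matter how many elements `α` has.
theorem myTrunc.irrel {α : Sort u} (a b : myTrunc α) : a = b := by
  induction a using Quot.ind
  induction b using Quot.ind
  exact Quot.sound trivial
```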
For item 4, also see https://thehighergeometer.wordpress.com/2018/06/29/trying-to-compute-bruneries-number/
There's a whole zoo of notions of irrelevance, and I don't think there's any alternative to studying each one carefully. There are h-props, as in HoTT, which say that all inhabitants are substitutable for each other. Then there are s-props (strict propositions), as in Lean's and Agda's `Prop` and Coq's `SProp`, which have all open inhabitants identified, and which only interact with computation by dismissing impossible cases. Additionally, there is erasure, where types can genuinely have multiple distinct closed inhabitants, but there is a guarantee that no information from them is needed when computing with closed terms (i.e., in a compiled program). If you want to preserve computational meaning, you first have to decide which notion of computation (open or closed) to preserve.
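To illustrate the "all inhabitants identified" behaviour of a strict, definitionally proof-irrelevant universe, here is a tiny Lean 4 example (Lean's `Prop` has definitional proof irrelevance, so `rfl` closes the goal):

```lean
-- Definitional proof irrelevance: any two proofs of the same proposition
-- are judgmentally equal, so `rfl` suffices; no axiom is needed.
example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```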
Item 4 is one of the open problems in HoTT. In principle, we know that cubical type theories can compute the "Brunerie number", since it's been proven that they satisfy canonicity (and even normalization). But no one has yet managed to implement a cubical type theory and actually perform the computation within available time and memory.
Leopold Schlicht said:
- Is there another framework in which classical logic and constructive logic can coexist in the sense of question 2 (so that one can use LEM at the "mere" layer without destroying executability of proofs at the "pure" layer)?
One answer is that double negation is a modality. Thus, you can always use LEM while "reasoning in that modality". Of course, the resulting logic won't interact with types quite the way you're used to; e.g. it won't satisfy function comprehension.
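A small illustration of "LEM holds under the double-negation modality": the double negation of excluded middle is provable purely constructively. A sketch in Lean 4 (no classical axioms are used):

```lean
-- ¬¬(p ∨ ¬p) is an intuitionistic theorem: inside the double-negation
-- modality, excluded middle is available "for free".
example (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))
```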
Re 3, it happens often that we want truncated logic when defining subobjects of a given object. For instance, in defining the Dedekind reals we would get the wrong answer if we formulated the axioms of a cut using untruncated logic. Similarly, the classifying space $B\,\mathrm{Aut}(X)$ can be defined as $\sum_{Y:\mathcal{U}} \lVert Y = X\rVert$, whereas the untruncated version $\sum_{Y:\mathcal{U}} (Y = X)$ is contractible.
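To spell out the last claim (standard facts; univalence is only needed to justify the name "classifying space"): singleton contractibility collapses the untruncated version, while the truncated one is the connected component of $X$ in the universe.

```latex
% Singleton contractibility: the untruncated "component" of X is a singleton,
% with centre (X, refl_X):
\mathsf{isContr}\Bigl(\sum_{Y:\mathcal{U}} (Y = X)\Bigr)
% The truncated version is the connected component of X in the universe,
% which (with univalence) deserves the name "classifying space":
B\,\mathrm{Aut}(X) \;:\equiv\; \sum_{Y:\mathcal{U}} \lVert Y = X\rVert
```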
Another place where truncation is necessary is in formulating nonclassical continuity principles, although that's roughly similar to its usage in LEM and AC.
Re 3 again, and similar to Mike Shulman's point above: assuming univalence, the circle $S^1$ satisfies $\prod_{x,y:S^1} \lVert x = y\rVert$ but not $\prod_{x,y:S^1} (x = y)$ (the untruncated version would make $S^1$ a mere proposition, contradicting $\Omega(S^1) \simeq \mathbb{Z}$).
Thanks for all the answers! :-)