Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: type theory

Topic: model of univalence


view this post on Zulip Leopold Schlicht (May 04 2021 at 18:56):

Two quotes from the HoTT book:

"On the one hand, there is Voevodsky’s subtle and beautiful univalence axiom. The univalence axiom implies, in particular, that isomorphic structures can be identified, a principle that mathematicians have been happily using on workdays, despite its incompatibility with the “official” doctrines of conventional foundations."
"The short answer is that all of the constructions and axioms considered in this book have a model in the category of Kan complexes, due to Voevodsky [KLV12] (see [LS17] for higher inductive types)."

Isn't it a bit odd that one needs to search really hard in areas as advanced as algebraic topology to find a model justifying a principle that mathematicians are already using in day-to-day work? After all, probably most mathematicians are thinking about set-based mathematics when applying this principle, with "set" meaning "set" rather than "Kan complex". :-) So shouldn't the principle also be true in set-based mathematics (in some form)?

view this post on Zulip Fawzi Hreiki (May 04 2021 at 19:21):

I think the idea is that such models of HoTT are as complex as they are because they’re being modelled in ZFC (or something like it) so you need to build in all the higher homotopies ‘by hand’ on top of the abstract sets.

view this post on Zulip Fawzi Hreiki (May 04 2021 at 19:44):

Also, that comment is more so about the principle of equivalence rather than the full univalence axiom in its formal guise. It just means that mathematicians really only care about things up to the correct notion of equivalence and so act as if equivalent things are equal (even if formally they know that to not be the case in set theoretic foundations).

view this post on Zulip Mike Shulman (May 04 2021 at 20:38):

There are a lot of different ways to answer this question. One is that yes, it's odd -- and that this says more about ZFC than about HoTT! After all, if a principle is natural in everyday mathematics, isn't it more natural to use a foundational system in which it's true? In traditional set-based foundations we have to work hard to set up a context in which this principle becomes true, but that tells us more about the unnaturality of set-based foundations.

view this post on Zulip Mike Shulman (May 04 2021 at 20:39):

(I think this is similar to what Fawzi said.)

view this post on Zulip Mike Shulman (May 04 2021 at 20:41):

Another answer is that you don't need things like homotopy theory and Kan complexes if you only want to apply univalence to isomorphism of set-based structures such as groups, rings, fields, etc. For that purpose, it suffices to use a version of HoTT in which only the universe of sets is univalent, and that theory can be modeled using plain ordinary groupoids (as shown by Hofmann and Streicher in 1998). It's only when you want to start applying univalence to equivalence of categorical structures that you need higher categorical models (e.g. to model a univalent universe of 1-types you need a model in some kind of 2-groupoid), leading in the limit to homotopy theory.
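To spell the set-level case out (a standard way to state it; $\mathcal{U}_{\mathrm{Set}}$ is just ad hoc notation here for a univalent universe of sets): for sets $A$ and $B$, univalence for that universe says that the canonical map sending an identification to an isomorphism is an equivalence,

$$(A =_{\mathcal{U}_{\mathrm{Set}}} B) \;\simeq\; (A \cong B),$$

and this is the form of the axiom that the groupoid model validates.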

view this post on Zulip Mike Shulman (May 04 2021 at 20:46):

In addition, there are senses in which the principle is "true in set-based mathematics". It's certainly not literally true in set theory, since we can make statements about a group $G$, like $\emptyset \in G$ or $G = \mathbb{Z}$, that can be true about some group but not about an isomorphic one. But there are set-like theories in which the principle is "true" as a metatheorem, such as dependent type theory without any equality predicate on the universe. The most intuitive (in my opinion) reason for such metatheorems is that the theory would remain consistent if you add univalence -- but you can also give direct syntactic proofs. However, a metatheorem is significantly less convenient to use in practice than an actual theorem, and doesn't yield any of the additional benefits of univalence.

view this post on Zulip John Baez (May 04 2021 at 22:58):

However, a metatheorem is significantly less convenient to use in practice than an actual theorem....

I always wondered what exactly was the difference between a theorem and a metatheorem. :upside_down:

view this post on Zulip Leopold Schlicht (May 05 2021 at 12:56):

Thanks!
Mike Shulman said:

It's certainly not literally true in set theory, since we can make statements about a group $G$, like $\emptyset \in G$ or $G = \mathbb{Z}$, that can be true about some group but not about an isomorphic one.

Of course (as you know), this only means that the principle isn't true in material set theory.
What's wrong with adding the univalence principle to a more structural set theory like type theory (not treating it as a metatheorem!)?
I just don't get why spaces should be so essential to univalence. :-)

view this post on Zulip Leopold Schlicht (May 05 2021 at 13:16):

Okay, directly adding it as an axiom rather than a metatheorem doesn't work because for that one needs an equality predicate on the universe.

view this post on Zulip Nick Hu (May 05 2021 at 13:20):

John Baez said:

However, a metatheorem is significantly less convenient to use in practice than an actual theorem....

I always wondered what exactly was the difference between a theorem and a metatheorem. :upside_down:

My understanding is something like this: A metatheorem is like a C macro, but for mathematics. Theorems in a formal system have to be finitely long, but some weird corners (particularly of set theory, e.g. Gödel encoding etc.) end up making statements whose proofs would have to be infinitely long in full generality. However, each instance of such a theorem is specific enough to need only a finite proof. This is resolved by having a metatheorem, i.e. a device to generate finite-length instances of the theorem at whatever use-site you want.

view this post on Zulip Leopold Schlicht (May 05 2021 at 13:25):

Maybe it doesn't explain all aspects of the term "metatheorem", but one analogy might be that a metatheorem is to a theorem what an axiom schema is to an axiom.
A metatheorem might also be something like a "derivable" inference rule.
This makes me think: What's wrong with "univalence" as a metatheorem? After all, nothing is wrong with inference rules!

view this post on Zulip Nick Hu (May 05 2021 at 13:28):

I think there are two main considerations:

  1. people trying to make sense of univalence in ZFC models - if you add it as an axiom, it's not really obvious what the model is
  2. people who think type theory should be constructive - univalence ought to have some computational meaning (and it is actually reasonably intuitive what this means), but adding it as an axiom does not resolve this. Cubical type theory however does

view this post on Zulip Nick Hu (May 05 2021 at 13:31):

Leopold Schlicht said:

Maybe it doesn't explain all aspects of the term "metatheorem", but one analogy might be that a metatheorem is to a theorem what an axiom schema is to an axiom.
A metatheorem might also be something like a "derivable" inference rule.
This makes me think: What's wrong with "univalence" as a metatheorem? After all, nothing is wrong with inference rules!

I don't think what you've said is correct. Comprehension in ZFC is an axiom schema, and I don't think anyone would call it 'meta'. 'Meta' is about using external reasoning outside of a formal system to derive facts about it

view this post on Zulip Mike Shulman (May 05 2021 at 13:36):

Leopold Schlicht said:

It's certainly not literally true in set theory, since we can make statements about a group $G$, like $\emptyset \in G$ or $G = \mathbb{Z}$, that can be true about some group but not about an isomorphic one.

Of course (as you know), this only means that the principle isn't true in material set theory.

The example of $\emptyset \in G$ is specific to material set theory, but $G = \mathbb{Z}$ is not. As soon as you have a universe in a structural set theory, meaning a family of sets indexed by another set, then there is an equality relation on that family of sets, and you can make statements like $G = \mathbb{Z}$ that are not invariant under isomorphism.

Univalence only makes sense in type theory, where the "equality relation" on a type can be another type rather than a mere relation.

view this post on Zulip Mike Shulman (May 05 2021 at 13:36):

I just don't get why spaces should be so essential to univalence.

Would you get it if we said "\infty-groupoids" instead of "spaces"?

view this post on Zulip Steve Awodey (May 05 2021 at 14:48):

One familiar case that is true in Sets is the propositional one, where for subsets $S, T \subseteq X$ of any set $X$, we have $S = T$ iff $S \subseteq T$ and $T \subseteq S$. This is in fact a (very) special case of univalence.
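In symbols (one common way of writing it; $\Omega$ here is just notation for the type of propositions): the propositional case of univalence says that for $P, Q : \Omega$,

$$(P =_{\Omega} Q) \;\simeq\; (P \leftrightarrow Q),$$

and applying this pointwise (together with function extensionality) gives $S = T$ iff $S \subseteq T$ and $T \subseteq S$ for subsets $S, T \subseteq X$.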

view this post on Zulip Verity Scheel (May 05 2021 at 15:19):

I think parametricity is a nice example of a metatheorem: for a lot of type theories, we have a metatheorem that any closed term $f : \forall a, a \to a$ must be the identity function. Why? Because we have no mechanism to write down anything else! But this fails once we have a mechanism for doing so, such as LEM. So metatheorems are true by looking at the syntax (initial model?), but fail to hold in all worlds modeled by the syntax. They assume it's somehow a closed world.

Similarly in a language with only simple type formers (think ADTs like Haskell), we can prove metatheorems like "every syntactic type function $F : Type \to Type$ is an invariant Haskell-functor, in the sense that it admits an obvious function $(a \to b) \to (b \to a) \to F a \to F b$". How do we prove that? Well we just do induction on the syntax of how we defined each type or type function. But again this only works in a closed world. We could postulate type formers where this fails.
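A minimal Haskell sketch of both metatheorems just described (the Invariant class below is defined ad hoc for illustration, not a reference to any particular library):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Parametricity: a closed term of this type has no way to inspect `a`,
-- so it can only be the identity function.
polyId :: forall a. a -> a
polyId x = x

-- The "invariant Haskell-functor" shape: functions in both directions
-- between a and b induce a function F a -> F b.
class Invariant f where
  imap :: (a -> b) -> (b -> a) -> f a -> f b

-- For simple type formers the instance is written by induction on the
-- structure of the definition, e.g. for a type that uses its parameter
-- both covariantly and contravariantly:
newtype SelfMap a = SelfMap (a -> a)

instance Invariant SelfMap where
  imap to from (SelfMap g) = SelfMap (to . g . from)
```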

Metatheorems often look like “what you see is what you get”, but we know that’s not how logic works!

view this post on Zulip Verity Scheel (May 05 2021 at 15:48):

Note how the metatheorem about invariant Haskell-functors starts looking like univalence: the metatheorem can give us functions in both directions between $F a$ and $F b$ (we have $(a \leftrightarrow b) \to (F a \leftrightarrow F b)$ as a corollary of the metatheorem), but we're missing the proof that says we get an equivalence from an equivalence. Plus, we can only handle a small number of type formers. In particular, if we introduce equality types, our metatheorem breaks: we cannot produce a term of type $(a \leftrightarrow b) \to (x = a) \to (x = b)$. So it's really remarkable that univalence can work for all types, including function types, iterated identity types, higher inductive types.

view this post on Zulip Verity Scheel (May 05 2021 at 16:07):

One more comment on metatheorems: this business about "closed terms" is crucial, and it's a meta property: it cannot be expressed internally. You could postulate something like $\forall (f : \forall A, A \to A), \forall A, \forall (a : A), f_A(a) = a$ to state something like the metatheorem internally. But now it applies to all terms, not just closed terms! The internal logic cannot treat open and closed terms differently. So you end up extrapolating a property of closed terms to a property of all terms. But intuitively we assume that, in a computable type theory, $f$ will _eventually_ be a closed term, will _eventually_ be the identity function, and so no contradiction will occur. So there really is something "meta" about that theorem.

view this post on Zulip Leopold Schlicht (May 05 2021 at 17:38):

Thanks all!
Mike Shulman said:

I just don't get why spaces should be so essential to univalence.

Would you get it if we said "\infty-groupoids" instead of "spaces"?

No, I'm not yet really comfortable with considering either "space" or "\infty-groupoid" as primitive notions in a foundational system. However, I completely accept using HoTT as an internal language (rather than a foundational system). From that perspective, the second quote just says that univalence is true in the internal language of the category of Kan complexes (or something like that). But when mathematicians are using the principle "isomorphic structures have the same structural properties" in practice, I think they are not working in the internal language of the category of Kan complexes. So I don't agree with the undertone of the first quote suggesting that univalence gives a sort of justification of a statement mathematicians are already using (but maybe I'm over-interpreting!).
If I remember correctly, Solomon Feferman has written a paper in which he argues that a foundational system shouldn't axiomatize the notion of "category" as a primitive concept (the same argument applies to "\infty-groupoid"). I guess this was in reaction to Lawvere's ETCC. The reason is that some notion of a "collection of things" is psychologically necessary to understand the notion of a "category".
Looking forward to hearing arguments that enable me to change my opinion. :-)

view this post on Zulip Kenji Maillard (May 05 2021 at 17:51):

I think the point in the context of Martin-Löf Type Theory is that you do not formalize the notion of \infty-groupoid but automatically obtain such structure externally from the iteration of identity types and the algebra of path operations that are definable by induction on identity types (this happens even before throwing univalence into the mix). The relevant papers are Types are weak $\omega$-groupoids and Weak omega-categories from intensional type theory.
So the notion of space or \infty-groupoid was historically not forced upon the axioms but was rather discovered after roughly 30 years of investigation of the theory.
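Schematically (standard notation, spelled out just for concreteness), the structure in question comes from the tower of iterated identity types

$$x, y : A, \qquad p, q : \mathrm{Id}_A(x, y), \qquad \alpha, \beta : \mathrm{Id}_{\mathrm{Id}_A(x,y)}(p, q), \qquad \dots$$

with the groupoid-like operations (composition, inverses, and their coherences) all defined by induction on identity types.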

view this post on Zulip Mike Shulman (May 05 2021 at 20:20):

Leopold Schlicht said:

No, I'm not yet really comfortable with considering either "space" or "\infty-groupoid" as primitive notions in a foundational system.

I'm glad you added "yet"! Can you explain your discomfort? I tried to give an explanation of \infty-groupoids from a foundational perspective in https://arxiv.org/abs/1601.05035, I don't know if you have looked at that.

But when mathematicians are using the principle "isomorphic structures have the same structural properties" in practice, I think they are not working in the internal language of the category of Kan complexes.

Would you accept that they are working with groupoids, whether or not they make it explicit? A groupoid is, after all, just the formal structure possessed by isomorphisms, so whenever you are working with isomorphisms you are working with groupoids.

And groupoids are a full subcategory of \infty-groupoids, so whenever you are working with groupoids you are working with (certain, particularly simple) \infty-groupoids. The rest of the \infty-groupoids don't impinge on the average mathematician very often, but when they start to say things like "equivalent categories have the same structural properties" then they're implicitly working with 2-groupoids, and so on.

If I remember correctly, Solomon Feferman has written a paper in which he argues that a foundational system shouldn't axiomatize the notion of "category" as a primitive concept (the same argument applies to "\infty-groupoid"). I guess this was in reaction to Lawvere's ETCC. The reason is that some notion of a "collection of things" is psychologically necessary to understand the notion of a "category".

This is a philosophical argument that probably isn't going to be settled any time soon. But I believe \infty-groupoids can be regarded as a notion of "collection of things", not requiring a prior notion of "collection" to be built out of. The claim is just that whenever we collect a bunch of things together, we always (explicitly or implicitly) also have in mind a "notion of sameness" between them.

view this post on Zulip Leopold Schlicht (May 06 2021 at 14:23):

Mike Shulman said:

I tried to give an explanation of \infty-groupoids from a foundational perspective in https://arxiv.org/abs/1601.05035, I don't know if you have looked at that.

No, I wasn't aware of that paper! Great read, really fantastic, exactly the thing I needed!
Can the

"an \infty-groupoid consists of a collection of elements, together with for any two elements xx, yy a collection of ways in which x=yx = y, and for any two such ways ff, gg a collection of ways in which f=gf = g, and so on, plus operations . . ."

approach be used to give, inside ZFC, a definition of the notion of an \infty-groupoid so that the category of these \infty-groupoids satisfies HoTT? I would find this more satisfying than a model in Kan complexes, which I think of as very topological things.

view this post on Zulip Fawzi Hreiki (May 06 2021 at 14:25):

Maybe take a look at https://ncatlab.org/nlab/show/algebraic+definition+of+higher+categories

view this post on Zulip Fawzi Hreiki (May 06 2021 at 14:28):

There is the (apparently equivalent) concept of an algebraic Kan complex, where the composites are chosen as opposed to just known to exist.

view this post on Zulip John Baez (May 06 2021 at 17:01):

"Can the [globular] approach be used to give, inside ZFC, a definition of the notion of an ∞-groupoid so that the category of these ∞-groupoids satisfies HoTT?"

That's an interesting question. I feel sure the answer must ultimately be "yes", but I don't know if anyone has come up with a globular model of HoTT yet. They've come up with simplicial and more recently cubical models. But I'm just commenting from the outside - @Mike Shulman actually works on this stuff, so he'd know.

view this post on Zulip Mike Shulman (May 06 2021 at 17:07):

Well, first of all, I don't think of Kan complexes as "topological" at all. There's no topology in sight! A Kan complex is a combinatorial object with a collection of elements, together with for any two elements $x, y$ a collection of ways in which $x = y$, and for any three ways $f, g, h$ in which $x = y$, $y = z$, and $x = z$ respectively a collection of ways in which $f \cdot g = h$, and so on. And as Fawzi said, you're free to regard the Kan-ness as operations rather than properties. So I would argue that Kan complexes are very close to the intuitive description you quoted.
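For reference (standard simplicial notation, added here only for concreteness): the Kan condition asks that every horn have a filler, i.e. that every map $\Lambda^n_k \to X$ extend along the inclusion $\Lambda^n_k \hookrightarrow \Delta^n$ to a map $\Delta^n \to X$; the "algebraic Kan complex" variant Fawzi mentioned makes a choice of such fillers part of the structure rather than a mere property.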

view this post on Zulip Mike Shulman (May 06 2021 at 17:10):

The only difference is, as John said, that in a Kan complex the "shape" of the "ways in which things are the same" changes as you go up in dimension. This turns out to make things a lot easier, both in terms of developing the theory and in terms of modeling HoTT, largely because when you come to define morphisms between such things, with Kan complexes you don't need to specify data showing how a morphism preserves the operations -- that happens automatically as soon as it preserves the shape of the higher-dimensional identifications.

view this post on Zulip Mike Shulman (May 06 2021 at 17:12):

It's well-known that you can model HoTT using globular 1-groupoids. Indeed, this predates the notion of "HoTT" by at least a decade.

view this post on Zulip Mike Shulman (May 06 2021 at 17:13):

At one point I convinced myself that it would probably also work to model HoTT with globular 2-groupoids (although I never wrote down the details); but that a similar naive approach, at least, would not work with globular 3-groupoids or higher.

view this post on Zulip Mike Shulman (May 06 2021 at 17:15):

I don't remember exactly why, and I could easily have been wrong, so if anyone wants to try to make it work themselves I don't want to discourage them. But it's definitely not an easy problem.

view this post on Zulip Mike Shulman (May 06 2021 at 17:21):

Of course, this all depends on exactly what you mean by "HoTT". The issue is roughly that most formal type theories used for HoTT are "strict" (i.e. certain equalities hold judgmentally rather than typally) in ways that it can be hard to make true in models. One especially problematic issue of this sort involves the computation rule for the J-eliminator of identity types.

One can, of course, also change the formal type theory to make these rules hold more weakly to match a given model; but the tradeoff is that the resulting type theory may have worse formal properties (e.g. it may no longer satisfy canonicity and normalization) and it may be less convenient or even wholly impractical to use. Making the J-computation rule typal is not a real big problem; cubical type theories have a typal J-computation rule (at least, for their primitive path-types) and they are computational and fairly usable.
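For concreteness (standard notation, not tied to any particular proof assistant), the strict rule in question is the judgmental computation rule for the J-eliminator on reflexivity,

$$\mathsf{J}(C, c, x, x, \mathsf{refl}_x) \;\equiv\; c(x),$$

which a "typal" presentation replaces by a specified identification between the two sides.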

However, my memory is that the problem with globular models had instead to do with the computation and uniqueness rules for functions, like $\beta$-reduction $(\lambda x.M)(N) \equiv M[N/x]$. This seems quite hard to do without, both for computation and for practical usability.

view this post on Zulip Steve Awodey (May 06 2021 at 18:37):

Michael Warren's thesis contains a strict (globular) infinity groupoid model of intensional type theory - it's written up here:
https://mawarren.net/papers/crmp1295.pdf
No univalence, though.

view this post on Zulip Mike Shulman (May 07 2021 at 01:41):

Yeah, I should have mentioned that. It was the weakness that was a problem with what I was trying to do.

view this post on Zulip Nick Hu (May 07 2021 at 13:33):

Why do mathematicians feel like the cubical model of HoTT is "no good" to work with, despite its nice computational properties? I think it's something like it changes the definitions of (something from homotopy theory), but I would really appreciate knowing a bit more about this than I currently do

view this post on Zulip Mike Shulman (May 07 2021 at 14:37):

I'd be interested to hear where you got the impression that mathematicians feel that way.

Until recently, the main drawback of most cubical models of HoTT has been that they are not a presentation of the usual homotopy theory of spaces (or Kan complexes). So for mathematicians who are interested in using HoTT to say things about classical homotopy theory, the cubical models were problematic. Now that may be in the process of changing with the newer Awodey-Cavallo-Coquand-Riehl-Sattler "equivariant" model structure on cartesian cubical sets, which is (at least in classical mathematics) equivalent to the homotopy theory of spaces. However, the paper on this model hasn't appeared yet, and it hasn't been incorporated into a cubical type theory. It seems like the equivariant features can probably be ignored when interpreting type theory, so that it would model ABCFHL-style cartesian cubical type theories, but the most popular cubical proof assistant is probably Cubical Agda, which implements something closer to the earlier CCHM cubical type theory that is not known to have any model equivalent to classical homotopy theory.

view this post on Zulip Leopold Schlicht (May 07 2021 at 19:42):

Mike Shulman said:

Leopold Schlicht said:

No, I'm not yet really comfortable with considering either "space" or "\infty-groupoid" as primitive notions in a foundational system.

I'm glad you added "yet"! Can you explain your discomfort?

One thing I expect a foundation of mathematics to have is a clear "ontology", i.e., there should be an intended model (in the informal sense) it aims to describe. Ideally, this model is both unambiguous and communicable, in such a way that whenever two people are talking about it, one can be quite sure that they are talking about the "same" thing. So it should feel like something like a "platonic reality" one can investigate (regardless of whether one believes in platonism in the philosophical sense).
I think two examples of such (partial) foundations of mathematics having a clear ontology everyone learns are the Peano axioms and Euclidean geometry -- probably everyone has more or less the same conception of what natural numbers are and what 2-dimensional space is. This is in contrast to, say, first-order theories that are designed to have many models, such as the first-order theory of groups.
It's not that obvious why ZFC should be a foundation in this sense, but an attempt to describe its ontology is made in Shoenfield's Axioms of set theory. I would argue that at least for Zermelo set theory, one can describe the ontology quite satisfactorily, as the informal version of $\bigcup\{\omega, \mathcal P(\omega), \mathcal P(\mathcal P(\omega)), \dots\}$ (although I've seen people arguing that the concept of an arbitrary subset isn't definite), and this suffices for most uses of set theory.
Your paper has really convinced me that one can regard "\infty-groupoid" as a primitive notion. But now my question is: Is it a definite concept?
All the conversation about different models for \infty-groupoids somehow suggests that "\infty-groupoid" might be ambiguous. That the concept isn't easy to define in ZFC might also be evidence for that, but not necessarily.
I would be glad if someone can convince me otherwise, but this is why I currently think of HoTT more as a "first-order theory of groups" or an internal language, and not as a foundation.

Slightly related: When reading http://math.andrej.com/2012/10/03/am-i-a-constructive-mathematician/ a while ago, in which Andrej Bauer argues that mathematics should take place in many different "worlds" (roughly: toposes), or the "Conclusion" chapter of Mike's Homotopy type theory: the logic of space I asked myself the question: Isn't it, for metatheoretic purposes, useful to decide on one foundation of mathematics, which can be used to construct and compare all the different worlds (toposes)? After all, even the metatheoretic study of "worlds", i.e., topos theory, can be done in each of the worlds separately, and might look completely different in each world, which would be quite a mess to think about. Also: is there a guarantee that each world can construct models of all the other worlds (if equipped with one more universe)?

view this post on Zulip Mike Shulman (May 07 2021 at 21:26):

I've heard various people make different versions of this argument. I'm kind of tired of rebutting it, but I guess I'll have another go.

view this post on Zulip Mike Shulman (May 07 2021 at 21:26):

The idea that any formal theory can have one "intended model" is, I believe, a misconception that should have gone the way of the dinosaur with Godel's incompleteness theorem. The only way for a theory to have a unique model is if it's so weak that it can only encode a small fragment of mathematics, or if it's not effectively specified and therefore much more dependent for its meaning on the ambient foundational system in which you formulate it. Human beings have intuition for the behavior of certain concepts, but there is no qualitative difference between our intuition about natural numbers or sets and our intuition about the elements of a group or the points of a topological space. The only difference is quantitative, that most mathematicians are more used to thinking about different groups or topological spaces than they are to thinking about different universes of sets or different systems of natural numbers. But people who study set theory, for instance, acquire an intuition for different universes of sets and how they are related to each other and can be constructed from each other, analogous to a group-theorist's intuition for different groups and how they are related to each other and can be constructed from each other.

view this post on Zulip Mike Shulman (May 07 2021 at 21:33):

Now I'll grant that a foundational system for mathematics should probably be sufficiently expressive that the ordinary mathematician can develop an intuition for its basic objects and use them to encode their objects of interest without needing to go down the road of thinking about different universes of them. But I believe that homotopy types have this property just as sets do.

I'm not saying that all mathematicans already have such an intuition. But another big mistake people make is thinking that "mathematical intuition" is something fixed and immutable. In fact, I believe nearly all the intuitions that professional mathematicians take for granted were actually developed gradually through their process of schooling, including the behavior of ZFC-like "sets". In particular, this accords with my experience teaching basic mathematics to undergraduates, who have to be taught what a "set" is, and struggle with the idea that (for instance) the elements of one set could be other sets (which is peculiar to ZFC-like sets).

Thus, if you don't yet have such an intuition for \infty-groupoids, fear not. You can develop such an intuition by working with them, and HoTT has many advantages as a context for acquiring such an intuition over an "analytic" approach to \infty-groupoids through ZFC. It only requires you to learn how \infty-groupoids behave (which is, after all, what an intuition about them means) rather than how they can be defined (which is where all the complexity of things like simplicial sets, globular sets, and issues of different models comes in).

view this post on Zulip Mike Shulman (May 07 2021 at 21:37):

The issue of different models comes up in many other places when expressing one theory analytically in another. For instance, suppose you start with HoTT (or any other type theory) and are trying to explain to someone what a ZFC-set is. It's not just a 0-truncated type; it has lots of complicated hereditary membership structure, and there is more than one way to express that. For instance, you can use extensional relations or rigid trees, which differ in their representation according to whether, when $x$ appears more than once in the hereditary membership structure of $y$ (e.g. if $y = \{\{x\},\{\{x\}\}\}$), those two $x$'s are represented by the same node or by different nodes. Alternatively, you can build the entire class of ZFC-sets at once, as in algebraic set theory. And so on. So the existence of more than one analytic model of one theory inside of another theory does not make the first one any less "definite".
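As a toy illustration of the rigid-tree representation (a minimal sketch; the VSet type and helper names are invented for this example):

```haskell
-- A "material" set as a rigid well-founded tree: a node's children are its
-- elements.  Distinct trees can represent the same set, so equality of sets
-- is an extensional equivalence on trees rather than equality of nodes.
data VSet = VSet [VSet]

empty :: VSet
empty = VSet []

singleton :: VSet -> VSet
singleton x = VSet [x]

-- y = {{x}, {{x}}}: the two occurrences of x are distinct nodes of this tree,
-- whereas in the extensional-relations representation they would be one node.
example :: VSet -> VSet
example x = VSet [singleton x, singleton (singleton x)]
```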

view this post on Zulip Fawzi Hreiki (May 07 2021 at 21:53):

It's also important to note that before Zermelo and axiomatic set theory, mathematicians didn't view the world of mathematics as 'founded' on some minimal ontology of sets - there were just mathematical objects and these objects had certain properties and interactions.

view this post on Zulip Fawzi Hreiki (May 07 2021 at 21:56):

It's nice that this world of mathematical concepts can be (partly) realised in a given formal axiomatic framework, but it's a mistake (in my opinion at least) to close off other avenues of what 'mathematics' may mean.

view this post on Zulip Nick Hu (May 08 2021 at 10:38):

I'm also a bit confused about this obsession with one-ism; to me, it's like asking what the best programming language is - the answer is it depends on your perspective. I might even stretch myself and use a physics metaphor: is light a particle or a wave?

view this post on Zulip Mike Shulman (May 08 2021 at 16:05):

Fawzi Hreiki said:

It's also important to note that before Zermelo and axiomatic set theory, mathematicians didn't view the world of mathematics as 'founded' on some minimal ontology of sets - there were just mathematical objects and these objects had certain properties and interactions.

Right. It is, of course, true that mathematics has benefited a lot from this innovation of axiomatic set theory. For instance, it gives mathematicians from different fields a common language and toolbox with which to talk to each other and use each other's ideas and results. But it's also possible to go too far in that direction. As Nick says, programmers manage to communicate with each other without insisting that there must be one programming language, or one computer architecture, that defines the meaning of all others.

view this post on Zulip Mike Shulman (May 08 2021 at 16:09):

I also forgot to mention another useful analogy, to inertial frames of reference in relativity. As much as we might like there to be, there is no preferred "global rest frame": all we can have is various different inertial frames. But physics can be described just as well in any inertial rest frame, and transformed into any other in well-understood ways.

view this post on Zulip John Baez (May 08 2021 at 16:21):

@Leopold Schlicht wrote:

I think two examples of such (partial) foundations of mathematics having a clear ontology everyone learns are the Peano axioms and Euclidean geometry -- probably everyone has more or less the same conception of what natural numbers are and what 2-dimensional space is.

It's funny that you should choose these examples. Euclidean geometry as formalized by Tarski has been proved to be complete, so even though it has many nonisomorphic models (by the Loewenheim-Skolem theorem), at least they are elementarily equivalent (they agree on which sentences hold). But Peano arithmetic is famously not complete, so it has many different models in which different sentences hold.

You may say that "everyone has more or less the same conception of what natural numbers are", but I'm not sure how you'd determine that. By brain scan?

It's probably true that most mathematicians agree that the natural numbers obey the Peano axioms - though I know mathematicians who don't. But that's different than "having the same conception" of the natural numbers. For example, there are some famous statements about natural numbers, like Goodstein's theorem, that don't follow from Peano's axioms, but only from additional principles that you can add to the Peano axioms if you so choose. So even if everyone agreed that the natural numbers obeyed Peano's axioms, that wouldn't imply they "have the same conception" of the natural numbers.

view this post on Zulip John Baez (May 08 2021 at 16:33):

Nick Hu said:

I'm also a bit confused about this obsession with one-ism; to me, it's like asking what the best programming language is - the answer is it depends on your perspective.

I'd go further and say this obsession is downright harmful.

To get around it, I'd like to abolish the whole metaphor of "foundations of mathematics". A building has one foundation, you have to lay the foundation before you do anything else, and it's damned hard to change it once the building is built - you basically have to tear the building down and start over.

That's not how math works.

view this post on Zulip Fawzi Hreiki (May 08 2021 at 16:44):

I think a better analogy is currency. An axiomatic foundation is a currency which mathematicians can use to ensure clean exchange of ideas

view this post on Zulip Fawzi Hreiki (May 08 2021 at 16:44):

But there are multiple currencies and we know how to exchange between different currencies

view this post on Zulip Fawzi Hreiki (May 08 2021 at 16:46):

Or the frame of reference analogy above works too

view this post on Zulip Leopold Schlicht (May 08 2021 at 17:43):

Thanks for all the comments. I really like this discussion! But there are still some things that are unclear to me.
Mike Shulman said:

The idea that any formal theory can have one "intended model" is, I believe, a misconception that should have gone the way of the dinosaur with Godel's incompleteness theorem. The only way for a theory to have a unique model is if it's so weak that it can only encode a small fragment of mathematics

But I was not claiming that the intended model is the unique model. From the perspective I described, one starts with a clear conception of what the intended model is, and then writes down axioms describing it. Gödel's incompleteness theorem implies that, at each point in time, the set of axioms one has written down doesn't pin down the intended model uniquely. But the hope is, from the perspective I described, that as one tries to answer more questions about the intended model, one finds more (intuitively) evident principles that one can add to the existing axiom system.
Gödel himself was a platonist, so I would think it's hard to use Gödel's incompleteness theorem as an argument against mathematical platonism. By the way I think he was even trying to find additional evident principles that can be used as axioms to settle the continuum hypothesis.
Mike Shulman said:

In fact, I believe nearly all the intuitions that professional mathematicians take for granted were actually developed gradually through their process of schooling, including the behavior of ZFC-like "sets". In particular, this accords with my experience teaching basic mathematics to undergraduates, who have to be taught what a "set" is, and struggle with the idea that (for instance) the elements of one set could be other sets (which is peculiar to ZFC-like sets).

I agree that this process is one way of learning to use ZFC-like sets (undeniably it's the usual one). But one could also argue that the reason for struggling with the idea that elements of one set could be other sets is that one wasn't told the ontology! :-)
By the way what's your opinion on the questions I asked in the last paragraph of my previous message?
The fact that you are using Gödel's incompleteness theorem in your argumentation somehow suggests that even you adopt some kind of "realist" or "platonist" position when it comes to discussing meta-theoretical questions about the manipulation of finite strings.
Fawzi Hreiki said:

It's also important to note that before Zermelo and axiomatic set theory, mathematicians didn't view the world of mathematics as 'founded' on some minimal ontology of sets - there were just mathematical objects and these objects had certain properties and interactions.

I think at the time they were talking mostly about finite sets, finite groups, concrete examples of spaces like $\mathbb R^2$, and so on. All things with a "clear" ontology. But arbitrary sets and arbitrary \infty-groupoids are rather different.
Nick Hu said:

I'm also a bit confused about this obsession with one-ism; to me, it's like asking what the best programming language is - the answer is it depends on your perspective.

I agree! But certainly it's sometimes useful to have some form of compatibility between different technologies (e.g., for transferring files from one device to another). That's what I was arguing for in my last paragraph: "Isn't it, for metatheoretic purposes, useful to decide on one foundation of mathematics, which can be used to construct and compare all the different worlds?"
John Baez said:

You may say that "everyone has more or less the same conception of what natural numbers are", but I'm not sure how you'd determine that. By brain scan?

Yes, I agree, it isn't possible. But I guess many mathematicians, for example number theorists, have the feeling that they are discovering things and they want to know whether statements are true. Would you say that doesn't make sense at all? At least historically, this philosophy must have played an important role.

view this post on Zulip John Baez (May 08 2021 at 18:32):

Gödel himself was a platonist, so I would think it's hard to use Gödel's incompleteness theorem as an argument against mathematical platonism.

I agree that someone can be a mathematical platonist even if they know Gödel's incompleteness theorem - or even if they were the first to prove it.

But note, Mike Shulman wasn't talking about "mathematical platonism", which opens up a whole extra can of worms. (For example: what exactly is "mathematical platonism", and what if anything does it have to do with what Plato wrote about mathematics?)

He was just arguing against the idea that a formal theory can have one "intended model".

I'd put it this way: we can have something vague in mind and try to capture it using axioms, which may have many models. We can then sometimes add axioms that rule out some of these models, saying "oh, I wasn't thinking about models like that". And all this is fine and dandy. But if we're in a situation like Peano Arithmetic where no recursively enumerable collection of additional axioms is enough to single out a unique model, we should accept that fact. In particular, we should not pretend that in our heart of hearts we know the non-recursive collection of all statements that single out the "intended model". Because there's no way that we can really know a non-recursive collection of statements. We're just not that good.

By the way I think he was even trying to find additional evident principles that can be used as axioms to settle the continuum hypothesis.

Yes, and look how well that worked.

view this post on Zulip John Baez (May 08 2021 at 18:33):

He came up with the axiom of constructibility, and showed that implied the continuum hypothesis, but in his heart of hearts he believed the continuum hypothesis was false.

view this post on Zulip John Baez (May 08 2021 at 18:38):

For more on the epic saga of whether the continuum hypothesis is "really true" (whatever that means), try:

This paper reviews the history of arguments in favor of, and against, the continuum hypothesis. Nowadays set theorists think the continuum hypothesis is off the "main line" of questions that can be settled by large cardinal hypotheses (which these set theorists tend to believe are "really true").

view this post on Zulip Fawzi Hreiki (May 08 2021 at 18:38):

Well with PA, if we're working in some ambient set theory, we could say that the intended model is the model picked out uniquely (up to iso) by second order PA (or by the NNO axioms). However, Joel Hamkins has mentioned somewhere that even this depends on the ambient set theory and so from the POV of another set theory, this 'actual' set of natural numbers is non-standard

view this post on Zulip Mike Shulman (May 08 2021 at 22:56):

If Godel himself was too committed to platonism to realize how his own theorems had demolished it, that's too bad for him, but it doesn't change the fact. The nice thing about mathematics is that the motivations and beliefs of mathematicians are irrelevant to their theorems.

view this post on Zulip Mike Shulman (May 08 2021 at 22:58):

But certainly it's sometimes useful to have some form of compatibility between different technologies (e.g., for transferring files from one device to another). That's what I was arguing for in my last paragraph: "Isn't it, for metatheoretic purposes, useful to decide on one foundation of mathematics, which can be used to construct and compare all the different worlds?"

There's a world of difference between "some form of compatibility" and "deciding on one foundation of mathematics". Some form of compatibility only requires translations between the various foundations, which we have (and is also what people do in the world of software, and relativity); it doesn't require fixing any one of them as privileged.

view this post on Zulip Mike Shulman (May 08 2021 at 23:10):

I guess many mathematicians, for example number theorists, have the feeling that they are discovering things and they want to know whether statements are true. Would you say that doesn't make sense at all?

This is getting very philosophical and somewhat digressive, but if you're interested, my own philosophy of mathematics is a sort of empirical formalism. I believe that the "discovery" aspects of mathematics refer to the empirical study of a certain mode of functioning of the human mind, which one might call something like "deductive reasoning". When we prove in a certain deductive system that certain assumptions and axioms entail certain other consequences, that is an empirical discovery about the nature and consequences of deductive reasoning, so it makes sense to call it a "discovery".

Other aspects of mathematics, which we tend to think of as "creating" or "constructing" rather than "discovering", are arguably better-understood as the shared participation in a world of ideas, analogous to the universe of a fictional world. From this perspective, mathematical objects exist as ideas that are shared (but not identically) by different people, just like fictional characters, and statements about mathematical objects can be "true" in the same sense that statements about, say, Harry Potter can be "true" even though neither of them "exist" as physical objects. But neither of these kinds of idea are absolutely fixed and invariable between people: just as different reboots or fandom can vary the truth of statements about fictional characters, so different formal systems and axioms can vary the truth of statements about mathematical objects.

view this post on Zulip Mike Shulman (May 08 2021 at 23:12):

Thanks for this John:

In particular, we should not pretend that in our heart of hearts we know the non-recursive collection of all statements that single out the "intended model". Because there's no way that we can really know a non-recursive collection of statements. We're just not that good.

This reminds me of Bishop's remark that "If God has mathematics of his own that needs to be done, let him do it himself."

view this post on Zulip Mike Shulman (May 08 2021 at 23:14):

even the metatheoretic study of "worlds", i.e., topos theory, can be done in each of the worlds separately, and might look completely different in each world, which would be quite a mess to think about.

Well, real life is messy. Mathematics in general looks different in different worlds, and the study of worlds is part of mathematics, so yes, it too looks different in different worlds.

view this post on Zulip Mike Shulman (May 08 2021 at 23:16):

Also: is there a guarantee that each world can construct models of all the other worlds (if equipped with one more universe)?

Well, since we can't expect to define precisely what "a world" means, no, we can't have any such guarantee. We generally do expect that when a new foundational system is introduced, people will argue for its sensibility and consistency by showing how to interpret back and forth between it and some previously known foundation. But it's also true that there may be some worlds that you "can't get out of" to other kinds of worlds, kind of like how once you pass the event horizon of a black hole you can't describe all the rest of the universe. That's life.

view this post on Zulip Mike Shulman (May 08 2021 at 23:18):

John Baez said:

I'd like to abolish the whole metaphor of "foundations of mathematics". A building has one foundation, you have to lay the foundation before you do anything else, and it's damned hard to change it once the building is built - you basically have to tear the building down and start over.

That's not how math works.

Agreed. Sometimes I like to call theories like ZFC and HoTT "mathematics-complete", by analogy to NP-complete problems in complexity theory: they are rich enough that you can encode all of mathematics into them, but there's no particular reason we have to fix one such theory and encode everything else into it.

view this post on Zulip Nick Hu (May 09 2021 at 00:46):

John Baez said:

He came up with the axiom of constructibility, and showed that implied the continuum hypothesis, but in his heart of hearts he believed the continuum hypothesis was false.

I am reminded of my undergraduate days when I took a course on axiomatic set theory. I always found it a hard pill to swallow that there should be undefinable functions. I guess the main reason not to believe in L is because it takes a decent chunk of machinery to set up, at least the way I remember being taught it

view this post on Zulip John Baez (May 09 2021 at 06:04):

Leopold Schlicht said:

But I guess many mathematicians, for example number theorists, have the feeling that they are discovering things and they want to know whether statements are true. Would you say that doesn't make sense at all?

It makes sense: when I do math I feel I am discovering things. When doing math I take the attitude that the objects I'm discussing "really exist", and I try to learn more about them. For example I know a lot about $\mathbb{N}$, and that allows me to think about the natural numbers in intuitive ways that aren't just syntactic manipulation of axioms according to formal rules. But that doesn't mean that I know, say, which model of PA is the "right one". Most of what I know about $\mathbb{N}$ holds in every model of PA! It's a bit like how I can think about what a brick would do in various circumstances, without knowing the "one true brick".

view this post on Zulip John Baez (May 09 2021 at 06:13):

Nick Hu said:

John Baez said:

He came up with the axiom of constructibility, and showed that implied the continuum hypothesis, but in his heart of hearts he believed the continuum hypothesis was false.

I am reminded of my undergraduate days when I took a course on axiomatic set theory. I always found it a hard pill to swallow that there should be undefinable functions. I guess the main reason not to believe in L is because it takes a decent chunk of machinery to set up, at least the way I remember being taught it.

Actually that's not the reason a lot of famous set theorists disbelieve in V=L. A lot of them disbelieve it because in some rough sense the constructible universe L is a very stripped-down model of ZFC, and they believe the universe should contain everything conceivable, not just the bare minimum. I'll quote Penelope Maddy:

Silver's model theoretic results show that $V \ne L$ can actually be derived from the existence of one particular set of integers, 0#. [....] Thus, not only does Silver's theory show that L goes wrong, it shows how L goes wrong: the process of taking only definable subsets at each stage yields a model satisfying ZFC at some countable stage, and all the further stages make no difference (this countable structure is an elementary substructure of L). The range of L's quantifiers is so deficient that L cannot tell one uncountable cardinal from another, or even from some countable ones. In purpler terms:

L takes on the character of a very thin inner model indeed, bare ruined choirs appended to the slender life-giving spine which is the class of ordinals.

(Kanamori and Magidor [1978, p. 131]) The point is that 0# yields a rich explanatory theory of exactly where and why L goes wrong. Before Silver, many mathematicians believed that $V \ne L$, but after Silver they knew why.

view this post on Zulip John Baez (May 09 2021 at 06:18):

I don't understand this stuff, much less agree that V=L is "wrong". But it shows that these expert set theorists have intuitions about the universe that push them into thinking L is much too small to be V. They think it's too "thin": there's not enough stuff in each stage $L_\alpha$. This is different from whether it's "tall" enough, i.e. how big $\alpha$ can get.

When I've talked to set theorists about this stuff, they like to draw pictures that illustrate these things.

view this post on Zulip Fawzi Hreiki (May 09 2021 at 06:36):

By the way, model pluralism is not inconsistent with platonism (whatever that means). You can believe that mathematical objects 'really exist' but that first-order theories need not pick out their models uniquely (even morally).

view this post on Zulip Nick Hu (May 09 2021 at 11:04):

I thought V = L just implied that the universe was merely the constructible things, but it sounds a little more complicated than that. Nonetheless, I find it amazing that mathematicians still spend their entire lives working out very technical details about sets (collections with membership as a proposition), and I'm sure that for any model of ZFC you will find pathologies which imply that it 'must be false'. Even well-ordering the reals is pretty weird

view this post on Zulip Leopold Schlicht (May 09 2021 at 15:08):

I got quite a lot of insights from reading all the fantastic answers following my last post, thanks so much to all, again!
Mike Shulman said:

Some form of compatibility only requires translations between the various foundations, which we have (and is also what people do in the world of software, and relativity); it doesn't require fixing any one of them as privileged.

Okay, I agree.
What is a translation between foundations (or worlds) precisely?
I also came across the following quote from your expository paper Homotopy type theory: the logic of space:

We have consistent rules for translating between toposes along functors, and there are some toposes in which mathematics looks a bit simpler (those satisfying LEM or UIP).

So are these translations something like logical morphisms $\mathcal E \to \mathcal F$ between elementary toposes (and are thus somewhat linked to what are called translations of type theories $\mathbb L(\mathcal E) \to \mathbb L(\mathcal F)$ between the internal languages $\mathbb L(\mathcal E)$ and $\mathbb L(\mathcal F)$ of $\mathcal E$ and $\mathcal F$)?
In which ways can one use functors to compare different worlds (can one just state that there exists a functor between two worlds, or are there different ways to say something about their relationship)?
Also: Is a translation between worlds (in this sense) related to whether one of the worlds can construct a model of the other world?

view this post on Zulip John Baez (May 09 2021 at 15:39):

Nick Hu said:

I thought V = L just implied that the universe was merely the constructible things, but it sounds a little more complicated than that.

That's what V = L says, but as you probably know, it implies a lot more: it implies the axiom of choice, and the continuum hypothesis, and the generalized continuum hypothesis,... and it implies that 0# does not exist. Experts in set theory - classical logicians, I mean - tend to love the axiom of choice. But they tend to want lots of sets! This is why Goedel thought the continuum hypothesis was improbable, and even more so this is why they want 0# to exist. This is also one reason they're fond of large cardinal hypotheses.

($V \ne L$ makes the universe slightly "wide", but large cardinal hypotheses mainly make the universe "tall". Probably a more important reason set theorists like large cardinal hypotheses is that these hypotheses tend to settle lots of questions, and they are in some rough sense linearly ordered: that is, given two large cardinal hypotheses it's often true that one implies the other.)

Since I have no great desire for lots of sets, I'd be very happy with V=L. I'd like a really simple, manageable universe of sets.

It's amusing that a bunch of set theorists considered "$V=L \implies$ 0# does not exist" to be a powerful argument against V=L.

I hadn't known about this stuff until just now, so I'll quote Wikipedia:

Zero sharp was defined by Silver and Solovay as follows. Consider the language of set theory with extra constant symbols $c_1, c_2, \dots$ for each positive integer. Then 0# is defined to be the set of Gödel numbers of the true sentences about the constructible universe, with $c_i$ interpreted as the uncountable cardinal $\aleph_i$. (Here $\aleph_i$ means $\aleph_i$ in the full universe, not the constructible universe.)

Superficially 0# seems to me like a set that should exist. So I can see why "$V=L \implies$ 0# does not exist" would turn people against V=L.

But:

There is a subtlety about this definition: by Tarski's undefinability theorem it is not, in general, possible to define the truth of a formula of set theory in the language of set theory. To solve this, Silver and Solovay assumed the existence of a suitable large cardinal, such as a Ramsey cardinal, and showed that with this extra assumption it is possible to define the truth of statements about the constructible universe. More generally, the definition of 0# works provided that there is an uncountable set of indiscernibles for some $L_\alpha$, and the phrase "0# exists" is used as a shorthand way of saying this.

All this makes me much less convinced that 0# should exist. Indiscernibles are unequal sets that obey all the same formulae. To even define 0#, much less prove it exists, we need an uncountable set of indiscernibles!

So, I think expert set theorists - like Silver and Solovay and so on - have very different tastes than I do.

view this post on Zulip Mike Shulman (May 09 2021 at 17:06):

I can't find the reference right now, but I think I remember that Lawvere considered $(V=L) \Rightarrow \mathrm{CH}$ as settling the continuum hypothesis positively for "constant sets". He argued that toposes can be regarded as universes of "variable sets", in some of which CH might fail, but that the original CH was a statement about "maximally constant sets". We can't expect to completely characterize the absolutely constant sets, but he argued that $V=L$ makes the universe of sets "more constant" than otherwise, and so we should conclude that CH is true for maximally constant sets.

view this post on Zulip Mike Shulman (May 09 2021 at 17:09):

Personally, I find $V=L$ uncomfortable mainly because the definition of $L$ is so non-structural. It seems to take the iterative conception of set and the primacy of first-order logic much too seriously. So it's interesting to me that set theorists, who generally do argue for the iterative conception of set and the primacy of first-order logic, are also unhappy with $V=L$; even they aren't willing to go that far in that direction.

view this post on Zulip Mike Shulman (May 09 2021 at 17:12):

Leopold Schlicht said:

What is a translation between foundations (or worlds) precisely?

Well, the notion of "world" is not precise here, and hence neither is the notion of "translation between worlds". One example would be a functor between toposes (logical, or perhaps geometric -- each with different behavior). Another example is the construction of $L$ inside $V$, or the construction of a model of ZFC inside a topos.

In which ways can one use functors to compare different worlds?

Innumerable ways. A functor maps objects of one world to objects of the other, so you can ask what constructions it preserves or reflects, and use it to deduce things about one world from properties of the other.

view this post on Zulip John Baez (May 09 2021 at 20:51):

Mike Shulman said:

Personally, I find $V=L$ uncomfortable mainly because the definition of $L$ is so non-structural. It seems to take the iterative conception of set and the primacy of first-order logic much too seriously. So it's interesting to me that set theorists, who generally do argue for the iterative conception of set and the primacy of first-order logic, are also unhappy with $V=L$; even they aren't willing to go that far in that direction.

There are lots of theorems about material set theory, and I've been studying it since I was a kid, so I like reading about it even though it's old-fashioned and doesn't accord with modern mathematical practice. I have Kanamori's The Higher Infinite by my bed and I read it now and then to stretch my brain in a certain way. But I don't take it very seriously.

If I took material set theory seriously, I might advocate V = L, since it seems to accord quite nicely with the idea that sets are built up as sets of sets. But I think people who spend their lives thinking about set theory find it constrictive. It spoils the fun. :upside_down:

view this post on Zulip Nick Hu (May 10 2021 at 13:21):

Thanks for your insight John, it was a very interesting read. Especially what you said about 0#; it looks like something that programmers might describe as a 'hack', so I too am surprised to hear that mathematicians feel compelled that it must exist. I'm not sure if I would agree with you that it even seems like it ought to superficially exist - once you start talking about Gödel numbers you are kind of in dangerous territory imo, but I guess that's just my preference for constructivism shining through

view this post on Zulip Mike Shulman (May 10 2021 at 14:34):

I don't have any objection to material set theory as a subject; there are a lot of amazing things you can do with it, and it's certainly a more convenient context in which to study well-founded relations than building them explicitly in a structural framework, and that can have important ramifications for the rest of mathematics too. For instance, whether or not $L$ is all of $V$, its construction is still the only way I know of to prove that CH is consistent, and as far as I know no one has come up with a convincing structural version of $L$. I only object to claims that material set theory is the only foundation for mathematics, or a better one for all purposes.

view this post on Zulip John Baez (May 10 2021 at 16:08):

Nick Hu said:

I'm not sure if I would agree with you that it even seems like it ought to superficially exist - once you start talking about Gödel numbers you are kind of in dangerous territory imo, but I guess that's just my preference for constructivism shining through.

Well, I guess all of classical set theory and model theory is dangerous territory then. :upside_down: Gödel numbers are used for absolutely everything.

I don't think there's anything innately nonconstructive about Gödel numbers. They're just a trick that shows up when a system gets powerful enough to reason about itself. But classical logic is full of nonconstructive reasoning.

view this post on Zulip Leopold Schlicht (May 10 2021 at 19:31):

Mike Shulman said:

Well, the notion of "world" is not precise here, and hence neither is the notion of "translation between worlds". One example would be a functor between toposes (logical, or perhaps geometric -- each with different behavior). Another example is the construction of $L$ inside $V$, or the construction of a model of ZFC inside a topos.

Okay, thanks.

Innumerable ways. A functor maps objects of one world to objects of the other, so you can ask what constructions it preserves or reflects, and use it to deduce things about one world from properties of the other.

Doesn't sound like "innumerable ways" to me, probably because I'm lacking the experience. :-) What would it imply when a translation preserves or reflects a construction?

John Baez said:

but I don't know if anyone has come up with a globular model of HoTT yet.

By the way what is a "globular" model?

view this post on Zulip Steve Awodey (May 10 2021 at 19:59):

Fawzi Hreiki said:

Well with PA, if we're working in some ambient set theory, we could say that the intended model is the model picked out uniquely (up to iso) by second order PA (or by the NNO axioms). However, Joel Hamkins has mentioned somewhere that even this depends on the ambient set theory and so from the POV of another set theory, this 'actual' set of natural numbers is non-standard

I would like to make two comments here:

(1) The fact that there are different models of PA just shows that we need to also include the ambient "setting" interpreting the induction axiom in the axiomatization - this is what we do in topos theory, where the notion of an NNO (or model of 2nd-order PA in the internal logic) becomes categorical (in the strong sense of model theory: any two models are - uniquely! - isomorphic). This is a perfectly satisfactory situation - each topos has an essentially unique NNO. Now one can of course ask further about comparing different toposes, but there is no expectation that these are all equivalent (much less isomorphic). The fact that the RE deductive system of second-order PA is deductively complete w/resp. to such topos models (indeed, w/resp. to just the well-pointed toposes, and therefore models of Mac Lane set theory) is again a perfectly satisfactory situation. What more could you coherently want?

(2) I think there should be a principle of Reductio ad Gödelium in the philosophy of mathematics, like the Reductio ad Hitlerium in history. The person who first plays the Gödel card loses the argument.
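
(Regarding (1), for reference, the universal property being invoked is the standard one: an NNO in a topos is an object $N$ equipped with $0 : 1 \to N$ and $s : N \to N$ such that for every object $X$ with $x : 1 \to X$ and $f : X \to X$ there is a unique $h : N \to X$ with
$$h \circ 0 = x \qquad \text{and} \qquad h \circ s = f \circ h.$$
The uniqueness of $h$ is what forces any two NNOs in the same topos to be isomorphic by a unique isomorphism commuting with $0$ and $s$.)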

view this post on Zulip John Baez (May 10 2021 at 23:58):

Leopold Schlicht said:

John Baez said:

but I don't know if anyone has come up with a globular model of HoTT yet.

By the way what is a "globular" model?

view this post on Zulip John Baez (May 10 2021 at 23:59):

It's the thing you said you'd find "more satisfying":

Can the

"an \infty-groupoid consists of a collection of elements, together with for any two elements xx, yy a collection of ways in which x=yx = y, and for any two such ways ff, gg a collection of ways in which f=gf = g, and so on, plus operations . . ."

approach be used to give, inside ZFC, a definition of the notion of an $\infty$-groupoid so that the category of these $\infty$-groupoids satisfies HoTT? I would find this more satisfying than a model in Kan complexes, which I think of as very topological things.

view this post on Zulip John Baez (May 11 2021 at 00:01):

That remark you quoted was giving a very sketchy definition of a "globular" $\infty$-groupoid, where the shapes of n-morphisms are "globes".

You've got 0-morphisms, or objects $x, y, \dots$, which look like points.

And you've got 1-morphisms like $f: x \to y$, which look like arrows.

And you've got 2-morphisms like $\alpha: f \Rightarrow g$ when $f, g : x \to y$, which look like 2-dimensional discs.

And so on. If you draw the n-morphisms, they look like n-dimensional balls, or "globes".

view this post on Zulip John Baez (May 11 2021 at 00:03):

There are also simplicial $\infty$-groupoids, called Kan complexes, where the shapes of morphisms are simplexes, but apparently you think simplexes are "very topological" and thus unsatisfying.

view this post on Zulip John Baez (May 11 2021 at 00:03):

And there are cubical $\infty$-groupoids, where the shapes of n-morphisms are cubes.

view this post on Zulip John Baez (May 11 2021 at 00:05):

This would be a lot better with pictures.

view this post on Zulip John Baez (May 11 2021 at 00:09):

People have worked a lot on simplicial and cubical models of HoTT, so I asked about work on globular models of HoTT, and people started talking about those...

view this post on Zulip Leopold Schlicht (May 12 2021 at 14:58):

Ah, thanks! But I don't yet see why the quoted sketchy definition suggests that the shape of n-morphisms is a "globe". In fact, it isn't even talking about "shape" at all, it basically just says that we want to have a collection of n-morphisms, for each n.

view this post on Zulip Nick Hu (May 12 2021 at 15:15):

The crucial thing is that it only makes sense to have $n$-morphisms between parallel pairs of $(n-1)$-morphisms. This is called the globularity condition.
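
(Spelled out, in the standard formulation: each $n$-morphism $\alpha$ has a source $s(\alpha)$ and a target $t(\alpha)$, both $(n-1)$-morphisms, subject to the globularity equations
$$s(s(\alpha)) = s(t(\alpha)) \qquad \text{and} \qquad t(s(\alpha)) = t(t(\alpha)),$$
which say exactly that $s(\alpha)$ and $t(\alpha)$ are parallel.)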

view this post on Zulip Mike Shulman (May 12 2021 at 15:17):

It's the part where the informal description said "for any two such ways $f, g$ a collection of ways in which $f = g$" that starts it off globular. The simplicial version would have instead, as I mentioned earlier, "for any three ways $f, g, h$ in which $x=y$, $y=z$ and $x=z$ respectively, a collection of ways in which $f\cdot g = h$."

view this post on Zulip John Baez (May 12 2021 at 18:38):

Yup, that's the part.

It may help, @Leopold Schlicht, to note that the usual concept of "natural transformation between functors" is an example of a globular 2-morphism. It only makes sense to talk about a natural transformation between parallel functors, by which I mean functors with the same domain and the same codomain, like $f, g: A \to B$.

(Nick Hu said "parallel" but I'm not sure you know what that means so I wanted to define it.)

view this post on Zulip John Baez (May 12 2021 at 18:40):

Thus, when we draw a natural transformation it looks like a 2-dimensional globe, meaning this:

view this post on Zulip John Baez (May 12 2021 at 18:40):

It does not look like a cube, or simplex, etc.

view this post on Zulip John Baez (May 12 2021 at 18:42):

Natural transformations are 2-morphisms in the 2-category Cat - at least if we take the usual "globular" approach to 2-categories!

view this post on Zulip John Baez (May 12 2021 at 18:42):

You seem to be more comfortable with the globular approach than the simplicial approach or the cubical approach, except for the fact that you didn't know the word "globular".

view this post on Zulip James Wood (May 12 2021 at 19:15):

By the way, are there any similarly familiar examples of simplices or cubes at low dimensions? I feel like squares and cubes come up fairly naturally in dependent type theory when you consider equality between diverging telescopes/dependent tuples, but (1) I can't remember how to state this cleanly, and (2) I can't think of an example of simplices. Double category morphisms look a bit like squares, but they're strictly more general IIRC, and all the familiar examples (modules) use that generality.

view this post on Zulip John Baez (May 12 2021 at 19:45):

Double category morphisms look exactly like squares - indeed, I often call them "squares".

view this post on Zulip John Baez (May 12 2021 at 19:46):

An easy example of a double category is: take a category, and create a double category where both the vertical and horizontal arrows in the double category are morphisms in that category, and the squares are commuting squares.
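
(Concretely, a square in this double category is a commuting diagram
$$\begin{array}{ccc} a & \xrightarrow{\;f\;} & b \\ {\scriptstyle u}\downarrow & & \downarrow{\scriptstyle v} \\ c & \xrightarrow{\;g\;} & d \end{array}$$
with horizontal edges $f, g$, vertical edges $u, v$, and the square itself being the assertion that $v \circ f = g \circ u$.)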

view this post on Zulip John Baez (May 12 2021 at 19:47):

One can play the same game with higher-dimensional cubes and get easy examples of n-tuple categories.

view this post on Zulip John Baez (May 12 2021 at 19:49):

Even the squares in double categories of modules look just like squares: they have four objects, two horizontal arrows, two vertical arrows and a square in the middle.

view this post on Zulip James Wood (May 12 2021 at 19:54):

Still, the example with commuting squares feels a bit degenerate. There's some basic globular structure (parallel morphisms can be compared for equality), but then you force it to have two extra corners by saying that the two morphisms being compared have to be composites. You could do the same and get any shape you wanted. Modules are a better example of a double category because the square shape isn't artificial. Perhaps the example of modules is related to the example of telescopes/dependent pairs? Modules are naturally dependently typed.

view this post on Zulip John Baez (May 12 2021 at 19:57):

Still, the example with commuting squares feels a bit degenerate.

Sure! I was merely trying to reassure you that the morphisms in double categories look exactly like squares.

view this post on Zulip John Baez (May 12 2021 at 19:59):

Also, you asked if there are any "familiar examples of cubes at low dimensions", so I wanted to point out that commuting squares, common in category theory, are a familiar example.

view this post on Zulip John Baez (May 12 2021 at 19:59):

There's no shortage of more complicated, less degenerate examples.

view this post on Zulip James Wood (May 12 2021 at 20:00):

Are double categories exactly cubical 2-categories? I thought it was a problem that the horizontal and vertical 1-cells could be different.

view this post on Zulip John Baez (May 12 2021 at 20:04):

I don't actually know what a "cubical 2-category" is - it's not something I say. I use double categories a lot, and there's a famous kind of double category where the vertical and horizontal 1-cells are the same, and composed the same way. There's some name for that kind....

view this post on Zulip Mike Shulman (May 12 2021 at 20:05):

"edge-symmetric"?

view this post on Zulip Mike Shulman (May 12 2021 at 20:10):

James Wood said:

By the way, are there any similarly familiar examples of simplices or cubes at low dimensions?

I'm not sure if this is quite what you're thinking of, but I would say that the commutative square in the definition of a natural transformation is a specific example. Of course, as John said, any commutative square is a square, but I think naturality squares are particularly closely related to the appearance of cubes in type theory. For instance, suppose in type theory you have $f,g:A\to B$, which induce maps ${\rm ap}_f: {\rm Id}_A(x,y) \to {\rm Id}_B(fx,fy)$ and similarly ${\rm ap}_g$. Then suppose you have $p:{\rm Id}_{A\to B}(f,g)$, meaning $p : \prod_{x:A} {\rm Id}_B(fx,gx)$. Then $p$ should induce an identification of ${\rm ap}_f(e)$ with ${\rm ap}_g(e)$, but this has to be a dependent identification since their codomains ${\rm Id}_B(fx,fy)$ and ${\rm Id}_B(gx,gy)$ are different. This sort of doubly-dependent identification is precisely what gives rise to cubes in type theory, and in this case it's clearly also a naturality square for $p$.
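
(For concreteness, a sketch of the square being described: given $e : {\rm Id}_A(x,y)$, the dependent identification fills
$$\begin{array}{ccc} fx & \xrightarrow{\;{\rm ap}_f(e)\;} & fy \\ {\scriptstyle p(x)}\downarrow & & \downarrow{\scriptstyle p(y)} \\ gx & \xrightarrow{\;{\rm ap}_g(e)\;} & gy \end{array}$$
where all four edges are identifications in $B$; this is the naturality square for $p$ at $e$.)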

view this post on Zulip Mike Shulman (May 12 2021 at 20:15):

As for simplices, the simplest sort of simplex is a composition relation $g\circ f = h$. Simplices also come up naturally when you talk about (co)descent objects, which are a sort of 2-categorical (co)limit that generalizes (co)equalizers one dimension up and also truncates geometric realization at dimension 2. Descent objects are so-named because they're the natural way to construct categories of descent data when defining stacks, but they also come up when strictifying (co)algebras for 2-(co)monads and when constructing general 2-dimensional (co)limits out of more basic ones.
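
(Pictured: the 2-simplex is a triangle
$$\begin{array}{ccccc} & & y & & \\ & {}^{f}\nearrow & & \searrow{}^{g} & \\ x & & \xrightarrow{\;h\;} & & z \end{array}$$
whose three edges are $f$, $g$, $h$, with the 2-cell filling it witnessing $g \circ f = h$.)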

view this post on Zulip Nick Hu (May 12 2021 at 20:18):

James Wood said:

Are double categories exactly cubical 2-categories? I thought it was a problem that the horizontal and vertical 1-cells could be different.

I would be comfortable with this description. $n$-fold enrichment in Cat yields globular higher categories, whereas $n$-fold internalisation of a category yields cubical higher categories

view this post on Zulip John Baez (May 12 2021 at 20:29):

The usual term for an n-fold internal category, though, is not "cubical higher category" but "n-fold category" or "n-tuple category". I think it's best to stick with established terminology here, since the terminology in higher category theory is already confusing enough without making up even more new terms.

view this post on Zulip James Wood (May 12 2021 at 20:50):

Tying this all back together, I'd expect the HoTT type $\Sigma_{R : \mathrm{Ring}} R\mathrm{-Mod}$ to be a groupoid, but more particularly to naturally form a double groupoid which is like the double category of modules, but with isomorphisms instead of morphisms everywhere.

view this post on Zulip Mike Shulman (May 12 2021 at 21:17):

If you write down the definition of "category" naively in HoTT, you get a structure that looks sort of "double" because you have the morphisms in your "category" but also the paths in the type of objects. The "univalence" condition on a category (see e.g. Chapter 9 of the Book) collapses this information by making the paths coincide with invertible morphisms. Most naturally-occurring 1-categories are univalent when defined straightforwardly in HoTT, but plenty of naturally-occurring 2-categories aren't -- because even in set theory they're better defined as double categories!

This may be what you're thinking of with modules: if you define the bicategory of rings and bimodules in HoTT, it's not "univalent" because the paths in the type of objects are ring isomorphisms whereas the invertible bimodules are Morita equivalences. Viewed as a double category, this structure consists of the ring isomorphisms and the bimodules. If you want to see the noninvertible ring homomorphisms, of course, you have to define it as a double category inside HoTT.

view this post on Zulip Mike Shulman (May 12 2021 at 23:03):

Mike Shulman said:

I can't find the reference right now, but I think I remember that Lawvere considered $(V=L) \Rightarrow \mathrm{CH}$ as settling the continuum hypothesis positively for "constant sets".

Found it: it's in "Cohesive Toposes and Cantor's 'lauter Einsen'".

The dialectic reflected in this method is that between imagined Being in general with all its interlocking categories of cohesion and variation on the one hand and the extreme special case of discreteness and constancy on the other. Proper understanding of this dialectic may put into a different light debates over 'foundational' questions such as whether there is an infinite set of reals of cardinality less than the continuum, or whether there is a large measurable cardinal. The answers are clearly no if we are pursuing constancy toward an extreme, because Godel's L construction would otherwise give us something still more constant.... On the other hand, the answer is clearly yes, the GCH is false, if we are considering almost any cohesive variable topos of independent mathematical interest.

view this post on Zulip David Michael Roberts (May 13 2021 at 00:50):

Maybe rigid, in a non-technical sense, is a good synonym for "constant" here, since $L$ is even more highly constrained than $V$ (which is rigid in a technical sense).

view this post on Zulip James Wood (May 13 2021 at 08:16):

Ah, sorry, I completely forgot the details of the bimodules example!

view this post on Zulip Fawzi Hreiki (May 13 2021 at 08:58):

I think there’s more on $V = L$ in ‘Foundations and Applications

view this post on Zulip Leopold Schlicht (May 27 2021 at 21:41):

Somewhat related to the discussion at the beginning of this thread: can one find a precise definition of when a first-order formula $\phi(x)$ in the language of ZFC is "a structural, isomorphism-invariant property" and then prove the following metatheorem: whenever ZFC proves $\phi(a)$ for some set $a$ and proves $a\cong b$, then ZFC proves $\phi(b)$? (Of course, this statement should really be formulated allowing $\phi(x)$ to contain parameters.)
(This question reminds me of the purely syntactic definition, which is due to Freyd I think, of when a first-order sentence $\phi$ in the language of a single category is invariant under equivalence of categories. I can't recall the details off the top of my head, but one requirement should be that $\phi$ doesn't talk about equality of objects, another that only parallel morphisms are compared, roughly speaking. Then one can prove that each such $\phi$ is indeed invariant under equivalence. As a minor question: can one prove something like a converse of that statement? Can there be nontrivial $\phi$ that don't satisfy this syntactic property, but are nevertheless invariant under equivalence? Of course there are trivial examples like $\forall A.\, A=A$.)
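
(A concrete example of the contrast, for illustration: the formula $\emptyset \in x$ is not isomorphism-invariant, since $\{\emptyset\} \cong \{\{\emptyset\}\}$ as sets (both are singletons) yet $\emptyset \in \{\emptyset\}$ while $\emptyset \notin \{\{\emptyset\}\}$; on the other hand "$x$ has exactly two elements", i.e. $\exists u\, \exists v\, (u \in x \wedge v \in x \wedge u \neq v \wedge \forall w\, (w \in x \to w = u \vee w = v))$, is invariant, since it refers only to the elements of $x$ up to bijection.)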

view this post on Zulip Leopold Schlicht (May 27 2021 at 21:54):

Also, what's wrong with just adding $\forall A,B\colon\mathcal U.\, {A \cong B} \to A=_{\mathcal U}B$ instead of univalence (where $A\cong B$ is the type of equivalences between $A$ and $B$)?
Maybe that's related to what Mike was referring to when he said:

Another answer is that you don't need things like homotopy theory and Kan complexes if you only want to apply univalence to isomorphism of set-based structures such as groups, rings, fields, etc. For that purpose, it suffices to use a version of HoTT in which only the universe of sets is univalent, and that theory can be modeled using plain ordinary groupoids (as shown by Hofmann and Streicher in 1998).

But to model the above axiom, I would think one doesn't need groupoids.

view this post on Zulip Mike Shulman (May 28 2021 at 02:28):

I don't know of a characterization in terms of ZFC syntax, but a usefully broad class of isomorphism-invariant formulas is those that arise from a dependently typed formulation of ETCS along the lines of Freyd's theorem that you mention. See for instance Lemma 3.4 of my paper Comparing material and structural set theories.

view this post on Zulip Mike Shulman (May 28 2021 at 02:29):

Leopold Schlicht said:

Also, what's wrong with just adding $\forall A,B\colon\mathcal U.\, {A \cong B} \to A=_{\mathcal U}B$ instead of univalence?

You may be able to interpret this in a "skeletal" set-theoretic model, although it's not entirely trivial; I think there was some discussion of this on the HoTT mailing list some time ago. However, it's not a very useful axiom compared to univalence, since when two things are isomorphic, in order to get a meaningful statement about one from a statement about the other, we often need to transfer across the specific isomorphism that we're interested in.

view this post on Zulip Leopold Schlicht (May 28 2021 at 09:02):

Ah, I see, thanks!

view this post on Zulip Leopold Schlicht (Jun 01 2021 at 10:04):

What have models of HoTT in cubical sets to do with computational properties like canonicity or normalization? Models of HoTT are semantical things, while computational properties are rather syntactical concepts, so I don't see any connection.

view this post on Zulip Nick Hu (Jun 01 2021 at 10:49):

We know how to do all these computational things for cubical. So far, it's not known for other flavours how to do so

view this post on Zulip Mike Shulman (Jun 01 2021 at 13:17):

When you define a "constructive" model of HoTT, the ambient computation of the constructive metatheory gives you a way to evaluate terms: first interpret them in the model and then compute in the metatheory. Historically, at least, this was a "first step" towards giving intrinsic notions of computation for the cubical theory such as canonicity and normalization.

view this post on Zulip Leopold Schlicht (Jun 01 2021 at 13:36):

Interesting, so using a constructive metatheory one can show that cubical sets are a model of HoTT, while one needs a classical metatheory to prove that Kan complexes are a model of HoTT, roughly speaking?

view this post on Zulip Mike Shulman (Jun 01 2021 at 14:17):

That's the current state of the art.

view this post on Zulip Leopold Schlicht (Jun 06 2021 at 21:42):

Can the univalence axiom be formulated as $\prod_{A,B\colon\mathcal U}.\, \lVert\mathrm{isiso}(\mathrm{idtoeq}_{A,B})\rVert_{-1}$?

view this post on Zulip Mike Shulman (Jun 06 2021 at 23:57):

I assume by 'isiso' you mean what the book calls 'qinv', and that the codomain of your 'idtoeq' is a correct definition of equivalence. In this case, the answer is yes at least if you assume funext, since then $\lVert \mathrm{isiso}(f)\rVert_{-1}$ is equivalent to $\mathrm{isequiv}(f)$. I'm not sure offhand whether this version suffices to prove funext from univalence; you'd have to trace through the proof.

view this post on Zulip Leopold Schlicht (Jun 07 2021 at 11:24):

Oh, I actually meant $\prod_{A,B\colon\mathcal U}.\, \lVert\mathrm{isiso}(\mathrm{idtoiso}_{A,B})\rVert_{-1}$, where $\mathrm{idtoiso}_{A,B}$ is the canonical map of type $A=_{\mathcal U}B\to A\cong B$, $A\cong B$ is $\sum_{f\colon A\to B}.\, \lVert \mathrm{isiso}(f)\rVert_{-1}$, and $\mathrm{isiso}(f)$ is $\sum_{g\colon B\to A}.\, (g\circ f=\mathrm{id}_A)\times (f\circ g=\mathrm{id}_B)$. The motivation of my question is to formulate the univalence axiom without Voevodsky's tricky notion of equivalence that magically happens to make $\mathrm{isequiv}(f)$ a mere proposition (at the cost of assuming the existence of $\lVert\cdot\rVert_{-1}$-truncation). Maybe such a formulation would be both conceptually and pedagogically simpler.

view this post on Zulip Jon Sterling (Jun 07 2021 at 12:39):

I personally don't think that such a formulation would be conceptually or pedagogically simpler. Voevodsky's formulation of equivalences looks unfamiliar when you use words like "contractible" and "fiber" or whatever, but it is also very natural from the perspective of the naïve "input-output blob" diagrams that they use to teach pre-teens the notion of surjective and injective and bijective functions...

view this post on Zulip Jon Sterling (Jun 07 2021 at 12:39):

(I do not mean to downplay the importance of Voevodsky's definition --- I just mean that it is not as alien as you might think)

view this post on Zulip Mike Shulman (Jun 07 2021 at 12:57):

Oh, I actually meant

That's also correct if you assume funext separately, for the same reason.

view this post on Zulip Mike Shulman (Jun 07 2021 at 12:59):

However, apart from questions of "simplicity" which can be debated, there is an important mathematical point that $\lVert \mathrm{isiso}(f)\rVert_{-1}$ is not a very useful definition of equivalence, because you can't extract an inverse function from it without essentially giving one of the correct definitions of equivalence and proving its equivalence to that.

view this post on Zulip Leopold Schlicht (Jun 07 2021 at 14:54):

Thanks!

view this post on Zulip Leopold Schlicht (Jun 26 2021 at 13:44):

Mike Shulman said:

Another answer is that you don't need things like homotopy theory and Kan complexes if you only want to apply univalence to isomorphism of set-based structures such as groups, rings, fields, etc. For that purpose, it suffices to use a version of HoTT in which only the universe of sets is univalent, and that theory can be modeled using plain ordinary groupoids (as shown by Hofmann and Streicher in 1998). It's only when you want to start applying univalence to equivalence of categorical structures that you need higher categorical models

Why, actually, does one need higher categorical models in order to apply univalence to equivalence of categories? Simply because one wants that there's a category of sets (whose underlying type of objects isn't a set) or did you have further reasons in mind when writing this?

I guess the "version of HoTT in which only the universe of sets is univalent" you are referring to is MLTT with so-called universe extensionality. At the moment I am a bit confused, because isn't Exercise 4.6 in the book saying that universe extensionality is inconsistent?

Is there a conceptual reason for $\mathrm{isequiv}(f)$ to be a mere proposition? To me this seems to be a statement that just randomly happens to be true. But since this fact is so central to HoTT, I would like to hear a reason that is conceptually more pleasing, or any other simple motivation for the definition of $\mathrm{isequiv}$.

view this post on Zulip Mike Shulman (Jun 26 2021 at 15:47):

Leopold Schlicht said:

Why, actually, does one need higher categorical models in order to apply univalence to equivalence of categories?

Because applying univalence to equivalence of categories requires the categories to be "univalent" themselves, which requires their type of objects to be a 1-type. So if there is a type of categories to which to apply univalence, it must be a 2-type.

I guess the "version of HoTT in which only the universe of sets is univalent" you are referring to is MLTT with so-called universe extensionality.

MLTT with one universe of sets that satisfies universe extensionality, or equivalently univalence (the two are equivalent for a universe of sets).

At the moment I am a bit confused, because isn't Exercise 4.6 in the book saying that universe extensionality is inconsistent?

It's inconsistent for a universe that contains non-sets. (More specifically, particular non-sets such as the circle or a smaller universe.)

Is there a conceptual reason for $\mathrm{isequiv}(f)$ to be a mere proposition?

In set-based mathematics, isomorphisms and equivalences are universally defined to be morphisms with the property of being an isomorphism/equivalence. In particular, if we want to show that two equivalences are the same, it suffices to show they are the same function; we don't have to separately argue that their "equivalence data" are the same. Requiring isequiv to be a mere proposition achieves this same result in HoTT.

Plus, as Exercise 4.6 says, otherwise univalence is inconsistent. (-:

view this post on Zulip Leopold Schlicht (Jun 28 2021 at 17:55):

Thanks!
Mike Shulman said:

Because applying univalence to equivalence of categories requires the categories to be "univalent" themselves, which requires their type of objects to be a 1-type. So if there is a type of categories to which to apply univalence, it must be a 2-type.

These two sentences somehow don't compile in my brain. What do you mean by "applying univalence to equivalence of categories"? What does it mean for a category to be "univalent"?

Requiring isequiv to be a mere proposition achieves this same result in HoTT.

I see, but this just explains why one wants isequiv(f) to be a mere proposition, and doesn't motivate Voevodsky's subtle definition of isequiv(f) and why it is a mere proposition. Of course, one could "justify" the definition by formally proving that it is a mere proposition. But I would like to know an intuitive reason for that and, moreover, understand why it is natural to come up with this definition. (My goal in https://categorytheory.zulipchat.com/#narrow/stream/229952-theory.3A-type.20theory/topic/model.20of.20univalence/near/241761226 was to propose another definition that I consider to be "natural", but as you pointed out, it isn't that useful.)

view this post on Zulip Mike Shulman (Jun 28 2021 at 17:58):

What do you mean by "applying univalence to equivalence of categories"? What does it mean for a category to be "univalent"?

Have you read chapter 9 of the HoTT Book?

view this post on Zulip Leopold Schlicht (Jun 28 2021 at 18:01):

Sort of, but only parts of it in detail. :sweat_smile:

view this post on Zulip Mike Shulman (Jun 28 2021 at 18:01):

It should answer those questions.

view this post on Zulip Mike Shulman (Jun 28 2021 at 18:03):

I see, but this just explains why one wants isequiv(f) to be a mere proposition, and doesn't motivate Voevodsky's subtle definition of isequiv(f) and why it is a mere proposition.

Oh, I see I misread your question. I thought you were asking "why should isequiv" be a proposition?" rather than "why does some particular definition of isequiv happen to be a proposition?".

If you're asking about Voevodsky's definition of isequiv, then the reason that's a proposition is because iscontr is a proposition. As for why iscontr is a proposition, one way to understand this intuitively is that to prove a type is a proposition it suffices to assume it is inhabited. But once iscontr(A) is inhabited, then A is contractible, and so pretty much anything we construct out of it will also be contractible, including iscontr(A). Specifically, iscontr(A) consists of a center and a contraction, but when A is contractible, there is a contractible space of such points and a contractible space of contractions to any such point.
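
(For reference, the definitions in play here, in Voevodsky's "contractible fibers" formulation: for $f : A \to B$,
$$\mathrm{iscontr}(A) :\equiv \sum_{a:A} \prod_{x:A} (a = x), \qquad \mathrm{isequiv}(f) :\equiv \prod_{b:B} \mathrm{iscontr}\Big(\sum_{a:A} (f(a) = b)\Big).$$
The argument above explains why each $\mathrm{iscontr}$ is a proposition, and a dependent product of propositions is again a proposition (given funext), so $\mathrm{isequiv}(f)$ is one too.)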

view this post on Zulip Mike Shulman (Jun 28 2021 at 18:04):

For other definitions of isequiv, one can give different intuitions. For instance, for a half-adjoint equivalence one can make an argument about how a ball has a CW-presentation with 2 cells at each dimension except 1 at the top.

view this post on Zulip Leopold Schlicht (Jun 28 2021 at 18:10):

That's exactly what I was asking for, thanks! :-)