You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Two quotes from the HoTT book:
"On the one hand, there is Voevodsky’s subtle and beautiful univalence axiom. The univalence axiom implies, in particular, that isomorphic structures can be identified, a principle that mathematicians have been happily using on workdays, despite its incompatibility with the “official” doctrines of conventional foundations."
"The short answer is that all of the constructions and axioms considered in this book have a model in the category of Kan complexes, due to Voevodsky [KLV12] (see [LS17] for higher inductive types)."
Isn't it a bit odd that one needs to search really hard in areas as advanced as algebraic topology to find a model justifying a principle that mathematicians are already using in day-to-day work? After all, probably most mathematicians are thinking about set-based mathematics when applying this principle, with "set" meaning "set" rather than "Kan complex". :-) So shouldn't the principle also be true in set-based mathematics (in some form)?
I think the idea is that such models of HoTT are as complex as they are because they’re being modelled in ZFC (or something like it) so you need to build in all the higher homotopies ‘by hand’ on top of the abstract sets.
Also, that comment is more so about the principle of equivalence rather than the full univalence axiom in its formal guise. It just means that mathematicians really only care about things up to the correct notion of equivalence and so act as if equivalent things are equal (even if formally they know that to not be the case in set theoretic foundations).
There are a lot of different ways to answer this question. One is that yes, it's odd -- and that this says more about ZFC than about HoTT! After all, if a principle is natural in everyday mathematics, isn't it more natural to use a foundational system in which it's true? In traditional set-based foundations we have to work hard to set up a context in which this principle becomes true, but that tells us more about the unnaturalness of set-based foundations.
(I think this is similar to what Fawzi said.)
Another answer is that you don't need things like homotopy theory and Kan complexes if you only want to apply univalence to isomorphism of set-based structures such as groups, rings, fields, etc. For that purpose, it suffices to use a version of HoTT in which only the universe of sets is univalent, and that theory can be modeled using plain ordinary groupoids (as shown by Hofmann and Streicher in 1998). It's only when you want to start applying univalence to equivalence of categorical structures that you need higher categorical models (e.g. to model a univalent universe of 1-types you need a model in some kind of 2-groupoid), leading in the limit to homotopy theory.
In addition, there are senses in which the principle is "true in set-based mathematics". It's certainly not literally true in set theory, since we can make statements about a group G, like ∅ ∈ G or G = ℤ, that can be true about some group but not about an isomorphic one. But there are set-like theories in which the principle is "true" as a metatheorem, such as dependent type theory without any equality predicate on the universe. The most intuitive (in my opinion) reason for such metatheorems is that the theory would remain consistent if you add univalence -- but you can also give direct syntactic proofs. However, a metatheorem is significantly less convenient to use in practice than an actual theorem, and doesn't yield any of the additional benefits of univalence.
However, a metatheorem is significantly less convenient to use in practice than an actual theorem....
I always wondered what exactly was the difference between a theorem and a metatheorem. :upside_down:
Thanks!
Mike Shulman said:
It's certainly not literally true in set theory, since we can make statements about a group G, like ∅ ∈ G or G = ℤ, that can be true about some group but not about an isomorphic one.
Of course (as you know), this only means that the principle isn't true in material set theory.
What's wrong with adding the univalence principle to a more structural set theory like type theory (not treating it as a metatheorem!)?
I just don't get why spaces should be so essential to univalence. :-)
Okay, directly adding it as an axiom rather than a metatheorem doesn't work because for that one needs an equality predicate on the universe.
John Baez said:
However, a metatheorem is significantly less convenient to use in practice than an actual theorem....
I always wondered what exactly was the difference between a theorem and a metatheorem. :upside_down:
My understanding is something like this: A metatheorem is like a C macro, but for mathematics. Theorems in a formal system have to be finitely long, but some weird corners (particularly of set theory, e.g. Gödel encoding etc.) end up making statements whose proofs would have to be infinitely long in full generality. However, every specific instance of such a theorem is specific enough to need only a finite proof. This is resolved by having a metatheorem; i.e. a device to generate finite-length instances of such a theorem at whatever use-site you want
Maybe it doesn't explain all aspects of the term "metatheorem", but one analogy might be that a metatheorem is to a theorem what an axiom schema is to an axiom.
A metatheorem might also be something like a "derivable" inference rule.
This makes me think: What's wrong with "univalence" as a metatheorem? After all, nothing is wrong with inference rules!
I think there are two main considerations:
Leopold Schlicht said:
Maybe it doesn't explain all aspects of the term "metatheorem", but one analogy might be that a metatheorem is to a theorem what an axiom schema is to an axiom.
A metatheorem might also be something like a "derivable" inference rule.
This makes me think: What's wrong with "univalence" as a metatheorem? After all, nothing is wrong with inference rules!
I don't think what you've said is correct. Comprehension in ZFC is an axiom schema, and I don't think anyone would call it 'meta'. 'Meta' is about using external reasoning outside of a formal system to derive facts about it
Leopold Schlicht said:
It's certainly not literally true in set theory, since we can make statements about a group G, like ∅ ∈ G or G = ℤ, that can be true about some group but not about an isomorphic one.
Of course (as you know), this only means that the principle isn't true in material set theory.
The example of ∅ ∈ G is specific to material set theory, but G = ℤ is not. As soon as you have a universe in a structural set theory, meaning a family of sets indexed by another set, then there is an equality relation on that family of sets, and you can make statements like X = Y that are not invariant under isomorphism.
Univalence only makes sense in type theory, where the "equality relation" on a type can be another type rather than a mere relation.
I just don't get why spaces should be so essential to univalence.
Would you get it if we said "∞-groupoids" instead of "spaces"?
One familiar case that is true in Sets is the propositional one, where for subsets S, T ⊆ X of any set X, we have S = T iff S ⊆ T and T ⊆ S. This is in fact a (very) special case of univalence.
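In case it helps to see that special case spelled out, here is a one-line check in Lean 4 with Mathlib (my own sketch; `Set.Subset.antisymm` is Mathlib's antisymmetry of ⊆):

```lean
import Mathlib.Data.Set.Basic

-- Propositional "univalence" in Set: for subsets, mutual inclusion
-- already yields equality (antisymmetry of ⊆).
example {X : Type} (S T : Set X) (hST : S ⊆ T) (hTS : T ⊆ S) : S = T :=
  Set.Subset.antisymm hST hTS
```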
I think parametricity is a nice example of a metatheorem: for a lot of type theories, we have a metatheorem that any closed term of type ∀X. X → X must be the identity function. Why? Because we have no mechanism to write down anything else! But this fails once we have a mechanism for doing so, such as LEM. So metatheorems are true by looking at the syntax (initial model?), but fail to hold in all worlds modeled by the syntax. They assume it's somehow a closed world.
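To make the parametricity example concrete, here is a minimal Haskell sketch (my own illustration, not from the thread; the names are hypothetical):

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- By parametricity, any closed term of this type must be the identity:
-- the function receives a value of an unknown type 'a' and has no way
-- to inspect it or to manufacture a different one.
myId :: forall a. a -> a
myId x = x

-- Anything type-specific fails to typecheck, because 'a' is abstract:
-- badId :: forall a. a -> a
-- badId x = not x   -- rejected: 'a' is not known to be Bool
```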
Similarly in a language with only simple type formers (think ADTs like Haskell), we can prove metatheorems like "every syntactic type function F is an invariant Haskell-functor, in the sense that it admits an obvious function map : (a → b) → F a → F b". How do we prove that? Well we just do induction on the syntax of how we defined each type or type function. But again this only works in a closed world. We could postulate type formers where this fails.
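For instance, here is the map that metatheorem produces for one toy ADT (a hypothetical example of mine), read off clause by clause from the syntax of the definition:

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- One clause per constructor, following the shape of the definition:
mapTree :: (a -> b) -> Tree a -> Tree b
mapTree f (Leaf x)   = Leaf (f x)
mapTree f (Node l r) = Node (mapTree f l) (mapTree f r)
-- Given functions in both directions between a and b, mapTree gives
-- functions in both directions between Tree a and Tree b -- but no
-- internal proof that these are again mutually inverse.
```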
Metatheorems often look like “what you see is what you get”, but we know that’s not how logic works!
Note how the metatheorem about invariant Haskell-functors starts looking like univalence: given functions in both directions between a and b, the metatheorem can give us functions in both directions between F a and F b (we have map : (a → b) → F a → F b as a corollary of the metatheorem), but we're missing the proof that says we get an equivalence from an equivalence. Plus, we can only handle a small number of type formers. In particular, if we introduce equality types, our metatheorem breaks: we cannot produce a term of type (a → b) → (a = c) → (b = c). So it's really remarkable that univalence can work for all types, including function types, iterated identity types, higher inductive types.
One more comment on metatheorems: this business about "closed terms" is crucial, and it's a meta property: it cannot be expressed internally. You could postulate something like ∀(f : ∀X. X → X). f = id to state something like the metatheorem internally. But now it applies to all terms, not just closed terms! The internal logic cannot treat open and closed terms differently. So you end up extrapolating a property of closed terms to a property of all terms. But intuitively we assume that, in a computable type theory, any such f will _eventually_ be a closed term, which will _eventually_ be the identity function, and so no contradiction will occur. So there really is something "meta" about that theorem.
Thanks all!
Mike Shulman said:
I just don't get why spaces should be so essential to univalence.
Would you get it if we said "∞-groupoids" instead of "spaces"?
No, I'm not yet really comfortable with considering either "space" or "∞-groupoid" as primitive notions in a foundational system. However, I completely accept using HoTT as an internal language (rather than a foundational system). From that perspective, the second quote just says that univalence is true in the internal language of the category of Kan complexes (or something like that). But when mathematicians are using the principle "isomorphic structures have the same structural properties" in practice, I think they are not working in the internal language of the category of Kan complexes. So I don't agree with the undertone of the first quote suggesting that univalence gives a sort of justification of a statement mathematicians are already using (but maybe I'm over-interpreting!).
If I remember correctly, Solomon Feferman has written a paper in which he argues that a foundational system shouldn't axiomatize the notion of "category" as a primitive concept (the same argument applies to "∞-groupoid"). I guess this was in reaction to Lawvere's ETCC. The reason is that some notion of a "collection of things" is psychologically necessary to understand the notion of a "category".
Looking forward to hearing arguments that enable me to change my opinion. :-)
I think the point in the context of Martin-Löf Type Theory is that you do not formalize the notion of ∞-groupoid but automatically obtain such structure externally from the iteration of identity types and the algebra of path operations that are definable from induction on identity types (this happens even before throwing univalence into the mix). The relevant papers are "Types are weak ω-groupoids" and "Weak ω-categories from intensional type theory".
So the notion of space or ∞-groupoid was historically not forced upon the axioms but was rather discovered after almost 30 years of investigation of the theory.
Leopold Schlicht said:
No, I'm not yet really comfortable with considering either "space" or "∞-groupoid" as primitive notions in a foundational system.
I'm glad you added "yet"! Can you explain your discomfort? I tried to give an explanation of ∞-groupoids from a foundational perspective in https://arxiv.org/abs/1601.05035, I don't know if you have looked at that.
But when mathematicians are using the principle "isomorphic structures have the same structural properties" in practice, I think they are not working in the internal language of the category of Kan complexes.
Would you accept that they are working with groupoids, whether or not they make it explicit? A groupoid is, after all, just the formal structure possessed by isomorphisms, so whenever you are working with isomorphisms you are working with groupoids.
And groupoids are a full subcategory of ∞-groupoids, so whenever you are working with groupoids you are working with (certain, particularly simple) ∞-groupoids. The rest of the ∞-groupoids don't impinge on the average mathematician very often, but when they start to say things like "equivalent categories have the same structural properties" then they're implicitly working with 2-groupoids, and so on.
If I remember correctly, Solomon Feferman has written a paper in which he argues that a foundational system shouldn't axiomatize the notion of "category" as a primitive concept (the same argument applies to "∞-groupoid"). I guess this was in reaction to Lawvere's ETCC. The reason is that some notion of a "collection of things" is psychologically necessary to understand the notion of a "category".
This is a philosophical argument that probably isn't going to be settled any time soon. But I believe ∞-groupoids can be regarded as a notion of "collection of things", not requiring a prior notion of "collection" to be built out of. The claim is just that whenever we collect a bunch of things together, we always (explicitly or implicitly) also have in mind a "notion of sameness" between them.
Mike Shulman said:
I tried to give an explanation of ∞-groupoids from a foundational perspective in https://arxiv.org/abs/1601.05035, I don't know if you have looked at that.
No, I wasn't aware of that paper! Great read, really fantastic, exactly the thing I needed!
Can the
"an -groupoid consists of a collection of elements, together with for any two elements , a collection of ways in which , and for any two such ways , a collection of ways in which , and so on, plus operations . . ."
approach be used to give, inside ZFC, a definition of the notion of an ∞-groupoid so that the category of these ∞-groupoids satisfies HoTT? I would find this more satisfying than a model in Kan complexes, which I think of as very topological things.
Maybe take a look at https://ncatlab.org/nlab/show/algebraic+definition+of+higher+categories
There is the (apparently equivalent) concept of an algebraic Kan complex where the composites are chosen as opposed to just known to exist.
Can the [globular] approach be used to give, inside ZFC, a definition of the notion of an ∞-groupoid so that the category of these ∞-groupoids satisfies HoTT?
That's an interesting question. I feel sure the answer must ultimately be "yes", but I don't know if anyone has come up with a globular model of HoTT yet. They've come up with simplicial and more recently cubical models. But I'm just commenting from the outside - @Mike Shulman actually works on this stuff, so he'd know.
Well, first of all, I don't think of Kan complexes as "topological" at all. There's no topology in sight! A Kan complex is a combinatorial object with a collection of elements, together with, for any two elements x and y, a collection of ways in which x = y, and for any three ways in which x = y, y = z, and x = z respectively, a collection of ways in which the third is a composite of the first two, and so on. And as Fawzi said, you're free to regard the Kan-ness as operations rather than properties. So I would argue that Kan complexes are very close to the intuitive description you quoted.
The only difference is, as John said, that in a Kan complex the "shape" of the "ways in which things are the same" changes as you go up in dimension. This turns out to make things a lot easier, both in terms of developing the theory and in terms of modeling HoTT, largely because when you come to define morphisms between such things, with Kan complexes you don't need to specify data showing how a morphism preserves the operations -- that happens automatically as soon as it preserves the shape of the higher-dimensional identifications.
It's well-known that you can model HoTT using globular 1-groupoids. Indeed, this predates the notion of "HoTT" by at least a decade.
At one point I convinced myself that it would probably also work to model HoTT with globular 2-groupoids (although I never wrote down the details); but that a similar naive approach, at least, would not work with globular 3-groupoids or higher.
I don't remember exactly why, and I could easily have been wrong, so if anyone wants to try to make it work themselves I don't want to discourage them. But it's definitely not an easy problem.
Of course, this all depends on exactly what you mean by "HoTT". The issue is roughly that most formal type theories used for HoTT are "strict" (i.e. certain equalities hold judgmentally rather than typally) in ways that it can be hard to make true in models. One especially problematic issue of this sort involves the computation rule for the J-eliminator of identity types.
One can, of course, also change the formal type theory to make these rules hold more weakly to match a given model; but the tradeoff is that the resulting type theory may have worse formal properties (e.g. it may no longer satisfy canonicity and normalization) and it may be less convenient or even wholly impractical to use. Making the J-computation rule typal is not a real big problem; cubical type theories have a typal J-computation rule (at least, for their primitive path-types) and they are computational and fairly usable.
However, my memory is that the problem with globular models had instead to do with the computation and uniqueness rules for functions, like β-reduction (λx.b) a ≡ b[a/x]. This seems quite hard to do without, both for computation and for practical usability.
Michael Warren's thesis contains a strict (globular) infinity groupoid model of intensional type theory - it's written up here:
https://mawarren.net/papers/crmp1295.pdf
No univalence, though.
Yeah, I should have mentioned that. It was the weakness that was a problem with what I was trying to do.
Why do mathematicians feel like the cubical model of HoTT is "no good" to work with, despite its nice computational properties? I think it's something like it changes the definitions of (something from homotopy theory), but I would really appreciate knowing a bit more about this than I currently do
I'd be interested to hear where you got the impression that mathematicians feel that way.
Until recently, the main drawback of most cubical models of HoTT has been that they are not a presentation of the usual homotopy theory of spaces (or Kan complexes). So for mathematicians who are interested in using HoTT to say things about classical homotopy theory, the cubical models were problematic. Now that may be in the process of changing with the newer Awodey-Cavallo-Coquand-Riehl-Sattler "equivariant" model structure on cartesian cubical sets, which is (at least in classical mathematics) equivalent to the homotopy theory of spaces. However, the paper on this model hasn't appeared yet, and it hasn't been incorporated into a cubical type theory. It seems like the equivariant features can probably be ignored when interpreting type theory, so that it would model ABCFHL-style cartesian cubical type theories, but the most popular cubical proof assistant is probably Cubical Agda, which implements something closer to the earlier CCHM cubical type theory that is not known to have any model equivalent to classical homotopy theory.
Mike Shulman said:
Leopold Schlicht said:
No, I'm not yet really comfortable with considering either "space" or "∞-groupoid" as primitive notions in a foundational system.
I'm glad you added "yet"! Can you explain your discomfort?
One thing I expect a foundation of mathematics to have is a clear "ontology", i.e., there should be an intended model (in the informal sense) it aims to describe. Ideally, this model is both unambiguous and communicable, in such a way that whenever two people are talking about it, one can be quite sure that they are talking about the "same" thing. So it should feel like something like a "platonic reality" one can investigate (regardless of whether one believes in platonism in the philosophical sense).
I think two examples of such (partial) foundations of mathematics having a clear ontology everyone learns are the Peano axioms and Euclidean geometry -- probably everyone has more or less the same conception of what natural numbers are and what 2-dimensional space is. This is in contrast to, say, first-order theories that are designed to have many models, such as the first-order theory of groups.
It's not that obvious why ZFC should be a foundation in this sense, but an attempt to describe its ontology is made in Shoenfield's "Axioms of set theory". I would argue that at least for Zermelo set theory, one can describe the ontology quite satisfactorily, as the informal version of V_{ω+ω} (although I've seen people arguing that the concept of an arbitrary subset isn't definite), and this suffices for most uses of set theory.
Your paper has really convinced me that one can regard "∞-groupoid" as a primitive notion. But now my question is: Is it a definite concept?
All the conversation about different models for ∞-groupoids somehow suggests that "∞-groupoid" might be ambiguous. That the concept isn't easy to define in ZFC might also be evidence for that, but not necessarily.
I would be glad if someone can convince me otherwise, but this is why I currently think of HoTT more as a "first-order theory of groups" or an internal language, and not as a foundation.
Slightly related: When reading http://math.andrej.com/2012/10/03/am-i-a-constructive-mathematician/ a while ago, in which Andrej Bauer argues that mathematics should take place in many different "worlds" (roughly: toposes), or the "Conclusion" chapter of Mike's Homotopy type theory: the logic of space I asked myself the question: Isn't it, for metatheoretic purposes, useful to decide on one foundation of mathematics, which can be used to construct and compare all the different worlds (toposes)? After all, even the metatheoretic study of "worlds", i.e., topos theory, can be done in each of the worlds separately, and might look completely different in each world, which would be quite a mess to think about. Also: is there a guarantee that each world can construct models of all the other worlds (if equipped with one more universe)?
I've heard various people make different versions of this argument. I'm kind of tired of rebutting it, but I guess I'll have another go.
The idea that any formal theory can have one "intended model" is, I believe, a misconception that should have gone the way of the dinosaur with Gödel's incompleteness theorem. The only way for a theory to have a unique model is if it's so weak that it can only encode a small fragment of mathematics, or if it's not effectively specified and therefore much more dependent for its meaning on the ambient foundational system in which you formulate it. Human beings have intuition for the behavior of certain concepts, but there is no qualitative difference between our intuition about natural numbers or sets and our intuition about the elements of a group or the points of a topological space. The only difference is quantitative, that most mathematicians are more used to thinking about different groups or topological spaces than they are to thinking about different universes of sets or different systems of natural numbers. But people who study set theory, for instance, acquire an intuition for different universes of sets and how they are related to each other and can be constructed from each other, analogous to a group-theorist's intuition for different groups and how they are related to each other and can be constructed from each other.
Now I'll grant that a foundational system for mathematics should probably be sufficiently expressive that the ordinary mathematician can develop an intuition for its basic objects and use them to encode their objects of interest without needing to go down the road of thinking about different universes of them. But I believe that homotopy types have this property just as sets do.
I'm not saying that all mathematicans already have such an intuition. But another big mistake people make is thinking that "mathematical intuition" is something fixed and immutable. In fact, I believe nearly all the intuitions that professional mathematicians take for granted were actually developed gradually through their process of schooling, including the behavior of ZFC-like "sets". In particular, this accords with my experience teaching basic mathematics to undergraduates, who have to be taught what a "set" is, and struggle with the idea that (for instance) the elements of one set could be other sets (which is peculiar to ZFC-like sets).
Thus, if you don't yet have such an intuition for ∞-groupoids, fear not. You can develop such an intuition by working with them, and HoTT has many advantages as a context for acquiring such an intuition over an "analytic" approach to ∞-groupoids through ZFC. It only requires you to learn how ∞-groupoids behave (which is, after all, what an intuition about them means) rather than how they can be defined (which is where all the complexity of things like simplicial sets, globular sets, and issues of different models comes in).
The issue of different models comes up in many other places when expressing one theory analytically in another. For instance, suppose you start with HoTT (or any other type theory) and are trying to explain to someone what a ZFC-set is. It's not just a 0-truncated type; it has lots of complicated hereditary membership structure, and there is more than one way to express that. For instance, you can use extensional relations or rigid trees, which differ in their representation according to whether, when x appears more than once in the hereditary membership structure of y (e.g. if y = {x, {x}}), those two x's are represented by the same node or by different nodes. Alternatively, you can build the entire class of ZFC-sets at once, as in algebraic set theory. And so on. So the existence of more than one analytic model of one theory inside of another theory does not make the first one any less "definite".
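To illustrate the kind of encoding at issue, here is a toy Haskell sketch of mine (not anyone's actual formalization) of the rigid-tree idea, where a set is a well-founded tree whose branches are its members:

```haskell
-- A "set" is the list of its members, each of which is again such a tree.
data V = V [V]

-- Different trees can represent the same set, so extensional equality
-- must be defined recursively: every member of each side has to be
-- extensionally equal to some member of the other.
eqV :: V -> V -> Bool
eqV (V xs) (V ys) = all (\x -> any (eqV x) ys) xs
                 && all (\y -> any (eqV y) xs) ys
```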
It's also important to note that before Zermelo and axiomatic set theory, mathematicians didn't view the world of mathematics as 'founded' on some minimal ontology of sets - there were just mathematical objects and these objects had certain properties and interactions.
It's nice that this world of mathematical concepts can be (partly) realised in a given formal axiomatic framework, but it's a mistake (in my opinion at least) to close off other avenues of what 'mathematics' may mean.
I'm also a bit confused about this obsession with one-ism; to me, it's like asking what the best programming language is - the answer is it depends on your perspective. I might even stretch myself and use a physics metaphor: is light a particle or a wave?
Fawzi Hreiki said:
It's also important to note that before Zermelo and axiomatic set theory, mathematicians didn't view the world of mathematics as 'founded' on some minimal ontology of sets - there were just mathematical objects and these objects had certain properties and interactions.
Right. It is, of course, true that mathematics has benefited a lot from this innovation of axiomatic set theory. For instance, it gives mathematicians from different fields a common language and toolbox with which to talk to each other and use each other's ideas and results. But it's also possible to go too far in that direction. As Nick says, programmers manage to communicate with each other without insisting that there must be one programming language, or one computer architecture, that defines the meaning of all others.
I also forgot to mention another useful analogy, to inertial frames of reference in relativity. As much as we might like there to be, there is no preferred "global rest frame": all we can have is various different inertial frames. But physics can be described just as well in any inertial rest frame, and transformed into any other in well-understood ways.
@Leopold Schlicht wrote:
I think two examples of such (partial) foundations of mathematics having a clear ontology everyone learns are the Peano axioms and Euclidean geometry -- probably everyone has more or less the same conception of what natural numbers are and what 2-dimensional space is.
It's funny that you should choose these examples. Euclidean geometry as formalized by Tarski has been proved to be complete, so even though it has many nonisomorphic models (by the Löwenheim–Skolem theorem), at least they are elementarily equivalent (they agree on which sentences hold). But Peano arithmetic is famously not complete, so it has many different models in which different sentences hold.
You may say that "everyone has more or less the same conception of what natural numbers are", but I'm not sure how you'd determine that. By brain scan?
It's probably true that most mathematicians agree that the natural numbers obey the Peano axioms - though I know mathematicians who don't. But that's different than "having the same conception" of the natural numbers. For example, there are some famous statements about natural numbers, like Goodstein's theorem, that don't follow from Peano's axioms, but only from additional principles that you can add to the Peano axioms if you so choose. So even if everyone agreed that the natural numbers obeyed Peano's axioms, that wouldn't imply they "have the same conception" of the natural numbers.
Nick Hu said:
I'm also a bit confused about this obsession with one-ism; to me, it's like asking what the best programming language is - the answer is it depends on your perspective.
I'd go further and say this obsession is downright harmful.
To get around it, I'd like to abolish the whole metaphor of "foundations of mathematics". A building has one foundation, you have to lay the foundation before you do anything else, and it's damned hard to change it once the building is built - you basically have to tear the building down and start over.
That's not how math works.
I think a better analogy is currency. An axiomatic foundation is a currency which mathematicians can use to ensure clean exchange of ideas
But there are multiple currencies and we know how to exchange between different currencies
Or the frame of reference analogy above works too
Thanks for all the comments. I really like this discussion! But there are still some things that are unclear to me.
Mike Shulman said:
The idea that any formal theory can have one "intended model" is, I believe, a misconception that should have gone the way of the dinosaur with Gödel's incompleteness theorem. The only way for a theory to have a unique model is if it's so weak that it can only encode a small fragment of mathematics
But I was not claiming that the intended model is the unique model. From the perspective I described, one starts with a clear conception of what the intended model is, and then writes down axioms describing it. Gödel's incompleteness theorem implies that, at each point in time, the set of axioms one has written down doesn't pin down the intended model uniquely. But the hope is, from the perspective I described, that as one tries to answer more questions about the intended model, one finds more (intuitively) evident principles that one can add to the existing axiom system.
Gödel himself was a platonist, so I would think it's hard to use Gödel's incompleteness theorem as an argument against mathematical platonism. By the way I think he was even trying to find additional evident principles that can be used as axioms to settle the continuum hypothesis.
Mike Shulman said:
In fact, I believe nearly all the intuitions that professional mathematicians take for granted were actually developed gradually through their process of schooling, including the behavior of ZFC-like "sets". In particular, this accords with my experience teaching basic mathematics to undergraduates, who have to be taught what a "set" is, and struggle with the idea that (for instance) the elements of one set could be other sets (which is peculiar to ZFC-like sets).
I agree that this process is one way of learning to use ZFC-like sets (undeniably it's the usual one). But one could also argue that the reason for struggling with the idea that elements of one set could be other sets is that one wasn't told the ontology! :-)
By the way what's your opinion on the questions I asked in the last paragraph of my previous message?
The fact that you are using Gödel's incompleteness theorem in your argumentation somehow suggests that even you adopt some kind of "realist" or "platonist" position when it comes to discussing meta-theoretical questions about the manipulation of finite strings.
Fawzi Hreiki said:
It's also important to note that before Zermelo and axiomatic set theory, mathematicians didn't view the world of mathematics as 'founded' on some minimal ontology of sets - there were just mathematical objects and these objects had certain properties and interactions.
I think at the time they were talking mostly about finite sets, finite groups, concrete examples of spaces like ℝ^n, and so on. All things with a "clear" ontology. But arbitrary sets and arbitrary ∞-groupoids are rather different.
Nick Hu said:
I'm also a bit confused about this obsession with one-ism; to me, it's like asking what the best programming language is - the answer is it depends on your perspective.
I agree! But certainly it's sometimes useful to have some form of compatibility between different technologies (e.g., for transferring files from one device to another). That's what I was arguing for in my last paragraph: "Isn't it, for metatheoretic purposes, useful to decide on one foundation of mathematics, which can be used to construct and compare all the different worlds?"
John Baez said:
You may say that "everyone has more or less the same conception of what natural numbers are", but I'm not sure how you'd determine that. By brain scan?
Yes, I agree, it isn't possible. But I guess many mathematicians, for example number theorists, have the feeling that they are discovering things and they want to know whether statements are true. Would you say that doesn't make sense at all? At least historically, this philosophy must have played an important role.
Gödel himself was a platonist, so I would think it's hard to use Gödel's incompleteness theorem as an argument against mathematical platonism.
I agree that someone can be a mathematical platonist even if they know Gödel's incompleteness theorem - or even if they were the first to prove it.
But note, Mike Shulman wasn't talking about "mathematical platonism", which opens up a whole extra can of worms. (For example: what exactly is "mathematical platonism", and what if anything does it have to do with what Plato wrote about mathematics?)
He was just arguing against the idea that a formal theory can have one "intended model".
I'd put it this way: we can have something vague in mind and try to capture it using axioms, which may have many models. We can then sometimes add axioms that rule out some of these models, saying "oh, I wasn't thinking about models like that". And all this is fine and dandy. But if we're in a situation like Peano Arithmetic where no recursively enumerable collection of additional axioms is enough to single out a unique model, we should accept that fact. In particular, we should not pretend that in our heart of hearts we know the non-recursive collection of all statements that single out the "intended model". Because there's no way that we can really know a non-recursive collection of statements. We're just not that good.
By the way I think he was even trying to find additional evident principles that can be used as axioms to settle the continuum hypothesis.
Yes, and look how well that worked.
He came up with the axiom of constructibility V = L, and showed that it implied the continuum hypothesis, but in his heart of hearts he believed the continuum hypothesis was false.
For more on the epic saga of whether the continuum hypothesis is "really true" (whatever that means), try:
This paper reviews the history of arguments in favor of, and against, the continuum hypothesis. Nowadays set theorists think the continuum hypothesis is off the "main line" of questions that can be settled by large cardinal hypotheses (which these set theorists tend to believe are "really true").
Well with PA, if we're working in some ambient set theory, we could say that the intended model is the model picked out uniquely (up to iso) by second order PA (or by the NNO axioms). However, Joel Hamkins has mentioned somewhere that even this depends on the ambient set theory and so from the POV of another set theory, this 'actual' set of natural numbers is non-standard
If Gödel himself was too committed to platonism to realize how his own theorems had demolished it, that's too bad for him, but it doesn't change the fact. The nice thing about mathematics is that the motivations and beliefs of mathematicians are irrelevant to their theorems.
But certainly it's sometimes useful to have some form of compatibility between different technologies (e.g., for transferring files from one device to another). That's what I was arguing for in my last paragraph: "Isn't it, for metatheoretic purposes, useful to decide on one foundation of mathematics, which can be used to construct and compare all the different worlds?"
There's a world of difference between "some form of compatibility" and "deciding on one foundation of mathematics". Some form of compatibility only requires translations between the various foundations, which we have (and is also what people do in the world of software, and relativity); it doesn't require fixing any one of them as privileged.
I guess many mathematicians, for example number theorists, have the feeling that they are discovering things and they want to know whether statements are true. Would you say that doesn't make sense at all?
This is getting very philosophical and somewhat digressive, but if you're interested, my own philosophy of mathematics is a sort of empirical formalism. I believe that the "discovery" aspects of mathematics refer to the empirical study of a certain mode of functioning of the human mind, which one might call something like "deductive reasoning". When we prove in a certain deductive system that certain assumptions and axioms entail certain other consequences, that is an empirical discovery about the nature and consequences of deductive reasoning, so it makes sense to call it a "discovery".
Other aspects of mathematics, which we tend to think of as "creating" or "constructing" rather than "discovering", are arguably better-understood as the shared participation in a world of ideas, analogous to the universe of a fictional world. From this perspective, mathematical objects exist as ideas that are shared (but not identically) by different people, just like fictional characters, and statements about mathematical objects can be "true" in the same sense that statements about, say, Harry Potter can be "true" even though neither of them "exist" as physical objects. But neither of these kinds of idea are absolutely fixed and invariable between people: just as different reboots or fandoms can vary the truth of statements about fictional characters, so different formal systems and axioms can vary the truth of statements about mathematical objects.
Thanks for this John:
In particular, we should not pretend that in our heart of hearts we know the non-recursive collection of all statements that single out the "intended model". Because there's no way that we can really know a non-recursive collection of statements. We're just not that good.
This reminds me of Bishop's remark that "If God has mathematics of his own that needs to be done, let him do it himself."
even the metatheoretic study of "worlds", i.e., topos theory, can be done in each of the worlds separately, and might look completely different in each world, which would be quite a mess to think about.
Well, real life is messy. Mathematics in general looks different in different worlds, and the study of worlds is part of mathematics, so yes, it too looks different in different worlds.
Also: is there a guarantee that each world can construct models of all the other worlds (if equipped with one more universe)?
Well, since we can't expect to define precisely what "a world" means, no, we can't have any such guarantee. We generally do expect that when a new foundational system is introduced, people will argue for its sensibility and consistency by showing how to interpret back and forth between it and some previously known foundation. But it's also true that there may be some worlds that you "can't get out of" to other kinds of worlds, kind of like how once you pass the event horizon of a black hole you can't describe all the rest of the universe. That's life.
John Baez said:
I'd like to abolish the whole metaphor of "foundations of mathematics". A building has one foundation, you have to lay the foundation before you do anything else, and it's damned hard to change it once the building is built - you basically have to tear the building down and start over.
That's not how math works.
Agreed. Sometimes I like to call theories like ZFC and HoTT "mathematics-complete", by analogy to NP-complete problems in complexity theory: they are rich enough that you can encode all of mathematics into them, but there's no particular reason we have to fix one such theory and encode everything else into it.
John Baez said:
He came up with the axiom of constructibility V = L, and showed that it implied the continuum hypothesis, but in his heart of hearts he believed the continuum hypothesis was false.
I am reminded of my undergraduate days when I took a course on axiomatic set theory. I always found it a hard pill to swallow that there should be undefinable functions. I guess the main reason not to believe in L is because it takes a decent chunk of machinery to set up, at least the way I remember being taught it
Leopold Schlicht said:
But I guess many mathematicians, for example number theorists, have the feeling that they are discovering things and they want to know whether statements are true. Would you say that doesn't make sense at all?
It makes sense: when I do math I feel I am discovering things. When doing math I take the attitude that the objects I'm discussing "really exist", and I try to learn more about them. For example I know a lot about ℕ, and that allows me to think about the natural numbers in intuitive ways that aren't just syntactic manipulation of axioms according to formal rules. But that doesn't mean that I know, say, which model of PA is the "right one". Most of what I know about ℕ holds in every model of PA! It's a bit like how I can think about what a brick would do in various circumstances, without knowing the "one true brick".
Nick Hu said:
John Baez said:
He came up with the axiom of constructibility V = L, and showed that it implied the continuum hypothesis, but in his heart of hearts he believed the continuum hypothesis was false.
I am reminded of my undergraduate days when I took a course on axiomatic set theory. I always found it a hard pill to swallow that there should be undefinable functions. I guess the main reason not to believe in L is because it takes a decent chunk of machinery to set up, at least the way I remember being taught it.
Actually that's not the reason a lot of famous set theorists disbelieve in V=L. A lot of them disbelieve it because in some rough sense the constructible universe L is a very stripped-down model of ZFC and they believe the universe should contain everything conceivable, not just the bare minimum. I'll quote Penelope Maddy:
Silver's model theoretic results show that V ≠ L can actually be derived from the existence of one particular set of integers, 0#. [....] Thus, not only does Silver's theory show that L goes wrong, it shows how L goes wrong: the process of taking only definable subsets at each stage yields a model satisfying ZFC at some countable stage, and all the further stages make no difference (this countable structure is an elementary substructure of L). The range of L's quantifiers is so deficient that L cannot tell one uncountable cardinal from another, or even from some countable ones. In purpler terms:
L takes on the character of a very thin inner model indeed, bare ruined choirs appended to the slender life-giving spine which is the class of ordinals.
(Kanamori and Magidor [1978, p. 131]) The point is that 0# yields a rich explanatory theory of exactly where and why L goes wrong. Before Silver, many mathematicians believed that V ≠ L, but after Silver they knew why.
I don't understand this stuff, much less agree that V=L is "wrong". But it shows that these expert set theorists have intuitions about the universe that push them into thinking L is much too small to be V. They think it's too "thin": there's not enough stuff in each stage L_α. This is different from whether it's "tall" enough, i.e. how big α can get.
When I've talked to set theorists about this stuff, they like to draw pictures that illustrate these things.
By the way, model pluralism is not inconsistent with platonism (whatever that means). You can believe that mathematical objects 'really exist' but that first-order theories need not pick out their models uniquely (even morally)
I thought V = L just implied that the universe was merely the constructible things, but it sounds a little more complicated than that. Nonetheless, I find it amazing that mathematicians still spend their entire lives working out very technical details about sets (collections with membership as a proposition), and I'm sure that for any model of ZFC you will find pathologies which imply that it 'must be false'. Even well-ordering the reals is pretty weird
I got quite a lot of insights from reading all the fantastic answers following my last post, thanks so much to all, again!
Mike Shulman said:
Some form of compatibility only requires translations between the various foundations, which we have (and is also what people do in the world of software, and relativity); it doesn't require fixing any one of them as privileged.
Okay, I agree.
What is a translation between foundations (or worlds) precisely?
I also came across the following quote from your expository paper Homotopy type theory: the logic of space:
We have consistent rules for translating between toposes along functors, and there are some toposes in which mathematics looks a bit simpler (those satisfying LEM or UIP).
So are these translations something like logical morphisms between elementary toposes (and are thus somewhat linked to what are called translations of type theories between the internal languages T_E and T_F of toposes E and F)?
In which ways can one use functors to compare different worlds (can one just state that there exists a functor between two worlds or are there different ways to say something about their relationship)?
Also: Is a translation between worlds (in this sense) related to whether one of the worlds can construct a model of the other world?
Nick Hu said:
I thought V = L just implied that the universe was merely the constructible things, but it sounds a little more complicated than that.
That's what V = L says, but as you probably know, it implies a lot more: it implies the axiom of choice, and the continuum hypothesis, and the generalized continuum hypothesis,... and it implies that 0# does not exist. Experts in set theory - classical logicians, I mean - tend to love the axiom of choice. But they tend to want lots of sets! This is why Gödel thought the continuum hypothesis was improbable, and even more so this is why they want 0# to exist. This is also one reason they're fond of large cardinal hypotheses.
(V ≠ L makes the universe slightly "wide", but large cardinal hypotheses mainly make the universe "tall". Probably a more important reason set theorists like large cardinal hypotheses is that these hypotheses tend to settle lots of questions, and they are in some rough sense linearly ordered: that is, given two large cardinal hypotheses it's often true that one implies the other.)
Since I have no great desire for lots of sets, I'd be very happy with V=L. I'd like a really simple, manageable universe of sets.
It's amusing that a bunch of set theorists considered "V=L ⟹ 0# does not exist" to be a powerful argument against V=L.
I hadn't known about this stuff until just now, so I'll quote Wikipedia:
Zero sharp was defined by Silver and Solovay as follows. Consider the language of set theory with extra constant symbols c₁, c₂, c₃, ... for each positive integer. Then 0# is defined to be the set of Gödel numbers of the true sentences about the constructible universe, with cᵢ interpreted as the uncountable cardinal ℵᵢ. (Here ℵᵢ means ℵᵢ in the full universe, not the constructible universe.)
Superficially 0# seems to me like a set that should exist. So I can see why "V=L ⟹ 0# does not exist" would turn people against V=L.
But:
There is a subtlety about this definition: by Tarski's undefinability theorem it is not, in general, possible to define the truth of a formula of set theory in the language of set theory. To solve this, Silver and Solovay assumed the existence of a suitable large cardinal, such as a Ramsey cardinal, and showed that with this extra assumption it is possible to define the truth of statements about the constructible universe. More generally, the definition of 0# works provided that there is an uncountable set of indiscernibles for some L_α, and the phrase "0# exists" is used as a shorthand way of saying this.
All this makes me much less convinced that 0# should exist. Indiscernibles are unequal sets that obey all the same formulae. To even define 0#, much less prove it exists, we need an uncountable set of indiscernibles!
So, I think expert set theorists - like Silver and Solovay and so on - have very different tastes than I do.
I can't find the reference right now, but I think I remember that Lawvere considered V=L as settling the continuum hypothesis positively for "constant sets". He argued that toposes can be regarded as universes of "variable sets", in some of which CH might fail, but that the original CH was a statement about "maximally constant sets". We can't expect to completely characterize the absolutely constant sets, but he argued that V=L makes the universe of sets "more constant" than otherwise, and so we should conclude that CH is true for maximally constant sets.
Personally, I find V=L uncomfortable mainly because the definition of L is so non-structural. It seems to take the iterative conception of set and the primacy of first-order logic much too seriously. So it's interesting to me that set theorists, who generally do argue for the iterative conception of set and the primacy of first-order logic, are also unhappy with V=L; even they aren't willing to go that far in that direction.
Leopold Schlicht said:
What is a translation between foundations (or worlds) precisely?
Well, the notion of "world" is not precise here, and hence neither is the notion of "translation between worlds". One example would be a functor between toposes (logical, or perhaps geometric -- each with different behavior). Another example is the construction of inside , or the construction of a model of ZFC inside a topos.
In which ways can one use functors to compare different worlds?
Innumerable ways. A functor maps objects of one world to objects of the other, so you can ask what constructions it preserves or reflects, and use it to deduce things about one world from properties of the other.
Mike Shulman said:
Personally, I find V=L uncomfortable mainly because the definition of L is so non-structural. It seems to take the iterative conception of set and the primacy of first-order logic much too seriously. So it's interesting to me that set theorists, who generally do argue for the iterative conception of set and the primacy of first-order logic, are also unhappy with V=L; even they aren't willing to go that far in that direction.
There are lots of theorems about material set theory, and I've been studying it since I was a kid, so I like reading about it even though it's old-fashioned and doesn't accord with modern mathematical practice. I have Kanamori's The Higher Infinite by my bed and I read it now and then to stretch my brain in a certain way. But I don't take it very seriously.
If I took material set theory seriously, I might advocate V = L, since it seems to accord quite nicely with the idea that sets are built up as sets of sets. But I think people who spend their lives thinking about set theory find it constrictive. It spoils the fun. :upside_down:
Thanks for your insight John, it was a very interesting read. Especially what you said about 0#; it looks like something that programmers might describe as a 'hack', so I too am surprised to hear that mathematicians feel compelled that it must exist. I'm not sure if I would agree with you that it even seems like it ought to superficially exist - once you start talking about Gödel numbers you are kind of in dangerous territory imo, but I guess that's just my preference for constructivism shining through
I don't have any objection to material set theory as a subject; there are a lot of amazing things you can do with it, and it's certainly a more convenient context in which to study well-founded relations than building them explicitly in a structural framework, and that can have important ramifications for the rest of mathematics too. For instance, whether or not L is all of V, its construction is still the only way I know of to prove that CH is consistent, and as far as I know no one has come up with a convincing structural version of L. I only object to claims that material set theory is the only foundation for mathematics, or a better one for all purposes.
Nick Hu said:
I'm not sure if I would agree with you that it even seems like it ought to superficially exist - once you start talking about Gödel numbers you are kind of in dangerous territory imo, but I guess that's just my preference for constructivism shining through.
Well, I guess all of classical set theory and model theory is dangerous territory then. :upside_down: Gödel numbers are used for absolutely everything.
I don't think there's anything innately nonconstructive about Gödel numbers. They're just a trick that shows up when a system gets powerful enough to reason about itself. But classical logic is full of nonconstructive reasoning.
Mike Shulman said:
Well, the notion of "world" is not precise here, and hence neither is the notion of "translation between worlds". One example would be a functor between toposes (logical, or perhaps geometric -- each with different behavior). Another example is the construction of inside , or the construction of a model of ZFC inside a topos.
Okay, thanks.
Innumerable ways. A functor maps objects of one world to objects of the other, so you can ask what constructions it preserves or reflects, and use it to deduce things about one world from properties of the other.
Doesn't sound like "innumerable ways" to me, probably because I'm lacking the experience. :-) What would it imply when a translation preserves or reflects a construction?
John Baez said:
but I don't know if anyone has come up with a globular model of HoTT yet.
By the way what is a "globular" model?
Fawzi Hreiki said:
Well with PA, if we're working in some ambient set theory, we could say that the intended model is the model picked out uniquely (up to iso) by second order PA (or by the NNO axioms). However, Joel Hamkins has mentioned somewhere that even this depends on the ambient set theory and so from the POV of another set theory, this 'actual' set of natural numbers is non-standard
I would like to make two comments here:
(1) The fact that there are different models of PA just shows that we need to also include the ambient "setting" interpreting the induction axiom in the axiomatization - this is what we do in topos theory, where the notion of an NNO (or model of 2nd-order PA in the internal logic) becomes categorical (in the strong sense of model theory: any two models are - uniquely! - isomorphic). This is a perfectly satisfactory situation - each topos has an essentially unique NNO. Now one can of course ask further about comparing different toposes, but there is no expectation that these are all equivalent (much less isomorphic). The fact that the RE deductive system of second-order PA is deductively complete w/resp. to such topos models (indeed, w/resp. to just the well-pointed toposes, and therefore models of Mac Lane set theory) is again a perfectly satisfactory situation. What more could you coherently want?
(2) I think there should be a principle of Reductio ad Gödelium in the philosophy of mathematics, like the Reductio ad Hitlerium in history. The person who first plays the Gödel card loses the argument.
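Regarding (1), for concreteness, here is the universal property in question: an NNO is an object $N$ with maps $1 \xrightarrow{z} N \xrightarrow{s} N$ such that for any object $X$ with maps $1 \xrightarrow{x} X \xrightarrow{f} X$ there is a unique $u \colon N \to X$ with
$$u \circ z = x \qquad \text{and} \qquad u \circ s = f \circ u.$$
The uniqueness of $u$ is exactly what makes any two NNOs in a given topos uniquely isomorphic.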
Leopold Schlicht said:
John Baez said:
but I don't know if anyone has come up with a globular model of HoTT yet.
By the way what is a "globular" model?
It's the thing you said you'd find "more satisfying":
Can the
"an -groupoid consists of a collection of elements, together with for any two elements , a collection of ways in which , and for any two such ways , a collection of ways in which , and so on, plus operations . . ."
approach be used to give, inside ZFC, a definition of the notion of an $\infty$-groupoid so that the category of these $\infty$-groupoids gives a model of HoTT? I would find this more satisfying than a model in Kan complexes, which I think of as very topological things.
That remark you quoted was giving a very sketchy definition of a "globular" $\infty$-groupoid, where the shapes of $n$-morphisms are "globes".
You've got 0-morphisms, or objects $x$, which look like points.
And you've got 1-morphisms like $f \colon x \to y$, which look like arrows.
And you've got 2-morphisms like $\alpha \colon f \Rightarrow g$ when $f, g \colon x \to y$, which look like 2-dimensional discs.
And so on. If you draw the $n$-morphisms, they look like $n$-dimensional balls, or "globes".
There are also simplicial $\infty$-groupoids, called Kan complexes, where the shapes of morphisms are simplexes, but apparently you think simplexes are "very topological" and thus unsatisfying.
And there are cubical $\infty$-groupoids, where the shapes of $n$-morphisms are cubes.
This would be a lot better with pictures.
People have worked a lot on simplicial and cubical models of HoTT, so I asked about work on globular models of HoTT, and people started talking about those...
Ah, thanks! But I don't yet see why the quoted sketchy definition suggests that the shape of n-morphisms is a "globe". In fact, it isn't even talking about "shape" at all, it basically just says that we want to have a collection of n-morphisms, for each n.
The crucial thing is that it only makes sense to have $(n{+}1)$-morphisms between parallel pairs of $n$-morphisms. This is called the globularity condition.
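Formally: a globular set has a set $X_n$ of $n$-morphisms for each $n$, together with source and target maps $s, t \colon X_{n+1} \to X_n$, and the globularity condition is the pair of equations
$$s \circ s = s \circ t, \qquad t \circ s = t \circ t,$$
which say exactly that the source and target of any $(n{+}1)$-morphism are a parallel pair.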
It's the part where the informal description said "for any two such ways $p, q$, a collection of ways in which $p = q$" that starts it off globular. The simplicial version would instead have, as I mentioned earlier, "for any three ways $p$, $q$, $r$ in which $x = y$, $y = z$, and $x = z$ respectively, a collection of ways in which $r$ is a composite of $p$ and $q$."
Yup, that's the part.
It may help, @Leopold Schlicht, to note that the usual concept of "natural transformation between functors" is an example of a globular 2-morphism. It only makes sense to talk about a natural transformation between parallel functors, by which I mean functors with the same domain and the same codomain, like $F, G \colon C \to D$.
(Nick Hu said "parallel" but I'm not sure you know what that means so I wanted to define it.)
Thus, when we draw a natural transformation $\alpha \colon F \Rightarrow G$, it looks like a 2-dimensional globe: two dots (the categories), two parallel arrows (the functors), and a 2-cell filling the space between them.
It does not look like a cube, or simplex, etc.
Natural transformations are 2-morphisms in the 2-category Cat - at least if we take the usual "globular" approach to 2-categories!
You seem to be more comfortable with the globular approach than the simplicial approach or the cubical approach, except for the fact that you didn't know the word "globular".
By the way, are there any similarly familiar examples of simplices or cubes at low dimensions? I feel like squares and cubes come up fairly naturally in dependent type theory when you consider equality between diverging telescopes/dependent tuples, but (1) I can't remember how to state this cleanly, and (2) I can't think of an example of simplices. Double category morphisms look a bit like squares, but they're strictly more general IIRC, and all the familiar examples (modules) use that generality.
Double category morphisms look exactly like squares - indeed, I often call them "squares".
An easy example of a double category is: take a category, and create a double category where both the vertical and horizontal arrows are morphisms in that category, and the squares are commuting squares.
One can play the same game with higher-dimensional cubes and get easy examples of n-tuple categories.
Even the squares in double categories of modules look just like squares: they have four objects, two horizontal arrows, two vertical arrows and a square in the middle.
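Concretely, with one common orientation convention: the vertical arrows are ring homomorphisms $f \colon R \to R'$ and $g \colon S \to S'$, the horizontal arrows are bimodules ${}_R M_S$ and ${}_{R'} M'_{S'}$, and the square itself is an additive map $\phi \colon M \to M'$ compatible with the actions:
$$\phi(r \cdot m \cdot s) = f(r) \cdot \phi(m) \cdot g(s).$$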
Still, the example with commuting squares feels a bit degenerate. There's some basic globular structure (parallel morphisms can be compared for equality), but then you force it to have two extra corners by saying that the two morphisms being compared have to be composites. You could do the same and get any shape you wanted. Modules are a better example of a double category because the square shape isn't artificial. Perhaps the example of modules is related to the example of telescopes/dependent pairs? Modules are naturally dependently typed.
Still, the example with commuting squares feels a bit degenerate.
Sure! I was merely trying to reassure you that the morphisms in double categories look exactly like squares.
Also, you asked if there are any "familiar examples of cubes at low dimensions", so I wanted to point out that commuting squares, common in category theory, are a familiar example.
There's no shortage of more complicated, less degenerate examples.
Are double categories exactly cubical 2-categories? I thought it was a problem that the horizontal and vertical 1-cells could be different.
I don't actually know what a "cubical 2-category" is - it's not something I say. I use double categories a lot, and there's a famous kind of double category where the vertical and horizontal 1-cells are the same, and composed the same way. There's some name for that kind....
"edge-symmetric"?
James Wood said:
By the way, are there any similarly familiar examples of simplices or cubes at low dimensions?
I'm not sure if this is quite what you're thinking of, but I would say that the commutative square in the definition of a natural transformation is a specific example. Of course, as John said, any commutative square is a square, but I think naturality squares are particularly closely related to the appearance of cubes in type theory. For instance, suppose in type theory you have $C, D : A \to \mathcal{U}$ and $f : \prod_{x:A} C(x) \to D(x)$, which induce transport maps $C(x) \to C(y)$ and similarly $D(x) \to D(y)$. Then suppose you have $p : x = y$, meaning an identification of $x$ with $y$ in $A$. Then $p$ should induce an identification of $f_x$ with $f_y$, but this has to be a dependent identification since their codomains $D(x)$ and $D(y)$ are different. This sort of doubly-dependent identification is precisely what gives rise to cubes in type theory, and in this case it's clearly also a naturality square for $f$.
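Spelled out (a sketch, in Book-style notation): for any $u : C(x)$, path induction on $p$ gives an identification
$$\mathsf{transport}^{D}(p, f_x(u)) \;=\; f_y(\mathsf{transport}^{C}(p, u)),$$
and drawing the two transports as horizontal edges and $f_x$, $f_y$ as vertical edges makes this literally a naturality square.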
As for simplices, the simplest sort of simplex is a composition relation $h = g \circ f$. Simplices also come up naturally when you talk about (co)descent objects, which are a sort of 2-categorical (co)limit that generalizes (co)equalizers one dimension up and also truncates geometric realization at dimension 2. Descent objects are so-named because they're the natural way to construct categories of descent data when defining stacks, but they also come up when strictifying (co)algebras for 2-(co)monads and when constructing general 2-dimensional (co)limits out of more basic ones.
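For instance (a sketch): a 2-simplex has edges $f \colon x \to y$, $g \colon y \to z$, $h \colon x \to z$ and a filling 2-cell
$$\mu \colon g \circ f \Rightarrow h,$$
witnessing that $h$ is a composite of $f$ and $g$ - a triangle shape, as opposed to the globular shape, where the two boundary 1-cells must be parallel.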
James Wood said:
Are double categories exactly cubical 2-categories? I thought it was a problem that the horizontal and vertical 1-cells could be different.
I would be comfortable with this description. $n$-fold enrichment in Cat yields globular higher categories, whereas $n$-fold internalisation of a category yields cubical higher categories.
The usual term for an n-fold internal category, though, is not "cubical higher category" but "n-fold category" or "n-tuple category". I think it's best to stick with established terminology here, since the terminology in higher category theory is already confusing enough without making up even more new terms.
Tying this all back together, I'd expect the HoTT type of rings to be a groupoid, but more particularly to naturally form a double groupoid which is like the double category of modules, but with isomorphisms instead of morphisms everywhere.
If you write down the definition of "category" naively in HoTT, you get a structure that looks sort of "double" because you have the morphisms in your "category" but also the paths in the type of objects. The "univalence" condition on a category (see e.g. Chapter 9 of the Book) collapses this information by making the paths coincide with invertible morphisms. Most naturally-occurring 1-categories are univalent when defined straightforwardly in HoTT, but plenty of naturally-occurring 2-categories aren't -- because even in set theory they're better defined as double categories!
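For reference, the Book's definition (§9.1): a precategory $A$ is univalent (a "category" in the Book's terminology) when for all objects $a, b$ the canonical map
$$\mathsf{idtoiso} \colon (a = b) \to (a \cong b)$$
is an equivalence, where $a \cong b$ is the type of isomorphisms in $A$.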
This may be what you're thinking of with modules: if you define the bicategory of rings and bimodules in HoTT, it's not "univalent" because the paths in the type of objects are ring isomorphisms whereas the invertible bimodules are Morita equivalences. Viewed as a double category, this structure consists of the ring isomorphisms and the bimodules. If you want to see the noninvertible ring homomorphisms, of course, you have to define it as a double category inside HoTT.
Mike Shulman said:
I can't find the reference right now, but I think I remember that Lawvere considered $V = L$ as settling the continuum hypothesis positively for "constant sets".
Found it: it's in "Cohesive Toposes and Cantor's 'lauter Einsen'".
The dialectic reflected in this method is that between imagined Being in general with all its interlocking categories of cohesion and variation on the one hand and the extreme special case of discreteness and constancy on the other. Proper understanding of this dialectic may put into a different light debates over 'foundational' questions such as whether there is an infinite set of reals of cardinality less than the continuum, or whether there is a large measurable cardinal. The answers are clearly no if we are pursuing constancy toward an extreme, because Gödel's L construction would otherwise give us something still more constant.... On the other hand, the answer is clearly yes, the GCH is false, if we are considering almost any cohesive variable topos of independent mathematical interest.
Maybe "rigid", in a non-technical sense, is a good synonym for "constant" here, since $L$ is even more highly constrained than $V$ (which is rigid in a technical sense).
Ah, sorry, I completely forgot the details of the bimodules example!
I think there’s more on in ‘Foundations and Applications’
Somewhat related to the discussion at the beginning of this thread: can one find a precise definition of when a first-order formula $\varphi(x)$ in the language of ZFC is "a structural, isomorphism-invariant property", and then prove the following metatheorem: whenever ZFC proves such a property $\varphi(A)$ for some set $A$ and proves $A \cong B$, then ZFC proves $\varphi(B)$? (Of course, this statement should really be formulated allowing $\varphi$ to contain parameters.)
(This question reminds me of the purely syntactic definition, which is due to Freyd I think, of when a first-order sentence $\varphi$ in the language of a single category is invariant under equivalence of categories. I can't recall the details off the top of my head, but one requirement should be that $\varphi$ doesn't talk about equality of objects, another that only parallel morphisms are compared, roughly speaking. Then one can prove that each such $\varphi$ is indeed invariant under equivalence. As a minor question: can one prove something like a converse of that statement? Can there be nontrivial $\varphi$ that don't satisfy this syntactic property, but are nevertheless invariant under equivalence? Of course there are trivial examples, like sentences that are provably equivalent to one of the required syntactic form without themselves being of that form.)
Also, what's wrong with just adding $\prod_{A,B:\mathcal{U}} (A \simeq B) \to (A = B)$ instead of univalence (where $A \simeq B$ is the type of equivalences between $A$ and $B$)?
Maybe that's related to what Mike was referring to when he said:
Another answer is that you don't need things like homotopy theory and Kan complexes if you only want to apply univalence to isomorphism of set-based structures such as groups, rings, fields, etc. For that purpose, it suffices to use a version of HoTT in which only the universe of sets is univalent, and that theory can be modeled using plain ordinary groupoids (as shown by Hofmann and Streicher in 1998).
But to model the above axiom, I would think one doesn't need groupoids.
I don't know of a characterization in terms of ZFC syntax, but a usefully broad class of isomorphism-invariant formulas is those that arise from a dependently typed formulation of ETCS along the lines of Freyd's theorem that you mention. See for instance Lemma 3.4 of my paper Comparing material and structural set theories.
Leopold Schlicht said:
Also, what's wrong with just adding $\prod_{A,B:\mathcal{U}} (A \simeq B) \to (A = B)$ instead of univalence?
You may be able to interpret this in a "skeletal" set-theoretic model, although it's not entirely trivial; I think there was some discussion of this on the HoTT mailing list some time ago. However, it's not a very useful axiom compared to univalence, since when two things are isomorphic, in order to get a meaningful statement about one from a statement about the other, we often need to transfer across the specific isomorphism that we're interested in.
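In symbols: univalence proper says that $\mathsf{idtoeqv} \colon (A = B) \to (A \simeq B)$ is an equivalence, so it comes with an inverse $\mathsf{ua}$ satisfying
$$\mathsf{idtoeqv}(\mathsf{ua}(e)) = e,$$
which is what guarantees that transporting along $\mathsf{ua}(e)$ applies the specific equivalence $e$ you started with. The bare axiom $\prod_{A,B}(A \simeq B) \to (A = B)$ produces some path, with no control over which equivalence it induces.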
Ah, I see, thanks!
What do models of HoTT in cubical sets have to do with computational properties like canonicity or normalization? Models of HoTT are semantic things, while computational properties are rather syntactic concepts, so I don't see any connection.
We know how to do all these computational things for cubical type theory. So far, it's not known how to do so for other flavours.
When you define a "constructive" model of HoTT, the ambient computation of the constructive metatheory gives you a way to evaluate terms: first interpret them in the model and then compute in the metatheory. Historically, at least, this was a "first step" towards giving intrinsic notions of computation for the cubical theory such as canonicity and normalization.
Interesting, so using a constructive metatheory one can show that cubical sets are a model of HoTT, while one needs a classical metatheory to prove that Kan complexes are a model of HoTT, roughly speaking?
That's the current state of the art.
Can the univalence axiom be formulated as $\mathsf{isiso}(\mathsf{idtoeq})$?
I assume by 'isiso' you mean what the book calls 'qinv', and that the codomain of your 'idtoeq' is a correct definition of equivalence. In this case, the answer is yes, at least if you assume funext, since then $\mathsf{isiso}(f)$ is logically equivalent to $\mathsf{isequiv}(f)$. I'm not sure offhand whether this version suffices to prove funext from univalence; you'd have to trace through the proof.
Oh, I actually meant $\mathsf{isiso}(\mathsf{idtoiso})$, where $\mathsf{idtoiso}$ is the canonical map of type $\prod_{A,B:\mathcal{U}} (A = B) \to \mathsf{Iso}(A,B)$, $\mathsf{Iso}(A,B)$ is $\sum_{f : A \to B} \mathsf{isiso}(f)$, and $\mathsf{isiso}(f)$ is $\sum_{g : B \to A} (g \circ f = \mathrm{id}_A) \times (f \circ g = \mathrm{id}_B)$. The motivation of my question is to formulate the univalence axiom without Voevodsky's tricky notion of equivalence that magically happens to make $\mathsf{isequiv}(f)$ a mere proposition, and without $\|\mathsf{isiso}(f)\|$ (at the cost of assuming the existence of propositional truncation). Maybe such a formulation would be both conceptually and pedagogically simpler.
I personally don't think that such a formulation would be conceptually or pedagogically simpler. Voevodsky's formulation of equivalences looks unfamiliar when you use words like "contractible" and "fiber" or whatever, but it is also very natural from the perspective of the naïve "input-output blob" diagrams that they use to teach pre-teens the notions of surjective, injective, and bijective functions...
(I do not mean to downplay the importance of Voevodsky's definition --- I just mean that it is not as alien as you might think)
Oh, I actually meant $\mathsf{isiso}(\mathsf{idtoiso})$
That's also correct if you assume funext separately, for the same reason.
However, apart from questions of "simplicity" which can be debated, there is an important mathematical point: $\|\mathsf{isiso}(f)\|$ is not a very useful definition of equivalence, because you can't extract an inverse function from it without essentially giving one of the correct definitions of equivalence and proving its equivalence to that.
Thanks!
Mike Shulman said:
Another answer is that you don't need things like homotopy theory and Kan complexes if you only want to apply univalence to isomorphism of set-based structures such as groups, rings, fields, etc. For that purpose, it suffices to use a version of HoTT in which only the universe of sets is univalent, and that theory can be modeled using plain ordinary groupoids (as shown by Hofmann and Streicher in 1998). It's only when you want to start applying univalence to equivalence of categorical structures that you need higher categorical models
Why, actually, does one need higher categorical models in order to apply univalence to equivalence of categories? Simply because one wants there to be a category of sets (whose underlying type of objects isn't a set), or did you have further reasons in mind when writing this?
I guess the "version of HoTT in which only the universe of sets is univalent" you are referring to is MLTT with so-called universe extensionality. At the moment I am a bit confused, because isn't Exercise 4.6 in the book saying that universe extensionality is inconsistent?
Is there a conceptual reason for $\mathsf{isequiv}(f)$ to be a mere proposition? To me this seems to be a statement that just randomly happens to be true. But since this fact is so central to HoTT, I would like to hear a reason that is conceptually more pleasing, or any other simple motivation for the definition of $\mathsf{isequiv}$.
Leopold Schlicht said:
Why, actually, does one need higher categorical models in order to apply univalence to equivalence of categories?
Because applying univalence to equivalence of categories requires the categories to be "univalent" themselves, which requires their type of objects to be a 1-type. So if there is a type of categories to which to apply univalence, it must be a 2-type.
I guess the "version of HoTT in which only the universe of sets is univalent" you are referring to is MLTT with so-called universe extensionality.
MLTT with one universe of sets that satisfies universe extensionality, or equivalently univalence (the two are equivalent for a universe of sets).
At the moment I am a bit confused, because isn't Exercise 4.6 in the book saying that universe extensionality is inconsistent?
It's inconsistent for a universe that contains non-sets. (More specifically, particular non-sets such as the circle or a smaller universe.)
Is there a conceptual reason for $\mathsf{isequiv}(f)$ to be a mere proposition?
In set-based mathematics, isomorphisms and equivalences are universally defined to be morphisms with the property of being an isomorphism/equivalence. In particular, if we want to show that two equivalences are the same, it suffices to show they are the same function; we don't have to separately argue that their "equivalence data" are the same. Requiring isequiv to be a mere proposition achieves this same result in HoTT.
Plus, as Exercise 4.6 says, otherwise univalence is inconsistent. (-:
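In symbols: because $\mathsf{isequiv}$ is a mere proposition, paths between equivalences are just paths between their underlying functions (cf. Lemma 3.5.1 of the book):
$$\big((f, e) =_{A \simeq B} (f', e')\big) \;\simeq\; (f = f').$$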
Thanks!
Mike Shulman said:
Because applying univalence to equivalence of categories requires the categories to be "univalent" themselves, which requires their type of objects to be a 1-type. So if there is a type of categories to which to apply univalence, it must be a 2-type.
These two sentences somehow don't compile in my brain. What do you mean by "applying univalence to equivalence of categories"? What does it mean for a category to be "univalent"?
Requiring isequiv to be a mere proposition achieves this same result in HoTT.
I see, but this just explains why one wants isequiv(f) to be a mere proposition, and doesn't motivate Voevodsky's subtle definition of isequiv(f) and why it is a mere proposition. Of course, one could "justify" the definition by formally proving that it is a mere proposition. But I would like to know an intuitive reason for that and, moreover, understand why it is natural to come up with this definition. (My goal in https://categorytheory.zulipchat.com/#narrow/stream/229952-theory.3A-type.20theory/topic/model.20of.20univalence/near/241761226 was to propose another definition that I consider to be "natural", but as you pointed out, it isn't that useful.)
What do you mean by "applying univalence to equivalence of categories"? What does it mean for a category to be "univalent"?
Have you read chapter 9 of the HoTT Book?
Sort of, but only parts of it in detail. :sweat_smile:
It should answer those questions.
I see, but this just explains why one wants isequiv(f) to be a mere proposition, and doesn't motivate Voevodsky's subtle definition of isequiv(f) and why it is a mere proposition.
Oh, I see I misread your question. I thought you were asking "why should isequiv be a proposition?" rather than "why does some particular definition of isequiv happen to be a proposition?".
If you're asking about Voevodsky's definition of isequiv, then the reason that's a proposition is that iscontr is a proposition. As for why iscontr is a proposition, one way to understand this intuitively is that to prove a type is a proposition, it suffices to prove this under the assumption that it is inhabited. But once iscontr(A) is inhabited, then A is contractible, and so pretty much anything we construct out of it will also be contractible, including iscontr(A) itself. Specifically, iscontr(A) consists of a center and a contraction, but when A is contractible, there is a contractible space of such points and a contractible space of contractions to any such point.
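For reference, Voevodsky's definitions are:
$$\mathsf{fib}_f(b) :\equiv \sum_{a:A} (f(a) = b), \qquad \mathsf{iscontr}(A) :\equiv \sum_{a:A} \prod_{x:A} (a = x), \qquad \mathsf{isequiv}(f) :\equiv \prod_{b:B} \mathsf{iscontr}(\mathsf{fib}_f(b)).$$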
For other definitions of isequiv, one can give different intuitions. For instance, for a half-adjoint equivalence one can make an argument about how a ball has a CW-presentation with 2 cells at each dimension except 1 at the top.
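For reference, a half-adjoint equivalence structure on $f \colon A \to B$ is
$$\mathsf{ishae}(f) :\equiv \sum_{g : B \to A} \; \sum_{\eta : g \circ f \sim \mathrm{id}_A} \; \sum_{\epsilon : f \circ g \sim \mathrm{id}_B} \; \prod_{x:A} \mathsf{ap}_f(\eta_x) = \epsilon_{f(x)},$$
i.e. a quasi-inverse together with one coherence cell relating the two homotopies.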
That's exactly what I was asking for, thanks! :-)