You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I saw on the category theory calendar that Kevin Buzzard gave a Zoom talk on April 1st. Is there a video available anywhere?
He spoke at the OWLS seminar, and they don't seem to mention videos - am I overlooking it?
https://www.cs.bham.ac.uk/~vicaryjo/owls/
apparently there isn't a recording available yet (via the HoTT Zulip)
Okay, thanks
The recording is promised to be put up on YouTube, but Jamie had the problem that the recording existed as two files: an audio file and a video file with no sound. I guess he's just got to find the time to faff about combining them and uploading the result.
Okay, thanks for letting me know. That's curious. Zoom creates a video with sound. I guess Jamie is stuck in the "silent movie" era.
He did record using Zoom, I asked. I've done a bit of googling and sometimes mp4 files play without sound, something to do with codecs and whatnot.
Based on what I've seen at MathOverflow, I'd almost bet money that Buzzard's answer would be 'no'. (But hey, surprise me.)
Another case of Betteridge's law?
Over at the HoTT zulip, Kevin has seemed very open-minded. I didn't manage to watch the talk, but from the slides that he shared, it seems that his answer is "we need to formalize lots of undergraduate mathematics in a HoTT proof assistant, in order to find out."
Yes, it's a bit like his attitude to formal proofs (as I understand it) before he became enamoured of Lean. We need to do a bunch of stuff to see what happens. Formalisation is as close to experimental pure mathematics as one can get, I feel (pure numerical work in number theory notwithstanding).
Mike Shulman said:
Over at the HoTT zulip, Kevin has seemed very open-minded. I didn't manage to watch the talk, but from the slides that he shared, it seems that his answer is "we need to formalize lots of undergraduate mathematics in a HoTT proof assistant, in order to find out."
I think the story is as follows. Kevin was trying to formalize some stuff in Lean. In doing so, he needed to prove some statements that clearly are theorems of Lean (and ZFC, and other non-univalent systems), but the proofs are long and inconvenient. He asked what would be a convenient way of proving these statements. Some HoTT enthusiasts suggested that univalence was the answer, because the proofs of these statements would be trivial in HoTT. So Kevin is asking these people to "put their money where their mouth is" by formalizing his stuff in HoTT.
In my opinion it's rather strange to adopt univalence for the sole purpose of getting shorter proofs of things that were already provable without univalence.
Paul Blain Levy said:
In my opinion it's rather strange to adopt univalence for the sole purpose of getting shorter proofs of things that were already provable without univalence.
This was arguably Voevodsky's motivation. He didn't want to work with higher inductive types or other "synthetic" homotopy constructs until the models and their relation to standard constructions (e.g. standard model categories, etc.) were fully worked out and understood (on which great progress has now been made). So the system he worked in, iirc, was something like MLTT + truncation + choice when warranted + univalence.
More generally, even in the presence of higher inductive types, I think it would be pretty shocking to get proofs of things _really_ not provable without univalence with the addition of univalence. If all it lets you do is "tighten up" equivalences into equalities, then anything you prove with it should be able to be "laxified" into something nearly the same, but with some extra equivalences lying about that can be proved without it.
Arguably the convenience it buys you is that once you get those equalities, then you have a straightforward way to apply transport along them.
Paul Blain Levy said:
In my opinion it's rather strange to adopt univalence for the sole purpose of getting shorter proofs of things that were already provable without univalence.
Well, if your goal is to have 'generic mathematicians' jumping on the train, it's quite crucial that trivialities don't become cumbersome. Moreover, univalence reflects much of the laxity that mathematical practice regularly employs, namely identifying equivalent objects.
Hence, given that premise, it seems to me that Kevin was stressing that univalence is a great 'psychological' tool.
Gershom said:
More generally, even in the presence of higher inductive types, I think it would be pretty shocking to get proofs of things _really_ not provable without univalence with the addition of univalence. If all it lets you do is "tighten up" equivalences into equalities, then anything you prove with it should be able to be "laxified" into something nearly the same, but with some extra equivalences lying about that can be proved without it.
Maybe I'm misunderstanding what you mean, but you do need univalence to show that certain higher inductive types have the expected homotopy groups. For example, without univalence it is consistent that the circle is contractible, as the set model shows.
Yeah, univalence is essentially making the universe a higher inductive type, and without it, there's no accessible structure to do non-trivial 'large' elimination of other higher inductive types. So you're in a similar situation to where you can't prove disjointness of constructors without the universe.
You'd probably have to laxify all the HITs involved.
However, "someone could, in principle, write down all the structure and prove things without univalence," is a criterion that will probably always be satisfied, which I think was the point.
My confusion is, I suppose, that it is not clear to me that one needs univalence to rule out the set model. My understanding, which I guess may be wrong, is that the presence of certain higher inductive types with sufficient elim rules alone (maybe ad-hoc large-elims per type) suffices. I should have thought through and specified what I meant by "HITs" in the above though.
Perhaps if there were a special principle for elimination into the universe that allowed you to provide equivalences for higher constructors to go to, that would technically allow you to avoid the exact univalence axiom. Although I couldn't rule out being able to prove univalence from that. It seems like that would be the same thing to a resistant mathematician, though.
looking at the fundamental group of the circle in the hott book, i suppose another point is that to even state the definitions used in the proof you need univalence
Right, that's where you would need the special elimination. So that you can eliminate `loop` to `succ`.
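For concreteness, here is roughly how that family of codes looks in Cubical Agda (a sketch; the imports and lemma names follow the agda/cubical library but may differ between versions):

```agda
{-# OPTIONS --cubical #-}
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Isomorphism using (iso; isoToPath)
open import Cubical.Data.Int using (ℤ; sucℤ; predℤ; sucPred; predSuc)
open import Cubical.HITs.S1 using (S¹; base; loop)

-- univalence (packaged here as isoToPath) turns the successor
-- isomorphism on ℤ into a path of types
sucPath : ℤ ≡ ℤ
sucPath = isoToPath (iso sucℤ predℤ sucPred predSuc)

-- the codes family over the circle: base goes to ℤ, and the higher
-- constructor loop goes to sucPath; without univalence there is
-- nowhere nontrivial in the universe to send loop
Code : S¹ → Type₀
Code base     = ℤ
Code (loop i) = sucPath i
```

So eliminating `loop` to `succ` is exactly what `Code` does, and even stating it needs a path `ℤ ≡ ℤ` in the universe.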
Anyway, I shouldn't try to salvage my misstatement further :-). The point I should have stopped at is just that if one is not engaged in synthetic homotopy theory with HITs, then univalence largely just makes things "nicer". Once you have a richer univalent universe that you want to prove things about, then a priori, one way or another, you're going to be working with more axioms regarding that structure than you otherwise would.
I suppose I now wonder what Voevodsky wanted to do with mainly MLTT+univalence. I guess maybe you can avoid using HITs because the constructions involving them can be encoded in other ways. Like, a family on the circle is equivalent to a type and a loop in that type, and there are encodings of sections of that family and so on.
He didn't want to do anything that different than MLTT without univalence, if I recall, just from a "univalent" point of view. He did make use of the fact that univalence -> FunExt, and a few other things. Here's an article on the UniMath project that may or may not help: https://arxiv.org/pdf/1710.02723.pdf
Well, I guess part of it is that intensional type theory can already be seen as a setting for ∞-groupoids, unlike most other systems, and univalence adds the right sort of extensionality for that. But without HITs it seems like you're probably missing most of the homotopy types people would want to talk about.
So I'm not sure what activities people would do. I guess you could use it like a pure type system, where even if there aren't any defined types other than Π and the sorts, you can talk about things in a non-empty context.
Dan Doel said:
I suppose I now wonder what Voevodsky wanted to do with mainly MLTT+univalence. I guess maybe you can avoid using HITs because the constructions involving them can be encoded in other ways. Like, a family on the circle is equivalent to a type and a loop in that type, and there are encodings of sections of that family and so on.
I would argue that the purpose of HITs, besides synthetic homotopy theory, is to provide a non-axiomatic (and thus computable) definition of quotients and propositional truncations, for instance. These existed to some extent in previous systems: a strict version of propositional truncation is available in CiC as an inductive in Prop, and non-axiomatic quotients are available in Observational Type Theory, if I understand what OTT is about. The goal of HITs is then to generalize the theory of inductive data types to accommodate these examples. It happens that generalizing the theory of inductives gives us weird things like the Circle, whose usefulness we may wonder about :upside_down:
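For instance, in Cubical Agda both can be written down directly as HITs. A sketch (essentially the definitions in the agda/cubical library, ignoring universe levels):

```agda
{-# OPTIONS --cubical #-}
open import Cubical.Foundations.Prelude

-- propositional truncation: one point constructor, plus a path
-- constructor squashing any two inhabitants together
data ∥_∥ (A : Type₀) : Type₀ where
  ∣_∣    : A → ∥ A ∥
  squash : (x y : ∥ A ∥) → x ≡ y

-- set quotients: points, paths generated by the relation R, and a
-- constructor truncating the result to a set
data _/_ (A : Type₀) (R : A → A → Type₀) : Type₀ where
  [_]     : A → A / R
  eq/     : (a b : A) → R a b → [ a ] ≡ [ b ]
  squash/ : (x y : A / R) (p q : x ≡ y) → p ≡ q
```

And these compute, rather than being postulated as axioms.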
i don't know if ott had quotients—it did have funext
i guess funext + a universe w/ propext probably gets you quotients, but i dont think the original ott paper even had a universe, let alone propext
(maybe there was a followup)
ive heard reference to something called "setoidtt"
There was a pretty late paper or blog post on how you could add quotients to OTT. However, the work on the homotopy aspects of type theory leads to a more satisfying computational account of things like quotients (and funext) than might have been discovered for a very long time without it.
But the statement above was that Voevodsky was relatively content with MLTT+truncation+choice+univalence. And he wanted to do high dimensional mathematics that he thought was infeasible in set theory. My point is, I'm not clear how he would do that without stuff like HITs, unless it was entirely by proving things about hypothetical structures (which, maybe that was the plan, I don't know).
In the case of OTT, the answer is always about the same. Props are irrelevant, and equality is a prop, so you can add any axioms to it that you want, if that helps you.
Also, the thing about OTT was that it seemed compelling that quotienting should be a third way to define types, like data and codata. But now quotient/higher (co)inductive types seem more compelling.
If you take a look at the linked paper, the "higher dimensionality" he saw was not "synthetic" but really in terms of h-levels. So he wanted to be able to reason up to equivalence, but he imagined, at least at first, constructing definitions of concrete mathematical objects essentially as one would in MLTT typically.
Sarah Reboullet said:
I would argue that the purpose of HITs, besides synthetic homotopy theory, is to provide a non-axiomatic (and thus computable) definition of quotients and propositional truncations, for instance. These existed to some extent in previous systems: a strict version of propositional truncation is available in CiC as an inductive in Prop, and non-axiomatic quotients are available in Observational Type Theory, if I understand what OTT is about. The goal of HITs is then to generalize the theory of inductive data types to accommodate these examples. It happens that generalizing the theory of inductives gives us weird things like the Circle, whose usefulness we may wonder about :upside_down:
Of course, that's totally backwards historically. We invented HITs in order to construct things like the circle, and later noticed that they also gave quotients and truncations. I guess from your emoji that you're aware of that, but I wanted to emphasize it in case anyone else wasn't.
Gershom said:
My confusion is, I suppose, that it is not clear to me that one needs univalence to rule out the set model. My understanding, which I guess may be wrong, is that the presence of certain higher inductive types with sufficient elim rules alone (maybe ad-hoc large-elims per type) suffices.
I don't know of any natural axiom that rules out the set model and doesn't imply some form of univalence. HITs alone are perfectly consistent with set-level models, and if you add "univalence-inspired large elims" then you can prove univalence. (Coincidentally, @Egbert Rijke just mentioned this over in the HoTT zulip.)
(Although of course, in a type theory that doesn't have any universes you can't even state univalence, but you can still state a form of univalence-inspired large elim for HITs.)
This is total speculation, but I could imagine not wanting to use HITs because the details hadn't been worked out. IIRC, the presentation of some of the HITs in the HoTT Book is a bit of a mess, given that some of the laws you'd expect to be definitional equalities are instead stated as mathematical equalities. I think those issues have since been fixed by moving away from globular assumptions, but I know at least HIITs are not really worked out yet.
I think what the paper says is essentially what I was saying. Without HITs, you can still prove things about defined structures. Like, you can prove that the type of all groupoids in U₀ is itself a 2-groupoid, even though there's no way to give an example of a groupoid in U₀ (I think).
So, you can prove theorems about structures even though there aren't any examples to apply them to. If you have additional universes you can get particular examples of some things, too, but I'm not sure how far that goes.
For example, the type of groups in U₀ is a groupoid in U₁, I think. Without anything "synthetic" it just becomes harder to give smaller, fully concrete examples of higher structure, I think.
We have a stream for #theory: type theory.
The article is titled "Homotopy Type Theory: The Logic of Space". Does this title mean that the structured categories of homotopy type theory suffice to internalize more or less every notion of 'space'? Does it mean, for example, that such a structured category has all the categorical properties that the category of smooth manifolds admits?
I don't think anyone has completely captured the category of smooth manifolds using HoTT plus extra axioms. I could be wrong!
The category of smooth manifolds lacks many nice features: it doesn't have all equalizers, it doesn't have all coequalizers, it's not cartesian closed, etc. But there are other categories of smooth spaces that do have these features.
There was a discussion about HoTT on the lean4 channel a few days ago, where Kevin Buzzard explained what his problems with HoTT were. See the thread I started, "Web Programming with Lean4".
thx a lot. Let me see!
In general you can't expect any kind of synthetic mathematics to "completely capture" any particular model. Arguably, that's the whole point. (-:
Kevin, on the Lean Zulip, makes some mathematical points that I cannot follow (as I am a (philosophical) engineer, not a mathematician). His argument seems to be that undergraduate and early postgraduate mathematics students don't need much more than what Lean offers. But I don't know exactly what those students learn.
Could it be that as software engineers we benefit from using tools that require more advanced maths than what they learn at that level, or that we would benefit from tools that use such maths? For example, Linear Logic is entering programming through languages such as Rust and Idris2, and modal linear logic through Granule... A lot of software engineers are embracing Rust for that reason, even if they've never heard of Linear Logic. Is that something that the (under)graduate mathematicians come across? If not, then it may be that Cubical HoTT is very useful for programmers using Lean as a programming language (or Cubical Agda, for that matter), even if not that interesting to mathematicians (at the level he is concerned about). For example, if it would allow me to prove things about simple data structures and then transport those proofs to efficient ones for the programs I want to compile, that could be a huge thing.
Almost no undergrad math majors come across linear logic.
With luck they learn classical logic: the propositional calculus and first-order predicate calculus, "classical" because the law of excluded middle and thus proofs by contradiction play a huge role in their thinking. I believe most of them learn these things informally.
(I took courses on logic in the philosophy department as an undergrad so I actually learned about formal mathematics. Maybe things have changed a bit, but I think formal mathematics, e.g. starting with axioms for logic, is not what math undergrads do.)
A lot of math students first learning about proofs fall in love with proof by contradiction and use it even when it makes the argument longer!
Proof by contradiction is often easier for the brain to hold onto, imo.
Another problem is that most introductory logic courses don’t distinguish between a proof by contradiction and just proving a negation.
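In Lean 4 syntax, for instance, the two look superficially similar, but only one of them needs a classical principle (a quick sketch):

```lean
-- Proving a negation: assume P and derive False. This is just
-- unfolding the definition ¬P := P → False; no classical logic.
example (P : Prop) (h : P → False) : ¬P :=
  fun hp => h hp

-- Proof by contradiction: assume ¬P, derive False, conclude P.
-- This direction genuinely requires classical logic.
example (P : Prop) (h : ¬P → False) : P :=
  Classical.byContradiction h
```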
Joe Moeller said:
Proof by contradiction is often easier for the brain to hold onto, imo.
Here's one thing you'll never hear though:
Proposition: This diagram commutes.
Proof: Suppose it didn't...
Henry Story said:
it may be that Cubical HoTT is very useful for programmers using Lean as a programming language (or Cubical Agda, for that matter), even if not that interesting to mathematicians (at the level he is concerned about). For example, if it would allow me to prove things about simple data structures and then transport those proofs to efficient ones for the programs I want to compile, that could be a huge thing.
It's possible. This has always been one of the things that CS HoTT people hope for, or at least say on grant applications that they hope for. I don't know whether anyone has ever actually achieved some concrete benefit of this sort, though. You should try!
Anders Mortberg is working on examples of this sort of thing. Check out his work here: https://staff.math.su.se/anders.mortberg/
Todd Trimble said:
Joe Moeller said:
Proof by contradiction is often easier for the brain to hold onto, imo.
Here's one thing you'll never hear though:
Proposition: This diagram commutes.
Proof: Suppose it didn't...
Very true! One of the charms of category theory is its simple, direct character, and diagram-chasing is all about stringing together implications between collections of equations. I think people who like category theory tend to love how much you can do with such simple reasoning. Maybe people who don't like category theory find it "trivial" or "abstract nonsense".
Steve Awodey said:
Anders Mortberg is working on examples of this sort of thing. Check out his work here: https://staff.math.su.se/anders.mortberg/
The two I have seen are the introductory article "Cubical Agda: a dependently typed programming language with univalence and higher inductive types", which starts off with how one can transport proofs from the inefficient but simple unary representation of natural numbers to the efficient binary representation used by computers (a sketch of that transport idea is below). I can see a lot of applications of this. E.g. RDF has a simple graph structure, but there are many different encodings with better efficiency characteristics for search, updates, etc...
Also, I found his talk on conatural numbers very easy to follow, and there he shows how it ties in nicely with Agda's codata types.
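The unary-to-binary transport mentioned above might look roughly like this in Cubical Agda (a sketch: here `Bin` and the equivalence `ℕ≃Bin` are postulated as placeholders, whereas the paper constructs them concretely):

```agda
{-# OPTIONS --cubical #-}
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Univalence using (ua)
open import Cubical.Foundations.Equiv using (_≃_)
open import Cubical.Data.Nat using (ℕ; isSetℕ)

-- hypothetical: a binary representation and its equivalence with ℕ
postulate
  Bin   : Type₀
  ℕ≃Bin : ℕ ≃ Bin

-- univalence turns the equivalence into a path of types...
ℕ≡Bin : ℕ ≡ Bin
ℕ≡Bin = ua ℕ≃Bin

-- ...along which any property proved for ℕ transports to Bin for
-- free; e.g. (a silly but illustrative case) the proof that ℕ is
-- a set yields a proof that Bin is a set
isSetBin : isSet Bin
isSetBin = subst isSet ℕ≡Bin isSetℕ
```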
In programming, the most important application of monads is for Input/Output and effects (where linear logic comes in). So I look forward to some awesome tools emerging from the 2-level type theories used for the development of Modal HoTT (see the nLab page)... But that is still in development, which should give me some time to catch up :-)
Here is a very nice introduction to coinductive proofs of equality using Cubical Agda by @taooftypes. One can prove that an equality of streams is a stream of equalities in 2 copattern matches that mirror inductive proofs. https://www.youtube.com/watch?v=-fhaZvgDaZk
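The two copattern matches in question look something like this (my reconstruction, following the example in the Cubical Agda paper rather than the talk's exact code):

```agda
{-# OPTIONS --cubical --guardedness #-}
open import Cubical.Foundations.Prelude

record Stream (A : Type₀) : Type₀ where
  coinductive
  field
    head : A
    tail : Stream A
open Stream

-- "a stream of equalities": pointwise bisimilarity, itself coinductive
record _≈_ {A : Type₀} (xs ys : Stream A) : Type₀ where
  coinductive
  field
    ≈head : head xs ≡ head ys
    ≈tail : tail xs ≈ tail ys
open _≈_

-- bisimilarity implies path equality, in exactly two copattern matches
bisim : {A : Type₀} {xs ys : Stream A} → xs ≈ ys → xs ≡ ys
head (bisim p i) = ≈head p i
tail (bisim p i) = bisim (≈tail p) i
```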
@Henry Story / @Mike Shulman
I'm a CS person (an engineer, I've had exactly zero formal training) who did some work to generate Agda from Rust types, where we did some "proofs" in Rust's type system. One of those things was to prove some impossibilities vis-à-vis carving up the bits of a 32-bit unsigned integer as it represents fields in a register. I based most of the proofs on that, and wrote an injectivity proof to round-trip to my base-2 numbers and back, but was too lazy to add the truncation (not in the HoTT sense, but literally dropping trailing zeros in the base-2 rep). This, I think, would have been made easier with univalence / HITs, thanks to quotients on the base-2 side making trailing zeros irrelevant.
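For what it's worth, the kind of HIT that would make trailing zeros irrelevant might look like this (a hypothetical sketch; the binary numbers in the agda/cubical library play a similar game):

```agda
{-# OPTIONS --cubical #-}
open import Cubical.Foundations.Prelude

-- little-endian bit strings: x0_ and x1_ prepend a low-order bit.
-- the path constructor identifies the one-bit word "0" with the
-- empty word, which, by congruence, kills every trailing zero
data Bin : Type₀ where
  nil   : Bin
  x0_   : Bin → Bin
  x1_   : Bin → Bin
  drop0 : x0 nil ≡ nil
```

With this, e.g. `x1 x0 nil` and `x1 nil` are identified via `cong x1_ drop0`, so round-trip proofs never have to care about the padding.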
I'm saying this all to say, those things that we say sometimes are “just engineering” or places where we “just need to turn the crank” could be made radically easier. I'm a firm believer that Voevodsky was right even if I lack the capacity to properly understand why!
How do I even LaTeX on Zulip.
Open and close with two dollar signs instead of just one.
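For instance, typing $$\pi_1(S^1) \cong \mathbb{Z}$$ should render the fundamental group of the circle (an arbitrary example).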
Your use cases in the car industry, @dan pittman, are very appealing (I watched a couple of your talks).
It would be very interesting to write up, or give a talk on, a case where you use HoTT to transport a proof from one structure to another. Perhaps that could get the Lean4 folks to take seriously both compilation to Rust and the development of HoTT. (I mention Lean4 as it produces C code, so having it produce Rust may not be such a big project, and someone did produce a Cubical version of Lean2.)
Agda is far more advanced, of course, as it has an implementation of Cubical HoTT. (I actually wonder why Idris does not get closer to Rust, as it has linear types; but it does not have Cubical.) It looks like each language is missing some piece...
An appealing use case involving AI and sensors in cars using #rustlang and #Agda to blur the distinction between proof and code using Curry-Howard and Dependent types https://www.youtube.com/watch?v=jrVPB-Ad5Gc
Lean already has some form of quotients, which satisfy Kevin.
Yes, but Kevin is a mathematician, and he's not trying to write code to interact with AI in cars :-) So it may be that coinduction, cubical, and all these other features are really important in these process oriented use cases in a way that they are not in number theory.
To give a more specific reference to applications of Cubical Agda to programming, see the recent POPL paper of Angiuli, Cavallo, Mortberg, and Zeuner: https://arxiv.org/abs/2009.05547
Here they show how to apply univalence (and the structure identity principle) to programming problems. An interesting aspect of this work is the way it shows that many applications of "parametricity" (in the Reynolds sense) in programming are actually redundant in the presence of univalence.
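A minimal caricature of how the structure identity principle gets used (the paper, and Cubical.Foundations.SIP in the agda/cubical library, do this properly; Magma and Assoc here are toy names of mine):

```agda
{-# OPTIONS --cubical #-}
open import Cubical.Foundations.Prelude

-- a toy notion of structure: a type equipped with a binary operation
Magma : Type₁
Magma = Σ Type₀ (λ A → A → A → A)

-- a property stated over such structures
Assoc : Magma → Type₀
Assoc (A , _·_) = (x y z : A) → (x · y) · z ≡ x · (y · z)

-- the SIP turns a structure-preserving equivalence M ≃ N into a
-- path M ≡ N; once you have the path, every property transports
transportAssoc : (M N : Magma) → M ≡ N → Assoc M → Assoc N
transportAssoc M N p = subst Assoc p
```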
This is very good, because parametricity is in essence only a statement about definability (my program does the right thing because you can't use it in a way that makes it go wrong) --- hence any program correctness lemma that follows only from parametricity could in principle be invalidated if the program is interpreted in an execution environment where more "contexts" are possible. One answer to this problem is 'internal parametricity', but this is a priori a very .... bold direction to go because there is no reason to believe that models of internal parametricity actually say anything interesting about standard mathematics (as opposed to programming).
So, by showing how far you can get with just univalence, the authors increase the unity of ideas between programming and ordinary mathematics in a way that finally pays off from the CS perspective.