You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I have seen a bunch of stuff that has piqued my interest in Homotopy Type Theory, so I've made a few stabs at digging into the HoTT book. As I have no formal training in type theory, I've gotten bogged down in the details in chapter 1 every time. I am thinking about making another stab at it, so I have a pair of related questions:
I'm not sure I want to start a reading group just yet, but the type of answer I might give to that (implicit) question in the future may be dependent on the answers to the enumerated questions. :upside_down:
I hope you know "Π" is a fancy name for "product" (as in "cartesian product"), which includes "and", while "Σ" means "sum" (= "coproduct" = "disjoint union"), which includes "or".
I think if you keep these things in mind, and go through some examples to make sure you understand why "and" is a product and "or" is a coproduct, you can nod and say "yeah, that makes sense" when the book says stuff about Π types and Σ types.
I think feeling that stuff makes sense is probably enough at first; you don't need to worry too much about how a few basic axioms imply all the rest.
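To make the "and = product, or = sum" point concrete, here is a small sketch in Lean 4 (any proof assistant with these types would do):

```lean
-- "and" behaves like a product (you can project out each half),
-- "or" like a sum/coproduct (you inject into it from either side):
example (A B : Prop) (h : A ∧ B) : A := h.1
example (A B : Prop) (h : A) : A ∨ B := Or.inl h
-- the corresponding type-level constructions:
example (A B : Type) (p : A × B) : A := p.1
example (A B : Type) (a : A) : A ⊕ B := Sum.inl a
```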
Those "fancy names" are what I have as my fallback intuitive understanding, but that doesn't seem to quite correspond to how the HoTT book defines Π and Σ types. There, Π types are dependent function types, and Σ types are dependent pair types. Product (cartesian product) and coproduct types are introduced after Π and Σ types, respectively. That might be part of what's contributing to my confusion.
The relation between and/or and product/coproduct is almost second-nature for me. Maybe I should give an example of the kind of thing I'm getting hung up on.
If you could give an example, I could probably help. I read the first chapter of the HoTT book in December and it made sense to me.
A conventional cartesian product X₁ × X₂ is really just the set of dependent functions ∏ (i : {1,2}), Xᵢ. A particular tuple (x₁, x₂) corresponds to the dependent function sending i to the element xᵢ.
Not sure if this addresses your confusion though...
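This identification can be sketched in Lean 4, with Bool standing in for the two-element index set (the names are mine):

```lean
-- a two-element family: Fam X₁ X₂ sends one index to X₁, the other to X₂
def Fam (X₁ X₂ : Type) : Bool → Type
  | false => X₁
  | true  => X₂

-- the tuple (x₁, x₂) corresponds to the dependent function i ↦ xᵢ
def tupleToFun {X₁ X₂ : Type} (p : X₁ × X₂) : (i : Bool) → Fam X₁ X₂ i
  | false => p.1
  | true  => p.2
```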
Shortly after showing every element of a product type is equal to an ordered pair:
[...] we call the resulting function induction for product types: given A, B : 𝒰 we have

ind_{A×B} : ∏ (C : A×B → 𝒰), (∏ (x:A) ∏ (y:B), C((x,y))) → ∏ (x:A×B), C(x)

with the defining equation

ind_{A×B}(C, g, (a,b)) :≡ g(a)(b)
Checking that the type of the defining equation matches the type stated is making my head spin. And it doesn't help that the defining equation line is right after a pagebreak.
As I understand it, the scope of the first ∏ extends all the way to the end of the expression. My intuition for how the HoTT book explains ∏ types is that we have a dependent function from something of the type A×B → 𝒰 to something of the type "everything after the first ∏", where C is the variable that, when evaluated at a particular type family, will set the type of the rest of the expression.
I'm guessing this isn't the best intuition, but it seems to fit the earlier, simpler examples, where the body is the type B(x). That is, we get something f : ∏ (x:A), B(x) where f(a) : B(a).
I figure I ought to be able to type-check things, so that when I actually go do stuff I can make a first sanity check that I'm not spewing nonsense.
OK so here's an intuition for ind_{A×B}. You can think of it as a C-indexed family of functions of type

(∏ (x:A) ∏ (y:B), C((x,y))) → ∏ (x:A×B), C(x)

So we have ind_{A×B}(C) : (∏ (x:A) ∏ (y:B), C((x,y))) → ∏ (x:A×B), C(x).
What does ind_{A×B}(C) do? It takes a curried function g : ∏ (x:A) ∏ (y:B), C((x,y)) and outputs an uncurried function of type ∏ (x:A×B), C(x).
Does this address your confusion?
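For what it's worth, this reading of ind_{A×B} as a dependent uncurrying can be written down directly; a sketch in Lean 4 (the name indProd is mine):

```lean
-- ind_{A×B}: given a motive C and a curried g, produce a function on pairs
def indProd {A B : Type} (C : A × B → Type)
    (g : (x : A) → (y : B) → C (x, y)) :
    (p : A × B) → C p
  | (a, b) => g a b
```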
So where did ind_{A×B}(C)(g) come from? Is it the uncurried version of g? And then why does the defining equation end up with the curried application g(a)(b)?
Yes, ind_{A×B}(C)(g) is the uncurried version of g.
The defining equation is uncurrying g. We can write h :≡ ind_{A×B}(C)(g). Then the defining equation becomes h((a,b)) :≡ g(a)(b).
As stated in the HoTT book, ind_{A×B} seems to be taking the following input: C, the curried function g, and the ordered pair (a,b), and outputs the curried function g applied to a and b.
Well, the defining equation is saying that when we apply ind_{A×B}(C) to uncurry g, the result must truly be an uncurrying of g. So when we plug (a,b) into ind_{A×B}(C)(g), the result must be the same as g(a)(b).
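One way to convince yourself of this mechanically: in Lean 4 the built-in eliminator Prod.rec plays the role of ind_{A×B}, and the defining equation holds definitionally (assuming I have the correspondence right):

```lean
-- ind_{A×B}(C, g, (a,b)) ≡ g(a)(b), checked by rfl
example {A B : Type} (C : A × B → Type)
    (g : (x : A) → (y : B) → C (x, y)) (a : A) (b : B) :
    Prod.rec (motive := C) g (a, b) = g a b := rfl
```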
I understand what you're saying, and why that would make sense, but I am not seeing that in what's in the book.
That's one way of looking at it. But everything is automatically curried in HoTT, so a function taking the inputs x, y, and z and outputting E, where E is some expression with 3 free variables in which I'm substituting x, y, and z, could equally be considered a function taking just two inputs, x and y, and outputting a function taking the third input z and outputting E.
In this case, you can take the last input, the ordered pair (a,b), and say that this is not actually an input: rather, it is the input to the function which is the output of the whole thing.
So then the output would be the function that takes ordered pairs and applies the curried function to them.
And the input would be just C and g.
I haven't even gotten to the part of the book that is about HoTT yet. This is supposed to be ordinary type theory. Up to now the book has been careful to differentiate between curried and uncurried functions, saying that they'll be conflated much later.
Sorry I think I'm conflating ordinary type theory and HoTT. In ordinary type theory, currying is the default way to have a function of multiple variables.
I mean, you're already thinking this way when you say that ind_{A×B} has multiple inputs.
Actually, I would be surprised if it wasn't the default way in both.
If you were speaking entirely literally, you would say it has just one input, C, and a big complicated output.
But actually if you like, we can speak literally here. ind_{A×B}(C) is a function which takes a curried function as input and gives an uncurried one as output.
[...] we call the resulting function induction for product types: given A, B : 𝒰 we have

ind_{A×B} : ∏ (C : A×B → 𝒰), (∏ (x:A) ∏ (y:B), C((x,y))) → ∏ (x:A×B), C(x)

with the defining equation

ind_{A×B}(C, g, (a,b)) :≡ g(a)(b)
Okay. Let's just put that here again so I don't have to keep scrolling up and down to read and reply. My eyeballs are telling me the defining equation of induction here is taking a triple of inputs and giving a curried function (on two variables, one at a time) as the output.
If the authors were being entirely precise they would write:

ind_{A×B}(C)(g)((a,b)) :≡ g(a)(b)

But at some point they said that f(a,b) was an acceptable abbreviation for f(a)(b). Which is of course confusing in this context, because the context is all about the distinction between currying and non-currying.
Yeah... in section 1.2 they mention avoiding the proliferation of parentheses as a reason for writing f(a,b) for f(a)(b), even though there are no products involved. But in the section on product types (where this induction principle is introduced), they say:
Thus, we introduce a new rule (the elimination rule for products), which says that for any such g, we can define a function f : A × B → C by

f((a,b)) :≡ g(a)(b)

We avoid writing g(a,b) here, in order to emphasize that g is not a function on a product. (However, later on in the book we will often write g(a,b) both for functions on a product and for curried functions of two variables.)
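The distinction the book is emphasizing, in Lean 4 terms (variable names chosen to match the book's g and f):

```lean
def g (a b : Nat) : Nat := a + b               -- curried: g : Nat → Nat → Nat
def f : Nat × Nat → Nat := fun (a, b) => g a b -- a function on the product
#eval g 2 3      -- 5
#eval f (2, 3)   -- 5
```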
I had figured that careful distinction would have applied to ind_{A×B} as well. So that at least addresses some of my confusion. I'll try rereading this definition with the understanding that the notation is shorthand currying.
Yeah, you can also see by the type of ind_{A×B} that this must be shorthand currying, so you don't just have to believe me.
Well, that is what I was getting lost in trying to check in the first place, so it was not at all clear to me a priori that it must be shorthand currying. Thanks, though.
Joshua Meyers said:
If the authors were being entirely precise they would write:

ind_{A×B}(C)(g)((a,b)) :≡ g(a)(b)

Or, even better, [...]. You could submit an issue or a pull request to the book repository suggesting a change like this. At the moment I don't see a downside of the more precise notation here, and it could help future students in your situation!
When I started reading the HoTT book, I had finished almost all of "Types and Programming Languages".
At that point, though, I did not know Martin-Löf type theory very well.
I had no background in homotopy theory or ∞-categories then.
I had some knowledge of algebraic data types (F-algebras).
Though I did not understand what Π and Σ were,
I could start reading Chapter 1 of the HoTT book by regarding Π as → (Hom) and Σ as × (product).
For me, Chapter 1 is essential, since we cannot construct a concrete proof in Chapter 2 without it.
It was in Chapter 2 that I realized how path induction (introduced in 1.12) is really used in practice.
Gradually, I noticed that Π is the generalization of ∀ and Σ of ∃ in logic, in Coq-like proof assistants, and in System F,
and that these are fibrations (the generalization of hom spaces) and cofibrations (the generalization of product spaces) in Quillen's model categories (very soft geometry), thanks to Cisinski's book (Higher Categories and Homotopical Algebra) and some papers, slides and notes openly shared by @Emily Riehl, @Mike Shulman and many others.
And recently I also noticed that these adjunctions seem to be what is called the "fundamental theorem of topoi".
Π is the right adjoint of pullback and Σ is the left adjoint.
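A type-theoretic shadow of that adjoint situation can be sketched in Lean 4 (the definitions and names are mine; this is the families-over-a-map picture for f : X → Y):

```lean
-- pullback along f: reindex a Y-indexed family to an X-indexed one
def reindex {X Y : Type} (f : X → Y) (C : Y → Type) : X → Type :=
  fun x => C (f x)

-- Σ_f: sum a family over the fibers of f (the left adjoint to reindex)
def sigmaAlong {X Y : Type} (f : X → Y) (C : X → Type) : Y → Type :=
  fun y => (x : { x' // f x' = y }) × C x.1

-- Π_f: product of a family over the fibers of f (the right adjoint to reindex)
def piAlong {X Y : Type} (f : X → Y) (C : X → Type) : Y → Type :=
  fun y => (x : { x' // f x' = y }) → C x.1
```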
I have a question here:
Is the category where the fundamental theorem of topoi occurs opposite to the ∞-category?
Is the category of terms in HoTT opposite to the category where the resulting adjunction from the fundamental theorem of topoi occurs? It might be a mistake, but I could not relate them well without taking opposites. (Edited)
Do they fit with Loc and Frm?
( Edited: I remembered that the category of open sets of X in Pos is a frame.
I began to realize I might be mixing up locales and frames in a bad manner. )
The result often referred to as the "fundamental theorem of topos theory" is that any slice (aka over-category) of a topos is a topos, and possibly the secondary result that if f : X → Y is a morphism in a topos ℰ then the pullback functor f* : ℰ/Y → ℰ/X is logical.
As such, any such f gives a geometric morphism ℰ/X → ℰ/Y whose inverse image is the pullback along f, and this functor has Σ_f and Π_f as adjoints, as you mention, but this is not the content of the fundamental theorem.
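For reference, the adjoint triple being discussed can be displayed as (my notation):

```latex
\Sigma_f \;\dashv\; f^* \;\dashv\; \Pi_f,
\qquad f^* : \mathcal{E}/Y \to \mathcal{E}/X
\quad\text{for a morphism } f : X \to Y \text{ in } \mathcal{E}.
```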
But I don't really understand what your question is, I'm afraid.
Thanks, I did not know the main theorem at all!
I had been confused and had read Σ_f as something like Σ_x. ( I mistook f as an index. )
Now, I see that f just gives a "geometric" morphism ( i.e. a solid and structure-preserving morphism, like a natural projection or a rng homomorphism; its adjunction is not indexed, but is the fibration itself in a particular situation ) and that the codomain Y is the base. ( x is the index )
Does anyone have a similar experience of being bothered by this notation?
( I remember someone was talking about this elsewhere. )