Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: Semantics of Type Theory


view this post on Zulip Julius (Nov 18 2023 at 19:05):

I am reading this extremely hard paper and I’d like to try to have a running conversation about it while I read it.

view this post on Zulip Julius (Nov 18 2023 at 19:09):

"We propose an abstract notion of a type theory to unify the semantics of various type theories including Martin-Löf type theory…"

"Intuitionistic type theory has three finite types, which are then composed using five different type constructors. Unlike set theories, type theories are not built on top of a logic like Frege's."

Type constructors are rules which allow you to create new types from existing ones, right? Then, in what way is it not “built on top of a logic”? First-order logic provides rules by which we can form new expressions from the ones we currently have. Isn’t that the same as type constructors?

view this post on Zulip Julius (Nov 18 2023 at 19:20):

“There are three finite types: The 0 type contains 0 terms. The 1 type contains 1 canonical term. And the 2 type contains 2 canonical terms.

Because the 0 type contains 0 terms, it is also called the empty type. It is used to represent anything that cannot exist. It is also written ⊥ and represents anything unprovable. (That is, a proof of it cannot exist.) As a result, negation is defined as a function to it:
¬A := A→⊥”

Intuitively, I think this says, “a witness of A would imply something impossible/which does not exist”. Is there a deeper reason why this turns out to be so effective as a definition of “negation”?
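Here is a minimal Haskell sketch of that definition of negation, with Void standing in for the 0 type (Not and explosion are names I'm making up):

    import Data.Void (Void, absurd)

    -- negation as "a function into the empty type"
    type Not a = a -> Void

    -- a witness of A together with a witness of Not A yields anything at all,
    -- since absurd eliminates the empty type
    explosion :: a -> Not a -> b
    explosion a notA = absurd (notA a)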

“Likewise, the 1 type contains 1 canonical term and represents existence. It also is called the unit type. It often represents propositions that can be proven and is, therefore, sometimes written ⊤.

Finally, the 2 type contains 2 canonical terms. It represents a definite choice between two values. It is used for Boolean values but not propositions. Propositions are considered the 1 type and may be proven to never have a proof (the 0 type), or may not be proven either way.”

How can a proposition be a 1-type if the 1-type has only one term? This sounds like there could only be one proposition in existence, and it is proven to be true. If propositions are 1-types, how can you say they can be proven to not have a proof and therefore be a 0-type?

view this post on Zulip Ralph Sarkis (Nov 18 2023 at 19:56):

Julius said:

Type constructors are rules which allow you to create new types from existing ones, right? Then, in what way is it not “built on top of a logic”? First-order logic provides rules by which we can form new expressions from the ones we currently have. Isn’t that the same as type constructors?

Unconfident guess. In set theory, the logic for constructing propositions and the logic for constructing sets are different, and the latter is built on top of the former. In type theory, propositions are types so there is no distinction.

view this post on Zulip Mike Shulman (Nov 18 2023 at 20:11):

Yes, first-order logic does provide rules to form new expressions from the ones we currently have, but it isn't the only system that does that. Type theory also does that.

view this post on Zulip Mike Shulman (Nov 18 2023 at 20:16):

Propositions are considered the 1 type and may be proven to never have a proof (the 0 type), or may not be proven either way.

That is an extremely confusing sentence! I just corrected it to

Propositions are instead represented by particular types. For instance, a true proposition can be represented by the 1 type, while a false proposition can be represented by the 0 type. But we cannot assert that these are the only propositions...

Is that any better?
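To make "can be represented by" concrete, here is a rough Haskell rendering (the type names are made up), using () for the 1 type and Void for the 0 type:

    import Data.Void (Void)

    -- a true proposition: it has a proof, namely the canonical term ()
    type TrueProp = ()

    -- a false proposition: it has no proof at all
    type FalseProp = Void

    proofOfTrue :: TrueProp
    proofOfTrue = ()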

view this post on Zulip Patrick Nicodemus (Nov 18 2023 at 21:12):

This wikipedia article uses inappropriate language imo.
"The 1 type contains 1 canonical term."
To speak of a type "containing" something is to think of it like a set. But if it is a set then why do we talk about "canonical terms" and not simply "elements"?

view this post on Zulip Patrick Nicodemus (Nov 18 2023 at 21:14):

Also referring to ordered pairs as "2-tuples" is unnecessary.

view this post on Zulip Mike Shulman (Nov 18 2023 at 21:19):

I don't think there's anything wrong with "contains" for elements or terms of types.

view this post on Zulip Mike Shulman (Nov 18 2023 at 21:20):

Being like a set is not the same as being a set.

view this post on Zulip Mike Shulman (Nov 18 2023 at 21:22):

The meaning of "canonical" there is that the type 1 is defined to contain one specific term. In theory one might be able to construct other elements of it that aren't "there by definition" (although in this case, often there is an η-rule saying that all elements of 1 are definitionally equal).
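In one common formulation, that η-rule for the unit type reads:

$$\frac{\Gamma \vdash t : \mathbf{1}}{\Gamma \vdash t \equiv \star : \mathbf{1}}$$

i.e. every term of type 1 is definitionally equal to the canonical term $\star$.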

view this post on Zulip Patrick Nicodemus (Nov 18 2023 at 21:27):

To me this seems like a blurring of syntax and semantics. Types are syntactic entities. We can also call the things they denote "types" if you're a homotopy type theorist and really insist that types are not sets, but most of the time I think it's fine to say that the type "nat" denotes the set of natural numbers.
But if we are passing to semantics then the terms should also become their denotations.
Otherwise I don't know. It seems weird to talk about the type 1 as denoting "the set of terms in the formal language which the judgement rules prove are type 1"

view this post on Zulip Mike Shulman (Nov 18 2023 at 21:54):

I think even in set-level type theory, types behave sufficiently differently from sets (especially from ZFC-style sets) that it's reasonable to want to use a different word for them. Not that everyone has to, but it's reasonable for someone to want to.

view this post on Zulip Mike Shulman (Nov 18 2023 at 21:58):

It seems weird to talk about the type 1 as denoting "the set of terms in the formal language which the judgement rules prove are type 1"

I agree, that is wrong. When you pass to semantics, a syntactic type denotes a thing (whether you call that thing a "type" or a "set") whose elements/points/whatever-you-call-them are also semantic things. But I think we can also use the word "contains" at the syntactic level.

view this post on Zulip Astra Kolomatskaia (Nov 19 2023 at 19:17):

I think that one example to have in mind is that we can have terms in a context like:

n : ℕ ⊢ n : ℕ

Then the term n is semantically distinct from any numeral like 0, 1, or 2.
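A loose Haskell analogue, reading a context as a function argument (the names are made up):

    -- the closed numeral 2
    two :: Integer
    two = 2

    -- the open term  n : ℕ ⊢ n : ℕ , interpreted as a function of its context
    openTerm :: Integer -> Integer
    openTerm n = n

openTerm is a different kind of thing from any particular numeral, matching the distinction above.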

view this post on Zulip Julius (Nov 19 2023 at 22:26):

34724212-1CE3-4B75-891E-E029A4EB4C1F.jpg

A representable functor is a presheaf which is isomorphic to a hom-functor. A representing object is just the object X in the hom-functor $h_X: Hom(-,X) \rightarrow Set$.

Why in the picture above does it seem like they state two maps, when they speak of the existence of a representing object (for a category with families)? Shouldn’t the representing object just be some object in C?

view this post on Zulip David Egolf (Nov 19 2023 at 23:10):

I'm guessing you are referring to the stuff directly below "a representing object for the functor", but I'm not sure. In case that is what you are referring to:

I think that $(C/\Gamma)^{op} \to \mathsf{Set}$ is describing the source and target categories of a functor. Then the line below describes how that functor acts on objects. Namely, it sends an object $f: \Delta \to \Gamma$ to a set $Tm(f^*(A))$.
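Putting the two lines together as one display:

$$(C/\Gamma)^{op} \to \mathsf{Set}, \qquad (f : \Delta \to \Gamma) \mapsto Tm(f^*(A))$$

Note that a representing object for this functor is then an object of the slice $C/\Gamma$, i.e. itself a map into $\Gamma$; that may be why the text appears to state two maps.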

view this post on Zulip Julius (Nov 20 2023 at 05:01):

Ok, I understand. Thank you very much.

view this post on Zulip Julius (Nov 20 2023 at 05:45):

@Ralph Sarkis

In set theory, the logic for constructing propositions and the logic for constructing sets are different, and the latter is built on top of the former. In type theory, propositions are types so there is no distinction.

First-order logic has nine a priori categories (index, variable, constant, unary/binary/ternary function, unary/binary/ternary predicate), with "terms" defined in terms of those nine things, "atomic formulas" defined in terms of terms, and "formulas" defined in terms of atomic formulas.

These formation rules define all the expressions that can be made in this system. If you have a constant $c'''$ and a variable $x'$, you can form a function $f_2(c''', x')$, for example.

I think the "signature" is the symbols you provide at the beginning, for each of the nine starting "kinds". For example, if you had one constant symbol $c$ and one binary function $f_2$, then I think all the expressions you can generate from those two symbols alone form what we think of as the free magma on one generator. (However, we haven't defined that concept yet.)

It is possible to recursively enumerate all expressions from (“over”) some signature up to a finite number of iterations.

For example:

At stage 0, we have two potential choices: $c$ and $f_2(\langle term \rangle, \langle term \rangle)$, but the only valid term of those two is $c$. Therefore, we "create" $c$.

At stage 1, we have constructed $c$, and we have two primary expressions we can use (apply) to create a (new) term. $c$ is not new, but $f_2$ with $c$ for both its $\langle term \rangle$ parameters is new, so we create that term, $f_2(c, c)$.

(still drafting this message)

view this post on Zulip Julius (Nov 20 2023 at 06:27):

k = 3

All terms:
$c$
$f_2(c, c)$
New terms:
$f_2(c, f_2(c, c))$
$f_2(f_2(c, c), c)$
$f_2(f_2(c, c), f_2(c, c))$
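Here is a small Haskell sketch of this stage-by-stage enumeration (Term, C, F2, and stage are names I'm making up):

    import Data.List (nub)

    -- terms over the signature with one constant c and one binary symbol f2
    data Term = C | F2 Term Term
      deriving (Eq, Show)

    -- every term constructible within n rounds of applying f2
    stage :: Int -> [Term]
    stage 0 = [C]
    stage n = nub (prev ++ [F2 s t | s <- prev, t <- prev])
      where prev = stage (n - 1)

stage 2 returns exactly the five terms above (in some order).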

view this post on Zulip Julius (Nov 20 2023 at 06:40):

I believe we can draw a tree diagram to help us keep track of all possible derivations in the next stage, but there may be variety in how we do that. Could someone point me to the name of this concept? I think it is a multiway system of a formal language. I need to sketch out how to present it. Maybe the rule could be:

view this post on Zulip Julius (Nov 20 2023 at 06:50):

$$\begin{CD}
@. @. c @. @.\\
@. @. @VVV @. @.\\
@. @. f_2(c, c) @. @.\\
@VVV @. @VVV @. @VVV\\
f_2(f_2(c, c), c) @. f_2(f_2(c, c), f_2(c, c)) @. f_2(c, f_2(c, c))
\end{CD}$$

view this post on Zulip Julius (Nov 20 2023 at 06:54):

I’ll come back to this, I need to format it on my computer.

view this post on Zulip Julius (Nov 20 2023 at 07:11):

The next point I wanted to discuss was how set theory is constructed from first order logic and to try to understand more clearly why that is apparently fundamentally different than how type theory works. I’ll return to this. Thanks.

view this post on Zulip Mike Shulman (Nov 20 2023 at 07:35):

nine a priori categories (index, variable, constant, unary/binary/ternary function, unary/binary/ternary predicate)

I think maybe you misread that wikipedia page a bit. Earlier it says that functions and predicates can have arbitrary valence, not just 1,2,3. The BNF grammar only lists rules for valences 1,2,3 as examples, but there's nothing special about those valences, and no one in practice restricts to just those valences.
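To illustrate, here is one way arbitrary valences are handled in practice (an illustrative Haskell sketch, not something from the wikipedia page): a function symbol is applied to a list of arguments, so no valence is special.

    -- first-order terms over a signature with function symbols of arbitrary valence
    data Term = Var String        -- a variable
              | Fun String [Term] -- a function symbol applied to any number of arguments
      deriving (Eq, Show)

    -- e.g. Fun "f" [Var "x", Var "y", Var "z", Var "w"] is a 4-ary application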

view this post on Zulip Julius (Nov 20 2023 at 07:39):

Does that mean in BNF you can define primitive types of "arbitrary valence" inductively / recursively?

view this post on Zulip Mike Shulman (Nov 20 2023 at 07:42):

I don't know what the precise definition of BNF is, so I couldn't tell you. But "first-order logic" is not defined by what you can encode in BNF.

view this post on Zulip Julius (Nov 20 2023 at 07:52):

Ok. But regardless of BNF, a "function" is essentially nothing but a dependent type, right? (This is not type theory, but the concept is the same.) Regardless of what it "means", it is an expression whose formation rule depends on some term you select, as input. If you allow the "primitive function" to be essentially the product type (take two things, and consider them one thing together), and you have the concept of "free variable", then via currying, you inductively define n-ary functions. Is that correct? (I know I need to make this more precise, otherwise it may be hard to determine what I'm even claiming.)

view this post on Zulip Mike Shulman (Nov 20 2023 at 07:53):

No, I don't think that makes sense, functions are not dependent types.

view this post on Zulip Julius (Nov 20 2023 at 07:55):

Alright, let me get back to you. Thanks.

view this post on Zulip Julius (Nov 20 2023 at 08:05):

Or perhaps I can benefit from the running-conversation approach.

view this post on Zulip Julius (Nov 20 2023 at 08:08):

I am pretty sure it is a classic philosophical problem of "how to begin".

view this post on Zulip Julius (Nov 20 2023 at 08:10):

Maybe the very lowest level is the concept of an alphabet, our set of symbols. Perhaps for FOL that would be:

'
x
c
f
P
(
)
,
(space)
and a few others.

view this post on Zulip Julius (Nov 20 2023 at 08:13):

First we define the index rule, to generate a non-terminating supply of distinguishable "things" (symbols, elements).

We want to be able to say, "for anything that counts as a term, you can also create the expression term'"

view this post on Zulip Julius (Nov 20 2023 at 08:13):

as in, $x \rightarrow x'$
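A quick Haskell sketch of that index rule (variables is a made-up name):

    -- a non-terminating supply of distinguishable symbols: x, x', x'', ...
    variables :: [String]
    variables = iterate (++ "'") "x"

so take 3 variables is ["x","x'","x''"].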

view this post on Zulip Julius (Nov 20 2023 at 08:14):

I could be completely incorrect here, but this is something I've been thinking about all year. I'd love to understand this more clearly:

view this post on Zulip Julius (Nov 20 2023 at 08:16):

First, for me, there is the open philosophical question of why we always want to find a way to define or explain something in the smallest way possible. This sounds obvious. It feels like a success when we realize that, amongst some axioms, one of them is actually not necessary because it follows from some of the others. So we can logically define some phenomena with even fewer starting assumptions.

view this post on Zulip Julius (Nov 20 2023 at 08:17):

Sometimes I wonder if there is actually more to be contemplated there about why we are so sure "less" is better.

view this post on Zulip Julius (Nov 20 2023 at 08:18):

However, sometimes it seems like there is a certain minimum level of starting assumptions it is almost conceptually impossible to go lower than.

view this post on Zulip Julius (Nov 20 2023 at 08:18):

For example, we just defined a rule about how to generate new symbols, $x \rightarrow x'$, where $x$ is any term.

view this post on Zulip Julius (Nov 20 2023 at 08:19):

I'm quite curious to know your opinion on the following:

view this post on Zulip Julius (Nov 20 2023 at 08:19):

I have seen functions, in set theory, defined in terms of sets of pairs of elements.

view this post on Zulip Julius (Nov 20 2023 at 08:20):

And yet (although I am studying the details, as we speak), I believe set theory is largely defined in terms of first-order logic.

view this post on Zulip Julius (Nov 20 2023 at 08:22):

In order to get off the ground in first order logic, we had to assume that there is an understood concept of "for anything that is of a certain type, you may 'associate' it with...

view this post on Zulip Julius (Nov 20 2023 at 08:22):

well, with something, anyway.

view this post on Zulip Julius (Nov 20 2023 at 08:25):

It implies that we already know a number of things... that there are "things", and they can be grouped into "kinds of things", and we can follow certain rules, regarding conditions, like "if a thing is a certain kind of thing, then you can do the following..."

view this post on Zulip Julius (Nov 20 2023 at 08:27):

So, it is not possible for someone to follow the rules of first order logic, or understand them, unless they already understand ideas like, distinguishing between different things (unique identities), conditional rules, and roughly the idea of a "variable".

view this post on Zulip Julius (Nov 20 2023 at 08:29):

So.

view this post on Zulip Julius (Nov 20 2023 at 08:33):

I will develop my argument here in case it is not clear what I am saying. But would you agree that first order logic is not an attempt to construct fundamental notions like function and variable, but rather, the belief that it would be impossible to define these things if you did not already know what they are; therefore, first order logic is an attempt at pinpointing certain "undefinables"?

view this post on Zulip Julius (Nov 20 2023 at 08:35):

Which is why we don't need to worry about there being some problem with doing what you said - "assuming the existence of functions of arbitrary valence". You won't get anywhere by trying to build that notion from simpler notions. Similarly, we can just assume the existence of infinitely many term symbols (constants, variables).

view this post on Zulip Julius (Nov 20 2023 at 08:37):

Because while it might bother one to see that assumed, and make them ask why we would assume it, the answer is that we actually can't avoid that assumption. If we claim we do not assume it, it can be shown that anything we try to do with some basic set of axioms basically already implies it.

view this post on Zulip Julius (Nov 20 2023 at 08:37):

I don't know if that's what mathematicians think about this. Thanks.

view this post on Zulip Morgan Rogers (he/him) (Nov 20 2023 at 08:39):

Julius said:

Or perhaps I can benefit from the running-conversation approach.

I would encourage this, but it's not what you're actually doing. Beware that if you post many messages at once it gives people more work to do before they can engage with the discussion, which can result in them not bothering.

view this post on Zulip Julius (Nov 20 2023 at 08:41):

Alright, sorry.

view this post on Zulip Morgan Rogers (he/him) (Nov 20 2023 at 09:05):

Julius said:

Sometimes I wonder if there is actually more to be contemplated there about why we are so sure "less" is better.

Having said that, it is fun that a long sequence of messages should contain this line :wink:

Julius said:

So, it is not possible for someone to follow the rules of first order logic, or understand them, unless they already understand ideas like, distinguishing between different things (unique identities), conditional rules, and roughly the idea of a "variable".

Having read through all of this, I would disagree with this point. The rules of first-order logic are entirely syntactic, formal rules. You don't have to attach meaning to them in order to follow them, or even to learn how to construct proofs in FOL, but to "understand" them in the sense of seeing the point of it all does require some appeal to logical principles which are more or less intuitive, like what we might formally mean by "and" and "or", and how these meanings are captured by the rules defining the corresponding logical connectives.

view this post on Zulip John Baez (Nov 20 2023 at 09:35):

It might be helpful to read a carefully written textbook on first-order logic or type theory instead of trying to get information from here and there from websites and papers. If one is mainly concerned with how to build up a lot of logic and mathematics from simple assumptions (e.g. purely syntactic rules), and wondering whether it's really possible to do it without "cheating" in some way, then one is following in the footsteps of Frege, Russell and Whitehead, Kleene and other philosophers and logicians. They worried a lot about these issues, and wrote a lot about them.

I don't know of more modern type-theoretic treatments of logic that take a similarly nit-picky approach to these particular issues. They probably exist! But in some ways the most important thing is to carefully follow one extremely careful, formal account of how to build up logic and math from simple, purely syntactic foundations. There are a million ways one can do it. Once you've seen one or two approaches worked out very carefully, you can loosen up and relax a bit (e.g. by talking about semantics more, to make the syntax more digestible). But I think it's good to spend some time going through one approach in detail, not jumping around between different references.

view this post on Zulip Julius (Nov 20 2023 at 10:50):

Thank you, I will do that.

view this post on Zulip Julius (Nov 20 2023 at 11:07):

This is a small technical question to reinforce what I am learning in reading this paper (above).

A discrete fibration is a functor $p: A \rightarrow S$ such that for any arrow $u: J \rightarrow p(a)$ in $S$, there exists a unique arrow $m: b \rightarrow a$ such that $p(m) = u$.

Does this imply that $J$ is not necessarily in the image of the functor $p$? (I do not think that would be possible, if $p(m) = u$.)

The definition intuitively sounds to me like that of a faithful functor, that the functor is injective on hom-sets. What is the difference?

Thank you.

view this post on Zulip John Baez (Nov 20 2023 at 11:09):

It's a good exercise to look at the definition of faithful functor and discrete fibration and see exactly what's the difference. It's not too hard to look at those definitions and compare them. So I'll turn my question back to you: what's the difference?

view this post on Zulip John Baez (Nov 20 2023 at 11:10):

As for your first question, I'm not sure what "does this imply that ... is not necessarily..." means. Mathematicians don't tend to say that kind of thing. Much more common is "does this imply that... is necessarily...".

view this post on Zulip John Baez (Nov 20 2023 at 11:11):

For example, what about "does being over 6 imply that you are not necessarily over 7?" I guess the answer is yes. But it's not how mathematicians would ask the question.

view this post on Zulip John Baez (Nov 20 2023 at 11:12):

For P to imply that Q does not necessarily follow is a curious idea, logically speaking.

view this post on Zulip Julius (Nov 22 2023 at 06:33):

Thank you.
I am working through specific examples of categories to see the difference, myself. I’ll get back to you.

view this post on Zulip Julius (Nov 22 2023 at 06:41):

(No need to respond, but I’ll state where I currently am. I’d love to figure this out myself.)

I understand why a faithful functor (injective on hom-sets) is not the same as a functor injective on morphisms. A faithful functor is injective on morphisms if it is injective on objects.

For a discrete fibration, to say that each arrow in $pS$ has a unique index sounds to me like injectivity on morphisms. I'll try to prove or falsify this.

view this post on Zulip John Baez (Nov 22 2023 at 08:51):

It'd be good to find an example of a functor that's a discrete fibration but not injective on morphisms. I think there's a nice one $F: C \to D$ where $D$ has 2 objects and 1 nonidentity morphism, while $C$ has 4 objects and 2 nonidentity morphisms, so you can draw everything.

view this post on Zulip Julius (Nov 22 2023 at 21:00):

Tomorrow, I will try to develop the following idea.

An n-gram counts substrings of length $n$ in a string $S$.

A string is a map $f: \mathbb{N} \rightarrow A$ for a set $A$, the "alphabet" (or from a sub-interval of $\mathbb{N}$).

A 1-gram maps each element of $A$ to the cardinality of its inverse image (a "fiber"): $g: A \rightarrow \mathbb{N}$, $g(a) = |f^{-1}(\{a\})|$.

For 2-grams, $g$ maps the subset of $\mathbb{N} \times \mathbb{N}$ such that for $(i, j)$, $j - i = 1$ (I think there is a better way to define this; maybe partitions), into $\mathbb{N}$. These are subsequences. Form the product $A \times A$. Map $(a_1, a_2) \in A \times A$ to the cardinality of the fiber of $g$.

More abstractly, ordinal numbers form a category, the simplex category. A functor from $\Delta^{op}$ to $\mathsf{Set}$ is called a simplicial set. sSet is the category of simplicial sets. A codomain fibration is the functor from an arrow category to the base category sending each arrow to its codomain. An essential fiber is like a fiber but for functors. There might be a way to portray n-grams this way.
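A concrete Haskell version of the counting part (ngrams is a made-up name; the Map records the cardinality of each fiber):

    import qualified Data.Map as M

    -- count the length-n contiguous substrings of a string,
    -- i.e. the size of each fiber of the map sending a window position
    -- to the substring it carries
    ngrams :: Int -> String -> M.Map String Int
    ngrams n s = M.fromListWith (+)
      [ (take n (drop i s), 1) | i <- [0 .. length s - n] ]

For example, ngrams 2 "abab" is fromList [("ab",2),("ba",1)].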