You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Hello friends! If I consider a category where objects are types (X, Y, for example), what would morphisms look like? Since any f : Id(X, Y) is also a type?
you generally want the objects to be contexts, rather than types (unless you have products), and the morphisms are terms in that context
this is often called a syntactic category
the definition on the nLab for this topic is quite readable, I think: https://ncatlab.org/nlab/show/syntactic+category
Huh, syntactic categories sound like the right approach here. But there also seems to be a category of Haskell types, which (to my knowledge) has never been fully specified but could be understood as having as objects the terms of type * in System Fω and as morphisms between X and Y the terms of type X -> Y, right?
yes, this is an example of a syntactic category, because you can take products of types and thus represent a context by taking the product of the types of the variables in the context
(this ignores the extra structure on kinds themselves)
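Concretely, in Haskell the point about products can be sketched like this (a toy illustration; the name `interpretM` is made up): a term M in context x : Int, y : Bool of type String becomes a single morphism out of the product Int × Bool.

```haskell
-- The context  x : Int, y : Bool  is packaged as the product (Int, Bool),
-- so the judgement  x : Int, y : Bool |- M : String  becomes one arrow:
interpretM :: (Int, Bool) -> String
interpretM (x, y) = if y then show x else "nothing"

main :: IO ()
main = putStrLn (interpretM (3, True))
```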
Anton Lorenzen said:
Huh, syntactic categories sound like the right approach here. But there also seems to be a category of Haskell types, which (to my knowledge) has never been fully specified but could be understood as having as objects the terms of type * in System Fω and as morphisms between X and Y the terms of type X -> Y, right?
Take a look at Hask: https://wiki.haskell.org/Hask. By the way, being fast and loose and ignoring its liiiiiittle problem with () is OK!
isn't the problem seq rather than ()?
I think it's undef, isn't it?
It's how undef interacts with (), to be more precise (I think).
I'm not sure I've seen it worked out in detail, but this thread seems to suggest seq is the main problem
well, it seems seq is the obstruction to making Hask a category, and undefined is the obstruction to giving it nice structure
so both :)
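The seq obstruction can be seen in a few lines (a sketch, assuming GHC; the names g and h are illustrative): composing a bottom "function" with id eta-expands it into a lambda, which seq can distinguish from the original, so the identity law of a would-be category fails.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A hypothetical morphism that is bottom as a value:
g :: Int -> Int
g = undefined

-- Composing with id eta-expands: id . g = \x -> id (g x), already in WHNF.
h :: Int -> Int
h = id . g

main :: IO ()
main = do
  -- seq can tell g and h apart, so id . g /= g and the identity law fails
  r1 <- try (evaluate (g `seq` "forced g")) :: IO (Either SomeException String)
  r2 <- try (evaluate (h `seq` "forced h")) :: IO (Either SomeException String)
  putStrLn (either (const "bottom") id r1)
  putStrLn (either (const "bottom") id r2)
```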
Hahaha OK, that's good to know :).
It is in general good to first understand how to form categories from λ-calculus without nontermination/fixpoints - that part is harder!
Tiago Veras said:
Hello friends! If I consider a category where objects are types (X, Y, for example), what would morphisms look like? Since any f : Id(X, Y) is also a type?
It depends on what you want! The standard thing to do is to have x : X |- M : Y be your morphisms X -> Y.
One way you might want to think about Hom(Γ, Σ) in the syntactic category is that it is a _substitution_ for Σ in a context Γ, where a substitution is made of appropriately typed terms for the variables listed in Σ.
Nathanael Arkor said:
well, it seems seq is the obstruction to making Hask a category, and undefined is the obstruction to giving it nice structure
As Alex Kavvos explained, you can get a category despite the presence of seq. But here are some curious facts. Firstly, the reader, state and continuation monads are not monads. That's because of seq. Secondly, even if seq is excluded, the Maybe monad transformer applied to the Maybe monad is not a monad. That's because of undefined.
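The reader-monad failure can be sketched like this (assuming GHC's Monad instance for ((->) r); the names m and m' are illustrative): the right identity law m >>= return = m fails up to seq, because >>= eta-expands its argument.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- In the reader monad ((->) r):  m >>= k = \r -> k (m r) r.
-- So (undefined >>= return) is a lambda (WHNF), while undefined is not.
m, m' :: Int -> Int
m  = undefined
m' = m >>= return   -- the monad law says this should equal m

main :: IO ()
main = do
  r1 <- try (evaluate (m  `seq` "forced m"))  :: IO (Either SomeException String)
  r2 <- try (evaluate (m' `seq` "forced m'")) :: IO (Either SomeException String)
  putStrLn (either (const "bottom") id r1)
  putStrLn (either (const "bottom") id r2)
```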
That allows you to make syntactic sense of maps between entire contexts, even though typically terms are given as exactly one thing on the right. The n-ary version of a term is a simultaneous substitution of many things.
... or, simply an n-tuple of terms.
I agree it is better to leave out general recursion/non-termination to move towards an understanding.
CCCs correspond to typed calculus with surjective pairing, so being able to form pairs is a must for the cartesian part. And since you want constants, you add 1 and you get arbitrary tuples. The morphisms are the terms, the context gives types to the free variables in the terms.
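The cartesian-closed part of that correspondence is visible in Haskell's standard curry and uncurry, which witness the bijection between morphisms Γ × A → B and Γ → (A → B):

```haskell
-- curry   :: ((a, b) -> c) -> a -> b -> c
-- uncurry :: (a -> b -> c) -> (a, b) -> c
-- together witness the hom-set bijection Hom(Γ × A, B) ≅ Hom(Γ, B^A).

addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

addCurried :: Int -> Int -> Int
addCurried = curry addPair

main :: IO ()
main = print (addCurried 2 3, uncurry addCurried (2, 3))
```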
Regarding the non-termination piece: if every type A gets an inhabitant ⊥_A (a divergent term), then that makes the terminal object 1 a weakly initial object. Then everything becomes isomorphic, which makes the model slightly uninteresting. So I think one can pretend or assume that non-termination does not exist.
i think i've heard that multicategories are arguably better-suited to how type systems are presented than ordinary categories, and that's where the pain w/ contexts vs types comes from
Burak Emir said:
Regarding the non-termination piece: if every type A gets an inhabitant ⊥_A (a divergent term), then that makes the terminal object 1 a weakly initial object. Then everything becomes isomorphic, which makes the model slightly uninteresting. So I think one can pretend or assume that non-termination does not exist.
i think this only goes thru if you ask for there to be an initial object! isn't the category of non-empty sets cartesian closed?
With a terminal object 1 (that role being played by the empty tuple, with unique morphisms !_A : A → 1), which now also acts as initial object via the maps ⊥_A : 1 → A, we have an object that is both initial and terminal.
It would be enough for there to be a unique arrow to get into this state. I'm not saying the thing is not a CCC, but it may be an undesirable CCC.
I admit I did not quite get what you are asking. In the category of non-empty sets, there is no empty set that would play the role of initial object, so we won't run the risk of initial and terminal being isomorphic. Is there a function from a singleton set to every set A?
well, it sounded to me like you were claiming that if every object in a CCC has a global element, then the CCC is trivial (in the sense that all objects are isomorphic to each other), so i was offering what i think is probably a counterexample, since the category of non-empty sets is definitely not trivial
(but every object has a global element, since the terminal object is still the singleton set)
Oh, I see now. Every non-empty set has some element, so we can come up with a global element. Aren't these all interchangeable, though?
There are correspondences between CCCs and typed lambda calculus that do not involve the products corresponding to a type. They are used only for contexts and multi-terms/substitutions/whatever you want to call them.
So we have arrows ⊥_A : 1 → A for any A. Then !_A ∘ ⊥_A : 1 → 1 and ⊥_A ∘ !_A : A → A. (UPDATE: alas, that is not an isomorphism. Indeed, it seems one needs a proper initial object; i'll dig out the exact thing it is about fixpoints collapsing CCCs, it might have been biCCCs)
Dan Doel said:
There are correspondences between CCCs and typed lambda calculus that do not involve the products corresponding to a type. They are used only for contexts and multi-terms/substitutions/whatever you want to call them.
Totally. Besides the syntactic category, there are the domain models: dcpos (the denotational semantics) are also CCCs.
I think @Dan Doel is referring to (closed) cartesian multicategories, where context concatenation is not represented by products
@Nathanael Arkor Right.
I think what I mean is that the syntactic categories of simply typed lambda calculi _without_ (necessarily) product types (just function types) correspond to free Cartesian closed categories. Whether there are product types might depend on what the CCC is free over.
And that correspondence works the same sort of way as closed cartesian multicategories.
I can't find the paper I'm thinking of that exhibits this, though.
Do you guys agree that it would be nice to have a clear exposition of these matters, i.e. that shows the connection between CCC, typed lambda calculus (typing rules / natural deduction) and syntactic categories? It seems like a basic thing, yet I think every exposition chooses something else to anchor the basics in.
Have you seen Mike Shulman's lecture notes, Categorical Logic from a Categorical Point of View? Those at least point out the connection between type theories and a syntactic multicategory pretty well, I think. And that's kind of the most straightforward analogy of them all, I think.
(There's lots of other good nuggets in the notes, too, even if you already know most of the material.)
Thanks for sharing; no, I was not aware. On the other hand, I have encountered the connection between multicategories and cut/sequent-style reasoning before. I had scratched my head over Szabo's "Algebra of Proofs" many times to get an intuitive understanding of the hairy composition rules.
Would you really consider this the most straightforward correspondence? I vaguely remember my former PL grad student self knowing λ-calculus but trying hard to understand plain CCCs (without multi-).
Wait, I started reading now and Shulman argues this point; that it is more pedagogical to use multi-categories. I should read the whole thing, this is very exciting.
as for "classic" expositions for the standard point of view, my view is that Lambek & Scott is pretty good?
Ah hah. I figured out which paper I was thinking of.
Normalization and the Yoneda Embedding by Cubric, Dybjer and Scott.
In section 3, they construct a correspondence between typed lambda calculi with just function types and free Cartesian closed PER-enriched-categories, because they're going to use that for normalization-by-evaluation.
I imagine having some of the categorical pieces perform double-duty (both contexts and types) makes the correspondence harder to use for this purpose. The function-types-only approach makes the correspondence really on-the-nose, I think.
i also like the treatment of cccs and lambda calculi in "The Maximality of the Typed Lambda Calculus and of Cartesian Closed Categories" https://arxiv.org/abs/math/9911073 which also proves a very lovely result
Also, incidentally, the reason I'd push for thinking of context maps as "substitutions" rather than just a tuple of terms is that that can connect up with work on simultaneous substitution and the like, which is a useful thing to think about in type theory, and maybe even a useful concept to have within a type theory.
Tuples of terms kind of leads you to think there's just some mismatch in the details of how each thing works.
What does "PER-enriched" mean? What's "PER"?
Partial equivalence relation.
named for PER martin-löf
I think the point is similar to your Enriched Lawvere Theories for Operational Semantics, but the PER lets you have different denotational semantics with the same underlying set. So, α only vs αβη.
Those can both be different PERs on the same set of lambda terms.
Somehow that makes their construction easier, because they're 'normalizing' by making a loop on the α category through the αβη category or something.
Actually, if you used something directed instead (like the enriched Lawvere theory stuff), maybe that would ensure that you actually get the expected normal form.
Instead of just 'by accident'. :)
Burak Emir said:
Do you guys agree that it would be nice to have a clear exposition of these matters, i.e. that shows the connection between CCC, typed lambda calculus (typing rules / natural deduction) and syntactic categories? It seems like a basic thing, yet I think every exposition chooses something else to anchor the basics in.
I think the main reference for this is probably Introduction to higher order categorical logic, by Lambek and Scott.
How did this end up in #general, by the way? This would probably fit best in #type theory. There are many streams to subscribe to - this and other points will have to be made clear when we put together a better guide.
Agree on L&S, but they do not talk about multicategories. Also agree that moving the conversation there makes sense.
Anyone have good pointers on why most texts on the Calculus of Constructions written by logicians avoid mentioning dependent sums, like Σa:A.B(a)? When we do Categorical Semantics for DTTs (canonical e.g.: in Bart Jacobs "Categorical Logic and Type Theory") we do the opposite - we explain most concepts on the adjunction Σ⊣π^* first, and leave most properties of π^*⊣Π as exercises...
Here is my guess: rules for dependent sums _used to be_ given explicitly - for example they appear in the system in Zhaohui Luo's thesis - but then people like Christine Paulin-Mohring showed that dependent sums can be defined as certain initial algebras using inductive types blah blah blah - see https://hal.inria.fr/hal-01094195/ for an overview...
Problem: I haven't been able either to find on the web, or to define myself, a definition of Σa:A.B(a) given in terms of Π! I know that chapter 11 of Girard's "Proofs and Types" - http://www.paultaylor.eu/stable/prot.pdf - is about System F and about how to define Booleans, naturals, products and inductive types in System F, and I'm rereading it and finding it quite clear, but I'm not able to adapt its definition of "×" to a definition of a "Σ" - probably because I don't have much practise...
Any hints? Any pointers?
Thanks in advance!!!
An impredicative encoding of Σ in terms of Π is usually like Σ(x : A), B(x) := Π(r : Type), (Π(x : A), B(x) -> r) -> r. In Haskell we usually say type Exists f = forall r. (forall x. f x -> r) -> r. I’m not quite sure if that’s what you’re looking for, but I hope it helps :)
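A minimal runnable Haskell rendering of that encoding (the names Exists, pack and unpack are illustrative; RankNTypes is assumed):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Impredicative/Church encoding of an existential:
-- Exists f  ~  Σ(x : Type). f x, encoded via Π (i.e. forall).
newtype Exists f = Exists (forall r. (forall x. f x -> r) -> r)

pack :: f x -> Exists f
pack v = Exists (\k -> k v)

unpack :: Exists f -> (forall x. f x -> r) -> r
unpack (Exists e) = e

main :: IO ()
main = print (unpack (pack [1, 2, 3 :: Int]) length)
```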
Of course if your universe is predicative, this construction ends up in the “wrong” universe ...
It's also not quite the same as a dependent sum, depending on what you mean by that. The Calculus of Constructions doesn't have dependent sums with the full induction principle.
Right, I suppose you need function extensionality and parametricity to prove that the construction actually does what you want it to do.
@Nicholas Scheel, this looks good but I don't know how to express the second projection...
Let S = Σx:A.B(x)
and S' = ΠR:Type. (Πx:A. B(x) → R) → R.
If s':S' then we can define the first projection as πs' := s' A (λa:A. λb:B(a). a), but how do I define the second projection π's'?
it would be s' (B(πs')) (λa:A. λb:B(a). coe b), but @Dan Doel is right, you actually need a parametricity theorem/assumption to prove that a ~ πs'.
I’m afraid I don’t know exactly how the parametricity result would apply off the top of my head ... you essentially need to prove that forall r. (a -> r) -> r is a “constant” function, by specializing it (applying the identity function), and using parametricity to say that any other result factors through that ... in particular you should be able to get the coercion you need
@Nicholas Scheel What is "coe"? Some kind of coercion operator?
Anyway: thanks, great! Cleaning up the notation a bit, this works in the naïve models that I have in mind:
Let S = Σx:A.B(x)
and S' = ΠR:Type. (Πx:A. B(x) → R) → R.
If s':S' then we can define πs' and π's' as:
πs' := s' A (λa:A. λb:B(a). a) and
π's' := s' (B(πs')) (λa:A. λb:B(a). b).
Homework for me for the next months: understand the extra axioms that we need to add to make this work "in general"...
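For comparison, the non-dependent version of these projections works in Haskell with no extra axioms, because the result type of the second projection does not depend on the first component (a sketch; Pair, pair, proj1, proj2 are illustrative names):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Non-dependent Church pair:  A × B := ΠR. (A -> B -> R) -> R
type Pair a b = forall r. (a -> b -> r) -> r

pair :: a -> b -> Pair a b
pair x y = \k -> k x y

proj1 :: Pair a b -> a
proj1 p = p (\a _ -> a)

proj2 :: Pair a b -> b
proj2 p = p (\_ b -> b)

main :: IO ()
main = putStrLn (proj2 (pair (1 :: Int) "snd"))
```

In the dependent case the second projection's type mentions πs', which is where the parametricity assumption sneaks in.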
Here is an Agda proof that parametricity allows you to derive induction for encoded Σ: https://hub.darcs.net/dolio/agda-share/browse/ParamInduction.agda#139
Although I haven't run it in a very long time, so it's possible the file no longer loads.
There are probably papers now that give a more systematic treatment of this (for all inductive types), but adding internal parametricity to type theory is still pretty researchy. Another approach is Cedille, which has a type former called "dependent intersection" that allows inductive types with the strong induction principle to be derived for lambda term implementations.
However, this is all beyond the Calculus of Constructions.
Parametricity is also just false in some important models of type theory, such as ordinary mathematics.
So you may not want to take it as an axiom, depending on your goals.
"Ordinary mathematics" already isn't a model of the calculus of constructions.
Why not? Are you worried about universes?
The calculus of constructions doesn't have universes. It has impredicative quantification over all types, so that you can encode weak inductive types.
Okay, this is some specific system I'm not familiar with then
So you can construct e.g. the Church naturals ℕ := ΠR:Type. (R → R) → R → R. It's like System F with dependent types.
i mean, the CoC is a specific system ;p
it's not quite the same as CIC, nor is it quite a subsystem of it iirc
Right.
I think the original description might not be quite 'the maximal corner of the lambda cube', either, but that's what people usually take it to mean these days.
Ah I was thinking of CIC (although then the original question would be somewhat odd). I agree in this particular context, my comment is not that relevant.
How close or far away from the standard Curry-Howard interpretation of types (with terms being justifications that type-check) are the objects of the Effective Topos?
I'm tempted to read ℕ (plus equality on it) as all possible sets of indices for terms (let's write the enumeration of terms that relates numbers and codes as n ↦ t_n) that could make for proofs. So that for any object (type) X, it's really the numbers n with t_n : X (of which there will be several, corresponding to differently coded up functions, for example).
Does this make any sense?
That sounds vaguely correct, although I'm not sure calling it 'Curry-Howard' really makes sense to me. The enumeration part seems on the right track.
One way of saying what an object is (I think) is that it's a set S together with a partial function saying how elements of S are realized by natural numbers. Each element of S must be realized by at least one number. Numbers that realize the same element of S must be mapped by arrows to numbers realizing the same element of the codomain.
However, you can instead consider the partial equivalence relation on ℕ that is induced by the realization function, I think. The arrows must preserve naturals related by the PER.
In the first case you'd talk about how the arrows are realizing functions between the sets. In the second you'd talk about how they're respecting the PER structure.
Thx. When you say "must be mapped by arrows", which arrows (plural) do you mean? Arrows S to S?
An arrow is a function f : S → T and a number n such that, writing E_S and E_T for the realization maps, for all m for which E_S(m) is well defined, n·m and E_T(n·m) are also well defined, and E_T(n·m) = f(E_S(m)). Or something along those lines. n·m is the operation of n on m in Kleene's first algebra (the (equivalent of the) computation model underlying the effective topos).
So if you're thinking about something like a numbering of the partial recursive functions, that is where it enters in an essential way.
Actually, I guess I'm uncertain whether it's the definition there, where two numbers that realize the same function are different arrows, or whether it's properly stated as "functions for which there exists a realizer".