Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: On Categorical Logic


view this post on Zulip Suraaj K S (Jul 12 2024 at 14:53):

I've been trying to learn a little bit about how category theory connects to logic and type theory.
Background-wise, I'm familiar with category theory at the level of Awodey's / Leinster's books, and with axiomatic set theory (ZFC). I think it's safe to assume that I know very little about type theory.

Here are some books/ resources I came across:

I am sure that there are many other resources.

Something that I was quite surprised by was that these books seemed to 'start off' with things which are quite different:

I was wondering how similar / different all these approaches are?
Is there a 'standard' approach that people use, and which book / resource would be a good starting point for that?
How do these other approaches (maybe using multicategories) relate to the standard approach?

view this post on Zulip Chad Nester (Jul 12 2024 at 15:14):

I guess which reference you're looking for depends on what, precisely, you mean when you say "logic".

I'm going to recommend "Introduction to Higher Order Categorical Logic" by Lambek and Scott. If any book is a standard reference in categorical logic it is that one. I think it should be pretty approachable given your background.

view this post on Zulip Chad Nester (Jul 12 2024 at 15:15):

The book spends a lot of time on the correspondence between the simply-typed lambda calculus and cartesian closed categories. I think if you're looking to learn some basic type theory this would be a good way to do it.
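As a tiny illustration of that correspondence (a sketch, with Haskell's types standing in, loosely, for the objects of a cartesian closed category): currying and uncurrying witness the defining isomorphism Hom(A × B, C) ≅ Hom(A, C^B). The primed names are just to avoid clashing with the Prelude.

```haskell
-- The CCC hom-set isomorphism Hom(A × B, C) ≅ Hom(A, B -> C),
-- written out in Haskell:
curry' :: ((a, b) -> c) -> (a -> b -> c)
curry' f a b = f (a, b)

uncurry' :: (a -> b -> c) -> ((a, b) -> c)
uncurry' g (a, b) = g a b

-- Example: the first projection, recovered in curried form.
pick :: Int -> Int -> Int
pick = curry' fst
```

Here types play the role of objects, functions of arrows, and pairing of finite products; the standard caveat that Haskell is only morally a CCC (because of nontermination) comes up again later in this thread.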

view this post on Zulip Suraaj K S (Jul 12 2024 at 15:26):

@Chad Nester , thanks for your response. I'd still be a little curious as to why so many different approaches are used, and whether all of them are different ways of looking at the same thing..

view this post on Zulip Chris Grossack (they/them) (Jul 12 2024 at 17:41):

This is naturally going to be a bit sloppy because of space constraints, but here's the general idea. As a tl;dr, yes I think these people are all more-or-less talking about the same thing.

First, we have to realize there are multiple interesting "logics" one can study. There's a LOT to say here, but one quick observation is that "weak" logics can't say much, but what they can say is true in a lot of settings, whereas "strong" logics can say lots of things, but what they can say only matters in more limited circumstances.

For instance, if you're working with "equational logic", where all your axioms have to be equations between terms, you can interpret models inside any category with finite products. This gives you access to models in LOTS of settings (which is freeing!), but you can only express equational things (which is limiting!). Contrast this with, say "geometric logic", where your axioms can be much more flexible, so you can express lots of interesting properties (which is freeing!), but now you only get models inside topoi (which is limiting).

How does this work? In broad strokes, you think of an object A in your category as a "context". That means the expressions you write down are allowed to have a variable "of type A". A colloquial example might be

"let \vec{v} : V and \vec{w} : V. Then \vec{v} + \vec{w} = \vec{w} + \vec{v}."

The context is "\vec{v} : V and \vec{w} : V". It tells you what variables you have access to. Then in this context we're able to ask the question \vec{v} + \vec{w} = \vec{w} + \vec{v}, which has some truth value. We can interpret the context (with two variables of type V) as the object V \times V in our category. We think about \vec{v} + \vec{w} and \vec{w} + \vec{v} as being two arrows V \times V \to V, and we want these arrows to be equal (which we can express by the commutativity of a diagram).
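This example can be transcribed literally as a toy sketch, interpreting the theory in the category of Haskell types and functions (the choice of Double as the interpretation of V is an arbitrary assumption for illustration):

```haskell
-- Interpret the type V as Double (an arbitrary choice for illustration).
type V = Double

-- The context "v : V, w : V" becomes the product V × V, and each term
-- in that context becomes an arrow V × V -> V.
lhs, rhs :: (V, V) -> V
lhs (v, w) = v + w
rhs (v, w) = w + v
-- The equation v + w = w + v asserts that these two arrows are equal.
```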

This brings us to the first thing you explicitly asked about, namely "multicategories". Instead of working with V \times V in order to get an arrow with multiple inputs (read: a context with multiple variables), it would be nice if we natively had access to arrows with multiple sources in the definition of a category. This is because things in the definition of a category are the only things guaranteed to be strict, and this allows us to sidestep some potentially annoying "coherence issues" that are a frequent problem in categorical logic. A multicategory is just a category where we allow arrows to have multiple inputs.

How, I hear you asking, do we interpret more general propositions? Well, we assign to every type (read: object) A a lattice P(A) of "propositions on A". These are supposed to be "compatible" in the sense that whenever we have a map f : A \to B and a proposition \varphi(b) \in P(B), we should get a proposition on A defined by \varphi(f(a)) (we say this is defined "by pullback"). This data is exactly a fibred category, where the fibre over A in your category is exactly the lattice P(A), and the arrows in the total category come from these pullback operations.
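A minimal concrete sketch of this "propositions indexed by objects, compatible via pullback" structure, assuming (purely for illustration) that a proposition on A is modelled as a Boolean predicate, so the fibre over A is the lattice of predicates ordered pointwise:

```haskell
-- The fibre over a: predicates on a, standing in for the lattice P(a).
type Prop' a = a -> Bool

-- Pulling back phi in P(b) along f : a -> b gives phi . f in P(a),
-- i.e. the proposition "phi(f(x))".
pullbackProp :: (a -> b) -> Prop' b -> Prop' a
pullbackProp f phi = phi . f
```

For instance, pulling back `even` along `(+1)` yields the predicate "odd" on integers.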

To connect this with topoi, every topos has a natural subobject fibration, where we assign to the object A the lattice \mathrm{Sub}(A) of subobjects of A. This functor is represented by the "subobject classifier" \Omega, which is (part of) why you see the subobject classifier get so much attention.

Lastly, you ask why category theory is "naturally 2-categorical". Here Evan is getting at Lawvere's functorial semantics. Going back to equational theories because they're simpler, every theory \mathbb{T} has a "classifying category" \mathcal{C}_\mathbb{T} so that models of \mathbb{T} in some category \mathcal{E} are exactly functors \mathcal{C}_\mathbb{T} \to \mathcal{E}! But the models are only half the story: where are the homomorphisms? It turns out that homomorphisms are exactly natural transformations between these functors!! So in order to get the whole category of \mathbb{T}-models in \mathcal{E}, you need access to the full 2-categorical structure of categories, functors, and natural transformations.
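To make the "models and homomorphisms" point concrete, here is a sketch for the equational theory of monoids with Haskell types as the ambient category: a model picks out an object with the operations, and a homomorphism is a map commuting with them — the componentwise condition a natural transformation between the corresponding functors would satisfy. (The class and all the names are invented for illustration; the equations are not enforced by the type system.)

```haskell
-- A model of the theory of monoids: a carrier with unit and
-- multiplication (satisfying the monoid laws, unenforced here).
class MonoidModel m where
  unitM :: m
  multM :: m -> m -> m

instance MonoidModel [a] where
  unitM = []
  multM = (++)

newtype Sum = Sum Int deriving (Eq, Show)

instance MonoidModel Sum where
  unitM = Sum 0
  multM (Sum x) (Sum y) = Sum (x + y)

-- A homomorphism of models: h unitM == unitM and
-- h (multM x y) == multM (h x) (h y) — the naturality squares.
lengthHom :: [a] -> Sum
lengthHom = Sum . length
```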

So we see that these topics are all closely related, but they solve different problems that arise in the field.


I hope this helps ^_^

view this post on Zulip Suraaj K S (Aug 22 2024 at 13:02):

I was reading the Wikipedia article on Categorical Logic: https://en.wikipedia.org/wiki/Categorical_logic.

It says that there are 3 'themes' to categorical logic: categorical semantics, internal languages, and term-model constructions.

I was wondering if these three themes are 'independent', or interconnected?
A brief look seems to indicate that at least the first 2 (categorical semantics and internal languages) are quite independent ideas..

For instance, Mike Shulman's notes: https://mikeshulman.github.io/catlog/catlog.pdf seem to be about the second theme (type theories as internal languages). Moreover, searching for 'functorial semantics' in the pdf yields nothing!

On the other hand, Steve Awodey's notes: https://awodey.github.io/catlog/notes/ seem to be about the first theme.

I don't really know what the third theme means, but it looks like it should just be part of the first theme, as indicated by the phrase

In many cases, the categorical semantics of a logic provide a basis for establishing ...

So, am I right in assuming that these are two separate ideas, and Categorical Logic is in some sense, an overloaded term?

view this post on Zulip Federica Pasqualone (Aug 22 2024 at 13:08):

"Introduction to Higher Order Categorical Logic" by Lambek and Scott is my fav and best book ever!

view this post on Zulip David Michael Roberts (Aug 22 2024 at 13:39):

'Logic' is an overloaded term. Categorical logic is at least cutting it down to a subset where categories are used in more-or-less essential ways :-)

view this post on Zulip John Baez (Aug 22 2024 at 14:00):

What about Suraaj's first question: are "categorical semantics," "internal languages" and "term-model constructions" independent or interconnected?

I know nothing about the third, but I'd hope that at least the first two are deeply connected. For example, geometric functors between topoi are part of categorical semantics, while every topos has its own internal language. But these aren't independent ideas: a geometric functor F : C \to D should map models of geometric theories formulated in the internal language of C to those formulated in the internal language of D. Or something like that - maybe an expert could say this more accurately.

view this post on Zulip Ryan Wisnesky (Aug 22 2024 at 14:14):

Re: the third, you could call this "initial model constructions". The reason computer scientists call those "term models" is that the most direct route to construct them is from the "terms" (syntax) of the logic. Anyway, in e.g. equational logic and many others you can always find a distinguished initial model of a theory via a universal property, etc. In fact, in CQL we compute these all the time.

view this post on Zulip John Baez (Aug 22 2024 at 14:24):

Thanks! If "term models" are initial models constructed syntactically, which sounds completely believable, then these are definitely closely connected to the theme of internal languages.

So now I will claim all 3 topics in the Wikipedia article on categorical logic are tightly connected, and the article should be rewritten to explain how.

view this post on Zulip Ivan Di Liberti (Aug 22 2024 at 15:23):

Suraaj K S said:

I was reading the Wikipedia article on Categorical Logic: https://en.wikipedia.org/wiki/Categorical_logic.

It says that there are 3 'themes' to categorical logic: categorical semantics, internal languages, and term-model constructions.

I was wondering if these three themes are 'independent', or interconnected?
A brief look seems to indicate that at least the first 2 (categorical semantics and internal languages) are quite independent ideas..

For instance, Mike Shulman's notes: https://mikeshulman.github.io/catlog/catlog.pdf seem to be about the second theme (type theories as internal languages). Moreover, searching for 'functorial semantics' in the pdf yields nothing!

On the other hand, Steve Awodey's notes: https://awodey.github.io/catlog/notes/ seem to be about the first theme.

I don't really know what the third theme means, but it looks like it should just be part of the first theme, as indicated by the phrase

In many cases, the categorical semantics of a logic provide a basis for establishing ...

So, am I right in assuming that these are two separate ideas, and Categorical Logic is in some sense, an overloaded term?

Check out my suggested readings.

view this post on Zulip Mike Shulman (Aug 22 2024 at 16:19):

I would say that the "categorical semantics" of a logic consists in describing a category of structured categories that is related to the category of theories in that logic by an adjunction, where the two functors in the adjunction are the "internal language" of a category and the "term model" of a theory.

view this post on Zulip John Baez (Aug 22 2024 at 16:25):

Nice, that ties it up neatly.

view this post on Zulip Suraaj K S (Aug 24 2024 at 18:56):

Mike Shulman said:

I would say that the "categorical semantics" of a logic consists in describing a category of structured categories that is related to the category of theories in that logic by an adjunction, where the two functors in the adjunction are the "internal language" of a category and the "term model" of a theory.

Thanks a lot! That seems concise enough to be put on a bumper sticker ;)
I was wondering if there are books / resources which mention this? IMHO, my ideal book on Categorical Logic would mention this at the very beginning, and the rest of the book would be various examples of this adjunction...

Also, am I right when I think of the 'classic' 'equivalence' of Cartesian Closed Categories and STLC as an adjunction as mentioned above (with CCCs being the 'structured categories', and the category of theories in that logic being the category of STLC theories)?

view this post on Zulip Mike Shulman (Aug 25 2024 at 01:11):

My unfinished notes, which you mentioned in the original post, are certainly written from that point of view, although I didn't emphasize that particular terminology.

view this post on Zulip Mike Shulman (Aug 25 2024 at 01:13):

One thing that's confusing is that we sometimes sloppily say that a logic or type theory X is "an internal language for" structured categories Y when they are related by a categorical-semantics adjunction, but that's a different meaning of "internal language" than when that phrase describes the right adjoint in the adjunction.

view this post on Zulip Mike Shulman (Aug 25 2024 at 01:14):

Also, it's worth noting that many people (but not me) would prefer that the adjunction is an equivalence.

view this post on Zulip Suraaj K S (Aug 27 2024 at 00:09):

I see. Thanks a lot for the explanation.

I am a little curious about the following sentence -

Mike Shulman said:

the "categorical semantics" of a logic consists in describing a category of structured categories that is related to the category of theories in that logic...

Here, examples of 'logic' would be equational logic, STLC, FOL, etc., right?
And are 'theories in that logic' simply pairs consisting of a signature and a set of axioms? For instance, ZFC is a 'theory' of FOL. (I'm curious what the morphisms between two theories are...)
One also talks about 'models' (sometimes called algebras?), but I guess that the category of structured categories takes this role? That is, for each 'theory' we have a category of models (which is an object of the category of structured categories...)

view this post on Zulip Mike Shulman (Aug 27 2024 at 17:55):

Yes, those are examples of what I meant by "logics". Whatever we call it, the class of objects in question also includes type theories (like STLC and MLTT); sometimes these are also called "doctrines" or "2-theories". Note that in general each such "logic" is multi-sorted.

view this post on Zulip Mike Shulman (Aug 27 2024 at 17:58):

And yes, a signature and a set of axioms is basically what I mean by a "theory" in a logic. Defining the morphisms between theories is part of where you get into the question of whether the adjunction is an equivalence. I would choose a morphism of theories S \to T to consist of maps of signatures and axioms, sending each base type of S to a base type of T, each function symbol of S to a function symbol of T, and each axiom of S to an axiom of T. But there are successive weakenings of this where you first allow axioms of S to go to theorems of T rather than axioms, then allow function symbols to go to arbitrary terms, then allow base types to go to arbitrary types. The fully relaxed version is what tends to get you an equivalence between theories and categories, but it remembers much less about what distinguishes a theory from its term model.

view this post on Zulip Mike Shulman (Aug 27 2024 at 18:01):

As for models, if T is a theory and C is a structured category, then a model of T in C is a structured functor \mathrm{Tm}(T) \to C, or equivalently a theory morphism T \to \mathrm{Lang}(C). (In the latter case it doesn't really matter which morphisms of theories you take, since theories of the form \mathrm{Lang}(C) are "saturated", in that every theorem is an axiom, every term is represented by a function symbol, and every type is isomorphic to a base type.) Although the choice of \mathrm{Tm}(T) \to C is usually better, since then you get the whole category of models without any extra work: a category of structured categories is actually a 2-category, whereas the category of theories is less obviously anything other than a 1-category.
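Written as an isomorphism of hom-sets, this two-ways description of models says exactly that the term-model functor is left adjoint to the internal-language functor (a sketch, using the Str/Th names that come up later in the thread):

```latex
% Models of T in C, two equivalent descriptions:
\mathbf{Str}\big(\mathrm{Tm}(T),\, C\big) \;\cong\; \mathbf{Th}\big(T,\, \mathrm{Lang}(C)\big),
\qquad\text{i.e.}\qquad \mathrm{Tm} \dashv \mathrm{Lang}.
```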

view this post on Zulip Suraaj K S (Aug 28 2024 at 12:43):

Thanks for the explanation!

In Jacobs's book: https://people.mpi-sws.org/~dreyer/courses/catlogic/jacobs.pdf, I think what's happening is that what we call "logics" / "doctrines", etc. are split into a 'logic' part and a 'type theory' part. For instance, in section 0.1, we have "a logic is always a logic over a type theory". Moreover, it seems like we can 'mix and match' different 'logics' and 'type theories' (according to Jacobs) to form different 'doctrines' (in the sense of the previous discussion).

Moreover, in that book, we have the following sentence: "Categorically this will correspond to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory".

I was wondering how this 'fibred' category maps to our previous discussion where we have:

  1. a category of structured categories (lets call it Str)
  2. a category of theories (lets call it Th)
  3. an adjunction between Tm: Th → Str and Lang: Str → Th

Is the 'fibred' category simply a particular object in Str?

view this post on Zulip Mike Shulman (Aug 28 2024 at 17:55):

Yes, that's right. Sometimes a "structured category" means a category together with a fibration over it, or even something more like two fibrations, etc. In that case the corresponding doctrine has two classes of types, one or more of which might be called "propositions", and the part of the doctrine pertaining to terms belonging to "propositions" is traditionally called the "logic".

view this post on Zulip Suraaj K S (Aug 28 2024 at 18:33):

Thanks a lot for your patience and response! I hope this was not too annoying.

Mike Shulman said:

In that case the corresponding doctrine has two classes of types, one or more of which might be called "propositions", and the part of the doctrine pertaining to terms belonging to "propositions" is traditionally called the "logic".

This reminds me a little bit about how we have Prop and Set, etc. in Coq. I was wondering if this is related?

view this post on Zulip Suraaj K S (Aug 28 2024 at 18:36):

Mike Shulman said:

a signature and a set of axioms is basically what I mean by a "theory" in a logic

Hopefully, this is my last question (for now ;)). I'm guessing that a 'signature' here is a collection of sorts and first order function symbols (with constants being a special case).

I also think CT cannot handle 'higher order' function symbols, which (I think) was the point of the discussion here: https://categorytheory.zulipchat.com/#narrow/stream/229199-learning.3A-questions/topic/Did.20Category.20Theory.20catch.20up.3F/near/453035014...

view this post on Zulip Mike Shulman (Aug 28 2024 at 18:45):

Suraaj K S said:

This reminds me a little bit about how we have Prop and Set, etc. in Coq. I was wondering if this is related?

Yes, it's related. It's not exactly the same because Coq is a dependent type theory so that everything is all mixed up, with Prop and Set not being judgment forms but types themselves. But if you replace A : Prop with a judgment A prop and so on, you get the judgmental structure of a fibred logic of this sort, as in chapter 4 of my notes.

view this post on Zulip Patrick Nicodemus (Aug 28 2024 at 18:51):

I like the fibrational perspective because I think toposes can sometimes bake in too much of a tight connection between propositions and types. For example, I don't think that in the setting of a fibration modelling higher order logic we have (for P, Q with P \land Q \to \bot)

\{ x : A \mid Px \lor Qx \} \cong \{ x : A \mid Px \} + \{ x : A \mid Qx \}

whereas this is true in a topos.

To me, this doesn't seem like something you would want to follow automatically. Toposes are a little bit too strong in that regard.

view this post on Zulip Mike Shulman (Aug 28 2024 at 18:53):

I love fibrations, but I don't really intuit why one might expect your example to ever fail to be true.

view this post on Zulip Mike Shulman (Aug 28 2024 at 18:55):

As for the meaning of "signature", in general it depends on the doctrine. A signature for a doctrine (which I would prefer to define in general to include the "axioms", since axioms are "really" just higher-dimensional generators) should consist of "generators" for all the sorts of the doctrine. So, for instance, in STLC, a signature consists of generating types, generating terms, and generating equalities (= 2-cells = relations between terms).

A "generating term" is also known as a function symbol, which has types as its input(s) and output(s). But you can make different choices about whether those input and output types must be generating types or arbitrary types. If they are required to be generating types, then the function symbol is "first-order", whereas if they are allowed to be arbitrary types (and the doctrine contains higher-order type-formers like \to) then the function symbol is "higher-order". There's no problem with higher-order function symbols or their semantics in categories with higher-order structure.
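In Haskell-flavoured STLC terms the distinction reads as follows (a toy illustration; pretending Int is a generating type of the signature, with made-up symbol names):

```haskell
-- A first-order function symbol: inputs and output are base types.
succ' :: Int -> Int
succ' = (+ 1)

-- A higher-order function symbol: one input is an arbitrary (arrow)
-- type, which makes sense once the doctrine has the -> type former.
atZero :: (Int -> Int) -> Int
atZero g = g 0
```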

view this post on Zulip Mike Shulman (Aug 28 2024 at 18:57):

I didn't read that other thread carefully, but I gathered the point was that there's no good notion of non-invertible morphism between models of a higher-order theory due to contravariance. This seems to me like just a fact, and I'm not quite sure why one would think that there should be such a notion. In any case there's no problem with higher-order function symbols in a doctrine that has higher-order type formers, although the construction of the adjunction is somewhat more involved (it goes like the one for ×\times-presentations in section 1.7.3 of my notes).

view this post on Zulip Patrick Nicodemus (Aug 28 2024 at 19:00):

Mike Shulman said:

I love fibrations, but I don't really intuit why one might expect your example to ever fail to be true.

Hahahaha, yes, I agree it is a bit common sense. It's not that I have a counterexample semantics in mind that is persuasive to me; it's just that I don't like the proof and the way it relies on theorems about topos theory. The fact that it fails in a general higher order fibration makes me suspect that there's some blurring of logic and semantics going on in the topos-theoretic proof, where we "cheat" by passing to the semantics to establish what should be a logical theorem.

I am sure there are formulations of intuitionistic higher-order logic that are complete for topos theory, but since this fails in fibrations, it seems that completeness for the topos theoretic semantics is not robust with respect to small changes in the proof calculus for intuitionistic higher order logic.

view this post on Zulip Mike Shulman (Aug 28 2024 at 19:05):

Well, personally I think a better internal language for toposes is dependent type theory. You can more or less code all of mathematics in IHOL, but there are parts of it (like category theory!) that are much more natural in DTT. And in DTT, your example is provable.

view this post on Zulip Mike Shulman (Aug 28 2024 at 19:08):

I think one thing (perhaps the main thing?) that fails in a general higher-order fibration relative to a topos, and which is related to your example, is function comprehension, a.k.a. unique choice: from \forall x : A. \exists! y : B. \varphi(x,y) we can't deduce the existence of a function f : A \to B such that \forall x : A. \varphi(x, f(x)). Again, this is a pretty unintuitive failure, and the most common thing we do with a higher-order fibration that fails it is construct a category in which it does hold, e.g. the tripos-to-topos construction.

view this post on Zulip Suraaj K S (Aug 29 2024 at 00:02):

Thanks a lot. This has been very enlightening.
I guess my initial confusions arose because I thought there was a 'unique way' to do Categorical semantics for a logic, stemming from 'cool CT things' like the 'Curry-Howard-Lambek' correspondence, etc. where people say that STLC and CCCs 'are the same thing'. I guess that I thought that every doctrine (eg. STLC) would be 'the same as' some structured category (eg. CCC). However, it seems like it's not the case. It seems like we could choose some other Structured Category - For instance, it seems like we could use both Fibred categories as well as Topoi as our 'structured categories' for some doctrine (Idk which doctrine was being discussed exactly. Or maybe the point is that they work for all of them...). Now, I'm curious about things like soundness/completeness though.. Do we only have 'completeness' when the adjunction is an equivalence?

view this post on Zulip Mike Shulman (Aug 29 2024 at 02:33):

Completeness, in the categorical sense, basically follows from the construction of the term model. If something is true in all structured categories, then it's true in the term model, and therefore it's provable in the theory since the term model is constructed so that the things true in it are precisely those provable in the theory. (Classically, completeness sometimes refers more specifically to completeness relative to one specific structured category such as Set; that's a much more special property that only holds for some doctrines and theories.)

view this post on Zulip Mike Shulman (Aug 29 2024 at 02:43):

It's true in general that a single syntactic doctrine can correspond to more than one kind of structured category. Patrick and I were talking about the syntactic doctrine of Intuitionistic Higher-Order Logic, which has semantics both in a kind of "higher-order fibration" and also in toposes. I'm not sure how special of a situation this is.

In a certain sense the higher-order fibrations are the more "natural" semantics of IHOL, since they represent its judgmental structure more directly, e.g. they have two classes of objects corresponding to the two judgments "is a type" and "is a proposition" of IHOL. It's usually possible to write down a "natural" semantics of any syntactic theory in this way by simply translating judgments to objects and morphisms. But if you take it seriously, sometimes what you get isn't what most people think of! E.g. the "natural" semantics of STLC is not really cartesian closed categories but rather cartesian closed multicategories. Of course CCCs and CCMCs are "equivalent" in a certain sense.

On the other hand, for highly structured categories like toposes and LCCCs, there doesn't seem to be a syntactic doctrine for which they are exactly the "natural" semantics. So instead we pass through an intermediate structure, like a higher-order fibration or a "category with families", that is the natural semantics of a doctrine like IHOL or DTT.

view this post on Zulip Mike Shulman (Aug 29 2024 at 02:46):

And my last two messages are a bit contradictory: completeness is only really "automatic" in the case of the "natural" semantics, since that's the case when the term model can be constructed directly out of the judgments in the theory. So Patrick's point was in general well-taken, that for, e.g., the semantics of IHOL in toposes, there may be things that hold in all topos models but aren't provable syntactically. IIRC I think this doesn't precisely happen for IHOL and toposes, because the things that hold in all toposes but not all higher-order fibrations (like Patrick's example) aren't expressible internally in the syntax of IHOL; but even if I'm right about that, it's sort of an accident.

view this post on Zulip Mike Shulman (Aug 29 2024 at 02:47):

(Or, one might say, a consequence of the very special nature of toposes.)

view this post on Zulip Mike Shulman (Aug 29 2024 at 02:47):

(Or, another one might say, the impoverished syntax of IHOL.)

view this post on Zulip Suraaj K S (Aug 30 2024 at 14:02):

Mike Shulman said:

Completeness, in the categorical sense, basically follows from the construction of the term model

Just to confirm, this is the sentence which is not exactly correct? Completeness 'basically follows' only when we have the 'natural semantics'?

view this post on Zulip Mike Shulman (Aug 30 2024 at 16:53):

Yeah.

view this post on Zulip Suraaj K S (Aug 30 2024 at 23:36):

Another thing that I think happens is that these 'doctrines' can be programming languages too, and you can similarly give categorical semantics to programming languages? And in many cases (I think) these PL 'doctrines' are lousy logics because they are inconsistent (because nonterminating programs are 'proofs' of ⊥)? I think this book: https://www.cambridge.org/core/books/categories-for-types/258841251C62FED1DACD20884E59E61C takes that view?

view this post on Zulip Mike Shulman (Aug 31 2024 at 00:01):

Yes, many doctrines have a computational interpretation, and so many programming languages have corresponding doctrines. And yes, if the programming language is Turing-complete, then every type in the doctrine will be inhabited by some term, which means that the corresponding categories can't have strict initial objects the way topos-like categories usually do. Instead, the corresponding categories are those arising from domain theory. As you say, it's hard to use a naive propositions-as-types interpretation of logic when all types are inhabited, but there may be things you can do.

view this post on Zulip Nathanael Arkor (Aug 31 2024 at 07:14):

Why does Turing completeness imply inhabitedness of all types? If I consider the unityped lambda calculus, which is Turing complete, and add a second type with no term formers, the calculus can still compute every Turing-computable function, so I would have thought this would be considered Turing complete without having all types inhabited.

view this post on Zulip Ryan Wisnesky (Aug 31 2024 at 13:53):

Usually you add a fixed point combinator of type fix_X : (X->X)->X and then every type would be inhabited by "fix id", or show that such a construction can be emulated already (in the case of the untyped lambda calculus the Y combinator plays the role of fix for example). It's an interesting exercise to write a term of type "Void" in Haskell for example.
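A sketch of this point in Haskell: a general fixed-point combinator typechecks at every type, so "fix id" inhabits anything — including types with no constructors — while the same combinator also gives ordinary, useful recursion.

```haskell
import Data.Void (Void)

-- A fixed-point combinator, definable thanks to unrestricted recursion.
fix' :: (a -> a) -> a
fix' f = f (fix' f)

-- "fix id" inhabits any type, even Void; forcing it diverges.
bottom :: Void
bottom = fix' id

-- The same combinator expresses terminating recursion, e.g. factorial:
factorial :: Integer -> Integer
factorial = fix' (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```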

view this post on Zulip Mike Shulman (Aug 31 2024 at 15:35):

Yes, I was thinking of Turing-completeness at every type, according to which the empty function is one of the partial functions that would be required to be represented. I guess it's true that you could have a language that's Turing-complete at some types and not others.

view this post on Zulip Mike Shulman (Aug 31 2024 at 16:00):

Ryan Wisnesky said:

It's an interesting exercise to write a term of type "Void" in Haskell for example.

Do you mean in some more complicated way than

?
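The code block quoted here didn't survive into the archive; judging from Ryan's reply below about recursion being implicit in Haskell, it was presumably the directly recursive one-liner (a reconstruction, not the verbatim original):

```haskell
import Data.Void (Void)

-- A term of the empty type, via Haskell's unrestricted recursion:
loop :: Void
loop = loop   -- typechecks; evaluating it simply diverges

-- Laziness lets us mention loop without forcing it:
demo :: Int
demo = const 1 loop
```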

view this post on Zulip Ryan Wisnesky (Aug 31 2024 at 16:09):

that's indeed what I meant; the pedagogy is in discovering why you can't write that same definition in e.g. Coq, how recursion is implicit in Haskell, and how axioms such as letrec can lead to terms in types that at first appear to have no constructors (such as Void).

view this post on Zulip Mike Shulman (Aug 31 2024 at 20:39):

Ah, yes. Sorry, it sounded to me like you were suggesting that writing such a term required using a fixpoint combinator.

view this post on Zulip Suraaj K S (Sep 02 2024 at 00:25):

I'm a little curious if / how Logical Relations might fit into this framework?

view this post on Zulip Mike Shulman (Sep 02 2024 at 13:44):

Logical Relations basically consist of interpreting a theory in a special sort of diagram/glued category. I wrote an old blog post about it from that point of view.
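For readers new to the term, a toy instance of a logical relation (the names, the parity relation, and the finite-domain check are all invented for illustration): pick a relation at a base type and lift it to function types in the usual "related inputs go to related outputs" way.

```haskell
-- A (binary, homogeneous) relation on a type.
type Rel a = a -> a -> Bool

-- Base-type relation: relate Ints with the same parity.
parityRel :: Rel Int
parityRel x y = even x == even y

-- The logical-relations lifting to the arrow type, checked over a
-- finite domain (the real definition quantifies over all related inputs):
arrRelOn :: [a] -> Rel a -> Rel b -> Rel (a -> b)
arrRelOn dom ra rb f g =
  and [ rb (f x) (g y) | x <- dom, y <- dom, ra x y ]
```

For example, (+2) and (+4) shift parity in lockstep, so they are related at Int -> Int, whereas (+1) and (+2) are not.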

view this post on Zulip Suraaj K S (Sep 02 2024 at 14:02):

Mike Shulman said:

Logical Relations basically consist of interpreting a theory in a special sort of diagram/glued category. I wrote an old blog post about it from that point of view.

I see. I was wondering if this also follows the 'general shape' of categorical semantics (with a category of structured categories, a category of theories, and an adjunction?), where these 'diagram/glued categories' are the 'structured categories'?

view this post on Zulip Mike Shulman (Sep 02 2024 at 14:27):

The diagram/glued category (for a particular shape of logical relations) is one of the structured categories.

view this post on Zulip Suraaj K S (Sep 02 2024 at 20:04):

Mike Shulman said:

The diagram/glued category (for a particular shape of logical relations) is one of the structured categories.

Oh yes. Okay, that makes sense...

view this post on Zulip Suraaj K S (Sep 02 2024 at 20:08):

I'm also guessing that we can use the same ideas with "fancier" type theories / doctrines like HoTT?

On a related note, if I understand correctly, we have an adjunction between the category of infinity groupoids (the category of structured categories), and the theories of MLTT?

view this post on Zulip Mike Shulman (Sep 02 2024 at 20:32):

Yes, you can do the same thing with HoTT and so on, but there are two different ways to do it. On one hand, you can work with 1-categorical presentations of structured higher categories like categories with families, and then you can expect an ordinary 1-adjunction just like for ordinary type theories. On the other hand, you can work with a higher category of structured higher categories, but then things get a bit more fraught and you may need to invert some "weak equivalence" theory morphisms.

view this post on Zulip Mike Shulman (Sep 02 2024 at 20:34):

For MLTT, the structured categories are "directly" categories with families with the appropriate type structure. The more indirect version is locally cartesian closed $(\infty,1)$-categories (for intensional MLTT with funext), although technically that is still a conjecture. The $(\infty,1)$-category of $\infty$-groupoids is one such structured category.

view this post on Zulip Suraaj K S (Sep 02 2024 at 21:03):

Mike Shulman said:

On one hand, you can work with 1-categorical presentations of structured higher categories like categories with families, and then you can expect an ordinary 1-adjunction just like for ordinary type theories. On the other hand, you can work with a higher category of structured higher categories, but then things get a bit more fraught and you may need to invert some "weak equivalence" theory morphisms.

I was wondering if there are pros / cons of both approaches? The first does seem simpler / easier

view this post on Zulip Suraaj K S (Sep 06 2024 at 17:25):

I'm curious if 'theories' of doctrines 'can be' doctrines themselves?

For instance, in this thread: #learning: questions > Higher-order logic as a 'theory' of STLC , we discuss how HOL is a theory of STLC (the doctrine). Being a logic, would one consider HOL a doctrine as well?

I guess that an extension to that question would be - can something like Set Theory (maybe ZFC / ETCS), which is a theory of first-order logic, be considered a doctrine as well?

view this post on Zulip Federica Pasqualone (Sep 06 2024 at 21:43):

Do we have a 5 stars hard old book on doctrines?

view this post on Zulip Mike Shulman (Sep 07 2024 at 18:43):

Suraaj K S said:

I was wondering if there are pros / cons of both approaches? The first does seem simpler / easier

Sorry, I guess I missed this question at first. The first is indeed easier, and in fact most current ways to do the second actually go through the first, by establishing an equivalence between an $(\infty,1)$-category of structured $(\infty,1)$-categories and a localization of some 1-category of CwF-like objects and then applying the first approach to the latter.

One disadvantage of the first approach is that it requires using a coherence theorem to strictify your structured higher categories into an appropriate kind of CwF. It's not always clear how to do this, or if it's even possible (e.g. strictifying univalent universes was an open problem for a while). Moreover, such strictification theorems generally rely heavily on a set-theoretic metatheory (meaning ZFC or ETCS, but not HoTT). Of course, insofar as one does the second approach via the first approach, it inherits the same problem, but at least in theory one might imagine a version of the second approach that doesn't require strictification.

One disadvantage of the second approach is that it's less clear how to formulate it as an adjunction rather than an equivalence, because syntactic theories naturally form only a 1-category, and we can't really expect to have an "internal language" functor from an $(\infty,1)$-category of structured $(\infty,1)$-categories to a 1-category of theories. If we invert some theory weak equivalences we can make an $(\infty,1)$-category of theories, but this seems to usually end up equivalent to that of structured $(\infty,1)$-categories.

However, all of this is pretty speculative, since the second approach hasn't been pursued very much so there isn't much that we can say about it for sure. Perhaps if someone takes it seriously, they'll find that I'm totally wrong. (-:

view this post on Zulip Mike Shulman (Sep 07 2024 at 18:51):

Suraaj K S said:

I'm curious if 'theories' of doctrines 'can be' doctrines themselves?

For instance, in this thread: #learning: questions > Higher-order logic as a 'theory' of STLC , we discuss how HOL is a theory of STLC (the doctrine). Being a logic, would one consider HOL a doctrine as well?

One sense in which theories can be doctrines is with a dimension shift. A doctrine (= 1-doctrine) is the same as a 2-theory, which lives in some 2-doctrine (= 3-theory). And so a 1-theory can be considered a 0-doctrine, in which we have 0-theories. For example, if we regard the 1-theory of groups as a 0-doctrine, then the 0-theories in that 0-doctrine are (presentations of) particular groups.

view this post on Zulip Mike Shulman (Sep 07 2024 at 18:51):

However, I don't think this is what you have in mind. It sounds to me as though what you're thinking of is closer to a "co-slice doctrine". For example, if we consider the semantic doctrine of CCCs (whose syntactic doctrine is STLC), then in that doctrine we have the theory $\mathrm{Grp}$ of group objects. Semantically, this is a particular CCC $T[\mathrm{Grp}]$ that is freely generated by a group object, so that CCC morphisms $T[\mathrm{Grp}] \to C$ are equivalent to group objects in $C$. Now if we consider the co-slice 2-category $T[\mathrm{Grp}] / \mathsf{CCC}$, we can regard this as the semantic doctrine of "CCCs containing a group object". It has a corresponding syntactic doctrine, which is obtained from STLC by adding the constants and axioms of the syntactic theory $\mathrm{Grp}$ as rules to the doctrine.

view this post on Zulip Mike Shulman (Sep 07 2024 at 18:54):

The description in the other thread of "HOL as a theory of STLC" seems to be thinking of something like this, of obtaining the doctrine HOL by adding the rules relating to $\Omega$ to the doctrine STLC, and thinking of those rules as a "theory" in STLC. I don't think this is quite correct, though, because the quantifier rules like $\forall$ have to be parametrized over arbitrary types, and that's not something you can do inside a theory. So I would say rather that HOL is an extension of STLC obtained by adding one base type and several rules relating to it.

view this post on Zulip dusko (Sep 07 2024 at 19:45):

Suraaj K S said:

I'm curious if 'theories' of doctrines 'can be' doctrines themselves?

I thought about this question a lot. in its time, many people asked various versions of the question:

what follows from the fact that category theory streamlines mathematical theories into corresponding universes (category of sets, category of groups, etc) and category theory itself is a mathematical theory that can be streamlined like that? what does it mean logically that category theory itself satisfies the axioms of a 2-category?

lawvere tried to axiomatize that. street had his theory of cosmoi, etc. in your terms, if we formalize the theory of doctrines as 2-monads, is there a 2-monad of doctrines?

the answer that mike shulman sketched is that there is a dimension shift: the "doctrine" of 2-monads would be a 3-monad. which is clear already on the basic case of "category" of categories being a 2-category.

BUT I think that this shift is a consequence of asking the question in terms of algebraic theories. category theory is essentially algebraic and categorical algebra covers a great part of categorical practice, so it seemed natural to ask the question in terms of monads and 2-monads. but in higher-order logic, all theories become algebraic...

i think it is worth dwelling on the question as it stands BECAUSE if we find a format of categorical logic where category theory is an instance of a categorical theory THEN category theory becomes a programming language, in the sense that the logical theories are programs, and any formalization of category theory itself within category theory is a universal interpreter of that programming language.

i think THAT point of view displays what we actually do when we do category theory.

the success of machine learning and the realization of AI that we witness these days shows that our logical practices in general are based on such universal interpreters in various forms at various levels of language... (I can say more if it is of interest.)

view this post on Zulip dusko (Sep 07 2024 at 19:46):

one example of a categorical programming language is the theory of sketches. there is a sketch of all sketches, and a sketch of their models...

view this post on Zulip dusko (Sep 07 2024 at 20:26):

Mike Shulman said:

Well, personally I think a better internal language for toposes is dependent type theory.

lawvere's logic for toposes was the language of hyperdoctrines. note that the top of the doctrinal diagram is dependent type theory, or at least the sums and the products as adjoints of the substitution. the bottom is quantifiers-as-adjoints. with all predicates at the bottom being substitution instances of Truth, the doctrinal diagram gives Girard's F-omega. with the comprehension adjunction between the top and the bottom, lawvere's doctrinal diagram describes a type theory equivalent with coquand-huet's calculus of constructions.

it is interesting that his equational hyperdoctrines put a sort of a straitjacket on identity types. he did not pursue the question of relating the equality predicate that he defined using the existential quantifier and the leibniz equality, defined using second-order universal quantifier. that is what thomas streicher did, and they found the groupoids :)

view this post on Zulip Suraaj K S (Sep 07 2024 at 21:58):

Mike Shulman said:

The description in the other thread of "HOL as a theory of STLC" seems to be thinking of something like this, of obtaining the doctrine HOL by adding the rules relating to $\Omega$ to the doctrine STLC, and thinking of those rules as a "theory" in STLC. I don't think this is quite correct, though, because the quantifier rules like $\forall$ have to be parametrized over arbitrary types, and that's not something you can do inside a theory. So I would say rather that HOL is an extension of STLC obtained by adding one base type and several rules relating to it.

This makes sense, and I think this is what I thought too... I guess I was confused because this book: https://mitpress.mit.edu/9780262133210/foundations-for-programming-languages/ (specifically example 4.4.7) explicitly says, "we can present higher-order logic as a λ→ theory". The 'signature' of this theory has infinitely many types of the form ∀<σ>, apparently...

view this post on Zulip Mike Shulman (Sep 07 2024 at 22:00):

It's not just about having infinitely many $\forall$ operators, it's that you want to have a $\forall$ for all types, and you can't say that in a theory. Even if you add a $\forall$ operator to the theory for each specific type that exists in the theory, it won't follow that a model of that theory has a $\forall$ operator for every type (object) that exists in the model.

view this post on Zulip Suraaj K S (Sep 07 2024 at 22:03):

Mike Shulman said:

It's not just about having infinitely many $\forall$ operators, it's that you want to have a $\forall$ for all types, and you can't say that in a theory. Even if you add a $\forall$ operator to the theory for each specific type that exists in the theory, it won't follow that a model of that theory has a $\forall$ operator for every type (object) that exists in the model.

I see.. I think that the book was maybe just showing a 'cute' syntactic trick, where you can 'encode' and 'do' HOL in STLC...

view this post on Zulip Suraaj K S (Sep 07 2024 at 22:24):

Mike Shulman said:

Logical Relations basically consist of interpreting a theory in a special sort of diagram/glued category. I wrote an old blog post about it from that point of view.

Another thing that I seem to keep coming back to is this discussion. It does seem like CT 'can handle' logical relations and parametricity.

However, https://www.cs.bham.ac.uk/~udr/papers/logical-relations-and-parametricity.pdf seems to talk about how CT 'cannot handle' parametricity, and we need a CT 2.0.

We have had a thread of discussion on this in the past, but I'm a little bit confused as to what the resolution was... Is it safe to say that we do have a satisfactory answer to the questions put forth in the paper? Or does the paper actually demonstrate a 'real' limitation that exists even today?

view this post on Zulip Mike Shulman (Sep 08 2024 at 00:03):

I think that's an exaggeration of what that paper says. For one thing, they describe in that very paper two ways in which category theory can handle parametricity, and their reflexive-graph categories are pretty much the same as one of the diagram categories I referred to.

view this post on Zulip Suraaj K S (Sep 08 2024 at 01:23):

That makes sense.
I think that I was confused by some other comments made by the paper's author here: https://cs.stackexchange.com/questions/9769/what-is-the-relation-between-functors-in-sml-and-category-theory and https://cs.stackexchange.com/questions/9818/what-is-meant-by-category-theory-doesnt-yet-know-how-to-deal-with-higher-order, the specific comment being "Category Theory doesn't know how to deal with higher-order functions. Maybe someday, it will"

view this post on Zulip Mike Shulman (Sep 08 2024 at 04:16):

Yeah, those comments are pretty nonsensical.

view this post on Zulip Suraaj K S (Sep 22 2024 at 17:09):

I was reading https://plato.stanford.edu/entries/goedel-incompleteness/.

In the article, there seems to be a particular section about 'the method of interpretations' (particularly section 1.2.2). I was wondering if what is being referred to there was simply the idea of 'constructing a model' of one theory in another?

There also seems to be a notion of interpretability (https://en.wikipedia.org/wiki/Interpretability), which seems to be another way of relating a theory to another.

I was wondering if there is some connection between the two notions?
Moreover, I'm also curious what notion the article is referring to?

view this post on Zulip Mike Shulman (Sep 22 2024 at 19:05):

I'm not sure exactly what that SEP article is referring to, but it could be either "constructing a model of one theory from a model of another" or the sort of "interpretation" referred to on that WP page. However, there is a close connection between the two: if you can construct a model of T from any model of S in a "uniform" or "elementary" way (I'm not sure how to make that precise), then that construction also gives a way to translate formulas in T to formulas in S by sending a T-formula $\phi$ to the S-formula "the T-model constructed from the present S-model satisfies $\phi$."

view this post on Zulip Evan Washington (Sep 22 2024 at 19:35):

Indeed, the semantic notion (constructing a model of S inside a model of T in a 'nice' way) and the syntactic notion (map each relation symbol of S to a formula in T preserving satisfaction) agree. The idea is just we interpret each atomic formula in one language as a formula in another, which gives a "uniform" choice of definable set in each model. So what the SEP article is talking about is the same notion as in the wiki article. See for instance 1.3 of Marker's Model Theory or 2.6 of Hodges's Model Theory. Note that an interpretation of S into T gives a functor Mod(T) to Mod(S) between the categories of models (see "Glymour & Quine on theoretical equivalence" and "Morita equivalence" by Barrett and Halvorson).

view this post on Zulip Mike Shulman (Sep 22 2024 at 20:08):

Thanks! I guess there is some way of making "in a nice way" precise?

view this post on Zulip Mike Shulman (Sep 22 2024 at 20:10):

Also, I would be inclined to talk about constructing a model of S from a model of T rather than "inside" it, since the latter sounds to me like doing it internally in such a way that T would prove the consistency of S. Do model theorists use "inside" for the former instead?

view this post on Zulip Evan Washington (Sep 22 2024 at 23:26):

My understanding (which might be wrong!) is that they tend to use "inside" in that way because the notion of interpretation really does give relative consistency. If there's an interpretation of S in T, then if T is consistent, S is consistent too. I'm not sure about the best way to say "in a nice way" purely semantically; it really means: interpret each relation symbol as a definable subset and extend in the usual way by induction. So maybe "definable interpretation" is a better description of what's going on.

view this post on Zulip Mike Shulman (Sep 22 2024 at 23:39):

Ah, yes, "definable interpretation" makes sense.

view this post on Zulip Suraaj K S (Sep 23 2024 at 14:06):

To be clear, there seem to be two ways of 'comparing' one theory T with another theory S.

If what I said was correct, then it seems to me that the second way (interpretations) should be much easier than the first. All you need to do is 'hack up/encode' the proof system of one theory in another. Moreover, I think that it would always be possible to use this method of interpretations for any theory in PA by using things like Gödel encodings... Is this correct? Or is the Wikipedia article mentioning yet another way of comparing one theory with another...

view this post on Zulip Mike Shulman (Sep 23 2024 at 16:54):

No, I don't think that's what's meant by interpretations. You seem to be basically describing using one theory as the metatheory for another. I think "interpretation" means instead that both theories are in the same metatheory, and you construct a translation from formulas of one to formulas of the other (or from models of one to models of the other, in the other direction).

view this post on Zulip John Baez (Sep 23 2024 at 18:40):

There's a standard concept of 'interpretation' in (classical, non-category theoretic) model theory, described here:

view this post on Zulip John Baez (Sep 23 2024 at 18:41):

There must be many other concepts called 'interpretation'....

view this post on Zulip Suraaj K S (Sep 29 2024 at 13:50):

Mike Shulman said:

The description in the other thread of "HOL as a theory of STLC" seems to be thinking of something like this, of obtaining the doctrine HOL by adding the rules relating to $\Omega$ to the doctrine STLC, and thinking of those rules as a "theory" in STLC. I don't think this is quite correct, though, because the quantifier rules like $\forall$ have to be parametrized over arbitrary types, and that's not something you can do inside a theory. So I would say rather that HOL is an extension of STLC obtained by adding one base type and several rules relating to it.

I was doing some reading and found this answer on the internet: https://cstheory.stackexchange.com/a/47313/73876. I was wondering if what you meant was also something similar?

view this post on Zulip Mike Shulman (Sep 29 2024 at 16:20):

Yes, that's what I was thinking "HOL as a theory of STLC" meant, and what I was responding to. But Andrej generally knows what he's talking about, so if he's willing to call it a theory then I wonder whether I'm missing something.

view this post on Zulip Suraaj K S (Oct 01 2024 at 23:52):

Mike Shulman said:

Yes, that's what I was thinking "HOL as a theory of STLC" meant, and what I was responding to. But Andrej generally knows what he's talking about, so if he's willing to call it a theory then I wonder whether I'm missing something.

Something worth mentioning might be that the answer does talk about both 'a theory of STLC' as well as 'a slight extension of STLC'...

view this post on Zulip Suraaj K S (Oct 09 2024 at 01:29):

When one talks about STLC (and how it's equivalent to the category of Cartesian Closed (Multi) Categories), you assume 0, ×, → as the type constructors.

However, in introductory PL courses, we are usually taught λ→, that is, just with function types. (I am still talking about the typed λ calculus here.)

I was wondering how / if λ→ is treated in categorical logic. Something that seems to bug me a little bit is that you need products in your categories to define exponentials, and λ→ does not have product types..

view this post on Zulip Mike Shulman (Oct 09 2024 at 04:36):

The categorical counterpart of λ→ is closed cartesian multicategories. That is, a [[cartesian multicategory]] that is closed in the multicategorical sense that it has internal hom-objects with a universal property

C(x1,,xn;y)C(x1,,xn1;[x,y]).C(x_1,\dots,x_n; y) \cong C(x_1,\dots,x_{n-1}; [x,y]).

In general, multicategories are the context in which you can define exponentials without products, corresponding to how syntax can have multiple variables in the context even if there aren't product types.
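A very low-tech way to see this universal property is to model multimorphisms $(x_1,\dots,x_n) \to y$ as ordinary multi-argument functions: currying the last argument then exhibits the bijection $C(x_1,\dots,x_n; y) \cong C(x_1,\dots,x_{n-1}; [x_n, y])$ without ever forming a product object. A minimal Python sketch (function names are mine, purely illustrative):

```python
# Multimorphisms (x1,...,xn) -> y modeled as n-argument Python functions.
# The internal hom [xn, y] is modeled as the type of 1-argument functions.

def curry_last(f):
    """One direction of the bijection: C(x1..xn; y) -> C(x1..x_{n-1}; [xn, y])."""
    return lambda *xs: (lambda xn: f(*xs, xn))

def uncurry_last(g):
    """The inverse direction of the bijection."""
    return lambda *xs: g(*xs[:-1])(xs[-1])

f = lambda a, b, c: a + b * c          # a multimorphism (A, B, C) -> D
g = curry_last(f)                      # a multimorphism (A, B) -> [C, D]

print(g(1, 2)(3))                      # -> 7
print(uncurry_last(g)(1, 2, 3))        # -> 7
```

Note that the two-argument stage `(A, B)` is a bare context, not a product type `A × B`: nothing here requires pairs to exist as values.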

view this post on Zulip Josselin Poiret (Oct 09 2024 at 10:37):

Note that we can also define exponentials without there being cartesian products, using representability instead

view this post on Zulip Nathanael Arkor (Oct 09 2024 at 12:26):

This should coincide with the approach via closed cartesian multicategories: since $\widehat{C}$ is cartesian closed, it is in particular a closed cartesian multicategory, and we can take the full sub cartesian multicategory spanned by representables. The condition for $X^Y$ to exist in the category $C$ (without assuming products with $Y$ exist in $C$) will be the condition that $X^Y$ exists in the cartesian multicategory $C$. In other words, every category is the underlying category of a canonical cartesian multicategory, and asking for exponentials in the representable sense is the same as asking for exponentials in the canonical cartesian multicategory.

view this post on Zulip Mike Shulman (Oct 09 2024 at 14:05):

I would say it's a particular case of closed cartesian multicategories. A general closed cartesian multicategory won't be "canonical" in that sense; in fact it seems rare to me that it would be except in the case when it also has finite products.

view this post on Zulip Max New (Oct 09 2024 at 14:20):

are you saying there are closed cartesian multicategories that are not equivalent to that canonical one given by Arkor using representable presheaves?

view this post on Zulip Mike Shulman (Oct 09 2024 at 15:26):

Yes. The main "naturally occurring" one that I can think of is the syntactic category of λ→, in which the objects are types and the multi-morphisms are terms in a finite context. I don't see any reason why for types A,B,CA,B,C, every natural transformation A×BCよA \times よB \to よC would be represented by a term x:A,y:Bt:Cx:A, y:B \vdash t:C.

view this post on Zulip Mike Shulman (Oct 09 2024 at 15:27):

Note that here "CよC" is the representable presheaf on the underlying ordinary category, so it only knows about terms in length-1 contexts.

view this post on Zulip Max New (Oct 09 2024 at 20:19):

what if you took presheaves on the category of contexts instead?

view this post on Zulip Mike Shulman (Oct 09 2024 at 20:29):

The category of contexts in λ→ is a CCC.

view this post on Zulip Mike Shulman (Oct 09 2024 at 20:30):

So, yes, its cartesian multicategory structure is the canonical one, as it is for any category with finite products.

view this post on Zulip Suraaj K S (Oct 12 2024 at 01:02):

In categorical logic / type theory, the judgements seem to be of equality. For example, you have the beta equality rule in STLC, etc.

On the other hand, for computation, you want rules for directed reduction. For instance, you have a beta reduction rule in STLC too. These reduction rules need to 'play well' with the equality rules, and are often directed versions of equality rules, but not always (for example eta reduction).

I'm wondering if there is some 'method to this madness'? Something like, given a type theory with equality judgements, can one automatically derive 'the right' reduction rules?

view this post on Zulip Mike Shulman (Oct 12 2024 at 09:39):

Oh, would that that were true! Finding a good way to "orient" the equality rules to make them reduction steps is one of the trickiest, if not the trickiest, parts of turning a "declarative" type theory into a computational one. And proving that such a choice "works", which goes by names like "canonicity" and "normalization", is generally one of the more difficult and substantial metatheorems to establish about a new type theory.

view this post on Zulip Mike Shulman (Oct 12 2024 at 09:41):

People have established good intuitions, heuristics, and general techniques that help a lot. But there's no automatic way to produce reduction rules from equality rules; and if you consider the space of all possible "declarative type theories", the ones that admit a computational interpretation are, I think, going to be quite sparse. It's just that those tend to also be the most interesting and useful ones, so we're more familiar with them. That's not entirely a coincidence, but it still feels kind of magical to me on some days.

view this post on Zulip Suraaj K S (Oct 12 2024 at 15:17):

Mike Shulman said:

Oh, would that that were true! Finding a good way to "orient" the equality rules to make them reduction steps is one of the trickiest, if not the trickiest, parts of turning a "declarative" type theory into a computational one. And proving that such a choice "works", which goes by names like "canonicity" and "normalization", is generally one of the more difficult and substantial metatheorems to establish about a new type theory.

I see. That makes sense..
I'm curious whether 'canonicity' and 'normalization' are synonyms? From what I understand:

view this post on Zulip Mike Shulman (Oct 12 2024 at 15:59):

Your property that "If m → m', then m=m'" is what I would call "soundness" of the reduction system: it doesn't violate the judgmental equality. That's sort of a "bare minimum" requirement of any notion of reduction to be sensible.

view this post on Zulip Mike Shulman (Oct 12 2024 at 16:02):

Normalization is, roughly, about the converse. It says that there is an algorithm that computes from any term (in any context) a "normal form" (which we often think of as another term, although it doesn't have to be) with the property that two terms are judgmentally equal if and only if their normal forms are exactly equal. This allows you to algorithmically test judgmental equality by reducing both terms to normal form and then checking them syntactically for equality. The traditional sort of normalization algorithm consists in applying primitive "reduction steps" one by one until the algorithm terminates (termination being part of what one has to prove), but more modern treatments tend to use a fancier sort of algorithm called "normalization by evaluation".
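To make "normalization by evaluation" slightly less mysterious, here is a toy implementation for the untyped lambda calculus in Python (my own sketch, not drawn from the conversation): terms are evaluated into a semantic domain where lambdas become actual Python closures and stuck variables become "neutral" terms, and a `quote` function reads semantic values back into normal forms. No reduction rule is ever applied step by step:

```python
# Terms: ('var', name), ('lam', name, body), ('app', fun, arg).
# Semantic values: Python closures for functions, term tuples for neutrals.

def evaluate(t, env):
    tag = t[0]
    if tag == 'var':
        return env[t[1]]
    if tag == 'lam':
        # A lambda evaluates to an actual Python closure.
        return lambda v, t=t, env=env: evaluate(t[2], {**env, t[1]: v})
    f, a = evaluate(t[1], env), evaluate(t[2], env)
    # Apply if the head is a closure; otherwise build a neutral application.
    return f(a) if callable(f) else ('app', f, a)

def quote(v, n=0):
    """Read a semantic value back into a term in normal form."""
    if callable(v):
        x = f"x{n}"  # fresh variable for going under the binder
        return ('lam', x, quote(v(('var', x)), n + 1))
    if v[0] == 'app':
        return ('app', quote(v[1], n), quote(v[2], n))
    return v  # a variable

def normalize(t):
    return quote(evaluate(t, {}))

# S K K normalizes to the identity, with no explicit beta-reduction steps.
S = ('lam', 'x', ('lam', 'y', ('lam', 'z',
        ('app', ('app', ('var', 'x'), ('var', 'z')),
                ('app', ('var', 'y'), ('var', 'z'))))))
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
print(normalize(('app', ('app', S, K), K)))  # -> ('lam', 'x0', ('var', 'x0'))
```

Of course, this untyped toy says nothing about termination or about judgmental equality in a real type theory; it only illustrates the evaluate-then-quote shape of the algorithm, as opposed to repeatedly rewriting redexes.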

view this post on Zulip Mike Shulman (Oct 12 2024 at 16:10):

Canonicity is technically a statement only about the judgmental equality, and is stated for a particular concrete type like the natural numbers: it says that for any term t in the empty context belonging to the natural numbers type, there is a term c in canonical form -- which in the case of the natural numbers means it is a "numeral" of the form S(S(S(S...(S(0))))) -- and a judgmental equality t=c.

However, usually we ask for an algorithm that computes c from t, and this algorithm is usually very similar to the normalization algorithm. Specifically, if every normal form belonging to the natural numbers in the empty context is a numeral, then canonicity follows from normalization. (However, you can do things like add arbitrary axioms, which preserve normalization since they are equivalent to prepending variables to the context, but do not generally preserve canonicity.) On the other hand, while canonicity obviously does not imply normalization, and is often significantly easier to prove, in practice it's quite hard to think of type theories that satisfy canonicity but not normalization; usually canonicity is proven first, since it's easier and points the way towards how to prove normalization.

view this post on Zulip Suraaj K S (Oct 12 2024 at 16:51):

That makes sense!

Does one think of things like 'observational'/'contextual' equivalence too, when doing categorical logic? Or is that 'just' a programming language thing?

view this post on Zulip Suraaj K S (Oct 12 2024 at 16:54):

Also, does one consider the description of 'what normal and canonical forms are', a part of the core type theory? or are they 'add-ons' which are useful for proving things about the core type theory?

view this post on Zulip Mike Shulman (Oct 12 2024 at 17:28):

Suraaj K S said:

Does one think of things like 'observational'/'contextual' equivalence too, when doing categorical logic? Or is that 'just' a programming language thing?

For the most part, "algorithmic" equality isn't relevant to categorical logic, except insofar as it coincides with judgmental equality. Some people have tried to formulate categorical structures that "see" it, such as 2-categories whose 2-cells represent reductions, but this is generally in the vein of "how can we come up with a categorical description of the algorithmic behavior" rather than the reduction steps meaning anything directly in naturally-occurring categories.

Of course, when normalization holds, then algorithmic equality does coincide with judgmental equality, and this is very useful for coding proof assistants since it gives a way to test equality algorithmically. In a dependent type theory, testing equality is essential to having a decidable typechecking algorithm. So for practical use in a proof assistant, this sort of thing is important, even if it isn't needed for the mathematical theory of categorical semantics.

view this post on Zulip Mike Shulman (Oct 12 2024 at 17:29):

Suraaj K S said:

Also, does one consider the description of 'what normal and canonical forms are', a part of the core type theory? or are they 'add-ons' which are useful for proving things about the core type theory?

I think most people would consider them add-ons, but I'm not positive. They certainly are closely associated with the theory.

view this post on Zulip Suraaj K S (Oct 12 2024 at 19:18):

When you say algorithmic equality, do you just mean reduction rules made into an equivalence relation?

When I was talking about 'observational' equivalence between t1 and t2, the notion is more about whether a context can distinguish between t1 and t2. Here, a context is a program with a 'hole' in it.

Let's take the untyped lambda calculus for example. We can say that t1 is contextually equivalent to t2 iff for all contexts c, c[t1] terminates iff c[t2] terminates.

view this post on Zulip Mike Shulman (Oct 12 2024 at 22:00):

Yes, I'm aware that people study "contextual" equalities like that, but that's not what I was talking about. I meant the equality given by a normalization algorithm (which, again, in a modern context might not be simply repeated applications of reduction rules). I thought that's what you meant too since you said up above

You could take the previous idea further, by using reduction to define a new notion of equality called contextual/observational equality.

view this post on Zulip Suraaj K S (Oct 13 2024 at 00:02):

Mike Shulman said:

I meant the equality given by a normalization algorithm (which, again, in a modern context might not be simply repeated applications of reduction rules)

Oh wow, I did not know that! I always thought that the reduction rules were the algorithm..

view this post on Zulip James Deikun (Oct 13 2024 at 00:16):

Reduction rules give a "small-step operational semantics", from which you can also extract a "big-step operational semantics". However, you can get a big-step operational semantics by other means. "Normalization by evaluation" means extracting a big-step operational semantics from a denotational semantics!
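To make the contrast concrete, here is a toy sketch (my own illustration, not from the discussion) of normalization by evaluation for the untyped lambda calculus: rather than repeatedly applying reduction rules, terms are evaluated into host-language closures and then "quoted" back into syntactic normal forms. All names and the term encoding are illustrative choices.

```python
# Toy normalization-by-evaluation for the untyped lambda calculus
# (illustrative sketch; terms use de Bruijn indices).

def ev(term, env):
    tag = term[0]
    if tag == 'var':                      # ('var', i): de Bruijn index into env
        return env[term[1]]
    if tag == 'lam':                      # ('lam', body): evaluate to a closure
        return ('clo', lambda v: ev(term[1], [v] + env))
    if tag == 'app':                      # ('app', f, a): apply, or build a neutral term
        f, a = ev(term[1], env), ev(term[2], env)
        return f[1](a) if f[0] == 'clo' else ('napp', f, a)

def quote(val, depth):
    if val[0] == 'clo':                   # reify a closure by applying it to a fresh variable
        fresh = ('nvar', depth)
        return ('lam', quote(val[1](fresh), depth + 1))
    if val[0] == 'nvar':                  # convert a de Bruijn level back to an index
        return ('var', depth - 1 - val[1])
    if val[0] == 'napp':                  # neutral application: quote both sides
        return ('app', quote(val[1], depth), quote(val[2], depth))

def normalize(term):
    return quote(ev(term, []), 0)

ident = ('lam', ('var', 0))
# (\x. x) (\y. y) normalizes to \y. y
assert normalize(('app', ident, ident)) == ident
# \x. (\y. y) x normalizes to \x. x (reduction happens under the binder)
assert normalize(('lam', ('app', ident, ('var', 0)))) == ident
```

Note how no reduction rule is ever pattern-matched against a redex: the "algorithm" lives entirely in `ev` (a denotational semantics into Python closures) plus `quote`.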

view this post on Zulip Mike Shulman (Oct 13 2024 at 00:51):

James Deikun said:

"Normalization by evaluation" means extracting a big-step operational semantics from a denotational semantics!

While true, I feel this is a bit misleading, because it makes it sound like magic, whereas in fact I think all the work of constructing a normalization algorithm is just moved into constructing a denotational semantics. (-:

view this post on Zulip Mike Shulman (Oct 13 2024 at 00:52):

Although maybe that would be obvious to a computer scientist.

view this post on Zulip Suraaj K S (Oct 13 2024 at 01:25):

Here's another question that I was wondering about:
In a few introduction to CT courses / books (for example: Awodey, https://www.cs.uoregon.edu/research/summerschool/summer23/_lectures/Introduction_to_Category_Theory_notes.pdf, etc.), a connection between monads and algebras is introduced. If I understand correctly, an 'algebraic theory' (which is traditionally a signature and a set of axioms) 'is just a monad'... For instance, the last section in the linked OPLSS lecture notes talks about this.

What I'm curious about is whether there is a link to the style of categorical logic as we were discussing in this thread (an adjunction between a category of theories and a category of structured categories)?

view this post on Zulip Mike Shulman (Oct 13 2024 at 01:27):

Are you thinking specifically of relating these notions of "theory"?

view this post on Zulip Suraaj K S (Oct 13 2024 at 01:34):

Mike Shulman said:

Are you thinking specifically of relating these notions of "theory"?

Yes. I think that would be a good start...

view this post on Zulip Suraaj K S (Oct 13 2024 at 01:36):

I was just wondering if the monads as algebraic theories idea was a 'neat' idea, or whether it was 'just' an instance of some deeper concept in categorical logic

view this post on Zulip Suraaj K S (Oct 13 2024 at 01:42):

In categorical logic, a model is a functor in the category of structured categories.
On the other hand, an algebra over a monad T is an object A, along with a morphism a : TA → A, subject to a few conditions
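A concrete toy instance of the second notion (my own sketch; the names `unit`, `mult`, `fmap`, and `alg` are illustrative, not standard API): the list monad T on Set, with summation of integers as an Eilenberg-Moore algebra a : TA → A.

```python
# The list monad T on Set, with an algebra a : T A -> A given by summation
# (so A = integers, and the algebra structure is the monoid of addition).

def unit(x):            # eta : A -> T A
    return [x]

def mult(tts):          # mu : T T A -> T A  (flatten one layer of nesting)
    return [x for ts in tts for x in ts]

def fmap(f, ts):        # action of T on morphisms
    return [f(x) for x in ts]

def alg(ts):            # a : T A -> A, here: sum a list of integers
    return sum(ts)

# The "few conditions" are the algebra laws:
# 1) a . eta = id
assert alg(unit(7)) == 7
# 2) a . mu = a . T a
tts = [[1, 2], [3], []]
assert alg(mult(tts)) == alg(fmap(alg, tts))
```

Algebras for this monad are exactly monoids, which is one small instance of the "algebraic theories as monads" idea.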

view this post on Zulip Suraaj K S (Oct 13 2024 at 17:33):

Here's another monad related question:

In categorical logic, we have an adjunction between a category of theories and a category of structured categories.

We also discussed that doctrines could be 'programming languages' too, and 'models' give a denotational semantics of programming languages. For instance, I think that https://www.cambridge.org/core/books/categories-for-types/258841251C62FED1DACD20884E59E61C talks about such categorical semantics for programming languages.

What I was wondering about is the following: Moggi's seminal work (https://www.cs.cmu.edu/~crary/819-f09/Moggi91.pdf) gives a 'monadic' semantics for programming languages with effects. I was wondering if this would fit cleanly in the framework of categorical semantics as we have been discussing so far?

For instance, I'm curious what the 'category of structured categories' would be in this case?

view this post on Zulip Mike Shulman (Oct 13 2024 at 17:51):

To the second question, if the doctrine involves the monad as an explicit operator, then the "structured categories" would be categories equipped with a monad.

view this post on Zulip Mike Shulman (Oct 13 2024 at 17:51):

Otherwise, it might be something like a [[Freyd category]].

view this post on Zulip Mike Shulman (Oct 13 2024 at 17:53):

To the first question, there's some overlap in the notions of "theory" but they're not identical. The theories in the doctrine of finite products are multi-sorted, finitary algebraic theories, while monads on Set are single-sorted, infinitary algebraic theories. The intersection consists of single-sorted finitary algebraic theories.

view this post on Zulip Mike Shulman (Oct 13 2024 at 18:32):

A multi-sorted theory does also induce a monad, but rather than on Set, this monad is on the category $\mathrm{Set}^S$, where $S$ is the set of sorts (i.e. base types).

view this post on Zulip Mike Shulman (Oct 13 2024 at 18:33):

I don't know of a direct categorical connection between the category of finite-product theories and the collection of monads on powers of Set, although I suppose there probably is one.

view this post on Zulip Mike Shulman (Oct 13 2024 at 18:33):

And for other doctrines, there isn't much of a connection to monads in general.

view this post on Zulip Suraaj K S (Oct 13 2024 at 19:14):

Mike Shulman said:

To the first question, there's some overlap in the notions of "theory" but they're not identical. The theories in the doctrine of finite products are multi-sorted, finitary algebraic theories, while monads on Set are single-sorted, infinitary algebraic theories. The intersection consists of single-sorted finitary algebraic theories.

I see. If I understand correctly, these 'theories' are objects of the 'category of theories of the doctrine', right? And you'd have an adjunction with some other 'category of structured categories' in both cases?

Moreover, could we call 'monads on Set' a doctrine? I guess that the sentence says that the theories of this doctrine are single-sorted, infinitary algebraic theories. What'd be the 'category of structured categories' in this case? Furthermore, could we have models in categories other than Sets?

I'm sorry for the barrage of questions :(

view this post on Zulip Mike Shulman (Oct 13 2024 at 20:55):

The multi-sorted finitary algebraic theories are the objects of the category of theories of the doctrine of finite products, yes. But single-sorted theories don't really fit well into the doctrinal framework. You can, of course, consider the full subcategory of the category of multi-sorted theories determined by those that have only one sort, but that category doesn't have an adjunction to a category of structured categories, since the "internal language" theory of a structured category is multi-sorted (its sorts are the objects of the category).

view this post on Zulip Mike Shulman (Oct 13 2024 at 20:56):

In general a doctrine specifies the kinds of operations that are allowed on types and terms, and a theory in that doctrine gives particular generating types, terms, and axioms. So single-sortedness is really a property of a theory rather than an aspect of a doctrine.

view this post on Zulip Mike Shulman (Oct 13 2024 at 21:12):

Also I wasn't very precise in saying "infinitary". In general a monad can have "infinitary operations" but I don't think there is an explicit notion of "infinitary theory" that's equivalent to a monad on Set. There is a notion of [[infinitary Lawvere theory]] but it's not well-understood as far as I know, in particular it isn't known whether every such gives rise to a monad, or whether every monad arises from such a thing.

What you can say is that for any regular cardinal $\kappa$, there's a notion of $\kappa$-ary algebraic theory, and the single-sorted ones of those are equivalent to $\kappa$-accessible monads on Set. But there are also inaccessible monads on Set, like the one whose algebras are compact Hausdorff spaces.

view this post on Zulip Nathanael Arkor (Oct 13 2024 at 21:41):

Mike Shulman said:

I don't think there is an explicit notion of "infinitary theory" that's equivalent to a monad on Set. There is a notion of [[infinitary Lawvere theory]] but it's not well-understood as far as I know, in particular it isn't known whether every such gives rise to a monad, or whether every monad arises from such a thing.

Perhaps I'm misunderstanding something, but I would have thought an "infinitary Lawvere theory" is precisely what Linton calls a "varietal theory", i.e. an identity-on-objects product-preserving functor from $\mathrm{Set}^{\mathrm{op}}$, and these are trivially equivalent to monads on $\mathrm{Set}$.

view this post on Zulip Nathanael Arkor (Oct 13 2024 at 21:45):

(Of course, characterising the presentations that correspond to varietal theories is not at all straightforward.)

view this post on Zulip Mike Shulman (Oct 14 2024 at 01:10):

Ok, that's what I thought 14 years ago, but then Toby Bartels disagreed and I never really paid attention to the discussion that followed, and the resulting nLab page is ambiguous. If you're confident about that, maybe you could edit the nLab page?

view this post on Zulip Mike Shulman (Oct 14 2024 at 01:10):

But in any case, you're right that it doesn't really help with the connection to doctrines, since the "local smallness" condition isn't syntactic.

view this post on Zulip Nathan Corbyn (Oct 14 2024 at 06:56):

Nathanael Arkor said:

Perhaps I'm misunderstanding something, but I would have thought an "infinitary Lawvere theory" is precisely what Linton calls a "varietal theory", i.e. an identity-on-objects product-preserving functor from $\mathrm{Set}^{\mathrm{op}}$, and these are trivially equivalent to monads on $\mathrm{Set}$.

I’ve not read the discussion Mike is alluding to but my understanding was: this would only cover the single-sorted case (maybe that’s fine) and also (I think) requires choice to present each set as isomorphic to a small coproduct of a fixed singleton set (edit: so that coproducts of these presentations are strictly associative). I’d thought the point was that it is not enough to merely have small products but we also need to _choose_ them for the notion of model. If this is accurate, the usual notion of (single-sorted) Lawvere theory would more closely correspond to a product preserving identity-on-objects functor from the opposite of the category of cardinal numbers and functions between them. The multi-sorted case would then follow by taking powers.

view this post on Zulip Mike Shulman (Oct 14 2024 at 07:01):

Every set is canonically and constructively isomorphic to a small coproduct of copies of a fixed singleton: $A \cong \coprod_{a\in A} \{\ast\}$. That doesn't require any choice.

view this post on Zulip Nathan Corbyn (Oct 14 2024 at 07:07):

Sorry, I meant in such a way that e.g., coproducts of these presentations are strictly associative

view this post on Zulip Mike Shulman (Oct 14 2024 at 07:15):

I don't understand, why would you need that?

view this post on Zulip Nathan Corbyn (Oct 14 2024 at 07:18):

I’d thought it would be essentially the same reason that a Lawvere theory is defined as an IOO functor out of the _skeleton_ of finite sets (op’d): you have a single representative of each arity and structure is preserved strictly.

view this post on Zulip Nathan Corbyn (Oct 14 2024 at 07:21):

(Not that I necessarily think these are good reasons, I was just trying to make sense of what was on the nLab page in terms of my understanding of the finitary case)

view this post on Zulip Mike Shulman (Oct 14 2024 at 07:39):

Ah. I don't think that's necessary in the finitary case either, it just makes some people happier to have only one object of each arity rather than a bunch of isomorphic copies. I think the fact that you'd need AC to get it in the infinitary case is a good reason to drop that requirement, as the nLab definition does.

view this post on Zulip Nathan Corbyn (Oct 14 2024 at 07:40):

That makes sense

view this post on Zulip Todd Trimble (Oct 14 2024 at 15:26):

Perhaps I'm missing part of the discussion, but wouldn't complete Boolean algebras be considered an infinitary algebraic theory? Those don't come from a monad; in fact the forgetful functor to sets doesn't have a left adjoint (free complete Boolean algebras do not generally exist; they exist only on finite sets).

view this post on Zulip Nathanael Arkor (Oct 14 2024 at 15:46):

I personally would refer to that as a "large algebraic theory", but reserve "infinitary algebraic theory" for the notion corresponding to arbitrary monads on Set. But different authors have made different terminological choices.

view this post on Zulip Mike Shulman (Oct 14 2024 at 17:00):

There are four notions (in the one-sorted case for simplicity):

  1. A large collection of function symbols, of arbitrary arities, and axioms.
  2. A (large) category with arbitrary products generated under such products by a single object.
  3. As in (2), but the category must be locally small.
  4. A monad on Set.

Obviously any (1) generates a (2), and we can think of (2) as the "extensional essence" of a (1). And every (4) generates a (3), namely the opposite of its Kleisli category, while there is an argument that every (3) induces a (4) that it would be nice to have spelled out in more detail (or perhaps it's in some reference, maybe the Linton mentioned above?). Complete Boolean algebras are a (1)+(2) but not a (3)+(4).

So what do we call what? I can see arguments for either (2) or (3) being an "infinitary Lawvere theory"; (2) is the more straightforward modification of the usual notion, but lacks the correspondence to (4) that's an essential part of the theory of ordinary Lawvere theories. I do see a pretty strong argument for an "infinitary algebraic theory" being a (1), since "algebraic theory" suggests to me something that can be presented syntactically, and there's no clear syntactic counterpart of the local smallness condition in (3). But, as Nathanael said, different people may make different choices.

view this post on Zulip Nathanael Arkor (Oct 14 2024 at 17:28):

or perhaps it's in some reference, maybe the Linton mentioned above?

(Yes, that's right.)

view this post on Zulip Suraaj K S (Oct 15 2024 at 02:36):

I was reading the appendix of https://mikeshulman.github.io/catlog/catlog.pdf.

There is the following sentence: "Similarly, an equality judgment like x : A |- M ≡ N : B is shorthand for (x.M) ≡ (x.N) : (A |- B)".

If the notes are taking the view that terms are 'just' annotations, I'm a little confused as to what exactly the real judgement is? Are they the sorts (which are defined to be judgements) of the form Σ^≡ (as shown in A.3)? Or are they 'simpler' sorts? I think I'm a little bit confused when 'term annotations' are introduced...

I've also never seen deductive systems presented this way. I was wondering if this is standard? or something novel?

view this post on Zulip Mike Shulman (Oct 15 2024 at 07:12):

I wrote in A.4 that a "judgment" is a sort in one of the signatures in a dependency sequence, which is therefore defined in some unspecified way from the initial algebras of the previous ones. In the case of something like $\Sigma^\equiv$, so that we're talking about "equality judgments", these sorts are pairs of "derivations" of some other judgment in a previous signature, so that they could potentially be equated. For example, in the case of categories, we start with a signature whose sorts are written $A\vdash B$, and so the sorts of $\Sigma^\equiv$ are triples $(f,g,(A\vdash B))$ where $f$ and $g$ are derivations of $A\vdash B$. The perspective I'm taking in that appendix is that $x.M$ and $x.N$ are just notations for two such derivations $f$ and $g$.

view this post on Zulip Mike Shulman (Oct 15 2024 at 07:13):

I wouldn't say this presentation is "standard", as I don't think I've seen it written down like that before. But I don't think it's all that novel either; what I wrote was an attempt to describe what I had learned (or thought I learned) about how some type theorists think about what's going on in many cases, even if they aren't that precise about it.

view this post on Zulip Suraaj K S (Oct 15 2024 at 17:27):

Mike Shulman said:

sorts of Σ≡ are triples (f,g,(A⊢B))

I think that to match the notes exactly, they need to be ((A⊢B),f,g)?
However, I think I understand it a little better now

view this post on Zulip Suraaj K S (Oct 15 2024 at 20:52):

Also, I was curious if there were plans to extend those notes to include dependent type theories and polymorphic type theories?

view this post on Zulip Mike Shulman (Oct 15 2024 at 21:30):

It depends. Does a "plan" have to be time-bound? (-:O

view this post on Zulip Suraaj K S (Oct 15 2024 at 22:35):

I've a small question about 'term systems' as defined by those notes (Definition A.5.2):

It says that a "term system for Σ is a Σ-algebra...". However, there is an example in the previous page where the 'term system' for a cut-ful type theory of categories (where the sorts are pairs of objects in the category) is a single sorted free algebra of untyped terms. I'm wondering if I'm missing something here?

view this post on Zulip Mike Shulman (Oct 15 2024 at 23:21):

The example is also followed by a parenthetical that says

For technical reasons, rather than a single set of terms as here, in the general case we will allow each judgment of our intended theory to be assigned a different set of "potential terms"; see below.

view this post on Zulip Suraaj K S (Oct 15 2024 at 23:50):

Mike Shulman said:

The example is also followed by a parenthetical that says

For technical reasons, rather than a single set of terms as here, in the general case we will allow each judgment of our intended theory to be assigned a different set of "potential terms"; see below.

I think I was just wondering as to whether the single sorted term algebra example 'could be adapted' to the multisorted case.

view this post on Zulip Mike Shulman (Oct 15 2024 at 23:57):

Well, I didn't develop any of the examples very precisely, but I think what I had in mind was that (in the absence of binding) to define a term system, you would first define a single-sorted signature, usually with one operation corresponding to each (typed) operation of $\Sigma$, consider the initial algebra for that signature, and then use that same set as the (possibly ill-typed) terms at all judgments.
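One illustrative reading of this recipe (my own sketch, not from the notes; constructor names are made up) for the cut-ful theory of categories: raw terms form the initial algebra of a single-sorted signature with one operation per typed operation, and that one set of raw terms is reused at every judgment A ⊢ B.

```python
# Hypothetical sketch: raw, possibly ill-typed terms as the initial algebra
# of a single-sorted signature, reused at every typed judgment A |- B.

def raw_id():
    # untyped counterpart of id_A : (A |- A), the same raw term for every A
    return ('id',)

def raw_comp(f, g):
    # untyped counterpart of comp : (A |- B), (B |- C) -> (A |- C);
    # note nothing stops us from forming ill-typed composites here
    return ('comp', f, g)

# The Sigma-algebra structure on the (constant) family of term sets just
# applies the matching raw constructor, forgetting the types.  E.g. the
# derivation comp(id_A, id_A) of (A |- A) is assigned the raw term:
term = raw_comp(raw_id(), raw_id())
assert term == ('comp', ('id',), ('id',))
```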

view this post on Zulip Mike Shulman (Oct 15 2024 at 23:58):

In the presence of binding, I would probably take the possibly-ill-typed terms to nevertheless be well-scoped, so that for a judgment with a context of length n the terms would be those with n free variables.

view this post on Zulip Mike Shulman (Oct 15 2024 at 23:59):

This is mentioned later in section A.6:

in a simple type theory (chapter 2) the terms for $\Gamma\vdash B$ should have $n$ variable bindings outermost, where $n$ is the length of $\Gamma$ (this is why in §A.5 we allowed different judgments to have different sets of potential terms).

view this post on Zulip Suraaj K S (Oct 16 2024 at 00:34):

Mike Shulman said:

Well, I didn't develop any of the examples very precisely, but I think what I had in mind was that (in the absence of binding) to define a term system, you would first define a single-sorted signature, usually with one operation corresponding to each (typed) operation of $\Sigma$, consider the initial algebra for that signature, and then use that same set as the (possibly ill-typed) terms at all judgments.

And is the idea that you then give a Σ-algebra structure on this (family of sets which are really the same set)? The notes say that a "term system for Σ is a Σ-algebra"...

view this post on Zulip Suraaj K S (Oct 16 2024 at 00:36):

I think that the appendix is quite interesting because I really haven't seen precise definitions of what a 'deductive system' is. And we seem to use them all over the place: logics, programming languages (type judgements, operational semantics, and even syntax!)

view this post on Zulip Mike Shulman (Oct 16 2024 at 00:51):

Yes, that's right, you give a (non-free) $\Sigma$-algebra structure on that family of sets, using the way you chose the single-sorted signature for which the set of terms is the free algebra.

view this post on Zulip Suraaj K S (Oct 16 2024 at 14:22):

Often, these 'inference rule systems' also have inference rules with 'side conditions' which can be semantic. Would it be safe to say that the deductive systems in the notes don't handle those systems?

view this post on Zulip Mike Shulman (Oct 16 2024 at 15:05):

What sort of conditions are you thinking of?

view this post on Zulip Suraaj K S (Oct 16 2024 at 15:28):

Here's an example: https://www.cs.cornell.edu/courses/cs6110/2023sp/lectures/lec08.pdf. In section 3.1, you have the side condition -- (where n2 is the sum of n0 and n1)

view this post on Zulip Suraaj K S (Oct 16 2024 at 15:30):

Moreover, in https://www.cs.cornell.edu/courses/cs6110/2023sp/lectures/lec06.pdf, section 5, formula (4) is one definition of what an inference rule is..

view this post on Zulip Mike Shulman (Oct 16 2024 at 15:57):

Suraaj K S said:

Here's an example: https://www.cs.cornell.edu/courses/cs6110/2023sp/lectures/lec08.pdf. In section 3.1, you have the side condition -- (where n2 is the sum of n0 and n1)

I would fit that in my framework by saying that there is actually an infinite family of judgments $\langle a,\sigma\rangle \Downarrow_a n$, one for each external integer $n$, and what looks like one rule with a side condition is shorthand for infinitely many rules, one for each triple of external integers $(n_0,n_1,n_2)$ with $n_0+n_1=n_2$.
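A small illustration of this point (mine, not from the lecture notes; the term encoding is made up): in an implementation of the big-step semantics, the "side condition" n0 + n1 = n2 simply becomes a computation that selects the unique applicable rule instance from the infinite family.

```python
# Big-step evaluation of simple arithmetic expressions over a store.
# The rule "from <e0,s> => n0 and <e1,s> => n1 conclude <e0+e1,s> => n2,
# where n2 = n0 + n1" is a schema indexed by all triples with n0+n1=n2;
# computing the sum picks out the one instance that applies.

def evaluate(expr, store):
    tag = expr[0]
    if tag == 'lit':                 # ('lit', n): a numeral evaluates to itself
        return expr[1]
    if tag == 'var':                 # ('var', x): look x up in the store
        return store[expr[1]]
    if tag == 'add':                 # ('add', e0, e1): the "side condition" as code
        return evaluate(expr[1], store) + evaluate(expr[2], store)

# <2 + a, {a -> 3}> evaluates to 5
assert evaluate(('add', ('lit', 2), ('var', 'a')), {'a': 3}) == 5
```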

view this post on Zulip Mike Shulman (Oct 16 2024 at 16:02):

There are certainly different formal definitions of "deductive system" that one might give, and the one you cited is indeed different from mine. That one appears to be focused on inductive definitions of subsets of some previously fixed set, whereas mine is focused more on inductively defining new sets. With the latter end in mind, I needed to insist that the entire structure be algebraic, which means there are no "side conditions" in a formal sense, although as in this case many "side conditions" can be reexpressed as talking about parametrized families of judgments and rules.

view this post on Zulip Suraaj K S (Oct 16 2024 at 17:22):

Another thing that I'm noting (which might be incorrect) is that while the definition of deductive systems in your notes seems to restrict the kinds of systems you can build, you can always go overboard and define whatever you want. What I'm thinking might be better expressed using an example like the following:

Let your Σ1 be a signature with one sort and a single nullary operator unit. WΣ1 has only one element.
I think you could define Σ2 with the following sorts - each sort is a pair <r, unit> where r is a real number and unit is the element of WΣ1. Now, your set of sorts is isomorphic to the reals.

Is there anything that prevents me from defining such a 'weird' deductive system?

view this post on Zulip Mike Shulman (Oct 16 2024 at 23:31):

Nope! That's perfectly fine.

view this post on Zulip John Baez (Oct 16 2024 at 23:54):

I remember feeling a sense of shock when I first realized real vector spaces are algebras of a Lawvere theory. That was a case of having uncountably many operations rather than sorts.

I remember feeling a similar sense of shock when I read about infinitary logic, which allows certain formulas of infinite length, e.g. conjunctions and disjunctions of up to $\kappa$ many formulas.

My feeling was something like "hey, syntax isn't supposed to be so big!"

view this post on Zulip Mike Shulman (Oct 17 2024 at 00:12):

Of course, if one actually wants to implement a type theory in a proof assistant or as a programming language, then the syntax has to be finitely generated.

view this post on Zulip Suraaj K S (Oct 17 2024 at 01:47):

Another way that the notes seem to differ from typical descriptions of type theories is as follows (I think)...

I think that while one traditionally thinks of judgements of the form λx.x : Nat → Nat, the notes above would treat the judgement as just Nat → Nat, with λx.x as a 'term'. I guess you could also have another deductive system (as defined in the notes) where the judgements really are 'traditional judgements' too?? Going a little further, maybe you even have some freedom / cleverness in choosing what part of a 'traditional judgement' you consider a 'term', with the rest being the judgement..

view this post on Zulip Mike Shulman (Oct 17 2024 at 02:21):

I don't know what's "traditional", but it's true that I made a choice to regard terms as notations for derivations of judgments as much as possible, rather than regarding typing judgments as a property of terms. My reason for that is that it makes it easier to define categorical semantics by induction over derivations. However, as you say, the formal setup of my Appendix A works just as well for a deductive system with typing judgments as a property of terms.

view this post on Zulip Suraaj K S (Oct 17 2024 at 14:19):

I see. I also think that, in your framework, if you think of things like λx.x : Nat → Nat as judgements (which are sorts), then you'd end up having at most one operator for each sort (where the sort appears as the 'target' of the operator), which is ?trivial? and maybe not that useful..

view this post on Zulip Todd Trimble (Oct 17 2024 at 19:18):

John Baez said:

I remember feeling a sense of shock when I first realized real vector spaces are algebras of a Lawvere theory. That was a case of having uncountably many operations rather than sorts.

I remember feeling a similar sense of shock when I read about infinitary logic, which allows certain formulas of infinite length, e.g. conjunctions and disjunctions of up to $\kappa$ many formulas.

My feeling was something like "hey, syntax isn't supposed to be so big!"

Alternatively, one could shift over to a notion of multi-sorted Lawvere theory $\mathcal{T}$, involving a sort symbol $k$ for the ground ring (usually a field) and a sort symbol $V$ for the module (or vector space), and tame that syntax back under control. This is not just a stylistic choice. For example, this is the right choice to make if you want to interpret a topological vector space as a product-preserving functor from $\mathcal{T}$ to topological spaces: you want the interpretation to give a continuous map $\mathbb{R} \times V \to V$, a stronger requirement than just having the uncountably many scalar operations $r \cdot - : V \to V$ be continuous.

view this post on Zulip John Baez (Oct 17 2024 at 20:29):

That's a good point. In the end I actually enjoyed the shock of thinking about theories that are too big to write down. There's some logician who talked about the "sacred" versus "profane" attitudes toward logic, where in the sacred approach you are struggling to come as close as possible to absolute certainty, and thus want to found everything on rules for manipulating finite strings drawn from a finite alphabet, while in the "profane" approach you treat it more like other mathematics, so you're more willing to countenance the use of infinite structures to study logic.

view this post on Zulip Madeleine Birchfield (Oct 17 2024 at 20:53):

First time I'd had to deal with infinitary logic was when Steve Vickers and Ming Ng wrote a series of papers on real analysis and Ostrowski's theorem, informally using the internal logic of a Grothendieck topos.

I had a hard time wrapping my head around it because the infinitary nature of the internal logic meant that it was nigh impossible to come up with a finite set of rules to formalise said logic, and it was hard to accept that some things cannot be formalised using inference rules from scratch.

I guess this can be formalised by adding an auxiliary level and judgments representing the base topos, like the actual definition of a Grothendieck topos and similar to the multi-sorted Lawvere theories mentioned above, but at the time I was trying to avoid such multi-level theories.

view this post on Zulip Mike Shulman (Oct 17 2024 at 22:52):

Todd Trimble said:

For example, this is the right choice to make if you want to interpret a topological vector space as a product-preserving functor from $\mathcal{T}$ to topological spaces: you want the interpretation to give a continuous map $\mathbb{R} \times V \to V$, a stronger requirement than just having the uncountably many scalar operations $r \cdot - : V \to V$ be continuous.

Of course, that choice has the consequence that its algebras are topological modules over any topological ring, rather than specifically over the real numbers.

A way to get specifically topological real vector spaces is to use John's original theory but regard it as topologically enriched, so that the product-preserving functors to spaces have to also be topologically enriched.

view this post on Zulip Mike Shulman (Oct 17 2024 at 22:52):

Suraaj K S said:

I see. I also think that, in your framework, if you think of things like λx.x : Nat → Nat as judgements (which are sorts), then you'd end up having at most one operator for each sort (where the sort appears as the 'target' of the operator), which is ?trivial? and maybe not that useful..

Yes, that's true, but I don't think that necessarily makes it trivial or unuseful.

view this post on Zulip Todd Trimble (Oct 17 2024 at 23:55):

John Baez said:

That's a good point. In the end I actually enjoyed the shock of thinking about theories that are too big to write down. There's some logician who talked about the "sacred" versus "profane" attitudes toward logic, where in the sacred approach you are struggling to come as close as possible to absolute certainty, and thus want to found everything on rules for manipulating finite strings drawn from a finite alphabet, while in the "profane" approach you treat it more like other mathematics, so you're more willing to countenance the use of infinite structures to study logic.

I think it might have been @David Corfield at the n-Category Cafe who spoke of "sacred" (which I just misspelled as "scared", ha!) versus "profane". I like it. Model theorists must be very profane. There's a standard trick where you start with a model of a theory and then create a new theory with a constant term for every element of that model, and that feels like a profane tactic in this sense.

view this post on Zulip Todd Trimble (Oct 18 2024 at 00:20):

Mike wrote

A way to get specifically topological real vector spaces is to use John's original theory but regard it as topologically enriched, so that the product-preserving functors to spaces have to also be topologically enriched.

Ah yes, where the parameter object $\mathcal{T}(1)$ of unary operations has the topological structure of $\mathbb{R}$.

view this post on Zulip John Baez (Oct 18 2024 at 00:31):

Todd Trimble said:

I think it might have been David Corfield at the n-Category Cafe who spoke of "sacred" (which I just misspelled as "scared", ha!) versus "profane". I like it.

Actually I learned about this "sacred/profane" distinction from James Dolan, who has several times told me that the terminology goes back to some famous logician. Unfortunately he can never remember which logician.

view this post on Zulip Todd Trimble (Oct 18 2024 at 00:43):

This is the Cafe post by David. I wonder if James (who also goes by "Jim", which I've settled on having heard him give this name in a coffee shop once, but who doesn't commit to a definitive choice if you ask him) had in mind Dirk van Dalen, the author of this quotation from David's post:

Logic appears in a ‘sacred’ and in a ‘profane’ form; the sacred form is dominant in proof theory, the profane form in model theory. The phenomenon is not unfamiliar, one observes this dichotomy also in other areas, e.g. set theory and recursion theory. Some early catastrophes such as the discovery of the set theoretical paradoxes or the definability paradoxes make us treat a subject for some time with the utmost awe and diffidence. Sooner or later, however, people start to treat the matter in a more free and easy way. Being raised in the ‘sacred’ tradition my first encounter with the profane tradition was something like a culture shock. Hartley Rogers introduced me to a more relaxed world of logic by his example of teaching recursion theory to mathematicians as if it were just an ordinary course in, say, linear algebra or algebraic topology. In the course of time I have come to accept this viewpoint as the didactically sound one: before going into esoteric niceties one should develop a certain feeling for the subject and obtain a reasonable amount of plain working knowledge. For this reason this introductory text sets out in the profane vein and tends towards the sacred only at the end. (p. V)

view this post on Zulip Todd Trimble (Oct 18 2024 at 00:44):

Abraham Robinson was probably someone who was relaxed and profane. Tarski: definitely not.

view this post on Zulip John Baez (Oct 18 2024 at 01:25):

I bet Jim read this quote! Thanks - I'll ask him.

view this post on Zulip David Corfield (Oct 18 2024 at 08:56):

Todd Trimble said:

This is the Cafe post by David

Good to revisit that discussion.

With the use of categorical logic now to plan one's breakfast have we reached the full extent of the profane, or do we recover a sense of the sacred in the everyday?

view this post on Zulip John Baez (Oct 18 2024 at 15:42):

Yes, this seems to be where James Dolan heard this story (and then recounted it to me):

yes, a lot of the specifics seem as i remember it, for example having grown up in the sacred tradition but then converting under hartley rogers's influence, seeing logic taught "as though like any other mathematical field", etc ....

view this post on Zulip Suraaj K S (Oct 18 2024 at 16:36):

This sentence is on the Wikipedia article on Model theory too!

https://en.m.wikipedia.org/wiki/Model_theory

view this post on Zulip Todd Trimble (Oct 18 2024 at 22:12):

Sorry for the off-topic, but this sentence from David Corfield

With the use of categorical logic now to plan one's breakfast have we reached the full extent of the profane, or do we recover a sense of the sacred in the everyday?

inevitably reminds me of the historian of religions Mircea Eliade, who wrote often about hierophanies of the sacred breaking into the everyday and quotidian, typically in camouflaged form. I just wonder whether David was also recalling him, when he wrote that sentence.

I suppose category theory is my religion, and I probably often sound that way, too, with the Yoneda lemma being a holy of holies. On the other hand, when I'm doing a categorical calculation at home, it just feels like an ordinary piece of algebra, without any feeling of "awe and diffidence". So I guess I don't practice what I preach. ;-)

view this post on Zulip David Corfield (Oct 19 2024 at 09:44):

Todd Trimble said:

I just wonder whether David was also recalling him, when he wrote that sentence.

Someone recently recommended Eliade to me, but I haven't started yet.

I think it was part Buddhist ("Show me where the Buddha isn't, and I'll spit there") and part reflection on the ingenuity that allowed everything to come together in the same place to be able to make that breakfast, right back to the domestication of wheat and the rise of metalworking.

view this post on Zulip Todd Trimble (Oct 19 2024 at 12:16):

"Show me where the Buddha isn't, and I'll spit there" <-- what a completely fantastic utterance, almost like a koan. It reminds me of a question a student asked after a talk by Bernie Glassman, a Zen teacher/roshi, something on the order of "how can I fully live in the here and now?", and Glassman replied, "Will anyone who is not here now please stand up!".

view this post on Zulip Suraaj K S (Oct 20 2024 at 23:35):

I was reading "Notions of Computation and Monads" (https://www.cs.cmu.edu/~crary/819-f09/Moggi91.pdf). Here, the author talks about three semantic-based approaches to prove equivalence of programs - operational, denotational, and logical.

I think it's safe to say that there's no clear categorical semantics analog of program equivalence using operational semantics, which, if I understand correctly, is defined in terms of 'contextual/observational equivalence' of program terms.

On the other hand, I think that the logical approach might just be the game of categorical semantics (an adjunction between a category of structured categories and the category of theories, although I don't know whether the programming language would be the doctrine or a theory).

Denotational semantics is also probably a part of this framework, where your intended model is just a structured category (or maybe a functor too, I'm not too sure)?

view this post on Zulip John Baez (Oct 20 2024 at 23:57):

I think it's safe to say that there's no clear categorical semantics analog of program equivalence using operational semantics, which, if I understand correctly, is defined in terms of 'contextual/observational equivalence' of program terms.

If there isn't, it could be that category theorists have work left to do! There are definitely papers that try to understand operational semantics using category theory. However, since I'm not an expert on operational semantics, I'm not sure how close they come to capturing the concepts that experts in operational semantics want captured!

view this post on Zulip Mike Shulman (Oct 21 2024 at 01:07):

I'm not enough of a computer scientist to be entirely confident in my answer, but reading what he says, it does sound like what he calls the "logical approach" is indeed just what we call categorical semantics, and what he calls the "denotational approach" is just a special case of that for a particular "intended" model.

view this post on Zulip Mike Shulman (Oct 21 2024 at 01:09):

I think usually the "intended model" in denotational semantics is some kind of category of domains. You can try https://en.wikipedia.org/wiki/Denotational_semantics to start with.

view this post on Zulip Suraaj K S (Oct 21 2024 at 13:48):

Mike Shulman said:

I think usually the "intended model" in denotational semantics is some kind of category of domains. You can try https://en.wikipedia.org/wiki/Denotational_semantics to start with.

Would it be correct to say that the intended model is just a category of domains? Or does it also include the interpretation functor (which is a morphism in the category of structured categories)?

view this post on Zulip Mike Shulman (Oct 21 2024 at 23:25):

It includes the functor.

view this post on Zulip Reed Mullanix (Oct 25 2024 at 18:54):

The view I take on operational semantics is that it gives a recipe for building a categorical semantics via some sort of realizability model. There are lots of knobs you can turn here, but the basic plan typically takes the following shape:

This is all rather hand-wavy, so let's work through a concrete example. Our first move is to pick a theory of computation: I'm going to go with Hofstra's basic combinatory objects (BCOs), as they nicely model small-step operational semantics. In particular, a BCO consists of:

$\forall a \in dom(g),\ g(a) \in dom(f) \Rightarrow a \in dom(c(f,g)) \land c(f,g)(a) \leadsto f(g(a))$

We should think of $c(f,g)$ as being a sort of computable weak composite of two computable functions.

A nice example of a BCO is the untyped lambda calculus equipped with a call-by-value operational semantics:

$e b \downarrow\ \Rightarrow a \leadsto^{*} b \Rightarrow e a \downarrow \land\ e a \leadsto^{*} e b$

which is pretty straightforward to show.

Finally, let's link up with categorical logic by showing how to turn a BCO $A$ into a fibration of preorders $\mathrm{Pred}(A) \to \mathrm{Set}$ of $A$-valued predicates.

$\forall x \in X.\ \forall a \in dom(r).\ \phi(x, a) \Rightarrow \psi(x, r(a))$

In more intuitive terms, $\phi \leq \psi$ whenever we have an $A$-computable function that turns $A$-computable evidence of $\phi$ into $A$-computable evidence of $\psi$.

From here, we can use the normal tools of categorical logic to build even more interesting fibrations from $\mathrm{Pred}(A)$, use the internal logic of $\mathrm{Pred}(A)$, etc. What this ends up buying us is the ability to prove that certain programs (a) exist and (b) obey some specification.
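[Editor's note: the call-by-value operational semantics underlying Reed's BCO example can be made concrete in a few lines. The following is a minimal sketch, not code from the thread; the term encoding, function names, and fuel bound are all my own choices, and substitution is simplified by assuming bound variables of the term are distinct from free variables of the value being substituted.]

```python
# Untyped lambda calculus with call-by-value small-step semantics.
# Terms: ("var", name) | ("lam", name, body) | ("app", f, a)

def is_value(t):
    return t[0] == "lam"

def subst(t, x, v):
    """Substitution t[x := v]; assumes no variable capture (a simplification)."""
    tag = t[0]
    if tag == "var":
        return v if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    return ("app", subst(t[1], x, v), subst(t[2], x, v))

def step(t):
    """One call-by-value step of the relation ~>, or None if t is a value or stuck."""
    if t[0] != "app":
        return None
    f, a = t[1], t[2]
    if not is_value(f):                 # reduce the function position first
        s = step(f)
        return None if s is None else ("app", s, a)
    if not is_value(a):                 # then the argument position
        s = step(a)
        return None if s is None else ("app", f, s)
    return subst(f[2], f[1], a)         # beta: (lam x. b) v ~> b[x := v]

def evaluate(t, fuel=1000):
    """Iterate ~>; t "converges" (t down-arrow) if a value is reached."""
    while fuel > 0 and not is_value(t):
        nxt = step(t)
        if nxt is None:                 # stuck, e.g. a free variable in head position
            return t
        t = nxt
        fuel -= 1
    return t

I = ("lam", "x", ("var", "x"))
assert evaluate(("app", I, I)) == I     # (lam x. x)(lam x. x) ~>* lam x. x
```

The `fuel` bound is there because the untyped calculus has divergent terms (e.g. Omega), so `evaluate` is only a partial function, which is exactly why `dom(-)` shows up in the BCO axioms above.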

view this post on Zulip Mike Shulman (Oct 25 2024 at 21:09):

Would you say that such a realizability model could then be considered an "intended" model and thereby become a "denotational" semantics?

view this post on Zulip Reed Mullanix (Oct 25 2024 at 22:38):

To be honest, I don't really like the term "denotational". Typically, it's defined as "constructing a bunch of mathematical objects that describe the meaning of programs", but this includes any notion of semantics lol.

Putting terminological gripes aside, here's my understanding of what the linked PDF is trying to get across. In the "denotational approach", we are doing all of our work within a single category (the "intended model"). This is typically how we'd work when using some sort of domain-theoretic semantics, as the category of domains is typically rich enough that we don't need to leave. Conversely, in the "logical approach", we are working in a doctrine rather than a single category. A more apt description might be "petit"/"gros".

view this post on Zulip Reed Mullanix (Oct 25 2024 at 22:44):

Realizability models kind of break this distinction, as you tend to do a lot of work inside a realizability model, but you also are interested in relating it to other models.

view this post on Zulip Mike Shulman (Oct 26 2024 at 00:11):

Reed Mullanix said:

To be honest, I don't really like the term "denotational". Typically, it's defined as "constructing a bunch of mathematical objects that describe the meaning of programs", but this includes any notion of semantics lol.

Oh good, I'm glad I'm not the only one who feels that way.

view this post on Zulip James Deikun (Oct 26 2024 at 00:28):

I tend to consider semantics "denotational" when there's a more-or-less-compositional mapping that is doing most of the work.

view this post on Zulip Mike Shulman (Oct 26 2024 at 00:46):

Doesn't a "compositional mapping" just mean that it's a functor, which is true of any notion of semantics?

view this post on Zulip Reed Mullanix (Oct 26 2024 at 00:50):

One more plausible definition is that you are looking at models in concrete categories; perhaps “concrete semantics” is a better descriptor?

view this post on Zulip Suraaj K S (Oct 26 2024 at 00:52):

Mike Shulman said:

Doesn't a "compositional mapping" just mean that it's a functor, which is true of any notion of semantics?

If I understand my PL right, I think that a compositional map is defined inductively over the syntax of the programming language.

view this post on Zulip James Deikun (Oct 26 2024 at 00:52):

My platonic ideal of denotational semantics is that it's basically defined by structural recursion on the syntax, yeah.

view this post on Zulip James Deikun (Oct 26 2024 at 00:56):

The more you need some fancy recursion scheme that can't be explained as "structural" the less denotational it is.

view this post on Zulip Suraaj K S (Oct 26 2024 at 00:57):

Would it be fair to say that the 'structured categories' in the Moggi paper are categories with monads?

I guess 'computational λ-calculus' is the doctrine?

view this post on Zulip Mike Shulman (Oct 26 2024 at 00:59):

I'm failing to see how you could define a notion of semantics other than by recursion on the syntax. Can you give an example?

view this post on Zulip Mike Shulman (Oct 26 2024 at 01:00):

Reed Mullanix said:

One more plausible definition is that you are looking at models in concrete categories; perhaps “concrete semantics” is a better descriptor?

Do you mean "concrete" in the precise sense of "admitting a faithful functor to Set"?

view this post on Zulip James Deikun (Oct 26 2024 at 01:00):

A big-step or small-step operational semantics will often recurse on the same piece of syntax many times or even no times. This is recursion but it's not structural recursion.

view this post on Zulip Mike Shulman (Oct 26 2024 at 02:00):

Sorry, I still don't get it. Are you talking about β\beta-reduction steps?

view this post on Zulip Suraaj K S (Oct 26 2024 at 02:22):

Yes. I think that James is talking about β reduction steps.

view this post on Zulip Suraaj K S (Oct 26 2024 at 02:30):

If I understand correctly, in small-step operational semantics (also called transitional semantics), you define the next step relation/function using recursion on the syntax (and I think that this 'syntax' is not just the syntax of the programming language, but you could have additional things like 'stores'/'memories', etc.). For instance, we could be giving the definition of a function step: (Program, Store) → (Program, Store).
The 'real' semantics is then defined using the next step relation. 'Real' semantics should (imo) be a function which maps Programs to some semantic domain. For instance, in our example above, real_semantics : Program → [Store → (Store + 'diverge')].

view this post on Zulip Suraaj K S (Oct 26 2024 at 02:35):

On the other hand, when we want to give a compositional (aka denotational aka fixedpoint) semantics, we 'directly' define the 'real_semantics' by recursion on the syntax of programs. Moreover, I think that often, you'd need fancy semantic domains to be able to do this.
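[Editor's note: the contrast Suraaj describes can be sketched with a hypothetical toy imperative language, invented here for illustration (the names `step`, `operational_semantics`, and `denotational_semantics` are mine). Since this toy language has no loops, the two semantics agree without any domain theory; looping constructs are precisely where the "fancy semantic domains" become necessary.]

```python
# Commands: ("skip",) | ("assign", x, n) | ("seq", c1, c2)
# Stores: dicts from variable names to numbers.

def step(cmd, store):
    """Small-step: one transition of the (Program, Store) machine,
    defined by recursion on the syntax of cmd."""
    tag = cmd[0]
    if tag == "assign":
        _, x, n = cmd
        return ("skip",), {**store, x: n}
    if tag == "seq":
        _, c1, c2 = cmd
        if c1 == ("skip",):
            return c2, store
        c1p, storep = step(c1, store)
        return ("seq", c1p, c2), storep
    raise ValueError("skip does not step")

def operational_semantics(cmd, store):
    """The 'real' semantics derived from step, by iterating to termination."""
    while cmd != ("skip",):
        cmd, store = step(cmd, store)
    return store

def denotational_semantics(cmd):
    """Compositional: the meaning of cmd (a store transformer) is defined
    directly by structural recursion from the meanings of its parts."""
    tag = cmd[0]
    if tag == "skip":
        return lambda store: store
    if tag == "assign":
        _, x, n = cmd
        return lambda store: {**store, x: n}
    _, c1, c2 = cmd
    f, g = denotational_semantics(c1), denotational_semantics(c2)
    return lambda store: g(f(store))

prog = ("seq", ("assign", "x", 1), ("assign", "y", 2))
assert operational_semantics(prog, {}) == denotational_semantics(prog)({})
```

Note that `step` is also defined by recursion on syntax, but the derived `operational_semantics` iterates it an unbounded number of times, whereas `denotational_semantics` visits each subterm exactly once.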

view this post on Zulip Sam Staton (Oct 26 2024 at 09:46):

Hi I learnt that there are three forms of semantics of programming languages: operational, denotational and axiomatic. I think I learnt it from Andy Pitts' lecture notes, slide 3. The three forms can be related via various theorems but they have different aims and people don't always state or prove them.

view this post on Zulip Sam Staton (Oct 26 2024 at 09:47):

In his lecture notes, Andy does talk about equivalence. But I think operational semantics often doesn't focus on equivalence of programs, e.g. this is only very briefly hinted at in the operational semantics in the definition of Standard ML. When I started out, it seemed that a lot of people were mainly proving theorems in the style of Wright and Felleisen, which was very influential.

view this post on Zulip Suraaj K S (Oct 26 2024 at 22:38):

Let's consider the following doctrine: λ-calculus, with a base type o and function types only. In this doctrine, I think that theories are just sets of equations, where an equation is a 4-tuple (H, M, N, τ), where H is a context, τ is a type, and M, N are (well-typed) terms.

In this 'doctrine', there is a notion of soundness and completeness with respect to 'type frames' (for example, as defined here: https://fsl.cs.illinois.edu/teaching/2021/cs522/CS522-Fall-2021-Simply-Typed-Lambda-Calculus.pdf page 33).

I'm wondering whether we can consider 'type frames' as structured categories (as one does in the adjunction/equivalence between the category of structured categories and the category of theories)?

view this post on Zulip Mike Shulman (Oct 26 2024 at 23:18):

Normally I would consider function types to be part of the doctrine, but the base type o to be part of a theory.

view this post on Zulip Mike Shulman (Oct 26 2024 at 23:19):

In general a theory can contain any number of base types, any number of undefined term symbols, as well as axioms.

view this post on Zulip Suraaj K S (Oct 26 2024 at 23:40):

Would it be safe to say that categorical models are a generalization of these 'type frame' models? I think people do say this, but I'm not really sure what this means

view this post on Zulip Mike Shulman (Oct 26 2024 at 23:45):

I expect with a bit of work, a 'type frame' can be turned into a model in Set, since it associates each type to a set.

view this post on Zulip Suraaj K S (Oct 29 2024 at 16:56):

Here's a question which is particularly vague (mostly because I don't understand things well enough), but here it goes.

There seem to be 'type-theory' style proof assistants. These include coq, agda, nuprl, etc. These have dependent types. The Curry-Howard correspondence also applies here. Proofs are programs / morphisms into some proposition object. Propositions are types.

There are also 'logic' style proof assistants. For instance, Isabelle/HOL. Here, propositions are terms, and what you need to prove is p = true, for a proposition p. Here, proofs are not first class objects that you can manipulate, like you can do in dependent type theories.

Here's my vague question: Does the framework of categorical logic only apply to the first kind of doctrine? Maybe a related question is as follows: when you have the doctrine / type theory for first order logic, is a proof a morphism to a proof object (the first style), or is a proof a demonstration that two morphisms are equal (the second style)?

view this post on Zulip Mike Shulman (Oct 29 2024 at 17:14):

Well, for one thing, I don't think the dichotomy is as sharp as that. Most dependent type theory based proof assistants have Russell-style universes, in which case propositions, like all types, are also terms, namely terms belonging to some universe. And some such proof assistants now have universes of proof-irrelevant propositions ("SProp") such that any two proofs of such a proposition are definitionally identical, so while they are still technically first-class objects you never need to know or think about them beyond whether or not there is one.

view this post on Zulip Mike Shulman (Oct 29 2024 at 17:16):

I would say the main difference is in whether the judgmental/dependency structure is "stratified" or not: whether propositions belong to the same "world" of types so that types can depend on propositions as well as vice versa, or whether propositions are allowed to depend on types but not the other way around. These two choices correspond to two different kinds of categorical structure: in one case the fibration of types/propositions has "comprehension" whereas in the other it doesn't. But both have a categorical semantics.

view this post on Zulip Suraaj K S (Oct 29 2024 at 17:27):

Mike Shulman said:

I would say the main difference is in whether the judgmental/dependency structure is "stratified" or not: whether propositions belong to the same "world" of types so that types can depend on propositions as well as vice versa, or whether propositions are allowed to depend on types but not the other way around. These two choices correspond to two different kinds of categorical structure: in one case the fibration of types/propositions has "comprehension" whereas in the other it doesn't. But both have a categorical semantics.

If propositions are allowed to depend on types, but not the other way around, I'm guessing that that would correspond to 'logic'-style (perhaps this is bad terminology), where you typically want to prove p=true?

view this post on Zulip Max New (Oct 29 2024 at 18:02):

Suraaj K S said:

Here's my vague question: Does the framework of categorical logic only apply to the first kind of doctrine?

Certainly not. Lawvere's work on hyperdoctrines is, I think, the original "categorical logic", and it models first-order logic.

view this post on Zulip Mike Shulman (Oct 29 2024 at 18:12):

Suraaj K S said:

If propositions are allowed to depend on types, but not the other way around, I'm guessing that that would correspond to 'logic'-style (perhaps this is bad terminology), where you typically want to prove p=true?

Whether you phrase the proof rules as "proving p" or "proving p=true" is an irrelevant detail of syntax, it doesn't change the categorical semantics.

view this post on Zulip Suraaj K S (Oct 29 2024 at 19:05):

I am a little curious about the following - theories of doctrines don't need to have a special sort Prop, right? However, I'm guessing that when people say first order logic, they are referring to a theory which has Prop in the first order logic doctrine? Or does the doctrine itself have Prop built in?

view this post on Zulip Mike Shulman (Oct 29 2024 at 19:14):

A doctrine can stipulate the existence of certain special types like Prop, in addition to whatever base types are specified by different theories in that doctrine. However, when I think of first-order logic, I don't think of Prop as being a type but rather a judgment. When Prop is a type, then that's more like higher-order logic.

view this post on Zulip Suraaj K S (Oct 29 2024 at 20:20):

I see. That makes sense. I was reading https://www.google.com/books/edition/Theories_of_Programming_Languages/X_ToAwAAQBAJ?hl=en&gbpv=1&pg=PA1&printsec=frontcover.

The book starts off with the syntax and semantics of predicate logic (section 1.1-1.2). Here is a summary of the syntax:

The next section defines denotational semantics.

It is fairly straightforward to define these semantic functions.

If we want to view this using the lens of categorical semantics, I would say that the text described a theory in a doctrine, and not a doctrine itself. We would want to be able to have other theories in the doctrine, besides the theory of natural numbers, I guess.
What would be the 'right' doctrine in this case?

Another thing that I notice is that the text doesn't really give axioms, but just gives an intended model. On the other hand, the description of doctrines must (?) have axioms too?

view this post on Zulip Mike Shulman (Oct 29 2024 at 21:14):

One small note is that variables are not nullary operations, but just a standard term-constructor. All theories contain variables, so they aren't specified as part of the theory the way constants and operations like + are.

view this post on Zulip Mike Shulman (Oct 29 2024 at 21:16):

Overall, I haven't looked at the book, but from your description it looks like the doctrine is just first-order logic. The theory in this doctrine then has one sort (called "integers"). As I said, in first-order logic, assertions/propositions are another judgment, not a sort.

view this post on Zulip Mike Shulman (Oct 29 2024 at 21:17):

As for axioms, a theory in a doctrine is allowed to contain axioms, but it could decline to exercise that right (or, equivalently, have an empty set of axioms).

view this post on Zulip Mike Shulman (Oct 29 2024 at 21:18):

So it looks to me as though this is a description of a particular theory in the doctrine of first-order logic (with the doctrine and the theory a bit mixed together in the presentation), with one sort called "integers" and some operation symbols (nullary constants, binary plus, etc.).

view this post on Zulip Suraaj K S (Oct 29 2024 at 21:24):

Mike Shulman said:

One small note is that variables are not nullary operations, but just a standard term-constructor. All theories contain variables, so they aren't specified as part of the theory the way constants and operations like + are.

Oh yes. That makes sense. The book has <IntExp> := 0|1.... | x | y ... , etc. which I blindly translated..

view this post on Zulip Suraaj K S (Oct 29 2024 at 21:28):

The text, however, doesn't have 'assertion' variables like it does for 'integer expression' variables. Maybe that makes it less clean than it should be

view this post on Zulip Mike Shulman (Oct 29 2024 at 21:34):

No, I wouldn't expect it to.

view this post on Zulip Mike Shulman (Oct 29 2024 at 21:34):

That goes along with propositions being a judgment and not a sort: you can't (in ordinary first-order logic) hypothesize an arbitrary proposition.

view this post on Zulip Suraaj K S (Oct 31 2024 at 16:08):

There seem to be three notions of what 'an algebra' is:

I was wondering if the above notions of 'algebra' are related? Maybe the endofunctor version is just a generalization of the monad version? I'm not sure how this links to the categorical logic version, though..

view this post on Zulip John Baez (Oct 31 2024 at 16:26):

I don't know what "just a generalization" means, but algebras of monads seem quite different from algebras of endofunctors. Note that if $T$ is a monad, an algebra of the monad $T$ is not the same as an algebra of the underlying endofunctor of $T$.

I don't know a way to systematically convert a monad $T$ into an endofunctor $F$ such that algebras of the endofunctor $F$ are equivalent to algebras of the monad $T$. Nor do I know a way to systematically convert an endofunctor $F$ into a monad $T$ such that algebras of the monad $T$ are equivalent to algebras of the endofunctor $F$. (If such processes exist, even under special conditions, I'd love to know about it!)
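[Editor's note: the first point, that monad algebras differ from algebras of the underlying endofunctor, can be checked by brute force in a tiny case. The sketch below is my own illustration, taking $T$ to be the Maybe monad $T(X) = X + 1$ on finite sets; for Maybe, the Eilenberg-Moore unit law already forces the associativity law, so filtering on the unit law suffices.]

```python
from itertools import product

# T = Maybe on finite sets: T(X) = X together with one extra point (None).
def T(X):
    return set(X) | {None}

def endofunctor_algebras(X):
    """All structure maps a : T(X) -> X, with no laws imposed."""
    TX, Xs = sorted(T(X), key=str), sorted(X)
    return [dict(zip(TX, vals)) for vals in product(Xs, repeat=len(TX))]

def is_monad_algebra(a, X):
    """Eilenberg-Moore unit law a(eta(x)) = x: since eta : X -> T(X) is the
    inclusion, a must fix every element of X.  (For Maybe, the
    multiplication law then holds automatically.)"""
    return all(a[x] == x for x in X)

X = {0, 1}
algs = endofunctor_algebras(X)
monad_algs = [a for a in algs if is_monad_algebra(a, X)]
# |X|^|T(X)| = 2^3 = 8 endofunctor algebras on a 2-element set, but only
# |X| = 2 monad algebras: the basepoint a(None) is the only freedom left.
print(len(algs), len(monad_algs))  # prints: 8 2
```

So a Maybe-monad algebra is exactly a pointed set, while a Maybe-endofunctor algebra is a set with an arbitrary endomap plus a point, which is genuinely more data.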

view this post on Zulip John Baez (Oct 31 2024 at 16:29):

On the other hand, there's a famous result due to Lawvere that if $T$ is a finitary monad on the category $\mathsf{Set}$, then there's a Lawvere theory $\Theta$ such that algebras of $\Theta$ are equivalent to algebras of $T$. And this is reversible: for any Lawvere theory $\Theta$ we can find a finitary monad on $\mathsf{Set}$ with the same algebras.

view this post on Zulip Mike Shulman (Oct 31 2024 at 16:36):

John Baez said:

Nor do I know a way to systematically convert an endofunctor $F$ into a monad $T$ such that algebras of the monad $T$ are equivalent to algebras of the endofunctor $F$.

This is called the [[algebraically-free monad]] generated by the endofunctor, and exists under mild conditions such as for an accessible endofunctor on a locally presentable category.

view this post on Zulip Suraaj K S (Oct 31 2024 at 16:51):

Are 'Lawvere theories' a synonym for "theories in the doctrine of equational logic"?

view this post on Zulip Suraaj K S (Oct 31 2024 at 16:55):

https://ncatlab.org/nlab/show/Lawvere+theory seems to tell me that. I just want to make sure that I'm not misunderstanding

view this post on Zulip John Baez (Oct 31 2024 at 20:10):

Perhaps because I don't share your background, I don't know what you think "equational logic" includes. Lawvere theories only describe n-ary operations obeying universal equational laws.

view this post on Zulip Mike Shulman (Oct 31 2024 at 23:42):

A Lawvere theory is sort of the "extensional essence" of a single-sorted theory in the doctrine of equational logic. It doesn't privilege any particular generating operations or axioms, but includes all the derivable operations and equations between them on the same footing.

view this post on Zulip Suraaj K S (Nov 01 2024 at 00:00):

John Baez said:

Perhaps because I don't share your background, I don't know what you think "equational logic" includes. Lawvere theories only describe n-ary operations obeying universal equational laws.

What I'd meant was what people call 'algebraic theories', etc. For example, as seen here: https://awodey.github.io/catlog/notes/catlog1.pdf

view this post on Zulip Suraaj K S (Nov 01 2024 at 00:10):

Mike Shulman said:

A Lawvere theory is sort of the "extensional essense" of a single-sorted theory in the doctrine of equational logic. It doesn't privilege any particular generating operations or axioms, but includes all the derivable operations and equations between them on the same footing.

I see, that makes sense. So, from every single-sorted theory of equational logic, I'm guessing that you 'can create' a Lawvere theory. It does seem to be related to the 'syntactic' category that the theory gets mapped to under the 'term-model' functor of the adjunction. If I understand correctly, the syntactic category is also freely built this way..

view this post on Zulip Suraaj K S (Nov 01 2024 at 02:45):

This answer: https://math.stackexchange.com/questions/1520546/difference-between-lawvere-theory-and-its-syntactic-category seems to say that they (the Lawvere theory and the syntactic category) are the same

view this post on Zulip Mike Shulman (Nov 01 2024 at 05:34):

Yes, that's right: the Lawvere theory associated to a syntactic theory is the syntactic category, at least when it's regarded as a theory in the doctrine of finite-product categories. (You can also regard an equational theory as living in the doctrine of cartesian multicategories, in which case its syntactic (multi)category contains the same information but isn't exactly what is usually referred to as a "Lawvere theory".)

view this post on Zulip Mike Shulman (Nov 01 2024 at 05:37):

More on your original question about the three notions of theory:

So you can see that the three notions are closely related, though not identical.

view this post on Zulip Suraaj K S (Nov 03 2024 at 23:56):

I'm curious if something like CiC (Calculus of inductive constructions) would fit in the doctrinal framework.

So far, we have been thinking about doctrines like equational logic, STLC, etc. I am guessing that the Calculus of Constructions can also be thought of as a doctrine. This is the 'topmost' point of the lambda cube. Furthermore, I am guessing that we could have 'theories' in this doctrine, such as the theory of natural numbers, binary trees, etc.

The Calculus of Inductive Constructions (baby Coq), in some sense, has infinitely many datatypes (in that you can define new datatypes). Can we still consider this as a 'theory' of CoC? Or would CiC be a doctrine in its own right? In that case, what would its theories be?

Note: I picked CiC arbitrarily, and I could have probably asked the same question about Agda, etc...

view this post on Zulip Mike Shulman (Nov 04 2024 at 04:32):

STLC already has infinitely many types, generated by the rule $\frac{A\,\mathsf{type} \quad B\,\mathsf{type}}{A\to B \, \mathsf{type}}$. I don't see the inductive datatype-forming operation in CIC as fundamentally different, although of course it's more complicated.
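[Editor's note: the point that one formation rule already yields infinitely many types is easy to animate. A throwaway sketch, with naming of my own invention, enumerating the simple types over a single base type o up to a given arrow-nesting depth:]

```python
# Types: ("o",) for the base type, ("->", A, B) for function types.

def types_up_to(depth, base=("o",)):
    """All simple types over `base` built with at most `depth` rounds of
    the arrow-formation rule."""
    tys = {base}
    for _ in range(depth):
        tys |= {("->", a, b) for a in tys for b in tys}
    return tys

# Each round closes the current set under ->, so the count grows without
# bound: 1 type, then 2, then 5, ...
print([len(types_up_to(d)) for d in range(3)])  # prints: [1, 2, 5]
```

The same shape of closure operation, with a more elaborate rule, is what the inductive-type former in CIC adds at the level of the doctrine.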

view this post on Zulip Mike Shulman (Nov 04 2024 at 04:32):

Rules that generate new types are never part of a theory, though; they have to be specified in the doctrine. So CiC is a doctrine that extends CoC.

view this post on Zulip Mike Shulman (Nov 04 2024 at 04:33):

Theories in CiC would be, like theories in any other doctrine, specified by giving some generating types, terms, and axioms.

view this post on Zulip Mike Shulman (Nov 04 2024 at 04:34):

BTW, saying "CiC" has an advantage for this discussion over saying "Coq" or "Agda" in that it is closer to precisely mathematically specified than any real-world proof assistant is.