Category Theory Zulip Server Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: learning: questions

Topic: "The" internal language


view this post on Zulip Pablo Donato (Feb 24 2025 at 18:04):

So this might be a very basic question, but I could not find any answer either on the nlab, stackexchange or this Zulip after searching for "internal language":

Why do we speak of "the" internal language of a category?

This use of the word "the" implies the following in my mind:

  1. If a category has an internal language, it must necessarily be unique
  2. There is a systematic way to derive the definition of the internal language given an axiomatic definition of the category (the functor $\mathrm{Lang} : \mathbf{Categories} \to \mathbf{Theories}$ mentioned on the nlab)

As a proof theorist, I am quite skeptical of (1), because it seems to imply that there is only one canonical way to capture the morphisms of the category syntactically, i.e. only one proof system that satisfies a soundness and a completeness theorem when given semantics in that category (notwithstanding the fact that this would only work with the syntactic category). But if one considers e.g. Heyting categories, there are a lot of different proof systems that have been built for intuitionistic first-order logic: many variants of Hilbert systems, natural deduction calculi, sequent calculi, deep inference calculi, proof nets, and of course (dependent) type theories.

One way to resolve the confusion might be by speaking of internal logic rather than internal language (assuming we restrict ourselves to the usual syntax of logical formulas/types with connectives and/or quantifiers). But the two terminologies seem to be used interchangeably in some contexts, typically when speaking of the so-called Curry-Howard-Lambek correspondence between (say) CCCs and simply-typed $\lambda$-calculus.
Another way would be to consider internal languages "up-to-isomorphism", but then this would be a very liberal notion of isomorphism that ignores basically every interesting property of a proof system. This also begs the question of how one could axiomatize such an enormous category $\mathbf{Languages}$ (or $\mathbf{Theories}$ according to the nlab).

Lastly as a computer scientist (and given the above), I am of course very skeptical of (2), as I have yet to see the construction of the magical $\mathrm{Lang}$ functor. But maybe no one claims to be able to do this, and the concept of internal language is just a loose framework to think about the relationship between logic and category theory.

view this post on Zulip Ryan Wisnesky (Feb 24 2025 at 18:20):

Imo, the answer is that internal languages are indeed considered/defined up to isomorphism (or rather, equivalence, a slightly weaker notion), throwing away all the proof theoretic stuff on purpose. For example, the internal logic of a cartesian category is a Horn theory; from a "semantics" point of view that's all there is to say but from a proof theory point of view, Horn theories vary wildly: some are decidable, others not; some presentations of them have properties other presentations do not, and so on. This isn't so different than what is done in denotational semantics, which ignores e.g. the time/space complexity of programs and only focuses on their 'meaning', throwing away the "operational" and "axiomatic" aspects of semantics on purpose.
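
For a concrete illustration of the kind of thing Ryan means (a standard example, not one he gives here), the theory of posets is a Horn theory: one sort, one binary relation symbol $\le$, and the sequents

$$\top \;\vdash_{x}\; x \le x, \qquad x \le y \,\wedge\, y \le z \;\vdash_{x,y,z}\; x \le z, \qquad x \le y \,\wedge\, y \le x \;\vdash_{x,y}\; x = y,$$

which can be interpreted in any category with enough cartesian (finite-limit) structure by reading the sort as an object, the relation as a subobject, and each sequent as a containment of subobjects.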

view this post on Zulip Mike Shulman (Feb 24 2025 at 18:34):

I would say that we only speak of "the" internal language of a category once we have fixed a doctrine and its semantics, by which I mean a formal system together with an adjunction relating its theories to a class of structured categories, and then "the" internal language of one of those structured categories is its image under the right adjoint part of that adjunction.
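
Schematically (writing, say, $\mathbf{Th}_{\mathcal D}$ and $\mathbf{Cat}_{\mathcal D}$ for the theories and the structured categories of a doctrine $\mathcal D$; the labels are just placeholders), that adjunction is

$$\mathrm{Syn} \;:\; \mathbf{Th}_{\mathcal D} \;\rightleftarrows\; \mathbf{Cat}_{\mathcal D} \;:\; \mathrm{Lang}, \qquad \mathrm{Syn} \dashv \mathrm{Lang},$$

and "the" internal language of a structured category $\mathcal C$ is $\mathrm{Lang}(\mathcal C)$.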

view this post on Zulip Mike Shulman (Feb 24 2025 at 18:35):

It's just that people are sloppy because there are certain doctrines that are so standard, like STLC/CCC, or HOL/Topoi, that once you know a category is, say, a topos, when you speak of its internal language everyone assumes you mean with respect to that doctrine.

view this post on Zulip Mike Shulman (Feb 24 2025 at 18:36):

But it's certainly true, for instance, that a topos has both a HOL internal language and an MLTT internal language.

view this post on Zulip Damiano Mazza (Feb 24 2025 at 18:41):

Concerning (2), the $\mathrm{Lang}$ functor may certainly be defined very explicitly. Volume 2 of the Elephant is one place where you can find a very precise description for lots of kinds of doctrines, from algebraic theories/cartesian categories all the way to higher-order theories/elementary toposes.

view this post on Zulip Damiano Mazza (Feb 24 2025 at 18:47):

A general, rough description is that you can define a logical language by taking the objects of $\mathcal C$ as sorts and the morphisms/subobjects of $\mathcal C$ as function symbols/relation symbols. Thanks to the structure of $\mathcal C$ (which we are assuming to be a certain kind of category that fits with the theories we are using), every formula of this language (over the logic we are considering) will have an interpretation in $\mathcal C$, so $\mathrm{Lang}(\mathcal C)$ will be the theory whose axioms are all the formulas which are "true" when interpreted in $\mathcal C$.
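
Schematically (using the corner-bracket notation that appears later in this thread):

$$\mathrm{Lang}(\mathcal C) \;=\; \big(\ \text{sorts: } \ulcorner A \urcorner \text{ for each object } A,\ \ \text{symbols: } \ulcorner f \urcorner \text{ for each morphism } f,\ \ \text{axioms: } \{\varphi \mid \mathcal C \models \varphi\}\ \big).$$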

view this post on Zulip Kevin Carlson (Feb 24 2025 at 19:20):

But note that (as Mike also already noted) you have to already know the "certain kind of category", i.e. doctrine, that you're in--the question may have arisen mostly from imagining you can construct the language in this way from a bare category, which isn't enough info.

view this post on Zulip Morgan Rogers (he/him) (Feb 24 2025 at 19:59):

Mike Shulman said:

But it's certainly true, for instance, that a topos has both a HOL internal language and an MLTT internal language.

This point may appear at a glance to support @Pablo Donato's point about there being different fragments of logic interpretable in the same class of categories which have different proof-theoretic properties. However, it's worth highlighting that when Mike talks about doctrines, this includes the choice of structure-preserving functors between the categories involved. In this instance, the difference is between (topoi with logical functors) and (topoi with atomic geometric morphisms)^op, if I'm not mistaken about the content of MLTT?

Still, I expect there can be differences in the proof theory that aren't accounted for.

view this post on Zulip Mike Shulman (Feb 24 2025 at 21:08):

Well, it depends on what you mean by "MLTT". Probably for 1-toposes I should mean extensional MLTT, without universes if I don't want to assume those in my toposes, in which case the morphisms would be locally cartesian closed functors. Plain MLTT doesn't say anything about the subobject classifier, nor, as far as I know, about there being right adjoints. But one can enhance MLTT to talk about the whole topos structure, such as with a type of propositions for the subobject classifier, and in that case the morphisms would again be logical functors, even though the syntax is different from HOL.

view this post on Zulip Pablo Donato (Feb 24 2025 at 21:39):

Thanks for your answers! This gives me a clearer picture of what is meant by "internal language", given that I never took a look at the Elephant for instance.

As mentioned by @Mike Shulman and @Kevin Carlson, my misunderstanding was mostly based on the idea that the formal system and the extra structure on categories are not specified in advance. I suspect however that this misconception might be quite common among category theorists, for instance in this quote from this article by @Evan Patterson and Michael Lambert:

In some cases, the logic defined by a 2-category can be identified with a preexisting system given in the traditional syntactic style. Lawvere theories correspond to single-sorted algebraic theories, categories with finite products to multi-sorted algebraic theories, categories with finite limits to essentially algebraic theories, and cartesian closed categories to simply typed lambda calculus with product types.

To me the use of the word "identified" suggests that there is an exact correspondence with the preexisting proof system that comes a posteriori, which implicitly discards the possibility of other proof systems that could fit the correspondence in different and interesting ways. This is a feeling I get when reading most material on categorical logic, but maybe this is because category theorists and proof theorists don't have the same objectives in mind when studying logic.

@Morgan Rogers (he/him) I think the choice of structure-preserving functors is also something that was not in my picture of the story. But Mike's comment (and your last remark) makes me doubt that doctrines can capture the full story of the proof theory, especially given that there does not seem to be an agreed-upon definition of doctrines according to the article I quoted.

@Damiano Mazza I have seen the general construction of $\mathrm{Lang}$ in the nlab article, but I don't see how it fits with $\lambda$-calculus and more generally type theory. This seems to be specific to the syntax of multi-sorted FOL with function symbols, but I guess it also works somehow with HOL in simple type theory, since it is the original example of internal language for elementary topoi?

view this post on Zulip Mike Shulman (Feb 24 2025 at 21:44):

Pablo Donato said:

To me the use of the word "identified" suggests that there is an exact correspondence with the preexisting proof system that comes a posteriori, which implicitly discards the possibility of other proof systems that could fit the correspondence in different and interesting ways.

I suspect that many category theorists, at least, know what's going on and are just being sloppy in the way I mentioned. But some might indeed have a misconception. And, as you suggest, some category theorists might consider two formal systems "equivalent" as soon as they have the same categorical models, even though to a logician they might be quite different.

view this post on Zulip Mike Shulman (Feb 24 2025 at 21:45):

My unfinished categorical logic notes try to emphasize the general picture of how logics in varying doctrines correspond to structured categories.

view this post on Zulip Kevin Carlson (Feb 24 2025 at 22:31):

I see nothing sloppy in the Lambert-Patterson quote, to be sure. They clearly specify a doctrine and then the syntax-up-to-equivalence side of the duality associated to that doctrine.

view this post on Zulip Kevin Carlson (Feb 24 2025 at 22:33):

I think in particular you're just not reading "identified" in the standard mathematical usage.

view this post on Zulip Mike Shulman (Feb 24 2025 at 23:11):

I dunno, I could see an argument that that quote is misleadingly phrased, that it suggests the syntactic side is uniquely determined by the 2-category. Here's some more context from the article:

In categorical logic, a logical system is defined by a 2-category of categories with extra structure. Theories within the logic are objects of the 2-category; models of a theory are structure-preserving functors out of the theory; and model homomorphisms are natural transformations. In some cases, the logic defined by a 2-category can be identified with a preexisting system given in the traditional syntactic style.

view this post on Zulip Pablo Donato (Feb 24 2025 at 23:17):

@Mike Shulman What I am not sure about is how one can define rigorously the notion of "having a categorical model", given that we don't know how to model syntax categorically in a systematic way. Taking the case of STLC, the usual interpretation maps judgments $x_1 : A_1, \ldots, x_n : A_n \vdash t : A$ to morphisms $\llbracket \Gamma \vdash t : A \rrbracket : \llbracket A_1 \rrbracket \times \ldots \times \llbracket A_n \rrbracket \to \llbracket A \rrbracket$ in a CCC. But if you don't have product types, morphisms of the form $A \to B \times C$ cannot be reflected in the syntax, so you don't get an honest completeness theorem. And even if you have product types, the interpretation collapses the distinction between products $A \times B$ in types and "products" $x : A, y : B$ in contexts, thus we are missing some important structure. This can be recovered by giving a categorical model with more structure (e.g. with CwF as done here, or maybe with multicategories), but the point is that there is no canonical way to give meaning to the sentence "two formal systems have the same categorical models": it might always be possible to find (classes of) categories which have exactly the structure we want for the models to coincide with well-chosen interpretations, but then it kinda feels like mirroring syntax in the semantics. I guess this is one way to relate two formal systems by identifying what they have in common, without resorting to a direct translation between them or into a third system.
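
A toy Haskell sketch of the interpretation just described, to make the collapse of contexts into products concrete (the `CCC` class, `Tm` datatype, and `Fun` instance below are made-up illustrative names, not from any library):

```haskell
{-# LANGUAGE GADTs #-}

-- A minimal CCC interface: objects are Haskell types, k a b is a morphism a ~> b.
class CCC k where
  idC    :: k a a
  (.:)   :: k b c -> k a b -> k a c
  pair   :: k a b -> k a c -> k a (b, c)
  exl    :: k (a, b) a
  exr    :: k (a, b) b
  curryC :: k (a, b) c -> k a (b -> c)
  applyC :: k (a -> b, a) b

-- STLC terms in a context; the context is literally a nested product type,
-- which is exactly the identification of contexts with products discussed above.
data Tm ctx a where
  Here  :: Tm (ctx, a) a                   -- the last variable of the context
  There :: Tm ctx a -> Tm (ctx, b) a       -- a variable further out
  Lam   :: Tm (ctx, a) b -> Tm ctx (a -> b)
  App   :: Tm ctx (a -> b) -> Tm ctx a -> Tm ctx b
  Pair  :: Tm ctx a -> Tm ctx b -> Tm ctx (a, b)

-- [[ ctx |- t : a ]] : ctx ~> a, as in the usual CCC semantics of STLC.
interp :: CCC k => Tm ctx a -> k ctx a
interp Here       = exr
interp (There v)  = interp v .: exl
interp (Lam b)    = curryC (interp b)
interp (App f x)  = applyC .: pair (interp f) (interp x)
interp (Pair a b) = pair (interp a) (interp b)

-- One model: the CCC of Haskell functions.
newtype Fun a b = Fun { runFun :: a -> b }

instance CCC Fun where
  idC                  = Fun id
  Fun g .: Fun f       = Fun (g . f)
  pair (Fun f) (Fun g) = Fun (\a -> (f a, g a))
  exl                  = Fun fst
  exr                  = Fun snd
  curryC (Fun f)       = Fun (\a b -> f (a, b))
  applyC               = Fun (\(f, a) -> f a)
```

In this sketch a context literally is a nested product type, which is the collapse just mentioned; and nothing forces every morphism of a given CCC to arise as `interp t`, which is one way the completeness issue shows up.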

@Kevin Carlson So for instance in the case of CCCs and STLC, the doctrine is the 2-category of CCCs, and the "syntax-up-to-equivalence side of the duality" is STLC? Thus instead of considering the specific category of STLC-theories, I should consider any category equivalent to it? I am not sure the term "identified" has the same unambiguous reading in every mathematical context. Here you use it as "there exists a (syntax-semantics duality) adjunction up-to-equivalence", which is a very specific (and non-obvious to me) meaning.

view this post on Zulip Mike Shulman (Feb 24 2025 at 23:57):

When I say that some syntactic doctrine has categorical models in some class of structured categories, I mean that there is an adjunction between the category of syntactic theories in that doctrine and the category of such structured categories. I think this is pretty precise.

With this definition, you're right that a given syntax can have more than one kind of categorical model, as well as that a given class of categories can be semantics for more than one kind of syntax. So when I said that category theorists may be tacitly considering formal systems "the same" when they have the same categorical models, I meant "have" in the sense of structure rather than mere property, in the same way that we might say that a set "has a group structure".

view this post on Zulip Kevin Carlson (Feb 25 2025 at 00:15):

Pablo Donato said:

Kevin Carlson So for instance in the case of CCCs and STLC, the doctrine is the 2-category of CCCs, and the "syntax-up-to-equivalence side of the duality" is STLC? Thus instead of considering the specific category of STLC-theories, I should consider any category equivalent to it? I am not sure the term "identified" has the same unambiguous reading in every mathematical context. (...)

Perhaps not, but I meant something much more general. Anyway, it's clear enough that phrasings like "the syntax", "the logic" can suggest single-valuedness where it doesn't exist.

view this post on Zulip Pablo Donato (Feb 25 2025 at 01:03):

Mike Shulman said:

So when I said that category theorists may be tacitly considering formal systems "the same" when they have the same categorical models, I meant "have" in the sense of structure rather than mere property, in the same way that we might say that a set "has a group structure".

Okay so basically you were saying "they have the same models in the given class of structured categories", rather than in the abstract. This makes a lot more sense. But then I am not sure about what you mean by "syntactic doctrine": is this related to Dolan's definition on the nlab?

view this post on Zulip Mike Shulman (Feb 25 2025 at 02:42):

No, by a syntactic doctrine I mean a class of syntactic logics or type theories with the same judgmental structure. For instance, STLC is a doctrine, and we obtain particular theories in that doctrine by adding basic/axiomatic types, terms, and equalities.
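
For instance (a standard example, not one mentioned above), the theory of monoids in the STLC doctrine has one basic type, two basic terms, and three equations:

$$M : \mathrm{Type}, \qquad e : M, \qquad m : M \times M \to M,$$
$$m(e, x) = x, \qquad m(x, e) = x, \qquad m(m(x, y), z) = m(x, m(y, z)).$$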

view this post on Zulip John Baez (Feb 25 2025 at 03:04):

Syntactic doctrines are indeed "related" to doctrines in Dolan's sense, but it's better not to get into this now, since figuring out the precise relation is even harder than the original question!

view this post on Zulip Damiano Mazza (Feb 25 2025 at 09:54):

Pablo Donato said:

I have seen the general construction of $\mathrm{Lang}$ in the nlab article, but I don't see how it fits with $\lambda$-calculus and more generally type theory. This seems to be specific to the syntax of multi-sorted FOL with function symbols, but I guess it also works somehow with HOL in simple type theory, since it is the original example of internal language for elementary topoi?

For the $\lambda$-calculus, we are talking about the following kinds of theories (I don't know what they're called usually, let's call them simply-typed higher-order theories):

- a set of sorts, from which simple types are built using products and arrows;
- a set of function symbols, each with an assigned input and output type;
- a set of equations between well-typed $\lambda$-terms built from the function symbols.

Given a cartesian closed category $\mathcal C$, you define $\mathrm{Lang}(\mathcal C)$ as follows:

- there is a sort $\ulcorner A \urcorner$ for every object $A$ of $\mathcal C$;
- there is a function symbol $\ulcorner f \urcorner : \ulcorner A \urcorner \to \ulcorner B \urcorner$ for every morphism $f : A \to B$ of $\mathcal C$;
- there is an equation $s = s'$ for every pair of terms $s, s'$ of the same type whose interpretations in $\mathcal C$ are equal.

Of course, we need to say what "interpretation" means. This is as expected:

- each sort $\ulcorner A \urcorner$ is interpreted as the object $A$, and product and arrow types are interpreted using the products and exponentials of $\mathcal C$;
- each function symbol $\ulcorner f \urcorner$ is interpreted as the morphism $f$, and $\lambda$-terms are then interpreted as morphisms in the usual way.

A morphism $\mathbb S \to \mathbb T$ of simply-typed higher-order theories is defined as follows. First, it is a pair of functions from sorts of $\mathbb S$ to sorts of $\mathbb T$ and from function symbols of $\mathbb S$ to function symbols of $\mathbb T$. This induces a function $G$ from terms of $\mathbb S$ to terms of $\mathbb T$ (acting homomorphically on types and constructors of the $\lambda$-calculus). We require that, for every equation $s = s'$ of $\mathbb S$, $G(s) = G(s')$ is an equation of $\mathbb T$.

One then checks that every cartesian-closed functor $F : \mathcal C \to \mathcal D$ induces a morphism of simply-typed higher-order theories $\mathrm{Lang}(\mathcal C) \to \mathrm{Lang}(\mathcal D)$, just by mapping $\ulcorner A \urcorner$ to $\ulcorner FA \urcorner$ and $\ulcorner f \urcorner$ to $\ulcorner Ff \urcorner$.

This defines a functor from CCCs to simply-typed higher-order theories, which is right adjoint to the "syntactic category" functor, taking a simply-typed higher-order theory $\mathbb T$ and mapping it to the CCC $\mathrm{Syn}(\mathbb T)$ whose objects are types of $\mathbb T$ and morphisms are terms of $\mathbb T$ modulo $\beta\eta$-equivalence.
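
Spelled out, the syntactic category is, schematically,

$$\mathrm{Ob}\big(\mathrm{Syn}(\mathbb T)\big) = \text{types of } \mathbb T, \qquad \mathrm{Syn}(\mathbb T)(A, B) = \{\, t \mid x : A \vdash t : B \,\} \big/ =_{\beta\eta},$$

with identities given by variables and composition by substitution.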

view this post on Zulip Damiano Mazza (Feb 25 2025 at 10:09):

Of course, the category $\mathcal C$ above might be more than a CCC. For example, it might be a Grothendieck topos. In that case, there are at least three "internal languages" of $\mathcal C$ that I can think of: the one I described above; the internal language as an elementary topos (a sort of higher-order set theory); and its geometric language (a sort of positive first-order theory with infinitary disjunctions). And there's probably also some kind of Martin-Löf type theory too, because $\mathcal C$ is also locally CCC...

So, as has been made abundantly clear in the posts above, "the" internal language is ambiguous if we do not specify which kind of category (doctrine) we regard $\mathcal C$ as belonging to. However, in practice this is usually very clear from the context.

view this post on Zulip Patrick Nicodemus (Feb 25 2025 at 13:53):

Mike Shulman said:

But it's certainly true, for instance, that a topos has both a HOL internal language and an MLTT internal language.

Both MLTT and HOL let you quantify across types, which I think is a really important example of the kind of ambiguity that is being discussed here. I have always thought of the "internal language" as one which should, in Lawvere's words, "free it of reliance on an external notion of infinite" - although he was concerned with infinite diagrams and their limits, which arguably requires a stronger and more explicit dependence on external indexing constructs than uniform quantification over all types.

view this post on Zulip Patrick Nicodemus (Feb 25 2025 at 13:59):

But as for myself, I do not think of "the internal language" of topos theory as permitting quantification over all types.
One can introduce type constants $A, B$ and give constructions and theorems like $A, B : \mathrm{Ob} \vdash \{ (f, g) : A^B \times B^A \mid gf = 1 \land fg = 1 \} : \mathrm{Ob}$, but one cannot then apply these theorems to new objects $C, D$.

I should go back and look at some of the internal language of topos theory papers and see if they are explicit about this distinction

view this post on Zulip Pablo Donato (Feb 25 2025 at 14:25):

@Damiano Mazza thanks for giving me the precise definition; now I have an exact picture of what @Mike Shulman meant by syntactic doctrine (at least in the case of STLC; I guess there is no consensus on a general definition, according to @John Baez's comment). It still appears to me as a very "extensional"/non-computable definition, which was my concern (2). For it to be computable, you would need a way to enumerate all the objects and morphisms of your category $\mathcal{C}$. But to be fair I have never seen that claimed anywhere in the literature, it is just something I wish were true.

Also there seems to be some huge redundancy in your mapping $\ulcorner \cdot \urcorner$, because one has a type/function symbol for every object/morphism, including those that are freely generated with products/internal homs; but those could just be expressed syntactically with the associated type and term constructors in $\lambda$-calculus (i.e. product/arrow types and pairs/$\lambda$-abstractions). Am I missing something?

Damiano Mazza said:

So, as has been made abundantly clear in the posts above, "the" internal language is ambiguous if we do not specify which kind of category (doctrine) we regard $\mathcal C$ as belonging to. However, in practice this is usually very clear from the context.

Am I right to interpret this as saying that for any "semantic" doctrine $\mathfrak{D}$ (i.e. a 2-category of categories with structures such as $\mathrm{CCC}$), there should be a unique (up to equivalence of categories) syntactic doctrine $\ulcorner \mathfrak{D} \urcorner$ such that there exists an adjunction $\mathrm{Syn} \dashv \mathrm{Lang}$ with $\mathrm{Lang} : \mathfrak{D} \to \ulcorner \mathfrak{D} \urcorner$? It does not seem to agree with what I understood from previous comments, that it is only once you have fixed the (right) syntactic doctrine that the adjunction holds, and then $\mathrm{Lang}$ is unique simply by the uniqueness of right adjoints.

view this post on Zulip Mike Shulman (Feb 25 2025 at 15:12):

Yes, the internal language of any category in any doctrine is always huge, redundant, and noncomputable. But in practice that's not an issue, because we don't ever actually work with it: when actually reasoning using this correspondence, we specify a small finite theory that is computable, and use the adjunction between theories and categories to conclude that once we can interpret the generating objects, morphisms, and equations of our theory in some category, we can automatically interpret any other constructions derived from them.
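
Concretely, the step "use the adjunction" is the hom-isomorphism (in the schematic notation from earlier in the thread)

$$\mathbf{Cat}_{\mathcal D}\big(\mathrm{Syn}(\mathbb T),\ \mathcal C\big) \;\cong\; \mathbf{Th}_{\mathcal D}\big(\mathbb T,\ \mathrm{Lang}(\mathcal C)\big):$$

giving the finitely many generating objects, morphisms, and equations of $\mathbb T$ an interpretation in $\mathcal C$ is a morphism into $\mathrm{Lang}(\mathcal C)$, which corresponds to a structure-preserving functor out of $\mathrm{Syn}(\mathbb T)$ interpreting everything derivable in $\mathbb T$.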

view this post on Zulip Mike Shulman (Feb 25 2025 at 15:14):

I have one correction to Damiano's careful description of $\mathrm{Lang}(\mathcal{C})$: there should be one function symbol $\ulcorner f \urcorner : T \to U$ for any types $T$ and $U$ (not just "generating types" of the form $\ulcorner A \urcorner$) and any morphism $f$ from the interpretation of $T$ to the interpretation of $U$.

view this post on Zulip Mike Shulman (Feb 25 2025 at 15:15):

Patrick Nicodemus said:

Both MLTT and HOL let you quantify across types

What do you mean by that? It sounds like a statement such as "for all types X..." which is not part of MLTT or HOL as I understand them. (If "MLTT" has universes, then you can quantify over all types in some universe, but that's not exactly the same.)

view this post on Zulip Patrick Nicodemus (Feb 25 2025 at 16:16):

I meant MLTT with one universe type, but I'll accept your distinction between small types and arbitrary types.
By the acronym "HOL" I understand that variant of higher-order logic which is the foundation of the family of theorem provers called HOL, which is based on ML-style parametric polymorphism. Every theorem is implicitly quantified over all types in it, and you can apply this theorem with different instantiations of those type variables rather than re-proving the same theorem every time.

view this post on Zulip Mike Shulman (Feb 25 2025 at 16:58):

Ah, I see. That's not what topos theorists usually mean by "HOL".

view this post on Zulip Mike Shulman (Feb 25 2025 at 17:00):

I also wouldn't really call that "quantification", since there is no statement of the form "for all types" inside the language. That sort of polymorphism can be obtained implicitly in the usual syntax/semantics framework by just observing that when we prove a theorem about "arbitrary types" what we are officially doing is working in a theory containing some "axiomatic" generating types, and according to the initiality/freeness of syntax we can then interpret that theory into any category, or any other theory, by sending those generating types anywhere we want.
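
A small Haskell illustration of that last point (purely illustrative): a "theorem about arbitrary types" is a definition over generating type variables, and applying it to new types is ordinary instantiation rather than a quantifier inside the object language.

```haskell
-- A "theorem" proved over two axiomatic generating types a and b:
-- in the free theory on a and b there is a term of type (a, b) -> (b, a).
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- Interpreting the generating types elsewhere is plain instantiation;
-- no statement of the form "for all types ..." appears inside the language.
atIntBool :: (Bool, Int)
atIntBool = swap (3 :: Int, True)

atStringList :: (String, [Double])
atStringList = swap ([1.0, 2.0], "hi")
```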

view this post on Zulip Damiano Mazza (Feb 25 2025 at 17:14):

Mike Shulman said:

I have one correction to Damiano's careful description of $\mathrm{Lang}(\mathcal{C})$: there should be one function symbol $\ulcorner f \urcorner : T \to U$ for any types $T$ and $U$ (not just "generating types" of the form $\ulcorner A \urcorner$) and any morphism $f$ from the interpretation of $T$ to the interpretation of $U$.

Ah, yes, sorry, I wrote too quickly. In fact, the notation $\ulcorner f \urcorner$ is ambiguous, because one arrow $f$ of $\mathcal C$ may induce several function symbols, one for each possible decomposition of its domain/codomain.

For example, an arrow $f : A \times B \to C$ of $\mathcal C$ induces at least two function symbols, $\ulcorner f \urcorner_1 : \ulcorner A \times B \urcorner \to \ulcorner C \urcorner$ and $\ulcorner f \urcorner_2 : \ulcorner A \urcorner \times \ulcorner B \urcorner \to \ulcorner C \urcorner$.

view this post on Zulip Damiano Mazza (Feb 25 2025 at 17:14):

@Pablo Donato, concerning the redundancy, what happens usually (as in this case) is that the redundancy is harmless because it only induces isomorphic objects when you take the syntactic category: $\mathrm{Syn}(\mathrm{Lang}(\mathcal C))$ is equivalent to $\mathcal C$, so, for instance, $\ulcorner A \urcorner \times \ulcorner B \urcorner$ and $\ulcorner A \times B \urcorner$ become isomorphic.
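
In symbols: the counit $\varepsilon_{\mathcal C} : \mathrm{Syn}(\mathrm{Lang}(\mathcal C)) \to \mathcal C$ sends $\ulcorner A \urcorner \mapsto A$ and $\ulcorner f \urcorner \mapsto f$ and is an equivalence; since it sends both $\ulcorner A \urcorner \times \ulcorner B \urcorner$ and $\ulcorner A \times B \urcorner$ to objects isomorphic to $A \times B$, those two objects are already isomorphic in $\mathrm{Syn}(\mathrm{Lang}(\mathcal C))$.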

view this post on Zulip Kevin Carlson (Feb 25 2025 at 18:31):

Pablo Donato said:

Am I right to interpret this as saying that for any "semantic" doctrine $\mathfrak{D}$ (i.e. a 2-category of categories with structures such as $\mathrm{CCC}$), there should be a unique (up to equivalence of categories) syntactic doctrine $\ulcorner \mathfrak{D} \urcorner$ such that there exists an adjunction $\mathrm{Syn} \dashv \mathrm{Lang}$ with $\mathrm{Lang} : \mathfrak{D} \to \ulcorner \mathfrak{D} \urcorner$? It does not seem to agree with what I understood from previous comments, that it is only once you have fixed the (right) syntactic doctrine that the adjunction holds, and then $\mathrm{Lang}$ is unique simply by the uniqueness of right adjoints.

No; fixing $\ulcorner \mathfrak D \urcorner$ and composing with an adjunction out of $\mathfrak{D}$ gives counterexamples such as “the free Cartesian category generated by a type theory with only unary contexts”, by composing the standard syntax-semantics adjunction for bare categories with that for the free Cartesian category over a category. I think you ought to want something like monadicity for the syntax-semantics adjunction to get closer to uniqueness, but I’m not sure what to say here.

view this post on Zulip Mike Shulman (Feb 25 2025 at 18:36):

Something that should be said is that some categorical logicians prefer to formulate the syntax/semantics relationship as an equivalence of categories. This requires defining "morphisms of theories" to be essentially functors between their corresponding syntactic categories, which I regard as doing a sort of violence to the notion of "syntax". But if you do take that perspective, then of course "the category of theories" corresponding to a given category of structured categories is unique, since it must be equivalent to the latter.

view this post on Zulip Nathanael Arkor (Feb 25 2025 at 18:55):

You can define morphisms of theories syntactically, i.e. by specifying a term in the target theory for each operator in the source theory. This leads to an equivalence without being tautological. But, for whatever reason, this seems to be uncommon in the literature.

view this post on Zulip Pablo Donato (Feb 25 2025 at 18:58):

@Mike Shulman hmm yes I guess at this point, one might as well work directly with the syntactic categories.

@Kevin Carlson I am not sure I follow your counter-example. Are you saying that even when both the syntactic and semantic doctrines are fixed, one can find multiple syntax-semantics adjunctions between those?

view this post on Zulip Mike Shulman (Feb 25 2025 at 18:58):

@Nathanael Arkor well, at least a little less tautological...

view this post on Zulip Nathanael Arkor (Feb 25 2025 at 19:19):

Well, I would argue no more tautological than the correspondence between theories and categories.

view this post on Zulip Pablo Donato (Feb 25 2025 at 19:21):

I also have a methodological question which is a bit orthogonal. Currently I am trying to devise a new syntax for proofs in the $\{\top, \wedge, \Rightarrow\}$-fragment of intuitionistic logic inspired by Peirce's existential graphs. The structure of terms is quite different from (I think richer than) STLC, but it should still be sound by interpretation in any CCC.

Since I have a hard time devising a bidirectional translation between my system and STLC (which if I understood correctly should give an equivalence between the corresponding syntactic doctrines), I was considering finding instead a syntax-semantics adjunction of type $\mathrm{Syn} \dashv \mathrm{Lang}$. Do you think this is a reasonable way to proceed, e.g. has it been done before for another formal proof system? My hope is that this could help me formulate the right syntax for my system with the right equational theory (something akin to $\beta\eta$-equivalence). Somehow I have the feeling that this is not the intended usage of this kind of syntax-semantics adjunction, and that such adjunctions always come a posteriori, once the syntax is already well understood.

view this post on Zulip John Baez (Feb 25 2025 at 20:10):

Mike Shulman said:

Something that should be said is that some categorical logicians prefer to formulate the syntax/semantics relationship as an equivalence of categories. This requires defining "morphisms of theories" to be essentially functors between their corresponding syntactic categories, which I regard as doing a sort of violence to the notion of "syntax".

I agree, and this sort of violence is typical when people seek an equivalence when nature is really giving them an adjunction. Any sort of adjunction can be "improved" to get an equivalence. For example, in Galois theory, given an inclusion of fields $k \subseteq K$ we get an adjunction between the poset of intermediate fields $k \subseteq F \subseteq K$ and (the opposite of) the poset of subgroups of $\mathrm{Gal}(K|k)$. This is pretty easy to define. But people prefer working with an equivalence, so they put on extra conditions to make that so. Then things inevitably become more technical.
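
Written out, this Galois adjunction is the pair of order-reversing maps

$$F \;\mapsto\; \mathrm{Gal}(K|F), \qquad H \;\mapsto\; K^{H} = \{\, a \in K \mid \sigma(a) = a \text{ for all } \sigma \in H \,\},$$

with $F \subseteq K^{\mathrm{Gal}(K|F)}$ for every intermediate field $F$ and $H \subseteq \mathrm{Gal}(K|K^{H})$ for every subgroup $H$; the "extra conditions" are what cut this down to an equivalence on the fixed points.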

Which is okay, but I think it's always good to start by stating when you have an adjunction, which is often simple and conceptually clear, before mucking around to get an equivalence.

view this post on Zulip Mike Shulman (Feb 25 2025 at 20:53):

Although there are different ways of making an adjunction into an equivalence. One way, which I think the Galois theory people do, is to just restrict to the subcategories of fixed points. Another, which is what's happening with syntax, is that if the left adjoint (say) is essentially surjective, you can replace its domain by the full image of the left adjoint, or equivalently by the Kleisli category of the induced monad.
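
In symbols, for an adjunction $F \dashv G$ with unit $\eta$ and counit $\varepsilon$, the first option restricts it to an equivalence between the full subcategories of fixed points,

$$\{\, X \mid \eta_X \text{ is an isomorphism} \,\} \;\simeq\; \{\, Y \mid \varepsilon_Y \text{ is an isomorphism} \,\}.$$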

view this post on Zulip John Baez (Feb 25 2025 at 21:06):

Yes, the Galois people seem to more or less restrict to the subcategories of fixed points - though looking it over, they also often require that the extension $K$ be finite-dimensional over $k$, and I'm not sure what that's for.

view this post on Zulip John Baez (Feb 25 2025 at 21:07):

Is there anything on the nLab about taking the subcategories of fixed points of an adjunction to get an equivalence? I was wanting to refer to something like that, but I couldn't instantly find it. They have it for posets here.

view this post on Zulip John Baez (Feb 25 2025 at 21:09):

What happens if you apply that trick to some familiar syntax-semantics adjunction?

view this post on Zulip Mike Shulman (Feb 25 2025 at 21:51):

[[fixed point of an adjunction]]?

view this post on Zulip Mike Shulman (Feb 25 2025 at 22:13):

Another weird thing that happens with syntax/semantics is that the category of theories is naturally a 1-category, while the category of structured categories is naturally a 2-category. If you stick to strict morphisms of theories that send generators to generators, there seems to be no way to boost the category of theories up to a 2-category, and no way to make Lang into a 2-functor. So to get an adjunction you kind of have to consider instead a 1-category of structured categories (and strictly-structure-preserving functors).

view this post on Zulip Mike Shulman (Feb 25 2025 at 22:14):

In that picture, there are generally no nontrivial fixed points of the adjunction, since Syn(Lang(C)) will have a lot of extra copies of the objects of C, and Lang(Syn(T)) will have a lot of extra generating types arising from terms of T.

view this post on Zulip Mike Shulman (Feb 25 2025 at 22:15):

But you kind of want to say that every C is fixed, since the counit $\mathrm{Syn}(\mathrm{Lang}(C)) \to C$ is always an equivalence, even though that's not visible to the 1-adjunction.

view this post on Zulip Mike Shulman (Feb 25 2025 at 22:17):

I guess if the doctrine allows equations between types, then Lang will remember those and you can get a counit that's always an isomorphism so every category is fixed. In that case the fixed points on the other side should be the theories that are "complete", i.e. in which every type is equal to a generating type and every term is equal to a function symbol.

view this post on Zulip Kevin Carlson (Feb 25 2025 at 22:30):

Pablo Donato said:

Kevin Carlson I am not sure I follow your counter-example. Are you saying that even when both the syntactic and semantic doctrines are fixed, one can find multiple syntax-semantics adjunctions between those?

My example was to give two such adjunctions but with two different syntactic doctrines (unary type theory and simple type theory, both without function types, both mapping to Cartesian categories). But you should be able to cook up an example with both endpoints fixed, for instance whenever the semantic doctrine admits a nontrivial left adjoint endomorphism.