Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: Overview of Syntax-Semantics Duality


view this post on Zulip John Onstead (Feb 05 2024 at 15:56):

I am currently learning about syntax-semantics duality and related concepts and I wanted to verify my understanding of the concepts as well as get a good overview of the subject.
Mainly, I want to find out how exactly syntax-semantics duality and the Computational Trinity relate. My understanding is: computational trinity = syntax-semantics duality (category <-> type theory) + Curry-Howard correspondence (type theory <-> logical system). Is this a good assessment of their relationship? Secondly, as I understand it, there is a plurality of both- that is, there isn't a single "THE Curry-Howard correspondence" or "THE syntax-semantics duality" and thus no "THE computational trinity". Instead, there exist multiple such correspondences, each one dealing with a particular class of categories, type theories, and logical systems. This can be stated in categorical language as having each main class of categories, type theories, and logical systems form categories, in which case there is then an adjunction between these corresponding categories. Is all this correct- if not, what did I get wrong, and if it is correct, is there any key detail I am overlooking?

view this post on Zulip John Onstead (Feb 05 2024 at 15:57):

One additional question I have is: does the computational trinity always exist- that is, given any syntax-semantics duality, is it always possible to "complete the triangle" to get a full computational trinity? If not, what are the conditions on the classes involved? In addition, is the triangle of adjunctions formed between some category of categories, type theories, and logical systems in some computational trinity a commutative triangle? If so, then which adjunction is the composition of which other two? I was thinking that it's the adjunction between categories and logical systems which is given by composing the syntax-semantics duality adjunction between categories and type theories with the Curry-Howard adjunction between type theories and logical systems, but is this true/always true? Or is there more independence of the adjunctions (IE, perhaps there is a separate "syntax-semantics adjunction" between categories and logical systems independent of any mention of type theories?)

view this post on Zulip Ralph Sarkis (Feb 05 2024 at 16:42):

I don't know if you already stumbled upon @Ivan Di Liberti 's thesis but its introduction helped me clear up some fog (it is not exactly about computational trinity).

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 17:46):

The term "syntax–semantics duality" usually refers to some kind of connection (typically an adjunction or equivalence) between a (2-)category of "theories" and a (2-)category whose objects are the categories of models of such theories, e.g. Gabriel–Ulmer duality is referred to as a syntax–semantics duality between essentially algebraic theories and locally finitely presentable categories.

This is of a different nature to the kind of correspondence described by the nLab page on the [[computational trilogy]] (which is not a standard term). There, the different components are all on the side of "syntax" (loosely speaking), and the distinction is primarily in the presentation (e.g. logic, type theory, category theory).

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 17:47):

To be honest, I think it's a little misleading that [[syntax-semantics duality]] and [[computational trilogy]] are treated as "related concepts" on their nLab pages. They are only very loosely related.

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 17:50):

The Curry–Howard correspondence traditionally refers to a specific correspondence, but you do also see some people using it more generally. I think a better generic term is [[propositions as types]].

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 17:52):

One additional question I have is: does the computational trinity always exist- that is, given any syntax-semantics duality, is it always possible to "complete the triangle" to get a full computational trinity?

The problem with this question is that "computational trinity" doesn't have a well-defined meaning. It's a series of analogies, some with precise statements, others simply giving an intuition (at least for the examples on the nLab page). However, one should certainly hope that any well-behaved type theory has a categorical interpretation (and a logic can be obtained essentially by throwing out the terms).

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 17:54):

I think the best way to understand these ideas is to read a textbook account that covers various examples, e.g. Lambek & Scott's Introduction to higher order categorical logic, Crole's Categories for Types, etc.

view this post on Zulip Mike Shulman (Feb 05 2024 at 19:13):

Nathanael Arkor said:

This is of a different nature to the kind of correspondence described by the nLab page on the [[computational trilogy]] (which is not a standard term). There, the different components are all on the side of "syntax" (loosely speaking), and the distinction is primarily in the presentation (e.g. logic, type theory, category theory).

I don't think I agree. I would say that a computational trilogy is a sort of refinement of syntax/structure, where logic and type theory are two different faces of syntax, while category theory is semantic.

view this post on Zulip Madeleine Birchfield (Feb 05 2024 at 19:37):

I'm not sure there is much of a distinction between logic and type theory.

Typed first order logic with equality can be defined as a dependent type theory with identity types; empty types; unit types; product types; disjunctions and existential quantifiers as HITs; types $\prod_{y:A} x = y$ which say that an element $x:A$ of a type $A$ is a center of contraction; dependent product types of propositions (i.e. types $A$ with a family of witnesses $p(x):\prod_{y:A} x = y$ that each element $x:A$ is a center of contraction), from which one can derive logical implication and negation; weak function extensionality, which says that dependent products of propositions are themselves propositions; and if needed, excluded middle and/or UIP.

Typed higher order logic with equality can be defined as a dependent type theory with identity types, dependent sum types, dependent product types, weak function extensionality, a unit type, a Coquand universe of all propositions in the type theory, and if needed, excluded middle and/or UIP.
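To make the "logic inside type theory" point concrete, here is a tiny illustration in Lean 4 (just an example of propositions-as-types in a proof assistant, not the exact list of type formers above): conjunction and the existential quantifier are ordinary type formers, and proofs are terms.

```lean
-- Propositions are types; proofs of them are terms.
example (P Q : Prop) (h : P ∧ Q) : Q ∧ P :=
  ⟨h.2, h.1⟩   -- conjunction behaves like a product type

example {A : Type} (P : A → Prop) (a : A) (h : P a) : ∃ x, P x :=
  ⟨a, h⟩       -- the existential behaves like a (truncated) dependent pair
```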

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 19:53):

Mike Shulman said:

Nathanael Arkor said:

This is of a different nature to the kind of correspondence described by the nLab page on the [[computational trilogy]] (which is not a standard term). There, the different components are all on the side of "syntax" (loosely speaking), and the distinction is primarily in the presentation (e.g. logic, type theory, category theory).

I don't think I agree. I would say that a computational trilogy is a sort of refinement of syntax/structure, where logic and type theory are two different faces of syntax, while category theory is semantic.

Would you say that algebraic theories are semantic, then? My impression is that the general consensus is that algebraic theories are syntactic (albeit presentation-free) notions.

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 19:55):

(To be honest, I feel the community is not exactly unanimous on exactly where the split between syntax and semantics is, which can make these kinds of discussions particularly confusing. I remember us discussing something related on the Types Zulip at some point.)

view this post on Zulip Kevin Arlin (Feb 05 2024 at 19:58):

An algebraic theory is the initial model of itself, so it's quite natural that whether it's syntactic or semantic is a bit of a vexed question: maybe the whole Lawverian philosophy is precisely to eliminate the clean distinction between syntax and semantics and see how great that is.

view this post on Zulip Kevin Arlin (Feb 05 2024 at 19:59):

Of course a category with finite limits is still somehow much more syntaxy than its category of models, but I think that might be essentially a metaphorical position.

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 20:00):

Madeleine Birchfield said:

I'm not sure there is much of a distinction between logic and type theory.

In my view, the crucial difference between a logic and a type theory is that a type theory has terms that completely describe their derivational structure. Generally speaking, logics have no terms. But many people are happy to call things type theories that have some ambiguity (that is, you cannot reconstruct their derivations from their terms), so this is by no means the generally agreed upon distinction.
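For a minimal illustration of what I mean (standard simply typed $\lambda$-calculus notation): the typed term on the left determines its typing derivation completely, whereas the sequent on the right may have many derivations that the sequent itself does not record:

$$\vdash \lambda x^{A\times B}.\ \pi_1\, x \;:\; A\times B \to A \qquad\text{versus}\qquad A \wedge B \vdash A.$$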

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 20:03):

Kevin Arlin said:

An algebraic theory is the initial model of itself, so it's quite natural that whether it's syntactic or semantic is a bit of a vexed question: maybe the whole Lawverian philosophy is precisely to eliminate the clean distinction between syntax and semantics and see how great that is.

On one hand, I think the initial model of any "theory" should be syntactic in a suitable sense, without making it any less semantic, but on the other hand, I also think that the line between syntax and semantics is rather blurry (perhaps even entirely artificial).

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 20:04):

For instance, in the algebraic setting, one can always construct a theory from a model, such that the initial model of the reconstructed theory is precisely the model you started with.

view this post on Zulip Mike Shulman (Feb 05 2024 at 20:50):

There are a lot of different meanings of "theory". I would say that a "presentation" of a theory is a syntactic object, whereas the "walking model" of that theory is a semantic object obtained from the adjunction between syntax and semantics. One can then argue until doomsday about whether the presentation or the walking model is more deserving of the name "theory". But I've always interpreted the leg "category theory" in a computational trilogy as referring to general semantics, not just the particular semantic objects that are freely built out of syntax.

view this post on Zulip Leopold Schlicht (Feb 05 2024 at 21:02):

@John Onstead In case you are not aware of it yet, I want to recommend the paper Homotopy type theory: the logic of space by Mike Shulman. This is one of the masterpieces of mathematical exposition. It explains both the "syntax-semantics duality" and "propositions as types" in the context of homotopy type theory.

view this post on Zulip Leopold Schlicht (Feb 05 2024 at 21:19):

By the way, like @Nathanael Arkor I would use the term "syntax-semantics duality" differently than you do. In my opinion, theorems such as Makkai duality and Gabriel-Ulmer duality should be called "syntax-semantics dualities". But the theorems you have in mind, like the equivalence between theories in typed $\lambda$-calculus and cartesian closed categories, are not even "dualities", and they are not really about semantics in the sense of model theory.
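(For reference, the equivalence I'm referring to, as in Part I of Lambek and Scott, is roughly of the covariant shape

$$\lambda\text{-}\mathbf{Calc} \;\simeq\; \mathbf{CCC},$$

with a category of typed $\lambda$-calculi and translations on one side and a category of cartesian closed categories and (strict) structure-preserving functors on the other; the fact that it is covariant is part of why I would not call it a "duality".)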

view this post on Zulip Leopold Schlicht (Feb 05 2024 at 21:27):

I don't know why @Mike Shulman disagrees. For instance, the cartesian closed categories in the above equivalence really play the role of a different presentation of the syntax of a theory in typed $\lambda$-calculus.

view this post on Zulip Mike Shulman (Feb 05 2024 at 21:29):

In my idiolect, a cartesian closed category is not syntax, but semantics.

view this post on Zulip Mike Shulman (Feb 05 2024 at 21:29):

Syntax is about strings of symbols.

view this post on Zulip Mike Shulman (Feb 05 2024 at 21:31):

So, for instance, Gabriel-Ulmer duality is a relation between two different categories of semantic objects.

view this post on Zulip Mike Shulman (Feb 05 2024 at 21:32):

(But I'm glad you like my paper! (-: )

view this post on Zulip Leopold Schlicht (Feb 05 2024 at 21:52):

I understand your point of view, but from my limited understanding, at least in classical logic I would say "syntax" refers to theories and "semantics" to the models of these theories.
So for instance, categorical logicians consider the notion of a Lawvere theory (as you know). If you go as far as calling them "theories", then I think it's only fair to refer to them as "syntax" also - after all, we can assign models to them!

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 21:54):

I suspect this is not a discussion we will ever all come to an agreement on :)

(However, I hope we can all agree that the current state of affairs, with a proliferation of technical terms whose meanings people do not agree upon, is entirely unideal.)

view this post on Zulip Mike Shulman (Feb 05 2024 at 21:57):

Well, as discussed in the blog post I linked to (and particularly its comments), there is a whole continuum of notions of "theory" from the totally syntactic to the somewhat more semantic. A syntactic purist could very well take issue with using the word "theory" for Lawvere's notion.

view this post on Zulip Kevin Arlin (Feb 05 2024 at 21:59):

I agree that this is likely to be an infinite debate. That said! Leopold, a Lawvere theory is just a category with finite products. So, I mean…that’s an awful lot of syntax, and in particular many categories are both syntax and semantics in this sense.

view this post on Zulip Morgan Rogers (he/him) (Feb 05 2024 at 21:59):

Leopold Schlicht said:

So for instance, categorical logicians consider the notion of a Lawvere theory (as you know).

Beware of generalizing about categorical logicians, many of them are here :yum:

view this post on Zulip Leopold Schlicht (Feb 05 2024 at 22:01):

I only now see that the syntax/semantics issue was debated even after this message. To be honest I had read the thread only to this message and now I feel sorry for reinforcing this infinite discussion which is just about different conventions people have! :sweat_smile:

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 22:02):

Kevin Arlin said:

I agree that this is likely to be an infinite debate. That said! Leopold, a Lawvere theory is just a category with finite products. So, I mean…that’s an awful lot of syntax, and in particular many categories are both syntax and semantics in this sense.

I think algebraic theories are a good example of one of the subtleties many people overlook or dismiss as unimportant in these discussions (though it is mentioned in the blog post comments Mike linked to). There is an important distinction between algebraic theories à la Lawvere (comprising a certain class of identity-on-objects functors) and cartesian categories. Yet these notions are often conflated.

view this post on Zulip John Onstead (Feb 05 2024 at 22:04):

I find it quite interesting that there is disagreement on what is meant by syntax vs semantics! I guess to clarify my original question a little more, I am interested more in the raw structural aspects of the systems. That is, I am interested in the phenomenon of "category-type theory duality", which I'll take to be any kind of adjunction or equivalence that exists between a class of type theories and a class of categories (regardless of the interpretation of such an adjunction).

view this post on Zulip Leopold Schlicht (Feb 05 2024 at 22:05):

Why do you say "duality", though? :)

view this post on Zulip Kevin Arlin (Feb 05 2024 at 22:06):

Nathanael Arkor said:

I think algebraic theories are a good example of one of the subtleties many people overlook or dismiss as unimportant in these discussions (though it is mentioned in the blog post comments Mike linked to). There is an important distinction between algebraic theories à la Lawvere (comprising a certain class of identity-on-objects functors) and cartesian categories. Yet these notions are often conflated.

That’s a good point—would you want to claim that fixing that identity-on-objects functor instantiates the shift to syntax? This rules out Cartesian categories without a nice generating set of objects from being viewed as syntax, as far as I understand…not sure whether that might be desirable.

view this post on Zulip Ivan Di Liberti (Feb 05 2024 at 22:08):

There is no disagreement. The debate you are watching is a debate on nuances. They agree essentially on everything and are showing each one of their feathers. Please guys, let's try to not turn this into misinformation.

view this post on Zulip Mike Shulman (Feb 05 2024 at 22:08):

Up to equivalence, every cartesian category has a generating set of objects...

view this post on Zulip Mike Shulman (Feb 05 2024 at 22:10):

Also it's worth noting that "duality" is often/usually used to refer to a contravariant adjunction or equivalence.

view this post on Zulip Kevin Arlin (Feb 05 2024 at 22:11):

Ah, yes, hmm…I was thinking of Adámek-Rosický-Vitale's book, which focuses on arbitrary Cartesian categories as theories.

view this post on Zulip John Onstead (Feb 05 2024 at 22:15):

I guess I should provide a more detailed breakdown of some confusions I have... here's my current understanding of the matter: In the (or maybe "an") adjunction/equivalence between (some class of) categories and type theories, there is a "Lang" functor that sends a category to its "internal language" type theory, with objects in the category interpreted as types in the corresponding type theory. Its adjoint "Syn" sends a type theory to its syntactic category, but it is the contexts, not necessarily types, that become the objects of the syntactic category. When it comes to the computational trilogy (at least from the "Rosetta stone" under the nlab article for computational trinity), objects and types are associated to one another like with the Lang functor, but in a way that aligns with the "propositions as types", and so by composition objects are associated to propositions. This differs in the original adjunction with internal languages, where propositions are interpreted to be subobjects, not objects of the category. Still, there does seem to be similarity between the two: for instance, the logical conjunction is interpreted to be a meet of subobjects with Lang and a product of objects with the trinity. I am thus wondering what the connection is between all this?

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 22:16):

Kevin Arlin said:

Nathanael Arkor said:

I think algebraic theories are a good example of one of the subtleties many people overlook or dismiss as unimportant in these discussions (though it is mentioned in the blog post comments Mike linked to). There is an important distinction between algebraic theories à la Lawvere (comprising a certain class of identity-on-objects functors) and cartesian categories. Yet these notions are often conflated.

That’s a good point—would you want to claim that fixing that identity-on-objects functor instantiates the shift to syntax? This rules out Cartesian categories without a nice generating set of objects from being viewed as syntax, as far as I understand…not sure whether that might be desirable.

As Mike says, it doesn't rule out viewing cartesian categories as syntactic, if one is so inclined. However, there are situations where it is useful to fix a set of generating sorts.

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 22:19):

(One such situation is when you want control over the finiteness of presentations, e.g. you want a finite set of sorts. If you start with a cartesian category and form a set of sorts, that set will generally be infinite, even if you started with something generated by a finite set of sorts, because you have all the objects generated by finite products of the base sorts, etc.)

view this post on Zulip Jonas Frey (Feb 05 2024 at 22:24):

I remember that Lawvere, when asked about syntax-semantics duality, said "I never spoke of syntax-semantics duality, but of theory-model duality". I think that's an important point, since the term "syntax-semantics duality" lumps together two separate correspondences, which I think it is helpful to differentiate:

view this post on Zulip Mike Shulman (Feb 05 2024 at 22:24):

John Onstead said:

there is a "Lang" functor that sends a category to its "internal language" type theory, with objects in the category interpreted as types in the corresponding type theory. Its adjoint "Syn" sends a type theory to its syntactic category, but it is the contexts, not necessarily types, that become the objects of the syntactic category.

There are various ways of slicing this pie. If you're working with a syntactic theory where contexts are literally lists of types, then you can't specify the "contexts" directly, so you do have to take the types of Lang(C) to be the objects of C. Then when defining Syn(T) the most natural choice is to take the objects to be contexts of T, which means for instance that Syn and Lang are not inverse equivalences of categories. Personally I prefer to say they are just adjoint anyway, but you can make them equivalences by working in a situation where, for instance, Syn(Lang(C)) is equivalent to C even though it has more objects, e.g. when there are finite products (or $\Sigma$-types, in the dependently typed case) any context (that's a finite list of types) is equivalent to a single type.
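Concretely, the comparison I have in mind is the usual one: with finite products (or $\Sigma$-types) a context is interpreted the same way as a single type,

$$[\![\, x_1{:}A_1,\; \dots,\; x_n{:}A_n \,]\!] \;\cong\; [\![\, A_1 \times \cdots \times A_n \,]\!] \qquad\text{(or } \Sigma_{x_1:A_1}\cdots\, A_n(x_1,\dots,x_{n-1}) \text{ in the dependent case),}$$

which is the sense in which Syn(Lang(C)) can be equivalent to C despite having more objects.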

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 22:27):

Jonas Frey said:

This correspondence is sometimes formulated as a (bi)equivalence of (bi)categories, but I find that somewhat artificial since once one has a class of syntactic presentations of theories and an essentially surjective mapping from this class to a bicategory of categorical theories, one can always upgrade this mapping to a biequivalence by "pulling back" the morphisms.

(I agree that biequivalences of categorical theories and syntactic presentations are artificial, but I think equivalences are natural, so long as one defines translations of presentations syntactically too, rather than "cheating" by defining morphisms of presentations in terms of morphisms of categorical theories, as you describe, which is unfortunately how it often appears in the literature.)

view this post on Zulip Mike Shulman (Feb 05 2024 at 22:27):

John Onstead said:

objects and types are associated to one another like with the Lang functor, but in a way that aligns with the "propositions as types", and so by composition objects are associated to propositions. This differs in the original adjunction with internal languages, where propositions are interpreted to be subobjects, not objects of the category.

This is just mixing up doctrines. In the doctrine of first-order logic, the syntax has separate notions of "type" and "proposition", and when this is interpreted into a category the types are interpreted as objects and the propositions as subobjects. But in the doctrine of dependent type theory, the syntax has only types, some of which are viewed as, or used as, propositions (propositions as types), and when this is interpreted in a category, the types are interpreted as objects in a slice category, some of which may happen to be subobjects
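Schematically, just to summarize the two interpretations side by side (writing $[\![-]\!]$ for the interpretation into a category $\mathcal{C}$):

$$\text{first-order logic:}\quad \varphi(x)\ \text{a proposition over a type } A \;\rightsquigarrow\; [\![\varphi]\!] \hookrightarrow [\![A]\!] \ \text{a subobject;}$$

$$\text{dependent type theory:}\quad B(x)\ \text{a type over } A \;\rightsquigarrow\; [\![B]\!] \to [\![A]\!] \ \text{an object of } \mathcal{C}/[\![A]\!],$$

and the latter is a monomorphism precisely when $B(x)$ is a proposition in the h-proposition sense.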

view this post on Zulip Jonas Frey (Feb 05 2024 at 22:32):

Nathanael Arkor said:

Jonas Frey said:

This correspondence is sometimes formulated as a (bi)equivalence of (bi)categories, but I find that somewhat artificial since once one has a class of syntactic presentations of theories and an essentially surjective mapping from this class to a bicategory of categorical theories, one can always upgrade this mapping to a biequivalence by "pulling back" the morphisms.

(I agree that biequivalences of categorical theories and syntactic presentations are artifical, but I think equivalences are natural, so long as one defines translations of presentations syntactically too, rather than "cheating" by defining morphisms of presentations in terms of morphisms of categorical theories, as you describe, which is unfortunately how it often appears in the literature.)

Yes, it can be interesting to try to find an explicit definition of morphisms/translations on the syntactic side, and probably this can even serve as a guiding principle in the design of syntactic calculi. So I shouldn't say it's artificial.

view this post on Zulip Frank Tsai (Feb 05 2024 at 22:42):

Jonas Frey said:

Is there a name for the "pulled back" morphisms? I'm reading a book (called algebraic theories) by J. Adámek, J. Rosický, E. M. Vitale and they have this exact construction.

view this post on Zulip Jonas Frey (Feb 05 2024 at 22:43):

Frank Tsai said:

Jonas Frey said:

Is there a name for the "pulled back" morphisms? I'm reading a book (called algebraic theories) by J. Adámek, J. Rosický, E. M. Vitale and they have this exact construction.

Are you sure they have this construction in the book? I've spent a lot of time with this book while working on clans, and as far as I know it doesn't contain any syntax at all?

view this post on Zulip Nathanael Arkor (Feb 05 2024 at 22:44):

Frank Tsai said:

Jonas Frey said:

Is there a name for the "pulled back" morphisms?

I'm not sure whether it has a specific name in the context of type theory, but categorically, the construction is the full image of the function sending each presentation to its corresponding categorical theory.

view this post on Zulip Frank Tsai (Feb 05 2024 at 22:47):

Jonas Frey said:

Frank Tsai said:

Jonas Frey said:

  • on the other hand, there's the correspondence between representations of theories as categories (pioneered by Lawvere), and syntactic representations of theories by function symbols, logical connectives, axioms etc. This correspondence is sometimes formulated as a (bi)equivalence of (bi)categories, but I find that somewhat artificial since once one has a class of syntactic presentations of theories and an essentially surjective mapping from this class to a bicategory of categorical theories, one can always upgrade this mapping to a biequivalence by "pulling back" the morphisms.

Is there a name for the "pulled back" morphisms? I'm reading a book (called algebraic theories) by J. Adámek, J. Rosický, E. M. Vitale and they have this exact construction.

Are you sure they have this construction in the book? I've spent a lot of time with this book while working on clans, and as far as I know it doesn't contain any syntax at all?

Maybe I misunderstood what you meant by "pulling back" morphisms. I was referring to the proof of theorem 9.15 (Duality of algebraic categories and theories).

view this post on Zulip Mike Shulman (Feb 05 2024 at 22:48):

Note that on Zulip, if no one else has commented after the message you are replying to, then your message will appear immediately below it, so there is no need to also quote the message, unless you want to emphasize only part of it as being the bit you're responding to.

view this post on Zulip Frank Tsai (Feb 05 2024 at 22:49):

I see. Thanks for the FYI

view this post on Zulip Jonas Frey (Feb 05 2024 at 22:52):

Thanks for the reminder Mike.

Anyway @Frank Tsai I think the result you're referring to is a "theory-model duality", so it falls under the first bullet point in my list. The "pullback" comes up in the second bullet point.

view this post on Zulip John Onstead (Feb 05 2024 at 23:08):

Mike Shulman said:

This is just mixing up doctrines. In the doctrine of first-order logic, the syntax has separate notions of "type" and "proposition", and when this is interpreted into a category the types are interpreted as objects and the propositions as subobjects. But in the doctrine of dependent type theory, the syntax has only types, some of which are viewed as, or used as, propositions (propositions as types), and when this is interpreted in a category, the types are interpreted as objects in a slice category, some of which may happen to be subobjects

This clears things up... I think I'm mainly being thrown off by the nlab where I do most of my learning (and so I will definitely take a look at some of the other resources listed above). Checking again, the nlab page for "internal logic" does indeed (eventually) clarify all this, and so I guess the adjunction category under discussion there is more of a "category of logical systems" (in this case the category of first order logics) than a "category of type theories". Revisiting "relation between type theory and category theory" shows that for dependent type theories under Lang and with the category of locally cartesian closed categories, non-dependent types are objects of LCCCs while the dependent types (which can double as propositions under propositions as types) are morphisms (and so indeed objects of the slice category). However, this makes the table given under "overview" on that same page misleading, given that the table corresponds generalized elements (which in the context of dependent type theories should correspond with dependent types and thus propositions) with terms/programs instead. Maybe the table means to talk about objects and morphisms in slice categories instead, but it doesn't seem to make that distinction clear...

view this post on Zulip Kevin Arlin (Feb 05 2024 at 23:16):

The nLab's an incredible resource but I don't think it's a good place to do most of your learning, definitely recommend looking at more fleshed-out resources, including those linked on most nLab pages themselves.

view this post on Zulip Mike Shulman (Feb 05 2024 at 23:17):

Yes, the fact that nLab pages have often been put together out of small edits and additions by many people over time means that they aren't always internally consistent or coherent.

view this post on Zulip John Baez (Feb 05 2024 at 23:58):

Jonas Frey said:

I've spent a lot of time with this book while working on clans, and as far as I know it doesn't contain any syntax at all?

A book that contains no syntax:

[image]

view this post on Zulip Jonas Frey (Feb 06 2024 at 00:00):

Haha, at least for this book we know that there are no mistakes!

view this post on Zulip John Onstead (Feb 06 2024 at 15:41):

Thanks for all your help yesterday! I realized perhaps I could understand the situation a lot better too if I knew what a category of type theories or logical systems was, which would follow from how such categories are constructed. This would of course make the differences between type theories and logical systems a lot clearer. For instance, I know the category of monoids is constructed from Mon(Set), which can also be thought of as the category of models of the theory of monoids modeled in Set. Thus, it can be constructed from a certain functor category from the syntactic category of the theory of monoids, as functors into Set from this category are models of monoids in Set, and thus monoid objects internal to Set.
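To write my monoid example in symbols (under the usual conventions; here $\mathcal{T}_{\mathrm{Mon}}$ denotes the Lawvere theory of monoids and $\mathrm{FP}$ means finite-product-preserving functors):

$$\mathbf{Mon} \;\simeq\; \mathrm{FP}(\mathcal{T}_{\mathrm{Mon}},\, \mathbf{Set}), \qquad \mathcal{T}_{\mathrm{Mon}} \;\simeq\; \{\,\text{free monoids } F(n) : n \in \mathbb{N}\,\}^{\mathrm{op}}.$$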
What would be extremely meta is if there's actually such a thing as a "theory of type theories" or "theory of logical systems" that works in a similar way. That is, a type theory of a certain class can be thought of as a model of the theory of type theories of that particular class, and so also a functor from Syn(theory of type theories of that class) into, say, Set. Then, the appropriate functor category would give the category of type theories of that class. Even if this exists, I'm not sure, however, how to extend this to bicategories (IE, to find a 2-category of type theories), since functor categories between 1-categories are also 1-categories.

view this post on Zulip Morgan Rogers (he/him) (Feb 06 2024 at 15:44):

John Onstead said:

What would be extremely meta is if there's actually such a thing as a "theory of type theories" or "theory of logical systems" that works in a similar way.

You'll be pleased to hear that such things exist :cowboy: I'll send you to [[doctrine]] for the time being, but I'm sure there will be plenty of folks here willing to explain the nuances in more depth than I am able to.

view this post on Zulip Morgan Rogers (he/him) (Feb 06 2024 at 18:39):

Following the above, @Mike Shulman linked to a blog post, which @Ivan Di Liberti objected to; that discussion has been moved here. The end result was:
Ivan Di Liberti said:

I do not think there is any way to improve the situation. It's very important though to clarify that such theory does not exist (!) at the moment (which is what I am doing) and that it is at best in the process of being developed.

view this post on Zulip Mike Shulman (Feb 06 2024 at 21:07):

More precisely, some versions of the theory do exist, and others are in the process of being developed. The blog post that I linked to is a discussion of what I believed at the time (2018) that the general picture should look like, and a summary of the partial work we had done on some versions of it and what we had in mind for future steps (some of which have since been obviated by later work of others').

view this post on Zulip John Onstead (Feb 06 2024 at 22:22):

I think it's very interesting that my question of having a "theory of theories" leads to research that is still being conducted! In any case, what I was mainly interested in with my question was about how one would go about "building" a type theory or logical system. For instance, I brought up the theory of monoids because a model of it in Set tells you to take an object M, an operation M x M -> M, and commutative diagrams in Set representing associativity and identity of this monoid operation. I was wondering what "instructions" a theory of type theories or logical systems would give.
For instance, I was imagining a type theory to consist of a set Term of all terms of that type theory, along with a monomorphism Type -> Powerset(Term), satisfying some properties. The set Type is the set of types of the type theory, and is a subset of the powerset where elements are subsets of Term because you can imagine a type in the type theory to be represented by some subset of Term containing all terms of that type. The properties you would want this to satisfy would then depend on the kind of type theory, though there should be some minimum requirements such as non-overlapping (IE, all subsets of Term in Type must have empty intersections, so no term has multiple types). In addition, if one wants to add on the structure of a product type, sum type, or function type, the set Type must be closed under the operations of forming the cartesian product, disjoint union, and internal hom/function space between the subsets in Type, viewed as objects in the category. For instance, for the product type, if I have two subsets of Term, A and B, that are elements of Type, I want to make sure that A x B is also in Type. Of course this is just my personal conjecture for how I think a type theory could be constructed and I'm likely very wrong about it, but I wanted to share for illustrative purposes!

view this post on Zulip John Onstead (Feb 09 2024 at 15:13):

Here's a question I still have after going through that material... Given categories C and D, one can tell if C is a category of algebras for some monad on D if there exists a monadic adjunction between them. Does a similar principle exist for models- that is, given C and D, can one tell if C is a category of models of some theory in D, and if so, what that theory is?

view this post on Zulip Kevin Arlin (Feb 09 2024 at 18:04):

These are to a great extent the same question--for an appropriate notion of "theory", categories of algebras of a monad and categories of models of a theory are the same thing! So you'll start in all but a very strange set of cases by looking for a monadic adjunction again. Then, if you want a finitary theory, you'll want the monadic functor to preserve filtered colimits. And to actually find the theory--in that case, it can be given as the opposite of the category of finitely presentable objects.
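In symbols, reading "theory" in the finite-limit sense and writing $\mathcal{A}_{\mathrm{fp}}$ for the full subcategory of finitely presentable objects, the recipe is

$$\mathcal{A} \;\simeq\; \mathrm{Lex}\big((\mathcal{A}_{\mathrm{fp}})^{\mathrm{op}},\, \mathbf{Set}\big);$$

if you want a Lawvere theory instead, one can take the opposite of the full subcategory on the free algebras on finite sets.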

view this post on Zulip John Onstead (Feb 09 2024 at 19:40):

@Kevin Arlin Interesting; I know for sure that models of Lawvere theories/algebraic theories (with "small signature") are equivalent to algebras over monads, though I am not sure if this holds in any more generality beyond Lawvere theories. There exist "essentially algebraic" and "generalized algebraic" theories, for instance including a "generalized algebraic theory of categories" whose models in certain categories C are internal categories in C, and so would constitute a small category in Set. In addition, there also exist non-algebraic theories like the theory of fields. Neither of these examples corresponds to a monad over Set. Given just the category of fields and Set, would there be any way of telling that the category of fields represents models of the theory of fields modeled in Set?

view this post on Zulip Kevin Arlin (Feb 09 2024 at 19:58):

A category such as that of fields is accessible, and in general any accessible category is the category of models of a first-order theory in a certain infinitary language (and conversely.) Categories of models of essentially algebraic theories and of generalized algebraic theories (set-valued models, at least) are exactly the locally presentable categories. Neither of these observations generalizes completely smoothly to non-set-valued models, although it's true that models of an essentially algebraic theory in any locally presentable category are again locally presentable and similarly for first-order theories, if I understand correctly.

view this post on Zulip Mike Shulman (Feb 09 2024 at 20:26):

I'm not quite sure what it means to talk about models of an arbitrary first-order theory in an arbitrary locally presentable category. Usually you need the category to have extra structure in order to interpret operations like $\vee$, $\exists$, and $\forall$.

view this post on Zulip Kevin Arlin (Feb 09 2024 at 21:01):

Mmm, I was thinking of models of sketches. I don’t know the accessible categories book too well but I was trying to recall the theorem from there about accessible categories as models of theories in, I think, $L_{\infty,\infty}$. Maybe those theories aren’t using arbitrary first order syntax?

view this post on Zulip Mike Shulman (Feb 09 2024 at 21:04):

Ah, yes, that's true you can talk about models of a limit-colimit sketch in an arbitrary category. I don't have the book at hand right now, but my guess is that they have a particular class of first-order theories that correspond to sketches in some specific way, which happens to coincide with the general method of interpreting arbitrary first-order theories if the category in question supports the latter.

view this post on Zulip John Onstead (Feb 10 2024 at 15:41):

I think this makes sense; after a little more reflection I realized this also seems to connect with another topic brought up above about Gabriel-Ulmer duality. In that case, one has an equivalence between the category of finitely complete categories (interpreted as syntactic categories of essentially algebraic theories) and the category of locally finitely presentable categories (interpreted as the categories of models of those essentially algebraic theories). So if I start with some locally finitely presentable category, I can travel back along this equivalence and find the theory whose models are the objects in that locally finitely presentable category. There appears to be a more general analogue of this between geometric categories (interpreted as syntactic categories of geometric theories) and accessible categories (interpreted as categories of models of those geometric theories), though I'm not sure if this is proven to be an equivalence or not.

view this post on Zulip John Onstead (Feb 10 2024 at 15:44):

As for why I think this is useful, I wanted to know if there was any connection between stuff, structure, property and categorical logic. For instance, any category with a faithful functor to Set is a concrete category, and objects of this category can thus be considered sets with extra structure. The question then becomes trying to determine what that "extra structure" actually looks like and entails internal to Set. Realizing a concrete category as a category of models in Set of some theory (I guess, provided it is locally presentable or accessible) can help in answering this, though I do wonder if there's any deeper connections or insights into this!

view this post on Zulip Patrick Nicodemus (Feb 12 2024 at 13:37):

Nathanael Arkor said:

Would you say that algebraic theories are semantic, then? My impression is that the general consensus is that algebraic theories are syntactic (albeit presentation-free) notions.

I think I agree with what Mike has said so far. In my opinion the syntax of a theory should in most cases be presentable as something like an inductive type, and this definition excludes quotienting out by the equational rules of the algebraic theory from being part of the syntax. If (Omega, E) is an algebraic theory (Omega the function symbols, E the equations), then Omega presents a monad which I would call syntactic, but the quotient Omega/E is not syntactic: I cannot think of equivalence classes of terms under an arbitrary equivalence relation as being syntactic.
With higher inductive types there is some gray area here, but I still think of higher inductive types as incorporating both the generators and the relations; after all, they are homotopy colimits rather than true colimits. A presentation of an algebraic structure by generators and relations is not the algebraic structure itself. The generators and relations would be the syntactic objects, and the resulting quotient is its semantics. The syntax must be tractable to work with. For example, you can define functions on trees and other inductive types by structural recursion.

view this post on Zulip Patrick Nicodemus (Feb 12 2024 at 13:40):

Speaking more loosely, they are syntactic: for example, we can speak of a syntactic model of a theory, defined as the free widget on the syntax tree modded out by the equations. But this is a model defined using the syntax; it is inappropriate imo to say that this model precisely is the syntax, or that this is somehow the correct definition of what it means to give syntax for an algebraic theory.

view this post on Zulip Patrick Nicodemus (Feb 12 2024 at 14:05):

I should clarify I'm not opposed to the term syntax-semantics duality as it's used for example in AG to refer to polynomial rings and varieties even though rings may be quotiented by ideals. It's informal language after all and cannot be expected to be used consistently. I just want to stress that flattening the relations which define the quotient inherently abstracts away from those relations and makes it harder to study them.

view this post on Zulip John Onstead (Feb 27 2024 at 15:29):

As mentioned above I am interested in establishing a clear correspondence between Stuff, Structure, Property (SSP) and Categorical Logic. I gave it some more thought recently but am having massive trouble in getting it to work the way I want it, so any advice would be helpful. Given a forgetful functor U: C -> D, objects in C can be seen as "D objects with extra structure". What I need to know is, for any given forgetful functor, what this "extra structure" actually IS- in fact, I want to have an explicit diagrammatic representation for this extra structure within the codomain of the forgetful functor (almost as if the forgetful functor "unpacks" the objects in the domain into diagrams in the codomain).
To do this, I used sketches because they are basically generalized diagrams, and so a model of a sketch is essentially a diagram in the target category that can involve limits. I then conjectured a canonical functor Faithful -> Sketch that sends a faithful functor (the objects of Faithful) to the sketch (the objects of Sketch) such that the category of models of that sketch in the codomain is equivalent to the domain category. For example, given the faithful functor U: Grp -> Set, it would send U to the sketch of groups (where a group object is a model of this sketch in some category) since Grp is equivalent to the category of models of that sketch in Set, thus allowing us to recover the diagrammatic definition of a group object in Set purely from the forgetful functor. However, if such a functor existed, then all concrete categories (categories with a forgetful functor to Set) would be equivalent to a category of sketch models in Set. But those are precisely the accessible categories, meaning every concrete category would be equivalent to some accessible category. I haven't seen any proofs that concrete categories are a special case of accessible category and so I am doubting this is true, but then I don't get where my reasoning went astray. Again, any help is very greatly appreciated as creating a bridge between categorical logic and SSP is a major part of my project!!

view this post on Zulip Morgan Rogers (he/him) (Feb 27 2024 at 15:58):

Two points:

  1. Your "functor" doesn't look very functorial. Even if you had a way of finding a sketch presenting your faithful functors, it's not unique (there are many possible sketches for the theory of groups) and it's not at all clear how whatever choice of morphisms between faithful functors that you choose will be mapped to morphisms of sketches (there are choices on this side too).
  2. Indeed, it is not the case that arbitrary faithful functors are forgetful functors for models of sketches. Top -> Set is the typical example (Top is not accessible). In this particular case the problem is that limits don't suffice to construct the powerset. More generally there are other sorts of "further adjoint" structure that the objects of D might be constructed from over C. I would be surprised if a complete characterization is possible, to be honest.

view this post on Zulip John Baez (Feb 27 2024 at 17:23):

It's not surprising that this fails, because sketches are nowhere near powerful enough to describe arbitrary structures on a finite set. James Dolan has thought about how to implement the stuff/structure/property distinction in logic. I don't remember all the details, but here are a couple of lessons I remember:

1) It's a lot easier to use groupoids than general categories: so, the goal becomes studying groupoids equipped with a functor to the groupoid of sets and bijections, which following nLab I'll call $\mathsf{core}(\mathsf{Set})$.

view this post on Zulip John Baez (Feb 27 2024 at 17:25):

2) We have to decide whether we'll be using classical logic or something else like constructive logic. James used classical logic; the problem is already interesting with that.

view this post on Zulip John Baez (Feb 27 2024 at 17:37):

3) Any theory in traditional first-order logic has a groupoid of models $\mathsf{G}$ which is equipped with a faithful functor $U: \mathsf{G} \to \mathsf{core}(\mathsf{Set})$. But we don't get all such $U: \mathsf{G} \to \mathsf{core}(\mathsf{Set})$ (up to equivalence I guess), because we get more by going up to infinitary logic. These are versions of logic that allow sentences with infinite conjunctions and infinite strings of quantifiers.

I believe James told me that every faithful functor $U: \mathsf{G} \to \mathsf{core}(\mathsf{Set})$ can be obtained from some theory in infinitary first-order logic, but I don't know a precise theorem to this effect. Usually people in infinitary logic put bounds on the number of conjunctions and on the number of quantifiers in a sentence, but we can't do this if we want to get all faithful $U$.

view this post on Zulip John Baez (Feb 27 2024 at 17:41):

The subject of infinitary logic is a rich one, as you can see from the linked article. It gives some nice examples, e.g. axioms that characterize finite sets, and thus the functor $U : \mathsf{core}(\mathsf{FinSet}) \to \mathsf{core}(\mathsf{Set})$. This is a nice example of a faithful functor that can't be described using a sketch.

view this post on Zulip John Baez (Feb 27 2024 at 17:44):

Another nice example they give: the groupoid of models of Peano arithmetic that are isomorphic to the standard model. (Note the "standard model" is defined relative to the set theory that we are using to define models of our axioms.)

view this post on Zulip John Baez (Feb 27 2024 at 17:52):

4) If you could work out all the details, there could be a theorem saying which kinds of theories give full and/or essentially surjective functors. However to describe functors that aren't faithful we should generalize and consider multi-sorted infinitary first-order logic. For a set $S$ of sorts, theories have models in $\mathsf{Set}^S$, which we can project down to $\mathsf{Set}$ in many ways, typically unfaithfully - since we are forgetting stuff.

view this post on Zulip John Baez (Feb 27 2024 at 17:58):

There's a lot more to say about this, and it would be great if someone has already worked a lot of it out in a rigorous way. This business sets up a nice correspondence between logic and groupoid theory.

view this post on Zulip John Baez (Feb 27 2024 at 18:21):

By the way, James said that Beth's definability theorem is important in this business.

view this post on Zulip Morgan Rogers (he/him) (Feb 27 2024 at 19:37):

Surely you need some higher order logic for topological spaces?

view this post on Zulip Kevin Arlin (Feb 27 2024 at 20:03):

No, you can just have lots of relation symbols, see 5.4 here for instance: [see John's message below]

view this post on Zulip John Onstead (Feb 27 2024 at 20:06):

@Morgan Rogers (he/him) The powerset can be given in two main ways: through a universal property (it is an exponential object in Set) or through an endofunctor/monad (the powerset monad). Is there some generalization of a sketch that allows one to select diagrams in categories that would be able to encode either of these? Perhaps some "walking universal construction" entity that can select any universally constructed object within some category with "all" of that universal property with a functor that preserves said universal property? Then maybe a topological space can be given in terms of models of one of these entities.

view this post on Zulip John Onstead (Feb 27 2024 at 20:08):

@Kevin Arlin I wish I would have seen this paper earlier, thanks for the link!

view this post on Zulip John Baez (Feb 27 2024 at 20:28):

By the way, most of that enormous URL is probably Elsevier's method of tracking us. You can also get the paper here:

https://www.sciencedirect.com/science/article/pii/0022404981901055?ref=pdf_download

view this post on Zulip John Baez (Feb 27 2024 at 20:46):

Excellent - that paper seems to rigorously tackle a lot of what I was discussing, e.g.:

By definition, a concrete category is a category of sets which are endowed with an unspecified structure. There have been some attempts to make this structure specific. For example, Blanchard [6] used Bourbaki-type structures and Kučera and Pultr [17] have determined the structure by a functor $\mathsf{Set} \to \mathsf{Set}$. Our aim is to consider concrete categories as categories of models of first-order theories. However, for these theories to be a syntactic counterpart of concrete categories, they must exceed the usual ones in the following three points: nonlogical symbols are of arbitrary arities, there may be a proper class of them and infinitary logical symbols are admitted. This language might be called "an unrestricted $L_{\infty,\infty}$". Its strength is illustrated by the fact that we may imagine any concrete category as a category consisting of models of this language.

view this post on Zulip John Baez (Feb 27 2024 at 20:52):

The notation $L_{\infty,\infty}$ is clarified (though perhaps not actually used) in that article I mentioned on Infinitary logic. There they mainly discuss $L_{\kappa,\lambda}$ for cardinals $\kappa, \lambda$ which are bounds on how big our conjunctions can be and how many quantifiers our formulas can have. We have to drop those to describe all concrete categories. And I hadn't known we also need function symbols or predicates of arbitrarily large arity!

view this post on Zulip John Baez (Feb 27 2024 at 20:54):

If one wants to study the duality between syntax and groupoids (or categories) of models without sinking into the morass of large cardinals, proper classes and such, one can also go to the other extreme and study finite groupoids.

view this post on Zulip John Onstead (Feb 27 2024 at 21:07):

@John Baez I also think it's interesting that to fully "unpack" any given arbitrary concrete category into the exact structure and properties it is describing on sets, one needs an infinitary language. Perhaps we can consider any concrete category generated by a finitary version of L to be a "nice" concrete category in this sense.

view this post on Zulip John Baez (Feb 27 2024 at 21:17):

There are certainly many "nice" structures described by finitary languages, like fields, etc. Interestingly "finite sets" is not one, thanks to the compactness theorem. But we can say what it means for a set to be finite using an infinitary language, where you can use this:

either there exists $x$ such that $\forall y$ we have $y = x$,
or there exist $x, x'$ such that $\forall y$ we have $y = x$ or $y = x'$,
or there exist $x, x', x''$ such that $\forall y$ we have $y = x$ or $y = x'$ or $y = x''$,
or...

etc.
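If you want that as one official sentence of infinitary logic, one way to write it (a countable disjunction of finitary formulas, so it lives in $L_{\omega_1,\omega}$, ignoring the empty set) is

$$\bigvee_{n \ge 1} \exists x_1 \cdots \exists x_n\ \forall y\ (y = x_1 \vee \cdots \vee y = x_n).$$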

view this post on Zulip John Onstead (Feb 27 2024 at 21:47):

The above infinitary definition of a finite set may be infinitely long, but it seems to me that one could use some form of recursion, where one can generate the next part of the definition in terms of previous parts, to "compress" it into a finite statement. In that sense it reminds me of induction, where we can prove an infinite series of statements all at once without needing to use infinite reasoning to do so, by first proving a base case and then showing that the n+1 case follows from the n case. I don't know how this works out formally however. In any case, if such "compression" does make sense, maybe there's a way to extend the definition of "nice" concrete category to those generated by those infinitary L theories which are "compressible" in some way.

view this post on Zulip Mike Shulman (Feb 27 2024 at 21:48):

John Onstead said:

The above infinitary definition of a finite set may be infinitely long, but it seems to me that one could use some form of recursion, where one can generate the next part of the definition in terms of previous parts, to "compress" it into a finite statement.

I think that's what John did when he wrote "... etc." (-;

view this post on Zulip Mike Shulman (Feb 27 2024 at 21:49):

(By which I mean that we can never actually write infinitely many statements, so whenever we talk about anything infinite, basically our only tool to do it precisely reduces to some form of induction or recursion.)

view this post on Zulip John Onstead (Feb 27 2024 at 21:58):

That makes sense, although it does make me wonder if there's some "recursive/inductive language" in which to formally write down a recursive or inductive statement. That is, such that a finite statement in this language can be shown to be equivalent to some statement (potentially infinitely long) in First Order Logic (or some variant of FOL) that acts as the "expanded out" form of that recursive statement. So basically a logic in which "etc." has a real mathematical meaning!

view this post on Zulip Mike Shulman (Feb 27 2024 at 21:59):

How about the metalanguage in which you define your infinitary language?

view this post on Zulip John Baez (Feb 27 2024 at 22:36):

John Onstead said:

The above infinitary definition of a finite set may be infinitely long, but it seems to me that one could use some form of recursion, where one can generate the next part of the definition in terms of previous parts, to "compress" it into a finite statement.

Of course a more common way to define a finite set is first to write down some axioms of set theory like ZFC in first-order logic, and in them define the natural numbers (or more precisely $\omega$), and then say a set is finite if there's a bijection between it and the set of natural numbers $\le n$ for some $n$.

All this is done in ordinary finitary first-order logic - the most common set of axioms for ZFC is infinite since it involves an "axiom schema", but that's a completely different sort of infiniteness; each axiom is finitely long.

If we look at models of the resulting axiom system "ZFC + $S$ is a finite set", where $S$ is some constant, we get a lot of models, because there are many models of set theory, and then within each one there are lots of finite sets. This leads to the phenomenon of "nonstandard finite sets".

Anyway, given problems like this, the approach using infinitary first-order logic seems like a reasonably clean way to get a perfect duality between "theories" and "concrete groupoids", so I'm not surprised Rosicky pursued it. But it would be fun to tackle this using a more category-flavored approach to logic.

view this post on Zulip John Onstead (Feb 28 2024 at 15:21):

As they are interesting constructions, I looked more into sketches, particularly in ways they can be generalized to get even more "powerful" notions of categorical diagram. I found two main ways: the Wells "forms" and the Makkai "generalized sketches". Both papers go into detail about how to define these generalized sketches, what they are useful for, what their models are, and even what the model morphisms are so that a category of models can be defined. However, they seem to stop short of actually describing any properties of these categories of models (or maybe they didn't, and I missed it since these papers can be a little dense and I miss things all the time anyways)
So, I'm wondering if anyone knows what kinds of properties that a category of models of forms and a category of models of generalized sketches (in Set) might have, maybe including something analogous to the correspondence between categories of models of normal sketches and accessible categories. Of course, accessible categories would be special cases of these types of categories, but I want to know how many other important classes of categories are subsumed by these, since that will help me determine just how "powerful" these constructions are.

view this post on Zulip John Onstead (Mar 12 2024 at 15:30):

Both the paper on concrete categories attached above and the book titled "Locally Presentable and Accessible Categories" (which discusses the connection between sketches and accessible categories) have Jiří Rosický as an author, so I figured that if anyone would know the answer to my (numerous) questions on these subjects it would be him! So, I decided to reach out to him with a question about categories of models of generalized sketches. However, that was around five days ago and I haven't heard back yet. Does anyone have any advice about what I should do? I really want to find the answer to this question, but I searched for hours in the literature to no avail, nobody else I've asked knows, and there's no way I could do it myself since I wouldn't even know where to start. Any advice on how to resolve this situation is greatly appreciated!

view this post on Zulip Morgan Rogers (he/him) (Mar 12 2024 at 16:28):

You should know that Jiří Rosický is almost 80 years old, so understandably he might not check his email regularly!
In response to your "if anyone knows"... to understand them, you need to find at least one example of a generalized sketch whose models are not sketchable (in other words, they don't form an accessible category). I personally don't know of any examples, but you're more likely to get responses here if you make it clear (visually, I mean) what you're asking and include sufficient information (e.g. a definition of generalized sketch) for the question to be self-contained. Only a very motivated person will go to the trouble of chasing up definitions even given a link to a paper containing them :sweat_smile: .

view this post on Zulip John Onstead (Mar 12 2024 at 20:33):

@Morgan Rogers (he/him) Thanks for the advice!
A common emphasis in these generalized sketch papers is that sketches cannot describe power/exponential objects in cartesian closed categories, with these generalized sketches constructed so that they can express power objects. This would, for instance, include the powerset in Set, which is itself necessary to define a topological space (since we need to identify a set of subsets). Thus, topological spaces are a concept that generalized sketches can express. It is known that Top is not accessible, therefore topological spaces give a good example of a generalized sketch whose models are not sketchable.

view this post on Zulip Kevin Carlson (aka Arlin) (Mar 12 2024 at 20:35):

Eh, spaces are sketchable by a large sketch, though. So it's not the most convincing possible example.

view this post on Zulip Morgan Rogers (he/him) (Mar 12 2024 at 20:39):

@Kevin Carlson (aka Arlin) but which categories can be presented as models of these large sketches? Are they distinct from those presentable using the generalization that John is pointing to?

view this post on Zulip John Onstead (Mar 12 2024 at 21:52):

I am not sure what the analogue of accessible categories is for large sketches. The closest I could find is the analogue of locally presentable categories for large limit sketches in the paper "Large Limit Sketches and Topological Space Objects" by Martin Brandenburg. In fact, I believe it is this exact paper that Kevin is citing in the claim that spaces are sketchable by a large sketch, since this is one of the results of the paper. In this work the category of models of a large realized limit sketch is a "strongly compact category". I guess you could just define a name for categories sketchable by any large sketch (maybe we could call them "super-accessible categories") but this is just a label and wouldn't tell us anything about the properties these categories have (that would be needed to prove if a given category is sketchable).