Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: functors between categories of internalized structures


view this post on Zulip David Egolf (Jan 28 2025 at 20:36):

If we have a functor between two categories of structures, does that translate to some corresponding functor between categories of internalized versions of those structures?

For example, there's a "free" functor F:\mathsf{Set} \to \mathsf{Group} that makes groups from sets. Does this induce a corresponding functor C(\mathsf{Set}) \to C(\mathsf{Group})? Here C(\mathsf{Group}) is supposed to denote the category of groups internal to C, and C(\mathsf{Set}) the category of sets internal to C (which I am hoping is equivalent to C). Of course, I am assuming that C is a category that supports internalization of these structures.

(I don't know when all the internalizations of a given structure actually organize to form a category! I'm hoping it happens commonly.)

If we often do get an induced functor between categories of internal structures, I'm specifically curious:

  1. when we can induce a full and faithful functor between categories of internal structures from a starting full and faithful functor
  2. whether we can induce an adjunction between categories of internal structures when we start with an adjunction between the original categories of structures

view this post on Zulip David Egolf (Jan 28 2025 at 20:40):

I would like to better understand how categories of internalized structures relate to one another. Any thoughts in that direction would be most welcome!

view this post on Zulip John Baez (Jan 28 2025 at 21:10):

If the internalized structure is defined using finite products, like the concept of "group", then any functor that preserves products will preserve that internalized structure. This is a straightforward thing to check, and it's a special case of the general rule "if blee is defined using only blah, any map that preserves blah preserves blee".
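To make that concrete, here is a sketch (standard textbook definitions, not quoted from the thread) of the data being transported:

```latex
% A group object in a category C with finite products consists of an object G with
%   m : G \times G \to G   (multiplication)
%   e : 1 \to G            (unit)
%   i : G \to G            (inverse)
% subject to commuting diagrams expressing the group axioms, e.g. associativity:
%   m \circ (m \times \mathrm{id}_G) = m \circ (\mathrm{id}_G \times m).
% If F : C \to D preserves finite products, then via the canonical isomorphisms
%   F(G \times G) \cong FG \times FG, \qquad F1 \cong 1,
% the data (FG, Fm, Fe, Fi) is a group object in D: F applied to each
% commuting diagram is again a commuting diagram.
```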

view this post on Zulip John Baez (Jan 28 2025 at 21:14):

For example any right adjoint preserves products, so the forgetful functor U: \mathsf{Grp} \to \mathsf{Set} maps groups internal to \mathsf{Grp} to groups internal to \mathsf{Set}. A group internal to \mathsf{Grp} is an abelian group - it's a very good exercise to check this. A group internal to \mathsf{Set} is just a group. So what we're seeing is that U maps abelian groups to groups. This is no surprise: any abelian group has an underlying group! But it may be surprising that we're getting this as a spinoff of the fact that any group has an underlying set.
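The exercise mentioned here is an instance of the Eckmann–Hilton argument; a sketch, in my own notation:

```latex
% A group internal to Grp is a group (G, \cdot) equipped with a second
% multiplication * : G \times G \to G that is a homomorphism for \cdot
% and shares its unit e. The homomorphism condition is the interchange law
%   (a \cdot c) * (b \cdot d) = (a * b) \cdot (c * d).
% Then
%   a * b = (a \cdot e) * (e \cdot b) = (a * e) \cdot (e * b) = a \cdot b,
%   a * b = (e \cdot a) * (b \cdot e) = (e * b) \cdot (a * e) = b \cdot a,
% so the two operations coincide and are commutative: G is abelian.
```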

view this post on Zulip John Baez (Jan 28 2025 at 21:19):

On the other hand, it seems unlikely that the left adjoint F: \mathsf{Set} \to \mathsf{Grp} sends groups internal to \mathsf{Set} to groups internal to \mathsf{Grp}, because left adjoints preserve colimits, not usually limits - and indeed you can check 'by hand' that F does not preserve products, e.g.

F(1 \times 1) \cong F(1) \cong \mathbb{Z}

while

F(1) \times F(1) \cong \mathbb{Z} \times \mathbb{Z} \cong \mathbb{Z}^2

So it should be very easy to see that F does not map group objects in \mathsf{Set} (i.e. groups) to group objects in \mathsf{Grp} (i.e. abelian groups).

view this post on Zulip David Egolf (Jan 28 2025 at 21:26):

John Baez said:

For example any right adjoint preserves products, so the forgetful functor U: \mathsf{Grp} \to \mathsf{Set} maps groups internal to \mathsf{Grp} to groups internal to \mathsf{Set}. A group internal to \mathsf{Grp} is an abelian group - it's a very good exercise to check this. A group internal to \mathsf{Set} is just a group. So what we're seeing is that U maps abelian groups to groups. This is no surprise: any abelian group has an underlying group! But it may be surprising that we're getting this as a spinoff of the fact that any group has an underlying set.

Thanks for your comment! That makes sense. We can then expect that functors A \to B preserving the structure needed to define an internalized S will send S-structures in A to S-structures in B.

And since right adjoints preserve limits, if our internalized S-structures can be defined using only limits, then we can expect a right adjoint to preserve internalized S-structures.

view this post on Zulip John Baez (Jan 28 2025 at 21:30):

But now, rereading your post, I see I misunderstood your question!

David Egolf said:

For example, there's a "free" functor F:\mathsf{Set} \to \mathsf{Group} that makes groups from sets. Does this induce a corresponding functor C(\mathsf{Set}) \to C(\mathsf{Group})? Here C(\mathsf{Group}) is supposed to denote the category of groups internal to C, and C(\mathsf{Set}) the category of sets internal to C (which I am hoping is equivalent to C). Of course, I am assuming that C is a category that supports internalization of these structures.

Oh, now that I understand it, this is a much better question! The answer is yes if C not only supports internalization of these two structures - sets and groups - but also the way we construct a free group from a set.

view this post on Zulip David Egolf (Jan 28 2025 at 21:34):

Ah! Interesting! That makes a lot of sense!

view this post on Zulip John Baez (Jan 28 2025 at 21:35):

The moral: we can take any mathematical construction we're interested in, done in some particular category (often but not always the category of sets), and determine what features of that category we actually need to carry out the construction. We can thus determine a kind of category in which the construction works!

A 'kind of category' is called a doctrine. For example, we just ran into three doctrines:

Each doctrine is actually a 2-category consisting of categories with the chosen bells and whistles, functors that preserve those extra bells and whistles, and natural transformations that get along with those bells and whistles.

view this post on Zulip David Egolf (Jan 28 2025 at 21:41):

I see! I suppose it could be tricky to find the doctrine that we need to work in to apply some particular functor F to internal structures. We'll want to work in a doctrine where we can do the following:

  1. express the needed structure (e.g. finite products)
  2. express what our functor F does

In the example above, figuring out how we can do (2) was manageable but in general it might be rather difficult!

view this post on Zulip John Baez (Jan 28 2025 at 21:41):

David Egolf said:

If we often do get an induced functor between categories of internal structures, I'm specifically curious:

  1. when we can induce a full and faithful functor between categories of internal structures from a starting full and faithful functor
  2. whether we can induce an adjunction between categories of internal structures when we start with an adjunction between the original categories of structures

People may have proved theorems about this. One could be very ambitious and try to prove such theorems for a large class of doctrines, or less ambitious and try to prove them for a specific doctrine. One could be very optimistic and try to prove them for all structures that can be expressed in a given doctrine, or less optimistic and accept that extra hypotheses may be required.

I'm too old and tired to tackle questions like this in maximum generality: it takes time to get up to speed for things like that. Luckily, it's often pretty easy to answer them in particular cases: i.e., particular structures formulated in some particular doctrine. And if you're a practical fellow, you're often likely to run into these questions in particular cases.

view this post on Zulip David Egolf (Jan 28 2025 at 21:45):

I got to thinking about this in the context of quantales (and quantaloids). So maybe at some point I'll be able to think up a more specific question in the direction of (1) and (2), using ideas from that setting!

view this post on Zulip David Egolf (Jan 28 2025 at 21:46):

But for now, I think you've given me some nice conceptual answers that will help me organize these ideas in my head going forward. Thanks again!

view this post on Zulip John Baez (Jan 28 2025 at 21:59):

Sure! Here's one good habit, though it takes a lot of work: whenever you learn a mathematical construction that works in \mathsf{Set}, figure out which doctrine it works in. This is at least useful for fairly simple concepts like graph, group, monoid, "free monoid on a set", etc.

view this post on Zulip John Baez (Jan 28 2025 at 22:03):

I call this approach "concept and context" - you don't just learn a concept, you also learn the general context in which that concept makes sense. It's yet another Jedi math trick.

view this post on Zulip David Egolf (Jan 28 2025 at 22:46):

Great! That sounds like a useful habit I can try to build towards.

view this post on Zulip John Onstead (Jan 28 2025 at 23:14):

John Baez said:

The moral: we can take any mathematical construction we're interested in, done in some particular category (often but not always the category of sets), and determine what features of that category we actually need to carry out the construction. We can thus determine a kind of category in which the construction works!

John Baez said:

I call this approach "concept and context" - you don't just learn a concept, you also learn the general context in which that concept makes sense. It's yet another Jedi math trick.

This is really interesting, as this is a concept I think I came to understand somewhat independently in my exploration of spaces, when I learned you could either generalize spaces in a concrete way (finding extensions of Top) or in an abstract way (finding a category with similar properties or structure to Top). The way I thought about this was in terms of categorification. To use your language, in a way, "context" is a categorification of "concept". Here's how I thought of it: a "concept" can be thought of as a "1-model" - a Set-valued functor preserving some structure or property (like how a group is a finite-product-preserving functor from the Lawvere theory of groups). Meanwhile, a "context" can be thought of as the categorification of that situation - a "2-model", which is a Cat-valued functor (since Cat is the categorification of Set) preserving some structure or property.

view this post on Zulip John Onstead (Jan 28 2025 at 23:16):

John Baez said:

So I'd say the 'free internal group object on an object' makes sense in any category with finite products and countable colimits, where binary products distribute over countable colimits.

What's interesting about this is that the notion of "free" something is equivalent to monadicity. This should mean that categories with finite products and countable colimits, where binary products distribute over countable colimits, are (or at least are a subset of) categories C where \mathrm{Grp}(C) is monadic over C! This might give another way to characterize categories where the notion of a "free group" makes sense - it's a category with an analogue of the "free group monad" from Set!

view this post on Zulip Kevin Carlson (Jan 29 2025 at 00:02):

I wouldn't say that the notion of "free" something is equivalent to monadicity. For there's always the forgetful functor U:\mathrm{Grp}(C)\to C, and "free groups exist in C" is most naturally interpreted as saying that U has a left adjoint. But just because it has a left adjoint doesn't mean U is monadic! For instance, it's at least pretty reasonable to say that discrete topological spaces are "free" topological spaces, but even internally to \mathrm{Set}, topological spaces are not monadic.

view this post on Zulip Kevin Carlson (Jan 29 2025 at 00:05):

This raises the question of whether there could be a C in which U has a left adjoint while failing to be monadic! The monadicity theorem says that the other necessary assumptions are that U reflects isomorphisms (which is the problem in the case of spaces and is fine in the case of internal equational algebraic structures like groups) and that U creates U-split coequalizers. It's not obvious to me in what generality the latter is true for internal groups, although I would guess it's always true for internal models of any Lawvere theory.

view this post on Zulip John Baez (Jan 29 2025 at 00:09):

I like what you said, @John Onstead.

And now that you say it that way: @Todd Trimble has proved a quite general theorem along these lines. Let me state a specialization of his result, which might easily have been proved by someone else earlier. Suppose T is any Lawvere theory and let \mathsf{Mod}_T(C) be the category of models of T in a category C with finite products: so, objects of \mathsf{Mod}_T(C) are finite-product-preserving functors f: T \to C, and morphisms are natural transformations between these.

Theorem. Suppose C has all colimits, and finite products distributing over those colimits. Then for any Lawvere theory T, the forgetful functor \mathsf{Mod}_T(C) \to C is monadic.

(I suspect countable colimits would suffice. We can get my earlier example by taking T to be the theory of groups.)

If you look at Todd's proof, you may see why I asked him to prove this for me. It uses a battery of techniques like the [[crude monadicity theorem]], coends and such.

view this post on Zulip Kevin Carlson (Jan 29 2025 at 00:10):

The assumption on colimits is just to allow a nice construction of the free functor, right? So for the creation of split coequalizers or however one wants to prove the monadicity, you probably don't need any colimit assumptions on C?

view this post on Zulip John Baez (Jan 29 2025 at 00:15):

I'm not the one to ask, Kevin: I'm a humble engineer who uses theorems of this sort to do things like study electrical circuits. (I used it in the appendix to this paper.) @Todd Trimble could answer that question.

view this post on Zulip John Baez (Jan 29 2025 at 00:16):

But definitely the colimits are needed to get the free functor!

view this post on Zulip Kevin Carlson (Jan 29 2025 at 00:16):

Haha, an engineer, now, are you? But yes, that wasn't necessarily pointed at you specifically.

view this post on Zulip Kevin Carlson (Jan 29 2025 at 00:16):

Ahh, I see, he assumed C has reflexive coequalizers and then, assuming products distribute over them, proved that models of the theory are closed in functors from the theory under reflexive coequalizers, which gives crude monadicity. That's good enough for me, although I'm vaguely curious whether precise monadicity can be applied in case C doesn't satisfy those conditions!

view this post on Zulip Kevin Carlson (Jan 29 2025 at 00:18):

John Baez said:

But definitely the colimits are needed to get the free functor!

At least some colimits are going to be needed but using Kelly's scary paper on transfinite constructions you probably don't need the products to distribute over them, which is kind of close to assuming Cartesian closedness.

view this post on Zulip John Baez (Jan 29 2025 at 00:21):

The great thing about being a mathematical physicist is that when I'm talking to a mathematician I can pretend I'm a physicist and when I'm talking to a physicist I can pretend I'm a mathematician. But I needed Todd for this paper because I was studying electrical circuits and that forced me to study PROPs and that forced me to study the multi-sorted Lawvere theory whose models are PROPs... at which point I went running for help to Todd, feeling like an engineer out of his depth!

view this post on Zulip John Baez (Jan 29 2025 at 00:27):

I only needed the case C = \mathsf{Set}, so all the questions you're asking were not on my mind. My interest was in generalizing from Lawvere theories to 'multi-sorted Lawvere theories'.

view this post on Zulip Nathanael Arkor (Jan 29 2025 at 01:46):

Kevin Carlson said:

I wouldn't say that the notion of "free" something is equivalent to monadicity. For there's always the forgetful functor U:\mathrm{Grp}(C)\to C, and "free groups exist in C" is most naturally interpreted as saying that U has a left adjoint. But just because it has a left adjoint doesn't mean U is monadic!

I believe it is true that U is monadic as soon as it has a left adjoint. For instance, it ought to follow from Proposition 6.4 of Fiore–Hur's On the construction of free algebras for equational systems, though one can also just check the monadicity theorem by hand.

view this post on Zulip Kevin Carlson (Jan 29 2025 at 02:22):

Well, it’s already checked pretty by-hand in Fiore-Hur, but I see at least roughly how to apply it for plain algebraic theories.

view this post on Zulip John Onstead (Jan 29 2025 at 04:26):

This is a very interesting conversation, but I do agree I made a mistake when I said "free things are equivalent to monadicity". My thought process here was specifically in the context where one had a free algebra and could do things with it like quotienting out by a relation via the paradigm of generators and relations. It had slipped my mind that the term "free" has broader use beyond universal algebra to more generally refer to free functors as adjoints to forgetful functors. Though it's really cool to learn about the various monadicity theorems and how they apply in these contexts!

view this post on Zulip Morgan Rogers (he/him) (Jan 29 2025 at 12:53):

I'm surprised no one has yet mentioned categorical logic here in response to David's original question. Not only does the categorical structure determine what types of structures you can reasonably interpret/construct in a given category, it also determines which results can be lifted from "classical" cases such as the category of sets. That is, if you examine the fragment of logic used to prove a property (such as fullness or faithfulness of a functor), that argument will be valid internally to any category sharing sufficient structure with the category of sets to interpret that fragment of logic.

The most frequent invocation of this type of argument is in topos theory: there is a meta-theorem stating that if you can prove something with a set-theoretic argument that doesn't require complementation (that is, an intuitionistic argument), then the same will be true in any Grothendieck topos (where "object of the topos" is substituted for any instance of "set"), and in any elementary topos if you only used finitary arguments.

view this post on Zulip Morgan Rogers (he/him) (Jan 29 2025 at 12:55):

So for instance, the construction of the free-forgetful adjunction between internal groups and underlying objects goes through in any Grothendieck topos (in fact, it works in any elementary topos with a natural number object, which is needed to perform inductive constructions with a countable number of steps).

view this post on Zulip Morgan Rogers (he/him) (Jan 29 2025 at 12:56):

My statements are fairly informal because the formal version would be very long (as one should expect for such a powerful theorem), but it's good to retain that this is what people are appealing to when they work in the "internal language" of a category in a particular class.

view this post on Zulip Mike Shulman (Jan 29 2025 at 16:18):

I think John said more or less that in his comment about doctrines, but without using the word "logic". (-:

view this post on Zulip Morgan Rogers (he/him) (Jan 29 2025 at 16:28):

No, John only talked about the categorical structure needed to build the objects in question (groups etc), I was pointing out that things we can prove about those objects can themselves be interpreted in suitable doctrines, which was one of the things David Egolf talked about, and for that it isn't enough to just talk about doctrines: you need to know which doctrine corresponds to a given fragment of logic.

view this post on Zulip Kevin Carlson (Jan 29 2025 at 17:34):

I’ve never quite thought about doing a construction like the free group internally to a topos with natural numbers object. I can handle the free monoid on A as an initial algebra of X\mapsto 1+AX, and I think I know you can construct such initial algebras using the standard filtered colimit indexed by the internal naturals, but I wouldn’t actually know what to do for something like groups, whose monad isn’t polynomial. Can you give a hint of how that works without having to say too much?
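In Set one can approximate this initial-algebra construction by iterating the endofunctor starting from the empty set; a small illustrative sketch in Python (my own encoding of words as tuples, not part of the thread):

```python
from itertools import product

# A hypothetical finite alphabet; words over it are encoded as tuples.
A = {'a', 'b'}

def step(X):
    """One application of X -> 1 + A*X: the empty word (the '1' summand)
    together with a letter prepended to each word built so far."""
    return {()} | {(a,) + w for a in A for w in X}

# Iterating from the empty set, stage n contains exactly the words of
# length < n; the union of all stages is the free monoid (finite words).
X = set()
for _ in range(4):
    X = step(X)

# Stage 4 holds all words of length at most 3 over A.
assert X == {w for n in range(4) for w in product(sorted(A), repeat=n)}
```

The filtered colimit over the internal naturals plays the role of this increasing union.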

view this post on Zulip Kevin Carlson (Jan 29 2025 at 17:35):

(I hope we’re not hijacking your thread, @David Egolf! Happy to export the stuff since you last talked if you prefer.)

view this post on Zulip David Egolf (Jan 29 2025 at 17:56):

Thanks @Morgan Rogers (he/him) for your comment talking about the relevance of the fragment of logic that we have available. I've been trying to read a bit in "Theories, Sites, Toposes" by Caramello and so some of what you say is ringing a bell for me! That book talks about different fragments of first-order logic like "geometric logic" or "regular logic", which require having different rules (e.g. "finite conjunction"). And then you can do that logic (I think) inside corresponding categories: you can do geometric logic in geometric categories, and regular logic in regular categories, and so on.

So then it makes sense that if the definition of our original functor of interest makes use of some fragment of logic, and if we hope to induce an analogous functor in a different setting (such as categories of internal structures), then we probably need this fragment of logic to be available in this other setting.

And this is also relevant for trying to prove that an induced functor satisfies some property. If we need some fragment of logic to do that for our original functor, then we better have that fragment of logic available!

view this post on Zulip David Egolf (Jan 29 2025 at 17:57):

I guess a related question then is: to what extent does a category of internal S-structures in a category C inherit "features" of the logic from C? (For example, what inherited logical "features" do we get in the category of groups internal to some topos?)

view this post on Zulip David Egolf (Jan 29 2025 at 17:59):

If we think of an internal S-structure in C as a kind of functor (to C from some kind of "syntactical" category for S?), then we can hope that the category of S-structures in C will inherit limits and colimits from C. That should result in at least some part of C's fragment of logic getting inherited, I hope! But this is just an intuitive guess.

view this post on Zulip David Egolf (Jan 29 2025 at 18:07):

Kevin Carlson said:

(I hope we’re not hijacking your thread, David Egolf! Happy to export the stuff since you last talked if you prefer.)

No, this is all interesting stuff! I'm only understanding part of what people are saying, but that's fine - I can always come back at a later time when I'm older and wiser. :big_smile:

view this post on Zulip David Egolf (Jan 29 2025 at 18:07):

And in general I'm always glad when a thread I start helps spark discussion, even if I don't fully understand that discussion.

view this post on Zulip Kevin Carlson (Jan 29 2025 at 18:54):

David Egolf said:

If we think of an internal S-structure in C as a kind of functor (to C from some kind of "syntactical" category for S?), then we can hope that the category of S-structures in C will inherit limits and colimits from C. That should result in at least some part of C's fragment of logic getting inherited, I hope! But this is just an intuitive guess.

It’s not quite that simple, because for interesting structures the functors producing models will need to preserve certain limits, and perhaps even some colimits. This means limits and colimits aren’t directly inherited, as we see even for the case of groups in the category of sets. And in general very much of the internal logic is destroyed by passing to models of a theory - categories of models of algebraic theories are basically never toposes, even though Set is. There are results like "categories monadic over toposes are Barr exact" (at least, I’m pretty sure this is true over all toposes…) but they seem to have to be proved case by case. I don’t think there’s a general machine that takes in a fragment of logic and a doctrine and says what the internal logic of models of a theory from that doctrine in a category with that internal logic will be.

view this post on Zulip David Egolf (Jan 29 2025 at 19:16):

That makes sense! Thanks for pointing out that the functors producing models need to preserve certain structure. So the category of models of some theory isn't just a usual functor category, and so it in general won't inherit limits/colimits pointwise.

view this post on Zulip Mike Shulman (Jan 29 2025 at 19:32):

Morgan Rogers (he/him) said:

I was pointing out that things we can prove about those objects can themselves be interpreted in suitable doctrines, which was one of the things David Egolf talked about, and for that it isn't enough to just talk about doctrines: you need to know which doctrine corresponds to a given fragment of logic.

Or else phrase your proofs diagrammatically.

view this post on Zulip Mike Shulman (Jan 29 2025 at 19:47):

Kevin Carlson said:

I’ve never quite thought about doing a construction like the free group internally to a topos with natural numbers object. I can handle the free monoid on A as an initial algebra of X\mapsto 1+AX, and I think I know you can construct such initial algebras using the standard filtered colimit indexed by the internal naturals, but I wouldn’t actually know what to do for something like groups, whose monad isn’t polynomial. Can you give a hint of how that works without having to say too much?

Using initial algebras for polynomial monads, you can construct free algebras for any algebraic theory with finitely many finitary operations and without axioms. Then, if you also have finitely many axioms, you can take quotients corresponding to each one. This works in the finitary case because you only have to make finitely many choices at a time to equip the quotient with algebra structure. One reference is D5.3 of Sketches of an Elephant, which cites several other papers including Johnstone-Wraith "Algebraic theories in toposes", Rosebrugh "On algebras defined by operations and equations in a topos", and Schumacher "Absolutely free algebras in a topos containing an infinite object".
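As a toy illustration of this "free structure, then quotient by the axioms" recipe in Set (my ad hoc encoding, not the internal construction): words over generators and formal inverses, normalized by cancelling adjacent inverse pairs, give reduced words, i.e. the free group.

```python
def reduce_word(word):
    """Quotient by the group axioms: cancel every adjacent pair x, x^-1.
    Lowercase letters are generators; the matching uppercase letter
    stands for the formal inverse (an arbitrary encoding choice)."""
    out = []
    for x in word:
        if out and out[-1] == x.swapcase():
            out.pop()        # x cancels the preceding x^-1 (or vice versa)
        else:
            out.append(x)
    return tuple(out)

def multiply(u, v):
    # Multiplication in the free group: concatenate, then reduce.
    return reduce_word(u + v)

assert reduce_word(('a', 'A', 'b')) == ('b',)
assert multiply(('a', 'b'), ('B', 'A')) == ()   # (ab)(b^-1 a^-1) = e
```

The stack-based scan fully reduces a word in one pass, which is the finitary flavor of quotienting that the references above make precise internally.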

view this post on Zulip Kevin Carlson (Jan 29 2025 at 19:59):

Beautiful, thanks for the pointers.

view this post on Zulip Morgan Rogers (he/him) (Jan 30 2025 at 09:04):

Mike Shulman said:

Morgan Rogers (he/him) said:

you need to know which doctrine corresponds to a given fragment of logic.

Or else phrase your proofs diagrammatically.

That still requires a translation from logic at some point in the process... for instance, if one is accustomed to thinking of a binary relation R on a set A as a subset of A \times A, one might know that R can be composed with itself via the formula R^2 = \{(x,z) \mid \exists y,\, R(x,y) \wedge R(y,z)\}. Once you know that this expression can be abstracted to a pullback followed by an image factorization into a product, you can indeed phrase your proofs in terms of those if you like, but the logical "meta-theorem" is stronger, since it says that you don't have to, as long as your target category is regular!
Of course, when it comes to things that you don't already have a proof for over Set or for which the proof is too large to easily check that it is constrained to a given fragment of logic, you can't apply this meta-theorem any more.
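In Set that formula is ordinary relation composition, and the pullback-then-image description can be seen in a few lines of Python (illustrative only):

```python
# R^2 = {(x, z) | exists y. R(x, y) and R(y, z)}: the pullback matches up
# the middle elements, and taking the image forgets the witness y.
def compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

R = {(1, 2), (2, 3), (3, 1)}
assert compose(R, R) == {(1, 3), (2, 1), (3, 2)}
```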

view this post on Zulip Mike Shulman (Jan 30 2025 at 17:36):

You don't have to convince me that internal logic is useful! I was just saying that it's possible to get the same results without knowing about it. Phrasing particular proofs diagrammatically doesn't require knowing anything about formal logic or quantifiers, and doesn't even need to proceed by translating an element-based proof line-by-line. Oodles of mathematicians do this all the time. This is actually a source of constant frustration to me, since they could save lots of time if they did learn about internal logic, but they seem to get by.

view this post on Zulip David Egolf (Jan 30 2025 at 18:31):

Mike Shulman said:

Oodles of mathematicians do this all the time. This is actually a source of constant frustration to me, since they could save lots of time if they did learn about internal logic, but they seem to get by.

Until I started looking at this topic a bit more, my impression had been that internal logic was mostly of interest to people who want to study logic in different settings. However, once I realized that we can use logic to define (internal) structures of interest (and then reason about them) - then this topic became much more interesting to me!

view this post on Zulip David Egolf (Jan 30 2025 at 18:33):

On a more speculative note, I think it would be really cool if structures and logic relevant to a specific application could be obtained by internalization of well-studied things in some application-specific category.

view this post on Zulip David Egolf (Jan 30 2025 at 18:37):

(Of course, I specifically hope that this can be useful in the context of thinking about imaging. For example, I would love it if the reasoning patterns I used when creating a reconstruction algorithm could be recovered as part of the internal logic of some category.)

view this post on Zulip Mike Shulman (Jan 30 2025 at 19:25):

David Egolf said:

On a more speculative note, I think it would be really cool if structures and logic relevant to a specific application could be obtained by internalization of well-studied things in some application-specific category.

This is absolutely the case. For instance:

view this post on Zulip John Baez (Jan 30 2025 at 21:17):

Yes, and these examples are some of the reasons why I bothered learning about internalization!

Since these are internal not to topoi but to categories that are even less like \mathsf{Set}, the relevant kind of internal logic is weaker than, or at least different from, the kind of logic topos theorists have worked out in detail. Thus, like the oodles of mathematicians Mike mentioned, I am usually content to work diagrammatically rather than "logically".

view this post on Zulip John Baez (Jan 30 2025 at 21:29):

As a further example of the fun to be had: I'm working on representation theory with @Todd Trimble, and we're thinking a lot about representations of certain particular 'affine monoids over k', where k is some field.

An affine monoid over k is a monoid in the category of affine schemes over k. (I say 'monoid in C' instead of 'internal monoid object in the category C'.)

But the category of affine schemes over k is the opposite of the category of commutative algebras over k.

So an affine monoid over k is a monoid in the opposite of the category of commutative algebras over k.

But a commutative algebra over k is a commutative monoid in \mathsf{Vect}_k.

So an affine monoid is a monoid in the opposite of the category of commutative monoids in \mathsf{Vect}_k.

So we're playing around with a doubly internalized concept, and this leads to all sorts of fun and games (which are by now standard in representation theory).

view this post on Zulip David Egolf (Jan 30 2025 at 21:55):

That's pretty cool! It's too bad a vector space isn't secretly a monoid in some category, so that you could get three levels of internalization...

This kind of thing leads me to wonder if we can build a "periodic table" or flowchart that shows how we get a lot of categories from some starting ingredients and a small number of processes. (The processes could be things like "take the opposite category" or "take the category of monoids internal to this category"... and we could probably think of a few other processes too.)

view this post on Zulip John Baez (Jan 30 2025 at 22:04):

It would definitely be worthwhile exploring such a flowchart, because concepts that arise this way tend to be important, and people tend to have already gotten interested in them even without especially caring whether they arose this way. They tend to play pivotal roles in the structure of mathematics.

view this post on Zulip John Baez (Jan 30 2025 at 22:08):

For example representation theorists have gotten interested in [[plethories]]... but what are these? They are, for starters, birings: a [[biring]] is a commutative ring object in $\mathsf{CommRing}^{\text{op}}$. But the category of birings is equivalent to the category of left adjoint functors $\mathsf{CommRing} \to \mathsf{CommRing}$. Composition of left adjoint functors defines a monoidal structure on this category! A 'plethory' is a monoid in the resulting monoidal category of birings.
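Schematically, the tower described here looks as follows ($\circ$ stands for the composition monoidal structure, and $\mathsf{LeftAdj}$ is ad hoc notation for the category of left adjoint functors):

```latex
\begin{aligned}
\mathsf{Biring} &= \mathsf{CommRing}\!\left(\mathsf{CommRing}^{\text{op}}\right)
  \simeq \mathsf{LeftAdj}(\mathsf{CommRing}, \mathsf{CommRing})\\
\mathsf{Plethory} &= \mathsf{Mon}(\mathsf{Biring}, \circ)
\end{aligned}
```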

But the interest in plethories is not mainly because this is a mind-blowing repeated application of internalization! It's because plethories (or at least one very important plethory) control a lot of what we can do in representation theory.

view this post on Zulip David Egolf (Jan 31 2025 at 17:44):

John Baez said:

It would definitely be worthwhile exploring such a flowchart, because concepts that arise this way tend to be important, and people tend to have already gotten interested in them even without especially caring whether they arose this way. They tend to play pivotal roles in the structure of mathematics.

Maybe I'll start a new thread to think about such a flowchart. But first I want to get back to #learning: reading & references > reading through Baez's topos theory blog posts for a bit.

view this post on Zulip Joe Moeller (Jan 31 2025 at 22:45):

David Egolf said:

That's pretty cool! It's too bad a vector space isn't secretly a monoid in some category, so that you could get three levels of internalization...

Something like it is almost true. A ring is a monoid in abelian groups under tensor product. A module of a ring is an action of the monoid internal to abelian groups. A vector space is a module where the ring happens to be a field. The field part is the thing that most resists this internalization story.

view this post on Zulip Joe Moeller (Jan 31 2025 at 22:53):

There are two important theorems to keep in mind when thinking about layering internal structures: Fox's theorem and the Eckmann-Hilton argument.

Eckmann-Hilton says that if you take monoids internal to monoids under cartesian product, the result is actually just commutative monoids. This is because you actually can prove the two units are equal and the two products are equal, and that the order of multiplying flips in the middle of doing that. "Fox's theorem" is a relatively new name for a result that was thought by many to just be folklore: a symmetric monoidal category is cartesian if and only if each object is equipped with a commutative monoid structure that is natural.

Combined, these tell you that if you try taking monoids (wrt cartesian product) in a category that already is monoids, you're really just restricting to the ones that were commutative, and if you try taking monoids (wrt cartesian product) in a category that already is commutative monoids, nothing happens. This is why it's really important to specify that rings are monoids internal to Ab with tensor, because monoids internal to Ab with $\times$ is just Ab again.
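For reference, the Eckmann-Hilton computation itself is short. Suppose $\otimes$ and $\odot$ are two unital operations on the same set, with units $1$ and $e$ respectively, satisfying the interchange law $(a \odot b) \otimes (c \odot d) = (a \otimes c) \odot (b \otimes d)$. Then:

```latex
\begin{aligned}
1 &= (1 \odot e) \otimes (e \odot 1) = (1 \otimes e) \odot (e \otimes 1) = e \odot e = e
  && \text{the units agree,}\\
a \otimes b &= (a \odot 1) \otimes (1 \odot b) = (a \otimes 1) \odot (1 \otimes b) = a \odot b
  && \text{the operations agree,}\\
a \otimes b &= (1 \odot a) \otimes (b \odot 1) = (1 \otimes b) \odot (a \otimes 1) = b \odot a
  && \text{and are commutative.}
\end{aligned}
```

Internalizing this computation in a symmetric monoidal category gives the statement quoted above: a monoid in monoids is a commutative monoid.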

view this post on Zulip John Baez (Jan 31 2025 at 23:36):

Great summary of two key facts of life, @Joe Moeller!

view this post on Zulip David Egolf (Feb 02 2025 at 03:04):

Thanks @Joe Moeller ! I had heard before about Eckmann-Hilton, but I had not heard about Fox's theorem.

Although I'm a bit confused, because the nLab page on Fox's theorem talks about cocommutative comonoids, instead of commutative monoids:

A symmetric monoidal category is cartesian if and only if it is isomorphic to its own category of cocommutative comonoids. Thus every object is equipped with a unique cocommutative comonoid structure, and these structures are respected by all maps.

view this post on Zulip Jean-Baptiste Vienney (Feb 02 2025 at 07:13):

nLab is right, it is about cocommutative comonoids and not commutative monoids. If you have products you get the comultiplication $A \rightarrow A \times A$ as the pairing $\langle 1_A, 1_A \rangle$, and the counit $A \rightarrow \top$. If you replace cocommutative comonoids by commutative monoids then you obtain a characterization of cocartesian symmetric monoidal categories (the kind of symmetric monoidal category obtained from coproducts).
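Concretely, in a cartesian category every object $A$ gets its comonoid structure from the diagonal and the unique map to the terminal object:

```latex
\Delta_A = \langle 1_A, 1_A \rangle : A \to A \times A,
\qquad
!_A : A \to \top
```

Coassociativity, counitality, and cocommutativity of $(\Delta_A, !_A)$ then hold automatically by the universal property of the product, and every map is a comonoid homomorphism for the same reason.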

view this post on Zulip Joe Moeller (Feb 02 2025 at 07:41):

Yeah sorry, I wrote quickly without thinking much. Cartesian has duplication maps, which make comonoids, cocartesian has fold maps which make monoids.

view this post on Zulip David Egolf (Feb 02 2025 at 17:44):

Thanks for clarifying that!

view this post on Zulip John Baez (Feb 02 2025 at 20:35):

So, someone should fix Joe's argument that for any symmetric monoidal category $\mathsf{C}$,

$$\mathsf{Mon}(\mathsf{Mon}(\mathsf{Mon}(\mathsf{C})))$$

is the same as

$$\mathsf{Mon}(\mathsf{Mon}(\mathsf{C}))$$

which is the same as

$$\mathsf{CommMon}(\mathsf{C})$$

He didn't really give much of an argument, but he made the claim - in a more digestible way than I just did: he said

if you try taking monoids (wrt cartesian product) in a category that already is commutative monoids, nothing happens.

I don't think the parenthetical is necessary: I don't think cartesian products are relevant here. Here's another way to state what's going on:

Let $\mathsf{Mon}(\mathsf{C})$ be the category of monoid objects in a symmetric monoidal category $\mathsf{C}$. This is a symmetric monoidal category itself.

Let $\mathsf{CommMon}(\mathsf{C})$ be the category of commutative monoid objects in a symmetric monoidal category $\mathsf{C}$. This is a symmetric monoidal category itself.

Eckmann-Hilton says there's an equivalence

$$\mathsf{CommMon}(\mathsf{C}) \simeq \mathsf{Mon}(\mathsf{Mon}(\mathsf{C}))$$

essentially because the multiplication and unit in a commutative monoid are monoid homomorphisms. But the forgetful functor

$$\mathsf{Mon}(\mathsf{CommMon}(\mathsf{C})) \to \mathsf{CommMon}(\mathsf{C})$$

is also an equivalence. So we get an equivalence

$$\mathsf{Mon}(\mathsf{Mon}(\mathsf{Mon}(\mathsf{C}))) \simeq \mathsf{Mon}(\mathsf{Mon}(\mathsf{C}))$$

but of course

$$\mathsf{Mon}(\mathsf{Mon}(\mathsf{C})) \not\simeq \mathsf{Mon}(\mathsf{C})$$

view this post on Zulip David Egolf (Feb 03 2025 at 03:50):

Thanks for spelling it out, @John Baez ! That makes sense.

view this post on Zulip Joe Moeller (Feb 03 2025 at 04:31):

John Baez said:

I don't think the parenthetical is necessary: I don't think cartesian products are relevant here. Here's another way to state what's going on:

Of course the parenthetical is wrong due to my original mistake, but the correct version is super relevant, since the claim ultimately is that some structure collapses when you choose the (co)cartesian structure, depending on what you're doing.

view this post on Zulip John Baez (Feb 03 2025 at 04:44):

Okay, certainly Fox's theorem is important for @David Egolf's grand flow chart! I just meant that the "monoids in monoids in..." results I mentioned don't need it, and they apply starting with any symmetric monoidal category $\mathsf{C}$.

Fox's theorem enters here: $\mathsf{CommMon}(\mathsf{C})$ is always cocartesian.

view this post on Zulip Adittya Chaudhuri (Feb 03 2025 at 17:53):

I am not sure whether it is fully relevant to the discussion, but @David Egolf asked, "when can we induce a full and faithful functor between categories of internal structures from a starting full and faithful functor?"

I was thinking about Morita maps between Lie groupoids (Definition 3.5 in https://arxiv.org/pdf/0806.4160). According to my understanding, a Morita map is the "smooth version" of a fully faithful and essentially surjective functor in the category of Lie groupoids (the naive internalisation may fail to guarantee the smoothness of the inverse, since the axiom of choice does not hold in the category of manifolds). Now, I think Lemma 3.34 of https://arxiv.org/pdf/0806.4160 gives a condition under which a fully faithful functor can be internalised in the category of Lie groupoids (a Morita map implies that the underlying functor is fully faithful, but not the other way round).

Also, I think the definition of Morita map itself guarantees a condition "when a fully faithful functor can be internalised in Lie groupoids"

view this post on Zulip David Egolf (Feb 03 2025 at 21:07):

@Adittya Chaudhuri Thanks for your comments and the link to that paper! It could be interesting to study this specific case at some point.

view this post on Zulip David Egolf (Feb 03 2025 at 21:15):

John Baez said:

Okay, certainly Fox's theorem is important for David Egolf's grand flow chart! I just meant that the "monoids in monoids in..." results I mentioned don't need it, and they apply starting with any symmetric monoidal category $\mathsf{C}$.

Fox's theorem enters here: $\mathsf{CommMon}(\mathsf{C})$ is always cocartesian.

I like the idea of making a "grand flow chart", but it sounds like a lot of work. It would be cool if the nLab supported flow charts, so that the work could be more easily distributed across multiple contributors.

If I was going to be really ambitious, I'd love to have a flow chart where the objects are categories or monoidal categories, and the arrows are one of the following:

view this post on Zulip Adittya Chaudhuri (Feb 03 2025 at 22:48):

David Egolf said:

Adittya Chaudhuri Thanks for your comments and the link to that paper! It could be interesting to study this specific case at some point.

You are welcome.