You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
If we have a functor between two categories of structures, does that translate to some corresponding functor between categories of internalized versions of those structures?
For example, there's a "free" functor $F : \mathsf{Set} \to \mathsf{Grp}$ that makes groups from sets. Does this induce a corresponding functor $\mathsf{Set}(C) \to \mathsf{Grp}(C)$? Here $\mathsf{Grp}(C)$ is supposed to denote the category of groups internal to $C$, and $\mathsf{Set}(C)$ the category of sets internal to $C$ (which I am hoping is equivalent to $C$ itself). Of course, I am assuming that $C$ is a category that supports internalization of these structures.
(I don't know when all the internalizations of a given structure actually organize to form a category! I'm hoping it happens commonly.)
If we often do get an induced functor between categories of internal structures, I'm specifically curious:
- when we can induce a full and faithful functor between categories of internal structures from a starting full and faithful functor
- whether we can induce an adjunction between categories of internal structures when we start with an adjunction between the original categories of structures
I would like to better understand how categories of internalized structures relate to one another. Any thoughts in that direction would be most welcome!
If the internalized structure is defined using finite products, like the concept of "group", then any functor that preserves products will preserve that internalized structure. This is a straightforward thing to check, and it's a special case of the general rule "if blee is defined using only blah, any map that preserves blah preserves blee".
For example any right adjoint preserves products, so the forgetful functor $U : \mathsf{Grp} \to \mathsf{Set}$ maps groups internal to $\mathsf{Grp}$ to groups internal to $\mathsf{Set}$. A group internal to $\mathsf{Grp}$ is an abelian group - it's a very good exercise to check this. A group internal to $\mathsf{Set}$ is just a group. So what we're seeing is that $U$ maps abelian groups to groups. This is no surprise: any abelian group has an underlying group! But it may be surprising that we're getting this as a spinoff of the fact that any group has an underlying set.
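Here is a sketch of that exercise, in notation introduced just for the purpose: a group object in $\mathsf{Grp}$ is a group $(G, \cdot)$ with a second multiplication $* : G \times G \to G$ that is a group homomorphism. Being a homomorphism is exactly the interchange law

$$(a \cdot b) * (c \cdot d) = (a * c) \cdot (b * d).$$

Writing $e_\cdot$ and $e_*$ for the two units, instantiating the interchange law gives

$$e_* = (e_* \cdot e_\cdot) * (e_\cdot \cdot e_*) = (e_* * e_\cdot) \cdot (e_\cdot * e_*) = e_\cdot \cdot e_\cdot = e_\cdot,$$

so the units agree (call the common unit $e$); then

$$a * d = (a \cdot e) * (e \cdot d) = (a * e) \cdot (e * d) = a \cdot d$$

shows the two multiplications agree, and

$$b * c = (e \cdot b) * (c \cdot e) = (e * c) \cdot (b * e) = c \cdot b$$

shows they are commutative. This is the Eckmann-Hilton argument.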
On the other hand, it seems unlikely that the left adjoint $F : \mathsf{Set} \to \mathsf{Grp}$ sends groups internal to $\mathsf{Set}$ to groups internal to $\mathsf{Grp}$, because left adjoints preserve colimits, not usually limits - and indeed you can check 'by hand' that $F$ does not preserve products, e.g.
$$F(1 \times 1) \cong F(1) \cong \mathbb{Z}$$
while
$$F(1) \times F(1) \cong \mathbb{Z} \times \mathbb{Z}.$$
So it should be very easy to see that $F$ does not map group objects in $\mathsf{Set}$ (i.e. groups) to group objects in $\mathsf{Grp}$ (i.e. abelian groups).
John Baez said:
For example any right adjoint preserves products, so the forgetful functor $U : \mathsf{Grp} \to \mathsf{Set}$ maps groups internal to $\mathsf{Grp}$ to groups internal to $\mathsf{Set}$. A group internal to $\mathsf{Grp}$ is an abelian group - it's a very good exercise to check this. A group internal to $\mathsf{Set}$ is just a group. So what we're seeing is that $U$ maps abelian groups to groups. This is no surprise: any abelian group has an underlying group! But it may be surprising that we're getting this as a spinoff of the fact that any group has an underlying set.
Thanks for your comment! That makes sense. We can expect then that functors preserving the structure needed to define an internalized $T$-structure will send $T$-structures in $C$ to $T$-structures in $D$.
And since right adjoints preserve limits, if our internalized $T$-structures can be defined using only limits, then we can expect a right adjoint to preserve internalized $T$-structures.
But now, rereading your post, I see I misunderstood your question!
David Egolf said:
For example, there's a "free" functor $F : \mathsf{Set} \to \mathsf{Grp}$ that makes groups from sets. Does this induce a corresponding functor $\mathsf{Set}(C) \to \mathsf{Grp}(C)$? Here $\mathsf{Grp}(C)$ is supposed to denote the category of groups internal to $C$, and $\mathsf{Set}(C)$ the category of sets internal to $C$ (which I am hoping is equivalent to $C$ itself). Of course, I am assuming that $C$ is a category that supports internalization of these structures.
Oh, now that I understand it, this is a much better question! The answer is yes if $C$ supports not only internalization of these two structures - sets and groups - but also the way we construct a free group from a set.
A 'set internal to $C$' is just an object of $C$, so that makes sense in any category.
A 'group internal to $C$' is an object $G$ with maps $m : G \times G \to G$, $i : G \to G$ and $e : 1 \to G$ making some diagrams commute, so that makes sense in any category with finite products.
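For concreteness, the diagrams in question can be written as equations (using $m$, $i$, $e$ for the multiplication, inverse and unit maps, and $!_G : G \to 1$ for the unique map to the terminal object):

$$m \circ (m \times \mathrm{id}_G) = m \circ (\mathrm{id}_G \times m)$$

$$m \circ \langle \mathrm{id}_G, e \circ {!_G} \rangle = \mathrm{id}_G = m \circ \langle e \circ {!_G}, \mathrm{id}_G \rangle$$

$$m \circ \langle \mathrm{id}_G, i \rangle = e \circ {!_G} = m \circ \langle i, \mathrm{id}_G \rangle$$

In $\mathsf{Set}$ these are exactly associativity, the unit laws, and the inverse laws.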
The free group on a set $S$ is defined by taking words in elements of $S$ (namely elements of $S$ and 'formal inverses' of elements of $S$) and then modding out by an equivalence relation, so it looks like this makes sense in any category with countable colimits, but we also need binary products to write down the group structure, and we probably need binary products to distribute over countable colimits to get things to work out nicely. So I'd say the 'free internal group object on an object' makes sense in any category with finite products and countable colimits, where binary products distribute over countable colimits.
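As a concrete, purely set-level illustration of the "words modulo formal inverses" construction, here is a hypothetical Python sketch (the names `mul`, `invert` and the sign convention are mine, not from the discussion): an element of the free group is a reduced word, a tuple of (generator, sign) pairs with sign +1 for a generator and -1 for its formal inverse.

```python
# Hypothetical sketch (names are mine): the free group on a set of
# generators, with elements represented as reduced words.

def mul(w1, w2):
    """Multiply reduced words, cancelling g.g^-1 pairs where they meet."""
    left = list(w1)
    right = list(w2)
    # Cancel inverse pairs at the seam; both inputs are assumed reduced,
    # so cancellation can only happen where the two words meet.
    while left and right and left[-1][0] == right[0][0] and left[-1][1] == -right[0][1]:
        left.pop()
        right.pop(0)
    return tuple(left) + tuple(right)

def invert(w):
    """The group inverse: reverse the word and flip every sign."""
    return tuple((g, -s) for g, s in reversed(w))

identity = ()  # the empty word is the group identity
```

For instance, `mul(w, invert(w))` reduces all the way down to the empty word, which is the identity of the free group.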
Ah! Interesting! That makes a lot of sense!
The moral: we can take any mathematical construction we're interested in, done in some particular category (often but not always the category of sets), and determine what features of that category we actually need to carry out the construction. We can thus determine a kind of category in which the construction works!
A 'kind of category' is called a doctrine. For example, we just ran into three doctrines:
- categories (the context for internal 'sets'),
- categories with finite products (the context for internal groups),
- categories with finite products and countable colimits, where binary products distribute over countable colimits (the context for free internal groups).
Each doctrine is actually a 2-category consisting of categories with the chosen bells and whistles, functors that preserve those extra bells and whistles, and natural transformations that get along with those bells and whistles.
I see! I suppose it could be tricky to find the doctrine that we need to work in to apply some particular functor to internal structures. We'll want to work in a doctrine where we can do the following:
1. define the internal structures of interest
2. carry out the construction that defines our original functor
In the example above, figuring out how we can do (2) was manageable but in general it might be rather difficult!
David Egolf said:
If we often do get an induced functor between categories of internal structures, I'm specifically curious:
- when we can induce a full and faithful functor between categories of internal structures from a starting full and faithful functor
- whether we can induce an adjunction between categories of internal structures when we start with an adjunction between the original categories of structures
People may have proved theorems about this. One could be very ambitious and try to prove such theorems for a large class of doctrines, or less ambitious and try to prove them for a specific doctrine. One could be very optimistic and try to prove them for all structures that can be expressed in a given doctrine, or less optimistic and accept that extra hypotheses may be required.
I'm too old and tired to tackle questions like this in maximum generality: it takes time to get up to speed for things like that. Luckily, it's often pretty easy to answer them in particular cases: i.e., particular structures formulated in some particular doctrine. And if you're a practical fellow, you're often likely to run into these questions in particular cases.
I got to thinking about this in the context of quantales (and quantaloids). So maybe at some point I'll be able to think up a more specific question in the direction of (1) and (2), using ideas from that setting!
But for now, I think you've given me some nice conceptual answers that will help me organize these ideas in my head going forward. Thanks again!
Sure! Here's one good habit, though it takes a lot of work: whenever you learn a mathematical construction that works in $\mathsf{Set}$, figure out which doctrine it works in. This is at least useful for fairly simple concepts like graph, group, monoid, "free monoid on a set", etc.
I call this approach "concept and context" - you don't just learn a concept, you also learn the general context in which that concept makes sense. It's yet another Jedi math trick.
Great! That sounds like a useful habit I can try to build towards.
John Baez said:
The moral: we can take any mathematical construction we're interested in, done in some particular category (often but not always the category of sets), and determine what features of that category we actually need to carry out the construction. We can thus determine a kind of category in which the construction works!
John Baez said:
I call this approach "concept and context" - you don't just learn a concept, you also learn the general context in which that concept makes sense. It's yet another Jedi math trick.
This is really interesting, as this is a concept I think I came to understand somewhat independently in my exploration of spaces, when I learned you could either generalize spaces in a concrete way (finding extensions of Top) or in an abstract way (finding a category with similar properties or structure to Top). The way I thought about this was in terms of categorification. To use your language, in a way, "context" is a categorification of "concept". Here's how I thought of it: a "concept" can be thought of as a "1-model" - a Set-valued functor preserving some structure or property (like how a group is a finite product preserving functor from the Lawvere theory of groups). Meanwhile, a "context" can be thought of as the categorification of that situation - a "2-model", which is a Cat-valued functor (since Cat is the categorification of Set) preserving some structure or property.
John Baez said:
So I'd say the 'free internal group object on an object' makes sense in any category with finite products and countable colimits, where binary products distribute over countable colimits.
What's interesting about this is that the notion of "free" something is equivalent to monadicity. This should mean that categories with finite products and countable colimits, where binary products distribute over countable colimits, are (or at least are a subset of) the categories $C$ where $\mathsf{Grp}(C)$ is monadic over $C$! This might give another way to characterize categories where the notion of a "free group" makes sense - it's a category with an analogue of the "free group monad" from $\mathsf{Set}$!
I wouldn't say that the notion of "free" something is equivalent to monadicity. For there's always the forgetful functor $U : \mathsf{Grp}(C) \to C$, and "free groups exist in $C$" is most naturally interpreted as "$U$ has a left adjoint". But just because it has a left adjoint doesn't mean $U$ is monadic! For instance, it's at least pretty reasonable to say that discrete topological spaces are "free" topological spaces, but even so, topological spaces are not monadic over $\mathsf{Set}$.
This raises the question of whether there could be a $C$ in which $U : \mathsf{Grp}(C) \to C$ has a left adjoint while failing to be monadic! The monadicity theorem says that the other necessary assumptions are that $U$ reflects isomorphisms (which is the problem in the case of spaces and is fine in the case of internal equational algebraic structures like groups) and that $U$ creates coequalizers of $U$-split pairs. It's not obvious to me in what generality the latter is true for internal groups, although I would guess it's always true for internal models of any Lawvere theory.
I like what you said, @John Onstead.
And now that you say it that way: @Todd Trimble has proved a quite general theorem along these lines. Let me state a specialization of his result, which might easily have been proved by someone else earlier. Suppose $T$ is any Lawvere theory and let $\mathrm{Mod}(T, C)$ be the category of models of $T$ in a category $C$ with finite products: so, objects of $\mathrm{Mod}(T, C)$ are finite-product-preserving functors $T \to C$, and morphisms are natural transformations between these.
Theorem. Suppose $C$ has all colimits, with finite products distributing over those colimits. Then for any Lawvere theory $T$, the forgetful functor $\mathrm{Mod}(T, C) \to C$ is monadic.
(I suspect countable colimits would suffice. We can get my earlier example by taking $T$ to be the theory of groups.)
If you look at Todd's proof, you may see why I asked him to prove this for me. It uses a battery of techniques like the [[crude monadicity theorem]], coends and such.
The assumption on colimits is just to allow a nice construction of the free functor, right? So for the creation of split coequalizers, or however one wants to prove the monadicity, you probably don't need any colimit assumptions on $C$?
I'm not the one to ask, Kevin: I'm a humble engineer who uses theorems of this sort to do things like study electrical circuits. (I used it in the appendix to this paper.) @Todd Trimble could answer that question.
But definitely the colimits are needed to get the free functor!
Haha, an engineer, now, are you? But yes, that wasn't necessarily pointed at you specifically.
Ahh, I see: he assumed $C$ has reflexive coequalizers and then, assuming products distribute over them, proved that models of the theory are closed under reflexive coequalizers in the category of functors from the theory, which gives crude monadicity. That's good enough for me, although I'm vaguely curious whether precise monadicity can be applied in case $C$ doesn't satisfy those conditions!
John Baez said:
But definitely the colimits are needed to get the free functor!
At least some colimits are going to be needed but using Kelly's scary paper on transfinite constructions you probably don't need the products to distribute over them, which is kind of close to assuming Cartesian closedness.
The great thing about being a mathematical physicist is that when I'm talking to a mathematician I can pretend I'm a physicist and when I'm talking to a physicist I can pretend I'm a mathematician. But I needed Todd for this paper because I was studying electrical circuits and that forced me to study PROPs and that forced me to study the multi-sorted Lawvere theory whose models are PROPs... at which point I went running for help to Todd, feeling like an engineer out of his depth!
I only needed the case $C = \mathsf{Set}$, so all the questions you're asking were not on my mind. My interest was in generalizing from Lawvere theories to 'multi-sorted Lawvere theories'.
Kevin Carlson said:
I wouldn't say that the notion of "free" something is equivalent to monadicity. For there's always the forgetful functor $U : \mathsf{Grp}(C) \to C$, and "free groups exist in $C$" is most naturally interpreted as "$U$ has a left adjoint". But just because it has a left adjoint doesn't mean $U$ is monadic!
I believe it is true that $U : \mathsf{Grp}(C) \to C$ is monadic as soon as it has a left adjoint. For instance, it ought to follow from Proposition 6.4 of Fiore–Hur's On the construction of free algebras for equational systems, though one can also just check the monadicity theorem by hand.
Well, it’s already checked pretty much by hand in Fiore–Hur, but I see at least roughly how to apply it for plain algebraic theories.
This is a very interesting conversation, but I do agree I made a mistake when I said "free things are equivalent to monadicity". My thought process here was specifically in the context where one had a free algebra and could do things with it like quotienting out by a relation via the paradigm of generators and relations. It had slipped my mind that the term "free" has broader use beyond universal algebra to more generally refer to free functors as adjoints to forgetful functors. Though it's really cool to learn about the various monadicity theorems and how they apply in these contexts!
I'm surprised no one has yet mentioned categorical logic here in response to David's original question. Not only does the categorical structure determine what types of structures you can reasonably interpret/construct in a given category, it also determines which results can be lifted from "classical" cases such as the category of sets. That is, if you examine the fragment of logic used to prove a property (such as fullness or faithfulness of a functor), that argument will be valid internally to any category sharing sufficient structure with the category of sets to interpret that fragment of logic.
The most frequent invocation of this type of argument is in topos theory: there is a meta-theorem stating that if you can prove something with a set-theoretic argument that doesn't require complementation (that is, an intuitionistic argument), then the same will be true in any Grothendieck topos (where "object of the topos" is substituted for any instance of "set"), and in any elementary topos if you only used finitary arguments.
So for instance, the construction of the free-forgetful adjunction between internal groups and their underlying objects goes through in any Grothendieck topos (in fact, it works in any elementary topos with a natural number object, which is needed to perform inductive constructions with a countable number of steps).
My statements are fairly informal because the formal version would be very long (as one should expect for such a powerful theorem), but it's good to retain that this is what people are appealing to when they work in the "internal language" of a category in a particular class.
I think John said more or less that in his comment about doctrines, but without using the word "logic". (-:
No, John only talked about the categorical structure needed to build the objects in question (groups etc), I was pointing out that things we can prove about those objects can themselves be interpreted in suitable doctrines, which was one of the things David Egolf talked about, and for that it isn't enough to just talk about doctrines: you need to know which doctrine corresponds to a given fragment of logic.
I’ve never quite thought about doing a construction like the free group internally to a topos with natural numbers object. I can handle the free monoid on $A$ as an initial algebra of $X \mapsto 1 + A \times X$, and I think I know you can construct such initial algebras using the standard filtered colimit indexed by the internal naturals, but I wouldn’t actually know what to do for something like groups, whose monad isn’t polynomial. Can you give a hint of how that works without having to say too much?
(I hope we’re not hijacking your thread, @David Egolf! Happy to export the stuff since you last talked if you prefer.)
Thanks @Morgan Rogers (he/him) for your comment talking about the relevance of the fragment of logic that we have available. I've been trying to read a bit in "Theories, Sites, Toposes" by Caramello and so some of what you say is ringing a bell for me! That book talks about different fragments of first-order logic like "geometric logic" or "regular logic", which require having different rules (e.g. "finite conjunction"). And then you can do that logic (I think) inside corresponding categories: you can do geometric logic in geometric categories, and regular logic in regular categories, and so on.
So then it makes sense that if the definition of our original functor of interest makes use of some fragment of logic, and if we hope to induce an analogous functor in a different setting (such as categories of internal structures), then we probably need this fragment of logic to be available in this other setting.
And this is also relevant for trying to prove that an induced functor satisfies some property. If we need some fragment of logic to do that for our original functor, then we better have that fragment of logic available!
I guess a related question then is: to what extent does a category of internal $T$-structures in a category $C$ inherit "features" of the logic from $C$? (For example, what inherited logical "features" do we get in the category of groups internal to some topos?)
If we think of an internal $T$-structure in $C$ as a kind of functor (to $C$ from some kind of "syntactical" category for $T$?), then we can hope that the category of $T$-structures in $C$ will inherit limits and colimits from $C$. That should result in at least some part of $C$'s fragment of logic getting inherited, I hope! But this is just an intuitive guess.
Kevin Carlson said:
(I hope we’re not hijacking your thread, David Egolf! Happy to export the stuff since you last talked if you prefer.)
No, this is all interesting stuff! I'm only understanding part of what people are saying, but that's fine - I can always come back at a later time when I'm older and wiser. :big_smile:
And in general I'm always glad when a thread I start helps spark discussion, even if I don't fully understand that discussion.
David Egolf said:
If we think of an internal $T$-structure in $C$ as a kind of functor (to $C$ from some kind of "syntactical" category for $T$?), then we can hope that the category of $T$-structures in $C$ will inherit limits and colimits from $C$. That should result in at least some part of $C$'s fragment of logic getting inherited, I hope! But this is just an intuitive guess.
It’s not quite that simple, because for interesting structures the functors producing models will need to preserve certain limits, and perhaps even some colimits. This means limits and colimits aren’t directly inherited, as we see even for the case of groups in the category of sets. And in general very much of the internal logic is destroyed by passing to models of a theory - categories of models of algebraic theories are basically never toposes, even though Set is. There are results like "categories monadic over toposes are Barr exact" (at least, I’m pretty sure this is true over all toposes…) but they seem to have to be proved case by case. I don’t think there’s a general machine that takes in a fragment of logic and a doctrine and says what the internal logic of models of a theory from that doctrine in a category with that internal logic will be.
That makes sense! Thanks for pointing out that the functors producing models need to preserve certain structure. So the category of models of some theory isn't just a usual functor category, and so it in general won't inherit limits/colimits pointwise.
Morgan Rogers (he/him) said:
I was pointing out that things we can prove about those objects can themselves be interpreted in suitable doctrines, which was one of the things David Egolf talked about, and for that it isn't enough to just talk about doctrines: you need to know which doctrine corresponds to a given fragment of logic.
Or else phrase your proofs diagrammatically.
Kevin Carlson said:
I’ve never quite thought about doing a construction like the free group internally to a topos with natural numbers object. I can handle the free monoid on $A$ as an initial algebra of $X \mapsto 1 + A \times X$, and I think I know you can construct such initial algebras using the standard filtered colimit indexed by the internal naturals, but I wouldn’t actually know what to do for something like groups, whose monad isn’t polynomial. Can you give a hint of how that works without having to say too much?
Using initial algebras for polynomial monads, you can construct free algebras for any algebraic theory with finitely many finitary operations and without axioms. Then, if you also have finitely many axioms, you can take quotients corresponding to each one. This works in the finitary case because you only have to make finitely many choices at a time to equip the quotient with algebra structure. One reference is D5.3 of Sketches of an Elephant, which cites several other papers including Johnstone-Wraith "Algebraic theories in toposes", Rosebrugh "On algebras defined by operations and equations in a topos", and Schumacher "Absolutely free algebras in a topos containing an infinite object".
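For the axiom-free, finitary part of this story, the initial-algebra idea Kevin mentioned can be sketched in Python under the simplifying assumption that we work in $\mathsf{Set}$ (the names `fold`, `cons`, etc. are mine): the free monoid on a set $A$ is the initial algebra of the polynomial functor $F(X) = 1 + A \times X$, whose carrier is the type of finite lists, and the unique algebra map out of it is the familiar fold.

```python
# Hypothetical sketch (names are mine): the free monoid on A as the
# initial algebra of F(X) = 1 + A x X, with its carrier encoded as
# nested tuples and the unique algebra map given by a fold.

NIL = ('nil',)                 # the single element of the "1" summand

def cons(a, x):                # the "A x X" summand
    return ('cons', a, x)

def fold(nil_case, cons_case, t):
    """The unique map from the initial algebra to any other F-algebra,
    given by the algebra's two structure maps."""
    if t[0] == 'nil':
        return nil_case
    _, a, rest = t
    return cons_case(a, fold(nil_case, cons_case, rest))

def to_list(t):
    """Witness that the carrier is 'just lists'."""
    return fold([], lambda a, xs: [a] + xs, t)

def from_list(xs):
    t = NIL
    for a in reversed(xs):
        t = cons(a, t)
    return t
```

For instance, the length function is the fold into the F-algebra on the natural numbers with `nil_case = 0` and `cons_case = lambda a, n: 1 + n`.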
Beautiful, thanks for the pointers.
Mike Shulman said:
Morgan Rogers (he/him) said:
you need to know which doctrine corresponds to a given fragment of logic.
Or else phrase your proofs diagrammatically.
That still requires a translation from logic at some point in the process... for instance, if one is accustomed to thinking of a binary relation $R$ on a set $X$ as a subset of $X \times X$, one might know that $R$ can be composed with itself via the formula $R \circ R = \{(x,z) \mid \exists y\, ((x,y) \in R \wedge (y,z) \in R)\}$. Once you know that this expression can be abstracted to a pullback followed by an image factorization into a product, you can indeed phrase your proofs in terms of those if you like, but the logical "meta-theorem" is stronger, since it says that you don't have to, as long as your target category is regular!
Of course, when it comes to things that you don't already have a proof for over Set or for which the proof is too large to easily check that it is constrained to a given fragment of logic, you can't apply this meta-theorem any more.
You don't have to convince me that internal logic is useful! I was just saying that it's possible to get the same results without knowing about it. Phrasing particular proofs diagrammatically doesn't require knowing anything about formal logic or quantifiers, and doesn't even need to proceed by translating an element-based proof line-by-line. Oodles of mathematicians do this all the time. This is actually a source of constant frustration to me, since they could save lots of time if they did learn about internal logic, but they seem to get by.
Mike Shulman said:
Oodles of mathematicians do this all the time. This is actually a source of constant frustration to me, since they could save lots of time if they did learn about internal logic, but they seem to get by.
Until I started looking at this topic a bit more, my impression had been that internal logic was mostly of interest to people who want to study logic in different settings. However, once I realized that we can use logic to define (internal) structures of interest (and then reason about them) - then this topic became much more interesting to me!
On a more speculative note, I think it would be really cool if structures and logic relevant to a specific application could be obtained by internalization of well-studied things in some application-specific category.
(Of course, I specifically hope that this can be useful in the context of thinking about imaging. For example, I would love it if the reasoning patterns I used when creating a reconstruction algorithm could be recovered as part of the internal logic of some category.)
David Egolf said:
On a more speculative note, I think it would be really cool if structures and logic relevant to a specific application could be obtained by internalization of well-studied things in some application-specific category.
This is absolutely the case. For instance:
Yes, and these examples are some of the reasons why I bothered learning about internalization!
Since these are internal not to topoi but to categories that are even less like $\mathsf{Set}$, the relevant kind of internal logic is weaker than, or at least different from, the kind of logic topos theorists have worked out in detail. Thus, like the oodles of mathematicians Mike mentioned, I am usually content to work diagrammatically rather than "logically".
As a further example of the fun to be had: I'm working on representation theory with @Todd Trimble, and we're thinking a lot about representations of certain particular 'affine monoids over $k$', where $k$ is some field.
An affine monoid over $k$ is a monoid in the category of affine schemes over $k$. (I say 'monoid in $C$' instead of 'internal monoid object in the category $C$'.)
But the category of affine schemes over $k$ is the opposite of the category of commutative algebras over $k$.
So an affine monoid over $k$ is a monoid in the opposite of the category of commutative algebras over $k$.
But a commutative algebra over $k$ is a commutative monoid in $\mathsf{Vect}_k$.
So an affine monoid is a monoid in the opposite of the category of commutative monoids in $\mathsf{Vect}_k$.
So we're playing around with a doubly internalized concept, and this leads to all sorts of fun and games (which are by now standard in representation theory).
That's pretty cool! It's too bad a vector space isn't secretly a monoid in some category, so that you could get three levels of internalization...
This kind of thing leads me to wonder if we can build a "periodic table" or flowchart that shows how we get a lot of categories from some starting ingredients and a small number of processes. (The processes could be things like "take the opposite category" or "take the category of monoids internal to this category"... and we could probably think of a few other processes too.)
It would definitely be worthwhile exploring such a flowchart, because concepts that arise this way tend to be important, and people tend to have already gotten interested in them even without especially caring whether they arose this way. They tend to play pivotal roles in the structure of mathematics.
For example representation theorists have gotten interested in [[plethories]]... but what are these? They are, for starters, birings: a [[biring]] is a commutative ring object in $\mathsf{CommRing}^{\mathrm{op}}$. But the category of birings is equivalent to the category of left adjoint functors $\mathsf{CommRing} \to \mathsf{CommRing}$. Composition of left adjoint functors defines a monoidal structure on this category! A 'plethory' is a monoid in the resulting monoidal category of birings.
But the interest in plethories is not mainly because this is a mind-blowing repeated application of internalization! It's because plethories (or at least one very important plethory) control a lot of what we can do in representation theory.
John Baez said:
It would definitely be worthwhile exploring such a flowchart, because concepts that arise this way tend to be important, and people tend to have already gotten interested in them even without especially caring whether they arose this way. They tend to play pivotal roles in the structure of mathematics.
Maybe I'll start a new thread to think about such a flowchart. But first I want to get back to #learning: reading & references > reading through Baez's topos theory blog posts for a bit.
David Egolf said:
That's pretty cool! It's too bad a vector space isn't secretly a monoid in some category, so that you could get three levels of internalization...
Something like it is almost true. A ring is a monoid in abelian groups under tensor product. A module of a ring is an action of the monoid internal to abelian groups. A vector space is a module where the ring happens to be a field. The field part is the thing that most resists this internalization story.
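To spell out the first step in symbols (notation mine): a ring is an abelian group $R$ together with morphisms in $\mathsf{Ab}$

$$\mu : R \otimes R \to R, \qquad \eta : \mathbb{Z} \to R$$

satisfying the monoid axioms, where $\mathbb{Z}$ is the unit object for the tensor product on $\mathsf{Ab}$. The point of using $\otimes$ rather than $\times$ is that "$\mu$ is a morphism out of the tensor product" is precisely the distributive law. A module is then an abelian group $M$ with an action $\alpha : R \otimes M \to M$ compatible with $\mu$ and $\eta$.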
There are two important theorems to keep in mind when thinking about layering internal structures: Fox's theorem and the Eckmann-Hilton argument. Eckmann-Hilton says that if you take monoids internal to monoids under cartesian product, the result is actually just commutative monoids. This is because you actually can prove the two units are equal and the two products are equal, and that the order of multiplying flips in the middle of doing that. "Fox's theorem" is a relatively new name for a result that was thought by many to just be folklore: a symmetric monoidal category is cartesian if and only if each object is equipped with a commutative monoid structure that is natural. Combined, these tell you that if you try taking monoids (wrt cartesian product) in a category that already is monoids, you're really just restricting to the ones that were commutative, and if you try taking monoids (wrt cartesian product) in a category that already is commutative monoids, nothing happens. This is why it's really important to specify that rings are monoids internal to $\mathsf{Ab}$ with $\otimes$, because monoids internal to $\mathsf{Ab}$ with $\times$ is just $\mathsf{Ab}$ again.
Great summary of two key facts of life, @Joe Moeller!
Thanks @Joe Moeller ! I had heard before about Eckmann-Hilton, but I had not heard about Fox's theorem.
Although I'm a bit confused, because the nLab page on Fox's theorem talks about cocommutative comonoids, instead of commutative monoids:
A symmetric monoidal category is cartesian if and only if it is isomorphic to its own category of cocommutative comonoids. Thus every object is equipped with a unique cocommutative comonoid structure, and these structures are respected by all maps.
nLab is right, it is about cocommutative comonoids and not commutative monoids. If you have products, you get the comultiplication as the pairing $\langle \mathrm{id}_X, \mathrm{id}_X \rangle : X \to X \times X$ and the counit as the unique map $X \to 1$. If you replace cocommutative comonoids by commutative monoids then you obtain a characterization of cocartesian symmetric monoidal categories (the kind of symmetric monoidal category obtained from coproducts).
Yeah sorry, I wrote quickly without thinking much. Cartesian has duplication maps, which make comonoids, cocartesian has fold maps which make monoids.
Thanks for clarifying that!
So, someone should fix Joe's argument that for any symmetric monoidal category $C$,
$$\mathsf{Mon}(\mathsf{CMon}(C))$$
is the same as
$$\mathsf{CMon}(\mathsf{CMon}(C))$$
which is the same as
$$\mathsf{CMon}(C).$$
He didn't really give much of an argument, but he made the claim - in a more digestible way than I just did: he said
if you try taking monoids (wrt cartesian product) in a category that already is commutative monoids, nothing happens.
I don't think the parenthetical is necessary: I don't think cartesian products are relevant here. Here's another way to state what's going on:
Let $\mathsf{Mon}(C)$ be the category of monoid objects in a symmetric monoidal category $C$. This is a symmetric monoidal category itself.
Let $\mathsf{CMon}(C)$ be the category of commutative monoid objects in a symmetric monoidal category $C$. This is a symmetric monoidal category itself.
Eckmann-Hilton says there's an equivalence
$$\mathsf{Mon}(\mathsf{CMon}(C)) \simeq \mathsf{CMon}(\mathsf{CMon}(C)),$$
essentially because the multiplication and unit in a commutative monoid are monoid homomorphisms. But the forgetful functor
$$\mathsf{CMon}(\mathsf{CMon}(C)) \to \mathsf{CMon}(C)$$
is also an equivalence. So we get an equivalence
$$\mathsf{Mon}(\mathsf{CMon}(C)) \simeq \mathsf{CMon}(C),$$
but of course that is just Joe's claim that "nothing happens".
Thanks for spelling it out, @John Baez ! That makes sense.
John Baez said:
I don't think the parenthetical is necessary: I don't think cartesian products are relevant here. Here's another way to state what's going on:
Of course the parenthetical is wrong due to my original mistake, but the correct version is super relevant, since the claim ultimately is that some structure collapses when you choose the (co)cartesian structure, depending on what you're doing.
Okay, certainly Fox's theorem is important for @David Egolf's grand flow chart! I just meant that the "monoids in monoids in..." results I mentioned don't need it, and they apply starting with any symmetric monoidal category $C$.
Fox's theorem enters here: $\mathsf{CMon}(C)$ is always cocartesian.
I am not sure whether it is fully relevant to the discussion, but @David Egolf asked "when we can induce a full and faithful functor between categories of internal structures from a starting full and faithful functor?"
I was thinking about Morita maps between Lie groupoids (Definition 3.5 in https://arxiv.org/pdf/0806.4160). According to my understanding, a Morita map is the "smooth version" of a fully faithful and essentially surjective functor in the category of Lie groupoids (as the naive internalisation may fail to guarantee the smoothness of the inverse, since the Axiom of Choice does not hold in the category of manifolds). Now, I think Lemma 3.34 of https://arxiv.org/pdf/0806.4160 gives a condition under which a "fully faithful functor" can be internalised in the category of Lie groupoids (as a Morita map implies that the underlying functor is full and faithful, but not the other way round).
Also, I think the definition of a Morita map itself gives a condition for "when a fully faithful functor can be internalised in Lie groupoids".
@Adittya Chaudhuri Thanks for your comments and the link to that paper! It could be interesting to study this specific case at some point.
John Baez said:
Okay, certainly Fox's theorem is important for David Egolf's grand flow chart! I just meant that the "monoids in monoids in..." results I mentioned don't need it, and they apply starting with any symmetric monoidal category $C$.
Fox's theorem enters here: $\mathsf{CMon}(C)$ is always cocartesian.
I like the idea of making a "grand flow chart", but it sounds like a lot of work. It would be cool if the nLab supported flow charts, so that the work could be more easily distributed across multiple contributors.
If I was going to be really ambitious, I'd love to have a flow chart where the objects are categories or monoidal categories, and the arrows are processes like the ones mentioned above:
- take the opposite category
- take the category of monoids (or commutative monoids) internal to a category
David Egolf said:
Adittya Chaudhuri Thanks for your comments and the link to that paper! It could be interesting to study this specific case at some point.
You are welcome.