In category theory often one has a sense of 'inevitability' of certain constructions, which are basically the only possible things one can do in a given context. And yet, very often I find myself having to prove lots of coherence and well-posedness conditions... and that seems a waste of time, because I know everything's gonna work out, because how else could it end?
Hence it seems to me there is some very general coherence result lurking in the background, telling us that if I don't do anything bizarre, whatever I do categorically is gonna yield well-defined (functorial, natural, you name it) constructions.
I know it's very vague, but does anyone have an idea about what this result might look like?
One example above all, due to Leinster: the Yoneda lemma. If you have a category $\mathcal{C}$ and a presheaf $F : \mathcal{C}^{op} \to \mathbf{Set}$, there's not much you can talk about. You have $F$ and the representables $yc = \mathcal{C}(-,c)$. You can ask about morphisms in and out of $F$ from these. And if you look at morphisms $[\mathcal{C}^{op}, \mathbf{Set}](yc, F)$, you get a presheaf in $c$ again.
Then one might speculate they are the same, and indeed one finds it's trivial to find maps between $F(c)$ and $[\mathcal{C}^{op}, \mathbf{Set}](yc, F)$, which are evidently inverse to each other. At this point we should be able to argue: we didn't do anything weird, so we can conclude the object we constructed (the iso $F(c) \cong [\mathcal{C}^{op}, \mathbf{Set}](yc, F)$) is a bona fide natural transformation. Even more, we should be able to argue that such an isomorphism is itself picked out 'naturally', somehow (what do I mean here? idk)
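To spell the example out (nothing beyond the standard statement, just fixing notation), the two maps are
$$\Phi : [\mathcal{C}^{op},\mathbf{Set}](yc, F) \to F(c), \qquad \Phi(\eta) = \eta_c(1_c)$$
$$\Psi : F(c) \to [\mathcal{C}^{op},\mathbf{Set}](yc, F), \qquad \Psi(x)_{c'}(f) = F(f)(x) \quad \text{for } f : c' \to c$$
and the 'inevitability' feeling is that checking $\Phi\Psi = \mathrm{id}$, $\Psi\Phi = \mathrm{id}$, and naturality in both $c$ and $F$ is pure symbol-pushing, with essentially one possible move at each step.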
My own experience of category theory is that it is full of “inevitable” things which turn out to be not inevitable at all and in fact wrong.
(A famous example is that distributive monoidal categories satisfy all equations of linearly distributive categories except one, which they can satisfy if and only if they are posetal.)
@Matteo Capucci (he/him): two papers you might be interested in are Lack's Non-canonical isomorphisms and Nunes's later paper Pseudoalgebras and Non-canonical Isomorphisms.
While there probably isn't a universal theorem that puts category theorists out of business by saying "all commutative diagrams that deserve to commute, do", there are lots of super-general coherence theorems known and probably more waiting to be discovered. So I applaud Matteo's desire to find them!
Homotopy theorists are probably the best experts at proving theorems that let them avoid (infinite towers of) coherence laws.
Possibly also relevant: this paper by Shulman
(While the title states a negative, it also begins with a review of positives, conditions under which there is a general "coherence theorem".)
Amar Hadzihasanovic said:
My own experience of category theory is that it is full of “inevitable” things which turn out to be not inevitable at all and in fact wrong.
Yeah maybe this is the most important fact. Can we have a list of 'forbidden spells' that break coherence?
Nathanael Arkor said:
Matteo Capucci (he/him): two papers you might be interested in are Lack's Non-canonical isomorphisms and Nunes's later paper Pseudoalgebras and Non-canonical Isomorphisms.
Thanks Nathanael, you're always very helpful with literature :) I'll look into them
Isn't Grothendieck's Homotopy Hypothesis something along these lines? It looks inevitable at first (one simply adds the obvious coherences in a coherator). Maybe you can argue that there are too many non-canonical choices, though.
More interesting examples would be isomorphisms induced by non-canonical morphisms. Something like the canonical arrow in Beck-Chevalley, or in Frobenius reciprocity, or the comparison morphism between a limit of colimits and a colimit of limits, in a case where the domain and codomain happen to be isomorphic but the canonical morphism fails to be an isomorphism. Does anyone know explicit examples in those cases?
Isn't Grothendieck's Homotopy Hypothesis something along these lines?
Hmm. This hypothesis says that homotopy types are the same as $\infty$-groupoids. If you believe it, you don't need to give an independent definition of "$\infty$-groupoid". So in that sense it does absolve you from writing down lots of coherence laws that might appear in the definition of $\infty$-groupoid: we can just define homotopy types using topological spaces or Kan complexes.
But notice that as soon as you start trying to work with homotopy types, the coherence issues reappear. For example an $A_\infty$ space is a homotopy type that's a monoid "up to homotopy", where the homotopies obey the pentagon identity up to homotopy, where these homotopies make the 3d Stasheff polytope commute up to homotopy, etc.
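(To spell out the first few layers of that tower in Stasheff's formulation, with $\mu_k$ just my notation here: an $A_\infty$ structure on a space $X$ consists of
$$\mu_2 : X^2 \to X$$
$$\mu_3 : [0,1] \times X^3 \to X \quad\text{with}\quad \mu_3(0,-) = \mu_2(\mu_2 \times 1), \ \ \mu_3(1,-) = \mu_2(1 \times \mu_2)$$
$$\mu_4 : K_4 \times X^4 \to X \quad\text{with } K_4 \text{ the pentagon and } \mu_4|_{\partial K_4} \text{ given by the five composites of } \mu_2, \mu_3$$
and so on, with $\mu_n$ parametrised by the $(n-2)$-dimensional associahedron $K_n$.)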
Of course you can avoid all this mess by simply decreeing that an $A_\infty$ space is a space equipped with a (weak) homotopy equivalence to a topological monoid. But this only works if you are willing to believe (or prove) that all the coherence issues work fine. And so on.
This has been a long story within homotopy theory; I'm only talking about things that happened before 1972.
John Baez said:
Isn't Grothendieck's Homotopy Hypothesis something along these lines?
Hmm. This hypothesis says that homotopy types are the same as $\infty$-groupoids. If you believe it, you don't need to give an independent definition of "$\infty$-groupoid". So in that sense it does absolve you from writing down lots of coherence laws that might appear in the definition of $\infty$-groupoid: we can just define homotopy types using topological spaces or Kan complexes.
But notice that as soon as you start trying to work with homotopy types, the coherence issues reappear. For example an $A_\infty$ space is a homotopy type that's a monoid "up to homotopy", where the homotopies obey the pentagon identity up to homotopy, where these homotopies make the 3d Stasheff polytope commute up to homotopy, etc.
Of course you can avoid all this mess by simply decreeing that an $A_\infty$ space is a space equipped with a (weak) homotopy equivalence to a topological monoid. But this only works if you are willing to believe (or prove) that all the coherence issues work fine. And so on.
In my comment, I was referring to Grothendieck's model for weak $\infty$-groupoids. In a coherator, one iteratively forces all the obvious diagrams that one would expect to commute to commute up to higher cells. Still, it's not obvious whether that's enough to codify all the coherences in a homotopy type. So my point was that, in this case, one proceeds with the construction only doing "what's obvious", but, in the end, no one knows if it's going to work.
Okay, thanks. I was referring to the homotopy hypothesis in its more general meaning as "weak $\infty$-groupoids (in some definition thereof) are equivalent (in some sense) to homotopy types".
By the way, who was the first one who actually wrote the phrase "homotopy hypothesis" with this meaning?
It might have been me, in my June 2004 talk called Why n-Categories? at the workshop n-Categories: Foundations and Applications. On page 17 of my slides I state a version of the homotopy hypothesis.
Or it might have been someone much earlier! The idea clearly goes back at least to Grothendieck. I'm just wondering about the phrase "homotopy hypothesis". I have the feeling I made it up, but it seems hard to believe.
Later, in 2007, I gave a talk at the Fields Institute called The Homotopy Hypothesis.
I do have the sense that the "inevitable facts that turn out to be wrong" are memorable because they're surprising, because we're used to diagrams commuting most of the time, as Matteo said. In addition to the answers already given, another possible answer is "parametricity" in the computer science sense: in some cases, if some operation can be defined constructively, then it is automatically functorial or natural.
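To illustrate the parametricity point with a toy example (the particular function below is made up purely for the demonstration, not anything from this discussion): in Haskell, any polymorphic function of type `[a] -> [a]` is automatically a natural transformation from the list functor to itself, by Wadler's "free theorem" for that type.

```haskell
module Main where

-- Any polymorphic  f :: forall a. [a] -> [a]  satisfies the naturality law
--   map g . f == f . map g   for every g,
-- with no proof obligation beyond writing down the (parametric) definition.

-- an arbitrary polymorphic list operation, invented for the example
shuffle :: [a] -> [a]
shuffle xs = reverse (take 3 xs) ++ drop 3 xs

-- the naturality square instantiated at a particular g and input
main :: IO ()
main = do
  let g  = (* 10) :: Int -> Int
      xs = [1 .. 6] :: [Int]
  print (map g (shuffle xs))  -- one leg of the square
  print (shuffle (map g xs))  -- the other leg; equal by parametricity
```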
A hypothetical "directed type theory" would help formalize this behavior: the terms that you would be able to construct using it would be automatically functorial/natural/etc. without explicitly having to prove it. This remains hypothetical for now though, AFAIK.
This discussion made me want to create a [[counterexamples in category theory]] page on the nlab. Turns out it already exists, but could be expanded with the counter-examples discussed here!
Mike Shulman said:
I do have the sense that the "inevitable facts that turn out to be wrong" are memorable because they're surprising, because we're used to diagrams commuting most of the time, as Matteo said. In addition to the answers already given, another possible answer is "parametricity" in the computer science sense: in some cases, if some operation can be defined constructively, then it is automatically functorial or natural.
Parametricity is definitely a key word here. Constructively too. I believe that's because in most cases this 'natural feel' of coherence arises from constructing things coherently from other coherent things. The kind of coherence you need in the previous sentence is what eludes me a bit. I know functorial/natural can do, but that's a bit too vague...
"Constructing things coherently from other coherent things" is also part of what directed type theory should achieve, as Josselin said.
Somehow I missed their message! Maybe it's obvious if I think about it enough, but how would that work? Is it the same idea as stuff written in HoTT being automatically coherent once interpreted in any $\infty$-category?
Yeah, basically.
Matteo Capucci (he/him) said:
Somehow I missed their message! Maybe it's obvious if I think about it enough, but how would that work? Is it the same idea as stuff written in HoTT being automatically coherent once interpreted in any $\infty$-category?
Note that you can't interpret HoTT in any $\infty$-category, unfortunately; you need it to be an $\infty$-(Grothendieck) topos
basically, the parallel between HoTT and DTT would be that HoTT lets you synthetically manipulate groupoids, whereas DTT lets you synthetically manipulate categories instead.
(Although you can embed any $\infty$-category in its presheaf $\infty$-topos and interpret HoTT there.)
Is there a good synthetic way to talk about representable objects in MLTT? Something like a universe of representable types?
Yes, you would postulate a particular universe that's closed under whatever constructions your original category theory has that are preserved by the Yoneda embedding, like $\Sigma$-types and $\mathrm{Id}$-types (if it has finite limits) and $\Pi$-types if it's LCCC.
@Matteo Capucci (he/him) My apologies for necroing this thread but I think it's an interesting question.
I thought about this for a while the past few days, and I wanted to outline my thoughts on what such a general coherence theorem might look like.
An $(n+1)$-category is (presented by) an $n$-category equipped with new generating $(n+1)$-cells between parallel $n$-cells, subject to relations stating that certain composites of cells are equal.
To prove a theorem of $(n+1)$-category theory, we start by proving the underlying theorem of $n$-category theory.
In a second step, we try to extend this theorem to "truncated $(n+1)$-category theory", where we consider the generators of the $(n+1)$-cells, but don't consider the relations on such cells, as if they are not subject to equality. Generally, this is either trivial ("truncated $(n+1)$-category theory" is just a particular presentation of $n$-category theory) or obviously impossible (the $(n+1)$-cells aren't invertible, for example, and so you can't lift a proof that invokes a symmetry argument).
Then the third step is to establish that if we add the right coherence assumptions at level $n+1$, then the output is fully coherent at level $n+1$.
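(A toy instance of this shape, just to fix ideas: the 1-categorical theorem "the product of two monoids is a monoid" has associativity as an equation. In the truncated 2-dimensional setting one instead has generating associator cells, and the product acquires the componentwise associator
$$\alpha^{C\times D}_{(c,d),(c',d'),(c'',d'')} = \big(\alpha^{C}_{c,c',c''},\ \alpha^{D}_{d,d',d''}\big),$$
so the third step amounts to checking that if $\alpha^C$ and $\alpha^D$ satisfy the pentagon then so does $\alpha^{C\times D}$, i.e. the product of monoidal categories is again monoidal.)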
Ideally we would be able to formalize this in an appropriate type theory for developing category theory.
We start by introducing a theorem in context, $\Gamma \vdash t$, where $\Gamma$ is some type context which introduces some $n$-categories, $n$-functors and so on, as well as the appropriate "generating" $(n+1)$-cells.
Then, we prove that if appropriate coherence equations are added to $\Gamma$, making it "fully coherent", then the construction $t$ is fully coherent, i.e. all desired equations about it are satisfied.
I think in order to get such a theorem you would essentially have to bake the desired result into the definition. That is, one would define a "fully coherent context" to be a context $\Gamma$ such that for all constructs definable over $\Gamma$ and all coherence equations about such constructs, those equations hold.
So I suggest the following research program towards answering your question:
I have been pursuing this approach without much luck. This is why I posted recently asking whether we can syntactically identify coherence equations through being in "most general position" (having the maximum number of possible distinct variables) or through a linearity constraint asserting that each cell variable in the typing context appears exactly once on the left and right hand side of the equation.
By the way, a closely related question is: given a theorem of $n$-category theory, can we directly categorify it by means of a straightforward syntactic translation, giving a theorem of $(n+1)$-category theory? My guess is that the answer is simply "no", on the following grounds.
Consider the following theorem of $0$-category theory (preorder theory). Let $P, Q$ be preorders and let $f : P \to Q$ be a monotone map. Let $g : Q \to P$ be a function on the underlying sets, not necessarily monotone. Suppose that $f(g(q)) \le q$ for every $q$, and that $p \le g(q)$ whenever $f(p) \le q$. Then $g$ is monotone.
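(Under these hypotheses the proof is one line, which is exactly the inevitability at this level: if $q \le q'$ then $f(g(q)) \le q \le q'$, and applying the second hypothesis with $p = g(q)$ gives $g(q) \le g(q')$.)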
Now, suppose we know that the correct categorification of a preorder is a category, and the correct categorification of a monotone map is a functor. So we want to prove now the categorified theorem that "Let $C, D$ be categories, $F : C \to D$ a functor, $G$ a function on objects, $\epsilon$ a family of morphisms $\epsilon_d : F(G(d)) \to d$, $\varphi$ a function taking each $f : F(c) \to d$ to some $\varphi(f) : c \to G(d)$, ... (??)... then there is a functor extending $G$."
I wrote the question marks here because we still have to figure out the appropriate coherence conditions to impose on $\epsilon$ and $\varphi$. As far as I can tell there is no naive direct way to get the right relation to impose between $\epsilon$ and $\varphi$ by a simple translation process akin to the parametricity translation. I don't think there's any way to get the right equations "for free" - we have to think about it. Perhaps there is a way to use tools from rewriting theory to compute the "critical pairs", so the process can still be automated, but it's certainly a bit trickier than just parametricity.
I guess another point I should make is that one can have a broader view of coherence as "any equation we prove in category theory" or a narrow view of coherence.
The narrow view of coherence is that coherence equations are those that arise through categorification, and I think it's more fruitful to concentrate on that narrow sense. The pentagon and triangle identities for a monoidal category are obviously coherence equations. Stretching a bit, the associativity and unit laws for a category are coherence equations if you think of categories as categorified preorders. But not all such equations hold: for instance, $\eta T = T\eta$ fails for an arbitrary monad $T$ (e.g. for the list monad). As for your Yoneda lemma example, I think before you could claim it follows trivially by coherence you would have to explain in what sense the Yoneda lemma is a categorification of a lower-dimensional theorem.
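For the record, the pentagon and triangle identities in the usual notation are
$$\alpha_{A,B,C\otimes D}\circ\alpha_{A\otimes B,C,D} \;=\; (1_A\otimes\alpha_{B,C,D})\circ\alpha_{A,B\otimes C,D}\circ(\alpha_{A,B,C}\otimes 1_D)$$
$$(1_A\otimes\lambda_B)\circ\alpha_{A,I,B} \;=\; \rho_A\otimes 1_B$$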
(I've seen the tag, I'll reply when I have time to properly read your message @Patrick Nicodemus :pray: I'm still interested!)
Patrick Nicodemus said:
I guess another point I should make is that one can have a broader view of coherence as "any equation we prove in category theory" or a narrow view of coherence.
Yeah, that seems to be the question: which equations are just a nuisance and which are actually meaty.
I'm not sure one can fully automate the invention of coherences. In principle one can ponder, e.g., 'non-pentagonative' monoidal categories, i.e. monoidal-category-like structures without the pentagon axiom; they are a perfectly fine 2-algebraic structure. However they are not what we expect from a 2-dimensional notion of monoid, since associativity has a generalization in arbitrarily high dimension (cf. [[associahedra]]), which is what we find more natural to consider.
What I'd rather see 'automated' is the process of building coherent structures, i.e. a characterization of 'coherence-preserving' arguments/constructions. This seems related to #theory: category theory > "general position", linearity in category theory...
Surely your proposal that we need a formal system in which such arguments find a proper formalization is key.
This paper of mine was about a construction that applies to presentations of higher algebraic theories and does have the property of systematically generating higher coherences between its factors, see Section 6. I haven't had time yet to pursue this further and it hasn't really been picked up by anyone else but I think some of the ideas there have huge potential.
(It's on my todo list to update the paper and submit it to a journal; only a short conference version is published.)
This comment is re: Matteo's suggestion that we need to focus on "coherence-preserving constructions" rather than automating coherence itself; I agree with his point!
Amazing paper Amar! Though it seems to me to support the idea of automatically generating coherences (by tensoring theories) rather than proving them. That is, how would the theory of tensor theories help in proving e.g. the coherences for a braided monoidal category?
Well, if "coherence" is formalised as the property that a certain morphism is a weak equivalence, then certainly having constructions that are suitably "homotopical" in the sense that they preserve these weak equivalences would help. That is, for example, say you have theories that are coherent over “stricter” theories in the sense that certain morphisms , are weak equivalences. For example one of these pairs could be (an embodiment of) the theory of pseudomonoids and of the theory of monoids, respectively. Then if you have a “homotopical” tensor product you would immediately know that is coherent over .
My programme would be to build a topology-inspired vocabulary of such operations that is expressive enough to construct, compositionally, (presentations of) many of the higher algebraic theories that arise in practice. I believe that a presentation of the theory of “braided-commutative pseudomonoids” whose models are braided monoidal categories should be obtainable by taking a smash product of presentations of the theory of pseudomonoids, then its “loop space” in order to reduce the dimension -- a categorification of how the theory of commutative monoids arises as a tensor product of the theory of monoids with itself.
I think there's also a sense in which one presentation of the theory of pseudomonoids is some localised quotient of the infinity-simplex, although I'm not sure right now why this ought to imply coherence.
One caveat is that these operations in general will be maximalistic in their generation of coherences, by virtue of being systematic... I do not believe that one can have very general methods (beyond exhaustive search aided by heuristics) for reducing sets of generators.
For example, while I am confident that there is a systematic way of compositionally re-constructing Mac Lane's original 5 equations for monoidal categories / pseudomonoids, I am equally confident that no such method will reproduce Kelly's brute-force proof that 3 of those are redundant.