You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Sorry for the clickbait-y title. I could not resist after reading these questions:
https://cs.stackexchange.com/questions/9818/what-is-meant-by-category-theory-doesnt-yet-know-how-to-deal-with-higher-order
https://cs.stackexchange.com/a/9815/133539
I think that the claim is that "Category theory does not know yet how to deal with higher order functions", and that CT should 'catch up' to deal satisfactorily with higher order structures.
While I don't really understand a few statements such as "What happens in normal mathematics is that mathematicians very cleverly reduce higher-order things to first-order structures", I think I do get the gist of the post.
Gosh, that opinion seems ... very confident. The author admits to reading Eilenberg and Mac Lane (1945) and wonders what happened since.
Wow.
C and C^op have the same objects. And yet Category Theory doesn't know how to define a diagonal functor C → C × C^op. But some day Category Theory will catch up.
Uday's argument (I can't remember where it was that I met him, but it was a long time ago) needs a little more fleshing out. It seems to me people do consider maps of mixed variance, often in terms of pairs of maps that are linked somehow. For example, X and Y could be ordered structures and f : X → Y, g : Y → X are adjoint to each other, so that one uniquely determines the other. I'm thinking for example of how Dana Scott built his D∞-model.
(I'm also reminded a little of dagger-categories and dagger-functors.)
Yeah the answer really reads like trolling....
Also, the notion of functor in Haskell and the category-theoretic definition are not the same. The Haskell container type Set is not a Haskell functor (https://hackage.haskell.org/package/containers-0.7/docs/Data-Set.html), yet it is mathematically just the 'powerset' construction on a type: Set T is the type of subsets of the type T. Category theory has a covariant powerset functor, so when is Haskell going to catch up?
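A minimal sketch of the mismatch, using the containers API linked above (the example names are my own):

```haskell
import qualified Data.Set as Set

-- Set.map has type  Ord b => (a -> b) -> Set.Set a -> Set.Set b.
-- The Ord constraint on the *result* type is what blocks a Functor
-- instance: Haskell's fmap must work at every result type,
--   fmap :: (a -> b) -> f a -> f b
-- with no constraints allowed. The obstruction is Haskell's class
-- system, not category theory: mathematically the covariant powerset
-- functor is perfectly fine.

halves :: Set.Set Int
halves = Set.map (`div` 2) (Set.fromList [1, 2, 3, 4, 5])
-- 2 and 3 both map to 1, and 4 and 5 both map to 2,
-- so the image is the three-element set {0, 1, 2}
```

Note that Set.map can merge elements, which is exactly the powerset-functor behaviour; only the class-level fmap is unavailable.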
Todd Trimble said:
C and C^op have the same objects. And yet Category Theory doesn't know how to define a diagonal functor C → C × C^op. But some day Category Theory will catch up.
Perhaps when C loses interest in being a minimal counterexample.
I actually encountered this same post a few months ago and was really confused by it. For instance, I wasn't sure what the author meant when they said that category theory can't deal with higher order functions. I'm not even exactly sure what a higher order function is: if it's just a function, then you can find it in the category Set, and that should be the end of the story, no?
I tried looking up "higher order function" and one approach is that it's a function that takes functions as input and returns other functions. But if that's the case, there are plenty of ways of expressing this in category-theoretic terms. The immediate example that comes to mind is categories where the objects are vector spaces and the morphisms are linear operators, given that some vector spaces are function spaces. Another example from Set would be a function between exponential objects. These are the sets of all functions between two sets, so their elements can be interpreted as functions; a function between them is then indeed an operation that takes functions as inputs and returns functions as outputs.
Am I missing anything? Could someone explain the situation in more detail?
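To make the exponential-objects example concrete, here is a small Haskell sketch (the function names are my own): precomposition with a fixed map is itself a map between exponential objects, i.e. a higher order function.

```haskell
-- Precomposition with a fixed f :: a -> b is a map between the
-- exponential objects (b -> c) and (a -> c): it consumes a function
-- and produces a function, so it is higher order.
precompose :: (a -> b) -> ((b -> c) -> (a -> c))
precompose f g = g . f

-- Example: turn a printer of Ints into a printer of Bools by
-- precomposing with fromEnum :: Bool -> Int.
boolPrinter :: Bool -> String
boolPrinter = precompose fromEnum show
```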
So, as others have alluded to, this is written in a somewhat provocative/pretentious way. It has a rather narrow meaning which reflects a specific and narrow critique: the author is concerned with the following problem.
Say I have an algebraic theory with a collection of sorts s₁, s₂, … and function symbols between these sorts f : s₁ → s₂, g : s₂ → s₃, etc., subject to some equational constraints like g ∘ f = h. We can model this algebraic theory as a category T with objects the sorts and morphisms corresponding to the function symbols between sorts. Under this interpretation of an algebraic theory as a "syntactic category", a model of the algebraic theory, which assigns sorts to specific sets and function symbols to specific functions, should give a functor M : T → Set from the syntactic category into the category of sets; the functor should send each sort symbol s to the set M(s) which is to serve as the interpretation of the sort symbol, and function symbols to honest functions.
Further, one can define a model homomorphism to be a natural transformation of functors M ⇒ N. This is a fairly straightforward way of doing "universal algebra" category-theoretically. For situations where the function symbols take multiple arguments, or no arguments, one can ask that the "syntactic category" have categorical products and that "model" functors preserve them. This is called a Lawvere theory.
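To make the homomorphism condition concrete, here is a minimal Haskell sketch for the theory of monoids (all names here are my own, not a standard API): a model homomorphism is a map of carriers that commutes with the interpretation of each function symbol.

```haskell
-- A model of the (single-sorted) theory of monoids: a carrier
-- together with interpretations of the function symbols
-- e : 1 -> s and m : s * s -> s.
data MonModel a = MonModel { unit :: a, mult :: a -> a -> a }

-- h is a model homomorphism M -> N when it commutes with every
-- function symbol -- exactly the components of a natural
-- transformation between the corresponding product-preserving
-- functors into Set. Here we check the condition at one pair of
-- arguments.
isHomAt :: Eq b => (a -> b) -> MonModel a -> MonModel b -> a -> a -> Bool
isHomAt h m n x y =
     h (unit m) == unit n
  && h (mult m x y) == mult n (h x) (h y)

addModel :: MonModel Int
addModel = MonModel 0 (+)

listModel :: MonModel [()]
listModel = MonModel [] (++)

-- Interpreting an Int as a list of that many units gives a
-- homomorphism (Int, +, 0) -> ([()], ++, []).
toUnits :: Int -> [()]
toUnits k = replicate k ()
```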
The author Uday is expressing his dissatisfaction that this framework for universal algebra does not generalize well to the situation where our algebraic theory contains higher-order functions, such as a function symbol f : (s₁ → s₂) → s₃. It is reasonable to define M(s₁ → s₂) = M(s₂)^M(s₁) (i.e., interpreting function types as sets of functions in the obvious way), but it is not clear how to define a model homomorphism. At the very least, the usual notion of natural transformation is inapplicable here. One can use dinatural transformations, a generalization of natural transformations, but Uday rejects these because dinatural transformations cannot be composed in general, and so under this definition there is no category of models and model homomorphisms.
I think his concerns come from his research on parametricity in type theory. There are some beautiful theorems in type theory that say that all polymorphic functions between functors definable in System F are natural transformations. (Presumably this generalizes to dinatural transformations.) Personally I am optimistic that category theorists and type theorists have discovered two sides of the same coin: there is some deep relationship between polymorphism and naturality. Naturality is a fairly direct condition expressible using straightforward equational logic, while polymorphism is a bit more abstract and involves the type-theoretic idea of "not being able to inspect the type you are given" or being "uniformly given all at once for all types" - but funnily enough, in teaching category theory we always motivate the concept of naturality by describing it informally as polymorphism.
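A quick concrete check of the "free theorem = naturality" phenomenon (a sketch at one instance, not the general proof): any System-F-definable g :: [a] -> [a] commutes with map f for every f.

```haskell
-- reverse is polymorphic in the element type, so parametricity
-- says it is a natural transformation List => List:
--     map f . reverse == reverse . map f     for every f.
lhs, rhs :: [Int]
lhs = map (+ 1) (reverse [1, 2, 3])
rhs = reverse (map (+ 1) [1, 2, 3])
-- both sides come out to [4, 3, 2]
```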
But, Uday's point is that the semantics are not adequate for the syntax, as we can obviously compose polymorphic functions in type theory without difficulty, but we cannot compose dinatural transformations in general. Uday does not mention extranatural transformations in the post (yet another kind of transformation) and it's not clear to me whether they resolve the problems he's alluding to here.
This is a charitable interpretation. I don't have an explanation for the bewildering accusation that
The reaction of category theorists to these issues is a bit perplexing. They act as if higher-order operations form a Computer Science idea; they are of no consequence to mathematics. If that is so, then a foundation of mathematics would not be good enough for a foundation of computer science.
As a graduate student in mathematics, my first encounter with a higher order function was the definition of the Hom functor Hom(G, −) for a group G, which acts on morphisms by postcomposition. Since I was introduced to higher order functions through category theory, it is difficult for me to understand what he means by "category theorists don't care about higher order functions", and I am inclined to attribute it to a negligent attitude towards truth.
I think that if we put the pretentiousness aside for a moment, there is a good message here, which is that category theorists should try to understand what type theory research into polymorphism and parametricity means for our understanding of naturality. I only learned about parametric polymorphism a few months ago, and it shocked me that we can make such bold claims as that all System-F-definable maps between functors are automatically natural. Somehow the universal quantifier of type theory is not the same as the universal quantifier of ZFC; it gives better/more opaque abstraction. It seems to me (again, optimistically) that further research in this direction has the potential to make many handwavy proofs of naturality/coherence fully rigorous, by showing that the construction is expressible in a language where all constructions are known to be natural.
Ok, so this problem isn't about finding higher order functions in certain categories like Set, it's got more to do with categorical model theory, syntactic categories, functorial semantics, and all that. That makes more sense, thanks. I find the connection between natural transformations and polymorphic functions interesting but I will have to look more into that and what it could mean here!
A few ways I would address Uday's argument jump out. First, the Syn/Lang adjunction: when you have a category of type theories Type, there is a category of structured categories StrCat with a syntactic-category / internal-language adjunction between them. The simplest answer here is that the right StrCat for type theories with higher order functions might be non-obvious, or may not be a straightforward generalization of Lawvere theories. If we are willing to move beyond algebraic theories, the problem becomes easier. Large portions of math can be done internal to toposes, and higher order functions are a natural part of math, therefore they exist internal to toposes. In addition, toposes naturally encode higher order logic, which might help with their ability to encode higher order functions! This might be a little "overkill" and doesn't address Uday's concerns specifically, since he is talking about universal algebra, but I thought I would mention it since I find toposes fascinating.
Toposes are indeed adequate for talking about higher-order functions, but they do not solve the problem he is talking about. In a topos E, for any two objects X, Y, we can form the object of functions Y^X, and this defines a functor E^op × E → E. One could take Y = X as a special case of this construction and get X ↦ X^X, and this makes sense on objects (it defines a function Ob(E) → Ob(E)), but it cannot be extended to a functor E → E in general (there is no good way to say how this construction should act on morphisms of the topos).
This is a basic consequence of the definitions and there is really no way around it. Uday understands this; he is making a philosophical statement that the creators of category theory defined functors incorrectly, because he thinks this construction should be considered a "functor" in some way. He does not say what the "correct" definition of a functor is, because he thinks this is our job, not his.
F Lengyel said:
Todd Trimble said:
C and C^op have the same objects. And yet Category Theory doesn't know how to define a diagonal functor C → C × C^op. But some day Category Theory will catch up.
Perhaps when C loses interest in being a minimal counterexample.
Is your point that I should have added "for all C" or "in general"?
There's a related difficulty: the free cartesian closed category on a category C is not given by a 2-monad on Cat, because the 2-monad would not know what to do with 2-cells. "In general."
Patrick Nicodemus said:
Toposes are indeed adequate for talking about higher-order functions, but they do not solve the problem he is talking about. In a topos E, for any two objects X, Y, we can form the object of functions Y^X, and this defines a functor E^op × E → E. One could take Y = X as a special case of this construction and get X ↦ X^X, and this makes sense on objects (it defines a function Ob(E) → Ob(E)), but it cannot be extended to a functor E → E in general (there is no good way to say how this construction should act on morphisms of the topos).
There's a concept of 'generalized center', which James Dolan and I introduced, which generalizes the endomorphisms of a set and the center of a monoid. It's not functorial. But it's nice. What's going on above is that you're internalizing one special case of that concept: the endomorphisms of a set. So as we internalize other cases we'll get a whole pile of nice but non-functorial operations.
All of this needs a lot of work to be made rigorous, but I hope the idea is clear enough:
A k-tuply monoidal n-category is an (n+k)-category where the bottom k levels are 'degenerate'. That is: it has just one object, just one 1-morphism and so on, but as many k-morphisms as it wants.
The generalized center Z(C) of a k-tuply monoidal n-category C is a (k+1)-tuply monoidal n-category defined as follows.
Think of C as an object in the (n+k+1)-category of k-tuply monoidal n-categories. Then form an (n+k+1)-category with
one object, namely C
one 1-morphism, namely the identity on C
one 2-morphism, namely the identity on that
⋮
and so on for k+1 levels, and then throw in all possible higher morphisms between the ones you've got. The result Z(C) is an (n+k+1)-category that's degenerate at the bottom k+1 levels. So, Z(C) is a (k+1)-tuply monoidal n-category.
Here is the first really fun example: the generalized center of a monoidal category is a braided monoidal category! This is worked out in detail here: [[Drinfeld center]].
Someone sufficiently powerful could internalize the concept of monoidal category to get the concept of a 'monoidal category object' or [[pseudomonoid]] in a 2-topos, and then define the Drinfeld center of a monoidal category object, which will be a braided monoidal category object. And then Uday would complain that this construction is not functorial, because it's not.
This is a basic consequence of the definitions and there is really no way around it. Uday understands this; he is making a philosophical statement that the creators of category theory defined functors incorrectly, because he thinks this construction should be considered a "functor" in some way. He does not say what the "correct" definition of a functor is, because he thinks this is our job, not his.
That's convenient for him. I will similarly leave it to number theorists to redefine prime numbers so that 57 is prime, because Grothendieck pretended it is and I really wish it were.
Todd Trimble said:
F Lengyel said:
There's a related difficulty: the free cartesian closed category on a category C is not given by a 2-monad on Cat, because the 2-monad would not know what to do with 2-cells. "In general."
There is, however, a 2-monad for the free cartesian closed category on the 2-category of categories, functors and invertible natural transformations, if my memory serves right. In relation to what Uday mentions, I think that the existence of this 2-monad could be put in parallel with univalent parametricity in type theory (a variant of parametricity where the relations to be preserved are restricted to equivalences).
That comparison would let me hope for a similar 2-monad on a 2-category of categories where 2-cells look like (parametrized) relations. Of course a bare category has no notion of relation and I think that's why Uday and his coauthors worked with reflexive-graph categories (categories enriched over reflexive graphs). I wonder whether there is a 2-monad for free CCC on an appropriate 2-category with such gadgets as objects (and a similar question could be asked for framed bicategories that also seem to come with a proper notion of relational/loose morphism).
Is your point that I should have added "for all C" or "in general"?
It was a joke, not a criticism.
Regarding composition of dinatural transformations perhaps this paper is relevant: https://arxiv.org/abs/2307.09289
Todd Trimble said (sarcastically):
Category Theory doesn't know how to define a diagonal functor C → C × C^op
It seems to me that the diagonal of C × C^op would be the category of tuples (X, X) such that X is an object of C (and hence of C^op). In this category, a morphism (X, X) → (Y, Y) would naturally consist of morphisms f : X → Y and g : Y → X such that g ∘ f = 1_X and f ∘ g = 1_Y.
But this diagonal category is actually equivalent to the core of C! So category theory is explicitly telling us that constructions that use an object both covariantly and contravariantly are functorial on precisely the core of the original category.
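The same point in Haskell terms (a sketch; the names are my own, though `Endo` mirrors `Data.Monoid.Endo`): a -> a uses its parameter both co- and contravariantly, so it acts functorially only on invertible maps, by conjugation.

```haskell
newtype Endo' a = Endo' (a -> a)

-- fmap would need (a -> b) -> Endo' a -> Endo' b, i.e. build b -> b
-- from h :: a -> b and e :: a -> a. Impossible in general: we would
-- also need an inverse b -> a to feed the argument back. Given an
-- isomorphism we *can* act, by conjugation -- functoriality on the
-- core of the category:
mapEndoIso :: (a -> b) -> (b -> a) -> Endo' a -> Endo' b
mapEndoIso h hInv (Endo' e) = Endo' (h . e . hInv)

runEndo' :: Endo' a -> a -> a
runEndo' (Endo' e) = e

-- Demo: transport the successor endomap of Int along the
-- self-inverse iso negate; the conjugate sends x to -((-x) + 1),
-- i.e. x - 1.
predEndo :: Endo' Int
predEndo = mapEndoIso negate negate (Endo' (+ 1))
```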
there's another view on this for times when we want to "use" something co- and contravariantly: the [[twisted arrow category]] Tw(C), with the obvious functor to C^op × C. In this case, the semantic counterpart of using a forall is just taking the limit over Tw(C), i.e. taking an end!
what's interesting is that in this case, the interpretation of the usual impredicative encoding of booleans can be shown to be the booleans directly!
however, it breaks down once we move to higher order as Uday says, with for example the Church numerals, since we need more than just arrows to really exploit parametricity, but actual relations.
(and there I don't think it's a problem of not being able to "write down a functorial Hom", but rather that functions are too weak to constrain such an encoding to be just the natural numbers)
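The impredicative booleans mentioned above, written out in Haskell with RankNTypes standing in for System F's forall (a sketch; the names are my own):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church encoding: a boolean is what it can do, namely choose
-- between two alternatives.
type CBool = forall a. a -> a -> a

ctrue, cfalse :: CBool
ctrue  t _ = t
cfalse _ f = f

-- Round-tripping to the ordinary booleans; the claim above is that
-- the end semantics shows this type has exactly two inhabitants.
toBool :: CBool -> Bool
toBool b = b True False
```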
Graham Manuell said:
Regarding composition of dinatural transformations perhaps this paper is relevant: https://arxiv.org/abs/2307.09289
Looks like a fascinating paper, thank you for sharing.
Patrick Nicodemus said:
Graham Manuell said:
Regarding composition of dinatural transformations perhaps this paper is relevant: https://arxiv.org/abs/2307.09289
Looks like a fascinating paper, thank you for sharing.
Note that there are apparently some issues with the di-Yoneda lemma as stated in that preprint; see https://types2024.itu.dk/abstracts.pdf#page=69
I was thinking about this, and I was wondering if the main question of this thread boils down to the following (which I'm intentionally keeping informal because I don't know enough to make this precise and formal):
An example of a category that one gives when teaching is a functional programming language, with types as objects and functions between types as the morphisms. Eventually, we come to polymorphic functions, and we want to define what 'parametric' polymorphism is. One idea is to just define parametrically polymorphic functions to be natural transformations (and parametric polymorphism does seem to be one of the informal examples one uses while teaching what a natural transformation is). This (kind of) makes sense: the "free theorems" of parametricity are just naturality laws (I think). Unfortunately, this analogy doesn't work out, because there are 'obviously' parametrically polymorphic functions which are not natural transformations.
I think that the author's criticism of category theory / need for category theory 2.0 was to be able to define natural transformations and functors so that they can be what parametric polymorphism means, abstractly. Perhaps this is not too unreasonable...
On the other hand, we could say that parametric polymorphism and naturality are really not the same thing, and the similarity is only superficial (which makes me a little bit sad). In this case, it might be good to have a categorical notion which is 'the essence' of what parametric polymorphism really is...
Also, free theorems are not just naturality laws. For some specific types one gets a free theorem of naturality, but there are also other free theorems that aren't any kind of naturality.
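An example of such a free theorem that isn't plain naturality (my own worked instance): for a Church numeral n :: (a -> a) -> a -> a, the type parameter occurs both positively and negatively, and parametricity gives a dinaturality-style law instead: whenever h . p == q . h, also h . n p == n q . h.

```haskell
-- Church numeral two; its free theorem is not a naturality square.
two :: (a -> a) -> a -> a
two s z = s (s z)

-- Concrete check of  h . two p == two q . h  given  h . p == q . h:
-- with h = (*2), p = (+1), q = (+2) we have h (p x) = 2x+2 = q (h x).
lhsN, rhsN :: Int
lhsN = (* 2) (two (+ 1) 0)   -- 2 * ((0+1)+1) = 4
rhsN = two (+ 2) ((* 2) 0)   -- ((2*0)+2)+2  = 4
```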
So it seems to me that if you have two things A and B, and not every A is a B, and not every B is an A, then why would you be sad that they are different? Math is full of pairs of things that are different. (-:O
Mike Shulman said:
So it seems to me that if you have two things A and B, and not every A is a B, and not every B is an A, then why would you be sad that they are different?
That's fair. I guess people do use polymorphism as informal example of naturality. So I would expect that the ideas are related. And at the right level of abstraction, we could make this 'sameness' precise...
The ideas are certainly related. For instance, as you pointed out, some free theorems are naturality properties.
Often parametricity is a kind of dinaturality, when you have a type parameter appearing in both negative and positive position, e.g. in a type like a → a. There's a beautiful paper by Neumann about this: https://arxiv.org/abs/2307.09289
@Matteo Capucci (he/him) Note that @Josh Chen pointed out an issue in that preprint earlier in this thread.
:O i didn't know that! What is it?
I think the proof of the di-Yoneda lemma is incorrect. I've asked @Jacob Neumann himself to double check
Apparently, there's already an issue with the statement of the di-Yoneda lemma, as explained in the TYPES abstract that Josh linked earlier (https://types2024.itu.dk/abstracts.pdf#page=69)
Tom de Jong said:
I think the proof of the di-Yoneda lemma is incorrect. I've asked Jacob Neumann himself to double check
Apparently, there's already an issue with the statement of the di-Yoneda lemma, as explained in the TYPES abstract that Josh linked earlier (https://types2024.itu.dk/abstracts.pdf#page=69)
Here's the explanation of why the lemma fails:
https://jacobneu.github.io/research/slides/TYPES-2024.pdf#page=26
Arxiv preprint will be updated soon. Apologies for the delay, and any confusion it caused
lmao
image.png
I love that. Did you make it?