I'm aware of computational trinitarianism as a sort of slogan. But I recently noticed a nice example of it that I'd like to understand better.
When finding the representation of the functor $\mathcal{C}(-, A) \times \mathcal{C}(-, B)$, the steps in the proof enumerate the requirements we need for the construction of a pair as a type in type theory.
Explicitly, a representation for $\mathcal{C}(-, A) \times \mathcal{C}(-, B)$ needs an object $A \times B$ and an element $(\pi_1, \pi_2) \in \mathcal{C}(A \times B, A) \times \mathcal{C}(A \times B, B)$ defining the natural transformation:
$$\varphi_X : \mathcal{C}(X, A \times B) \to \mathcal{C}(X, A) \times \mathcal{C}(X, B), \qquad h \mapsto (\pi_1 \circ h,\ \pi_2 \circ h)$$
such that there exists a natural inverse $\psi_X$ satisfying the equations:
$$\varphi_X \circ \psi_X = \mathrm{id}, \qquad \psi_X \circ \varphi_X = \mathrm{id}.$$
I'll write $\psi_X(f, g)$ as $\langle f, g \rangle$; then the two equations respectively require:
$$(\pi_1 \circ \langle f, g \rangle,\ \pi_2 \circ \langle f, g \rangle) = (f, g)$$
and
$$\langle \pi_1 \circ h,\ \pi_2 \circ h \rangle = h.$$
These rules contain the "computation rule" or normalization procedure for the constructors and eliminators of pairs in type theory.
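Here is a rough Haskell sketch of the same data, taking Haskell's `(,)` as the product, `fst`/`snd` as the projections, and a pairing combinator as the inverse (just an illustration of the two equations):

```haskell
-- Projections: the element (pi1, pi2) of the functor at the pair type.
pi1 :: (a, b) -> a
pi1 = fst

pi2 :: (a, b) -> b
pi2 = snd

-- The natural inverse: pairing two maps that share a domain.
pair :: (x -> a) -> (x -> b) -> (x -> (a, b))
pair f g x = (f x, g x)

-- The two equations above, stated as laws in comments:
--   pi1 . pair f g == f   and   pi2 . pair f g == g      -- computation (beta)
--   pair (pi1 . h) (pi2 . h) == h                         -- uniqueness (eta)
```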
I find a lot of things surprising about this. The rules for products, disjoint unions and function applications in type theory, and natural deduction in particular, were chosen in part so that programs are guaranteed to terminate. So I'm surprised the Yoneda lemma basically gives us this just by carefully choosing certain simple functors (like the functor of pairs of functions sharing a domain).
Is there a reason universal properties are so similar to how judgments are built in natural deduction? Is the above construction the first step in defining a category's "internal language" (whatever that is)? What's the right way to think about this analogy?
I would say rather the rules are chosen so that programs don't get stuck. That is, an elim applied to an intro can always be reduced. Actual termination is a much subtler "global" property of the whole system, while this is a "local" one to each type former. And the local property is precisely what Yoneda is about: you can give a map into X if you can say what its composites should be with all maps out of X, i.e. if you can guarantee that it won't cause programs to get stuck on X.
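In Haskell terms (taking types and functions as a stand-in for the category, so only a sketch of the general statement), that reading of Yoneda is the following pair of inverse conversions:

```haskell
{-# LANGUAGE RankNTypes #-}

-- To give an arrow y -> x, it is enough to say, naturally in r, what its
-- composite with every map x -> r should be.
toArrow :: (forall r. (x -> r) -> (y -> r)) -> (y -> x)
toArrow alpha = alpha id        -- apply the recipe to the identity out of x

fromArrow :: (y -> x) -> (forall r. (x -> r) -> (y -> r))
fromArrow g k = k . g           -- "compose with g": never gets stuck at x
```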
Your explanation is intuitive, so I'm trying to use it to infer something else.
My interpretation of it is that having a natural transformation between represented hom-sets means our computation doesn't get stuck on one of the representing objects.
Let's say we're thinking about the projection $\pi_1 : A \times B \to A$.
If we keep our map natural in $A$ and $B$ as well, then the natural transformation isomorphism
$$\mathcal{C}(-, A \times B) \to \mathcal{C}(-, A)$$
corresponds to $\pi_1$.
Therefore any arrow through $A \times B$ to $A$ can be simplified to an arrow straight to $A$. So implicit in this is some reduction $\pi_1 \circ \langle f, g \rangle \rightsquigarrow f$, because the point is we assumed our composition would go through no matter where we ended up.
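Spelled out in Haskell (a toy illustration, with arbitrarily chosen `(+ 1)` and `show` as the two maps): an arrow that builds the pair and immediately projects computes to the arrow that skips the product.

```haskell
throughProduct :: Int -> Int
throughProduct = fst . (\x -> (x + 1, show x))  -- detour through the product

direct :: Int -> Int
direct = (+ 1)                                  -- the reduced form

-- throughProduct n steps to fst (n + 1, show n) and then to n + 1,
-- so it agrees with direct on every input; the projection never gets stuck.
```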
I guess naturality buys us just a bit more than "it never gets stuck": it also never gets stuck for the same reasons.
The above was still looser than I'd like, but I think I'm getting the idea.
Any feedback on how I explored that?
Alex Kreitzberg said:
I guess naturality buys us just a bit more than "it never gets stuck": it also never gets stuck for the same reasons.
Yes, I agree. Which is what you need, really, to be able to give a uniform proof of normalization.
My only other comment is that $\pi_1$ induces only a natural transformation, not a natural isomorphism.
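(In the Haskell sketch: post-composing with the first projection is natural in the domain but has no inverse, since it forgets the second component entirely.)

```haskell
-- Natural in x, but not invertible: knowing fst . h does not determine h.
postPi1 :: (x -> (a, b)) -> (x -> a)
postPi1 h = fst . h
```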
Thank you!