I just learned about the Yoneda lemma. The book I was reading makes it sound like a really big deal:
We can paint a better—rather, a complete—picture of an object once we investigate its interactions with all other objects. This theme finds its origins in an—if not the most—important result in category theory.
(from "Topology, A Categorical Approach")
However, in my opinion, they don't immediately support this claim of huge importance.
What are some reasons that the Yoneda lemma is important? What are some ways in which it has been applied?
Here are the first three applications that come to my mind:
The Yoneda embedding is used in stating that the presheaf category $\hat{C} = [C^{\mathrm{op}}, \mathbf{Set}]$ is the free cocompletion of $C$. It is also used in proving that every presheaf can be written as a colimit of representable presheaves: https://en.wikipedia.org/wiki/Density_theorem_(category_theory).
Also, often it is easier to prove $\mathrm{Hom}(-, A) \cong \mathrm{Hom}(-, B)$ or $\mathrm{Hom}(A, -) \cong \mathrm{Hom}(B, -)$ than to exhibit an isomorphism $A \cong B$ by hand.
A good list of applications of the last kind is https://matheplanet.com/matheplanet/nuke/html/article.php?sid=1430, but it's in German.
Also, check out https://ncatlab.org/nlab/show/Yoneda+lemma#applications.
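As a side remark, the statement of the lemma itself can be made concrete in code; here is a minimal Haskell sketch of the covariant case (the names `Yoneda`, `toYoneda` and `fromYoneda` are just illustrative, not from the references above):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Covariant Yoneda lemma, Haskell-style: for a functor f, natural
-- transformations from Hom(a, -) to f correspond to elements of f a.
newtype Yoneda f a = Yoneda { runYoneda :: forall b. (a -> b) -> f b }

-- An element of f a gives a natural transformation Hom(a, -) => f ...
toYoneda :: Functor f => f a -> Yoneda f a
toYoneda fa = Yoneda (\k -> fmap k fa)

-- ... and evaluating a natural transformation at the identity recovers it.
fromYoneda :: Yoneda f a -> f a
fromYoneda (Yoneda t) = t id
```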
Leopold Schlicht said:
Also, often it is easier to prove $\mathrm{Hom}(-, A) \cong \mathrm{Hom}(-, B)$ or $\mathrm{Hom}(A, -) \cong \mathrm{Hom}(B, -)$ than to exhibit an isomorphism $A \cong B$ by hand.
In this vein, see Section 8.4 of Steve Awodey's book.
The nLab proof http://nlab-pages.s3.us-east-2.amazonaws.com/nlab/show/adjoints+preserve+(co-)limits uses the Yoneda lemma, but it is not completely rigorous - see https://math.stackexchange.com/questions/101005/right-adjoints-preserve-limits.
Of course, this would not convince a skeptic that the Yoneda Lemma is important, but it's easier to understand than some of the more serious applications, and it conveys the general flavor of how it's used.
The first step, of course, is to get in the state of mind where you can believe, or at least pretend to believe, that the distributivity of $\times$ over $+$ in FinSet is something that deserves explanation - not just something "obvious".
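To sketch the kind of Yoneda argument meant here (for finite sets $A, B, C$, with every isomorphism natural in $X$):

```latex
\begin{aligned}
\mathrm{Hom}(A \times (B + C), X)
  &\cong \mathrm{Hom}(B + C, X^A) \\
  &\cong \mathrm{Hom}(B, X^A) \times \mathrm{Hom}(C, X^A) \\
  &\cong \mathrm{Hom}(A \times B, X) \times \mathrm{Hom}(A \times C, X) \\
  &\cong \mathrm{Hom}((A \times B) + (A \times C), X).
\end{aligned}
```

Since these isomorphisms are natural in $X$, the Yoneda lemma (in its "objects with naturally isomorphic hom functors are isomorphic" corollary) gives $A \times (B + C) \cong (A \times B) + (A \times C)$.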
Thanks everyone! That's a lot of great examples and references! I'll have to refer back to this thread as I learn more.
Leopold Schlicht said:
Also, often it is easier to prove $\mathrm{Hom}(-, A) \cong \mathrm{Hom}(-, B)$ or $\mathrm{Hom}(A, -) \cong \mathrm{Hom}(B, -)$ than to exhibit an isomorphism $A \cong B$ by hand.
That makes sense!
For the record, as someone interested in imaging, the emphasis on understanding an object by its interactions is exciting: in imaging we usually just get to see how an object interacts with other objects, and don't get to see the object itself. I am hoping the Yoneda lemma might be helpful for thinking about imaging for this reason.
David Egolf said:
Leopold Schlicht said:
Also, often it is easier to prove $\mathrm{Hom}(-, A) \cong \mathrm{Hom}(-, B)$ or $\mathrm{Hom}(A, -) \cong \mathrm{Hom}(B, -)$ than to exhibit an isomorphism $A \cong B$ by hand.
That makes sense!
You can see that Emily Riehl proves the distributive law using just this method.
This might be a bit too specific and simple of an application, but it is one that I particularly like - hope there's something in it for you as well.
The example is from coalgebraic logic, which is a general, category-theoretic framework allowing one to reason uniformly about (modal) logics interpreted over all kinds of transition structures (Kripke frames, Markov chains, automata...). This is all (mostly!) uniform in the kind of transition structure under consideration, which is encoded in a "type functor" $T$, which in its simplest form is an endofunctor on $\mathbf{Set}$.
Now, in order to define a modal logic for our transition systems (which are $T$-coalgebras), we need a propositional logic together with modalities. Skipping over some of the details of how the propositional logic is defined (and how the modalities are added precisely), we need a way to define the semantics of modalities. The approach of coalgebra considers modal formulas to be "propositions/predicates on the possible one-step transitions of a transition system". This fits in with what is done in e.g. basic modal logic over Kripke frames (which can be treated as coalgebras for the covariant powerset functor $\mathcal{P}$), where asserting $\Box\varphi$ means $\varphi$ is true at all successor states (which are all one-step transitions from the original state). In a sense, the $\Box$-modality is mapping a proposition $U \subseteq X$ (which can be treated as the set of states at which $\varphi$ holds) to a set of sets of states, namely the set of all sets $V$ such that $V \subseteq U$. It can be verified that $\Box$ is a natural transformation $2^{(-)} \Rightarrow 2^{\mathcal{P}(-)}$ (of contravariant functors on $\mathbf{Set}$). Generalizing this, a (unary) modality for coalgebra type $T$ is a natural transformation $2^{(-)} \Rightarrow 2^{T(-)}$.
This is where the Yoneda lemma comes in. Applying it, we find that the (unary) modalities for our type functor $T$ are in correspondence with the elements of $2^{T(2)}$, i.e. with subsets of $T(2)$ (in which we apply $T$ to the two-element set $2$). This allows us to do something that I find amazingly useful: we can systematically "search" through $2^{T(2)}$ and define modalities for any kind of transition system. In fact, for many functors (such as e.g. $\mathcal{P}$), we get that there are only finitely many modalities one could consider (16 unary modalities for Kripke frames!). We can then also show that we have a kind of "normal form" for our logics, since it can be shown that each modality is a (possibly infinite) join/union/disjunction of certain "elementary" modalities.
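A hedged Haskell sketch of this correspondence, with the powerset functor modelled by lists for simplicity (the names `Lifting`, `toLifting`, `fromLifting` and `box` are mine, purely for illustration):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A unary modality ("predicate lifting") for a type functor t: it turns a
-- predicate on states into a predicate on one-step t-structured behaviours,
-- naturally in the state space x.
type Lifting t = forall x. (x -> Bool) -> t x -> Bool

-- The Yoneda correspondence: liftings for t are exactly the functions
-- t Bool -> Bool, i.e. elements of 2^(T 2), i.e. subsets of T 2.
toLifting :: Functor t => (t Bool -> Bool) -> Lifting t
toLifting spec p = spec . fmap p

fromLifting :: Lifting t -> (t Bool -> Bool)
fromLifting l = l id

-- Example: with lists standing in for the powerset functor (Kripke frames),
-- the box modality is picked out by the subset of P(2) of all-True sets.
box :: (x -> Bool) -> [x] -> Bool
box = toLifting and   -- "the predicate holds at every successor"
```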
There is a lot that I left out of course, and some care needs to be taken (e.g. this really is only about modalities which concern only a single step of transitions), but when I learned this it really stuck with me for some reason. In case anything's unclear about this, I would be glad to try (and potentially fail) to explain it further :)
I'm surprised no one has mentioned Cayley or Dedekind's theorems as special cases (for groups and posets respectively).
Also, the fact that the Yoneda embedding $y : C \to \hat{C}$ is full and faithful immediately implies that $C$ is a dense subcategory of $\hat{C} = [C^{\mathrm{op}}, \mathbf{Set}]$. This means that every presheaf $P$ on $C$ is the colimit of the diagram $\mathrm{el}(P) \to C \xrightarrow{y} \hat{C}$ over its category of elements. Thus, every functor $F : C \to \mathcal{E}$ to a cocomplete category $\mathcal{E}$ (essentially) uniquely (left Kan) extends to a cocontinuous functor $\hat{C} \to \mathcal{E}$. Thus, the Yoneda embedding realises $\hat{C}$ as the free cocompletion of $C$.
If $C$ is some category of basic shapes or spaces, then $\hat{C}$ consists of all the possible formal gluings of these shapes/spaces. Usually you want to impose some colimit relations, giving you a sheaf subcategory $\mathrm{Sh}(C) \subseteq \hat{C}$. This is the starting point for a lot of contemporary geometry (manifolds, sheaves, schemes, stacks, etc.).
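In formulas, a sketch (writing $y : C \to \hat{C}$ for the Yoneda embedding and $\mathrm{el}(P)$ for the category of elements of a presheaf $P$):

```latex
P \;\cong\; \operatorname*{colim}_{(c,\,x) \in \mathrm{el}(P)} y(c),
\qquad
(\mathrm{Lan}_y F)(P) \;\cong\; \operatorname*{colim}_{(c,\,x) \in \mathrm{el}(P)} F(c)
\;\cong\; \int^{c \in C} P(c) \cdot F(c),
```

where the second chain of isomorphisms spells out the left Kan extension of $F : C \to \mathcal{E}$ along $y$ (the coend formula uses copowers in $\mathcal{E}$).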
Nima Motamed said:
This might be a bit too specific and simple of an application, but it is one that I particularly like - hope there's something in it for you as well.
The example is from coalgebraic logic, which is a general, category-theoretic framework allowing one to reason uniformly about (modal) logics interpreted over all kinds of transition structures (Kripke frames, Markov chains, automata...).
I assure you the example is not too simple, for me at least! This application really grabs my interest. Transitioning between different states, where different statements are true in different states, sounds like something I want to learn more about. To relate this to image reconstruction - which is my big area of interest - different states could correspond to having collected different observations of an unknown object, and the statements (with varying truth values across states) could be along the lines of "Given the observations obtained in this state, the unknown object could have property $P$".
I would like to try and understand what you said. Some of my questions may be a bit basic. I welcome any explanations you are willing and able to share!
This is all (mostly!) uniform in the kind of transition structure under consideration, which is encoded in a "type functor" $T$, which in its simplest form is an endofunctor on $\mathbf{Set}$.
I've seen people talk about types, using notation like $x : \mathbb{Z}$ to mean $x$ is of the type "integers". I've also seen how we can label things in a fibred(?) way using functors: for a functor $p : E \to B$ we can say $e$ has label (type?) $b$ if $p(e) = b$. Is this the kind of functor you are talking about here?
Now, in order to define a modal logic for our transition systems (which are $T$-coalgebras), we need a propositional logic together with modalities. Skipping over some of the details of how the propositional logic is defined (and how the modalities are added precisely), we need a way to define the semantics of modalities.
I had not heard of modal logic before! Checking the nLab, it looks like it's a setting for doing logic with additional fancy operators that encode more complicated properties of a statement (e.g. $\varphi$ could be true; $\varphi$ will eventually be true). That's really cool!
The approach of coalgebra considers modal formulas to be "propositions/predicates on the possible one-step transitions of a transition system".
Let's see. Say $\varphi$ is a statement in a given state (it is either true or false in that state). Then as we move to another state, its truth value could change in some way. If I understand, a modal formula is a statement about how this change in truth values can occur. (e.g. a modal formula could be: if $\varphi$ was true in state $s$, it becomes false in state $s'$).
This fits in with what is done in e.g. basic modal logic over Kripke frames (which can be treated as coalgebras for the covariant powerset functor $\mathcal{P}$), where asserting $\Box\varphi$ means $\varphi$ is true at all successor states (which are all one-step transitions from the original state).
I see. Here $\Box\varphi$ is a bit more complicated, in that it talks about how the truth value of $\varphi$ in numerous immediate successor states is related to its truth value in the starting state.
Maybe I'll pause here before trying to understand the rest, to give people a chance to elaborate/clarify, should they wish.
David Egolf said:
I assure you the example is not too simple, for me at least! This application really grabs my interest. Transitioning between different states, where different statements are true in different states, sounds like something I want to learn more about. To relate this to image reconstruction - which is my big area of interest - different states could correspond to having collected different observations of an unknown object, and the statements (with varying truth values across states) could be along the lines of "Given the observations obtained in this state, the unknown object could have property $P$".
Yes, this sounds like a perfect example of what coalgebra, modal logic and therefore coalgebraic modal logic are equipped to handle. Nice one!
I would like to try and understand what you said. Some of my questions may be a bit basic. I welcome any explanations you are willing and able to share!
I would be glad to; though it might be best to talk about the basics and specifics of modal logic, coalgebra and coalgebraic logic in a different topic, so the focus here can stay on applications of the Yoneda lemma. I'll answer the following here though, just for clarity :)
I've seen people talk about types, using notation like $x : \mathbb{Z}$ to mean $x$ is of the type "integers". I've also seen how we can label things in a fibred(?) way using functors: for a functor $p : E \to B$ we can say $e$ has label (type?) $b$ if $p(e) = b$. Is this the kind of functor you are talking about here?
Not precisely: I am using "type" in quite a loose sense, as something describing the transition structures we are interested in. The functor $T$ is really just any endofunctor, though sometimes we make assumptions about its properties if needed. The functor can be considered to represent all the different ways a single step of transitions can look in the transition structure we're interested in. Take Kripke frames, which are just unlabelled non-deterministic transition systems, or more simply just binary relations $R$ on a set $X$. Such a binary relation can equivalently be treated as a function $c : X \to \mathcal{P}(X)$, with $c(x) = \{ y \in X \mid x \mathrel{R} y \}$, i.e. the set of all successor states of $x$. A single step of transitions in a Kripke frame really is just a set of successor states. So Kripke frames are $\mathcal{P}$-coalgebras.
If you would be interested in deterministic automata for some alphabet $A$, then you can verify that these are precisely coalgebras for the functor $X \mapsto 2 \times X^A$, ignoring the requirements of a finite set of states and a distinguished initial state.
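A rough Haskell rendering of these two examples, just to fix the shapes (lists stand in for the powerset functor, and the names are mine):

```haskell
-- A T-coalgebra: a state space x together with a one-step transition map.
newtype Coalg t x = Coalg { step :: x -> t x }

-- Kripke frames: coalgebras for the powerset functor (modelled by lists),
-- sending a state to its set of successor states.
type Kripke x = Coalg [] x

-- One-step behaviour of a deterministic automaton over alphabet a:
-- an acceptance bit together with a next state for every input letter.
newtype DetStep a x = DetStep (Bool, a -> x)

-- Deterministic automata (ignoring finiteness and the initial state)
-- are coalgebras for  X |-> 2 × X^a.
type DFA a x = Coalg (DetStep a) x
```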