You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
This is exercise 2.1.16:
Let G be a group considered as a category and [G,Set] be the category of functors from G to Set (a.k.a. left G-sets). What interesting functors are there between Set and [G,Set]? Which, if any, are adjoint to each other?
I get how open-ended questions like this can be beneficial, but I also hate how easy it would be to completely miss the intended insights.
The only candidates I spotted are the functor (call it U) which "forgets" the G-action and just returns the underlying set, and, in the other direction, the functor that imbues any set with the trivial G-action, call it T.
I'm still trying to puzzle out whether or not they're adjoint in one order or the other, but firstly I just wanted to ask what, if any, other functors I am missing.
By the way, is there some trick to spot which order they might be adjoint in? Since free ⊣ forgetful, one might wonder if U is the right adjoint. But free functors for algebras seem like they are "elaborating," and the trivial action is kind of a dumbing-down, so maybe T goes on the right.
Or I could be completely off and they're not adjoint in any way :/ But since any set function can be regarded as a G-equivariant map between two sets with trivial G actions, it seems like it is an adjoint relationship.
Your input is appreciated!
When trying to think of more functors between Set and [G,Set], the first question that comes to mind for me is this: What are the representable functors from [G,Set] to Set?
If we pick some ρ in [G,Set], then what is the functor [G,Set](ρ, −)? Does this give anything interesting in some special cases, maybe? (In particular, I'm curious what happens in the case where ρ sends each element of G to the same identity function).
I don't know if this will be helpful... But after a little bit of searching online (for example, I found this), it seems like there is some connection between a functor being representable and having a left adjoint. So maybe this is relevant!
I'm not sure how much you know about coproducts, but one good way to find left adjoints out of Set uses the fact that every set is a coproduct of copies of a one-element set (so the whole left adjoint, since it must preserve coproducts, is determined by where it sends a singleton).
As for finding adjoint relationships with functors you’ve already named, just think about the adjunction relationship. What’s a map from a set S into some U(X), and what about in the other direction? Could you replace it with one involving some G-action related to S?
IIRC both of these are salient to material later in chapter 2, but I was trying to do it with just what was given in and before this section. I do plan to move on regardless of my progress with this exercise, though. This one and 2.1.17 are interesting, but they both put me in the same fraught mindset of "what on earth am I looking for?"
David Egolf said:
In particular, I'm curious what happens in the case where ρ sends each element of G to the same identity function
I think what you're describing is "giving X the trivial G action," the functor T I proposed.
Kevin Carlson said:
What’s a map from a set S into some U(X), and what about in the other direction? Could you replace it with one involving some G-action related to S?
AFAICT there isn't any obvious G action you can give to just any set beyond the trivial action. For example, if X is a 2 element set and G is order 3, there isn't anything else, right? I've been interpreting this exercise as if G cannot be varied, if that makes a difference.
Ryan Schwiebert said:
David Egolf said:
In particular, I'm curious what happens in the case where ρ sends each element of G to the same identity function
I think what you're describing is "giving X the trivial G action," the functor T I proposed.
I'm interested in the case where ρ specifies a trivial action. I think more specifically I'm curious what [G,Set](ρ, −) is like when ρ is acting on a set with a single element.
Let ∗ be the single object of G, when G is viewed as a category. When ρ acts on a singleton set, ρ(∗) is a singleton set. Further, in this case a natural transformation from ρ to some ρ′ amounts to a function f: ρ(∗) → ρ′(∗) such that this square commutes for all g in G:
[naturality square]
A function from ρ(∗) to ρ′(∗) amounts to choosing an element of ρ′(∗), because ρ(∗) has only one element. So, a natural transformation from ρ to ρ′ I think amounts to the choice of an element of ρ′(∗) that doesn't change when acted on by any element in G, using the action ρ′.
Building on this, the set of natural transformations from ρ to ρ′, namely [G,Set](ρ, ρ′), I think is in bijection with the set {x ∈ ρ′(∗) : ρ′(g)(x) = x for all g in G}.
If I didn't make a mistake, it seems like there is a functor [G,Set](ρ, −): [G,Set] → Set that sends each group action ρ′ (acting on some set ρ′(∗)) to the set of elements (of ρ′(∗)) that are invariant under that action.
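David's invariant-elements functor is easy to test on a small example. A minimal sketch in Python, using my own ad-hoc encoding of a finite G-set as a dict sending each group element to the permutation it acts by (the encoding and names are mine, not the thread's):

```python
# Hypothetical encoding (not from the thread): a finite G-set is a dict
# `action` sending each group element g to a dict that maps x to g·x.
def fixed_points(action):
    """Return the set of points left fixed by every group element."""
    xs = next(iter(action.values())).keys()
    return {x for x in xs if all(perm[x] == x for perm in action.values())}

# Example: Z/2 = {0, 1} acting on {0, 1, 2} by swapping 0 and 1.
swap = {0: 1, 1: 0, 2: 2}
ident = {0: 0, 1: 1, 2: 2}
action = {0: ident, 1: swap}  # group element -> permutation
print(fixed_points(action))   # {2}: only 2 survives every group element
```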
Ryan Schwiebert said:
IIRC both of these are salient to material later in chapter 2, but I was trying to do it with just what was given in and before this section. I do plan to move on regardless of my progress with this exercise, though. This one and 2.1.17 are interesting, but they both put me in the same fraught mindset of "what on earth am I looking for?"
I read that question and thought: "oh, Leinster is trying to get people to think about the forgetful functor sending each G-set to its underlying set and the functor sending each set to the free G-set on that set, and show they're adjoints to each other". I believe this is what should flash through the mind of any category theorist in the first second after reading this question. Then there are other things to do, like check if these functors really are adjoint, and look for others.
I wonder if at this point in his book Leinster has mentioned that the words "underlying" or "forgetful" should scream out RIGHT ADJOINT!!! and the word "free" should scream out LEFT ADJOINT!!!
If not, maybe he's trying to get you to learn this through experience.
But when I teach category theory, I just come out and say it - and illustrate it with a bunch of examples. As you basically mention, one of the first steps in getting good at adjoints is being able to quickly smell the difference between a left adjoint and a right adjoint.
David Egolf said:
it seems like there is a functor [G,Set](ρ, −): [G,Set] → Set that sends each group action ρ′ (acting on some set ρ′(∗)) to the set of elements (of ρ′(∗)) that are invariant under that action.
Oh! Yes, that's a very natural candidate isn't it! And related to trivial G actions... thanks, I will investigate it.
So, so far we've mentioned four functors:
- The underlying set of a G-set.
- The trivial action
- The fixed points
- John mentioned the free G-set on a set, but didn't actually construct what it was.
I can think of a fifth functor as well
Oscar Cunningham said:
So, so far we've mentioned four functors:
- The underlying set of a G-set.
- The trivial action
- The fixed points
- John mentioned the free G-set on a set, but didn't actually construct what it was.
It's a great exercise to do that, if one doesn't already know what it is.
Another good puzzle: show that functor 3 is adjoint to another functor on this list!
Of course, because it's easy to find functors from Set to Set, we can compose the functors on your list with such functors and get tons more functors from Set to [G,Set] or from [G,Set] to Set.
So it's good that Leinster didn't ask us to find all functors going back and forth between Set and [G,Set], because we'd be here all night!
John Baez said:
But when I teach category theory, I just come out and say it - and illustrate it with a bunch of examples. As you basically mention, one of the first steps in getting good at adjoints is being able to quickly smell the difference between a left adjoint and a right adjoint.
This is maybe not the best possible motivating example of something with the right adjoint smell though, since [secrets about this forgetful functor and its adjoint(s)!]
Ryan Schwiebert said:
Kevin Carlson said:
What’s a map from a set S into some U(X), and what about in the other direction? Could you replace it with one involving some G-action related to S?
AFAICT there isn't any obvious G action you can give to just any set beyond the trivial action. For example, if X is a 2 element set and G is order 3, there isn't anything else, right? I've been interpreting this exercise as if G cannot be varied, if that makes a difference.
Who said it had to be an action on the same set as X, though? Lots of functors between concrete categories change the underlying set.
Kevin Carlson said:
John Baez said:
But when I teach category theory, I just come out and say it - and illustrate it with a bunch of examples. As you basically mention, one of the first steps in getting good at adjoints is being able to quickly smell the difference between a left adjoint and a right adjoint.
This is maybe not the best possible motivating example of something with the right adjoint smell though, since [secrets about this forgetful functor and its adjoint(s)!]
Clearly neither of us wants to give away all the details of what we're really talking about here, so as not to spoil Ryan's fun. I still think it's useful to have a sense of smell, so that you walk into an adjunction and instantly have a sense of who is the right adjoint and who is the left adjoint. But then you learn that a right adjoint can have its own right adjoint, which Lawvere called a 'far-right' or 'fascist' functor.
I actually think that Leinster was trying to appeal to the intuition of a reader who has taken a course in group theory here! From that perspective, there is a functor "more obvious" than fixed points that goes from left G-sets to sets!
I don't know if this was @Oscar Cunningham 's "fifth functor", but I feel entitled to hint that I expect the list of functors that Leinster was expecting you to come up with should have six functors in it :wink:
The final one is the trickiest because it involves sets of functions, and requires some more advanced algebra intuition to find. If you get stuck at 5, I'm sure someone will further hint at what I'm talking about.
Kevin Carlson said:
Who said it had to be an action on the same set as X, though? Lots of functors between concrete categories change the underlying set.
Certainly I didn't suggest that... I only meant to clarify that it sounded like G was to be left alone.
Morgan Rogers (he/him) said:
I expect the list of functors that Leinster was expecting you to come up with should have six functors in it :wink:
The final one is the trickiest because it involves sets of functions, and requires some more advanced algebra intuition to find. If you get stuck at 5, I'm sure someone will further hint at what I'm talking about.
I definitely had a graduate course in group theory including actions on sets, and feel like I've gotten a lot of use out of it over the years, but I haven't the foggiest idea of what I'm missing. I'll probably kick myself when I hear it, but that's how hindsight works...
I think whatever benefits the "guess what functors the author is thinking of" game had are over for me at this point. It's coming at the expense of the rest of the exercise and the rest of the book, and the more I dwell on this part the more it will turn into an assignment remembered as a cruel joke. I'd like to give myself time for the next interesting part of the task (investigating their adjointness). So feel free to "spoil" 5 and 6 for me.
The one I was thinking of sends a group action to its set of orbits.
The sixth one is the "cofree action" sending a set X to X^G
I was curious about how one could think up the functor that sends a group action to its set of orbits. I found this interesting quote from "Category Theory in Context" (p. 108):
The limit of a functor X: BG → Set is the set of G-fixed points... and the colimit is the set of G-orbits... .
We obtain functors [G,Set] → Set by taking these limits or colimits, as indicated by these statements (see here, for example).
If I had thought to explore these general statements in the context of this exercise, presumably I would have discovered the functor that sends a group action to its set of orbits!
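The orbit construction is just as mechanical to compute on a finite example as the fixed points were. A sketch under my own encoding (a finite G-set as a dict from group elements to permutations; note the orbit of x is reached in one step, {g·x : g in G}, since the dict includes the identity):

```python
# Sketch (my encoding): compute the set of orbits X/G of a finite action.
def orbits(action):
    """Partition the underlying set into G-orbits."""
    xs = list(next(iter(action.values())).keys())
    remaining, result = set(xs), []
    while remaining:
        x = remaining.pop()
        # For a group action, the full orbit of x is {g·x for all g in G}.
        orbit = {perm[x] for perm in action.values()} | {x}
        remaining -= orbit
        result.append(frozenset(orbit))
    return set(result)

# Z/2 swapping 0 and 1, fixing 2 (same toy action as before).
action = {0: {0: 0, 1: 1, 2: 2}, 1: {0: 1, 1: 0, 2: 2}}
print(orbits(action))  # two orbits: {0, 1} and {2}
```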
The other way you could think it up is by finding it as an adjoint to one of the other functors.
Yes, it's all the functors on your list of functors going between Set and [G,Set]: for each one, either find both a left and a right adjoint, or prove some of these adjoints don't exist. (The latter is usually easy, by finding a colimit or limit that the functor doesn't preserve.)
@John Baez Pretty sure I'm convinced Free ⊣ Forgetful and Trivial ⊣ Fixed. I'm still thinking about the gaps that I haven't mentioned, but I thought it would be good to confirm the following points in case I am wrong:
All that is correct, @Ryan Schwiebert! Good work.
Are you familiar with the set of orbits of a G-set X? This set is also called the quotient X/G. Does the functor from [G,Set] to Set sending a G-set to its set of orbits have a left adjoint? Does it have a right adjoint?
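Since the free ⊣ forgetful adjunction has now been confirmed, here is a brute-force sanity check of it on a tiny example. The construction used below, F(S) = G × S with g·(h, s) = (gh, s), is the standard free G-set; the encoding and names are my own:

```python
from itertools import product

# Sketch under my own encoding: G is Z/3 under addition mod 3.
G = [0, 1, 2]
mul = lambda g, h: (g + h) % 3

S = ['a', 'b']                      # a plain set
FS = [(g, s) for g in G for s in S] # free G-set F(S) = G x S
free_act = lambda g, p: (mul(g, p[0]), p[1])  # g·(h,s) = (gh, s)

# Target G-set X: G acting on itself by translation.
X = [0, 1, 2]
x_act = lambda g, x: (g + x) % 3

# Count equivariant maps F(S) -> X by brute force.
equivariant = 0
for values in product(X, repeat=len(FS)):
    f = dict(zip(FS, values))
    if all(f[free_act(g, p)] == x_act(g, f[p]) for g in G for p in FS):
        equivariant += 1

# The adjunction F ⊣ U predicts |{equivariant maps}| = |X|^|S| = 9,
# since such a map is freely determined by where it sends (e, s).
print(equivariant)  # 9
```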
@John Baez Huh! It took me a while but my findings are that $\mathcal O: [G,Set]\to Set$ (my notation for the "orbit functor") is left adjoint to the trivial action functor T!
I was not expecting that until I realized the trivial action sort of makes the number of orbits maximal, and if one is to have a G equivariant map into such an action, the fibers are the orbits. From there things seemed to flow naturally.
I think I don't have the energy right now to investigate if $\mathcal O$ or $F$ have left adjoints, or if $\Phi$ ( my notation for the fixed point functor) or U have right adjoints. The only suggested tool at this point is preservation of initials and terminals (the full fact about limits/colimits has not been reached yet, and that doesn't seem to put a dent in any of those problems.)
The other half of the exercise asks the same question but with the categories [G, Vect_k] and Vect_k. I assume we have at least all the same functors and adjoint relationships discovered. I thought proving so might be possible with composition of adjunctions, say the free/forgetful functors between Vect and Set, but actually I wasn't getting anywhere with that.
Are there any surprising ones that are new?
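The 𝒪 ⊣ T adjunction Ryan just described has an easy finite sanity check: equivariant maps into a trivial action are exactly the functions constant on orbits, so they should biject with functions out of the orbit set. A sketch (my own encoding again):

```python
from itertools import product

# Toy check (my encoding) of O ⊣ T: equivariant maps from a G-set X into a
# trivial action T(S) should biject with functions X/G -> S.
G = [0, 1]                       # Z/2
act = {0: {0: 0, 1: 1, 2: 2},    # identity
       1: {0: 1, 1: 0, 2: 2}}    # swap 0,1; orbits are {0,1} and {2}
X = [0, 1, 2]
S = ['a', 'b', 'c']

# T(S) carries the trivial action, so equivariance f(g·x) = g·f(x) = f(x)
# says exactly that f is constant on orbits.
count = sum(
    all(f[act[g][x]] == f[x] for g in G for x in X)
    for values in product(S, repeat=len(X))
    for f in [dict(zip(X, values))]
)

# Two orbits, so Set(X/G, S) has |S|^2 = 9 elements.
print(count)  # 9
```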
By the way @Ryan Schwiebert, due to inflation you need to use double dollars, not mere dollar signs, to get LaTeX to work here. You can edit your post that way and make it beautiful.
@Ryan Schwiebert- I think it would be nice to organize all the functors you've found in one or more lists, where within each list each functor is left adjoint to the previous one. I'm having trouble extracting this information from your comments. But these are the 6 functors I've seen people point out here, and the adjunctions that I've seen mentioned:
"set of orbits of a G-set" is left adjoint to "trivial G-set on a set".
"free G-set on a set" is left adjoint to "underlying set of a G-set".
"set of fixed points of a G-set"
"G-set of functions from a set to G"
Can you organize these all into a single list of 6 functors, each left adjoint to the previous one? Or is this impossible? If it's impossible, how close can you get?
I hope experts don't dive in and solve this puzzle unless you ask them to!
But Morgan Rogers (he/him) was already nudging you and winking at you:
I feel entitled to hint that I expect the list of functors that Leinster was expecting you to come up with should have six functors in it :wink:
@John Baez So far I've got F ⊣ U and 𝒪 ⊣ T ⊣ Φ. Mixing fonts/symbology for these things is suboptimal I know, but I'm trying to help remind myself which is which.
I haven't considered "G-set of functions from a set to G," but I guess I should give it a whirl too. (I dub thee C for "cofree" for the purposes of posting...)
Thanks! You should try to keep making those two chains of adjoints longer and see if you can merge them into one. And I'm pretty sure "G-set of functions from a set to G" is the adjoint of something. (The name "cofree" positively screams that it's a right adjoint.)
Ryan Schwiebert said:
I haven't considered "G-set of functions from a set to G," but I guess I should give it a whirl too. (I dub thee C for "cofree" for the purposes of posting...)
That's quite an appropriate name..!
John Baez said:
"G-set of functions from a set to G"
I think you mean "G-set of functions from G to a set", right?
That's what I should have said. There's also a G-set of functions from a set to G, but that's not so interesting in the conversations here.
Well, I have to give up, since I think I'm again approaching the border between "let's struggle with it" and "ragequit." I just can't intuit the correct combination of action and transpose.
I had thought that the correct action for S^G would be, given g ∈ G and a ∈ S^G, the map g·a given by (g·a)(h) = a(gh).
I had thought that the transpose of a map φ: U(X) → S would be the map φ̄: X → S^G given by φ̄(x)(h) = φ(h·x), but AFAICT with those definitions φ̄ is not G-equivariant.
If you use (g·a)(h) = a(hg) then I think it doesn't work as a G-action. I thought also of inverting the g in the transpose map, but that too seemed to misorder things.
Ryan Schwiebert said:
Well, I have to give up, since I think I'm again approaching the border between "let's struggle with it" and "ragequit." I just can't intuit the correct combination of action and transpose.
I had thought that the correct action for S^G would be, given g ∈ G and a ∈ S^G, the map g·a given by (g·a)(h) = a(gh)
This formula doesn't make S^G into a left G-set, since it obeys
(gg′)·a = g′·(g·a)
rather than
(gg′)·a = g·(g′·a)
You're instead getting a right G-set. If you're shooting for a left G-set - and I seem to recall we're working with left G-sets in this conversation - then I think there are two ways to do it.
There's a total of four formulas you could try:
1) (g·a)(h) = a(gh)
2) (g·a)(h) = a(hg)
3) (g·a)(h) = a(g⁻¹h)
4) (g·a)(h) = a(hg⁻¹)
It looks like you tried 1) and 2), though there's a typo in your second try.
Right... I overlooked half the actions because of tunnel vision on the left, and either way was checking things wrongly:
((gg′)·a)(h) = a(hgg′), and from there I should not have thought it was (g′·(g·a))(h) but rather (g·(g′·a))(h) :confounded:
Investigation resumed... thank you
This stuff about left and right actions always offers plenty of scope for getting confused. In fact any group has 3 left actions on itself and 3 right actions, not counting trivial actions. (The kind we're not concerned with here involve conjugation.)
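The four candidate formulas can be checked mechanically, which takes the guesswork out of exactly the confusion John describes. A sketch with my own encoding of S3 as permutation tuples (a non-abelian group is needed, or all four formulas would pass):

```python
from itertools import product

# Which of the four formulas make S^G a *left* G-set?
G = [p for p in product(range(3), repeat=3) if len(set(p)) == 3]  # S3
def mul(g, h):                       # composition: (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))
def inv(g):                          # inverse permutation
    return tuple(sorted(range(3), key=lambda i: g[i]))

S = [0, 1]
formulas = {
    '1) a(gh)':      lambda g, a, h: a[mul(g, h)],
    '2) a(hg)':      lambda g, a, h: a[mul(h, g)],
    '3) a(g^-1 h)':  lambda g, a, h: a[mul(inv(g), h)],
    '4) a(h g^-1)':  lambda g, a, h: a[mul(h, inv(g))],
}

def is_left_action(formula):
    # Check (gg')·a == g·(g'·a) for all g, g' and all a: G -> S.
    for values in product(S, repeat=len(G)):
        a = dict(zip(G, values))
        for g in G:
            for g2 in G:
                lhs = {h: formula(mul(g, g2), a, h) for h in G}
                ga = {h: formula(g2, a, h) for h in G}
                rhs = {h: formula(g, ga, h) for h in G}
                if lhs != rhs:
                    return False
    return True

left = [name for name, f in formulas.items() if is_left_action(f)]
print(left)  # formulas 2) and 3) are the two left actions
```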
@John Baez With what I think is the "right choice" ((g·a)(h) = a(hg)) I think everything works to show U ⊣ C.
If these two lists were to link up then it'd be one of C ⊣ 𝒪 or Φ ⊣ F. I've stared at them for quite a while today thinking about candidates for transpose maps.
The latter one seems less probable to me. It seems like, given a left action X with no fixed points (like G acting on itself), Set(Φ(X), A) would offer no maps to choose from while [G,Set](X, F(A)) could have lots.
For the more plausible one, I had at least one stab for producing a map from C(A) to X given a map from A to 𝒪(X), but I have trouble seeing how the other direction would go.
I guess I'm asking for a hint: is this latter adjunction supposed to work? Or am I totally off base with the ideas above?
As a side note, I find it completely bonkers these functors fit together even as nicely as we've seen so far. Although, I guess Set, G and [G, Set] are all fairly "pure" categories so maybe one should expect exceptionally good behavior...
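The U ⊣ C bookkeeping above can also be sanity-checked by brute force: equivariant maps from a G-set X into the cofree G-set S^G (with the action (g·a)(h) = a(hg)) should biject with plain functions from the underlying set of X into S. A sketch with my own encoding:

```python
from itertools import product

# Toy count (my encoding) for U ⊣ C: equivariant maps X -> C(S) = S^G
# should biject with plain functions U(X) -> S.
n = 3
G = list(range(n))                     # Z/3
mul = lambda g, h: (g + h) % n
X = list(range(n))                     # G acting on itself by translation
x_act = mul

S = [0, 1]
SG = [dict(zip(G, vals)) for vals in product(S, repeat=n)]  # all a: G -> S
def c_act(g, a):                       # cofree action (g·a)(h) = a(hg)
    return {h: a[mul(h, g)] for h in G}

count = 0
for choice in product(range(len(SG)), repeat=len(X)):
    f = {x: SG[i] for x, i in zip(X, choice)}
    if all(f[x_act(g, x)] == c_act(g, f[x]) for g in G for x in X):
        count += 1

# The adjunction predicts |S|^|U(X)| = 2^3 = 8 equivariant maps.
print(count)  # 8
```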
Yes, C is right adjoint to the underlying set functor from [G,Set] to Set, which is why it deserves to be called "cofree". Good work!
I guess I'm asking for a hint: is this latter adjunction supposed to work? Or am I totally off base with the ideas above?
I don't really know; I'd have to think about it. I just find it hard to believe that the two chains of adjunctions you've found so far don't link together. You say that Set and [G,Set] are "fairly pure" categories but that's a ridiculous understatement: they are exceptional jewels in the firmament of mathematics, among the best things in the universe, so we should expect that everything works out wonderfully well.
I could throw around some jargon and say we're dealing with an essential geometric morphism between Boolean presheaf categories. Basically this means we're in a situation that's really famous for having interesting long-ish chains of adjoint functors. Someone like @Morgan Rogers (he/him) or @Reid Barton could jump in and say a lot more, but they've wisely refrained from doing that, merely giving us a few hints here and there, because they both know it's better if we fiddle around and work out this stuff from scratch, while they watch us and smile knowingly.
Since I'm thinking about fifteen things at a time these days, I tend to forget what you've done unless I see it summarized in a nice little chart. So let me make that chart.
You've got two chains of adjunctions:
F ⊣ U ⊣ C and 𝒪 ⊣ T ⊣ Φ
so there are two possible ways to glue them together into one really long chain.
Okay, I agree that the "cofree G-set on a set S",
C(S) = S^G,
looks like it might possibly be adjoint to the "set of orbits of a G-set X",
𝒪(X) = X/G.
And given your two chains of adjunctions, the only possibility is that C is left adjoint to 𝒪.
So, as you've already said, our job is to think about whether
[G,Set](C(S), X) ≅ Set(S, 𝒪(X))
If you can find a map going one way, that's already a huge sign that it's going to work. We are standing in a gold mine here. And when you're standing in a gold mine and you see something shiny in the dirt, there's a good chance you've found gold. What's your idea for a map going one way?
I should also note that if you want to show that adjoints to some of these functors don't exist, you need to:
- consider some limits and colimits which might fail to be preserved by the respective functors
John Baez said:
that's a ridiculous understatement: they are exceptional jewels in the firmament of mathematics,
You'd have the experience to say so, so I believe! When making my comment, I was thinking that other things like categories of groups and modules may be "more crystalline."
If you like [[toposes]], a category of G-sets is a very simple and beautiful example. @David Egolf is writing articles here about another classic example: the category of sheaves on a topological space.
If you like [[abelian categories]], a category of R-modules for a ring is a very typical example.
And if you don't know what toposes and abelian categories are, these examples are very good ways to start getting a feel for them.
The category of groups is wholly different from either of these, but it's a good example of an [[algebraic category]].
@John Baez Well, given an arrow f̄: C(A) → X, it seemed plausible that f: A → 𝒪(X) might be given by f(a) = q(f̄(k_a)), where k_a is the function on G that is constantly a and q projects elements of X to their orbits.
But it seems less and less likely that this is invertible. The candidate in the other direction seemed to require picking orbit representatives (needing to 'choose' already made me doubtful). Given f: A → 𝒪(X) and b: G → A, the candidate I tried was to send b to a chosen representative of the orbit f(b(e)). This didn't work out, more or less I think, because we need not have f̄(g·b) = g·f̄(b). I had been hoping something about equivariant maps coming out of C(A) would make the choices irrelevant, but I haven't seen why.
I think my leads on this have dried up.
I would find this easier to understand if you told me the domain and codomain of f̄, and what b is an element of. I can try to figure out what are the only possibilities that make sense, but it's considered good style to introduce maps by telling us their source and target, and elements by saying what they're elements of. Then the reader can focus on more interesting issues, instead of the detective work needed to figure out those things.
Morgan Rogers (he/him) said:
- consider some limits and colimits which might fail to be preserved by the respective functors
I'm trying to stick with the tools I've been given thus far in the book (I don't officially have the fact about adjoints preserving limits/colimits, or equalizers, yet). But I may circle back later to try this hint out. I think the last thing I'd like to try before I move on is to determine whether C ⊣ 𝒪 is true or not.
@John Baez ok, i edited to specify those.
Thanks. So you're looking for a "candidate in the other direction": a recipe which, when given an arbitrary function f: A → 𝒪(X), produces a G-equivariant map f̄: C(A) → X.
In other words, given an arbitrary function f: A → 𝒪(X) and an arbitrary function b: G → A, we need to produce an element of X, which will be called f̄(b). And then we need to check that
f̄(g·b) = g·f̄(b)
to show f̄ is G-equivariant.
As you note, it seems like a severe uphill climb to get an element of X out of f and b - it looks bad.
f and b are begging to be composed, and that'll give f∘b: G → 𝒪(X). But what use is that?
So this looks bad. But as Morgan was hinting, reasoning of this general sort - fiddling around with the things we can do - can't really prove that a functor has no left adjoint. This sort of reasoning lets us either find the left adjoint if it exists, or convince us that it doesn't exist if we're unable to find it. But to put the nail in the coffin, and show that some functor has no left adjoint, we need to assume our functor has a left adjoint and get a contradiction somehow. (This is legitimate even intuitionistically!) And the simplest way is often to show our functor fails to preserve some colimit.
I'm definitely glad you all convinced me to press harder on this problem, but I think the time has come for me to move on. There's just one more problem in this section I'd like to tangle with, and I'll type it up tonight, hopefully.
Good! By the way, I think you've seen a great example of this phenomenon:
Whenever we have a functor F: C → D between categories we get a functor
F^*: [D, Set] → [C, Set]
given in the obvious way: given H: D → Set we say F^*(H) = H ∘ F
Such a functor always has both a left adjoint and a right adjoint!
In particular we can think of a group as a one-object category and apply this idea to groups.
Any group G has homomorphisms to and from the trivial group 1, say
p: G → 1 and i: 1 → G
But [1, Set] ≃ Set. So, we get functors
p^*: Set → [G, Set] and i^*: [G, Set] → Set
each of which has both a left and a right adjoint.
This accounts for the 6 functors you found, and why they form two separate adjoint strings, each 3 long.
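Writing p: G → 1 and i: 1 → G for the two homomorphisms, the bookkeeping can be summarized as follows (the matching of each Kan extension with the thread's names F, U, C, 𝒪, T, Φ is my own tabulation, though it agrees with what was proved above):

```latex
\mathrm{Lan}_i \;\dashv\; i^* \;\dashv\; \mathrm{Ran}_i
\qquad\text{is}\qquad
F \;\dashv\; U \;\dashv\; C
\qquad (i \colon 1 \to G),
\\[1ex]
\mathrm{Lan}_p \;\dashv\; p^* \;\dashv\; \mathrm{Ran}_p
\qquad\text{is}\qquad
\mathcal{O} \;\dashv\; T \;\dashv\; \Phi
\qquad (p \colon G \to 1).
```

Here i^* forgets the action, p^* equips a set with the trivial action, and the outer functors are the left and right Kan extensions along i and p.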
Since Ryan is moving on, it's worth pointing out that the converse is true too: a functor between categories of presheaves on idempotent complete categories is necessarily induced by a functor... which gives another way of deducing whether or not more of the functors are adjoint!
Presumably you mean a cocontinuous functor between categories of presheaves?
John Baez said:
Such a functor always has both a left adjoint and a right adjoint!
I am curious as to how one could prove this! This seems like a very handy result!
I'm also curious if this result generalizes to the case where we switch out Set for some other suitably nice category.
@David Egolf They relate to left and right Kan extensions (I think). And, if I remember correctly, they are also named Σ and Π. The reason behind those names is that it also relates to logic. Another interpretation exists in terms of data migration. I have to dig a bit to find the papers I read that from, but @Ryan Wisnesky gave a very good talk at the Zulip CT Seminar.
Ah, thanks @Peva Blanchard! With the help of your comment, I was able to find the following in "Category Theory in Context" (Proposition 6.1.5):
If, for fixed K: C → D and E, the left and right Kan extensions of any functor C → E along K exist, then these define left and right adjoints to the pre-composition functor K^*: E^D → E^C.
Yes, and these particular right and left Kan extensions can be described very explicitly, for example using ends and coends. So you can actually 'compute' them.
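For reference, the standard (co)end formulas John is alluding to, for Kan extensions of a Set-valued functor F: C → Set along K: C → D (general recipe, not specific to this thread):

```latex
(\mathrm{Lan}_K F)(d) \;\cong\; \int^{c \in C} D(Kc, d) \times F(c),
\qquad
(\mathrm{Ran}_K F)(d) \;\cong\; \int_{c \in C} F(c)^{D(d, Kc)}.
```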
Btw, @David Egolf and @Peva Blanchard, I'll talk about one of these adjoints in Topos Theory (Part 4) when I discuss functors between categories of sheaves (which generalize presheaf categories in a sense). The nice kind of map between categories of sheaves is called a 'geometric morphism', and it consists of a functor called the 'inverse image' and its right adjoint, called the 'direct image'.
When the inverse image functor also has a left adjoint, we say we've got an 'essential' geometric morphism.
So, the stuff we've been doing in this thread can be seen as a warmup for topos theory.
Nathanael Arkor said:
Presumably you mean a cocontinuous functor between categories of presheaves?
Nope, those are more general and induced by profunctors.
I think I'm misunderstanding: how is a cocontinuous functor more general than an arbitrary functor?
A cocontinuous functor from presheaves on A to presheaves on B is more general than an arbitrary functor from A to B.
Aha I now understand the correction you were making! I did indeed mean "a functor between categories of presheaves with a left and a right adjoint" or (in the context of the preceding messages) "such a functor"
Just as an update, the exercise I went on to says:
Exhibit a chain of adjoint functors
between Set and [O(X)^op, Set], where O(X) is the poset of open subsets of a topological space X, and the middle functor Δ sends a set A to the presheaf which sends each open subset of X to A and (I think) each morphism to the identity morphism on A.
I got candidates for the functors on either side of Δ already that seem like they will work out. Will drop back in if I get stuck.
Cool. You are already an expert on chains of adjunctions between presheaf categories and the category Set, though you were looking at presheaves on the group G instead of on the poset O(X).
Using Γ defined by Γ(P) = P(∅) and Λ defined by Λ(P) = P(X), I was able to show Λ ⊣ Δ ⊣ Γ. (Yes, I remembered to define these functors on arrows too, I just am abbreviating.)
I haven't gotten very far with candidates for the remaining adjoints, although given a set A, a couple of presheaves built from the open sets themselves crossed my mind. I did not catch a connection yet with those. Maybe the fact that the elements of the poset are sets is a red herring?
I'm circling back to understand this comment fully for inspiration in case it helps me out here.
That comment implies that you'll get a functor
Set → [O(X)^op, Set]
with both a left and right adjoint from any functor O(X)^op → 1.
It also implies that you'll get a functor
[O(X)^op, Set] → Set
with both a left and right adjoint from any functor 1 → O(X)^op.
Do you see what all the functors O(X)^op → 1 are, and what all the functors 1 → O(X)^op are? They're both rather easy to describe.
Sorry for the not-very-smart question, but I am having trouble visualizing: what would a functor between Set and [G,Set] constitute?
Whether or not it's very smart, it's not a very clear question. Such a functor is constituted by, of course, a choice of G-set for every set and a choice of G-set morphism for every set morphism, respecting identities and composition. Are you asking for an example of such a functor, maybe?
Six examples were discussed in this thread; they were listed here but described individually earlier.
Kevin Carlson said:
Whether or not it's very smart, it's not a very clear question. Such a functor is constituted by, of course, a choice of G-set for every set and a choice of G-set morphism for every set morphism...
i.e. a natural transformation, I suppose?
Jencel Panic said:
i.e. a natural transformation, I suppose
Right. A natural transformation, because that's the type of arrow in the target category of the functor. But specifically in this case, that transformation amounts to an equivariant map between G-sets.
@John Baez From the unique functor O(X)^op → 1, the induced functor in my case is Δ.
For the (multiple) functors 1 → O(X)^op, they become "evaluate at U" functors, meaning P ↦ P(U) where U ∈ O(X). The two other functors I found were of that type.
Unfortunately I still am at a loss for candidates for a left adjoint to Λ and a right adjoint to Γ...
I'm losing track of notation... let me go back to earlier comments and try to figure out what you're talking about.
Okay, one thing you want is a candidate for some functor, and all you know about it is that it should be right adjoint to some functor called Γ. So if I can figure out what Γ is, I'll know what you're trying to do!
Okay, piecing together the puzzle, it seems you said
Γ is the functor that sends any presheaf P to P(∅).
So you are looking for a functor
∇: Set → [O(X)^op, Set]
with a natural isomorphism
Set(Γ(P), A) ≅ [O(X)^op, Set](P, ∇(A)).
Is that right?
If I haven't screwed up already, this means you are looking for a functor ∇ with a natural isomorphism
Set(P(∅), A) ≅ [O(X)^op, Set](P, ∇(A))
for every P ∈ [O(X)^op, Set], A ∈ Set.
(It would be really helpful if you posed self-contained questions. So far I've just been figuring out what one of your two questions is.)
Hmm, maybe there's something wrong. If I'm reading you right, you've got a functor Δ that sends any set A to the constant presheaf on O(X) with value equal to A. You said the functor Γ that's right adjoint to Δ has
Γ(P) = P(∅)
for every presheaf P on O(X). But I seem to be getting
Γ(P) = P(X).
I could be mixed up. But if we get Γ wrong, we won't be able to calculate its right adjoint correctly. So this is worth straightening out.
In other words, you seem to be claiming that for every set A and every presheaf P we have a natural isomorphism
[O(X)^op, Set](Δ(A), P) ≅ Set(A, P(∅)),
where Δ(A) is defined by Δ(A)(U) = A for all open sets U, while I seem to be getting
[O(X)^op, Set](Δ(A), P) ≅ Set(A, P(X))
I agree that seems wrong.
I thought I was being very meticulous when I wrote this and this but maybe it's worth linking them together.
Given a morphism of sets , I set out to produce the transpose map (a natural transformation) .
For each open set of , we need to specify a set morphism . The reason I chose is because there's a unique arrow in , and applying to this arrow gives you which you can follow with to get a morphism , i.e. the component of is
If instead we use you can only have an arrow and application of gets you , and that doesn't give you anything to combine with if .
The inverse of the transpose map was given by the component of each natural transformation, that is for natural transformation defining , and this seemed to work out for me.
Dually the choice of works out as a left adjoint to (for me at least.) Have I made some error in the preceding post? @John Baez @Kevin Carlson Progressing to the other two will be impossible if I already chose poorly here :)
(As an aside, if , where , then can we also say the presheaf functors John defined above and are adjoint in some order too? The reason I ask is that I proved earlier that a left adjoint to amounts to an initial object of , and a right adjoint amounts to a terminal object of , and that seems to jive with the top and bottom of winding up in my functors.)
(Also mind you both I'm just using the interesting presheaf theorem as a dowsing rod: I don't want to use it officially until I've gotten to the point of proving it, which I'm not actively doing.)
Ugh, after thinking about it more I think I know what happened: I was interpreting as . That's not officially how it's done, though, is it? , being initial in , would have the arrows departing, wouldn't it? Which means in the opposite poset I wanted arrows departing from .
Ryan Schwiebert said:
For each open set of , we need to specify a set morphism . The reason I chose is because there's a unique arrow in ...
I believe you slipped here: there's a unique arrow from to in .
@John Baez Yep... I agree, as mentioned in my last paragraph. So I think we can proceed with my old and flipped: and .
So now I have a set morphism and aim to create a natural transformation . For an object , I need to bridge this: . Since precedes all the 's I'm kind of at a loss for what to try.
Apparently I used up all my instincts on finding and . Those seemed straightforward. I've got this rudder of definitions but it's not much use in these doldrums...
An interesting fact is that if a functor has a right adjoint , this fact determines , and you can actually crank out what must be using the natural isomorphism
Simply put, this isomorphism lets you figure out what the morphisms from any to are, and this in turn lets you figure out what must be.
So in some sense there's no "creativity" required for determining a right adjoint if one has been told it exists: it's a calculation, not a matter of guesswork. I find this reassuring... though there is creativity required in finding a nice description of the right adjoint.
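John's "it's a calculation" point can be written out explicitly. (Hedged: I'm writing $\Delta$ for the constant-presheaf functor and $\nabla$ for its conjectured right adjoint; these labels may not match the thread's stripped notation exactly.)

```latex
% The defining natural isomorphism of the adjunction:
\mathrm{Nat}\big(\Delta(B),\,P\big)\;\cong\;\mathrm{Hom}_{\mathbf{Set}}\big(B,\,\nabla(P)\big)
% Taking B = 1 (a one-element set) and using Hom_Set(1, A) \cong A:
\nabla(P)\;\cong\;\mathrm{Hom}_{\mathbf{Set}}\big(1,\,\nabla(P)\big)
        \;\cong\;\mathrm{Nat}\big(\Delta(1),\,P\big)
```

Since $\Delta(1)$ is the terminal presheaf, this says $\nabla(P)$ must be (isomorphic to) the set of natural transformations from the terminal presheaf into $P$, i.e. the set of "global elements" of $P$. So the hom-isomorphism really does pin down the right adjoint, exactly as claimed.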
Let's try it a bit. We've got this functor
defined on objects by
for every , and similarly for morphisms.
Now we want a functor
with a natural isomorphism
i.e.
for every presheaf and set .
What are some things we can figure out about using this "equation"? Ultimately everything. But let's try to tease out that information by making clever choices of , and maybe choosing an easy to start with.
By the way, right now I have no intuition at all for what , so I'm not "cheating" and pretending I'm clueless - I'm actually clueless!
First something really stupid. Take . Then the left-hand side has just one element, so the right-hand side must as well, so must be the terminal presheaf, which happens to be the presheaf that assigns a one-element set to every open . So we know in this one case.
But this is no big surprise: we've just reproved that a right adjoint must send a terminal object to a terminal object.
It's also known that a right adjoint preserves products and equalizers, so we instantly know for every set built from one-element sets by taking products and equalizers. Unfortunately the only sets we get this way are one-element sets. :sad: So, this is not useful... not yet, anyway.
What about when is the empty set? Then the left side of
is empty unless , in which case it has just one element, the identity morphism.
What presheaf might be?
John Baez said:
What are some things we can figure out about ∇(S) using this "equation"? Ultimately everything. But let's try to tease out that information by making clever choices of B, and maybe choosing an easy S to start with.
I'm aware of this, of course, but to me it has all the promise of working as well as guessing what a map does based on how it behaves above the axes. But it's a better guide than nothing!
I haven't had any inspiration for new presheaves, so I'm forced to look at the ones I mentioned before. For a set , the idea of having seems to go promisingly when has one element, since I think it means there's only one natural transformation . But the same doesn't seem to be true for , since I think it amounts to saying there aren't ever any natural transformations from to , and that can't be ruled out if can happen.
OTOH with , when is empty, there is always only one natural transformation , but on the other side, won't happen if .
So... I'm still basically at square one. On this matter my brainpan has boiled dry and is now simply smoking and blackening.
I should say I did give some time to other ideas like , , but they seemed to fail right away with the checks on .
Exercise 2.2.11: Given an adjunction between categories and with unit and counit , let be the set of objects such that is an isomorphism, and be the set of objects such that is an isomorphism.
Show the restriction of to and the restriction of to is an equivalence of categories. Then describe what this does for a few specific examples of adjunctions.
The proof that there is an equivalence is actually refreshingly straightforward. I picked the free-forgetful adjunction between and to examine as suggested above, first.
My set theory feels a little rusty, but doesn't consist of the empty set along with sets with cardinality at least ? I think the other half is basically vector spaces of dimension and dimension at least .
If so, it feels like a bit of an oddball equivalence of two categories. I'm trying to find a heuristic to make sense of it.
I can't recall the details but I'm guessing the Galois connection between fixed fields and subgroups of automorphism groups interpreted this way results in the fundamental theorem of Galois theory.
The empty set shouldn't be fixed, since the free vector space on it still has a 0 vector!
As a hint remember you're not trying to show that the objects are abstractly isomorphic. It's important that the unit/counit be the isomorphism! For instance, take a set of size continuum. Can you explicitly describe the unit map ? Why will it never be an isomorphism?
Following Chris' reply, I think you should check a different type of adjunction to get a meaningful equivalence out: free-forgetful adjunctions will almost always add elements to every set that will not be in the image of the unit.
Chris Grossack (they/them) said:
The empty set shouldn't be fixed, since the free vector space on it still has a 0 vector!
Ugh, yes, how stupid of me. And on the other end, is freely generated on a single element (who cares if it used to be called "0") and so it has dimension , and after applying you just get the zero vector space, so it isn't part of .
It probably has something to do with school starting again this week and doing all this after 10:45 P.M.
But on the upside, that makes the equivalence a lot less weird than I initially thought.
Working my way gradually with responses to existing comments. Nobody need feel pressed to comment more anytime soon...
Chris Grossack (they/them) said:
As a hint remember you're not trying to show that the objects are abstractly isomorphic. It's important that the unit/counit be the isomorphism!
Oh, of course. I think the transpose of the identity is going to send each element to the linear combination in which is for the basis element "a", and elsewhere. Obviously not onto the full span, ever. So the canonical equivalence is empty on both sides for this example?
Morgan Rogers (he/him) said:
I think you should check a different type of adjunction to get a meaningful equivalence out
Like the dude at the end of Indiana Jones and the Last Crusade, I chose poorly.
Building a list of candidates to try again on.
Incidentally, the discrete-forgetful-indiscrete adjunctions between and are denoted with in Leinster's book, and now it lives rent-free in my head as "the drunk adjunctions."
Those adjunctions will be more exciting!
You can also look at the inclusion map , viewing these posets as categories. Do you see what the left/right adjoints are? What is the equivalence? (It's a bit silly here)
For another option, remember if we have an adjunction on the powersets of and . What equivalence do you get in this case?
Chris Grossack (they/them) said:
You can also look at the inclusion map ι:(Z,≤)↪(R,≤), viewing these posets as categories. Do you see what the left/right adjoints are? What is the equivalence? (It's a bit silly here)
That's a nice stepping stone. right? And in both cases, it gives an equivalence between the domain and codomain of .
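Assuming the standard answer for the inclusion $(\mathbb{Z}, \le) \hookrightarrow (\mathbb{R}, \le)$, namely ceiling as left adjoint and floor as right adjoint, the defining adjunction inequalities can be brute-force checked on a few values:

```python
import math

# For the inclusion i : (Z, <=) -> (R, <=), viewed as a functor between posets:
#   ceil is the left adjoint:   ceil(x) <= n  iff  x <= n   (n an integer)
#   floor is the right adjoint:  n <= floor(x)  iff  n <= x
xs = [-2.5, -1.0, 0.0, 0.3, 1.7, 2.0]
ns = range(-4, 5)

ok = all(
    (math.ceil(x) <= n) == (x <= n) and (n <= math.floor(x)) == (n <= x)
    for x in xs for n in ns
)
print(ok)  # True
```

The composite equivalence is indeed "a bit silly" here: restricted to the integers, both floor and ceiling are the identity.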
I think the situation is similar for closure and interior operators on a topological space , and , abusing as representing two different inclusions, one of the closed sets into the powerset, and one of the open sets into the powerset. For each , the equivalence is between its domain and codomain.
Chris Grossack (they/them) said:
For another option, remember if f:X→Y we have an adjunction f∗⊣f−1 on the powersets of X and Y. What equivalence do you get in this case?
The equivalence is between subsets of such that and the subsets of such that , I think. I don't recall ever learning any special adjective for either type... is there one?
Subsets so that are often called "saturated". And a subset $$B \subseteq Y$$ satisfies the dual property exactly when it's contained in the image of . So this tells you that subsets of are in natural bijection with saturated subsets of . In fact, it tells you the lattice structures on these collections are the same!
Chris Grossack (they/them) said:
saturated subsets
Ah yes, a very natural choice of terminology. Thank you for the suggestion.
Exercise 2.2.12 delivers equivalent conditions for an adjunction to be a reflection. I was able to prove the equivalence, although the hypotheses that contributed to each piece of the proof seemed surprisingly different from each other.
Anyhow, the second part asks you to go back and look at adjunctions to see which ones are reflections. I could use some sanity checks on what I think I found. I think the free/forgetful adjunction between Set and Vect, and the adjunction between and are not reflections.
On the other hand,
The free-inclusion and inclusion-(subset of units) between Mon and Grp both seem to be reflections.
The D-U and U-I adjunctions between Set and Top seem to be reflective.
The abelianization-inclusion adjunction between Grp and Ab seems to be a reflection.
Secondly, I'm also trying to reconcile this notion of "reflective subcategory" with "adjunction that is a reflection." I can't quite tell if the same thing is being described in two different ways.
Thirdly, no mention is made of the dual thing with the adjunction, where one could ask that the unit be an isomorphism. Does anyone get any mileage out of an "adjunction that is a coreflection"?
Ryan Schwiebert said:
I could use some sanity checks on what I think I found. I think the free/forgetful adjunction between Set and Vect, and the adjunction between and are not reflections.
That's right. I find it helps to think of a reflection as what you get when you have a category of things, a subcategory of "nice" things that have extra properties, but where maps are defined the same way, and a functor called the reflector that "makes things be nice" by imposing those extra properties. Part of the deal here is that if you make a thing be nice, and then make it be nice again, it doesn't change the second time, since it's already nice.
The example I always remember is abelian groups as a reflective subcategory of groups. Abelian groups are just groups with an extra property, namely being abelian. So a homomorphism between abelian group is just a group homomorphism between groups that happen to be abelian - that's what I mean by "maps are defined the same way". And the reflector is abelianization. If you abelianize a group that's already abelian, you're not doing anything to it.
This example also helps me remember that the reflector is a left adjoint. In this example, it's left adjoint to the forgetful functor that forgets the abelianness of a group.
None of this is like what's going on between sets and vector spaces! Nobody ever says sets are vector spaces with a nice property, or vector spaces are sets with a nice property. Sure, vector spaces are sets with extra structure - but the maps between them have to preserve that extra structure, they are not just arbitrary functions between sets that happen to be vector spaces.
On the other hand, the free-inclusion and inclusion-(subset of units) between Mon and Grp both seem to be reflections.
That doesn't sound quite right. It's true that groups are just monoids with an extra property. A group homomorphism is just a monoid homomorphism between monoids that happen to be groups. (We don't need to separately say that inverses are preserved - that follows automatically.)
But it sounds like you're claiming the reflector sends any monoid to its group of units: the submonoid consisting of invertible elements. That can't be. Taking the group of units defines a functor from Mon to Grp, but I don't think this is the left adjoint to the forgetful functor Grp → Mon. I think this is the right adjoint. A reflector needs to be a left adjoint!
Secondly, I'm also trying to reconcile this notion of "reflective subcategory" with "adjunction that is a reflection." I can't quite tell if the same thing is being described in two different ways.
Yes, they are equivalent concepts - two ways of talking about the same phenomenon.
Thirdly, no mention is made of the dual thing with the adjunction, where one could ask that the unit be an isomorphism. Does anyone get any mileage out of an "adjunction that is a coreflection"?
It sounds like you're talking about the concept of [[coreflective subcategory]]. There are some examples of those at the link. And in fact one of them is this:
the inclusion of groups into monoids, where the right adjoint takes a monoid to its group of units.
(Btw, @Ryan Schwiebert, if you instantly read my reply when I first wrote it, I have now massively rewritten it.)
@John Baez OK, I'm revisiting the trio "free-inclusion-(subgroup of units)" between Mon and Grp. I probably went too fast.
I get where you're coming from with the "nice" thing. Forgetting that a group is abelian and then abelianizing it does not cause it to grow. That was on my mind as I looked at for a group , temporarily forgetting it had inverses in , but then not needing to add any in .
I see now why the other pair is suspect. Take any finite monoid that isn't a group: its group of units will be proper, and upon inclusion back into Mon it will have shrunk, so an isomorphism will be impossible.
John Baez said:
A reflector needs to be a left adjoint!
I have to ask, since Leinster never uses the term "reflector." He classifies an adjoint pair as a "reflection" if the right adjoint is full and faithful. Is that condition equivalent to a set of conditions on the left adjoint, so that one can identify a reflection from the other member of the adjoint pair?
Sorry: as you probably guessed, given a reflective subcategory, we call the left adjoint to the inclusion of the subcategory in the larger category the reflector.
The nLab article on reflective subcategories gives 6 equivalent characterizations of reflective subcategories. Condition 6 only involves a property of the reflector, but I'm not very familiar with it: I'm more used to conditions 1, 2 and 5.
John Baez said:
6 equivalent characterizations of reflective subcategories
Excellent: I hadn't started reading syntopically yet but when I do, I should remember to use nLab.
Exercise 2.2.13 is in part the example suggested earlier here about . The kicker is it asks for a right adjoint for too!
Again, I'm at a loss for a candidate for a right adjoint. If is the right adjoint, I think it has to send the terminal object of to the terminal object of , so . Then I did a bit of thinking about covers of the initial elements of both categories and their dual counterparts along with , without anything productive arising.
I'm certain I will find the answer if I search for it, but before I throw in the towel, I'm hoping someone can provoke a eureka moment and get me to see how one can figure this sort of thing out. I feel like I've plied everything I've learned and it's frustrating to still come up empty handed for something this simple sounding.
Any takers?
Looking at , I am intrigued by the fact that this lets us translate to some equivalent condition. This catches my attention because I want to figure out what is, and is a partial description of .
What if we let be a set with a single element, so ? Does that help us figure out which elements are in ?
As a side note, because I keep getting confused about this, I think we have:
So, and .
Ryan Schwiebert said:
Exercise 2.2.13 is in part the example suggested earlier here about . The kicker is it asks for a right adjoint for too!
Again, I'm at a loss for a candidate for a right adjoint. If is the right adjoint, I think it has to send the terminal object of to the terminal object of , so . Then I did a bit of thinking about covers of the initial elements of both categories and their dual counterparts along with , without anything productive arising.
I'm certain I will find the answer if I search for it, but before I throw in the towel, I'm hoping someone can provoke a eureka moment and get me to see how one can figure this sort of thing out. I feel like I've plied everything I've learned and it's frustrating to still come up empty handed for something this simple sounding.
Any takers?
very set-theoretically, you can massage the logical statement you get from (I'm taking the exercise's notation) to get something that looks like .
...
All this notation is confusing me, but when you have a function any subset has an image in , but also another thing, less widely discussed, which I'll call the 'dual image'. So we get two maps , and one of these is the left adjoint of the inverse image map while the other is the right adjoint.
and
A more systematic way to say it, less likely to offend intuitionistic logicians:
is in the image of if some element with is an element of .
is in the dual image of if every element with is an element of .
(Working these in the order they came in.)
Josselin Poiret said:
very set-theoretically, you can massage the logical statement you get from f∗S→T (I'm taking the exercise's notation) to get something that looks like S⊂….
∀x ∈ X, (∃y ∈ S, f(x) = y) ⇒ x ∈ T
∀x ∈ X, ∀y ∈ S, f(x) = y ⇒ x ∈ T
Huh! The passage to the second line is something that I missed entirely. But I continued with which I think suggests . Using that I can get , but for whatever reason I can't get the other direction.
Suppose the right-hand side holds (). We try to show that . Now since it is obviously not in the piece. I guess that also nets us
Then what? I keep juggling things with the adjunction , but nothing works out :( It tells us that , but I'm stuck with .
Have I gone wrong somewhere here?
Ryan Schwiebert said:
Huh! The passage to the second line is something that I missed entirely.
this part is more second nature to people used to functional programming, it's just [[currying]]!
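The currying step being referenced is the logical equivalence (∃y. P(y)) ⇒ Q ⟺ ∀y. (P(y) ⇒ Q), valid because the ∃ is to the left of an implication. A brute-force check on small finite sets (hypothetical data, my choice):

```python
# Verify: (∃y∈S. f(x)=y) ⇒ x∈T   is equivalent to   ∀y∈S. (f(x)=y ⇒ x∈T)
# For booleans, `a <= b` is material implication a ⇒ b.
X = range(4)
S = {0, 2}                        # S a subset of the codomain of f
T = {1, 3}                        # T a subset of X
f = lambda x: x % 3

equiv = all(
    ((any(f(x) == y for y in S)) <= (x in T)) ==
    all((not (f(x) == y)) or (x in T) for y in S)
    for x in X
)
print(equiv)  # True
```

This is only a finite spot check, of course; the equivalence itself is a theorem of first-order logic (and the "can't curry" warning below is about a *different* ∃, one buried inside the conclusion rather than the hypothesis).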
Ryan Schwiebert said:
But I continued with ∀y∈S,(∃x∈X,f(x)=y)⟹x∈T
be careful here, the you're using in the conclusion of the implication does not refer to a bound variable! you can't curry here! rather, you end up with , i.e. and that proposition with free variable defines a subset .
(i'm intentionally being a bit cryptic here so that you can finish figuring it out)
Josselin Poiret said:
it's just currying!
It's interesting because I restrained myself from asking if it had a special name because I thought "nah that's stupid to ask." I'm glad you mention it! I have encountered currying before, and TBH I don't recognize it yet in this situation, or at least it doesn't line up with my mental model yet...
be careful here, the x you're using in the conclusion of the implication does not refer to a bound variable! you can't curry here!
Apparently I have to be more careful! And the candidate I had felt so plausible... at least it satisfies the condition. I will keep pursuing the hint tonight...
@Josselin Poiret Huh! So that's it then: . The elements with fibers contained in ?
It doesn't seem to reduce to anything more familiar... Is it known by a better name?
John Baez said:
but also another thing, less widely discussed, which I'll call the 'dual image'.
I've circled back to this. Yes, indeed, I've never run across it before. I guess most of the time we're focusing on the "forward" direction of maps but this makes sense as some sort of backward looking version.
Ryan Schwiebert said:
Josselin Poiret Huh! So that's it then: . The elements with fibers contained in ?
It doesn't seem to reduce to anything more familiar... Is it known by a better name?
no, no, that's exactly it (rewording it in terms of fibers is a very good observation and how I would describe it)! the second question of the exercise will give you some intuition about what it is.
Ryan Schwiebert said:
John Baez said:
is in the image of if some element with is an element of .
is in the dual image of if every element with is an element of .
I've circled back to this. Yes, indeed, I've never run across it before.
I don't know a really standard name for this 'dual image' thing: it seems sadly unused. You might use it in computer security or other subjects where you're trying to make things safe, like food or drug manufacture and distribution.
Suppose is some set of things, and is the set of things of type that are 'safe'. Suppose converts things of type to things of type . Then you might say that a thing of a type is safe if it can only come from things of type that are safe.
For example a company might only want to buy some food if they're sure all ingredients in that food come from safe sources.
But this application works just as well, or even better, if we generalize to a relation from to . A relation from to is a subset
Any relation from to gives various maps and . In particular, we get two maps which we might call 'image' and 'dual image':
is in the image of if some element with is an element of .
is in the dual image of if every element with is an element of .
So you might be interested in the set of people who have at least one dog that's a poodle, or the set of people all of whose dogs are poodles. (Warning: if you have no dogs, all your dogs are poodles.)
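The two constructions for a relation can be coded up directly. A sketch with hypothetical owner/dog data (names and the `owns` relation are my inventions), including the vacuous-truth warning:

```python
# Image and dual image along a relation owns ⊆ People × Dogs
people = {'ann', 'bob', 'cam', 'dee'}
dogs = {'rex', 'fifi', 'gigi'}
owns = {('ann', 'rex'), ('ann', 'fifi'), ('bob', 'fifi'), ('cam', 'gigi')}
poodles = {'fifi', 'gigi'}

def some_related(B):
    # people with AT LEAST ONE related dog in B  (the "image")
    return {x for x in people if any((x, y) in owns and y in B for y in dogs)}

def all_related(B):
    # people ALL of whose related dogs are in B  (the "dual image")
    return {x for x in people if all((x, y) not in owns or y in B for y in dogs)}

print(sorted(some_related(poodles)))  # ['ann', 'bob', 'cam']
print(sorted(all_related(poodles)))   # ['bob', 'cam', 'dee']
```

Note that 'dee' owns no dogs, so she appears in the second set but not the first, exactly per the warning: if you have no dogs, all your dogs are poodles.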
By the general yoga of category theory, we expect that these two constructions, defined using and , are likely to be left and right adjoints, respectively.
This goes back to Lawvere's incredibly important observation in his 1967 paper Adjointness in foundations: can be defined as a left adjoint, while can be defined as a right adjoint!
John Baez said:
∃ can be defined as a left adjoint, while ∀ can be defined as a right adjoint!
So (in the context of the second part of the exercise, which uses the projection ) if the left adjoint is interpreted as "including a universal quantifier on the variable in a two-variable predicate" and the right adjoint is "include an existential quantifier on the variable in a two-variable predicate," we have ways of understanding what and do. But does (also written as in my post) itself have some interpretation like that? Is there a parallel interpretation for what "does" to subsets of beyond what the definition literally says?
Josselin Poiret said:
the second question of the exercise will give you some intuition about what it is
I think I'm OK with the interpretation of the two adjoints as quantifiers, but I still am struggling with the "interpret the unit and counit" part. I think it's because I don't quite get what the meaning of the relationship between predicates in and predicates in given by is supposed to mean.
So for example members of "satisfy the predicate ", and members of some predicate contained in would also be in , so " implies ." But for whatever reason I'm not getting the significance of predicate . (Hopefully clearing up things for this one helps with the other three (co)unit maps.)
What might be a related question: while the quantifier interpretation of this exercise makes sense for the given (the projection onto from , it seems like it would make less sense for stranger functions that depend more on both inputs ( and its projection counterpart only depend on one input, while others might "entangle" them more.) I don't doubt the quantifier interpretation still exists there but I wonder if it is too complicated to be practical...
Ryan Schwiebert said:
John Baez said:
∃ can be defined as a left adjoint, while ∀ can be defined as a right adjoint!
So (in the context of the second part of the exercise, which uses the projection ) if the left adjoint is interpreted as "including a universal quantifier on the variable in a two-variable predicate" and the right adjoint is "include an existential quantifier on the variable in a two-variable predicate," we have ways of understanding what and do.
Did you just mix up left and right? I said existential quantification is a left adjoint, and you're saying it's a right adjoint.
(I always get these things backwards but I know existential quantification is a form of summation and summation is a left adjoint, while universal quantification is a form of multiplication and multiplication is a right adjoint).
I'll tackle your questions later.
I find it helpful to draw the picture: PXL_20240908_214319052.jpg
The unit of the adjunction corresponds to the inclusion of the peanut into the smaller rectangle
Ryan Schwiebert said:
but I still am struggling with the "interpret the unit and counit" part. I think it's because I don't quite get what the meaning of the relationship between predicates in X and predicates in X×Y given by p is supposed to mean.
fix the projection.
in logical terms, the operation is known as weakening: it takes a predicate and turns it into the predicate . As you mentioned before, its left adjoint is the existential quantification, so turning into . Then, is .
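The chain ∃ ⊣ weakening ⊣ ∀ for the projection p : X × Y → X can be verified exhaustively on small finite sets (predicates modeled as subsets; the example data is my own):

```python
from itertools import product

X = range(3)
Y = range(2)

def weaken(S):        # p* : predicates on X -> predicates on X×Y,  "S(x)"
    return {(x, y) for (x, y) in product(X, Y) if x in S}

def exists(R):        # ∃_p : predicates on X×Y -> predicates on X, "∃y. R(x,y)"
    return {x for x in X if any((x, y) in R for y in Y)}

def forall(R):        # ∀_p : predicates on X×Y -> predicates on X, "∀y. R(x,y)"
    return {x for x in X if all((x, y) in R for y in Y)}

def subsets(S):
    S = list(S)
    return [frozenset(S[i] for i in range(len(S)) if (m >> i) & 1)
            for m in range(1 << len(S))]

Rs = subsets(product(X, Y))
Ss = subsets(X)
left = all((exists(R) <= S) == (set(R) <= weaken(S)) for R in Rs for S in Ss)
right = all((weaken(S) <= set(R)) == (set(S) <= forall(R)) for R in Rs for S in Ss)
print(left, right)  # True True
```

Here `<=` is subset inclusion, playing the role of entailment between predicates; the two `all(...)` checks are exactly the two adjunction equivalences ∃R ⊢ S ⟺ R ⊢ p*S and p*S ⊢ R ⟺ S ⊢ ∀R.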
Unit/Co-unit solution
John Baez said:
Did you just mix up left and right?
Yes! But don't worry, I understand it's the other way around. Excuse me while I correct that... It's working-a-day-job, studying-at-night fatigue.
Josselin Poiret said:
Unit/Co-unit solution
This is all Greek to me. I evidently don't have enough mathematical logic chops to consider this at this time. I think I'm going to have to pass for now.
Spencer Breiner said:
find it helpful to draw the picture
While clear, I just don't get what the picture would mean. I think the past few responses have indicated I'm not going to get it anytime soon, and I plan to move on. Thanks for giving it a shot.
I've been looking at the last exercise in the section. It generalizes, a bit, the lemma you alluded to earlier @John Baez :
If and , and is any other category, then you have a functor like . Show (hint: use 2.2.5, which says
Given functors there's a 1-1 correspondence between a) adjunctions with F on the left, G on the right; b) pairs of natural transformations , satisfying the triangle identities. (Some notational details omitted.)
The action of on the objects was given (the obvious one) as . From that, I think I found the right action on arrows: given an arrow , I think 's component should be 's component. Using this, I found to act as a functor.
When considering , I felt sure the fact that must yield , so that . This took a lot of thinking, but I think I've established that (where epsilon is the counit for ) witnesses this (this is whiskering again?) Am I on the right track?
I have to consider the triangle identities next. I'm really feeling the burn mentally from reasoning about functors on functor categories.
Where are you getting from? The exercise just stated that is left adjoint to . This implies you have natural transformations , obeying the triangle identities... but these natural transformations don't need to be isomorphisms.
John Baez said:
Where are you getting from?
Sigh, I can see it only takes a couple of days for these sorts of hallucinations to set in. Signs of desperate grasping for footholds.
With that correction in mind, I've got , , and I need . From , I think the way forward must be something related to the unit for adjunction , right? Can't we horizontally compose the identity transformation on with to get a natural transformation ?
But now my head is splitting as to whether or not this means ... is each horizontal composition using the component of a natural transformation in ? If so I haven't gotten to naturality yet... And _then_ there's something something about the triangle equations to figure out.
Am I missing some observations that are supposed to make the computations simple? I was under the impression I shouldn't be so bogged down by now.
Has Leinster's book explained the ways you can compose functors and natural transformations:
and the rules these operations obey? If you know these, I think this exercise is not too terribly hard: everything we need to get an adjunction between and should come from the corresponding bits of the adjunction between and .
If you don't know this stuff, then you're in the position of having to make a bit of it up. It seems like that's what you're doing. In that case I'll just emphasize what I just said: each of the 4 features of the adjunction between and (the unit, the counit, and the two triangle identities) should come from the corresponding feature of the adjunction between and .
I believe there's a high-powered way to do this exercise which makes it super-quick, but it uses a bit of 2-category theory - namely, any 2-functor maps adjunctions to adjunctions.
For sure I have the definitions of the first two. I understand the third as being a special case of horizontal composition.
John Baez said:
the rules these operations obey?
I know about the interchange law but haven't applied it anywhere yet... it looks very relevant for something like this, here. I'm happy to keep working on it on my own. If my toolbox (1-3 and the interchange law) is missing something obvious for this problem, please let me know.
Good! It sounds like you know all the rules. I hope you also know how to draw these diagrams. Whiskering is indeed a special case of horizontal composition, so you can derive its rules from those for horizontal composition, but I believe it's so important for this problem that it's worth special attention.
For example:
I said each item of the adjunction you seek, , came from the corresponding item of the adjunction you have, . But I was forgetting the contravariance: the counit of the adjunction you seek comes from the unit of the adjunction you have and the unit of the adjunction you seek comes from the counit of the adjunction you have. The triangle identities probably also get switched around in this way.
To me it's very important to draw how the counit and unit arise via whiskering from and , because then - I believe - the whole proof can be carried out using pictures.
So one question is whether you know how to draw categories (dots), functors (arrows between dots), and natural transformations (globes between arrows).
For an example of what I'm talking about, see the picture of the interchange law here.
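For reference, the interchange law can also be written symbolically (standard notation, supplied here since the thread's formulas did not survive the archive): for vertically composable natural transformations $\alpha, \alpha'$ and $\beta, \beta'$ that are also horizontally composable,

```latex
(\beta' \circ \beta) * (\alpha' \circ \alpha) \;=\; (\beta' * \alpha') \circ (\beta * \alpha)
```

where $\circ$ denotes vertical composition and $*$ denotes horizontal composition.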
@John Baez Do you have something pithy to convey why the action (and moniker) of "whiskering" is handy? I can tell the name was chosen with something in mind, but I feel like there's an inside joke nobody's telling me about or something. The diagrams like the ones we're discussing don't seem to explain the term, but I know there are alternative forms of such diagrams that might.
A 2-morphism looks like a mouth, and we can attach 1-morphisms that look like whiskers on either side:
John Baez said:
- the counit ϵ∗:F∗G∗⇒1 is whiskering with the unit ϵ:1⇒GF
Isn't it unit to unit, counit to counit? With we'd expect the counit to be , corresponding to to be related to the counit , right? I think the reversal of the letters in the adjunct relationship cancels out the reversal of the letters when they are applied.
John Baez said:
like whiskers on either side
Feels like a stretch but I have to admit it makes for a dali-ghtful pic.
This really is the origin of the term whiskering - not the picture of Dali, but the idea that a whisker is a line segment protruding from a bigon:
I've made some strides thinking in terms of the diagrams, but I'm still blocked. Unless I misunderstood something earlier, the candidate for the counit of is whose components, for each functor are given by whiskering , where is the counit for .
As a natural transformation, is fit to be an arrow between and . What I'm stuck on is showing that if is another functor and , the rest of the diagram commutes, so that is a natural transformation. Thus I'm working on showing .
Maybe this is where I'm going wrong: I _think_ that is just the whiskering . That led me to consider applying the interchange law to , but when I did that I thought I was getting which puzzles me because a) it no longer references either of the Phi's and b) because I'm not completely sure how to link with it.
Can you always write ? If so, it seems like interchange applies to too and that gets you maybe. But again the Phis have disappeared and I lose confidence in my line of thought...
Ryan Schwiebert said:
I've made some strides thinking in terms of the diagrams, but I'm still blocked. Unless I misunderstood something earlier, the candidate for the counit of is whose components, for each functor are given by whiskering , where is the counit for .
That sounds right.
If I were talking to you about this in person, I'd want all these entities like to be drawn as 2-dimensional diagrams, and the things you want to show to also be drawn as diagrams. It's far harder to think about this sort of argument using 1-dimensional strings of symbols. Unfortunately it's a pain to draw diagrams here (though some people use their cell phone to upload hand-drawn pictures, which is probably the fastest way).
For example, when you say , I'm imagining an arrow whiskered onto a triangle, namely , which has one edge labelled on top and two edges labelled and on the bottom. Does that match your picture?
(I hope you know you can draw a natural transformation as a triangle in this way, though there are other equally fine drawing styles.)
What I'm stuck on is showing that if is another functor and , the rest of the diagram commutes, so that is a natural transformation.
By the way it's really useful to write natural transformations as so we can instantly distinguish them from functors, drawn as . Natural transformations are arrows going between arrows, so they are 2-dimensional, so a bunch of us denote them by double arrows.
Thus I'm working on showing .
Okay, now let me take time to translate this into a picture.
By the way, I forgot to reply to this:
Ryan Schwiebert said:
John Baez said:
- the counit is whiskering with the unit .
Isn't it unit to unit, counit to counit? With we'd expect the counit to be , corresponding to to be related to the counit , right? I think the reversal of the letters in the adjunct relationship cancels out the reversal of the letters when they are applied.
Quite possibly... I'll draw this stuff and see what's up.
Okay, as a diagram this picture:
.
looks like this:
(You can click on the picture to make it bigger.)
Each side of the equation here is a vertical composite of 2 natural transformations. Each natural transformation is the whiskering of a natural transformation by a functor.
When I draw two functors as parallel lines or arcs and leave one unlabeled, it's the same as the one parallel to it.
The equation is true by the interchange law. Both sides are different ways of writing this natural transformation:
@John Baez :astonished: So I did get it right! Well, that's encouraging... now to invest some time in the triangle equalities...
OK, I think I got one half of the triangle identity for :
One 'eureka' for me was figuring out (for , as in the pic) that (the second one is consecutive whiskering.) After that, the diagrams pointed the way to combine the two things I was to verify.
I can't tell you guys how much I've learned struggling with this handful of problems. I'm really glad I've got the opportunity to do it and I'm thankful for your hints. Now... if only I would gain some velocity :wink:
Ugh, for whatever reason, the image closes a fraction of a second after I open it. In my zulip client, anyhow. Here's a jpeg export:
IMG_0720 2-1727992706203.jpg
I also meant to ask if my intuition that holds (associativity of whiskering on the sides of a natural transformation.) I tried to prove this at one point and wound up with a cube of commutative squares, but I'm not completely reassured I was right. That would be important in the proof above where I wrote without parens...
Actually I got the cube trying to prove associativity of horizontal composition (which would imply associativity of whiskering around a natural transformation.)
Nice, I'm glad you're working on it!
These exercises take time to do, but that's fine, because you need to learn all sorts of techniques and ideas to do them... and that's actually the main point of the exercises!
Ryan Schwiebert said:
I also meant to ask if my intuition that holds (associativity of whiskering on the sides of a natural transformation.)
Yes, it's true. Whiskering with a functor is horizontal composition with an identity natural transformation, and horizontal composition of natural transformations is associative (as you should check), so whiskering is associative in this manner.
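In symbols (my own notation, since the thread's formulas were stripped): if $\alpha : G \Rightarrow H$ with $G, H : \mathcal{C} \to \mathcal{D}$, and $F : \mathcal{D} \to \mathcal{E}$, $K : \mathcal{B} \to \mathcal{C}$ are functors, the reduction John describes is

```latex
F\alpha := 1_F * \alpha, \qquad \alpha K := \alpha * 1_K,
\qquad\text{so that}\qquad (F\alpha)K = F(\alpha K),
```

with the last equation following from associativity of horizontal composition $*$.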
Since there was already a question at math.stackexchange.com for the "what are adjunctions between groups considered as one-element categories" I posted my answer there: https://math.stackexchange.com/a/4983823/29335
Valid and in the spirit of the chapter, I hope...
Looks like someone did not like it, anyhow :(
Maybe because it didn't reach the final punchy conclusion: any adjunction between groups considered as one-element categories is an equivalence and thus gives an isomorphism between these groups.
Indeed, I saw your answer before it vanished, and I think the problem was a stylistic one: answers on math.se are expected to be definitive, not inconclusive. Don't take it personally!
That expectation goes away for questions that are so hard nobody can answer them conclusively... but here someone already had.
@Morgan Rogers (he/him) I've had a 12.5 year 150k+ career at math.stackexchange, I think I'll be ok. I definitely have had worse answers before. I took it down because I realized, after rereading, that I had skipped the OP's solution and it covered much of the same ground as I did, and failed to cover the OP's last question. I can see why someone would downvote that...
I was careful to read the existing answers, which I noticed did not make any use of the comma category approach, so I wrongly assumed that bringing it up would be novel. Sadly, this is another episode of "stupid things to miss" for me.
John Baez said:
didn't reach the final punchy conclusion: any adjunction between groups considered as one-element categories is an equivalence and thus gives an isomorphism between these groups.
Well, I thought I did _that_ at least. The answer included
Recalling that functors between groups are just group homomorphisms, this is saying that 𝑄 is an isomorphism of . Observing that natural transformations are natural isomorphisms in this case, we can argue (isomorphic in the category of functors on ) and so is also an isomorphism.
But I do feel like I failed to reach the finish line on the answer anyway. Saying merely "an adjunction between groups is an equivalence" feels really trivial to me... I had already asked and answered that question for myself during the section on units/counits.
That can't be the whole punchline, can it? Stopping there feels like it is missing something that was intended to be discovered about the current section (comma categories.) The fact the exercise appears in this chapter feels too intentional to ignore.
Well, I thought I did _that_ at least. The answer included...
Okay, sorry - I didn't even notice that part!
That can't be the whole punchline, can it?
I don't know what else there is. One could ask about going backwards, turning an isomorphism of groups into an adjunction between the corresponding 1-object categories, but that's even easier.
Ryan Schwiebert said:
Morgan Rogers (he/him) I've had a 12.5 year 150k+ career at math.stackexchange, I think I'll be ok.
Hahaha I didn't register that at all, makes my previous comment seem very condescending, sorry :sweat_smile:
I think I'm happy for now with the group question. Thanks both for helping with the feedback. I need some sanity checking on my approach for the next one.
Given Proposition: has a left adjoint iff has an initial object for every object of .
Exercise: State the dual of the proposition above. How would you prove it?
I have a feeling it shouldn't require re-arguing the proofs in the book all dually.
After thinking about it, I think there are few relevant lemmas: (I hope the ad hoc notation isn't too bad. It takes me a while to find the right )
In what follows, and
Lemma 0: is equivalently a functor
Lemma 1: Saying is an adjunction is the same as saying is an adjunction, after viewing and with Lemma 0.
Lemma 2: For comma categories, we have .
With those things in mind, it quickly follows that has a right adjoint iff has a terminal object for every object of , right?
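Stated with explicit (hypothetical) letters, since the archive dropped the formulas, the proposition and its dual might read:

```latex
\textbf{Proposition.}\quad G : \mathcal{A} \to \mathcal{B} \text{ has a left adjoint iff, for each }
B \in \mathcal{B},\ \text{the comma category } (B \Rightarrow G) \text{ has an initial object.}

\textbf{Dual.}\quad G \text{ has a right adjoint iff, for each }
B \in \mathcal{B},\ \text{the comma category } (G \Rightarrow B) \text{ has a terminal object.}
```

The dual then follows by applying the proposition to the opposite functor $G^{\mathrm{op}} : \mathcal{A}^{\mathrm{op}} \to \mathcal{B}^{\mathrm{op}}$, using that $(G \Rightarrow B)^{\mathrm{op}} \cong (B \Rightarrow G^{\mathrm{op}})$ and that terminal objects are initial objects in the opposite category, which is essentially the content of Lemmas 0-2 above.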
On a side note, 2.3.10-2.3.12 are all very fun!
This one made me do a double take. I must not be understanding something correctly. So for , apparently the thing I think of as a sequence in that goes is not of the form predicted here, via a function such that and .
I'm comfortable with sequences as functions with domain and codomain in a set . And I can understand iteration of an endomorphism of a set. But the way these two things are being mixed together here perplexes me...
Do you have a specific question? You may be getting confused by taking - it's often confusing to choose an example where two typically distinct things happen to be the same. Try instead , and consider the sequence of real numbers with
The universal property of the natural numbers guarantees that such a sequence exists.
The exercise is not saying that all sequences arise from endomorphisms!
It's saying that a point and an endomorphism uniquely determine a sequence, not the other way around :)
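As a toy illustration (my own code, not from the book): a point together with an endomorphism determines a unique sequence by iteration, and that is exactly what the universal property guarantees.

```python
def sequence_from(a, f, n):
    """Return the first n terms of the sequence x_0 = a, x_{k+1} = f(x_k),
    i.e. the sequence determined by the point a and the endomorphism f."""
    xs = []
    x = a
    for _ in range(n):
        xs.append(x)
        x = f(x)
    return xs

# The point 1 and the endomorphism x -> 2*x of the integers
# determine the sequence of powers of 2.
print(sequence_from(1, lambda x: 2 * x, 5))  # [1, 2, 4, 8, 16]
```

Note there is no inverse operation here: a sequence like 1, 2, 1, 3, ... need not arise from any single endomorphism, which is the point being made above.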
Morgan Rogers (he/him) said:
It's saying that a point and an endomorphism uniquely determine a sequence, not the other way around :)
Yes, this was exactly what I was seeing, and it was not what I was expecting. I read "let's talk about sequences... N is special to sequences" and was prepared for an intimate relationship between the two (somehow beyond the description it already had, as a function with domain N.)
But of course, I read right through recursive without realizing that was the main focus. A category of recursive sequences and not sequences in general. I haven't really thought about them since undergrad...
John Baez said:
Do you have a specific question?
It was formerly "Is the intended topic sequences or not?"
It says "A crucial property of sequences is that they can be defined recursively. [etc description]" Immediately I thought "How is that description a property of sequences? 1,2,1,3,... doesn't have that form..."
I think I wouldn't have tripped on it if it had said "a crucial type of sequences are those defined recursively..."
Thanks @Ryan Schwiebert , that's a good point. I should have said something like "some sequences can be defined recursively" instead. Rereading what I wrote, I can see how it gives the false impression that all sequences arise in the way described.
Aha, yes. This is one of those subtle math writing points. "Can be" can be used to mean "can sometimes be", but it can also be used to mean "can always be", so it can be dangerous to use this simple phrase unadorned.
:canned_food::bee:
@Tom Leinster Hey hey! Nice to see you here! If you run across any of my moping above please rest assured it's just part of my learning process and that I really am enjoying the book. Finally I'm learning in such a way that things are sticking.
My best description of that category in the exercise in which is initial ( is the successor function on ):
Let denote the category of pointed sets, and be the forgetful functor. First form the comma category . The objects are set arrows, and a morphism is a pointed-set arrow, doubled up to give a morphism in the comma category. So it's a special subcategory of the comma category.
That's the best phrasing I could think of. I came across the term inserter category while I was reading about comma categories... this is an example, no?
It's superficially similar to the image of the diagonal functor, except that the objects aren't just pairs : you can shove any set arrow between them and get a different object. But the arrows being a copied pair of a single arrow makes it look very similar.
This isn't quite right; the comma category would have for its objects triples where are pointed sets. The and could be two different sets, whereas the that you're discussing is a single set.
You're not far off in your thinking, but comma categories or inserters would be more elaborate than is necessary. I don't want to give away the answer, but since objects involve just a single set , as you say, an object might be describable as "a set together with blah-de-blah".
Todd Trimble said:
The X and Y could be two different sets, whereas the A that you're discussing is a single set.
Of course, but that's what the subcategory bit takes care of. It's the subcategory of those triples having objects where and arrows where .
Todd Trimble said:
comma categories or inserters would be more elaborate than is necessary.
I don't think the link to comma categories is unreasonable, though. I can imagine three levels of answer:
1) An elementary description, which I feel is little more than what the problem statement already outlines
2) Recognizing it as a special subcategory of a comma category, a topic introduced in the section preceding the section of the exercise.
3) Calling it an inserter category, which is not discussed in the book at all
Considering the audience and context, 3) is obviously unreasonable as a solution (I just brought it up because I ran into it.) Answering with 1) would be of course correct, but the proximity and similarity to comma categories seemed like a big hint to "do better!"
I really quite enjoyed the process of figuring out the functor and categories to use for the comma category, anyhow.
I just looked it up... these are what they call "first-order recurrence relations"? I think I can imagine how it might generalize to n-th order recurrences...
This is what I was correcting:
First form the comma category . The objects are set arrows, and a morphism is a pointed-set arrow, doubled up to give a morphism in the comma category.
Anyway, yes, you could describe it as a subcategory of a comma category.
I just looked it up... these are what the call "first-order recurrence relations"?
I suppose, but it raises the question "how do you define a first-order recurrence relation?"
What would be your answer in the style of 1)?
@Todd Trimble For 1) Just something like "objects are pairs where is an object of and is an arrow from (omission of is intentional) and a morphism from is an arrow in such that .
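Spelled out with hypothetical symbols (the original letters were stripped from the archive), the style-1 description and the initiality claim might read:

```latex
% Objects: pairs (X, f) with f : X \to X, together with a basepoint x_0 \in X.
% Morphisms h : (X, x_0, f) \to (Y, y_0, g): functions h : X \to Y with
%     h(x_0) = y_0  \quad\text{and}\quad  h \circ f = g \circ h.
% Initiality of (\mathbb{N}, 0, \mathrm{succ}): the unique morphism into (X, x_0, f) is
h(n) = f^{\,n}(x_0).
```

This is the same iteration as in the sequence discussion above: the unique morphism out of the naturals is "iterate the endomorphism starting at the basepoint."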
Sanity check, since this type of question has given me problems before:
Let be the functor sending a small category to its set of objects. Exhibit a chain of adjunctions .
This time I think I found the right candidates. I think is the functor that turns a set into a small category by making it the discrete category on (in other words, a preorder where distinct elements are incomparable), and makes an "indiscrete" category on by furnishing one arrow between every two objects (a preorder where every pair of elements is comparable.)
The part to pin down is . If I'm right about , I can see that its purpose is to furnish us a way to group objects of a small category so that we can produce functors from . Objects must be grouped so that members of separate groups have no arrows between them.
If you consider the binary relation to mean that there's an arrow in category between and , then symmetrize this relationship, then take the transitive closure, I think it should be an equivalence relation finally. This set of equivalence classes should be , and any function out of it furnishes a way to map objects of to objects in . (I omit the details for now for brevity but I can say more if needed.) I just wonder if this sounds like the right path.
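Here's a small sketch of that grouping (my own code; I represent a small category just by its set of objects and a set of (source, target) pairs for its arrows): symmetrizing the "there is an arrow" relation and taking its transitive closure amounts to computing connected components, e.g. with union-find.

```python
def connected_components(objects, arrows):
    """Partition `objects` into classes under the smallest equivalence
    relation containing 'there is an arrow x -> y' (symmetrize the arrow
    relation, then take its transitive closure, via union-find)."""
    parent = {x: x for x in objects}

    def find(x):
        # Follow parent pointers to the root, compressing the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for s, t in arrows:  # an arrow s -> t forces s ~ t
        parent[find(s)] = find(t)

    classes = {}
    for x in objects:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# Arrows a -> b and c -> b glue {a, b, c} together; d stays alone.
print(connected_components({'a', 'b', 'c', 'd'}, {('a', 'b'), ('c', 'b')}))
```

Note the symmetrization is implicit: union-find never records a direction, so a zigzag of arrows in either direction lands both endpoints in the same class.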
I haven't been rigorous enough about the idea for the equivalence relation. In fact, I'm betting once I get to some category theoretic exercises on relations things might become clearer. Symmetrization and transitive closure are probably functors on relations.
Is it at least true that one can start with a transitive and reflexive relation, then symmetrize it, then take transitive closure, and conclude the final result is an equivalence relation?
Ryan Schwiebert said:
Is it at least true that one can start with a transitive and reflexive relation, then symmetrize it, then take transitive closure, and conclude the final result is an equivalence relation?
yes.
In general, you would more directly say that you define iff there exists a path .
@Ryan Schwiebert asked:
Is it at least true that one can start with a transitive and reflexive relation, then symmetrize it, then take transitive closure, and conclude the final result is an equivalence relation?
Yes. It's easy to check that symmetrizing a reflexive relation yields a reflexive, symmetric relation, and that the transitive closure of a reflexive, symmetric relation is reflexive, symmetric, and transitive, i.e. an equivalence relation.
So when you take any reflexive relation, then symmetrize it, then take its transitive closure, you get an equivalence relation.
We can also 'reflexivize' any relation in the obvious way.
I believe that if you take any relation , then reflexivize it, then symmetrize it, then take its transitive closure, you get the smallest equivalence relation containing .
You can also just say "take the smallest equivalence relation containing ". This exists because the intersection of a collection of equivalence relations is an equivalence relation: it's the smallest equivalence relation containing .
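John's recipe can be run directly on a finite relation (my own brute-force sketch; a real implementation would use something smarter than repeated passes):

```python
def equivalence_closure(elements, relation):
    """Smallest equivalence relation on `elements` containing `relation`:
    reflexivize, symmetrize, then take the transitive closure."""
    rel = set(relation)
    rel |= {(x, x) for x in elements}   # reflexivize
    rel |= {(y, x) for (x, y) in rel}   # symmetrize
    changed = True
    while changed:                      # naive transitive closure
        changed = False
        for (x, y) in list(rel):
            for (y2, z) in list(rel):
                if y == y2 and (x, z) not in rel:
                    rel.add((x, z))
                    changed = True
    return rel

# Starting from 1 ~ 2 and 2 ~ 3 on {1, 2, 3, 4}:
R = equivalence_closure({1, 2, 3, 4}, {(1, 2), (2, 3)})
print((3, 1) in R, (4, 1) in R)  # True False
```

The transitive closure of a symmetric relation is again symmetric (reverse the path), so symmetrizing once before closing is enough, matching the order of steps above.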
Yeah I guess if I already believe in something like the transitive closure, then I should believe in "equivalence class closure" already. I just never heard it verbalized!
Josselin Poiret said:
In general, you would more directly say that you define a b iff there exists a path a→a1←a2→⋯→b.
That's appealing too. But also it seems we should include such sequences beginning or ending with arrows pointing in opposite directions. I guess you could just say "such a sequence can be extended to one of the form given above: if it starts with just shim in an identity and likewise if necessary for a on the far right."
At any rate, it's exciting to think I got the adjunction chain by myself for once :)