You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Tim Hosgood said:
not to derail this thread, but there are lots of parts of descent (in the more traditional sense) that I still don't really understand. could anybody explain why we would say that a morphism in a category (with pullbacks) should be said to have "effective descent" if the change of base is monadic?
I like this question because it's a good excuse for me to learn some stuff. So let me explain it the way I wish someone had explained this to me. I may not tell you anything you don't already know! In particular I won't answer the historical question of why we say such a morphism has "effective" descent, which seems to require going back to Definition 1.7 in Grothendieck's Technique de descente et théorèmes d’existence en géométrie algébrique. I. Généralités. Descente par morphismes fidèlement plats and figuring out what he was thinking. Instead I'll just say a bit about why this concept is nice.
First, to set the stage for people who don't know what's going on, we'll start by letting $C$ be any category - but we'll think of it as a category of 'spaces' of some sort. This lets us think of a morphism $e \to b$ as a 'bundle' over $b$, so we call the [[slice category]] $C/b$ the category of bundles over $b$.
Next, given any morphism
$$f \colon x \to y$$
post-composing with $f$ defines an obvious functor sending $p \colon e \to x$ to $f \circ p \colon e \to y$, which we call
$$f_! \colon C/x \to C/y$$
In our way of talking, bundles over $x$ can be 'pushed forward' to give bundles over $y$.
But now assume $C$ has pullbacks! Then all these functors $f_!$ have right adjoints
$$f^* \colon C/y \to C/x$$
In our way of talking, now we can pull back any bundle over $y$ along $f$ to get a bundle over $x$.
Adjoint functors always give monads. So we also get a monad
$$T_f = f^* f_! \colon C/x \to C/x$$
By general abstract nonsense, each object in $C/y$ gives an algebra for this monad!
Spelling this out a bit, say we have a bundle over $y$, say $e$, which is just a morphism $q \colon e \to y$. Then we get a bundle over $x$, namely $f^* e$. But this comes with a morphism
$$T_f(f^* e) = f^* f_! f^* e \to f^* e$$
simply because there's a natural transformation $\epsilon \colon f_! f^* \Rightarrow 1$, the counit of the adjunction. So we have
$$f^*(\epsilon_e) \colon f^* f_! f^* e \to f^* e$$
and in fact this morphism makes $f^* e$ into an algebra of the monad $T_f$.
Summarizing: given a morphism
$$f \colon x \to y$$
every bundle over $y$ gives a bundle over $x$ that's an algebra of the monad $T_f$.
Even better, this trick gives a functor from the category of bundles over $y$ to the category of algebras of the monad $T_f$. And if this functor is an equivalence, we say $f$ has [[effective descent]].
When this happens, it's very nice because it provides an answer to the question "what extra structure must a bundle over $x$ have for it to have come from pulling back some bundle over $y$ along $f$?" This extra structure is being an algebra of the monad $T_f$.
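All of the above can be sanity-checked in the category of finite sets, where a bundle is just a function. Here is a minimal Python sketch of the push-pull monad $f^* f_!$ and the algebra structure on a pulled-back bundle, checking the unit law; the helper names and toy data are mine, not the thread's:

```python
# Hypothetical finite-set model (names like pullback, monad_T and the toy data
# are mine, not the thread's): spaces are finite sets, a bundle is a dict
# sending each point of the total space to its base point.

def pullback(f, q):
    """Pull a bundle q: E -> Y back along f: X -> Y.
    The total space of f*E is {(x, e) : f(x) = q(e)}, projecting to X."""
    return {(x, e): x for x in f for e in q if f[x] == q[e]}

def pushforward(f, p):
    """Push a bundle p: E -> X forward along f: X -> Y by post-composition."""
    return {e: f[p[e]] for e in p}

def monad_T(f, p):
    """The monad T_f = f^* f_! applied to a bundle p over the domain of f."""
    return pullback(f, pushforward(f, p))

def unit(p):
    """Unit of the monad: e |-> (p(e), e)."""
    return {e: (p[e], e) for e in p}

def algebra_from_pullback(f, q):
    """A pulled-back bundle f*E is a T_f-algebra via f^* of the counit:
    (x, (x', e)) |-> (x, e)."""
    return {t: (t[0], t[1][1]) for t in monad_T(f, pullback(f, q))}

# f collapses two 'charts' to a point; q is a one-point bundle over that point
f = {'u1': '*', 'u2': '*'}
q = {'s': '*'}
pE = pullback(f, q)                 # the pulled-back bundle over the charts
alg = algebra_from_pullback(f, q)   # its algebra structure map
# the unit law of a monad algebra: alg . unit = identity
assert all(alg[unit(pE)[e]] == e for e in pE)
```

The assertion at the end is exactly the unit axiom for a monad algebra, checked pointwise on this toy example.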
I may have gotten some things backwards here, because the question I'm answering seems to be about "ascent" rather than "descent". I easily mix up left and right, so I could have actually gotten the math wrong - and on top of that, there is not only a theory of "monadic descent", there's also a theory of "comonadic descent".
I will come back to this later and try to fix mistakes and go further.
thanks!
I guess part of my discomfort/confusion is that I don't really have a good understanding of why we care about monadicity in general, especially when it comes to stuff like this which feels very geometric/topological
John Baez said:
In particular I won't answer the historical question of why we say such a morphism has "effective" descent, which seems to require going back to Definition 1.7 in Grothendieck's Technique de descente et théorèmes d’existence en géométrie algébrique. I. Généralités. Descente par morphismes fidèlement plats and figuring out what he was thinking.
(for anybody who does want to, you can find an English translation here :wink: )
I’m confused by that: it doesn’t seem like understanding why we care about monadicity in general is necessary for this particular problem, where it turns out that monadicity exactly means that a bundle over $y$ is the same thing as a descent datum for $f$
Maybe @Tim Hosgood hasn't gone through the exercise of working out the stuff I was talking about in a bit more detail. When we do that, and see what an algebra of the monad $T_f$ actually looks like, we'll see it's just what we'd hope for: a way of describing a bundle in terms of 'charts' and 'transition functions' obeying the 'cocycle condition'
or more precisely a slight generalization of that, which reduces to it when our morphism $f$ is a covering of some space by open sets ('charts').
I intend to go through this exercise, though my break just now involved going out to a wine bar to help celebrate the birth of Mike Fourman's new grandson, so I may not do it tonight.
I'd also then like to know what happens about the comonad :wink:
I'm used to seeing things like (co)homology (with proper support) pop up when you look at these (co)monads of six-functor-looking-things, but I've never heard anybody explain these to me using the word "monad", let alone "monadic". Maybe this is just one of the horrendous notational conflicts that happens between topos theorists and algebraic geometers though, where we're using the same upper star/lower star/upper shriek/lower shriek to mean different things
(I remember learning that a topos theorist wrote $f_!$ for what I called $f_*$, or vice versa, and losing any hope of understanding; for some people, $f_!$ is something that always exists but might simplify to $f_*$ for proper morphisms, and for others it's something that doesn't even exist unless $f$ is proper)
OK, I think I'd get something out of this too, so here goes... let's suppose $p \colon U \to X$ is a "covering" of some kind. Then the monad sends a bundle $E \to U$ to $E \to X$ and then pulls that back along $p$, so it sends $E$ to the pullback $U \times_X E$, which is like the "bundle" whose fiber over some $u \in U$ is the sum of all the fibers of $E$ over points in the fiber of $p$ over $p(u)$. This kind of thing is always easiest for me to see when $X$ is terminal, so that $U \times_X E$ is just $U \times E$.
If $p$ is really an open cover of topological spaces, say $U = U_1 \sqcup U_2$, then we can describe $TE$ more explicitly: $TE$ agrees with $E$ over $U_1 \setminus U_2$ and $U_2 \setminus U_1$, while over each of the two copies of $U_1 \cap U_2$ you have to sum on a copy of what $E$ was doing over the other copy of $U_1 \cap U_2$. This is enough to see that a monad algebra structure involves a bundle over $U_1$, a bundle over $U_2$, and maps between their respective restrictions to $U_1 \cap U_2$, which is starting to ring a bell!
(The unit axiom of the monad algebra says that the algebra map doesn't do anything except on these new summands.)
Now I'm guessing that, if we bring a third open set $U_3$ into the picture, then the associativity law for the monad algebra gives us the cocycle conditions.
I think I need more notation to write down $TE$ in this case. Let's say $U_{12} = U_1 \cap U_2$, $E_i = E|_{U_i}$, and so on. Then before including $U_3$ we just calculated that $TE|_{U_1} = E_1 \sqcup E_2|_{U_{12}}$. In particular $TE|_{U_{12}} = E_1|_{U_{12}} \sqcup E_2|_{U_{12}}$, and similarly for $U_2$. The algebra map, using unitality, amounts to choosing maps $a_{12} \colon E_1|_{U_{12}} \to E_2|_{U_{12}}$ and $a_{21} \colon E_2|_{U_{12}} \to E_1|_{U_{12}}$, which should eventually be the guys we subject to the cocycle condition.
This also lets us calculate $T^2 E$ as $T(TE)$. This is gross but manageable: over $U_1$ we see $T^2 E$ has a copy of $E_1$, then a copy of $E_2|_{U_{12}}$ lying over the $U_{12}$ part, copied twice, and finally the odd term, where we took $E_1$ restricted to the intersection, moved that over $U_2$, then restricted again to the intersection and put it back over $U_1$. Thus this last term is equal to $E_1|_{U_{12}}$ over $U_{12}$ and empty over $U_1 \setminus U_2$. Anyway, we can see the multiplication $T^2 E \to TE$ is the identity on $E_1$, folds together the terms that are copied twice in the sum, and just includes the odd term back into $E_1$, and similarly for $U_2$. I'm having trouble spotting whether associativity actually says anything for this case...
OK, yeah, it looks like the associativity condition on the monad algebra, when you apply it to the weird terms, says exactly that $a_{21} \circ a_{12} = 1$, because you can either send $E_1|_{U_{12}}$ right back into $E_1$ via the monad multiplication, whence nothing happens to it in the algebra, or you can send it into $E_2|_{U_{12}}$ via $a_{12}$ and then back to $E_1|_{U_{12}}$ using $a_{21}$, so these two must compose to the identity. Cool!
I'm feeling like that's about all of that I want to do in one session, but maybe somebody else, including a later self, wants to get the ball over the line by finding the cocycle conditions once we bring $U_3$ back in!
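The two-chart conclusion above can be checked concretely. Here is a hypothetical finite model in Python (the names `U1`, `E1`, `a12`, `a21`, `E_glued` are made up for illustration): transition data whose two composites are the identity is exactly what lets the two local bundles glue to a single bundle over the base:

```python
# Hypothetical finite model (U1, U2, E1, E2, a12, a21 are made-up illustration
# data): two charts covering X = {1, 2, 3}, a bundle over each chart, and
# transition maps over the overlap that compose to the identity.

U1, U2 = {1, 2}, {2, 3}
overlap = U1 & U2

E1 = {1: ['a', 'b'], 2: ['c', 'd']}   # fibers of a bundle over U1
E2 = {2: ['p', 'q'], 3: ['r']}        # fibers of a bundle over U2

# transition data over the overlap, one map in each direction
a21 = {2: {'p': 'c', 'q': 'd'}}       # E2 restricted to the overlap -> E1
a12 = {2: {'c': 'p', 'd': 'q'}}       # E1 restricted to the overlap -> E2

# the condition extracted from associativity: both composites are the identity
for x in overlap:
    assert all(a12[x][a21[x][s]] == s for s in E2[x])
    assert all(a21[x][a12[x][s]] == s for s in E1[x])

# with that condition, the two local bundles glue to one bundle over X
E_glued = {x: (E1[x] if x in U1 else E2[x]) for x in U1 | U2}
```

The final line is the gluing step: on the overlap it silently identifies the two fibers via the transition maps, which is legitimate precisely because they are mutually inverse.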
thanks for all the explicit calculations!
now I'm super interested to know why unravelling the definition of "an algebra for the push-pull monad" gives exactly the same thing as unravelling the definition of "a point in the homotopy limit/totalisation of some simplicial set"
the calculations you write here are really really the same as the ones that you get when you do the big calculation of what it means to be a point in a certain homotopy limit. what is the relation between monadic functors and homotopy limits?
Bar construction? https://ncatlab.org/nlab/show/bar+construction
is the Čech nerve given by the bar construction applied to some monad??
Ah, yeah, I think it's the bar construction of this monad, applied to the identity bundle over $U$.
Yes, exactly: the bar construction applied to this monad gives the Cech nerve. I'll do something easier now, which should make that very easy to believe.
I'll return to my tale. I'll quickly review while writing objects in $C$ using capital letters to make them look more like 'spaces' to traditionally inclined mathematicians. :upside_down: Similarly, I will often call arbitrary morphisms in this category 'bundles'. And I'll do other things to make you think we're doing topology. But it's really just category theory.
Say we have any morphism $p \colon U \to X$ in our category $C$. (Secretly think of $U$ as the disjoint union of open sets covering $X$, with $p$ describing the cover.) This gives a functor $p_! \colon C/U \to C/X$ pushing forward any bundle over $U$ along $p$ to get a bundle over $X$.
This functor has a right adjoint $p^* \colon C/X \to C/U$ sending any bundle over $X$, say
$$E \to X$$
to its pullback to $U$, namely
$$U \times_X E \to U$$
What does the monad $T = p^* p_!$ do? It takes a bundle over $U$, say
$$E \to U$$
and pushes it forward along $p$ to get
$$E \to X$$
and then pulls the result back along $p$ to get a bundle
$$U \times_X E \to U$$
So we could call this monad $U \times_X -$. The multiplication for this monad is thus a natural transformation
$$\mu \colon U \times_X U \times_X - \Rightarrow U \times_X -$$
In other words, for any bundle $E \to U$ we get a map of bundles
$$U \times_X U \times_X E \to U \times_X E$$
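Here is a toy finite-set computation of this monad (a sketch under the assumption that 'spaces' are finite sets and 'bundles' are functions; `fiber_product_over_X` and the data are my names, not the thread's). It checks the fiber description: the fiber of $U \times_X E$ over a point $u$ is the sum of all fibers of $E$ over the points lying over $p(u)$:

```python
# Toy finite-set sketch (fiber_product_over_X and the data are mine): the
# monad U x_X - applied to a bundle over U, for a cover map p: U -> X.

def fiber_product_over_X(p, pi):
    """U x_X E for p: U -> X and a bundle pi: E -> U (E maps to X via p o pi).
    Points are pairs (u, e) with p(u) = p(pi(e)); the bundle map is (u, e) |-> u."""
    return {(u, e): u for u in p for e in pi if p[u] == p[pi[e]]}

# two 'charts' u1, u2 both lying over the single point of X
p = {'u1': '*', 'u2': '*'}
# a bundle E over U: a 2-element fiber over u1 and a 1-element fiber over u2
pi = {'e1': 'u1', 'e2': 'u1', 'e3': 'u2'}

TE = fiber_product_over_X(p, pi)
# the fiber of TE over u1 is the sum of ALL the fibers of E over points of
# the fiber of p over p(u1): e1, e2 from u1 plus e3 from u2
fiber_u1 = [t for t, u in TE.items() if u == 'u1']
assert len(fiber_u1) == 3
```

Over a terminal $X$, as noted above, this reduces to the plain product $U \times E$, which is what the six-point total space here exhibits.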
John Baez said:
Say we have any morphism $p \colon U \to X$ in our category $C$. (Secretly think of $U$ as the disjoint union of open sets covering $X$, with $p$ describing the cover.) This gives a functor $p_! \colon C/U \to C/X$ pushing forward any bundle over $U$ along $p$ to get a bundle over $X$.
This functor has a left adjoint sending any bundle over $X$, say
You've swapped $p_!$ and $p^*$ from the previous comments!
Typical of me. I fixed it. I'll continue later - I'm falling asleep. But I hope people who know "Cech nerves" and "bar constructions" are starting to see them appear in my last comment.
Let me try to wrap up what I was going to say.
We have a category $C$ with pullbacks. Any morphism $p \colon U \to X$ gives a monad $T = U \times_X -$ on $C/U$ sending an object
$$E \to U$$
to
$$U \times_X E \to U$$
I will call an object of $C/U$ a bundle over $U$ and a morphism in $C/U$ a bundle map. I will often write a bundle over $U$ simply as $E$ rather than $E \to U$, so I can write a bundle map simply as $E \to E'$ rather than the commutative triangle it actually is.
What does an algebra of our monad look like?
It's a bundle $E$ over $U$ with a bundle map
$$a \colon U \times_X E \to E$$
making the usual square in the definition of 'monad algebra' commute.
This square says that the two composite morphisms going from the left object here to the right one are equal:
$$U \times_X U \times_X E \;\rightrightarrows\; U \times_X E \;\to\; E$$
I.e. this diagram is a [[fork]], or maybe a 'cofork' - that's some jargon I need to get into my working vocabulary.
Alas, while this diagram reminds me intensely of diagrams I often draw when playing with sheaves or gerbes or bundles, I'm not instantly seeing how to use the ideas here to define those concepts! So I'll have to fuss around a bit to get things to work out.
For example maybe instead of algebras of the monad I should be using coalgebras of the comonad . Or maybe instead of algebras or coalgebras I should be using pseudoalgebras or pseudocoalgebras. I'll see.
For starters, just to bring this down to earth a bit, suppose $C = \mathrm{Top}$ and $p \colon U \to X$ describes an open cover of $X$ by open sets $U_i$, so
$$U = \coprod_i U_i$$
and
$$p \colon U \to X$$
is defined by all the inclusions $U_i \hookrightarrow X$.
Then an algebra of our monad is a bundle $E$ over $U$ with a bundle map
$$a \colon U \times_X E \to E$$
but now
$$U \times_X E = \Big( \coprod_i U_i \Big) \times_X E$$
where $E|_{U_i \cap U_j}$ is the usual notation for restricting the bundle $E$ to the open set $U_i \cap U_j$, i.e. pulling it back along the inclusion $U_i \cap U_j \hookrightarrow U$, so
$$U \times_X E \cong \coprod_{i,j} E|_{U_i \cap U_j}$$
thanks to something about how pullbacks distribute over coproducts in $\mathrm{Top}$ and many other similar categories. (What's the general abstract nonsense at work here?)
So our elegant and terse fork
$$U \times_X U \times_X E \;\rightrightarrows\; U \times_X E \;\to\; E$$
can be expanded to something more grungy - yet intensely familiar to people who work with bundles, sheaves and the like:
$$\coprod_{i,j,k} E|_{U_i \cap U_j \cap U_k} \;\rightrightarrows\; \coprod_{i,j} E|_{U_i \cap U_j} \;\to\; \coprod_i E|_{U_i}$$
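The coproduct decomposition can be sanity-checked by counting points in a hypothetical finite model (everything below, `opens` and `fiber` in particular, is made-up illustration data):

```python
# Counting sanity check in a hypothetical finite model (opens, fiber are
# made-up data): |U x_X E| equals the total size of the disjoint union over
# pairs (i, j) of E restricted to U_i ∩ U_j.

opens = {1: {1, 2}, 2: {2, 3}}                       # U_1, U_2 covering {1, 2, 3}
U = {(i, x) for i, Ui in opens.items() for x in Ui}  # U = disjoint union of opens
p = {u: u[1] for u in U}                             # the cover map U -> X

# a bundle E over U: each point of U carries a finite fiber (arbitrary sizes)
fiber = {u: list(range(1 + u[1])) for u in U}

# |U x_X E|: pairs (u, point of E over v) with u, v over the same point of X
lhs = sum(len(fiber[v]) for u in U for v in U if p[u] == p[v])

# the same count via the points of E_i lying over each U_i ∩ U_j
rhs = sum(len(fiber[(i, x)])
          for i in opens for j in opens
          for x in opens[i] & opens[j])

assert lhs == rhs
```

Of course a bijective count is weaker than the natural isomorphism being claimed, but it is a quick way to convince yourself the indexing over pairs $(i, j)$ is right.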
I'm still having trouble linking this idea to [[monadic descent]] - I could easily do it by reading the nLab article, but that feels like cheating when I'm so close. What I can do, however, is satisfy @Tim Hosgood by showing how to get the Cech nerve of the cover via something like the bar construction.
For any algebra $a \colon TA \to A$ of any monad $T$, the [[bar construction]] gives an augmented simplicial object
$$\cdots \; T^3 A \;\Rrightarrow\; T^2 A \;\rightrightarrows\; TA \;\to\; A$$
where all the arrows come from the monad multiplication $\mu \colon T^2 \Rightarrow T$ and the algebra structure $a \colon TA \to A$.
We're seeing that in $\mathrm{Top}$, any open cover gives a monad $U \times_X -$ on $\mathrm{Top}/U$. I believe the identity bundle $1_U$ is itself an algebra of this monad, and the bar construction then gives this augmented simplicial object in $\mathrm{Top}$:
$$\cdots \;\Rrightarrow\; U \times_X U \times_X U \;\rightrightarrows\; U \times_X U \;\to\; U$$
But if we write the open cover in the more traditional style as an indexed collection of open sets, so $U = \coprod_i U_i$, this is the same as
$$\cdots \;\Rrightarrow\; \coprod_{i,j,k} U_i \cap U_j \cap U_k \;\rightrightarrows\; \coprod_{i,j} U_i \cap U_j \;\to\; \coprod_i U_i$$
And this is the Cech nerve of the open cover!
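As a quick check, the levels of this object can be computed for a small finite cover: counting points, the $n$-fold fiber power of $U$ over $X$ agrees with the disjoint union of $n$-fold intersections (a toy sketch with made-up data, not part of the thread):

```python
from itertools import product

# Toy computation (opens is made-up data) of the first levels of the Cech
# nerve of a cover, checked against the 'indexed intersections' description.

opens = {1: {1, 2}, 2: {2, 3}, 3: {3}}               # opens covering {1, 2, 3}
U = [(i, x) for i, Ui in opens.items() for x in Ui]  # disjoint union of opens

def cech_level(n):
    """The n-fold fiber power U x_X ... x_X U: tuples of points of U that all
    lie over the same point of X."""
    return [t for t in product(U, repeat=n) if len({u[1] for u in t}) == 1]

for n in (1, 2, 3):
    # the same count via disjoint unions of n-fold intersections
    by_intersections = sum(
        len(set.intersection(*(opens[i] for i in idx)))
        for idx in product(opens, repeat=n))
    assert len(cech_level(n)) == by_intersections
```

A point of the $n$-fold fiber power is an $n$-tuple of charts together with a point in their common intersection, which is exactly the indexed-intersections picture.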
I think I see a little glitch in what I just wrote, but I have to quit now, so I leave it as an exercise to find it and fix it.
I think I'll quit here, leaving the calculation as an "exercise for the reader".
Hi! I've finally had time to go over everything that was discussed above and read and re-read it (and re-re-read it, and so on). I think I might be seeing bits and pieces come together, but to me it almost feels like what was discussed above is like a tiny toe dip into a massive sea of related concepts. As such my resulting questions form a proper class (not a mere set), but admittedly most of them are probably trivial confusions that will induce sighs. I almost didn't want to ask since Baez and all above have put so much effort into providing a clear, concise, and thorough explanation of the topics that I feel really bad for still not understanding some of it. Nonetheless I guess I'll start with some clarifications on effective descent.
First, I'm still not sure exactly what this effective descent is doing. A previous post by Baez (I believe on the past local to global thread) indicated that it was in some way inverting the process of "lifting" a morphism into X along a morphism f: X -> Y by composition to get the composite morphism into Y. This justifies the involvement of the composition functor F: C/X -> C/Y which does this "lifting". But if you truly wanted to invert this process, why not just take the inverse image construction of this functor at a particular value g in C/Y to get all the morphisms into X that compose with f to get g? Of course, maybe not every morphism into Y can be factored along f (i.e., the composition functor will not always be surjective in some way), but wouldn't this still be a more direct way of inverting composition? If so then what is the "true meaning" behind a certain morphism into X being the "descent" of one into Y?
Second, I'm confused between John Baez's post at the top of this page compared to the aforementioned one on the past thread. There, he said, "If you have a map f: E -> X and a map p: E -> B, can you find a map g: B -> X such that f = g compose p?" But at the top of this page, he says, "summarizing, given a morphism f: x -> y, every bundle over y gives a bundle over x that's an algebra of the monad Tf. When this happens, it's very nice because it provides an answer to the question of what extra structure must a bundle over x have for it to have come from pulling back some bundle over y along f". If we label a bundle over x as b-x and the one over y as b-y, this is asking the question: given b-y: b -> y and f: x -> y, can you find a morphism b-x: b -> x such that b-y = f compose b-x? But since p and f play analogous roles here, the order of composition clearly has switched between the two threads. So I'm a little confused as to which morphism in this commutative triangle that descent is even asking us to find. I've gone on assuming it's the latter option since it makes more sense with the slice category monad, but I just wanted to clarify!
I also want to make sure I understand effective descent "philosophically". Assume a morphism f: X -> Y with effective descent. Since Tf is just an endofunctor (with extra properties) on C/X, you can (very theoretically/hypothetically) define it independently of C/Y (perhaps just by a lucky guess at choosing a random endofunctor on C/X). But somehow, from this knowledge that seems to be very confined to X and C/X, we get information about all the morphisms into Y, and as we know from Yoneda, this entirely determines Y up to isomorphism. So, in a sense it seems X, as long as there's an effective descent morphism from it to Y, "knows" a lot about Y. Am I understanding this right? It's a little odd, no?
Of course, any help in clearing up my silly confusions is very much appreciated!
Hi! I'm glad you're asking questions, but you're right: what I wrote in this conversation on "effective descent" is just a tiny droplet from a massive sea of related concepts. People have been developing these concepts roughly since Cartan, Grothendieck and others reformulated algebraic geometry using sheaves, though their roots go back much further to Galois theory and also the study of covering spaces and bundles in algebraic topology. So I'd be amazed if you could follow what I wrote without having studied that stuff a certain amount.
A previous post by Baez (I believe on the past local to global thread) indicated that it was in some way inverting the process of "lifting" a morphism into X along a morphism f: X -> Y by composition to get the composite morphism into Y.
Yes, I brought this up as an easy-to-understand example of the general idea of descent. Unfortunately people usually skip this example and go straight into harder examples where the systematic process you're trying to reverse is not lifting a function as above but "pulling back a bundle" or "pulling back a sheaf" or something even more fancy. I mentioned that in these harder examples, things become more elaborate, but follow a similar general pattern. I've never seen anyone clearly explain this general pattern, and I'm afraid I'm not doing it either!
I'm confused between John Baez's post at the top of this page compared to the aforementioned one on the past thread.
Indeed! Please don't think of my posts in this thread here as an attempt to clarify what I was talking about in that past thread. That would indeed be confusing. This thread here is less about lifting functions than pulling back bundles.
In this thread here I was trying to answer Tim Hosgood's question of how a certain popular formalism called monadic descent is related to a concept he seemed to understand in another way, namely an effective descent morphism. The nLab articles on these topics are quite good (and related), but I wanted to struggle a bit and discuss them myself, to learn the material better.
I didn't get nearly as far as I should have, but I wound up sketching how a certain way of describing bundles on a space $X$ using an open cover of $X$ can be understood in terms of a monad. This is a classic example of 'monadic descent'. An open cover gives a morphism $p \colon U \to X$, and this is a classic example of an 'effective descent morphism'.
So, at least in theory, what I wrote might help @Tim Hosgood understand the connection between monadic descent and effective descent morphisms. But the nLab articles probably do a better job!
@Chris Grossack (they/them) and I once had a dream of understanding [[Galois descent]] and [[monadic descent]] a lot better. I think I'm much closer now, but I'll probably remain a bit frustrated until I write a series of blog articles explaining this stuff from the ground up. Unfortunately it's sort of a huge subject.
Anyway, @John Onstead, if you ever want to return to the easier subject of "descent as an attempt to reverse the systematic process of lifting functions", I could try that - but this thread here is probably not the best place!
John Baez said:
So I'd be amazed if you could follow what I wrote without having studied that stuff a certain amount.
I might have mentioned it previously but the last math class I took was high school calculus (it was advanced calculus, but still this was a long time ago). I wanted to jump right into category theory because I was told that, by understanding category theory, you could more quickly learn all the other branches of math more easily since category theory provides a common framework to understand all of them together. However, I haven't found this easy for two reasons: it seems you do need some background in the other branches of math to understand, at the very least, the motivations for the category theory definitions, and the language of other branches of math is usually written assuming a background of set theory, requiring you to have to mentally convert all the concepts from this implicit set theory POV to the category theory POV, which can take work to do.
John Baez said:
I mentioned that in these harder examples, things become more elaborate, but follow a similar general pattern. I've never seen anyone clearly explain this general pattern, and I'm afraid I'm not doing it either!
I've seen an example of this more elaborate pattern on the nlab article "monadic descent". The general pattern appears to be this. Say you have a pseudofunctor F: C^op -> Cat. This defines data, a category in fact, "over" each object of C, and for every morphism f: A -> B in C, you get a functor in Cat between the respective categories. If the pseudofunctor gives rise to a bifibration under the Grothendieck construction, it means that each such functor has an adjoint. Monadic descent then applies to every case where this adjunction is monadic. In a sense, this means you can recover data "over" B from the data "over" A by finding out which data over A is "descended from" that over B. Since, for C a category with pullbacks, the slice category pseudofunctor C/- gives a bifibration, the above example of effective descent is a special case of monadic descent!
I find this approach to defining descent data in terms of monads very elegant because I already understand monads so now I understand at least the examples of descent given in terms of monads. But it does make me wonder about the relation between descent in general and monadic descent. Monadic descent is one way to approach descent, but can any descent data be given by some monadic descent approach? That is, can all descent problems be solved using monadic descent such that if you understand monadic descent, you understand all of descent?
I wanted to jump right into category theory because I was told that, by understanding category theory, you could more quickly learn all the other branches of math more easily since category theory provides a common framework to understand all of them together. However, I haven't found this easy for two reasons: it seems you do need some background in the other branches of math to understand, at the very least, the motivations for the category theory definitions, and the language of other branches of math is usually written assuming a background of set theory, requiring you to have to mentally convert all the concepts from this implicit set theory POV to the category theory POV, which can take work to do.
I think category theory really does speed up the process of learning math. But to learn lots of math inevitably takes lots of work. I've been studying it for a couple hours a day for about 45 years and I still feel embarrassingly ignorant, with gaping holes in my knowledge all over the place.
However, I only started learning category theory after I knew lots of other stuff, so I've rarely faced the particular challenge of understanding a piece of category theory before I knew how it was used: I usually start by trying to understand something based on my intuitions about set theory, topology, algebra etc. and only later bring in category theory to make things nice.
For example, in that blob of text above where I took monadic descent and looked at what it amounts to in the special case of bundles over topological spaces, when I got to the point of writing equations like
$$U \times_X E \cong \coprod_{i,j} E|_{U_i \cap U_j}$$
I was like "yay, the good old stuff I learned in school is showing up automatically now!"
There's something very satisfying about this. But it probably slows down my process of learning category theory, because if someone tells me an abstract categorical fact like "any bifibration gives a monad!" I'm likely to say "great, but what does that have to do with me?" Of course I'm smart enough not to say it out loud. But only when I see a couple of examples will it excite me.
For years I didn't really get the point of "descent" - that is, why people were so interested in it. Then I started reading about how Noether, Brauer and Hasse classified finite-dimensional associative [[division algebras]] over $\mathbb{Q}$. For years I'd been fond of finite-dimensional associative division algebras over $\mathbb{R}$: there are just 3, the reals, the complex numbers, and the quaternions. But the classification of such division algebras over $\mathbb{Q}$ is vastly more elaborate. It turns out that to tackle this they needed to invent "descent theory" - though they didn't call it that or even think of it that way. What they actually invented is often called "Galois cohomology". But to understand it conceptually you really need to understand some things about descent, and I found that to be a fascinating journey.
It links together monads, the cohomology of groups, homotopy fixed points, and then a lot of specific stuff about algebra that shows up in this particular application of these concepts.
Believe it or not, I'm slowly leading back to your question:
Monadic descent is one way to approach descent, but can any descent data be given by some monadic descent approach? That is, can all descent problems be solved using monadic descent such that if you understand monadic descent, you understand all of descent?
Of course you don't understand all of descent by understanding one outlook on it: you also need to understand how that outlook connects to all the other outlooks! And in a way that's the fun part. But for the more technical question of whether all descent problems can be phrased in terms of monadic descent... I don't know, but it seems like a lot of them can. In particular, I believe everything I've learned about the applications of Galois cohomology to descent can be put into that framework.
Ultra-tersely, the monad that shows up in monadic descent has a [[bar construction]], and if this monad is the monad for G-sets, this bar construction can be used to define homotopy fixed points of G actions, and also the cohomology of the group G.
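To make "this bar construction can be used to define the cohomology of the group G" slightly less terse: in the simplest cases the resulting cochain complex is the usual bar complex of group cohomology. Here is a brute-force Python sketch for $G = \mathbb{Z}/2$ acting trivially on $M = \mathbb{Z}/2$ (my own toy code; `d0` and `d1` are the standard inhomogeneous bar differentials, written out by hand):

```python
# Brute-force sketch (my own toy code, not the thread's notation): the bar
# complex of G = Z/2 with coefficients in M = Z/2, trivial action.

G = [0, 1]   # Z/2 under addition mod 2
M = [0, 1]   # coefficient module, trivial G-action

def d0(m):
    """d^0(m)(g) = g.m - m, which vanishes for the trivial action."""
    return {g: 0 for g in G}

def d1(f):
    """d^1(f)(g, h) = g.f(h) - f(g + h) + f(g), mod 2, trivial action."""
    return {(g, h): (f[h] + f[(g + h) % 2] + f[g]) % 2 for g in G for h in G}

# H^0 = 0-cocycles = G-fixed points of M: all of M, since the action is trivial
H0 = [m for m in M if all(v == 0 for v in d0(m).values())]

# 1-cocycles and 1-coboundaries
cochains1 = [{0: a, 1: b} for a in M for b in M]
Z1 = [f for f in cochains1 if all(v == 0 for v in d1(f).values())]
B1 = [d0(m) for m in M]          # only the zero cochain, twice

# so H^0(Z/2, Z/2) and H^1(Z/2, Z/2) both have 2 elements,
# matching the fixed points M^G = Z/2 and Hom(Z/2, Z/2) = Z/2
assert len(H0) == 2 and len(Z1) == 2
```

Nothing here is specific to $\mathbb{Z}/2$: the same brute-force enumeration works for any small group and module, which is a handy way to test your understanding of the bar differentials.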
When G is a [[Galois group]] all these ideas play together in a way that explains what Noether, Brauer and Hasse were doing.
However, this barrage of jargon conceals the fact that the ideas are simple and beautiful. You're making me want to explain them better.
John Baez said:
For years I didn't really get the point of "descent" - that is, why people were so interested in it. Then I started reading about how Noether, Brauer and Hasse classified finite-dimensional associative [[division algebras]] over $\mathbb{Q}$.
I'm in a similar place now I suppose! It seems descent can give information about local-global (the original context in which this topic was raised). For instance with the bundles the descent data gives transition functions. But "why" descent is able to do this (in the more general case) is still a little unclear to me, but I'm hoping to learn more about this over time!
John Baez said:
Of course you don't understand all of descent by understanding one outlook on it: you also need to understand how that outlook connects to all the other outlooks! And in a way that's the fun part.
It sounds like a Yoneda-esque approach to learning things!
John Baez said:
Ultra-tersely, the monad that shows up in monadic descent has a [[bar construction]]
If I recall from above the bar construction for a monadic descent is related to a Cech nerve? If so then is the cohomology it helps to define for the group G a part of a Cech cohomology in any way?
Also on this same note I did look more into this relationship, and the nlab diagrams given under the "ideas" sections of "bar construction" and "Cech nerve" look far too similar to be a coincidence (though maybe it's just math pareidolia?). Nlab states that a bar construction of a monad is a simplicial object in the monad's EM category. So just to clarify: in the case with bundles and the monadic adjunction C/X -> C/Y for a morphism X -> Y, would the Cech nerve be a simplicial object in C/Y?
John Onstead said:
John Baez said:
Ultra-tersely, the monad that shows up in monadic descent has a [[bar construction]]
If I recall from above the bar construction for a monadic descent is related to a Cech nerve? If so then is the cohomology it helps to define for the group G a part of a Cech cohomology in any way?
I think the better approach is to see the Cech nerve as a special case of the bar construction, and both Cech cohomology and group cohomology as special cases of the same thing: [[monadic cohomology]], which is a way to get cohomology from the bar construction.
Again, this is something that deserves an actual explanation - pointing you to the nLab is not a substitute for an actual explanation!
I'm just saying that "stuff you can do with a monad" reigns supreme here, and all the fancy-sounding things like Cech nerves, Cech cohomology and group cohomology are just special cases of this.
So just to clarify, in the case with bundles and the monadic adjunction C/X -> C/Y for a morphism X -> Y, would the Cech nerve be a simplicial object in C/Y?
Yes!
John Baez said:
Again, this is something that deserves an actual explanation - pointing you to the nLab is not a substitute for an actual explanation!
Indeed, it seems that with every question answered, a new concept pops up that induces a whole bunch more questions! But I'll let it rest for a little bit as I sort through my thoughts. I may post to a new thread since my questions may no longer be directly related to effective descent, so look out for that! And thanks for all the help so far!
Do you see how to get the Cech nerve by applying the bar construction to the monad I described in this series of posts? That would be a good exercise.
John Baez said:
Do you see how to get the Cech nerve by applying the bar construction to the monad I described in this series of posts? That would be a good exercise.
In a way. If you substitute T in the diagram under "bar construction" nlab page with
John Baez said:
So we could call this monad $U \times_X -$.
Then you get something that looks syntactically just like the diagram under the nlab page for "Cech nerve". I think that's the full story, anyways.
Edit: it's not quite the same actually. In the bar construction diagram you get TA -> A at the end but in the Cech nerve diagram you get $U \times_X U \rightrightarrows U$, so there's one more arrow for some reason. Maybe I gotta give this one a little more thought!
Ok I gave it some more thought and while I haven't completely figured it out, I know something strange is going on. The bar construction diagram given on nlab is supposedly "taking place" in the EM category of T because it's supposed to be a simplicial object in the EM category of T, but it involves A, TA, and so on, which are not objects of the EM category but rather of the monad's base category. In fact, the last part of the bar construction diagram, TA -> A, is actually an algebra for T, and so should be a single object in the EM category, but here it is drawn out, which would automatically lead one to believe it is "taking place" in the monad's base category. Certainly, at least to someone easily confusable like me, the diagram on the nlab is pretty misleading!
But I'm thinking that if I can get past this confusion maybe the Cech nerve thing would make more sense?
John Onstead said:
John Baez said:
Do you see how to get the Cech nerve by applying the bar construction to the monad I described in this series of posts? That would be a good exercise.
In a way. If you substitute the T in the diagram on the nLab "bar construction" page with U ×_X −, then you get something that looks syntactically just like the diagram on the nLab page for "Cech nerve". I think that's the full story, anyways.
Edit: it's not quite the same actually. In the bar construction diagram you get TA → A at the end, but in the Cech nerve diagram you get U ×_X U ⇉ U, so there's one more arrow for some reason. Maybe I gotta give this one a little more thought!
You seem to be running into all the confusions I run into when I think about this business after a long break. That's good: they probably make the difference between almost understanding this stuff and fully understanding it.
Are you familiar with the difference between a [[simplicial object]] and an [[augmented simplicial object]]? I think that's what you're encountering here.
A simplicial object has an object of vertices, an object of edges, an object of triangles, an object of tetrahedra etc., and various maps between them. In particular there are two maps from the object of edges to the object of vertices, since an edge has two endpoints. So you seem to be describing the Cech nerve as a simplicial object, with two maps from U ×_X U to U.
An augmented simplicial object goes a bit further. It also has an object of (-1)-simplices! There is just one map from the object of vertices to the object of (-1)-simplices. So you seem to be describing the bar construction as an augmented simplicial object. And that sounds right to me.
There is a way to turn an augmented simplicial object into a simplicial object, simply by discarding the object of (-1)-simplices. But there's also a very interesting story about what benefits arise from working with an augmented simplicial object! Are you familiar with that story?
John Onstead said:
Ok I gave it some more thought and while I haven't completely figured it out, I know something strange is going on. The bar construction diagram given on the nLab is supposedly "taking place" in the EM category of T because it's supposed to be a simplicial object in the EM category of T, but it involves A, TA, and so on, which are not objects of the EM category but rather of the monad's base category.
Why do you say that? A is an algebra of some monad T on some category E, which can be seen as an object of the Eilenberg-Moore category E^T. Of course any algebra has an underlying object in the monad's base category, and I guess that's what you're seeing when you see the letter A. But we don't usually use a special notation to distinguish between the two! So to tell which category we're in, we have to look at the morphisms, and see if they are algebra morphisms or merely morphisms of their underlying objects.
So we have four choices to keep in mind: are we thinking about a simplicial object or an augmented simplicial object, and are we thinking of it in E^T or in E?
John Baez said:
A simplicial object has an object of vertices, an object of edges, an object of triangles, an object of tetrahedra etc., and various maps between them. In particular there are two maps from the object of edges to the object of vertices, since an edge has two endpoints. So you seem to be describing the Cech nerve as a simplicial object, with two maps from U ×_X U to U.
An augmented simplicial object goes a bit further. It also has an object of (-1)-simplices! There is just one map from the object of vertices to the object of (-1)-simplices. So you seem to be describing the bar construction as an augmented simplicial object. And that sounds right to me.
I think that clears it up! I figured this confusion was due to inconsistencies between nLab pages (different authors, different times, etc.). The nLab page for the bar construction does have "(augmented)" in parentheses, but I'm only noticing this now; I completely missed it earlier! Sometimes things have to be spelled out very explicitly to me before I'm able to see them, otherwise I fly right by.
I'm not sure what the benefits of working with augmented simplicial objects over typical simplicial objects are, other than that they can encode cocones (and so colimits) of typical simplicial objects.
I figured this confusion was due to inconsistencies between nlab pages (different authors, different times, etc.).
If there's an inconsistency, or even something unclear, I should fix it.
The nlab page for bar construction does have (augmented) in parenthesis but I'm only noticing this now, I completely missed it earlier!
At the very least I should remove those parentheses. The difference is important.
I'm not sure what the benefits of working with augmented simplicial objects over typical simplicial objects are.
Let's do simplicial sets just for concreteness, and see what an 'augmented' simplicial set amounts to.
In an augmented simplicial set:
1) we have a simplicial set but also a set of (-1)-simplices
2) each vertex (0-simplex) is mapped to some (-1)-simplex
3) two vertices with an edge between them are mapped to the same (-1)-simplex
Part 3) follows from the (augmented) simplicial identities, as explained in the nLab page [[augmented simplicial set]].
In fact, according to that nLab page, 1)-3) are equivalent to the full definition of an augmented simplicial set.
If two vertices are connected by an edge we say they're in the same connected component of a simplicial set.
So, 1)-3) say an augmented simplicial set is a simplicial set together with a set whose elements are called (-1)-simplices, where each vertex is mapped to some (-1)-simplex and all the vertices in a connected component are mapped to the same (-1)-simplex.
This means any simplicial set becomes an augmented simplicial set in a canonical way, where the (-1)-simplices are the connected components!
But we could also have an augmented simplicial set where the vertices in several different connected components get mapped to the same (-1)-simplex.
So, up to isomorphism, we can always say a (-1)-simplex is a disjoint union of connected components.
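For concreteness, here is a small computational sketch of the canonical augmentation by connected components, truncated to vertices and edges; the example simplicial set and all helper names are made up:

```python
# A sketch of the canonical augmentation of a simplicial set by its set of
# connected components, truncated to vertices and edges.

def connected_components(vertices, edges):
    """Union-find: map each vertex to a canonical representative of its component."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for v, w in edges:
        parent[find(v)] = find(w)
    return {v: find(v) for v in vertices}

# Two disjoint triangles: one connected component each.
vertices = ['a', 'b', 'c', 'x', 'y', 'z']
edges = [('a', 'b'), ('b', 'c'), ('a', 'c'), ('x', 'y'), ('y', 'z'), ('x', 'z')]

# The augmentation maps each vertex (0-simplex) to a (-1)-simplex.
aug = connected_components(vertices, edges)

# Axiom 3): both endpoints of every edge hit the same (-1)-simplex.
assert all(aug[v] == aug[w] for v, w in edges)
# Here the set of (-1)-simplices has exactly two elements.
assert len(set(aug.values())) == 2
```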
John Baez said:
2) each vertex (0-simplex) is mapped to some (-1)-simplex
Doesn't this just mean there exists a function from the set of 0-simplices to the set of (-1)-simplices?
John Baez said:
But we could also have an augmented simplicial set where the vertices in several different connected components get mapped to the same (-1)-simplex.
So, up to isomorphism, we can always say a (-1)-simplex is a disjoint union of connected components.
Ah this makes sense! This also clarifies that this isn't some sort of construction where we "quotient out" by the equivalence relation defined by whether two 0-simplices are connected by an edge.
John Onstead said:
John Baez said:
2) each vertex (0-simplex) is mapped to some (-1)-simplex
Doesn't this just mean there exists a function from the set of 0 simplices to the set of (-1)-simplices?
Yes, it's a more informal way of saying exactly the same thing.
John Baez said:
But we don't usually use a special notation to distinguish between the two! So to tell which category we're in, we have to look at the morphisms, and see if they are algebra morphisms or merely morphisms of their underlying objects.
I see! This multi-use notation always ends up throwing me off in some way. Indeed, it seems the nLab is using A to refer to the algebra, and thus an object of EM(T). But with this problem solved another pops up. When you go down to the "Definition" section, it gives how to make the bar construction. Given a monad T on a category C, you get a comonad T' on the EM category: if T = U ∘ F, then T' = F ∘ U, with F and U being the adjoint pair defining this monadic adjunction. The nLab page then states that the opposite augmented simplex category is the "walking comonoid", so there exists a functor Δ_a^op → [EM(T), EM(T)] that selects T' for [1], T' ∘ T' for [2], and so on. There is then an evaluation functor [EM(T), EM(T)] → EM(T) that sends an endofunctor to its value on some fixed object, which would be an object of EM(T) and thus an algebra.
Fixing an object/algebra A of EM(T) for this evaluation functor, composing the two functors then gives a diagram whose objects are chains of T' evaluated on the algebra A. This indeed gives exactly the diagram at the top of the nLab page (I think; at least on objects this seems to be the case), but with a catch: the T's in the diagram should be T' instead! In other words, the page seems to be confusing T and T', which itself is confusing me. You can change a few things around to get the diagram with T's: for instance, taking the non-opposite augmented simplex category, you can find a functor Δ_a → [C, C] and then an evaluation functor at some object B, [C, C] → C; composing these would then give the desired diagram (with B instead of the algebra A), but its destination is now C instead of EM(T)!
Yet again, your puzzlement is getting into the heart of what we need to understand to truly understand the bar construction!
This is also connected to the question I didn't have time to answer yet: why is it so much better to think of the bar construction of a monad as giving an augmented simplicial object in the Eilenberg-Moore category E^T, rather than a mere simplicial object? I got around to saying what an augmented simplicial object has that a simplicial object doesn't, but not what we use this stuff for!
Briefly, the bar construction gives an augmented simplicial object in E^T, as you just described. The forgetful functor R: E^T → E maps that to a simplicial object in E. But we get more: the simplicial object in E comes with an 'acyclic structure', as defined in Def. 3.1 here!
I want to explain all this much better, but I'm busy right now....
I also want to understand these comments of yours:
Fixing an object/algebra A of EM(T) for this evaluation functor, composing the two functors then gives a diagram where the objects are some chain of T' evaluated on some algebra A. This indeed gives exactly the diagram given at the top of the nlab page (I think, at least on the objects this seems to be the case) but with a catch: the T's in the diagram should be T' instead! In other words, the page seems to be confusing T and T', which itself is confusing me.
This sentence also confused me: "the T's in the diagram should be T' instead!" But I see: you mean each instance of T should be T'. I think you are again being a bit too attached to type-checking instead of guessing what the shorthand means.
Let's put it this way. We have a right adjoint R: E^T → E
from the Eilenberg-Moore category E^T (you're calling it EM(T)) of the monad T to the category E the monad is on. This has a left adjoint L: E → E^T.
We start with an algebra, i.e. an object A ∈ E^T. From this we can form an augmented simplicial object ⋯ ⇶ LRLRA ⇉ LRA → A
where the arrows I drew come from the algebra structure and the counit of the adjunction, a natural transformation ε: LR ⇒ 1.
This is indeed an augmented simplicial object in E^T, since all the arrows (and also the arrows I didn't draw) are morphisms in E^T.
When we apply R to everything we get an augmented simplicial object in E: ⋯ ⇶ RLRLRA ⇉ RLRA → RA
but now we can put additional arrows in the diagram! For example, not only do we have the morphism RLRA → RA coming from the algebra structure, we also have a second morphism RA → RLRA coming from the unit η: 1 ⇒ RL!
These additional arrows (and the equations they obey) mean that we have an acyclic augmented simplicial object in E.
This is where all my remarks on the meaning of augmented simplicial objects become important!
In the case where E = Set, we see that we have an augmented simplicial set
with one connected component for each point of RA.
Furthermore, each connected component has a distinguished vertex. These come from the map RA → RLRA that arises from the unit of our adjunction.
Finally - and this takes the nLab a while to show - each of the connected components is contractible in the usual sense of topology, now generalized to simplicial sets. Among other things, this means every vertex in our (augmented) simplicial set comes equipped with a specified edge from it to the distinguished vertex in its component.
This contractibility, or 'acyclic structure', is the real point of the bar construction.
But anyway, I hope you see that we get augmented simplicial objects, first in E^T, and then in E.
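If a toy example helps, here is the free monoid monad T(A) = lists over A on Set in Python, with the algebra A = (integers, +); the function names are mine, not standard, and the last two asserts exhibit the 'additional arrow' coming from the unit:

```python
# Toy model: free monoid monad T(A) = lists over A on Set, with the
# algebra A = (integers, +). Function names are illustrative.

def alpha(xs):        # algebra structure map  T A -> A  (here: sum a list)
    return sum(xs)

def mu(xss):          # monad multiplication  T T A -> T A  (concatenate)
    return [x for xs in xss for x in xs]

def T_alpha(xss):     # T applied to alpha:  T T A -> T A  (sum each inner list)
    return [alpha(xs) for xs in xss]

def eta(x):           # monad unit  A -> T A  (singleton list)
    return [x]

# Bottom of the bar construction: two face maps T T A => T A, then alpha to A.
xss = [[1, 2], [3], [4, 5]]
assert alpha(mu(xss)) == alpha(T_alpha(xss))  # simplicial identity = associativity

# The 'additional arrow' visible after applying R: the unit gives a map back
# up, T A -> T T A, satisfying the extra-degeneracy equations:
xs = [1, 2, 3]
assert mu(eta(xs)) == xs                   # mu . eta_{TA} = id
assert T_alpha(eta(xs)) == eta(alpha(xs))  # naturality: T(alpha) . eta_{TA} = eta_A . alpha
```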
Thanks for your help, I think that clears it up...
John Baez said:
We start with an algebra, i.e. an object A ∈ E^T. From this we can form an augmented simplicial object
⋯ ⇶ LRLRA ⇉ LRA → A
Yes, this was indeed what I had in mind. I just put T' = LR and compared this with the nlab diagram, which suggested T = RL instead, to find the inconsistency.
John Baez said:
When we apply R to everything we get an augmented simplicial object in E:
⋯ ⇶ RLRLRA ⇉ RLRA → RA
This appears to be what the nLab calls the "bar resolution" for a bar construction. If you "evaluate" this simplicial object in E, you get ⋯ ⇶ TTA ⇉ TA → A, where I've replaced A-as-algebra with A the underlying object in E of that algebra (so RA with respect to the algebra A). This basically gets you the diagram on the nLab (minus the extra arrows you mention in the next paragraph). So if the diagram on the nLab is actually the bar resolution, not the bar construction, of the monad T, then it ought to say so explicitly, so that it doesn't lead more people like me into confusion! But other than that, I think that clears up my confusion: a monad T induces (at least) two simplicial objects, one in its home category and the other in EM(T), one being a "construction" and the other a "resolution", with both having interesting properties and playing interesting roles! (I'd be OK with calling both of these simplicial objects "bar constructions", especially since they both seem to be cases of a "two-sided bar construction", but I still take issue with the nLab article because it chooses to make the distinction between construction and resolution!)
(PS: I saw you edited the nlab article to remove the parenthesis! Thanks!)
John Baez said:
This contractibility, or 'acyclic structure', is the real point of the bar construction.
Very interesting! So it seems the contractibility is what allows the bar construction to be good at "being a resolution" of objects. That's how both these concepts tie together.
Exactly. Say we have a monad on Set. The bar construction takes any algebra A of the monad and "puffs it up", replacing all equations by edges, all equations between equations by triangles, and so on. But it does so in a way that each element of the original algebra gets replaced by a contractible simplicial set. In fact it's a simplicial set with an "A-acyclic structure", which is like a choice of contraction of each component down to an element of your original algebra A.
And the great theorem is that the bar construction takes any algebra A of your monad and produces the initial A-acyclic simplicial algebra.
All this works for monads on categories other than Set, but we have to be a bit more abstract: their algebras may not have a set of 'elements', and their simplicial algebras may not have a set of 'vertices', etc.
All the stuff I just mentioned is a way of explaining the point of a [[resolution]]: it's a way of taking a gadget and 'maximally puffing it up' while still making sure the result is equivalent in a certain sense to the original gadget.
John Baez said:
All the stuff I just mentioned is a way of explaining the point of a [[resolution]]: it's a way of taking a gadget and 'maximally puffing it up' while still making sure the result is equivalent in a certain sense to the original gadget.
Now that I feel at least some clarification on bar constructions, I am interested in doing a "deep dive" into learning more on general resolutions, especially simplicial resolutions, since they seem to be behind a lot of the stuff that we were talking about: cech, bar constructions, local-global, descent, etc. They also seem to be extremely interesting constructions on their own. But I'm wondering if this discussion should take place on another thread, since this one is mainly about effective descent?
Sure, it's good to start a new thread if you want to talk about resolutions.
I don't think we've run out of things to talk about here. I don't think we've talked yet much about the historical origins of descent theory, or what descent is used for, or what its deep inner meaning is. I also don't think we've talked about examples of 'non-effective' descent, to better understand that word 'effective' - this is something I would struggle with.
But the concept of 'resolution' is also very interesting.
John Baez said:
I don't think we've run out of things to talk about here. I don't think we've talked yet much about the historical origins of descent theory, or what descent is used for, or what its deep inner meaning is. I also don't think we've talked about examples of 'non-effective' descent, to better understand that word 'effective' - this is something I would struggle with.
We can eventually get there. But what I've found is that learning this field runs the risk of "concept overload", where so many related concepts are thrown out all at once (i.e. descent object vs. descent morphism vs. category of descent data vs. descent). In order to avoid drinking from a fire hose, maybe it's best to introduce each concept one by one, so I wanted to start with resolutions and slowly expand out from there, making sure each connection is well developed before heading to the next idea. I hope that's not too bad of an idea!
I don't think that's bad at all. I find it useful to think historically, so I can't resist laying out a very simplified history of what we're talking about:
1) People had a lot of interesting problems to solve in algebra, geometry and topology.
2) Noether figured out how to solve lots of them using "homology and cohomology groups".
3) She discovered that homology and cohomology groups come from "chain and cochain complexes".
4) People discovered many ways to build chain and cochain complexes; Noether's student Mac Lane worked with Eilenberg on these and realized that these "ways" were actually functors.
5) People discovered an important way to build chain and cochain complexes used a trick called "resolutions". Eilenberg and Mac Lane discovered an important family of resolutions all called the "bar resolution".
6) Daniel Kan discovered the concept of "simplicial object" and realized that chain complexes are simplicial objects in Ab, the category of abelian groups, while cochain complexes are simplicial objects in Ab^op.
7) People realized that the concept of "resolution" is very general: given an object in a category C, it has resolutions that are simplicial objects in C.
8) People realized that the "bar resolution" makes sense whenever C is the Eilenberg-Moore category of some monad on some category D. The process of building the bar resolution takes advantage of a beautiful fact: any comonad on any category gives a simplicial object in the endofunctor category of that category.
It may seem as if by stage 6) we've completely lost sight of stage 1)! But in fact, every later stage helped us get better and better at understanding the original problems in algebra, geometry and topology that people were thinking about in stage 1). And sociologically, this is a big part of what justified all the later stages.
I always enjoy learning about the historical context because it provides a good sense of the motivation for all the different kinds of definitions in math, even the most abstract ones! In a way that helps to ground everything. This also lays out a good map/path for what I want to explore, though I would likely be taking the reverse direction (my first question on resolutions might relate to the last few points here in fact!)
The way category theory works, the final most polished concepts are often the simplest, though the most abstract.
I can see you like starting at the abstract end.... so much so that I'm yearning to talk about the concrete beginnings of this particular subject, and the amazing way that the abstract concepts were able to solve concrete problems. But I can think about that myself.
John Baez said:
I also don't think we've talked about examples of 'non-effective' descent, to better understand that word 'effective' - this is something I would struggle with.
same!
I know that faithfully flat descent describes when you can obtain a "thing" on a scheme by giving that thing on a faithfully flat cover along with some gluing/descent data, but I'm not too sure what to say about changing "faithfully flat" for "effective"
is it true that we just say "effective epi" instead of "faithfully flat"?
Looking at FGA, effective gluing data is (roughly) gluing data that secretly comes from the coequaliser (but not necessarily actually the coequaliser, just something of the right shape). Here's what I mean by that.
If we have a fibred category F over C, and morphisms u, v: U′ → U in C, then we define a gluing data (with respect to the morphisms u and v) on an object ξ of F(U) to be an isomorphism u*ξ ≅ v*ξ in F(U′).
If we now have some f: U → V in C, then we say that a gluing data on ξ is effective (with respect to the morphism f) if ξ is isomorphic (with its gluing data, which means "in such a way that some square commutes") to f*η for some η in F(V).
The coequaliser now turns up in a lemma: if f: U → V is the coequaliser of u, v: U′ ⇉ U, then the category of objects of F(U) endowed with effective gluing data is equivalent to the category F(V). (Without saying the word "effective", we only get a fully faithful functor from the latter to the former.)
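As a sanity check on that lemma in the simplest possible setting: for a surjection of finite sets, the coequaliser of the kernel pair recovers the base, up to isomorphism. All names below are made up:

```python
# Finite-set check: for a surjection p: U -> X, the coequaliser of the
# kernel pair U x_X U => U is (isomorphic to) X.

U = ['u1', 'u2', 'u3', 'u4']
X = ['a', 'b']
p = {'u1': 'a', 'u2': 'a', 'u3': 'b', 'u4': 'b'}

# Kernel pair of p: pairs of elements of U agreeing under p.
kernel_pair = {(u, v) for u in U for v in U if p[u] == p[v]}

# Coequalise the two projections: quotient U by u ~ v iff (u, v) is in the
# kernel pair (here this relation is already an equivalence relation).
classes = {u: frozenset(v for v in U if (u, v) in kernel_pair) for u in U}
coeq = set(classes.values())

# The quotient map U -> coeq has the same fibres as p, so coeq = X up to iso.
assert len(coeq) == len(X)
assert all(len({p[v] for v in c}) == 1 for c in coeq)
```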
It's taking me a while to absorb these definitions, and then your question, but it's happening. Here's what the nLab article [[effective descent]] says - I find it helpful to try to see whether what you're saying is secretly the same:
Let C be a [[category]] with [[pullbacks]] and [[coequalizers]]. For any [[morphism]] f: A → B, we have an [[internal category]] A ×_B A ⇉ A defined by (the [[kernel pair]] of f). The [[category of descent data]] for f is the category Desc(f) (the "[[descent object]]") of internal diagrams on this internal category. Explicitly, an object of Desc(f) is a morphism g: X → A together with an action X ×_B A → X satisfying suitable axioms.
The evident internal functor from this internal category to B (viewing B as a discrete internal category) induces a comparison functor C/B → Desc(f). We say that f is:
- a descent morphism if this [[comparison functor]] is [[fully faithful]], and
- an effective descent morphism if this comparison functor is an [[equivalence of categories]].
It is a little unfortunate that the more important notion of effective descent has the longer name, but it seems unwise to try to change it (although the [[Elephant]] uses "pre-descent" and "descent").
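To make "descent data" very concrete in the simplest case, here is a finite-set sketch for a surjection p: U → X in Set (all sets, fibres, and bijections below are made up): a family over U plus coherent transition bijections glues to a family over X:

```python
# Descent data for a surjection p: U -> X in finite sets. A 'bundle' over U
# is a family of fibres S[u]; descent data is a family of bijections
# phi[(u, v)]: S[u] -> S[v] for p(u) == p(v), satisfying the cocycle condition.

U = ['u1', 'u2', 'u3']
X = ['a', 'b']
p = {'u1': 'a', 'u2': 'a', 'u3': 'b'}

S = {'u1': [0, 1], 'u2': ['x', 'y'], 'u3': [True]}

phi = {
    ('u1', 'u1'): {0: 0, 1: 1},
    ('u2', 'u2'): {'x': 'x', 'y': 'y'},
    ('u3', 'u3'): {True: True},
    ('u1', 'u2'): {0: 'x', 1: 'y'},
    ('u2', 'u1'): {'x': 0, 'y': 1},
}

# Cocycle condition: phi[v, w] after phi[u, v] equals phi[u, w] where defined.
for (u, v) in phi:
    for (v2, w) in phi:
        if v2 == v and (u, w) in phi:
            assert all(phi[(v, w)][phi[(u, v)][s]] == phi[(u, w)][s] for s in S[u])

# Effectivity: glue by picking, for each x in X, the fibre over any u above x;
# the cocycle condition makes the choice irrelevant up to coherent bijection.
glued = {x: S[next(u for u in U if p[u] == x)] for x in X}

assert len(glued['a']) == 2 and len(glued['b']) == 1
```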
I want to see if an object in the category of "descent data" in the nLab article is the same as what you're calling a "gluing datum".
These are some interesting questions, though I don't have much to add specifically. But as a quick heads up, I opened a new thread in parallel here.
I'm looking back at the nLab page [[monadic descent]], in particular section 2, Definition.
It starts by assuming given a category C and a bifibration on it. Then for a morphism f: X → Y, it claims there is an associated adjoint triple f_! ⊣ f^* ⊣ f_* between fibers of the bifibration. But shouldn't it just be an adjoint pair f_! ⊣ f^*?
From @John Baez's explicit unwinding of the definitions in the case of the codomain fibration on Top, the functor f_* doesn't appear to be involved at all in the story (and in fact it may or may not exist, even in this example, depending on the map f, I think).
If f_* does exist, then I guess there should be an alternative version of the story involving it, but then we should call it "comonadic descent", maybe?
Yeah, the triple should really only be associated to a trifibration. I'm not sure why they write that.
Good point! Maybe we should fix it.