Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: effective descent


view this post on Zulip John Baez (May 08 2024 at 17:07):

Tim Hosgood said:

not to derail this thread, but there are lots of parts of descent (in the more traditional sense) that I still don't really understand. could anybody explain why we would say that a morphism f\colon x\to y in a category C (with pullbacks) should be said to have "effective descent" if the change of base f^*\colon C/y\to C/x is monadic?

I like this question because it's a good excuse for me to learn some stuff. So let me explain it the way I wish someone had explained this to me. I may not tell you anything you don't already know! In particular I won't answer the historical question of why we say such a morphism has "effective" descent, which seems to require going back to Definition 1.7 in Grothendieck's Technique de descente et théorèmes d’existence en géométrie algébrique. I. Généralités. Descente par morphismes fidèlement plats and figuring out what he was thinking. Instead I'll just say a bit about why this concept is nice.

First, to set the stage for people who don't know what's going on, we'll start by letting C be any category - but we'll think of it as a category of 'spaces' of some sort. This lets us think of a morphism p: a \to x as a 'bundle' over x, so we call the [[slice category]] C/x the category of bundles over x.

Next, given any morphism

f : x \to y

post-composing with f defines an obvious functor sending p: a \to x to f \circ p : a \to y, which we call

f_! : C/x \to C/y

In our way of talking, bundles over x can be 'pushed forward' to give bundles over y.

But now assume C has pullbacks! Then all these functors f_! have right adjoints

f^* : C/y \to C/x

In our way of talking, now we can pull back any bundle over y along f to get a bundle over x.

Adjoint functors always give monads. So we also get a monad

T_f = f^* f_! : C/x \to C/x

By general abstract nonsense each object in C/y gives an algebra for this monad!

Spelling this out a bit, say we have a bundle over y, say p \in C/y, which is just a morphism p: b \to y. Then we get a bundle over x, namely f^* p. But this comes with a morphism

f^* f_! (f^* p) \to f^* p

simply because there's a natural transformation f_! f^* \to 1, the counit of the adjunction. So we have

T_f (f^\ast p) \to f^\ast p

and in fact this morphism makes f^* p into an algebra of the monad T_f.
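This construction can be made concrete with finite sets. Below is a minimal Python sketch (an editor's illustration: the dict encoding and the example data for f and q are assumptions, not anything from the thread) of f_!, f^*, the monad T_f = f^* f_!, and a check that the counit really makes f^* q into a T_f-algebra:

```python
# A "bundle" p: E -> x is a dict sending each fiber element to its base point.

def pushforward(f, p):
    """f_! : C/x -> C/y, post-compose the bundle projection with f."""
    return {e: f[a] for e, a in p.items()}

def pullback(f, q):
    """f^* : C/y -> C/x, the fiber product: its points are pairs (a, e)
    with f(a) = q(e), projecting to the first coordinate."""
    return {(a, e): a for a in f for e in q if f[a] == q[e]}

def T(f, p):
    """the monad T_f = f^* f_! : C/x -> C/x"""
    return pullback(f, pushforward(f, p))

# f : x -> y and a bundle q : b -> y  (hypothetical example data)
f = {'a1': 'y1', 'a2': 'y1', 'a3': 'y2'}
q = {'e1': 'y1', 'e2': 'y1', 'e3': 'y2'}

fq = pullback(f, q)      # f^* q, a bundle over x

# the counit-induced algebra map T_f(f^* q) -> f^* q forgets the middle point:
# (a', (a, e)) |-> (a', e), legal because f(a') = f(a) = q(e)
alg = {(a2, s): (a2, s[1]) for (a2, s) in T(f, fq)}

# unit law: alg . eta = id, where eta sends s to (base(s), s)
assert all(alg[(a, s)] == s for s, a in fq.items())

# associativity: alg . T(alg) = alg . mu on T^2(f^* q),
# where mu forgets the middle point: (a2, (a1, s)) |-> (a2, s)
assert all(alg[(a2, s)] == alg[(a2, alg[(a1, s)])]
           for (a2, (a1, s)) in T(f, T(f, fq)))
```

With bundles as dicts, the monad laws reduce to bookkeeping with pairs, and both algebra axioms hold on the nose.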

view this post on Zulip John Baez (May 08 2024 at 17:10):

Summarizing: given a morphism

f: x \to y

every bundle over y gives a bundle over x that's an algebra of the monad T_f.

Even better, this trick gives a functor from the category of bundles over y to the category of algebras of the monad T_f. And if this functor is an equivalence, we say f has [[effective descent]].

view this post on Zulip John Baez (May 08 2024 at 17:13):

When this happens, it's very nice because it provides an answer to the question "what extra structure must a bundle over x have for it to have come from pulling back some bundle over y along f?" This extra structure is being an algebra of the monad T_f.

view this post on Zulip John Baez (May 08 2024 at 17:22):

I may have gotten some things backwards here, because the question I'm answering seems to be about "ascent" rather than "descent". I easily mix up left and right, so I could have actually gotten the math wrong - and on top of that, there is not only a theory of "monadic descent", there's also a theory of "comonadic descent".

view this post on Zulip John Baez (May 08 2024 at 17:23):

I will come back to this later and try to fix mistakes and go further.

view this post on Zulip Tim Hosgood (May 08 2024 at 17:32):

thanks!

view this post on Zulip Tim Hosgood (May 08 2024 at 17:33):

I guess part of my discomfort/confusion is that I don't really have a good understanding of why we care about monadicity in general, especially when it comes to stuff like this which feels very geometric/topological

view this post on Zulip Tim Hosgood (May 08 2024 at 17:34):

John Baez said:

In particular I won't answer the historical question of why we say such a morphism has "effective" descent, which seems to require going back to Definition 1.7 in Grothendieck's Technique de descente et théorèmes d’existence en géométrie algébrique. I. Généralités. Descente par morphismes fidèlement plats and figuring out what he was thinking.

(for anybody who does want to, you can find an english translation here :wink: )

view this post on Zulip Kevin Carlson (May 08 2024 at 17:46):

I’m confused by that: it doesn’t seem like understanding why we care about monadicity in general is necessary for this particular problem, where it turns out that monadicity exactly means that a bundle over Y is the same thing as a descent datum for f.

view this post on Zulip John Baez (May 08 2024 at 19:04):

Maybe @Tim Hosgood hasn't gone through the exercise of working out the stuff I was talking about in a bit more detail. When we do that, and see what an algebra of the monad T_f actually looks like, we'll see it's just what we'd hope for: a way of describing a bundle in terms of 'charts' and 'transition functions' obeying the 'cocycle condition'

g_{ij} g_{jk} = g_{ik}

or more precisely a slight generalization of that, which reduces to it when our morphism f: x \to y is a covering of some space by open sets ('charts').

I intend to go through this exercise, though my break just now involved going out to a wine bar to help celebrate the birth of Mike Fourman's new grandson, so I may not do it tonight.

view this post on Zulip Tim Hosgood (May 08 2024 at 20:15):

I'd also then like to know what happens with the comonad f_! f^* :wink:

view this post on Zulip Tim Hosgood (May 08 2024 at 20:17):

I'm used to seeing things like (co)homology (with proper support) pop up when you look at these (co)monads of six-functor-looking-things, but I've never heard anybody explain these to me using the word "monad", let alone "monadic". Maybe this is just one of the horrendous notational conflicts that happens between topos theorists and algebraic geometers though, where we're using the same upper star/lower star/upper shriek/lower shriek to mean different things

view this post on Zulip Tim Hosgood (May 08 2024 at 20:19):

(I remember learning that a topos theorist wrote f^* for what I called f_*, or vice versa, and losing any hope of understanding; for some people, f_! is something that always exists but might simplify to f_* for proper morphisms, and for others it's something that doesn't even exist unless f is proper)

view this post on Zulip Kevin Carlson (May 08 2024 at 20:38):

OK, I think I'd get something out of this too, so here goes... let's suppose f: X \to Y is a "covering" of some kind. Then the monad T = f^* f_! : C/X \to C/X sends p: E \to X to fp: E \to Y and then pulls that back along f, so sends p to the pullback E \times_Y X, which is like the "bundle" whose fiber over some x \in X is the sum of all the fibers of p over points in the fiber of f over f(x). This kind of thing is always easiest for me to see when Y is terminal, so that T(p) is just E \times X \to X.

view this post on Zulip Kevin Carlson (May 08 2024 at 20:38):

If f: U+V \to Y is really an open cover of topological spaces, then we can describe T more explicitly: Tp agrees with p over U \setminus V and V \setminus U, while over each of the two copies of U \cap V, you have to sum on a copy of what p was doing over the other copy of U \cap V. This is enough to see that a monad algebra structure involves a bundle over U, a bundle over V, and maps between their respective restrictions to U \cap V, which is starting to ring a bell!
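This "doubling over the overlap" can be checked in a toy model. The Python sketch below (an editor's finite model, not Kevin's notation: the cover is f : U + V -> Y with Y = {1, 2, 3}, U = {1, 2}, V = {2, 3}) verifies that Tp agrees with p away from the overlap and gains a summand over each copy of it:

```python
# points of the coproduct U + V, tagged by chart name
UV = [('U', 1), ('U', 2), ('V', 2), ('V', 3)]
f = {w: w[1] for w in UV}          # the covering map down to Y = {1, 2, 3}

# a bundle p over U + V with exactly one fiber element per point
p = {('e', w): w for w in UV}

def monad_T(f, p):
    """T = f^* f_! : the fiber of Tp over a chart point w collects the
    fibers of p over every chart point lying over the same point of Y."""
    return {(w, e): w for w in f for e in p if f[w] == f[p[e]]}

Tp = monad_T(f, p)

def fiber(bundle, w):
    return [e for e, base in bundle.items() if base == w]

# away from the overlap, Tp agrees with p ...
assert len(fiber(Tp, ('U', 1))) == len(fiber(p, ('U', 1))) == 1
assert len(fiber(Tp, ('V', 3))) == len(fiber(p, ('V', 3))) == 1
# ... while over each copy of the overlap U ∩ V it gains an extra summand
assert len(fiber(Tp, ('U', 2))) == 2
assert len(fiber(Tp, ('V', 2))) == 2
```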

view this post on Zulip Kevin Carlson (May 08 2024 at 20:40):

(The unit axiom of the monad algebra says that the monad algebra doesn't do anything except on these new summands.)

view this post on Zulip Kevin Carlson (May 08 2024 at 20:41):

Now I'm guessing that, if X = U+V+W, then the associativity law for the monad algebra gives us the cocycle conditions.

view this post on Zulip Kevin Carlson (May 08 2024 at 20:56):

I think I need more notation to write down Tp in this case. Let's say p: E \to X = p_U + p_V + p_W where p_U: E_U \to U and so on. Then before including W, we just calculated that Tp = p + p_U|_V + p_V|_U. In particular (Tp)_U = p_U + p_V|_U, and similarly for (Tp)_V. The algebra map, using unitality, amounts to choosing maps \varphi_{UV}: p_U|_V \to p_V and \varphi_{VU}: p_V|_U \to p_U, which should eventually be the guys we subject to the cocycle condition.

view this post on Zulip Kevin Carlson (May 08 2024 at 20:57):

This also lets us calculate T^2 p as p + p_U|_V + p_V|_U + (p_U + p_V|_U)|_V + (p_V + p_U|_V)|_U = p + p_U|_V + p_V|_U + p_U|_V + p_V|_U|_V + p_V|_U + p_U|_V|_U. This is gross but manageable: over U, we see T^2 p has a p_U, then a p_V|_U lying over the U \cap V part, copied twice, and finally the odd p_U|_V|_U term, where we took p_U, restricted to the intersection, moved that over V, then restricted again to the intersection and put it back over U. Thus p_U|_V|_U is equal to p_U over U \cap V and empty over U \setminus V. Anyway, we can see the multiplication T^2 p \to Tp is the identity on p, does a comultiplication to terms that are copied twice in the sum, and just includes the p_U|_V|_U term back into p_U and similarly for p_V|_U|_V. I'm having trouble spotting whether associativity actually says anything for this case...

view this post on Zulip Kevin Carlson (May 08 2024 at 21:01):

OK, yeah, it looks like the associativity condition on the monad algebra, when you apply it to the weird p_V|_U|_V terms, says exactly that \varphi_{UV} = \varphi_{VU}^{-1}, because you can either send p_V|_U|_V right back into p_V via the monad multiplication, whence nothing happens to it in the algebra, or you can send it into p_U|_V via \varphi_{VU}|_V, and then back to p_V using \varphi_{UV}, so these two must compose to the identity. Cool!
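The two identities being extracted here - \varphi_{UV} = \varphi_{VU}^{-1}, and (once W is back) the cocycle condition - can be sanity-checked with group-valued transition data. A small Python sketch (an editor's toy example with S_2-valued transitions; the "trivializations" h_a are made up) confirming both:

```python
# transition data valued in the symmetric group S_2: permutations of {0, 1}
# written as index tuples, composed by (g . h)(i) = g[h[i]]
def compose(g, h):
    return tuple(g[i] for i in h)

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

# made-up trivializations h_a over three charts a, b, c; the transition
# functions g_ab = h_a . h_b^{-1} then satisfy the expected identities
h = {'a': (1, 0), 'b': (0, 1), 'c': (1, 0)}
g = {(i, j): compose(h[i], inverse(h[j])) for i in h for j in h}

# phi_UV = phi_VU^{-1}, i.e. g_ab = g_ba^{-1} on double overlaps
assert all(g[(i, j)] == inverse(g[(j, i)]) for i in h for j in h)

# and the cocycle condition g_ab g_bc = g_ac on triple overlaps
assert all(compose(g[(i, j)], g[(j, k)]) == g[(i, k)]
           for i in h for j in h for k in h)
```

Transition functions of the form h_a h_b^{-1} satisfy both identities automatically, which is exactly what the algebra axioms are enforcing abstractly.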

view this post on Zulip Kevin Carlson (May 08 2024 at 21:02):

I'm feeling like that's about all of that I want to do in one session, but maybe somebody else, including a later self, wants to get the ball over the line by finding the cocycle conditions once we bring W back in!

view this post on Zulip Tim Hosgood (May 08 2024 at 21:47):

thanks for all the explicit calculations!

view this post on Zulip Tim Hosgood (May 08 2024 at 21:48):

now I'm super interested to know why unravelling the definition of "an algebra for the push-pull monad" gives exactly the same thing as unravelling the definition of "a point in the homotopy limit/totalisation of some simplicial set"

view this post on Zulip Tim Hosgood (May 08 2024 at 21:49):

the calculations you write here are really really the same as the ones that you get when you do the big calculation of what it means to be a point in a certain homotopy limit. what is the relation between monadic functors and homotopy limits?

view this post on Zulip Kevin Carlson (May 08 2024 at 21:50):

Bar construction? https://ncatlab.org/nlab/show/bar+construction

view this post on Zulip Tim Hosgood (May 08 2024 at 21:51):

is the Čech nerve given by the bar construction applied to some monad??

view this post on Zulip Kevin Carlson (May 08 2024 at 21:57):

Ah, yeah, I think it's the bar construction of this monad, applied to the identity bundle over U.

view this post on Zulip John Baez (May 08 2024 at 22:01):

Yes, exactly: the bar construction applied to this monad T_f = f^\ast f_! gives the Cech nerve. I'll do something easier now, which should make that very easy to believe.

view this post on Zulip John Baez (May 08 2024 at 22:30):

I'll return to my tale. I'll quickly review, while writing objects in C using capital letters to make them look more like 'spaces' to traditionally inclined mathematicians. :upside_down: Similarly, I will often call arbitrary morphisms in this category 'bundles'. And I'll do other things to make you think we're doing topology. But it's really just category theory.

Say we have any morphism f: U \to X in our category C. (Secretly think of U as the disjoint union of open sets covering X, with f describing the cover.) This gives a functor f_!: C/U \to C/X pushing forward any bundle over U along f to get a bundle over X.

This functor has a right adjoint f^* : C/X \to C/U sending any bundle over X, say

E \to X

to its pullback to U, namely

U \times_X E \to U

What does the monad T_f = f^* f_! do? It takes a bundle over U, say

P \to U

and pushes it forward along f

P \to U \to X

and then pulls the result back along f to get a bundle

U \times_X P \to U

view this post on Zulip John Baez (May 08 2024 at 22:36):

So we could call this monad U \times_X -. The multiplication for this monad is thus a natural transformation

U \times_X U \times_X - \Rightarrow U \times_X -

view this post on Zulip John Baez (May 08 2024 at 22:38):

In other words, for any bundle P \to U we get a map of bundles

U \times_X U \times_X P \to U \times_X P

view this post on Zulip David Michael Roberts (May 08 2024 at 22:41):

John Baez said:

Say we have any morphism f: U \to X in our category C. (Secretly think of U as the disjoint union of open sets covering X, with f describing the cover.) This gives a functor f^\ast : C/U \to C/X pushing forward any bundle over U along f to get a bundle over X.

This functor has a left adjoint f_! : C/X \to C/U sending any bundle over X, say

You've swapped f_! and f^* from the previous comments!

view this post on Zulip John Baez (May 08 2024 at 22:52):

Typical of me. I fixed it. I'll continue later - I'm falling asleep. But I hope people who know "Cech nerves" and "bar constructions" are starting to see them appear in my last comment.

view this post on Zulip John Baez (May 11 2024 at 09:21):

Let me try to wrap up what I was going to say.

We have a category C with pullbacks. Any morphism f: U \to X gives a monad on C/X sending an object

E \to X

to

U \times_X E \to X

view this post on Zulip John Baez (May 11 2024 at 09:23):

I will call an object of C/X a bundle over X and a morphism in C/X a bundle map. I will often write a bundle over X simply as E rather than E \to X, so I can write a bundle map simply as E \to E' rather than the commutative triangle it actually is.

view this post on Zulip John Baez (May 11 2024 at 09:25):

What does an algebra of our monad look like?

It's a bundle E over X with a bundle map

U \times_X E \to E

making the usual square in the definition of 'monad algebra' commute.

view this post on Zulip John Baez (May 11 2024 at 09:26):

This square says that the two composite morphisms going from the left object here to the right one are equal:

U \times_X U \times_X E \rightrightarrows U \times_X E \to E

view this post on Zulip John Baez (May 11 2024 at 09:54):

I.e. this diagram is a [[fork]], or maybe a 'cofork' - that's some jargon I need to get into my working vocabulary.

view this post on Zulip John Baez (May 11 2024 at 09:58):

Alas, while this diagram reminds me intensely of diagrams I often draw when playing with sheaves or gerbes or bundles, I'm not instantly seeing how to use the ideas here to define those concepts! So I'll have to fuss around a bit to get things to work out.

For example maybe instead of algebras of the monad T_f = f^* f_! I should be using coalgebras of the comonad C_f = f_! f^*. Or maybe instead of algebras or coalgebras I should be using pseudoalgebras or pseudocoalgebras. I'll see.

view this post on Zulip John Baez (May 11 2024 at 10:00):

For starters, just to bring this down to earth a bit, suppose C = \mathsf{Top} and f: U \to X describes an open cover of X by open sets U_\alpha, so

U = \bigsqcup_\alpha U_\alpha

and

f: \bigsqcup_\alpha U_\alpha \to X

is defined by all the inclusions U_\alpha \hookrightarrow X.

view this post on Zulip John Baez (May 11 2024 at 10:05):

Then an algebra of our monad T_f is a bundle E over X with a bundle map

U \times_X E \to E

but now

U \times_X E = \bigsqcup_\alpha E|_{U_\alpha}

where E|_{U_\alpha} is the usual notation for restricting the bundle E to the open set U_\alpha \subseteq X, i.e.

E|_{U_\alpha} := U_\alpha \times_X E

so

U \times_X E = \bigsqcup_\alpha \left( U_\alpha \times_X E \right) = \bigsqcup_\alpha E|_{U_\alpha}
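This decomposition is easy to verify in a finite toy model. The Python sketch below (an editor's example data; the "open sets" are just subsets of a three-point set) checks that U \times_X E has exactly the points of \bigsqcup_\alpha E|_{U_\alpha}:

```python
charts = {'a': {1, 2}, 'b': {2, 3}}   # a two-chart "cover" of X = {1, 2, 3}
U = [(name, x) for name, chart in charts.items() for x in chart]
f = {w: w[1] for w in U}              # the covering map from the coproduct to X

# a bundle E -> X, as a dict from fiber elements to base points
E = {'s1': 1, 's2': 2, 's3': 2, 's4': 3}

# U x_X E: pairs (w, e) with f(w) = E(e), projecting to the chart point w
UxE = {(w, e): w for w in U for e in E if f[w] == E[e]}

def restrict(name):
    """the restriction E|_{U_a}: keep the fiber elements sitting over U_a"""
    return {e: x for e, x in E.items() if x in charts[name]}

# the fiber product is the disjoint union of the restrictions
assert len(UxE) == sum(len(restrict(name)) for name in charts)
```

The same count works chart by chart, because pulling back along a coproduct of inclusions just sorts the fiber elements by which charts their base points lie in.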

view this post on Zulip John Baez (May 11 2024 at 10:07):

thanks to something about how pullbacks distribute over coproducts in \mathsf{Top} and many other similar categories. (What's the general abstract nonsense at work here?)

view this post on Zulip John Baez (May 11 2024 at 10:10):

So our elegant and terse fork

U \times_X U \times_X E \rightrightarrows U \times_X E \to E

can be expanded to something more grungy - yet intensely familiar to people who work with bundles, sheaves and the like:

\bigsqcup_{\alpha, \beta} E|_{U_\alpha \cap U_\beta} \; \rightrightarrows \bigsqcup_\alpha E|_{U_\alpha} \to E

view this post on Zulip John Baez (May 11 2024 at 10:38):

I'm still having trouble linking this idea to [[monadic descent]] - I could easily do it by reading the nLab article, but that feels like cheating when I'm so close. What I can do, however, is satisfy @Tim Hosgood by showing how to get the Cech nerve of the cover \{U_\alpha\} via something like the bar construction.

view this post on Zulip John Baez (May 11 2024 at 10:42):

For any algebra A of any monad T, the [[bar construction]] gives an augmented simplicial object

\cdots \xrightarrow[\to]{\to} TTA \rightrightarrows TA \to A

view this post on Zulip John Baez (May 11 2024 at 10:44):

where all the arrows come from the monad multiplication TT \Rightarrow T and the algebra structure TA \to A.

view this post on Zulip John Baez (May 11 2024 at 10:59):

We're seeing that in \mathsf{Top}, any open cover U \to X gives a monad U \times_X - on \mathsf{Top}/X. I believe U \to X is itself an algebra of this monad, and the bar construction then gives this augmented simplicial object in \mathsf{Top}/X:

\cdots \xrightarrow[\to]{\to} U \times_X U \times_X U \rightrightarrows U \times_X U \to U

view this post on Zulip John Baez (May 11 2024 at 11:03):

But if we write the open cover in the more traditional style as an indexed collection of open sets, so U = \bigsqcup_\alpha U_\alpha, this is the same as

\cdots \xrightarrow[\to]{\to} \bigsqcup_{\alpha, \beta, \gamma} U_\alpha \cap U_\beta \cap U_\gamma \rightrightarrows \bigsqcup_{\alpha, \beta} U_\alpha \cap U_\beta \to \bigsqcup_\alpha U_\alpha
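One can check this identification level by level in a finite model. The Python sketch below (an editor's construction, with a made-up two-chart cover of a three-point space) compares the size of each iterated fiber product with the matching disjoint union of intersections:

```python
from itertools import product

charts = {'a': {1, 2}, 'b': {2, 3}}   # a two-chart cover of X = {1, 2, 3}
U = [(name, x) for name, chart in charts.items() for x in chart]

def nerve_level(n):
    """the n-fold fiber product U x_X ... x_X U: tuples of chart points
    all lying over the same point of X"""
    return [ws for ws in product(U, repeat=n) if len({x for _, x in ws}) == 1]

def intersections(n):
    """the disjoint union, over (alpha_1, ..., alpha_n), of the n-fold
    intersections U_{alpha_1} ∩ ... ∩ U_{alpha_n}"""
    return [(names, x) for names in product(charts, repeat=n)
            for x in set.intersection(*(charts[a] for a in names))]

# the two descriptions of each Cech nerve level have the same points
for n in (1, 2, 3):
    assert len(nerve_level(n)) == len(intersections(n))
```

A point of the n-fold fiber product is a point of X together with n charts containing it, which is exactly a point of the corresponding n-fold intersection in the indexed description.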

view this post on Zulip John Baez (May 11 2024 at 11:04):

And this is the Cech nerve of the open cover!

view this post on Zulip John Baez (May 11 2024 at 11:05):

I think I see a little glitch in what I just wrote, but I have to quit now, so I leave it as an exercise to find it and fix it.

view this post on Zulip John Baez (May 11 2024 at 16:43):

I think I'll quit here, leaving the calculation as an "exercise for the reader".

view this post on Zulip John Onstead (May 12 2024 at 09:17):

Hi! I've finally had time to go over everything that was discussed above and read and re-read it (and re-re-read it, and so on). I think I might be seeing bits and pieces come together, but to me it almost feels like what was discussed above is like a tiny toe dip into a massive sea of related concepts. As such my resulting questions form a proper class (not a mere set), but admittedly most of them are probably trivial confusions that will induce sighs. I almost didn't want to ask since Baez and all above have put so much effort into providing a clear, concise, and thorough explanation of the topics that I feel really bad for still not understanding some of it. Nonetheless I guess I'll start with some clarifications on effective descent.

First, I'm still not sure exactly what this effective descent is doing. A previous post by Baez (I believe on the past local to global thread) indicated that it was in some way inverting the process of "lifting" a morphism into X along a morphism f: X -> Y by composition to get the composite morphism into Y. This justifies the involvement of the composition functor F: C/X -> C/Y which does this "lifting". But if you truly wanted to invert this process, why not just take the inverse image construction of this functor at a particular value g in C/Y to get all the morphisms into X that compose with f to give g? Of course, maybe not every morphism into Y can be factored along f (i.e., the composition functor will not always be surjective in some way), but wouldn't this still be a more direct way of inverting composition? If so, then what is the "true meaning" behind a certain morphism into X being the "descent" of one into Y?

Second, I'm confused between John Baez's post at the top of this page compared to the aforementioned one on the past thread. There, he said, "If you have a map f: E -> X and a map p: E -> B, can you find a map g: B -> X such that f = g compose p?" But at the top of this page, he says, "summarizing, given a morphism f: x -> y, every bundle over y gives a bundle over x that's an algebra of the monad Tf. When this happens, it's very nice because it provides an answer to the question of what extra structure must a bundle over x have for it to have come from pulling back some bundle over y along f". If we label a bundle over x as b-x and the one over y as b-y, this is asking the question: given b-y: b -> y and f: x -> y, can you find a morphism b-x: b -> x such that b-y = f compose b-x? But since p and f play analogous roles here, the order of composition clearly has switched between the two threads. So I'm a little confused as to which morphism in this commutative triangle descent is even asking us to find. I've gone on assuming it's the latter option since it makes more sense with the slice category monad, but I just wanted to clarify!

I also want to make sure I understand effective descent "philosophically". Assume a morphism f: X -> Y with effective descent. Since Tf is just an endofunctor (with extra properties) on C/X, you can (very theoretically/hypothetically) define it independently of C/Y (perhaps just by a lucky guess at choosing a random endofunctor on C/X). But somehow, from this knowledge that seems to be very confined to X and C/X, we get information about all the morphisms into Y, and as we know from Yoneda, this entirely determines Y up to isomorphism. So, in a sense it seems X, as long as there's an effective descent morphism from it to Y, "knows" a lot about Y. Am I understanding this right? It's a little odd, no?

Of course, any help in clearing up my silly confusions is very much appreciated!

view this post on Zulip John Baez (May 12 2024 at 11:26):

Hi! I'm glad you're asking questions, but you're right: what I wrote in this conversation on "effective descent" is just a tiny droplet from a massive sea of related concepts. People have been developing these concepts roughly since Cartan, Grothendieck and others reformulated algebraic geometry using sheaves, though their roots go back much further to Galois theory and also the study of covering spaces and bundles in algebraic topology. So I'd be amazed if you could follow what I wrote without having studied that stuff a certain amount.

view this post on Zulip John Baez (May 12 2024 at 16:47):

A previous post by Baez (I believe on the past local to global thread) indicated that it was in some way inverting the process of "lifting" a morphism into X along a morphism f: X -> Y by composition to get the composite morphism into Y.

Yes, I brought this up as an easy-to-understand example of the general idea of descent. Unfortunately people usually skip this example and go straight into harder examples where the systematic process you're trying to reverse is not lifting a function as above but "pulling back a bundle" or "pulling back a sheaf" or something even more fancy. I mentioned that in these harder examples, things become more elaborate, but follow a similar general pattern. I've never seen anyone clearly explain this general pattern, and I'm afraid I'm not doing it either!

I'm confused between John Baez's post at the top of this page compared to the aforementioned one on the past thread.

Indeed! Please don't think of my posts in this thread here as an attempt to clarify what I was talking about in that past thread. That would indeed be confusing. This thread here is less about lifting functions than pulling back bundles.

In this thread here I was trying to answer Tim Hosgood's question of how a certain popular formalism called monadic descent is related to a concept he seemed to understand in another way, namely an effective descent morphism. The nLab articles on these topics are quite good (and related), but I wanted to struggle a bit and discuss them myself, to learn the material better.

I didn't get nearly as far as I should have, but I wound up sketching how a certain way of describing bundles on a space XX using an open cover of XX can be understood in terms of a monad. This is a classic example of 'monadic descent'. An open cover gives a morphism UXU \to X, and this is a classic example of an 'effective descent morphism'.

So, at least in theory, what I wrote might help @Tim Hosgood understand the connection between monadic descent and effective descent morphisms. But the nLab articles probably do a better job!

view this post on Zulip John Baez (May 12 2024 at 17:11):

@Chris Grossack (they/them) and I once had a dream of understanding [[Galois descent]] and [[monadic descent]] a lot better. I think I'm much closer now, but I'll probably remain a bit frustrated until I write a series of blog articles explaining this stuff from the ground up. Unfortunately it's sort of a huge subject.

view this post on Zulip John Baez (May 12 2024 at 17:12):

Anyway, @John Onstead, if you ever want to return to the easier subject of "descent as an attempt to reverse the systematic process of lifting functions", I could try that - but this thread here is probably not the best place!

view this post on Zulip John Onstead (May 12 2024 at 18:28):

John Baez said:

So I'd be amazed if you could follow what I wrote without having studied that stuff a certain amount.

I might have mentioned it previously but the last math class I took was high school calculus (it was advanced calculus, but still this was a long time ago). I wanted to jump right into category theory because I was told that, by understanding category theory, you could more quickly learn all the other branches of math more easily since category theory provides a common framework to understand all of them together. However, I haven't found this easy for two reasons: it seems you do need some background in the other branches of math to understand, at the very least, the motivations for the category theory definitions, and the language of other branches of math are usually written assuming a background of set theory, requiring you to have to mentally convert all the concepts from this implicit set theory POV to the category theory POV, which can take work to do.

view this post on Zulip John Onstead (May 12 2024 at 18:29):

John Baez said:

I mentioned that in these harder examples, things become more elaborate, but follow a similar general pattern. I've never seen anyone clearly explain this general pattern, and I'm afraid I'm not doing it either!

I've seen an example of this more elaborate pattern on the nLab article "monadic descent". The general pattern appears to be this. Say you have a pseudofunctor F: C^op -> Cat; this defines data, a category in fact, "over" each object of C, and for every morphism f: A -> B in C, you have a functor in Cat between the respective categories. If the pseudofunctor gives rise to a bifibration under the Grothendieck construction, it means that each such functor has an adjoint. Monadic descent then applies to every case where this adjunction is monadic. In a sense, this means you can recover data "over" B from the data "over" A by finding out which data over A is "descended from" that over B. Since, for C a category with pullbacks, the slice category functor C/- is a bifibration, this means the above example of effective descent is a special case of monadic descent!
I find this approach to defining descent data in terms of monads very elegant because I already understand monads so now I understand at least the examples of descent given in terms of monads. But it does make me wonder about the relation between descent in general and monadic descent. Monadic descent is one way to approach descent, but can any descent data be given by some monadic descent approach? That is, can all descent problems be solved using monadic descent such that if you understand monadic descent, you understand all of descent?

view this post on Zulip John Baez (May 13 2024 at 07:18):

I wanted to jump right into category theory because I was told that, by understanding category theory, you could more quickly learn all the other branches of math more easily since category theory provides a common framework to understand all of them together. However, I haven't found this easy for two reasons: it seems you do need some background in the other branches of math to understand, at the very least, the motivations for the category theory definitions, and the language of other branches of math are usually written assuming a background of set theory, requiring you to have to mentally convert all the concepts from this implicit set theory POV to the category theory POV, which can take work to do.

I think category theory really does speed up the process of learning math. But to learn lots of math inevitably takes lots of work. I've been studying it for a couple hours a day for about 45 years and I still feel embarrassingly ignorant, with gaping holes in my knowledge all over the place.

However, I only started learning category theory after I knew lots of other stuff, so I've rarely faced the particular challenge of understanding a piece of category theory before I knew how it was used: I usually start by trying to understand something based on my intuitions about set theory, topology, algebra etc. and only later bring in category theory to make things nice.

For example, in that blob of text above where I took monadic descent and looked at what it amounts to in the special case of bundles over topological spaces, when I got to the point of writing equations like

g_{\alpha \beta} g_{\beta \gamma} = g_{\alpha \gamma}

I was like "yay, the good old stuff I learned in school is showing up automatically now!"

view this post on Zulip John Baez (May 13 2024 at 07:23):

There's something very satisfying about this. But it probably slows down my process of learning category theory, because if someone tells me an abstract categorical fact like "any bifibration gives a monad!" I'm likely to say "great, but what does that have to do with me?" Of course I'm smart enough not to say it out loud. But only when I see a couple of examples will it excite me.

view this post on Zulip John Baez (May 13 2024 at 07:37):

For years I didn't really get the point of "descent" - that is, why people were so interested in it. Then I started reading about how Noether, Brauer and Hasse classified finite-dimensional associative [[division algebras]] over \mathbb{Q}. For years I'd been fond of finite-dimensional associative division algebras over \mathbb{R}: there are just 3, the reals, the complex numbers, and the quaternions. But the classification of such division algebras over \mathbb{Q} is vastly more elaborate. It turns out that to tackle this they needed to invent "descent theory" - though they didn't call it that or even think of it that way. What they actually invented is often called "Galois cohomology". But to understand it conceptually you really need to understand some things about descent, and I found that to be a fascinating journey.

view this post on Zulip John Baez (May 13 2024 at 07:39):

It links together monads, the cohomology of groups, homotopy fixed points, and then a lot of specific stuff about algebra that shows up in this particular application of these concepts.

view this post on Zulip John Baez (May 13 2024 at 07:39):

Believe it or not, I'm slowly leading back to your question:

Monadic descent is one way to approach descent, but can any descent data be given by some monadic descent approach? That is, can all descent problems be solved using monadic descent such that if you understand monadic descent, you understand all of descent?

view this post on Zulip John Baez (May 13 2024 at 07:42):

Of course you don't understand all of descent by understanding one outlook on it: you also need to understand how that outlook connects to all the other outlooks! And in a way that's the fun part. But for the more technical question of whether all descent problems can be phrased in terms of monadic descent... I don't know, but it seems like a lot of them can. In particular, I believe everything I've learned about the applications of Galois cohomology to descent can be put into that framework.

view this post on Zulip John Baez (May 13 2024 at 08:33):

Ultra-tersely, the monad that shows up in monadic descent has a [[bar construction]], and if this monad is the monad for G-sets, this bar construction can be used to define homotopy fixed points of G actions, and also the cohomology of the group G.
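To unpack that tersely stated chain a little (this is my hedged sketch of standard material, not a claim about what was meant above): for the monad T = G × (−) on Set, whose algebras are G-sets, the bar construction applied to the terminal G-set yields the familiar simplicial set EG, and mapping into a G-module recovers group cohomology:

```latex
% Hedged sketch (standard material): the bar construction for the
% monad T = G \times (-) on \mathsf{Set}, applied to the terminal
% G-set, yields the simplicial set EG:
\[
  \cdots \;\longrightarrow\hspace{-1.4em}\longrightarrow\hspace{-1.4em}\longrightarrow\;
  G \times G \;\rightrightarrows\; G \;\longrightarrow\; \ast ,
  \qquad (EG)_n = G^{\,n+1}.
\]
% Mapping into a G-module M gives cochains
% C^n(G, M) = \mathrm{Map}(G^n, M), and the usual alternating-sum
% differential computes the group cohomology H^n(G, M) --
% the H^1 and H^2 that Galois cohomology uses for descent problems.
```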

view this post on Zulip John Baez (May 13 2024 at 08:35):

When G is a [[Galois group]] all these ideas play together in a way that explains what Noether, Brauer and Hasse were doing.

view this post on Zulip John Baez (May 13 2024 at 08:40):

However, this barrage of jargon conceals the fact that the ideas are simple and beautiful. You're making me want to explain them better.

view this post on Zulip John Onstead (May 13 2024 at 09:14):

John Baez said:

For years I didn't really get the point of "descent" - that is, why people were so interested in it. Then I started reading about how Noether, Brauer and Hasse classified finite-dimensional associative [[division algebras]] over Q.

I'm in a similar place now, I suppose! It seems descent can give information about local-to-global questions (the original context in which this topic was raised). For instance, with bundles the descent data gives transition functions. But why descent is able to do this in the more general case is still a little unclear to me; I'm hoping to learn more about this over time!

John Baez said:

Of course you don't understand all of descent by understanding one outlook on it: you also need to understand how that outlook connects to all the other outlooks! And in a way that's the fun part.

It sounds like a Yoneda-esque approach to learning things!

view this post on Zulip John Onstead (May 13 2024 at 09:22):

John Baez said:

Ultra-tersely, the monad that shows up in monadic descent has a [[bar construction]]

If I recall from above, the bar construction for monadic descent is related to a Cech nerve? If so, is the cohomology it helps define for the group G a part of a Cech cohomology in any way?
Also, on this same note, I did look more into this relationship, and the nLab diagrams given under the "Idea" sections of "bar construction" and "Cech nerve" look far too similar to be a coincidence (though maybe it's just math pareidolia?). The nLab states that the bar construction of a monad is a simplicial object in the monad's EM category. So just to clarify: in the case with bundles and the monadic adjunction C/X -> C/Y for a morphism X -> Y, would the Cech nerve be a simplicial object in C/Y?

view this post on Zulip John Baez (May 13 2024 at 10:31):

John Onstead said:

John Baez said:

Ultra-tersely, the monad that shows up in monadic descent has a [[bar construction]]

If I recall from above, the bar construction for monadic descent is related to a Cech nerve? If so, is the cohomology it helps define for the group G a part of a Cech cohomology in any way?

I think the better approach is to see the Cech nerve as a special case of the bar construction, and both Cech cohomology and group cohomology as special cases of the same thing: [[monadic cohomology]], which is a way to get cohomology from the bar construction.

Again, this is something that deserves an actual explanation - pointing you to the nLab is not a substitute for an actual explanation!

view this post on Zulip John Baez (May 13 2024 at 10:34):

I'm just saying that "stuff you can do with a monad" reigns supreme here, and all the fancy-sounding things like Cech nerves, Cech cohomology and group cohomology are just special cases of this.

view this post on Zulip John Baez (May 13 2024 at 13:37):

So just to clarify, in the case with bundles and the monadic adjunction C/X -> C/Y for a morphism X -> Y, would the Cech nerve be a simplicial object in C/Y?

Yes!

view this post on Zulip John Onstead (May 13 2024 at 19:31):

John Baez said:

Again, this is something that deserves an actual explanation - pointing you to the nLab is not a substitute for an actual explanation!

Indeed, it seems that with every question answered, a new concept pops up that induces a whole bunch more questions! But I'll let it rest for a little bit as I sort through my thoughts. I may post to a new thread since my questions may no longer be directly related to effective descent, so look out for that! And thanks for all the help so far!

view this post on Zulip John Baez (May 13 2024 at 19:36):

Do you see how to get the Cech nerve by applying the bar construction to the monad I described in this series of posts? That would be a good exercise.

view this post on Zulip John Onstead (May 13 2024 at 22:22):

John Baez said:

Do you see how to get the Cech nerve by applying the bar construction to the monad I described in this series of posts? That would be a good exercise.

In a way. If you substitute the T in the diagram on the nLab "bar construction" page with
John Baez said:

So we could call this monad U \times_X -.

Then you get something that looks syntactically just like the diagram on the nLab page for "Cech nerve". I think that's the full story, anyways.
Edit: it's not quite the same, actually. In the bar construction diagram you get TA -> A at the end, but in the Cech nerve diagram you get U \times_X U \rightrightarrows U, so there's one more arrow for some reason. Maybe I gotta give this one a little more thought!

view this post on Zulip John Onstead (May 14 2024 at 01:01):

Ok, I gave it some more thought, and while I haven't completely figured it out, I know something strange is going on. The bar construction diagram given on the nLab is supposedly "taking place" in the EM category of T, because it's supposed to be a simplicial object in the EM category of T; but it involves A, TA, and so on, which are not objects of the EM category but rather of the monad's base category. In fact, the last part of the bar construction diagram, TA -> A, is actually an algebra for T, and so should be a single object in the EM category, but here it is drawn out, which would automatically lead one to believe the diagram is "taking place" in the monad's base category. Certainly, at least to someone easily confusable like me, the diagram on the nLab is pretty misleading!
But I'm thinking that if I can get past this confusion maybe the Cech nerve thing would make more sense?

view this post on Zulip John Baez (May 14 2024 at 09:04):

John Onstead said:

John Baez said:

Do you see how to get the Cech nerve by applying the bar construction to the monad I described in this series of posts? That would be a good exercise.

In a way. If you substitute T in the diagram under "bar construction" nlab page with U \times_X - then you get something that looks syntactically just like the diagram under the nlab page for "Cech nerve". I think that's the full story, anyways.

Edit: it's not quite the same actually. In the bar construction diagram you get TA -> A at the end but in the Cech nerve diagram you get U \times_X U \rightrightarrows U, so there's one more arrow for some reason. Maybe I gotta give this one a little more thought!

You seem to be running into all the confusions I run into when I think about this business after a long break. That's good: they probably make the difference between almost understanding this stuff and fully understanding it.

Are you familiar with the difference between a [[simplicial object]] and an [[augmented simplicial object]]? I think that's what you're encountering here.

A simplicial object has an object of vertices, an object of edges, an object of triangles, an object of tetrahedra etc., and various maps between them. In particular there are two maps from the object of edges to the object of vertices, since an edge has two endpoints. So you seem to be describing the Cech nerve as a simplicial object, with two maps from U \times_X U to U.

An augmented simplicial object goes a bit further. It also has an object of (-1)-simplices! There is just one map from the object of vertices to the object of (-1)-simplices. So you seem to be describing the bar construction as an augmented simplicial object. And that sounds right to me.

There is a way to turn an augmented simplicial object into a simplicial object, simply by discarding the object of (-1)-simplices. But there's also a very interesting story about what benefits arise from working with an augmented simplicial object! Are you familiar with that story?

view this post on Zulip John Baez (May 14 2024 at 09:14):

John Onstead said:

Ok I gave it some more thought and while I haven't completely figured it out, I know something strange is going on. The bar construction diagram given on nlab is supposedly "taking place" in the EM category of T because it's supposed to be a simplicial object in the EM category of T, but it involves A, TA, and so on, which are not objects of the EM category but rather the monad's base category.

Why do you say that? A is an algebra of some monad T on some category E, which can be seen as an object of the Eilenberg-Moore category E^T. Of course any algebra has an underlying object in the monad's base category, and I guess that's what you're seeing when you see the letter A. But we don't usually use a special notation to distinguish between the two! So to tell which category we're in, we have to look at the morphisms, and see if they are algebra morphisms or merely morphisms of their underlying objects.

So we have four choices to keep in mind: are we thinking about a simplicial object or an augmented simplicial object, and are we thinking of it in E or in E^T?

view this post on Zulip John Onstead (May 14 2024 at 10:00):

John Baez said:

A simplicial object has an object of vertices, an object of edges, an object of triangles, an object of tetrahedra etc., and various maps between them. In particular there are two maps from the object of edges to the object of vertices, since an edge has two endpoints. So you seem to be describing the Cech nerve as a simplicial object, with two maps from U \times_X U to U.

An augmented simplicial object goes a bit further. It also has an object of (-1)-simplices! There is just one map from the object of vertices to the object of (-1)-simplices. So you seem to be describing the bar construction as an augmented simplicial object. And that sounds right to me.

I think that clears it up! I figured this confusion was due to inconsistencies between nlab pages (different authors, different times, etc.) The nlab page for bar construction does have (augmented) in parenthesis but I'm only noticing this now, I completely missed it earlier! Sometimes things have to be spelled out very explicitly to me before I'm able to see it, otherwise I fly right by it.
I'm not sure what the benefits of working with augmented simplicial objects over typical simplicial objects are, other than that they can encode cocones (and so colimits) of typical simplicial objects.

view this post on Zulip John Baez (May 14 2024 at 10:20):

I figured this confusion was due to inconsistencies between nlab pages (different authors, different times, etc.).

If there's an inconsistency, or even something unclear, I should fix it.

view this post on Zulip John Baez (May 14 2024 at 10:21):

The nlab page for bar construction does have (augmented) in parenthesis but I'm only noticing this now, I completely missed it earlier!

At the very least I should remove those parentheses. The difference is important.

view this post on Zulip John Baez (May 14 2024 at 10:29):

I'm not sure what the benefits of working with augmented simplicial objects over typical simplicial objects are.

Let's do simplicial sets just for concreteness, and see what an 'augmented' simplicial set amounts to.

In an augmented simplicial set:

1) we have a simplicial set but also a set of (-1)-simplices
2) each vertex (0-simplex) is mapped to some (-1)-simplex
3) two vertices with an edge between them are mapped to the same (-1)-simplex

Part 3) follows from the (augmented) simplicial identities, as explained in the nLab page [[augmented simplicial set]].
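Conditions 1)-3) are easy to check mechanically on a finite example. The sketch below (my own toy data, not from the thread) builds a small 1-truncated simplicial set together with an augmentation, and verifies that the two face maps out of the edges agree after composing with the augmentation:

```python
# Toy augmented simplicial set, truncated at dimension 1.
# Vertices, edges with two face maps d0, d1 (the two endpoints),
# and an augmentation aug: vertices -> (-1)-simplices.
# All data here is made up for illustration.

vertices = ["v0", "v1", "v2", "v3"]
# Each edge is recorded as (d0, d1): its two endpoint vertices.
edges = {"e0": ("v0", "v1"), "e1": ("v1", "v2")}   # v3 is isolated

# (-1)-simplices: note several components may share one.
minus_one = ["s", "t"]
aug = {"v0": "s", "v1": "s", "v2": "s", "v3": "t"}

# Condition 2): every vertex is assigned some (-1)-simplex.
assert all(v in aug for v in vertices)

# Condition 3): the two endpoints of each edge go to the same
# (-1)-simplex, i.e. aug . d0 == aug . d1.
assert all(aug[d0] == aug[d1] for (d0, d1) in edges.values())

print("augmented simplicial identities hold in degrees <= 1")
```

Note that the toy augmentation is not forced to separate components: v0, v1, v2 form one component while v3 is another, and nothing stops both from landing on a single (-1)-simplex, which matches the discussion below.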

view this post on Zulip John Baez (May 14 2024 at 10:36):

In fact, according to that nLab page, 1)-3) are equivalent to the full definition of an augmented simplicial set.

view this post on Zulip John Baez (May 14 2024 at 10:37):

If two vertices are connected by an edge we say they're in the same connected component of a simplicial set.

view this post on Zulip John Baez (May 14 2024 at 10:43):

So, 1)-3) say an augmented simplicial set is a simplicial set X together with a set X_{-1} whose elements are called (-1)-simplices, where each vertex is mapped to some (-1)-simplex and all the vertices in a connected component are mapped to the same (-1)-simplex.

view this post on Zulip John Baez (May 14 2024 at 10:46):

This means any simplicial set becomes an augmented simplicial set in a canonical way, where the (-1)-simplices are the connected components!

But we could also have an augmented simplicial set where the vertices in several different connected components get mapped to the same (-1)-simplex.

So, up to isomorphism, we can always say a (-1)-simplex is a disjoint union of connected components.

view this post on Zulip John Onstead (May 14 2024 at 10:52):

John Baez said:

2) each vertex (0-simplex) is mapped to some (-1)-simplex

Doesn't this just mean there exists a function from the set of 0-simplices to the set of (-1)-simplices?

John Baez said:

But we could also have an augmented simplicial set where the vertices in several different connected components get mapped to the same (-1)-simplex.

So, up to isomorphism, we can always say a (-1)-simplex is a disjoint union of connected components.

Ah, this makes sense! This also clarifies that this isn't some sort of construction where we "quotient out an equivalence relation" defined by whether two 0-simplices are connected by an edge.

view this post on Zulip John Baez (May 14 2024 at 11:05):

John Onstead said:

John Baez said:

2) each vertex (0-simplex) is mapped to some (-1)-simplex

Doesn't this just mean there exists a function from the set of 0 simplices to the set of (-1)-simplices?

Yes, it's a more informal way of saying exactly the same thing.

view this post on Zulip John Onstead (May 14 2024 at 13:19):

John Baez said:

But we don't usually use a special notation to distinguish between the two! So to tell which category we're in, we have to look at the morphisms, and see if they are algebra morphisms or merely morphisms of their underlying objects.

I see! This multi-use notation always ends up throwing me off in some way. Indeed, it seems the nLab is using A to refer to the algebra, and thus to an object of EM(T). But with this problem solved, another pops up. When you go down to the "Definition" section, it explains how to make the bar construction. Given a monad T on a category C, you can find a comonad T' on the EM category, assuming this situation is monadic. This is because, if T = U ∘ F, then T' = F ∘ U, with F and U being the adjoint pair that defines this monadic adjunction. The nLab page then states that the opposite augmented simplex category is the "walking comonoid", and so there exists a functor Del_a^op -> [EM(T), EM(T)] that selects T' for [1], T'T' for [2], and so on. There is then an evaluation functor [EM(T), EM(T)] -> EM(T) that sends an endofunctor to its value at some fixed object, which would be an object of EM(T) and thus an algebra.

Fixing an object/algebra A of EM(T) for this evaluation functor, composing the two functors then gives a diagram whose objects are chains of T' evaluated on the algebra A. This indeed gives exactly the diagram at the top of the nLab page (I think; at least on objects this seems to be the case), but with a catch: the T's in the diagram should be T' instead! In other words, the page seems to be confusing T and T', which itself is confusing me. You can change a few things around to get the diagram with T's: for instance, taking the non-opposite augmented simplex category, you can find a functor Del_a -> [C, C] and then an evaluation functor at some object B of C: [C, C] -> C; composing these would then give the desired diagram (with B instead of the algebra A), but its destination is now C instead of EM(T)!

view this post on Zulip John Baez (May 14 2024 at 13:34):

Yet again, your puzzlement is getting into the heart of what we need to understand to truly understand the bar construction!

This is also connected to the question I didn't have time to answer yet: why is it so much better to think of the bar construction of a monad T: E \to E as giving an augmented simplicial object in the Eilenberg-Moore category E^T, rather than a mere simplicial object? I got around to saying what an augmented simplicial object has that a simplicial object doesn't, but not what we use this stuff for!

view this post on Zulip John Baez (May 14 2024 at 13:38):

Briefly, the bar construction gives an augmented simplicial object in E^T, as you just described. The forgetful functor E^T \to E maps that to a simplicial object in E. But we get more: the simplicial object in E comes with an 'acyclic structure', as defined in Def. 3.1 here!

view this post on Zulip John Baez (May 14 2024 at 13:40):

I want to explain all this much better, but I'm busy right now....

view this post on Zulip John Baez (May 14 2024 at 13:41):

I also want to understand these comments of yours:

Fixing an object/algebra A of EM(T) for this evaluation functor, composing the two functors then gives a diagram where the objects are some chain of T' evaluated on some algebra A. This indeed gives exactly the diagram given at the top of the nlab page (I think, at least on the objects this seems to be the case) but with a catch: the T's in the diagram should be T' instead! In other words, the page seems to be confusing T and T', which itself is confusing me.

view this post on Zulip John Baez (May 14 2024 at 15:26):

This sentence also confused me: "The T's in the diagram should be T' instead!" But I see, you mean each instance of T should be T'. But I think you are again being a bit too attached to type-checking instead of guessing what the short-hand means.

Let's put it this way. We have a right adjoint

R: E^T \to E

from the Eilenberg-Moore category (you're calling it EM(T)) of the monad T to the category E the monad is on. This has a left adjoint

L : E \to E^T

We start with an algebra, i.e. an object A \in E^T. From this we can form an augmented simplicial object

\dots \xrightarrow[\to]{\to} LRLRA \rightrightarrows LRA \to A

view this post on Zulip John Baez (May 14 2024 at 15:28):

where the arrows I drew come from the algebra structure LRA \to A and the counit of the adjunction, a natural transformation LR \Rightarrow 1_{E^T}

view this post on Zulip John Baez (May 14 2024 at 15:30):

This is indeed an augmented simplicial object in E^T, since all the arrows (and also the arrows I didn't draw) are morphisms in E^T.

view this post on Zulip John Baez (May 14 2024 at 15:32):

When we apply R to everything we get an augmented simplicial object in E:

\dots \xrightarrow[\to]{\to} RLRLRA \rightrightarrows RLRA \to RA

view this post on Zulip John Baez (May 14 2024 at 15:35):

but now we can put additional arrows in the diagram! For example, not only do we have the morphism RLRA \to RA coming from the algebra structure LRA \to A, we also have a second morphism RLRA \to RA coming from the counit LR \Rightarrow 1_{E^T}!

view this post on Zulip John Baez (May 14 2024 at 15:35):

These additional arrows (and the equations they obey) mean that we have an acyclic augmented simplicial object in E.

view this post on Zulip John Baez (May 14 2024 at 15:43):

This is where all my remarks on the meaning of augmented simplicial objects become important!

In the case where E = \mathsf{Set}, we see that we have an augmented simplicial set

\dots \xrightarrow[\to]{\to} RLRLRA \rightrightarrows RLRA \to RA

with one connected component for each point of RA.

Furthermore, each connected component has a distinguished vertex. These come from the map RA \to RLRA that arises from the unit 1_E \Rightarrow RL of our adjunction.
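Here is a concrete finite check of these maps, using the free-monoid (list) monad on Set with the algebra of integers under addition — my own toy instance, not anything from the thread. The unit η (singleton lists) supplies the backwards maps, and composing with the face maps gives the identity, which is the heart of the contracting homotopy:

```python
# Toy instance of the bar construction's extra degeneracies, for the
# list monad T on Set with algebra A = integers under addition.
# Levels (after applying the forgetful functor): A, TA, TTA, ...
# Face map to the augmentation: the algebra structure sum: TA -> A.
# Extra degeneracy: the unit eta(x) = [x].  All naming is illustrative.

def eta(x):
    """Unit of the list monad: x |-> [x]."""
    return [x]

def mu(xss):
    """Multiplication of the list monad: concatenation."""
    return [x for xs in xss for x in xs]

def act(xs):
    """Algebra structure for (Z, +): sum a list."""
    return sum(xs)

a = 5                # a point of A
xs = [1, 2, 3]       # a point of TA
xss = [[1, 2], [3]]  # a point of TTA

# The extra degeneracy splits the bottom face map: act . eta = id_A.
assert act(eta(a)) == a

# One level up: mu . eta_{TA} = id_{TA}, where eta_{TA}(xs) = [xs].
assert mu(eta(xs)) == xs

# Sanity check on the two faces TTA -> TA, d0 = mu and
# d1 = T(act) = map(act, -): they agree after act
# (associativity of the algebra): act . mu = act . map(act).
assert act(mu(xss)) == act([act(ys) for ys in xss])

print("contracting-homotopy identities hold on sample data")
```

The splittings act ∘ eta = id and mu ∘ eta = id are exactly the "distinguished vertex in each component" structure described above, level by level.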

view this post on Zulip John Baez (May 14 2024 at 15:45):

Finally - and this takes the nLab a while to show - each of the connected components is contractible in the usual sense of topology, now generalized to simplicial sets. Among other things, this means every vertex in our (augmented) simplicial set comes equipped with a specified edge from it to the distinguished vertex in its component.

view this post on Zulip John Baez (May 14 2024 at 15:45):

This contractibility, or 'acyclic structure', is the real point of the bar construction.

view this post on Zulip John Baez (May 14 2024 at 15:47):

But anyway, I hope you see that we get augmented simplicial objects, first in E^T, and then in E.

view this post on Zulip John Onstead (May 14 2024 at 19:41):

Thanks for your help, I think it clears it up...

John Baez said:

We start with an algebra, i.e. an object A \in E^T. From this we can form an augmented simplicial object

\dots \xrightarrow[\to]{\to} LRLRA \rightrightarrows LRA \to A

Yes, this was indeed what I had in mind. I just put T' = LR and compared this with the nlab diagram, which suggested T = RL instead, to find the inconsistency.

John Baez said:

When we apply R to everything we get an augmented simplicial object in E:

\dots \xrightarrow[\to]{\to} RLRLRA \rightrightarrows RLRA \to RA

This appears to be what the nLab calls the "bar resolution" for a bar construction. If you "evaluate" this simplicial object in E, you get

\dots \xrightarrow[\to]{\to} TTA \rightrightarrows TA \to A

where I've replaced A as an algebra with A the underlying object in E of that algebra (so RA, with respect to the algebra A). This basically gets you the diagram on the nLab (minus the extra arrows you mention in the next paragraph). So if the diagram on the nLab is actually the bar resolution, not the bar construction, of the monad T, then it ought to say so explicitly, so that it doesn't lead more people like me into confusion! But other than that, I think that clears up my confusion: a monad T induces (at least) two simplicial objects, one in its home category and the other in EM(T), one being a "construction" and the other a "resolution", with both having interesting properties and playing interesting roles! (I'd be ok with calling both of these simplicial objects "bar constructions", especially since they both seem to be special cases of a "two-sided bar construction", but I still take issue with the nLab article because it chooses to make the distinction between construction and resolution!)
(PS: I saw you edited the nlab article to remove the parenthesis! Thanks!)

view this post on Zulip John Onstead (May 14 2024 at 19:45):

John Baez said:

This contractibility, or 'acyclic structure', is the real point of the bar construction.

Very interesting! So it seems the contractibility is what allows the bar construction to be good at "being a resolution" of objects. That's how both these concepts tie together.

view this post on Zulip John Baez (May 14 2024 at 20:25):

Exactly. Say we have a monad on \mathsf{Set}. The bar construction takes any algebra A of the monad and "puffs it up", replacing all equations by edges, all equations between equations by triangles, and so on. But it does so in a way that each element of the original algebra gets replaced by a contractible simplicial set. In fact it's a simplicial set with an "A-acyclic structure", which is like a choice of contraction of each component down to an element of your original algebra A.

And the great theorem is that the bar construction takes any algebra A of your monad and produces the initial A-acyclic simplicial algebra.

view this post on Zulip John Baez (May 14 2024 at 20:30):

All this works for monads on categories that aren't \mathsf{Set}, but we have to be a bit more abstract: their algebras may not have a set of 'elements', and their simplicial algebras may not have a set of 'vertices', etc.

view this post on Zulip John Baez (May 14 2024 at 20:32):

All the stuff I just mentioned is a way of explaining the point of a [[resolution]]: it's a way of taking a gadget and 'maximally puffing it up' while still making sure the result is equivalent in a certain sense to the original gadget.

view this post on Zulip John Onstead (May 15 2024 at 07:06):

John Baez said:

All the stuff I just mentioned is a way of explaining the point of a [[resolution]]: it's a way of taking a gadget and 'maximally puffing it up' while still making sure the result is equivalent in a certain sense to the original gadget.

Now that I feel at least somewhat clear on bar constructions, I'm interested in doing a "deep dive" into learning more about general resolutions, especially simplicial resolutions, since they seem to be behind a lot of the stuff we were talking about: Cech constructions, bar constructions, local-global principles, descent, etc. They also seem to be extremely interesting constructions in their own right. But I'm wondering if this discussion should take place in another thread, since this one is mainly about effective descent?

view this post on Zulip John Baez (May 15 2024 at 07:22):

Sure, it's good to start a new thread if you want to talk about resolutions.

I don't think we've run out of things to talk about here. I don't think we've talked much yet about the historical origins of descent theory, or what descent is used for, or what its deep inner meaning is. I also don't think we've talked about examples of 'non-effective' descent, to better understand that word 'effective' - this is something I would struggle with.

But the concept of 'resolution' is also very interesting.

view this post on Zulip John Onstead (May 15 2024 at 07:57):

John Baez said:

I don't think we've run out of things to talk about here. I don't think we've talked much yet about the historical origins of descent theory, or what descent is used for, or what its deep inner meaning is. I also don't think we've talked about examples of 'non-effective' descent, to better understand that word 'effective' - this is something I would struggle with.

We can eventually get there. But what I've found is that learning this field runs the risk of "concept overload", where so many related concepts are thrown out all at once (i.e., descent object vs. descent morphism vs. category of descent data vs. descent). In order to avoid drinking from a fire hose, maybe it's best to introduce each concept one by one, so I wanted to start with resolutions and slowly expand out from there, making sure each connection is well developed before heading to the next idea. I hope that's not too bad of an idea!

view this post on Zulip John Baez (May 15 2024 at 10:24):

I don't think that's bad at all. I find it useful to think historically, so I can't resist laying out a very simplified history of what we're talking about:

1) People had a lot of interesting problems to solve in algebra, geometry and topology.

2) Noether figured out how to solve lots of them using "homology and cohomology groups".

3) She discovered that homology and cohomology groups come from "chain and cochain complexes".

4) People discovered many ways to build chain and cochain complexes; Noether's student Mac Lane worked with Eilenberg on these and realized that these "ways" were actually functors.

5) People discovered an important way to build chain and cochain complexes used a trick called "resolutions". Eilenberg and Mac Lane discovered an important family of resolutions all called the "bar resolution".

6) Daniel Kan discovered the concept of "simplicial object" and realized that chain complexes are simplicial objects in \mathsf{Ab}, the category of abelian groups, while cochain complexes are simplicial objects in \mathsf{Ab}^{\rm op}.

7) People realized that the concept of "resolution" is very general: given an object in a category \mathsf{C}, it has resolutions that are simplicial objects in \mathsf{C}.

8) People realized that the "bar resolution" makes sense whenever \mathsf{C} is the Eilenberg-Moore category of some monad on some category \mathsf{E}. The process of building the bar resolution takes advantage of a beautiful fact: any comonad on any category \mathsf{C} gives a simplicial object in the endofunctor category of \mathsf{C}.

It may seem as if by stage 6) we've completely lost sight of stage 1)! But in fact, every later stage helped us get better and better at understanding the original problems in algebra, geometry and topology that people were thinking about in stage 1). And sociologically, this is a big part of what justified all the later stages.

view this post on Zulip John Onstead (May 15 2024 at 10:40):

I always enjoy learning about the historical context because it provides a good sense of the motivation for all the different kinds of definitions in math, even the most abstract ones! In a way that helps to ground everything. This also lays out a good map/path for what I want to explore, though I would likely be taking the reverse direction (my first question on resolutions might relate to the last few points here in fact!)

view this post on Zulip John Baez (May 15 2024 at 10:41):

The way category theory works, the final most polished concepts are often the simplest, though the most abstract.

view this post on Zulip John Baez (May 15 2024 at 10:42):

I can see you like starting at the abstract end.... so much so that I'm yearning to talk about the concrete beginnings of this particular subject, and the amazing way that the abstract concepts were able to solve concrete problems. But I can think about that myself.

view this post on Zulip Tim Hosgood (May 15 2024 at 14:04):

John Baez said:

I also don't think we've talked about examples of 'non-effective' descent, to better understand that word 'effective' - this is something I would struggle with.

same!

view this post on Zulip Tim Hosgood (May 15 2024 at 22:08):

I know that faithfully flat descent describes when you can obtain a "thing" on a scheme $X$ by giving that thing on a faithfully flat cover $Y\to X$ along with some gluing/descent data, but I'm not too sure what to say about changing "faithfully flat" for "effective"

view this post on Zulip Tim Hosgood (May 15 2024 at 22:09):

is it true that we just say "effective epi" instead of "faithfully flat"?

view this post on Zulip Tim Hosgood (May 15 2024 at 22:15):

Looking at FGA, effective gluing data is (roughly) gluing data that secretly comes from the coequaliser (but not necessarily actually the coequaliser, just something of the right shape). Here's what I mean by that.

If we have a fibred category $\mathcal{F}\to\mathcal{C}$, and morphisms $f,g\colon Y\to X$ in $\mathcal{C}$, then we define a gluing datum (with respect to the morphisms $f$ and $g$) on an object $\xi\in\mathcal{F}_X$ to be an isomorphism $f^*\xi\xrightarrow{\sim}g^*\xi$ in $\mathcal{F}_Y$.

If we now have some $c\colon X\to C$ in $\mathcal{C}$, then we say that a gluing datum on $\xi\in\mathcal{F}_X$ is effective (with respect to the morphism $c$) if $\xi$ is isomorphic (with its gluing datum, which means "in such a way that some square commutes") to $c^*(\eta)$ for some $\eta\in\mathcal{F}_C$.

The coequaliser now turns up in a lemma: if $C$ is the coequaliser of $Y\rightrightarrows X$, then the category of objects of $\mathcal{F}_X$ endowed with effective gluing data is equivalent to the category $\mathcal{F}_C$. (Without saying the word "effective", we only get a fully faithful functor from the latter to the former.)
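Here is that lemma in symbols, as a LaTeX sketch; the name $\mathrm{Glue}(\mathcal{F}_X)$ is my own shorthand, not notation from FGA or the thread:

```latex
% Given a fibred category F -> C, morphisms f, g : Y -> X, and
% c : X -> C exhibiting C as the coequaliser
%     Y \rightrightarrows X \xrightarrow{\;c\;} C,
% write Glue(F_X) for the category of pairs (\xi, \phi) with
% \xi \in F_X and \phi : f^*\xi \xrightarrow{\sim} g^*\xi a gluing datum.
% Since cf = cg, pullback along c lands in Glue(F_X):
c^* \colon \mathcal{F}_C \longrightarrow \mathrm{Glue}(\mathcal{F}_X),
\qquad
\eta \longmapsto \bigl(c^*\eta,\ f^*c^*\eta \cong (cf)^*\eta = (cg)^*\eta \cong g^*c^*\eta\bigr).
% This functor is always fully faithful; its essential image is
% exactly the effective gluing data, so it restricts to an equivalence
\mathcal{F}_C \;\simeq\; \mathrm{Glue}^{\mathrm{eff}}(\mathcal{F}_X).
```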

view this post on Zulip John Baez (May 16 2024 at 05:38):

It's taking me a while to absorb these definitions, and then your question, but it's happening. Here's what the nLab article [[effective descent]] says - I find it helpful to try to see whether what you're saying is secretly the same:

Let $C$ be a [[category]] with [[pullbacks]] and [[coequalizers]]. For any [[morphism]] $p\colon A\to B$, we have an [[internal category]] $ker(p)$ defined by $A\times_B A \rightrightarrows A$ (the [[kernel pair]] of $p$). The [[category of descent data]] for $p$ is the category $C^{ker(p)}$ (the "[[descent object]]") of internal diagrams on this internal category. Explicitly, an object of $C^{ker(p)}$ is a morphism $C\to A$ together with an action $A\times_B C \to C$ satisfying suitable axioms.

The evident internal functor $ker(p) \to B$ (viewing $B$ as a discrete internal category) induces a comparison functor $C^B \to C^{ker(p)}$. We say that $p$ is: a descent morphism if this comparison functor is full and faithful, and an effective descent morphism if it is an equivalence of categories.

It is a little unfortunate that the more important notion of effective descent has the longer name, but it seems unwise to try to change it (although the [[Elephant]] uses "pre-descent" and "descent").

view this post on Zulip John Baez (May 16 2024 at 05:44):

I want to see if an object in the category of "descent data" in the nLab article is the same as what you're calling a "gluing datum".

view this post on Zulip John Onstead (May 16 2024 at 08:45):

These are some interesting questions, though I don't have much to add specifically. But as a quick heads up, I opened a new thread in parallel here.

view this post on Zulip Reid Barton (May 21 2024 at 18:50):

I'm looking back at the nLab page [[monadic descent]], in particular section 2, Definition.
It starts by assuming a category $C$ and a bifibration on it are given. Then for a morphism $f : X \to Y$, it claims there is an associated adjoint triple $f_! \dashv f^* \dashv f_*$ between fibers of the bifibration. But shouldn't it just be an adjoint pair $f_! \dashv f^*$?
From @John Baez's explicit unwinding of the definitions in the case of the codomain fibration on Top, the functor $f_*$ doesn't appear to be involved at all in the story (and in fact it may or may not exist, even in this example, depending on the map $f$, I think).

view this post on Zulip Reid Barton (May 21 2024 at 18:51):

If $f_*$ does exist, then I guess there should be an alternative version of the story involving $f^* \dashv f_*$, but then we should call it "comonadic descent", maybe?
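For reference, here is a LaTeX sketch of when each of the three functors exists for the codomain fibration; the characterization via exponentiability is a standard fact I'm supplying, not something established in this thread:

```latex
% For the codomain fibration C^{\to} -> C over a category C with
% pullbacks, and a morphism f : X -> Y, the fibres are slice categories:
f_! \colon C/X \to C/Y
  \quad (\text{postcomposition with } f;\ \text{always exists}),
\\[4pt]
f^* \colon C/Y \to C/X
  \quad (\text{pullback along } f;\ \text{exists since } C \text{ has pullbacks}),
\\[4pt]
f_* \colon C/X \to C/Y
  \quad (\text{dependent product; exists iff } f \text{ is exponentiable,}
\\ \qquad\qquad \text{i.e. iff } f^* \text{ has a right adjoint}).
% In Set every map is exponentiable, so the triple always exists there;
% in Top only certain maps are (roughly, fibrewise locally compact ones),
% which matches Reid's observation that f_* may fail to exist.
```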

view this post on Zulip Kevin Carlson (May 21 2024 at 20:26):

Yeah, the triple should really only be associated to a trifibration. I'm not sure why they write that.

view this post on Zulip John Baez (May 22 2024 at 06:17):

Good point! Maybe we should fix it.