Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: learning: reading & references

Topic: reading through Baez's topos theory blog posts


view this post on Zulip David Egolf (Mar 26 2024 at 17:08):

I'm going to try an experiment, where I work through John Baez's topos theory blog posts while "thinking out loud" in this topic. (For discussion of this idea, see #meta: meta > "learning out loud"? ).

A few disclaimers: I'm not sure how far I'll get! Also, my discussion here is unlikely to be self-contained; if you want to follow along, you may need to reference the blog posts. And finally, I am almost certainly going to make a lot of mistakes!

Please feel free to join in, or just to point out when I'm confused about something! I can't guarantee I'll have the energy to give you a full response if you do so, but please know that I will still appreciate whatever you choose to post.

view this post on Zulip David Egolf (Mar 26 2024 at 17:29):

I'm going to take a puzzle/exercise-based approach. I find it helps me focus my thoughts to have a particular thing I'm trying to figure out. (Sometimes I'll even jump straight to an exercise before reading a section! Then that exercise helps motivate my reading.)

The first exercise I want to contemplate is this:

In many of these examples something nice happens. First, suppose we have $s \in F U$ and an open cover of $U$ by open sets $U_i$. Then we can restrict $s$ to $U_i$ getting something we can call $s|_{U_i}$. We can then further restrict this to $U_i \cap U_j$. And by the definition of presheaf, we have

$(s|_{U_i})|_{U_i \cap U_j} = (s|_{U_j})|_{U_i \cap U_j}$

In other words, if we take a guy in $F U$ and restrict it to a bunch of open sets covering $U$, the resulting guys agree on the overlaps $U_i \cap U_j$. Check that this follows from the definition of functor and some other facts!

view this post on Zulip Morgan Rogers (he/him) (Mar 26 2024 at 17:35):

The exercise is "Check that...", right? Do you have an idea of where to start? :wink:

view this post on Zulip David Egolf (Mar 26 2024 at 17:40):

Morgan Rogers (he/him) said:

The exercise is "Check that...", right? Do you have an idea of where to start? :wink:

Yes, that's right! And I think I do have an idea of where to start! It'll take me a minute to type it out, though.

view this post on Zulip David Egolf (Mar 26 2024 at 17:45):

We have a functor $F:\mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}$. Here, $\mathcal{O}(X)^{\mathrm{op}}$ is the category of open sets of a topological space $X$, where we have a unique morphism from $U$ to $V$ exactly if $V \subseteq U$. I think of a morphism from $U$ to $V$ as saying "$U$ contains $V$".

The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$! In particular, this diagram commutes for any $U_i, U_j \subseteq U$:
diagram
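In tikz-cd notation, the square I have in mind is (roughly) this one, where every arrow is the unique "contains" morphism of $\mathcal{O}(X)^{\mathrm{op}}$:

\begin{tikzcd}
U \arrow[r] \arrow[d] & U_i \arrow[d] \\
U_j \arrow[r] & U_i \cap U_j
\end{tikzcd}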

view this post on Zulip David Egolf (Mar 26 2024 at 17:50):

Next, I'll use the fact that functors send commutative diagrams to commutative diagrams. That means that this diagram in $\mathsf{Set}$ also commutes:
diagram

by $r_{U \to V}$ I mean the "restriction function" that restricts things from $U$ to $V$, for $V \subseteq U$. This is the image under $F$ of the unique morphism from $U$ to $V$ in $\mathcal{O}(X)^{\mathrm{op}}$.
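In tikz-cd notation, this second square is (roughly):

\begin{tikzcd}
FU \arrow[r, "r_{U \to U_i}"] \arrow[d, "r_{U \to U_j}"'] & FU_i \arrow[d, "r_{U_i \to U_i \cap U_j}"] \\
FU_j \arrow[r, "r_{U_j \to U_i \cap U_j}"'] & F(U_i \cap U_j)
\end{tikzcd}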

view this post on Zulip David Egolf (Mar 26 2024 at 17:53):

Now, let's pick some $s \in FU$. Since this diagram commutes, we have that $r_{U_i \to U_i \cap U_j} \circ r_{U \to U_i}(s) = r_{U_j \to U_i \cap U_j} \circ r_{U \to U_j}(s)$. I believe this is just different notation for the thing we wanted to prove!
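To spell out the translation: $s|_{U_i}$ is my $r_{U \to U_i}(s)$, so $(s|_{U_i})|_{U_i \cap U_j}$ is $r_{U_i \to U_i \cap U_j}(r_{U \to U_i}(s))$, and similarly $(s|_{U_j})|_{U_i \cap U_j}$ is $r_{U_j \to U_i \cap U_j}(r_{U \to U_j}(s))$; the equation above identifies exactly these two composites.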

view this post on Zulip Peva Blanchard (Mar 26 2024 at 18:01):

(side question: how do you make the diagrams (pictures) so fast?)

view this post on Zulip David Egolf (Mar 26 2024 at 18:12):

Peva Blanchard said:

(side question: how do you make the diagrams (pictures) so fast?)

I use this amazing website to draw the diagrams: https://q.uiver.app/ .
Then all I have to do is take screenshots, and paste them into my draft!

view this post on Zulip David Egolf (Mar 27 2024 at 17:33):

Alright, we next have our first official "puzzle"!

Puzzle. Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a presheaf and in fact a sheaf.

I'll start by seeking to show that $F$ is a presheaf.

view this post on Zulip David Egolf (Mar 27 2024 at 17:39):

To show that $F$ is a presheaf, we need to show that it is a functor $F: \mathcal{O}(\mathbb{R})^{\mathrm{op}} \to \mathsf{Set}$. Now, for each open $U \subseteq \mathbb{R}$, we have that $FU$ is the set of continuous real-valued functions on $U$. To talk about "continuous" functions means that there needs to be some topology on $U$. I think it's reasonable to assume that $U$ is equipped with the subspace topology it inherits from $\mathbb{R}$.

I'm a bit intrigued by the fact that we then have $FU = \mathsf{Top}(U, \mathbb{R})$, where $U$ is equipped with the subspace topology. This leads me to consider the functor $\mathsf{Top}(-,\mathbb{R}): \mathsf{Top}^{\mathrm{op}} \to \mathsf{Set}$.

view this post on Zulip David Egolf (Mar 27 2024 at 17:43):

I'd now like to dream up some functor $G: \mathcal{O}(\mathbb{R})^{\mathrm{op}} \to \mathsf{Top}^{\mathrm{op}}$, so that we can express $F$ as $F = \mathsf{Top}(-, \mathbb{R}) \circ G$.

The first idea that comes to mind is to let $G$ act as follows:

view this post on Zulip David Egolf (Mar 27 2024 at 17:44):

It remains to show that $G$ really is a functor, and that $F = \mathsf{Top}(-, \mathbb{R}) \circ G$.

view this post on Zulip Peva Blanchard (Mar 27 2024 at 18:17):

I would have gone for a more direct proof that $F$ is a functor.

spoiler

view this post on Zulip David Egolf (Mar 27 2024 at 18:22):

To show that $G$ is a functor, I think the only tricky thing to check (unless I'm missing something) is as follows: We want to show that if $U$ and $V$, with $V \subseteq U$, are open subsets of $\mathbb{R}$ equipped with the subspace topology inherited from $\mathbb{R}$, then the inclusion function $i:V \to U$ is continuous.

To show this, I want to use this property of the subspace topology: "Let $Y$ be a subspace of $X$ and let $i:Y \to X$ be the inclusion map. Then for any topological space $Z$ a map $f: Z \to Y$ is continuous if and only if the composite map $i \circ f$ is continuous".

In our case, we have the inclusion map $i_{V \to U}: V \to U$ and the two inclusions to $\mathbb{R}$, namely: $i_V: V \to \mathbb{R}$ and $i_U: U \to \mathbb{R}$. Since $i_U \circ i_{V \to U} = i_V$ and $i_V$ is continuous, we conclude that $i_{V \to U}: V \to U$ is also continuous.

view this post on Zulip David Egolf (Mar 27 2024 at 18:24):

@Peva Blanchard I was strongly considering aiming for a more direct proof! I'll be interested to take a look at the "spoiler" in a bit!

view this post on Zulip David Egolf (Mar 27 2024 at 18:49):

It remains to show that $F = \mathsf{Top}(-, \mathbb{R}) \circ G$. To show this, it suffices to show the following:

  1. $\mathsf{Top}(-, \mathbb{R}) \circ G$ sends each open subset $U$ of $\mathbb{R}$ to the set of real-valued continuous functions on $U$
  2. $\mathsf{Top}(-, \mathbb{R}) \circ G$ sends each morphism in $\mathcal{O}(\mathbb{R})^{\mathrm{op}}$ to a function that restricts functions

Let's consider $r:U \to V$ in $\mathcal{O}(\mathbb{R})^{\mathrm{op}}$. So, $V \subseteq U$. First, we note that $\mathsf{Top}(-, \mathbb{R}) \circ G(U)$ first equips $U$ with the subspace topology and then gives the set of all real-valued continuous functions on $U$.

Next, we consider $\mathsf{Top}(-, \mathbb{R}) \circ G(r)$. $G(r)$ is the morphism $i^{\mathrm{op}}$ from $U$ to $V$ corresponding to the (continuous) inclusion function $i:V \to U$. Then, $\mathsf{Top}(-, \mathbb{R})(i^{\mathrm{op}}) = i^*: \mathsf{Top}(U, \mathbb{R}) \to \mathsf{Top}(V, \mathbb{R})$. Here, $i^*$ acts by precomposition, so it sends a continuous function $f:U \to \mathbb{R}$ to the function $f \circ i:V \to \mathbb{R}$. We note that $\mathsf{Top}(-, \mathbb{R}) \circ G(r) = i^*$ is indeed acting to restrict functions, as desired.

view this post on Zulip David Egolf (Mar 27 2024 at 18:52):

Peva Blanchard said:

I would have gone for a more direct proof that $F$ is a functor.

spoiler


I've taken a look at this now! Thanks for chiming in! I went for the less direct approach in part because it felt like it made it easier for me to realize that there are topology things going on. (The "reduction ad obvious-um" is great :laughing:, but I wanted to try and work out the details this time!)

I think there's going to be a bit more topology involved in showing that $F$ also satisfies the sheaf condition... :upside_down:

view this post on Zulip Peva Blanchard (Mar 27 2024 at 19:09):

:smile: I see, I was not sure which level of details you wanted to work at.

Btw, I find this format very nice. I read John's blog post series on topos not that long ago, but I never took the time to do the puzzles in detail. If you don't mind I'll probably join you on some puzzles, using the "spoiler" feature.

view this post on Zulip David Egolf (Mar 27 2024 at 19:18):

Peva Blanchard said:

:smile: I see, I was not sure which level of details you wanted to work at.

Btw, I find this format very nice. I read John's blog post series on topos not that long ago, but I never took the time to do the puzzles in detail. If you don't mind I'll probably join you on some puzzles, using the "spoiler" feature.

That sounds awesome!

view this post on Zulip Julius Hamilton (Mar 28 2024 at 11:53):

(deleted)

view this post on Zulip Julius Hamilton (Mar 28 2024 at 11:55):

David Egolf said:

I'm going to take a puzzle/exercise-based approach. I find it helps me focus my thoughts to have a particular thing I'm trying to figure out. (Sometimes I'll even jump straight to an exercise before reading a section! Then that exercise helps motivate my reading.)

The first exercise I want to contemplate is this:

In many of these examples something nice happens. First, suppose we have $s \in F U$ and an open cover of $U$ by open sets $U_i$. Then we can restrict $s$ to $U_i$ getting something we can call $s|_{U_i}$. We can then further restrict this to $U_i \cap U_j$. And by the definition of presheaf, we have

$(s|_{U_i})|_{U_i \cap U_j} = (s|_{U_j})|_{U_i \cap U_j}$

In other words, if we take a guy in $F U$ and restrict it to a bunch of open sets covering $U$, the resulting guys agree on the overlaps $U_i \cap U_j$. Check that this follows from the definition of functor and some other facts!

I’ll aim to attempt this exercise today.

view this post on Zulip David Egolf (Mar 28 2024 at 16:44):

Julius Hamilton said:

I’ll aim to attempt this exercise today.

Sounds great! By the way, I think it's very much in the spirit of this topic to "think out loud" a bit on these exercises. So, if you feel like it, please feel free to share some thoughts on the exercise - whether you're stuck on it or whether you have completed it!

view this post on Zulip David Egolf (Mar 28 2024 at 16:45):

Here's the puzzle I was working on above:

Puzzle. Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a presheaf and in fact a sheaf.

We saw above that $F$ is a presheaf. It remains to show that $F$ is a sheaf.

view this post on Zulip David Egolf (Mar 28 2024 at 16:56):

$F$ is a sheaf if we can do the following:

view this post on Zulip David Egolf (Mar 28 2024 at 17:03):

To give an analogy to imaging, we might think of each $s_i$ as a picture of some region in space. Then we would like to be able to "stitch together" a bunch of pictures that agree on their overlaps to get one picture of a larger area. Depending on what conditions we place on the pictures in question, this may or may not be possible! For example if we care about "plausibility" in some sense, note that we can't always stitch together plausible images of small areas to make a plausible image of some larger area.

view this post on Zulip David Egolf (Mar 28 2024 at 17:12):

Let us assume that we have selected some $U \in \mathcal{O}(\mathbb{R})$, a bunch of $U_i \in \mathcal{O}(\mathbb{R})$ so that $U = \cup_i U_i$, and for each $i$ an $s_i \in FU_i$ so that these selected continuous real-valued functions "agree on overlaps" in the sense mentioned above. We wish to show that these $s_i$ can be glued together to form a real-valued continuous function $s:U \to \mathbb{R}$ that restricts to $s_i$ on $U_i$, and further that this resulting function is unique.

Now, a function $s:U \to \mathbb{R}$ is completely determined by the value that it takes at each point in $U$. Since $\cup_i U_i = U$, an arbitrary point $x$ of $U$ is in some $U_i$. And since $s|_{U_i} = s_i$ we have $s(x) = s_i(x)$. So, the value of $s$ is determined at each point once we pick $s_i$ for each $i$. So, if $s$ exists, it is certainly unique.

view this post on Zulip David Egolf (Mar 28 2024 at 17:15):

Let's define $s:U \to \mathbb{R}$ as follows. For $x \in U$, if $x \in U_i$, let $s(x) = s_i(x)$. This definition gives us a function, because if $x \in U_i$ and $x \in U_j$, then $s_i(x) = (s_i|_{U_i \cap U_j})(x) = (s_j|_{U_i \cap U_j})(x) = s_j(x)$, using our assumption that the $s_i$ agree on overlaps. It remains to show that $s$ is continuous.

view this post on Zulip David Egolf (Mar 28 2024 at 17:21):

At this point, I recall the "gluing lemma" from topology (the quote below is from "Introduction to Topological Manifolds" by Lee):

Let $X$ and $Y$ be topological spaces, and let $\{A_i\}$ be either an arbitrary open cover of $X$ or a finite closed cover of $X$. Suppose that we are given continuous maps $f_i: A_i \to Y$ that agree on overlaps: ${f_i}|_{A_i \cap A_j} = {f_j}|_{A_i \cap A_j}$. Then there exists a unique continuous map $f: X \to Y$ whose restriction to each $A_i$ is equal to $f_i$.

view this post on Zulip David Egolf (Mar 28 2024 at 17:23):

This lemma assures us that $s$ is indeed continuous! I think that concludes this puzzle. (If I was working this out offline, I'd consider trying to prove the relevant part of this "gluing lemma". But to keep this topic a bit more focused, I'll not do that here).

view this post on Zulip Reid Barton (Mar 28 2024 at 17:28):

Here's another (possibly tricky) puzzle for you. When you proved that $F$ was a presheaf, you introduced another functor, $G$. Does this proof that $F$ is a sheaf have some formulation that involves $G$?

view this post on Zulip Reid Barton (Mar 28 2024 at 17:29):

(To make things a bit simpler, I suggest getting rid of the "op"s in the definition of $G$.)

view this post on Zulip John Baez (Mar 28 2024 at 18:16):

I would like to see a proof of the gluing lemma! To my mind this is the most interesting part of the whole puzzle. It's also not hard to prove. In fact, I never knew anyone had stated it formally as a "lemma" in some book.

The reason it's important is this: to show that $\mathbb{R}$-valued continuous functions form a sheaf, it turns out that the crucial step - the only step where something about continuity matters - is the step where you show that continuity is a "local" property.

That is, you can check if a function is continuous by running around your space, looking at each point, and asking "is the function continuous here?" Your function is continuous iff the answer is "yes" at each point.

Later I give an example of a property that's not local: namely, for an $\mathbb{R}$-valued function to be bounded is not local. So there's no sheaf of bounded $\mathbb{R}$-valued functions.

So, by outsourcing this "gluing lemma" to some textbook, I think you're missing out on an insight that this puzzle was designed to deliver.

view this post on Zulip Peva Blanchard (Mar 28 2024 at 18:16):

David Egolf said:

This lemma assures us that $s$ is indeed continuous! I think that concludes this puzzle. (If I was working this out offline, I'd consider trying to prove the relevant part of this "gluing lemma". But to keep this topic a bit more focused, I'll not do that here).

This gluing business is really what clicked for me when trying to understand sheaves. Continuity is a "local" property, so it goes well with gluing. I'll try to prove the gluing lemma.

spoiler

view this post on Zulip David Egolf (Mar 28 2024 at 19:06):

Thanks for pointing that out, @John Baez ! (And @Reid Barton , thanks for suggesting another puzzle - I'll potentially take a look at it in a bit! It sounds intriguing.)

I was already a bit "spoiled" on the proof of the gluing lemma, when I looked it up in my book earlier. Lee uses what he calls the "Local Criterion for Continuity" to prove the lemma in the case where the $\{A_i\}$ form an open cover of $X$. Here is a statement of the "Local Criterion for Continuity" from Lee's book:

A map $f:X \to Y$ between topological spaces is continuous if and only if each point of $X$ has a neighborhood on which (the restriction of) $f$ is continuous.

If we accept this, then we can use this to prove that our $s \in FU$ is continuous (we recall that $s:U \to \mathbb{R}$). Let us pick an arbitrary point $x$ from $U \subseteq \mathbb{R}$. We want to show that there is a neighborhood of $x$ on which the restriction of $s$ is continuous. Since $U = \cup_i U_i$, we know that $x \in U_i$ for some $i$. The restriction of $s$ to $U_i$ is $s_i$, which we know is continuous. By the "Local Criterion for Continuity", we conclude that $s: U \to \mathbb{R}$ is continuous.

view this post on Zulip David Egolf (Mar 28 2024 at 19:08):

It might be good to take this a step further and prove the "Local Criterion for Continuity" described above. I'll probably give this a try in a bit!

view this post on Zulip John Baez (Mar 28 2024 at 22:23):

Yes, that "local criterion for continuity" is the key step. It's actually a wonderful fact: while continuity is defined in a somewhat "global" way for a function $f: X \to Y$, there turns out to be a concept of a function being "continuous at a point", such that $f$ is continuous iff it is continuous at each point $x \in X$.

If you know how to prove this in your sleep, then of course there's no need to prove it here! But otherwise, it's worth thinking about.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 13:03):

David Egolf said:

Julius Hamilton said:

I’ll aim to attempt this exercise today.

Sounds great! By the way, I think it's very much in the spirit of this topic to "think out loud" a bit on these exercises. So, if you feel like it, please feel free to share some thoughts on the exercise - whether you're stuck on it or whether you have completed it!

That’s exactly what I want to do. I get self conscious that my amateur sloppiness feels like fluff. But I will. One sec.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 13:20):

David Egolf said:

I'm going to take a puzzle/exercise-based approach. I find it helps me focus my thoughts to have a particular thing I'm trying to figure out. (Sometimes I'll even jump straight to an exercise before reading a section! Then that exercise helps motivate my reading.)

The first exercise I want to contemplate is this:

In many of these examples something nice happens. First, suppose we have $s \in F U$ and an open cover of $U$ by open sets $U_i$. Then we can restrict $s$ to $U_i$ getting something we can call $s|_{U_i}$. We can then further restrict this to $U_i \cap U_j$. And by the definition of presheaf, we have

$(s|_{U_i})|_{U_i \cap U_j} = (s|_{U_j})|_{U_i \cap U_j}$

In other words, if we take a guy in $F U$ and restrict it to a bunch of open sets covering $U$, the resulting guys agree on the overlaps $U_i \cap U_j$. Check that this follows from the definition of functor and some other facts!

I live under time constraints (like all of us) so if it seems like I could easily get the answers to these questions just by reading more, I’m just trying to make it clear that I was encouraged by David Egolf to “think out loud” and it is helpful for me to be able to learn on-the-go like this. Thanks.

I took a real analysis class as an undergraduate but did not take a lot of other standard math classes. I never really studied topology.

The definition seems simple (I already looked it up). I want to make my thinking super rigorous which is why I am trying to formulate everything in Coq lately. That’s been fun but means I also need to learn more about Coq itself.

(This is meant to be on-topic, I’m saying I prefer to learn by expressing math in Coq).

Set is a fundamental keyword in Coq - I think a Type. I know there is the meta-type Sort as well. I am not clear in some regards how Coq treats types and sets differently. For example, I don’t know if Types have zero implication of their size, ie how many terms are assumed to exist in any given type.

I need to express “some specific element / term of type Set, but I don’t know which one (because it’s meant to be arbitrary)”. I think this is the Parameter keyword but not sure yet.

Parameter X : Set.

A topology is arguably of Coq Sort “Record”. I understand Record to be yet another of these mathematical ways of describing “collections” of things. A Record tries to have no commitment to any theories of mathematics, like Sets do. A record is a Type, but it can contain “multiple things”. So, a topology is:

A Record T
Consisting of a set X
And a set O of subsets of X
Such that the 3 topology conditions hold. They are:
Empty set and X are in S.
Arbitrary unions.
Finite intersections.
(I never took the time to think about why unions can be arbitrary but intersections have to be finite).

Actually, Topology should not be a record, it should use keyword Definition, since a Record is better for a single instance of an object? Not sure.

Record T : Type :=
| Parameter X : Set
| Parameter S : Set :=
(assume we just import a “power set function” until I have time to define one myself)
| (I haven’t learned how to state constraints on a type yet, but I need to state here the topology requirements. I guess it’s of type Prop, since they are Boolean. Something like Definition has_empty : S -> Prop := (assume we import a definition of empty set and set membership).

And so on.
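A rough sketch of how this might look in actual Coq syntax (a guess on my part, not something I have checked in Coq, modelling the open sets directly as predicates X -> Prop so that no separate power-set function is needed and the three conditions become record fields):

(* Sketch: a topological space as a record. Open sets are predicates on X,
   and "open" is a predicate on such predicates. *)
Record Topology (X : Type) := {
  open : (X -> Prop) -> Prop;
  open_empty : open (fun _ => False);
  open_full  : open (fun _ => True);
  (* arbitrary unions: the index type Idx can be anything *)
  open_union : forall (Idx : Type) (U : Idx -> X -> Prop),
      (forall i, open (U i)) -> open (fun x => exists i, U i x);
  (* binary (hence all finite) intersections *)
  open_inter : forall U V, open U -> open V -> open (fun x => U x /\ V x)
}.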

There’s my thinking aloud before I head off to work. I wanted to work up to questions I had about presheafs. I was stuck on thinking about inclusion mappings.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 13:21):

Baez’s post mentions that we need to reverse the direction of the arrows (O(X)^{op}) and I was trying to fully understand.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 13:24):

I have actually been spending a lot of time thinking about what the real nature of being “functional” is. I know the common definition. But I want to see clearly why the properties of categories come from mappings being functional (sometimes).

An inclusion mapping is functional. It maps an element of a subset of S to the same element in the set S. How do you define that mathematically?

view this post on Zulip Julius Hamilton (Mar 29 2024 at 13:26):

I guess we can express “mapped to itself” with an expression like f: S1 -> S such that S1 \subset S and f(x) = x. This is allowed by the axiom of extensionality? Though they are elements in different sets, there is already an equality relation defined on the elements of the two respective sets.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 13:26):

How can we reverse an inclusion mapping? It wouldn’t be functional. An inclusion mapping is injective but not surjective.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 13:26):

Thanks!

view this post on Zulip David Egolf (Mar 29 2024 at 17:08):

Thanks for joining in @Julius Hamilton ! I don't know enough about Coq to understand your questions relating to it (hopefully someone else here does, though!).

I can talk a bit about how I understand $\mathcal{O}(X)$ and $\mathcal{O}(X)^{\mathrm{op}}$ though, in case that is helpful to you.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 17:08):

Yeah please do

view this post on Zulip David Egolf (Mar 29 2024 at 17:14):

In my understanding, $\mathcal{O}(X)$ is a category that we can create when we're given a topology on the set $X$. To create this category, we need to say what its objects and morphisms are.

As you mentioned above, any topology on $X$ has a collection of subsets of $X$ - the "open sets". The open sets of our topology are the objects of $\mathcal{O}(X)$.

And given two open sets $U$ and $V$ in our topology on $X$, we can ask this question: Is $U$ a subset of $V$? If the answer to this question is "yes!", then we put a morphism from $U$ to $V$. Otherwise, we don't put a morphism from $U$ to $V$.

view this post on Zulip David Egolf (Mar 29 2024 at 17:17):

Then $\mathcal{O}(X)^{\mathrm{op}}$ is another category: it's just like $\mathcal{O}(X)$ except we turn all the arrows around. So now we put a morphism from $U$ to $V$ exactly if $V \subseteq U$.

view this post on Zulip David Egolf (Mar 29 2024 at 17:19):

I don't really think of these morphisms as functions. I think of them more like "yes!" answers to a yes/no question.

I'm not sure I was really addressing your point of confusion :sweat_smile:. Hopefully this is still somewhat helpful!

view this post on Zulip Julius Hamilton (Mar 29 2024 at 17:28):

Ok. Yeah that simplifies it dramatically.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 17:28):

I think I was confused regarding how simple a morphism can be. I’ll think more about that.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 17:39):

In many of these examples something nice happens. First, suppose we have $s \in F U$ and an open cover of $U$ by open sets $U_i$.

s is some arbitrary set.
U is an open set in O(X).
Does it matter if it turned out that s was a set in O(X)? Since O(X) is surely a sub-category of Set?

I’ll assume an open cover of U is a union of sets in O(X) such that U is a subset of the union.

Then we can restrict $s$ to $U_i$ getting something we can call $s|_{U_i}$.

Baez’s post said we need to flip the morphisms precisely so we can “restrict” the functor. So this is saying, “only those elements of s such that there exists an element x in U_i for which F(x) \in s”. ?

We can’t do this without flipping the morphisms? I’d like to think about how. (Thinking out loud :wink:)

view this post on Zulip David Egolf (Mar 29 2024 at 17:53):

The notation $s \in FU$ means that $s$ is an element of the set $FU$. For example, depending on what $F$ does, $s$ might be a continuous real-valued function with domain $U$.

view this post on Zulip John Baez (Mar 29 2024 at 17:56):

@Julius Hamilton - I think it's very helpful to focus on a specific example of a sheaf when trying to understand the definition of sheaf, and (unsurprisingly) I recommend the example David is talking about, where $FU$ is the set of continuous functions $f: U \to \mathbb{R}$.

Or, if "continuous" is distracting to you, think about the sheaf where $FU$ is the set of all functions $f: U \to \mathbb{R}$.

Or, if $\mathbb{R}$ is distracting to you, replace it with some other set.

If you read all the sheaf axioms keeping some example like this in mind, they should make more sense.

view this post on Zulip John Baez (Mar 29 2024 at 18:05):

(The reason I picked a sheaf of continuous functions is because topos theory originated as a generalization of topology, as the name suggests - so ideas from topology can help explain what people are doing in topos theory. There are also other ways to get into topos theory, but my course notes - and the book they're based on - start with topology. Luckily you only need to know a small amount of topology.)

view this post on Zulip David Egolf (Mar 29 2024 at 18:53):

In the spirit of trying to better understand why we can detect continuity of a function in a local way, I'll now try to prove this:

A map $f:X \to Y$ between topological spaces is continuous if and only if each point of $X$ has a neighborhood on which (the restriction of) $f$ is continuous.

I got stuck along the way. To get unstuck, I referenced Lee's book on topological manifolds. So, what I wrote below closely follows the proof in Lee's book.

If $f$ is continuous, then for any point $x \in X$, we have that $X$ is a neighborhood of $x$ on which the restriction of $f$ (which is just $f$) is continuous.

For the other direction, we assume that each point of $X$ has a neighborhood (an open set) on which the restriction of $f$ is continuous. We wish to show that $f: X \to Y$ must then be continuous. To show that $f$ is continuous, we consider an arbitrary open subset $V$ of $Y$. We wish to show that the preimage of $V$ under $f$ is open in $X$.

I'll call this preimage $f^*(V)$. Now, a set $A$ is open exactly if every point $x$ in $A$ is contained in an open subset $U_x \subseteq A$. Thinking of openness in this way seems likely to be helpful, as it involves a condition that can be checked at each point - and we are trying to understand why continuity involves a condition that can be checked at each point.

Now, we know that any $x \in f^*(V)$ has some neighborhood $U_x$ such that $f|_{U_x}: U_x \to Y$ is continuous. Since $f|_{U_x}$ is continuous and $V$ is open, $(f|_{U_x})^*(V)$ is also open in the subspace topology on $U_x$. Thus, it is the intersection of some open set $A$ (in the topology on $X$) with $U_x$. Since $U_x$ is open and $A$ is open, $(f|_{U_x})^*(V)$ is also open (in the topology on $X$).

We are hoping that $(f|_{U_x})^*(V)$ is an open subset of $f^*(V)$ that contains $x$. This would provide the neighborhood of "breathing room" about $x$ in $f^*(V)$ that we need for $f^*(V)$ to be open.

By definition, $(f|_{U_x})^*(V)$ is the subset of $U_x$ that maps into $V$ under $f$. So, its elements are exactly those that are: (1) in $U_x$ and (2) map to $V$ under $f$. Thus, $(f|_{U_x})^*(V) = U_x \cap f^*(V)$. Note that $x \in U_x \cap f^*(V)$. We also note that $(f|_{U_x})^*(V) = U_x \cap f^*(V)$ is a subset of $f^*(V)$.

We conclude that an arbitrary point of $f^*(V)$ has a neighborhood contained in $f^*(V)$. Therefore $f$ is continuous.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 19:02):

John Baez said:

Julius Hamilton - I think it's very helpful to focus on a specific example of a sheaf when trying to understand the definition of sheaf, and (unsurprisingly) I recommend the example David is talking about, where $FU$ is the set of continuous functions $f: U \to \mathbb{R}$.

Or, if "continuous" is distracting to you, think about the sheaf where $FU$ is the set of all functions $f: U \to \mathbb{R}$.

Or, if $\mathbb{R}$ is distracting to you, replace it with some other set.

If you read all the sheaf axioms keeping some example like this in mind, they should make more sense.

A presheaf $F$ is a functor from $O(X)^{op}$ to $\mathbf{Set}$. (Just restating definitions to exercise myself).

Baez says that we can see why we would want to take the opposite category of $O(X)$ if we think of one possible presheaf $F$ sending each $U \in Ob(O(X))$ to that set in $\mathbf{Set}$ that contains all real-valued functions over $U$.

Let’s consider some examples.

If $X$ is the real numbers, let's consider a common topology over the reals. (Which is?)

Topologies are a way to express geometric concepts. Why are they so fundamentally defined in terms of “open sets”?

Perhaps it has to do with continuity and limits. Maybe it allows us to define the epsilon-delta condition without recourse to a distance metric?

view this post on Zulip Julius Hamilton (Mar 29 2024 at 19:06):

David Egolf said:

The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$!

Why?

view this post on Zulip Eric M Downes (Mar 29 2024 at 19:29):

Julius Hamilton said:

David Egolf said:

The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$!

Why?

  1. Can you distinguish between the "generator arrows" of a poset, and the morphisms only in the closure? (The latter are "required to be there" to satisfy the associativity of composition, given the generators; hint: three objects, with two morphisms $f,g;~dom.g=cod.f$ and $dom.f\neq cod.g$; apply transitivity of partial order; what have you got?)
  2. Given two distinct objects $X,Y\in obj.\mathcal{O}(X)$, how many generator arrows are you allowed to draw? That is, how big is the homset $hom(X,Y)$?
  3. Does moving to the opposite category change any of this?
  4. So, if you draw a diagram with only morphisms within $\mathcal{O}(X)$ (no functors to other cats allowed!), how do (1) and (2) constrain said diagrams?

I'm sure there are more sophisticated ways of thinking about this, but that is how I approach it.

view this post on Zulip David Egolf (Mar 29 2024 at 19:58):

Julius Hamilton said:

David Egolf said:

The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$!

Why?

Here's how I think of it ("it" being "all diagrams commute in a poset"):

When we say (part of) a diagram "commutes", we mean that two different sequences of morphisms compose to the same morphism. If a category is a poset, it has at most one morphism from $A$ to $B$, for any objects $A$ and $B$. Therefore, if I have two different sequences of morphisms from $A$ to $B$, when I compose the morphisms in each sequence there's only one possible morphism for me to get as a result!

Consequently, these two sequences of morphisms must compose to the same morphism. And hence the corresponding (part of a) diagram must commute.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 22:00):

David Egolf said:

Julius Hamilton said:

David Egolf said:

The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$!

Why?

Here's how I think of it ("it" being "all diagrams commute in a poset"):

When we say (part of) a diagram "commutes", we mean that two different sequences of morphisms compose to the same morphism. If a category is a poset, it has at most one morphism from $A$ to $B$, for any objects $A$ and $B$. Therefore, if I have two different sequences of morphisms from $A$ to $B$, when I compose the morphisms in each sequence there's only one possible morphism for me to get as a result!

Consequently, these two sequences of morphisms must compose to the same morphism. And hence the corresponding (part of a) diagram must commute.

That is such a beautifully simple explanation. You have a knack for clear simple understanding.

view this post on Zulip Julius Hamilton (Mar 29 2024 at 22:19):

Eric M Downes said:

  1. Can you distinguish between the "generator arrows" of a poset, and the morphisms only in the closure?

I’d never thought of that before. I’m curious to know what those elements might be called in an abstract algebra setting. I’ve been thinking about how a one object category is a monoid, and if there is a corresponding abstract algebraic structure for a “multi-object category”. The thing is, not all the arrows (elements of the “structure”) compose with one another. All algebraic structures I know of are defined by closure and by “totality”.

I think your point is that all the arrows in a thin category are generators. Now I have to think about how categories can have non-generator arrows.
I think the most basic way of expressing the compositional requirement of the arrows in a category is, “if they can compose, they do.” “If they are composable, then compose.”

I’ll come to the other points you made in a bit.

view this post on Zulip Julius Hamilton (Mar 30 2024 at 00:31):

David Egolf said:

Now, let's pick some $s \in FU$. Since this diagram commutes, we have that $r_{U_i \to U_i \cap U_j} \circ r_{U \to U_i}(s) = r_{U_j \to U_i \cap U_j} \circ r_{U \to U_j}(s)$. I believe this is just different notation for the thing we wanted to prove!

I believe I follow your argument piece by piece but want to digest it more.

Every diagram in a poset commutes.
Functors preserve commutative diagrams (why)?
In $O(X)^{op}$, the morphisms essentially say “contains”.
If we think of $F$ as mapping a set to a function defined on that set, we can think of the “contains” morphism in $O(X)^{op}$ as corresponding, under $F$, to a restriction of some function on some subset of its domain.
Baez basically just asks us to show that if we restrict some $s \in FU$ (which can be thought of as a function) to an open subset $U_i \subseteq U$, and also restrict it to some other subset $U_j \subseteq U$, we can further restrict both of those restricted functions to $U_i \cap U_j$, and they are the same. Which David showed.

Questions:

  1. What is an example of a function which does not have this property?
  2. What if we stop thinking of $F$ as mapping a domain to a function over that domain? It is harder for me to regain the intuition of what the arrows in $\mathbf{Set}$ “mean” or “correspond to”.

view this post on Zulip Julius Hamilton (Mar 30 2024 at 00:40):

David Egolf said:

Alright, we next have our first official "puzzle"!

Puzzle. Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a presheaf and in fact a sheaf.

I'll start by seeking to show that $F$ is a presheaf.

In order to show it is a presheaf, I think we have to show $X$ has a natural topology and forms a thin category, and then that $F$ fulfills the functor axioms (if we reverse the direction of the arrows). I’ve been trying to tell myself “a functor is a morphism in the category of categories” as a single idea to remind myself of the definition. I think the important thing is if two arrows $f, g$ in $C$ compose, then arrows $Ff, Fg$ must compose in such a way that $Fg \circ Ff = F(g \circ f)$. Which basically says, “when you map the morphisms over, you can take the composition before, or you can take it after”.

view this post on Zulip Julius Hamilton (Mar 30 2024 at 00:42):

I’ll follow the rest of David’s proof a little later.

view this post on Zulip David Egolf (Mar 30 2024 at 02:35):

I don't have energy right now to respond in detail to your comments, @Julius Hamilton. But I did notice that above you asked "why?", regarding the fact that functors send commutative diagrams to commutative diagrams. You might find it a helpful exercise to pick a particular (simple) commutative diagram, and then aim to show that applying a functor $F$ to that diagram yields a commutative diagram.

view this post on Zulip Julius Hamilton (Mar 30 2024 at 02:35):

All good brotha rest up. Yes I will think about that.

view this post on Zulip David Tanzer (Mar 30 2024 at 04:47):

To prove it, we need to first formalize what it means for a diagram to be commutative.

view this post on Zulip David Tanzer (Mar 30 2024 at 06:16):

As an aside, in an alternative definition, one could take preserving diagram commutativity as a defining characteristic of a functor; then recover $T(f \circ g) = T(f) \circ T(g)$ by applying this to a simple diagram. Not as efficient as a technical definition, but seems conceptually useful.
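For example (sketching it): the triangle built from $f: A \to B$, $g: B \to C$, and their composite $g \circ f: A \to C$ commutes by the very definition of composition, so a functor $T$ that preserves commutative diagrams sends it to a commutative triangle in the target category, and that commutativity says exactly $T(g) \circ T(f) = T(g \circ f)$.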

view this post on Zulip David Egolf (Mar 30 2024 at 16:14):

Before moving on to the next puzzle, I want to think about this for a bit:

Reid Barton said:

Here's another (possibly tricky) puzzle for you. When you proved that $F$ was a presheaf, you introduced another functor, $G$. Does this proof that $F$ is a sheaf have some formulation that involves $G$?

The words "possibly tricky" strike some fear into my heart, but this sounds fun to think about - so I'll see what I can figure out...

view this post on Zulip David Egolf (Mar 30 2024 at 16:22):

First, let's recall the context:

view this post on Zulip David Egolf (Mar 30 2024 at 21:53):

Imagine we have two open sets $U_1$ and $U_2$ (in the standard topology on $\mathbb{R}$) with $U = U_1 \cup U_2$. For $\mathsf{Top}(-, \mathbb{R}) \circ G$ to be a sheaf, I think we need to be able to construct a unique real-valued continuous function $s:GU \to \mathbb{R}$ from a pair of real-valued continuous functions $s_1:GU_1 \to \mathbb{R}$ and $s_2:GU_2 \to \mathbb{R}$, provided that $s_1$ and $s_2$ agree on $GU_1 \cap GU_2$. [Or should they agree on $G(U_1 \cap U_2)$? Still figuring this out...]

We also want to be able to go the other way: given the $s:GU \to \mathbb{R}$ constructed from $s_1$ and $s_2$ above, we want to be able to recover $s_1$ and $s_2$ from $s$ by appropriately restricting $s$.

When we have two batches of information that we want to be equivalent, that makes me start to think of limits or colimits!

view this post on Zulip David Egolf (Mar 30 2024 at 22:05):

I'm pretty sure what I just wrote above isn't quite right. Or, at least, I'm quite confused about it.

But here's a picture illustrating the very rough idea I have in mind, that I'm still working to clearly spell out:
picture

This diagram is in $\mathsf{Top}^{\mathrm{op}}$.

view this post on Zulip David Egolf (Mar 30 2024 at 22:07):

Very roughly, I'm starting to wonder if $G$ should do something like "preserving pullbacks", if $\mathsf{Top}(-, \mathbb{R}) \circ G$ is to be a sheaf. But it will take me some thinking to express this idea more clearly!

view this post on Zulip John Baez (Mar 30 2024 at 22:16):

I think you're on the right track. I think all this can be simplified a bit.

view this post on Zulip John Baez (Mar 30 2024 at 22:24):

I go into this in a bit more detail in Part 3 of my course. Just two short paragraphs. But you seem to be enjoying discovering this stuff on your own, which is really better.

view this post on Zulip John Baez (Mar 30 2024 at 22:27):

If you try to develop a subject on your own, the way you're doing, it can become much easier to understand what people are doing when you read the 'official' treatment.

view this post on Zulip John Baez (Mar 30 2024 at 22:36):

By the way, another tiny point:

David Egolf said:

Here is a statement of the "Local Criterion for Continuity" from Lee's book:

A map $f:X \to Y$ between topological spaces is continuous if and only if each point of $X$ has a neighborhood on which (the restriction of) $f$ is continuous.

I'm losing track of who said what where, but I think I saw someone derive this "local criterion for continuity" from something more basic, which might be called the "local criterion for openness":

A subset $S$ of a topological space is open $\iff$ each point $x \in S$ is contained in some open set $O_x$ contained in $S$.

This is amusingly easy to prove. For the $\implies$ direction just take $O_x = S$. For the $\Leftarrow$ direction just note $S = \bigcup_{x \in S} O_x$ and use the fact that a union of open sets is open.

view this post on Zulip John Baez (Mar 30 2024 at 22:38):

So we can say openness is a 'local' condition: to see if a set is open, you can run around checking some condition about all its points, and the set is open iff this condition holds for all its points.

And this implies that continuity is also a local condition: to see if a function is continuous, you can run around checking some condition at all points of its domain, and the function is continuous iff this condition holds for all those points.

view this post on Zulip David Egolf (Mar 30 2024 at 22:40):

Yes, I made implicit use of this "local criterion for openness" above! Before doing so, I had never noticed this connection between continuity being "locally detectable" and openness being "locally detectable"! Cool stuff! :smile:

view this post on Zulip John Baez (Mar 30 2024 at 22:40):

Yes!

And there's a "for all" in both of these "local criteria". I haven't thought about it hard, but I bet this is connected to the fact that the sheaf condition can be stated in terms of limits. (A "for all" is a limit, and the pullback you were looking at is also a limit.)

view this post on Zulip John Baez (Mar 30 2024 at 22:43):

I guess for now the main moral is that sheaves are all about "locally detectable" properties.

view this post on Zulip Peva Blanchard (Mar 30 2024 at 22:46):

I'm wondering then what makes a property "local".

It is as if it was something that is trivially parallelizable: I imagine a (possibly uncountable) set of agents that would check for each point $x$ whether the property holds "around" that point, and, most importantly, they don't need to communicate/synchronize. And the agents are indistinguishable (each of them runs the same check procedure).

view this post on Zulip John Baez (Mar 30 2024 at 23:00):

That sounds right! It's a nice thought. I can try to make it slightly more precise. There's an agent for each point. Each agent runs the same check procedure, which can be checked in an arbitrarily small neighborhood of their point. Then at the end we report the answer "true" if and only if they all get the answer "true".

view this post on Zulip John Baez (Mar 30 2024 at 23:02):

Important properties of functions $f : \mathbb{R} \to \mathbb{R}$ like continuity, differentiability, smoothness, analyticity, upper and lower continuity, and measurability all work like this.

view this post on Zulip John Baez (Mar 30 2024 at 23:02):

Later in my posts I talk about the idea of a 'germ', which is connected to this stuff.

view this post on Zulip Peva Blanchard (Mar 30 2024 at 23:22):

Oh yes, I remember now the formal definition of 'germ', but it's the first time that I get an inverse "tree-like" mental picture of it. Here's what I have in mind.

Initially, we have one agent that checks whether the property $P$ holds over the open set $X$. The agent can spawn (possibly uncountably many) new agents, that are clones of their genitor, each of them being responsible for an open subset of $X$. The parent agent reports true iff all its children report true. And the process continues like this.

This is a very poor algorithm: depending on the topological space, this could take a transfinite number of steps and a transfinite number of agents.

What I find amusing is that the opposite category of open subsets of $X$ somehow describes this big clone-spawning branching process. The points of $X$ are exactly the (infinitely long) branches.

view this post on Zulip Peva Blanchard (Mar 30 2024 at 23:32):

ps: To be more precise, I should force agents to merge when they work on the same open subset.

view this post on Zulip David Egolf (Mar 31 2024 at 00:10):

The idea of "local agents that can work in parallel" reminds me a whole lot of an ultrasound reconstruction technique I know of, where the reconstruction at each point can be computed in isolation of the reconstruction at different points. But this is not quite analogous because the agents in this case would report a number, not just a "yes!" or "no!" regarding whether some property holds.

view this post on Zulip David Egolf (Mar 31 2024 at 00:11):

One could also consider checking a reconstructed image point by point, and at each point asking if the reconstruction at that point is "plausible" (in some sense) given the observations. (This would probably involve assessing whether the observed data "relevant to this point" is similar enough relative to what we'd expect if our reconstruction at this point reflects the true object).

However, just because a reconstructed image agrees (in some sense) with the observed data at each point does not imply that the entire reconstructed image agrees with the observed data. So, in this example, "reconstruction plausibility" is not "locally detectable".

view this post on Zulip David Egolf (Mar 31 2024 at 00:16):

This has been a fruitful bonus question to think about!

However, starting next week, I'm hoping to move on to the next puzzle in the blog post, which is this:

Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of bounded continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a separated presheaf but not a sheaf.

view this post on Zulip Peva Blanchard (Mar 31 2024 at 00:44):

David Egolf said:

The idea of "local agents that can work in parallel" reminds me a whole lot of an ultrasound reconstruction technique I know of, where the reconstruction at each point can be computed in isolation of the reconstruction at different points. But this is not quite analogous because the agents in this case would report a number, not just a "yes!" or "no!" regarding whether some property holds.

I'll cheat a bit (because I remember later posts from John's series). I think these Boolean agents actually are the truth values of the topos of sheaves over $X$. When the root agent reports "yes!", then the property holds everywhere. When one of its children does, then the property holds on the associated subset. I think we can think of this "collection of boolean agents over $X$" as a specific sheaf.

spoiler

Now regarding agents that would report numbers. I think we should remember that the parent agent aggregates the results of its children. In the "yes/no" case, the aggregation is simply a big conjunction. If, on the other hand, "point"-children report numbers, then the parent can aggregate just by making a tuple out of them, indexed by their location. In other words, each agent really reports a real-valued function, and the parent aggregation process amounts to gluing the functions reported by its children, i.e., taking a categorical limit.

I will stop there, as otherwise, it's just going to be another burrito tutorial.

Anyway, a lot of things clicked today, so thank you :)

view this post on Zulip John Baez (Mar 31 2024 at 01:44):

Good stuff, Peva! I'm too tired to think hard about what you said, so I'll just report the slogan "topos theory is the study of local truth".

view this post on Zulip Madeleine Birchfield (Mar 31 2024 at 15:16):

Do presheaf categories still have subobject classifiers if Set does not have a subobject classifier because one is a constructive predicativist?

view this post on Zulip Morgan Rogers (he/him) (Mar 31 2024 at 15:23):

Did you already see the nLab page [[predicative topos]]?

view this post on Zulip Eric M Downes (Mar 31 2024 at 18:13):

Julius Hamilton said:

Eric M Downes said:

  1. Can you distinguish between the "generator arrows" of a poset, and the morphisms only in the closure?

I’d never thought of that before. I’m curious to know what those elements might be called in an abstract algebra setting. I’ve been thinking about how a one object category is a monoid, and if there is a corresponding abstract algebraic structure for a “multi-object category”.

"Generators" is most common for such elements in most contexts; famously the symmetric group $S_n$ can be generated by just two maps $m\mapsto m+1\pmod{n}$ and $(12)$. What is the operation under which a permutation group specifically can be said to be closed?

A family of subsets $\mathcal{F}(X)$ can generate a topology $\mathrm{cl}_{\cup,\cap}(\mathcal{F})$. The topology is the closure under arbitrary unions and finite intersections. Is a topology a category?

There is such a thing as a delooping; simplest context is a finite commutative monoid, in which every element is an object. Draw an arrow $x\to y$ just when $\exists z;~y=zx$ (Green's relations). Take your favorite finite commutative monoid (without inverses if you want to deal with fewer arrows), how few arrows can you specify, such that asserting closure under composition of arrows fills in the rest of the Cayley table?

The above arrow drawing requires associativity of elements. "Magmas" are the non-associative binops. For a familiar structure that is closed in an elemental sense but not closed in another very meaningful sense, consider the rock-paper-scissors magma
$\begin{array}{c|ccc} & r & p & s \\ \hline r & r & p & r \\ p & p & p & s \\ s & r & s & s \end{array}$
This binary operator is not associative. You can rephrase the associativity condition as a kind of closure (or lack-thereof) under a certain familiar operation. What is it? How many elements must the closed structure have?
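For instance, reading off the table above (which is symmetric, so the order of the arguments doesn't matter here): $(r \cdot p) \cdot s = p \cdot s = s$, while $r \cdot (p \cdot s) = r \cdot s = r$, and $s \neq r$.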

view this post on Zulip Eric M Downes (Mar 31 2024 at 18:36):

Julius Hamilton said:

I think your point is that all the arrows in a thin category are generators. Now I have to think about how categories can have non-generator arrows.

No, you can have non-generator arrows in a thin category. Consider a poset $\big(\{x,y,z\},\leq\big)$ and there are two "generator" arrows
$x\leq y, y\leq z$
what third arrow must also be present?

view this post on Zulip Eric M Downes (Mar 31 2024 at 18:44):

(And, having answered that question, and recalling there is at most one arrow between any two objects in a thin category, you should understand why all diagrams* in a thin category commute.)

view this post on Zulip David Egolf (Apr 01 2024 at 16:29):

Here's the next puzzle I want to work through:

Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of bounded continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a separated presheaf but not a sheaf.

We start by showing $F$ is a functor $F: {\mathcal{O}(\mathbb{R})}^{\mathrm{op}} \to \mathsf{Set}$, which means it is a presheaf. On objects, $F$ sends an open set $U \subseteq \mathbb{R}$ to the set $FU$ of bounded continuous real-valued functions on $U$. Note: to determine if a function from $U$ is continuous, we need to put a topology on $U$. To talk about continuity, we equip $U$ with the subspace topology it inherits from $\mathbb{R}$.

On morphisms, $F$ sends a morphism $r:A \to B$ to the corresponding restriction function, which sends a bounded continuous real-valued function $f:A \to \mathbb{R}$ to $f \circ i:B \to \mathbb{R}$, where $i: B \to A$ is the inclusion map. We saw earlier that inclusion maps like $i$ are continuous, and therefore the restriction of a continuous function is continuous. Further, restricting a bounded function yields a bounded function.

For each object $U$ in ${\mathcal{O}(\mathbb{R})}^{\mathrm{op}}$, $F$ sends the identity morphism $1_U: U \to U$ to the identity function on $FU$. This is because restricting a function to its own domain leaves the function unchanged.

If we have the situation $r \circ r' = r''$ in ${\mathcal{O}(\mathbb{R})}^{\mathrm{op}}$, then we have $F(r) \circ F(r') = F(r'')$. That is because restricting a function to some domain in two steps yields the same result as restricting it to that domain all at once.

We conclude that $F$ is a functor $F: {\mathcal{O}(\mathbb{R})}^{\mathrm{op}} \to \mathsf{Set}$ and hence a presheaf.

view this post on Zulip David Egolf (Apr 01 2024 at 16:34):

Next, we show that $F$ is a separated presheaf but not a sheaf. If $F$ was a sheaf, we'd always be able to do the following:

If this $s$ always exists and is unique, then $F$ is a sheaf. If $s$ doesn't always exist, but is unique when it exists, then $F$ is a separated presheaf.

view this post on Zulip David Egolf (Apr 01 2024 at 16:40):

In this puzzle, if $s$ exists it is unique. For any $x \in U$, since $U = \cup_i U_i$, we have that $x \in U_i$ for some $i$. Then, since $s|_{U_i} = s_i$, we have that $s(x) = s_i(x)$. So, the value of $s$ at every point is fixed (if it exists) once we pick all our $s_i$.

But $s$ doesn't always exist! That's because if you glue together an infinite number of bounded real-valued continuous functions that agree on overlaps, you don't always get a bounded function! Intuitively, if you run around and check that each little bit of a function is locally bounded, you can't conclude that the whole thing is bounded.

We conclude that $F$ is a separated presheaf and not a sheaf.

view this post on Zulip David Egolf (Apr 01 2024 at 16:56):

There is quite a bit of discussion before the next puzzle! So, I'll try to introduce the next puzzle a little.

To my understanding, part of the goal of the next puzzle is to work towards a notion of morphism between categories of sheaves. And since each category of sheaves is an "elementary topos", this is relevant for thinking about morphisms between elementary topoi.

view this post on Zulip David Egolf (Apr 01 2024 at 17:03):

And why do we care about morphisms between topoi? Here are a couple possible reasons:

view this post on Zulip David Egolf (Apr 01 2024 at 17:06):

At any rate, here is the next puzzle:

Show that fFf_\ast F is a presheaf. That is, explain how we can restrict an element of (fF)(V)(f_\ast F)(V) to any open set contained in VV, and check that we get a presheaf this way.

Here, fFf_\ast F is defined as: (fF)(V)=F(f1V)(f_\ast F)(V) = F(f^{-1} V) for each open subset VV of a topological space YY. Note that f:XYf: X \to Y is a continuous function and FF is a presheaf on XX. We also have f1V={xX:  f(x)V}Xf^{-1} V = \{x \in X :\; f(x) \in V \} \subseteq X. Note that f1Vf^{-1} V is open because VV is open and ff is continuous.

Roughly, our goal here is to make a presheaf on YY given a continuous function f:XYf:X \to Y and a presheaf FF on XX.

view this post on Zulip Peva Blanchard (Apr 01 2024 at 21:59):

David Egolf said:

In this puzzle, if ss exists it is unique. For any xUx \in U, since U=iUiU = \cup_i U_i, we have that xUix \in U_i for some ii. Then, since sUi=sis|_{U_i} = s_i, we have that s(x)=si(x)s(x) = s_i(x). So, the value of ss at every point is fixed (if it exists) once we pick all our sis_i.

But ss doesn't always exist! That's because if you glue together an infinite number of bounded real-valued continuous functions that agree on overlaps, you don't always get a bounded function! Intuitively, if you run around and check that each little bit of a function is locally bounded, you can't conclude that the whole thing is bounded.

We conclude that FF is a separated presheaf and not a sheaf.

I wanted to provide a concrete counter-example.

Consider the topological space X=(0,1]X = (0,1], the half-open unit interval, with the induced topology from R\mathbb{R}. For all nNn \in \mathbb{N}, let Un=(11+n,1]U_n = \left(\frac{1}{1 + n}, 1\right] and fn(x)=1xf_n(x) = \frac{1}{x} on UnU_n. The UnU_n cover XX, X=nUnX = \bigcup_n U_n, and each fnf_n is bounded. Moreover, for any nmn \leq m, UnUm=UnU_n \cap U_m = U_n and fnf_n and fmf_m obviously match on UnU_n. If the presheaf F\mathcal{F} of bounded functions were a sheaf, there would exist a bounded function ff defined on XX such that fUn=fnf_{|U_n} = f_n. In particular, for all nn

sup fsup fn=n+1sup~f \geq sup~f_n = n + 1

whence a contradiction, i.e., F\mathcal{F} is not a sheaf.

view this post on Zulip Peva Blanchard (Apr 01 2024 at 22:11):

There are variants of this idea: presheaf of Lipschitz functions (take x\sqrt{x} or ln x\ln x in the example above) (edit: 1x\frac{1}{x} works as well), presheaf of functions with bounded derivatives of order kk (just integrate kk times the previous examples).

view this post on Zulip David Egolf (Apr 01 2024 at 22:22):

Thanks for sharing some specific examples! I hadn't thought of those!

The counterexample I had in mind looks like this in picture form:
counterexample picture

The idea is to consider the identity function f:RRf: \mathbb{R} \to \mathbb{R} that sends xx to xx. Then, we can get each sis_i by restricting ff to, say, Ui=(i,i+2)U_i=(i, i+2). Each sis_i is then bounded, and the collection of sis_i agrees on overlaps, but when we try to glue together all the sis_i, our resulting function isn't bounded anymore.
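
Here's a small sanity check of this counterexample, sketched in Python. Only finitely many pieces are checked, and the particular intervals and sample points are illustrative choices, not part of the puzzle itself.

```python
# Cover part of R by the open intervals (i, i+2), restrict the identity
# function to each piece, and observe that every piece is bounded even
# though the glued function (the identity on all of R) is not.

def identity(x):
    return x

def piece(i):
    """The restriction s_i of the identity to the open interval (i, i+2),
    represented as (lower endpoint, upper endpoint, function)."""
    return (i, i + 2, identity)

# Each piece is bounded: on (i, i+2) the identity only takes values in (i, i+2).
for i in range(-5, 5):
    lo, hi, s = piece(i)
    assert all(lo < s(x) < hi for x in [lo + 0.1, lo + 1.0, hi - 0.1])

# Neighbouring pieces agree on overlaps: s_i and s_{i+1} coincide on (i+1, i+2).
for i in range(-5, 5):
    _, _, s_i = piece(i)
    _, _, s_j = piece(i + 1)
    assert s_i(i + 1.5) == s_j(i + 1.5)   # a sample point in the overlap

# But the glued function is unbounded: any proposed bound M is exceeded.
M = 10.0 ** 6
assert identity(M + 1) > M
```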

view this post on Zulip Peva Blanchard (Apr 01 2024 at 22:29):

oh yes, nice! The common pattern for getting a "non-sheafy presheaf" is to start from a globally defined candidate that violates the condition, but whose restrictions to suitable open subsets do satisfy it.

Each invalid candidate has a sort of singularity at some point (e.g., 0 in my example, and ++\infty in yours), and then we just restrict to open subsets that avoid that point "just enough".

view this post on Zulip Peva Blanchard (Apr 01 2024 at 22:50):

Mmh, these examples do not work any more if XX is compact. Compact means that from any open cover of XX we can extract a finite subcover. If XX is compact, is the presheaf of bounded functions on XX a sheaf? It seems so.

view this post on Zulip David Egolf (Apr 01 2024 at 22:59):

If XX is compact, couldn't it still have a non-compact open subset UU? And then we could maybe set up an unbounded function ff defined on UU to show that we don't have a sheaf. (Restrict ff to a bunch of UiU_i where iUi=U\cup_i U_i = U, and where fUif|_{U_i} is bounded for each ii. Then these glue together to ff, which is unbounded. So then, we can't always glue together a bunch of compatible FUiFU_i to make an element of FUFU.)

view this post on Zulip Peva Blanchard (Apr 01 2024 at 23:05):

I'm not sure that a compact set can contain a non-compact open subset. (I'm thinking about the closed unit interval [0,1][0,1]). (edit: oh my brain ...)

view this post on Zulip David Egolf (Apr 01 2024 at 23:07):

My initial thought was that something like (0.4,0.6)(0.4,0.6) would be a non-compact open subset of the topological space [0,1][0,1]. But I am a bit shaky on compactness, so maybe I'm just confused. (I'd need to review this stuff!)

view this post on Zulip Peva Blanchard (Apr 01 2024 at 23:09):

Oh yes you're right!

view this post on Zulip Peva Blanchard (Apr 01 2024 at 23:11):

I focused on the total space, but yes we can reproduce the example on any open subset.

view this post on Zulip Peva Blanchard (Apr 01 2024 at 23:16):

Now, I'm looking for a topological space XX such that the presheaf of bounded functions is indeed a sheaf. From our discussion, it suffices that any open subset of XX be compact, right? I'm wondering what kind of space is that.

view this post on Zulip Peva Blanchard (Apr 01 2024 at 23:17):

One example: any set XX equipped with the trivial topology (\emptyset, XX are the only open sets). Topologically, such a space behaves like a space with one point.

view this post on Zulip Peva Blanchard (Apr 01 2024 at 23:41):

Another example: a finite set XX, with any topology.

view this post on Zulip Peva Blanchard (Apr 01 2024 at 23:42):

In other words: any set XX with a topology admitting a finite number of open sets.

view this post on Zulip David Egolf (Apr 01 2024 at 23:45):

I wonder if there are any examples where XX has an infinite number of open sets.
(Interesting stuff! I need to take a break for today - I have to manage my energy carefully - but of course please feel free to keep posting here.)

view this post on Zulip Peva Blanchard (Apr 01 2024 at 23:50):

Sure! I'll probably continue under another topic, so as not to divert the purpose of yours.

view this post on Zulip John Baez (Apr 01 2024 at 23:56):

Digression: here's another example of a separated presheaf that's not a sheaf, which I just thought of. Take N\mathbb{N} with its usual topology, where all subsets are open, and let FF be the presheaf where F(U)F(U) for any UNU \subset \mathbb{N} is the set of computable partial functions f:Nf: \mathbb{N} \to \mathbb{N} whose domain includes UU.

view this post on Zulip John Baez (Apr 01 2024 at 23:58):

So, simply put, F(U)F(U) consists of all partially defined functions ff from the natural numbers to the natural numbers such that you can write a computer program which halts and spits out f(n)f(n) when nUn \in U.

view this post on Zulip John Baez (Apr 01 2024 at 23:58):

For any finite UU, these are all the functions from UU to N\mathbb{N}.

view this post on Zulip John Baez (Apr 02 2024 at 00:00):

But for U=NU = \mathbb{N} there are lots of functions from UU to N\mathbb{N} that aren't computable.

view this post on Zulip Peva Blanchard (Apr 02 2024 at 00:37):

Mmh, it's not as easy to come up with an explicit example (i.e., a witness of the non-sheafiness of FF).

view this post on Zulip Peva Blanchard (Apr 02 2024 at 00:43):

Here's my attempt.

Let N=iNUi\mathbb{N} = \bigcup_{i \in \mathbb{N}} U_i with Ui={i}U_i = \{i\}.

Define si:UiNs_i : U_i \rightarrow \mathbb{N} as the function that outputs 11 if the ii-th Turing machine halts, and 00 otherwise. All the sis_i's agree on the intersections (the UiU_i's are disjoint).

If FF were a sheaf, then there would be a total computable function on N\mathbb{N} that would solve Turing's halting problem, whence a contradiction.

view this post on Zulip Peva Blanchard (Apr 02 2024 at 00:46):

It's a bit weird, I had to convince myself that sis_i does belong to F(Ui)F(U_i) (still not 100% sure). It seems trivially true since UiU_i is finite. The strangeness comes from the fact that I am invoking the halting problem's oracle to define sis_i.

In a sense, the covering of N\mathbb{N} by the UiU_i acts like an oracle.

view this post on Zulip David Egolf (Apr 02 2024 at 17:01):

John Baez said:

So, simply put, F(U)F(U) consists of all partially defined functions ff from the natural numbers to the natural numbers such that you can write a computer program which halts and spits out f(n)f(n) when nUn \in U.

This sounds cool! But I'm having a hard time wrapping my mind around it. Is the idea that we need a single program that takes in any nn and then produces the corresponding f(n)f(n)? Or are we allowed to have different programs, say one for each nn, to calculate f(n)f(n)?

I'm guessing it's the first - we need a single program that can handle any nn. Then if f:UNf:U \to \mathbb{N} and UU is a singleton {n}\{n\}, given f(n)f(n) we can write a very short program that outputs f(n)f(n) given nn: just output f(n)f(n).

Then if UU is finite, and we know f(n)f(n) for each nUn \in U, we can still write a single program that will run in a finite amount of time, and that outputs f(n)f(n) given nn. We can just create a bunch of if/then statements that check to see if the given input value corresponds to the output value f(n)f(n) as nn varies. Since UU is finite, there will be a finite number of if/then statements that run, and so the run-time will be finite.
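
To make the finite case concrete, here's a minimal Python sketch of the "finite bunch of if/then statements" idea; the particular finite set U and the values f(n) are made up purely for illustration.

```python
# A computable function on a *finite* domain U: just bake the finitely many
# known values f(n) into the program itself. This always halts, because only
# finitely many comparisons are ever made.

# Hypothetical finite domain and values, chosen purely for illustration:
# U = {0, 3, 10}, with f(0) = 7, f(3) = 1, f(10) = 42.
KNOWN_VALUES = {0: 7, 3: 1, 10: 42}

def f(n):
    # A chain of if/then statements, one per element of U.
    if n == 0:
        return 7
    if n == 3:
        return 1
    if n == 10:
        return 42
    raise ValueError("n is outside the domain U")

for n, value in KNOWN_VALUES.items():
    assert f(n) == value
```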

view this post on Zulip David Egolf (Apr 02 2024 at 17:05):

But if UU is infinite, and there's no clever trick to figure out the values of ff quickly, then I was going to say that the approach I outlined above wouldn't always run in a finite amount of time, because we'd need an infinite number of if/then statements. But, for any finite nn, I think we'd only have to run a finite number of if/then statements to look up the appropriate value for f(n)f(n). So it seems like the runtime would be finite for any input nn?

I think I must be confused on something!

view this post on Zulip David Egolf (Apr 02 2024 at 17:08):

Maybe the problem with the program I outline above (in the case where UU is infinite) is that it would need to be infinite in length (even though its runtime for any input would always be finite). That doesn't sound like a legitimate "computer program"!

view this post on Zulip John Baez (Apr 02 2024 at 17:28):

David Egolf said:

John Baez said:

So, simply put, F(U)F(U) consists of all partially defined functions ff from the natural numbers to the natural numbers such that you can write a computer program which halts and spits out f(n)f(n) when nUn \in U.

This sounds cool! But I'm having a hard time wrapping my mind around it. Is the idea that we need a single program that takes in any nn and then produces the corresponding f(n)f(n)? Or are we allowed to have different programs, say one for each nn, to calculate f(n)f(n)?

You need one program that takes in any nUn \in U and halts after printing out f(n)f(n).

The other option, one program for each nn, would say that every function f:UNf: U \to \mathbb{N} is computable. For any n,mNn, m \in \mathbb{N} you can write a program which prints mm when you input nn.

This fact is exactly the reason why computable functions don't form a sheaf!

view this post on Zulip John Baez (Apr 02 2024 at 17:29):

I can take N\mathbb{N} and cover it with singletons. The restriction of any function f:NNf: \mathbb{N} \to \mathbb{N} to any singleton is computable, even if ff is not computable.

view this post on Zulip John Baez (Apr 02 2024 at 17:31):

In simple rough terms: we're failing to get a sheaf because you can't always glue together infinitely many programs into one program.

Or even more tersely: computability of functions f:NNf: \mathbb{N} \to \mathbb{N} is not a local property.

view this post on Zulip David Tanzer (Apr 02 2024 at 21:49):

From math overflow, another example of presheaves that are not sheaves: presheaves of constant functions

view this post on Zulip David Tanzer (Apr 02 2024 at 21:57):

i.e., generally can't glue compatible local constant functions into a global constant function. For example, if U and V are disjoint nonempty open sets, the constant function 0 on U and the constant function 1 on V agree (vacuously) on the empty overlap, but no constant function on their union restricts to both.

view this post on Zulip John Baez (Apr 02 2024 at 23:09):

Nice! We were talking about examples of separated presheaves that are not sheaves, and the presheaf of constant functions is actually a separated presheaf.

Reminder: a presheaf is a sheaf if given sections sαs_\alpha on open sets UαU_\alpha covering UU which agree when restricted to the overlaps UαUβU_\alpha \cap U_\beta, there exists a unique section ss on UU that restricts to each of the sαs_\alpha. If we have uniqueness but perhaps not existence, then our presheaf is called separated. As David showed a while back, the presheaf of bounded real-valued functions on a space is separated but usually not a sheaf.

view this post on Zulip John Baez (Apr 02 2024 at 23:11):

Btw, one reason this concept is important is that there's a trick called 'sheafification' that turns a presheaf into a sheaf. One way to do it involves doing a certain maneuver twice. The first pass turns the presheaf into a separated presheaf, and the second pass turns it into a sheaf! It's kind of amazing.

view this post on Zulip John Baez (Apr 02 2024 at 23:21):

It's probably too technical to get into now, but in case anyone cares, this maneuver is called the "plus construction", and you can read about it on the nLab.

view this post on Zulip David Tanzer (Apr 03 2024 at 07:34):

Cool, thanks for the reminder about the separated aspect. All these examples put a good spotlight on the glueing condition. Now that we've solidly established the definition of a sheaf, which feels rather substantive, I will somewhat naively now ask: what are a couple of cool things that we can do with sheaves, in at least a semi-applied sense? I'm sure there are many; just fishing around here for some favorites.

view this post on Zulip David Tanzer (Apr 03 2024 at 07:55):

p.s. I know that we're headed towards the topos side of town; in this question I'm fishing around for some good immediate / semi-concrete applications. For example, they're somehow going to give us insight into the structure of manifolds? Or stuff in computer science, ...

view this post on Zulip David Tanzer (Apr 03 2024 at 07:59):

(If this would go beyond a few high level points, it could be spun off into a separate topic)

view this post on Zulip Peva Blanchard (Apr 03 2024 at 08:40):

Here is one example of application in network dynamic theory: Opinion dynamics on discourse sheaves.

view this post on Zulip David Egolf (Apr 03 2024 at 16:19):

This paper, which I want to read someday, also comes to mind: Sheaves are the canonical data structure for sensor integration

A sensor integration framework should be sufficiently general to accurately represent all information sources, and also be able to summarize information in a faithful way that emphasizes important, actionable information. Few approaches adequately address these two discordant requirements. The purpose of this expository paper is to explain why sheaves are the canonical data structure for sensor integration and how the mathematics of sheaves satisfies our two requirements. We outline some of the powerful inferential tools that are not available to other representational frameworks.

view this post on Zulip David Egolf (Apr 03 2024 at 16:22):

I want to start thinking a little bit about the next puzzle. Here it is again:

Show that fFf_\ast F is a presheaf. That is, explain how we can restrict an element of (fF)(V)(f_\ast F)(V) to any open set contained in VV, and check that we get a presheaf this way.

Here, fFf_\ast F is defined as: (fF)(V)=F(f1V)(f_\ast F)(V) = F(f^{-1} V) for each open subset VV of a topological space YY. Note that f:XYf: X \to Y is a continuous function and F:O(X)opSetF:{\mathcal{O}(X)}^{\mathrm{op}} \to \mathsf{Set} is a presheaf on XX.

We also have f1V={xX:  f(x)V}Xf^{-1} V = \{x \in X :\; f(x) \in V \} \subseteq X. Note that f1Vf^{-1} V is open because VV is open and ff is continuous.

Roughly, our goal here is to make a presheaf on YY given a continuous function f:XYf:X \to Y and a presheaf F:O(X)opSetF:{\mathcal{O}(X)}^{\mathrm{op}} \to \mathsf{Set} on XX.

view this post on Zulip David Egolf (Apr 03 2024 at 16:23):

I wonder if a continuous function f:XYf: X \to Y induces a functor f:O(Y)opO(X)opf': {\mathcal{O}(Y)}^{\mathrm{op}} \to {\mathcal{O}(X)}^{\mathrm{op}}. If it does, then we could form fFf_\ast F as Ff:O(Y)opSetF \circ f': {\mathcal{O}(Y)}^{\mathrm{op}} \to \mathsf{Set}.

view this post on Zulip David Egolf (Apr 03 2024 at 16:29):

Let's see. If f:XYf: X \to Y is a continuous function, let's try to define a functor f:O(Y)opO(X)opf': {\mathcal{O}(Y)}^{\mathrm{op}} \to {\mathcal{O}(X)}^{\mathrm{op}} as follows: on objects, f' sends an open set V of Y to the open set f^{-1}(V) of X; on morphisms, since V' \subseteq V implies f^{-1}(V') \subseteq f^{-1}(V), f' sends the unique morphism V \to V' in \mathcal{O}(Y)^{\mathrm{op}} to the unique morphism f^{-1}(V) \to f^{-1}(V') in \mathcal{O}(X)^{\mathrm{op}}.

view this post on Zulip David Egolf (Apr 03 2024 at 16:32):

Our proposed functor f:O(Y)opO(X)opf': {\mathcal{O}(Y)}^{\mathrm{op}} \to {\mathcal{O}(X)}^{\mathrm{op}} automatically respects composition, because all diagrams commute in a poset. And if 1V1_V is the identity morphism for VO(Y)opV \in {\mathcal{O}(Y)}^{\mathrm{op}}, then this gets mapped to the identity morphism on f1(V)f^{-1}(V), as desired.

So, I think that ff' is indeed a functor! (Hopefully I didn't miss something!)

It seems that a continuous function f:XYf: X \to Y does in fact induce a functor f:O(Y)opO(X)opf': {\mathcal{O}(Y)}^{\mathrm{op}} \to {\mathcal{O}(X)}^{\mathrm{op}}.

view this post on Zulip David Egolf (Apr 03 2024 at 16:35):

If that is true, then I think that fFf_* F is just Ff:O(Y)opSetF \circ f':{\mathcal{O}(Y)}^{\mathrm{op}} \to \mathsf{Set}. For an open set VO(Y)opV \in {\mathcal{O}(Y)}^{\mathrm{op}}, it spits out F(f(V))=F(f1(V))F(f'(V)) = F(f^{-1}(V)), which is what fFf_*F is supposed to do. And it is indeed a functor, because composing two functors yields a functor.
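
To see the definition in action on a toy example, here's a minimal Python sketch; the spaces, the map f, and the choice of presheaf (finite discrete spaces, with F the presheaf of {0,1}-valued functions) are all made up for illustration.

```python
from itertools import product

# Finite discrete spaces: every subset is open, so any f is continuous.
X = frozenset({1, 2})
Y = frozenset({'a', 'b'})
f = {1: 'a', 2: 'a'}                      # a (continuous) map f : X -> Y

def opens(space):
    """All subsets of a finite discrete space, i.e. all open sets."""
    elems = sorted(space, key=repr)
    return [frozenset(e for e, keep in zip(elems, bits) if keep)
            for bits in product([0, 1], repeat=len(elems))]

def preimage(V):
    return frozenset(x for x in X if f[x] in V)

# A presheaf F on X: F(U) is the set of all {0,1}-valued functions on U.
def F(U):
    elems = sorted(U)
    return [dict(zip(elems, bits)) for bits in product([0, 1], repeat=len(elems))]

# The direct image presheaf: (f_* F)(V) = F(f^{-1} V).
def direct_image_F(V):
    return F(preimage(V))

for V in opens(Y):
    print(sorted(V, key=repr), "->", direct_image_F(V))
```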

view this post on Zulip John Baez (Apr 03 2024 at 16:42):

David Tanzer said:

Cool, thanks for the reminder about the separated aspect. All these examples put a good spotlight on the glueing condition. Now that we've solidly established the definition of a sheaf, which feels rather substantive, I'll ask: what are a couple of cool things that we can do with sheaves, in at least a semi-applied sense? I'm sure there are many; just fishing around here for some favorites.

My own favorite applications of sheaves are the ones that made people invent sheaves in the first place - applications to algebraic geometry and topology. I don't know how deeply we want to get into those here. But it's not surprising that some of the most exciting applications of a concept are the ones that made people take the trouble to develop it in the first place!

Briefly, since a bounded analytic function must be constant, there are no everywhere defined analytic functions on the Riemann sphere except constants - all the interesting ones have poles. This issue affects all of complex analysis and algebraic geometry. This puts pressure on us to either accept 'partially defined' functions as full-fledged mathematical objects or work with sheaves of functions, e.g. work with lots of different open sets UU in the Riemann sphere and let F(U)F(U) be the set of analytic functions everywhere defined on UU.

Mathematicians took the second course, because partially defined functions where you haven't specified the domain of definition are a pain to work with. So nowadays all of algebraic geometry (subsuming chunks of complex analysis, and much much more) is founded on sheaves. In this subject one can do a lot of amazing things with sheaves. Later on these tricks expanded to algebraic topology. And this is how a typical math grad student (like me) is likely to encounter sheaves.

Needless to say, I'm happy to get into more detail about what we actually do with sheaves. But it's quite extensive: the proof of Fermat's Last Theorem and pretty much all the other big results in algebraic geometry rely heavily on sheaves.

view this post on Zulip John Baez (Apr 03 2024 at 16:47):

Digressing a bit, I found this video pretty amusing, even though it's serious:

This is near the start of a series of over a hundred videos that works through the proof of Fermat's Last Theorem step by step.

view this post on Zulip John Baez (Apr 03 2024 at 16:53):

But this list of prerequisites is very intimidating. Sheaves have a lot of exciting applications in pure math that are infinitely easier to explain.

view this post on Zulip David Tanzer (Apr 03 2024 at 19:14):

I created a new topic to continue the discussion of applications of sheaves #learning: reading & references > Applications of sheaves

view this post on Zulip David Egolf (Apr 04 2024 at 16:51):

Here's the next puzzle in the first blog post:

Show that taking direct images gives a functor from the category of presheaves on XX to the category of presheaves on YY.

In the previous puzzle, we showed that the "direct image" of a presheaf FF on XX is a presheaf fFf_\ast F on YY. As a first step in showing that this gives us a functor, we still need to figure out how our direct image functor D:O(X)^O(Y)^D: \widehat{\mathcal{O}(X)} \to \widehat{\mathcal{O}(Y)} acts on morphisms between presheaves (which are natural transformations).

For my easy reference, I'll note that O(X)^=[O(X)op,Set]\widehat{\mathcal{O}(X)} = [{\mathcal{O}(X)}^{\mathrm{op}}, \mathsf{Set}] and O(Y)^=[O(Y)op,Set]\widehat{\mathcal{O}(Y)} = [{\mathcal{O}(Y)}^{\mathrm{op}}, \mathsf{Set}].

view this post on Zulip David Egolf (Apr 04 2024 at 16:56):

This one is going to take me some thought. I don't have any intuition for natural transformations between presheaves yet. I think what I'll do to start with is draw a naturality square describing part of a natural transformation between two presheaves. Hopefully that will help me find some intuition!

view this post on Zulip David Egolf (Apr 04 2024 at 17:10):

If UUU' \subseteq U, we have a (unique) morphism from UU to UU' in O(X)op{\mathcal{O}(X)}^{\mathrm{op}}. Let F,G:O(X)opSetF,G: {\mathcal{O}(X)}^{\mathrm{op}} \to \mathsf{Set} be presheaves on XX. Then, to have a natural transformation α:FG\alpha: F \to G, we need this square to commute for all pairs (U,U)(U, U') where UU and UU' are open sets of XX such that UUU' \subseteq U:

naturality square

view this post on Zulip David Egolf (Apr 04 2024 at 17:16):

Intuitively, for any xFUx \in FU, the natural transformation component αU:FUGU\alpha_U: FU \to GU tells us how to view that FF-data on UU as some GG-data on UU. Further, this process needs to respect restriction.

So we can expect there to be a natural transformation, for example, from the presheaf of bounded and continuous functions on XX to the presheaf of continuous functions on XX. In this case, each αU\alpha_U is an inclusion function.

But I wouldn't expect there to be a natural transformation from the presheaf of continuous functions on XX to the presheaf of continuous and bounded functions on XX. That's because I can't think of a nice way of converting an arbitrary continuous function to a corresponding continuous and bounded function.

view this post on Zulip Peva Blanchard (Apr 04 2024 at 17:18):

I am not so sure about the latter. We can define αU\alpha_{U} as the map that sends every element of FUFU to the zero function on UU, which is certainly continuous and bounded.

view this post on Zulip John Baez (Apr 04 2024 at 17:21):

That sounds natural.

Puzzle. Is there a natural transformation from the presheaf of continuous functions to the presheaf of continuous and bounded functions that sends some functions to non-constant functions?

view this post on Zulip Peva Blanchard (Apr 04 2024 at 17:29):

puzzle answer: postcompose with \arctan. For each open set U, define \alpha_U(g) = \arctan \circ g for a continuous g: U \to \mathbb{R}; this is continuous and bounded, and since it is done pointwise it commutes with restriction, so the naturality squares commute. I think this is almost an answer.

view this post on Zulip John Baez (Apr 04 2024 at 17:38):

I don't think that's "almost" the answer. I think it's exactly the answer! If there are any non-constant continuous functions on your space, your natural transformation will convert all continuous functions to bounded continuous functions, and send some to non-constant continuous functions.

view this post on Zulip David Egolf (Apr 04 2024 at 17:50):

John Baez said:

Puzzle. Is there a natural transformation from the presheaf of continuous functions to the presheaf of continuous and bounded functions that sends some functions to non-constant functions?

The first idea that comes to mind is: send each continuous function that is already bounded to itself, and send each unbounded continuous function to some fixed bounded function (say, the zero function).

I think this doesn't work though. That's because a restriction of an unbounded function might be bounded. If that happens, then the naturality square doesn't commute if one tries to follow the approach I described above.

view this post on Zulip David Egolf (Apr 04 2024 at 17:54):

Peeking at @Peva Blanchard 's answer... Huh, I did not expect arctan to show up! I guess its virtue is that it takes in any input, and squishes it down to a fixed bounded range. Further, it does this without sending two inputs to the same output. And you can "squish" a function down and then restrict it, or you can restrict it first and then squish it down, and you'll get the same answer. So our naturality square will commute!

view this post on Zulip John Baez (Apr 04 2024 at 17:54):

Anything defined using cases is going to have trouble being natural. It sometimes works - but when I try to do something natural, I avoid methods that involve different cases, because the spirit of naturality is to do something that works uniformly for all cases.

view this post on Zulip John Baez (Apr 04 2024 at 17:56):

What Peva did is postcompose with a bounded continuous function; this turns any continuous function into a bounded continuous function. People who take real analysis use arctan as their go-to guy for this purpose, because this is also 1-1, so postcomposing with it doesn't lose any information, but they could equally well use tanh or lots of other things.

view this post on Zulip John Baez (Apr 04 2024 at 17:56):

For this puzzle postcomposing with sin or cos would work fine too.

view this post on Zulip David Egolf (Apr 04 2024 at 17:58):

That's actually a really cool point! If f:XYf: X \to Y is any continuous function, and g:YRg: Y \to \mathbb{R} is continuous and bounded, then gfg \circ f is also continuous and bounded! And if this post-composition doesn't lose information (which I think corresponds to gg being a monomorphism), then we've managed to produce a continuous bounded function that "still remembers" the original unbounded function that it came from!
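
Here's a quick numerical sanity check of this, sketched in Python. Sections are modelled as dicts over a few sample points, so restriction really does discard points; the function, the open sets, and the sample points are all illustrative choices.

```python
import math

# "Postcompose with arctan" commutes with restriction: squash then restrict
# gives the same section as restrict then squash.

U = [-2.0, -1.0, 0.5, 1.5, 3.0]          # sample points standing in for an open set U
U_prime = [0.5, 1.5]                      # sample points for a smaller open set U' contained in U

g_on_U = {x: x ** 3 - 5 * x for x in U}   # the function x^3 - 5x, sampled at the points of U

def alpha(section):
    """Postcompose with arctan, pointwise."""
    return {x: math.atan(v) for x, v in section.items()}

def restrict(section, smaller):
    """Restrict a section to the sample points of a smaller open set."""
    return {x: section[x] for x in smaller}

# Naturality: squash then restrict == restrict then squash.
assert restrict(alpha(g_on_U), U_prime) == alpha(restrict(g_on_U, U_prime))

# And the squashed values are bounded by pi/2.
assert all(abs(v) < math.pi / 2 for v in alpha(g_on_U).values())
```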

view this post on Zulip Peva Blanchard (Apr 04 2024 at 17:59):

This way of "applying the same procedure pointwise" is interesting. I think it relates to the way one checks the sheaf condition.

view this post on Zulip Peva Blanchard (Apr 04 2024 at 18:02):

(I'm trying to make this statement clearer)

view this post on Zulip John Baez (Apr 04 2024 at 18:03):

It may already show up from the presheaf condition.

To get a natural transformation between presheaves what you do to sections needs to be "local": you can restrict a section to a smaller open set and do the operation, or do the operation and then restrict, and these need to agree.

The most local of local operations are the "pointwise" ones.

But notice that there are other local operations: for example differentiation gives a map from the presheaf of smooth real-valued functions on the real line to itself.

view this post on Zulip John Baez (Apr 04 2024 at 18:04):

(I'm saying "presheaf" a lot here. Each time I could have said "sheaf", but I don't think I'm using the sheaf condition in what I'm saying.)

view this post on Zulip John Baez (Apr 04 2024 at 18:05):

(I should add that a map between sheaves is just defined to be a map between presheaves that happen to be sheaves.)

view this post on Zulip Peva Blanchard (Apr 04 2024 at 18:09):

I have the feeling that, in the case of sheaves, a natural transformation is uniquely determined by what it does "pointwise" (more precisely, on the germs).

view this post on Zulip Peva Blanchard (Apr 04 2024 at 18:14):

Something like: the set of natural transformations from an AA-valued sheaf FF to a BB-valued sheaf GG is equivalently described by a sheaf with values in Set(A,B)Set(A, B) (the functions from AA to BB).

view this post on Zulip John Baez (Apr 04 2024 at 18:14):

Analysts would never say differentiation is done "pointwise" - so yes, I think the correct word should be something like "germwise".

view this post on Zulip John Baez (Apr 04 2024 at 18:15):

This could become a theorem once we (= David) officially study germs; then we could show a map between sheaves is determined by what it does to germs.

view this post on Zulip David Egolf (Apr 05 2024 at 17:13):

Building on the discussion above, I think I can now start to work out how we can get a natural transformation between two direct image functors fF:O(Y)opSetf_*F: \mathcal{O}(Y)^{\mathrm{op}} \to \mathsf{Set} and fG:O(Y)opSetf_*G: \mathcal{O}(Y)^{\mathrm{op}} \to \mathsf{Set}. If we have a natural transformation α:FG\alpha: F \to G, then for each open subset VV of YY, we need to figure out how to compute fGf_*G-data on VV from fFf_*F-data on VV. This will serve as the VV-th component of a natural transformation from fFf_*F to fGf_*G.

So, let's assume we have the two sets fF(V)f_*F(V) and fG(V)f_*G(V). We're looking for a function βV:fF(V)fG(V)\beta_V: f_*F(V) \to f_*G(V). Let xfF(V)x \in f_*F(V). Our goal is to figure out βV(x)\beta_V(x).

Since xfF(V)x \in f_*F(V), xF(f1(V))x \in F(f^{-1}(V)). From this, we need to get some element βV(x)fG(V)=G(f1(V))\beta_V(x) \in f_*G(V) = G(f^{-1}(V)). To do this, we can use our natural transformation α:FG\alpha: F \to G. Since αf1V:F(f1(V))G(f1(V))\alpha_{f^{-1}V}: F(f^{-1}(V)) \to G(f^{-1}(V)), we can just provide xx to this function and get out βV(x)\beta_V(x).

We've arrived at the following idea: Let our direct image functor D:O(X)^O(Y)^D:\widehat{\mathcal{O}(X)} \to \widehat{\mathcal{O}(Y)} send a natural transformation α:FG\alpha: F \to G to the natural transformation D(α):fFfGD(\alpha):f_*F \to f_*G having VV-th component D(α)V=αf1(V)D(\alpha)_V = \alpha_{f^{-1}(V)}

view this post on Zulip David Egolf (Apr 05 2024 at 17:29):

Next, let's check that D(α):fFfGD(\alpha): f_*F \to f_*G really is a natural transformation. Evaluating these functors at some morphism UVU \to V in O(Y)op{\mathcal{O}(Y)}^{\mathrm{op}}, we get this square:

square

This square can be rewritten as:
square 2

And this square diagram commutes, because α\alpha is a natural transformation. We conclude that any naturality square for D(α)D(\alpha) commutes, and hence D(α)D(\alpha) is a natural transformation. So, DD is sending natural transformations to natural transformations, as it should.

view this post on Zulip David Egolf (Apr 05 2024 at 17:51):

It only remains to show that D:O(X)^O(Y)^D:\widehat{\mathcal{O}(X)} \to \widehat{\mathcal{O}(Y)} is a functor!

First, we need to check that D(1F)=1D(F)D(1_F) = 1_{D(F)} for any identity morphism 1F:FF1_F: F \to F in O(X)^\widehat{\mathcal{O}(X)}. By our definition of DD, we have D(1F)V=(1F)f1(V)D(1_F)_V = (1_F)_{f^{-1}(V)}. Since each component of 1F1_F is an identity function, (1F)f1(V)(1_F)_{f^{-1}(V)} is the identity function on F(f1V)F(f^{-1}V). So, we see that D(1F)D(1_F) is the identity natural transformation from fFf_*F to itself. (Indeed, the identity natural transformation from fFf_*F to itself has as its VV-th component the identity function on F(f1V)F(f^{-1}V)).

view this post on Zulip David Egolf (Apr 05 2024 at 18:01):

Lastly, we need to check that DD respects composition. That is, we need to show that D(αβ)=D(α)D(β)D(\alpha \circ \beta) = D(\alpha) \circ D(\beta) for two composable morphisms α,β\alpha, \beta. To show that two natural transformations are equal, it suffices to show that all of their components are equal. So, we wish to show that D(αβ)V=D(α)VD(β)VD(\alpha \circ \beta)_V = D(\alpha)_V \circ D(\beta)_V, for any VO(Y)opV \in \mathcal{O}(Y)^{\mathrm{op}}.

By definition of DD, we have D(αβ)V=(αβ)f1VD(\alpha \circ \beta)_V = (\alpha \circ \beta)_{f^{-1}V}. We also have D(α)V=αf1VD(\alpha)_V = \alpha_{f^{-1}V} and D(β)V=βf1VD(\beta)_V = \beta_{f^{-1}V}. So, D(α)VD(β)V=αf1Vβf1VD(\alpha)_V \circ D(\beta)_V = \alpha_{f^{-1}V} \circ \beta_{f^{-1}V}. By definition of vertical composition of natural transformations, we have that αf1Vβf1V=(αβ)f1V\alpha_{f^{-1}V} \circ \beta_{f^{-1}V} = (\alpha \circ \beta)_{f^{-1}V}.

We conclude that DD respects composition! And now we can conclude that taking direct images using a continuous function f:XYf: X \to Y yields a functor D:O(X)^O(Y)^D:\widehat{\mathcal{O}(X)} \to \widehat{\mathcal{O}(Y)}!

view this post on Zulip David Egolf (Apr 05 2024 at 18:04):

Whew, that felt like a lot. I suppose this sort of thing gets quicker with practice! But I wonder if there is a faster (more abstract?) way to work this out, as well.

view this post on Zulip David Tanzer (Apr 05 2024 at 18:47):

It would be cool if there were a higher level / more systematic way of proving such things. A proof assistant? I haven't used them. But that wouldn't seem to help with the basic understanding. It's hard to see a way around needing to unpack definitions and verify them in detail. I appreciate the clarity, detail and completeness of your posts here!

view this post on Zulip Peva Blanchard (Apr 05 2024 at 21:30):

I think there is a higher level way of doing this, but, at some point, we still need to work out the details.

Here is my attempt.

We have a continuous function f:XYf : X \rightarrow Y. This function ff is equivalently described as a functor f:O(Y)O(X)f^{\star} :O(Y) \rightarrow O(X), viewing the poset of open sets as a category.

Consider the functor H:CatCatopH : Cat \rightarrow Cat^{op} given by the composition

Cat_opCat[_,Set]Catop Cat \xrightarrow{\_^{op}} Cat \xrightarrow{[\_, Set]} Cat^{op}

of the functor "taking the opposite category", and the functor "hom-ing into Set".

In particular, HH maps the functor ff^{\star} to a functor:

Hf:O(X)^O(Y)^H f^{\star} : \widehat{O(X)} \rightarrow \widehat{O(Y)}

It remains to show that HfH f^{\star} is indeed the direct image functor. (I'll skip that part for now)

view this post on Zulip Peva Blanchard (Apr 05 2024 at 21:39):

The tedious details are still present: I haven't proved that the functor "hom-ing into SetSet" is well-defined and a functor. This is, I think, proven exactly as @David Egolf did.

view this post on Zulip John Baez (Apr 05 2024 at 22:36):

David Egolf said:

Whew, that felt like a lot.

For what it's worth, it doesn't feel like a lot to me. I think if this were part of a book it would be less than a page. Some of the work is coming up with the ideas: that's the fun part. But a lot of the work in writing these arguments is just formatting things in LaTeX. I'm very glad you're doing it, because you're helping other people. But it's less work on paper.

Once you do this kind of argument for a few years, the standard moves become so ingrained that they're almost automatic... except when they're not, meaning that some brand new move is required.

view this post on Zulip John Baez (Apr 05 2024 at 22:43):

David Tanzer said:

It would be cool if there were a higher level / more systematic way of proving such things. A proof assistant?

I'm the opposite: what I really want is some software that will go out to dinner and talk to my friends so I can stay home and prove theorems.

view this post on Zulip John Baez (Apr 05 2024 at 23:10):

As for "systematic", I think @David Egolf's approach to this question was perfectly systematic. To prove P implies Q, he expanded out P using definitions to get a short list of things to check, and then checked each of these using Q, which he expanded out just enough to get this done.

view this post on Zulip John Baez (Apr 05 2024 at 23:14):

Category theory is full of proofs like this; many mathematicians look down on it because it's not tricky enough, but to me that's a virtue. The main hard part is keeping track of the nested layers of structure involved... and a main reason for doing lots of proofs like this is to get good at keeping a lot of structures in mind.

view this post on Zulip John Baez (Apr 05 2024 at 23:27):

The number theorist Serge Lang has an exercise in his book Algebra that goes like this:

Take any book on homological algebra, and prove all the theorems without looking at the proofs given in that book.

Homological algebra was invented by Eilenberg-Mac Lane. General category theory (i. e. the theory of arrow-theoretic results) is generally known as abstract nonsense (the terminology is due to Steenrod).

view this post on Zulip John Baez (Apr 05 2024 at 23:30):

He is of course joking to some extent, and definitely showing off. But the hard part in homological algebra - or other kinds of category theory - is developing an intuition for the structures involved so you can guess what's true. The proofs of theorems are often easy in comparison.

(But sometimes they're not - there are some really deep results too.)

view this post on Zulip David Egolf (Apr 07 2024 at 15:41):

We have now arrived to the final puzzle in the first blog post! For context, recall that f:XYf:X \to Y is a continuous function, that FF is a presheaf on XX, and fFf_*F is the corresponding direct image presheaf on YY. Here's the puzzle:

Puzzle. Show that if FF is a sheaf on XX, its direct image fFf_*F is a sheaf on YY.

We saw above that fFf_*F is a presheaf. So, we only need to check that we can "glue together" things appropriately: if we start with a bunch of sifF(Vi)s_i \in f_*F(V_i) that agree on overlaps, so that siViVj=sjViVj{s_i}|_{V_i \cap V_j}={s_j}|_{V_i \cap V_j} for all ii and jj, then there is always a unique sfF(V)s \in f_*F(V) that restricts to sis_i on ViV_i for each ii. Here, each ViV_i is an open subset of YY and V=iViV = \cup_i V_i.

view this post on Zulip David Egolf (Apr 07 2024 at 16:32):

Let's start out with a bunch of sifF(Vi)s_i \in f_*F(V_i), which agree on overlaps and where iVi=V\cup_i V_i = V. We want to show there is a unique sfF(V)=F(f1V)s \in f_*F(V) = F(f^{-1}V) that restricts to each sis_i on ViV_i.

Note that siF(f1Vi)s_i \in F(f^{-1}V_i) for each ii, by definition of fFf_*F. I want to use the fact that FF is a sheaf to glue together these sis_i to get some sF(f1V)=fF(V)s \in F(f^{-1}V) = f_*F(V).

view this post on Zulip David Egolf (Apr 07 2024 at 16:43):

First, let's show that if1Vi=f1V\cup_i f^{-1}V_i = f^{-1}V, making use of the fact that iVi=V\cup_i V_i = V.

Let xf1Vx \in f^{-1}V. That means that there is some yVy \in V so that f(x)=yf(x) = y. Since V=iViV = \cup_i V_i, that means that yViy \in V_i for some ii. Thus, xf1Vix \in f^{-1}V_i for some ii. Hence, xif1Vix \in \cup_i f^{-1}V_i. We conclude that f1Vif1Vif^{-1}V \subseteq \cup_i f^{-1}V_i.

Next, let xif1Vix \in \cup_i f^{-1} V_i. That means that there is some ii so that xf1Vix \in f^{-1}V_i. Thus, there is some yViy \in V_i so that f(x)=yf(x)=y. Since iVi=V\cup_i V_i = V, we know that ViVV_i \subseteq V. Hence f(x)Vf(x) \in V and thus xf1Vx \in f^{-1} V. Therefore, if1Vif1V\cup_i f^{-1} V_i \subseteq f^{-1} V.

We conclude that if1Vi=f1V\cup_i f^{-1} V_i = f^{-1} V.

view this post on Zulip David Egolf (Apr 07 2024 at 17:34):

The next order of business is to talk about "agreeing on overlaps". With respect to fFf_*F, we know that siViVj=sjViVj{s_i}|_{V_i \cap V_j} = {s_j}|_{V_i \cap V_j} for any ii and jj. A particular element sis_i of fF(Vi)f_*F(V_i) is restricted to ViVjV_i \cap V_j as follows: note that sis_i is an element of F(f1Vi)F(f^{-1}V_i) and then restrict it (using the fact that FF is a presheaf, and so provides a notion of restriction) using FF to an element of F(f1(ViVj))=fF(ViVj)F(f^{-1}(V_i \cap V_j)) = f_*F(V_i \cap V_j).

So, if siViVj=sjViVj{s_i}|_{V_i \cap V_j} = {s_j}|_{V_i \cap V_j} with respect to fFf_*F, then this means that restricting sis_i (using FF) from an element of F(f1Vi)F(f^{-1}V_i) to an element of F(f1(ViVj))F(f^{-1}(V_i \cap V_j)) yields the same result as restricting sjs_j (using FF) from an element of F(f1Vj)F(f^{-1}V_j) to an element of F(f1(ViVj))F(f^{-1}(V_i \cap V_j)).

Now, we'd like to show that if sifF(Vi)s_i \in f_*F(V_i) and sjfF(Vj)s_j \in f_*F(V_j) agree on overlaps with respect to fFf_*F, then they agree on overlaps with respect to FF, where we view sis_i as an element of F(f1Vi)F(f^{-1}V_i) and sjs_j as an element of F(f1Vj)F(f^{-1}V_j). To show they agree on overlaps with respect to FF, we need to show that restricting sis_i from an element of F(f1Vi)F(f^{-1}V_i) to an element of F(f1Vif1Vj)F(f^{-1}V_i \cap f^{-1}V_j) yields the same result as restricting sjs_j from an element of F(f1Vj)F(f^{-1}V_j) to an element of F(f1Vif1Vj)F(f^{-1}V_i \cap f^{-1}V_j).

By the above discussion, this follows provided that F(f1Vif1Vj)=F(f1(ViVj))F(f^{-1}V_i \cap f^{-1}V_j) = F(f^{-1}(V_i \cap V_j)).

view this post on Zulip David Egolf (Apr 07 2024 at 17:37):

We now show that F(f1Vif1Vj)=F(f1(ViVj))F(f^{-1}V_i \cap f^{-1}V_j) = F(f^{-1}(V_i \cap V_j)). It suffices to show that f1Vif1Vj=f1(ViVj)f^{-1}V_i \cap f^{-1}V_j = f^{-1}(V_i \cap V_j).

Let xf1(ViVj)x \in f^{-1}(V_i \cap V_j). That means there is some yViVjy \in V_i \cap V_j so that f(x)=yf(x) = y. Since yViVjy \in V_i \cap V_j, we know that yViy \in V_i and yVjy \in V_j. Hence, xf1Vix \in f^{-1}V_i and xf1Vjx \in f^{-1}V_j. Thus, xf1Vif1Vjx \in f^{-1}V_i \cap f^{-1}V_j, and so f1(ViVj)f1Vif1Vjf^{-1}(V_i \cap V_j) \subseteq f^{-1}V_i \cap f^{-1}V_j.

Next, let xf1Vif1Vjx \in f^{-1}V_i \cap f^{-1}V_j. That means that f(x)Vif(x) \in V_i and f(x)Vjf(x) \in V_j. Hence f(x)ViVjf(x) \in V_i \cap V_j and therefore xf1(ViVj)x \in f^{-1}(V_i \cap V_j). Therefore, f1Vif1Vjf1(ViVj)f^{-1}V_i \cap f^{-1}V_j \subseteq f^{-1}(V_i \cap V_j).

We conclude that f1(ViVj)=f1Vif1Vjf^{-1}(V_i \cap V_j) = f^{-1}V_i \cap f^{-1}V_j, and so F(f1Vif1Vj)=F(f1(ViVj))F(f^{-1}V_i \cap f^{-1}V_j) = F(f^{-1}(V_i \cap V_j)).

view this post on Zulip David Egolf (Apr 07 2024 at 17:40):

From the above discussion, if iVi=V\cup_i V_i = V (where each ViV_i is an open subset of YY) and we have a bunch of sifF(Vi)s_i \in f_*F(V_i) (as ii varies) that agree on overlaps with respect to fFf_*F, then we have: the open sets f^{-1}V_i cover f^{-1}V, and the s_i, viewed as elements of F(f^{-1}V_i), agree on the overlaps f^{-1}V_i \cap f^{-1}V_j with respect to FF.

Since FF is a sheaf, there is then a unique sF(f1V)=fF(V)s \in F(f^{-1}V) = f_*F(V) that restricts (using FF) to sis_i on each f1Vif^{-1}V_i. We hope that this ss restricts to each sis_i on ViV_i with respect to fFf_*F. This is true, because restricting ss to an element of fF(Vi)f_*F(V_i) with respect to fFf_*F is done by restricting ss to an element of F(f1Vi)F(f^{-1}V_i) with respect to FF, and we know this yields sis_i (by definition of ss).

view this post on Zulip David Egolf (Apr 07 2024 at 17:46):

So, I think we have managed to show there is at least one "gluing together" of our sifF(Vi)s_i \in f_*F(V_i) to get a sfF(V)s \in f_*F(V) that restricts to sis_i on each ViV_i. It remains to show that there is only one way to do this, so that ss is the unique "gluing" of our sis_i.

view this post on Zulip David Egolf (Apr 07 2024 at 17:54):

Let's imagine we've got some sfF(V)s' \in f_*F(V) that restricts (with respect to fFf_*F) to sis_i on ViV_i, for each ii. That means it restricts from sF(f1V)s' \in F(f^{-1}V) to siF(f1Vi)s_i \in F(f^{-1}V_i) (with respect to FF) for each ii. That is, ss' is a valid "gluing" of all the siF(f1Vi)s_i \in F(f^{-1}V_i). Since FF is a sheaf, there is only one such gluing, namely ss. Therefore, s=ss'=s.

Consequently, there is exactly one way to "glue together" our sifF(Vi)s_i \in f_*F(V_i) to get a sfF(V)s \in f_*F(V). We conclude that fFf_*F is indeed a sheaf!

view this post on Zulip John Baez (Apr 07 2024 at 18:11):

Great! Your proof looks perfect!

view this post on Zulip John Baez (Apr 07 2024 at 18:13):

The only change I might make is to pull out a few facts as "lemmas", since they don't really involve presheaves per se: they are properties of the inverse image f1(V)f^{-1}(V) of a subset VYV \subset Y along a function f:XYf: X \to Y.

Of course the inverse image of an open subset along a continuous map is open, but these properties are even more fundamental: they work for any subset and any map.

view this post on Zulip John Baez (Apr 07 2024 at 18:16):

So suppose f:XYf: X \to Y is any function.

Lemma 1. If ViV_i are subsets of YY then

f1(iVi)=if1(Vi) f^{-1} (\bigcap_i V_i) = \bigcap_i f^{-1}(V_i)

Lemma 2. If ViV_i are subsets of YY then

f1(iVi)=if1(Vi) f^{-1} (\bigcup_i V_i) = \bigcup_i f^{-1}(V_i)

Just for good measure, let c{}^c denote the operation of taking the complement of a subset.

Lemma 3. If VV is a subset of YY then

f1(Vc)=(f1(V))c f^{-1} (V^c) = (f^{-1}(V))^c
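
Here's a quick finite sanity check of these three lemmas, sketched in Python; the function and the sets are arbitrary illustrative choices.

```python
# Check that the inverse image preserves intersections, unions and complements
# on a tiny made-up example.

X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}      # an arbitrary function f : X -> Y

def preimage(V):
    return {x for x in X if f[x] in V}

V1 = {'a', 'b'}
V2 = {'b', 'c'}

# Lemma 1: preimage of an intersection is the intersection of preimages.
assert preimage(V1 & V2) == preimage(V1) & preimage(V2)

# Lemma 2: preimage of a union is the union of preimages.
assert preimage(V1 | V2) == preimage(V1) | preimage(V2)

# Lemma 3: preimage of a complement is the complement of the preimage.
assert preimage(Y - V1) == X - preimage(V1)
```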

view this post on Zulip John Baez (Apr 07 2024 at 18:18):

I think you used Lemma 1 only in the case of the intersection of two subsets. You used the general case of Lemma 2. And you didn't need Lemma 3 at all.

view this post on Zulip John Baez (Apr 07 2024 at 18:21):

Subsets of a set form a [[complete boolean algebra]] - this jargon is a way of capturing all the rules that govern intersections, unions and complements in classical logic. Lemmas 1-3 say that f1f^{-1} is a morphism of complete boolean algebras!

view this post on Zulip John Baez (Apr 07 2024 at 18:26):

If someone out there has never thought about this, it's worth comparing the 'image' operation. The image of a subset VXV \subseteq X under a function f:XYf: X \to Y is defined by

f(V)={f(x):xV} f(V) = \{f(x): x \in V\}

The image operation does not obey analogues of all three of Lemmas 1-3. I.e. it doesn't preserve all unions, intersections and complements.

Puzzle. Which lemmas fail?

view this post on Zulip John Baez (Apr 07 2024 at 18:26):

Moral: inverse image is 'better' than image. This is one reason it's nice that we use inverse image, not image, to define continuity.

view this post on Zulip John Baez (Apr 07 2024 at 18:27):

All these simple thoughts will get refined more and more as one digs deeper into topos theory.

view this post on Zulip David Egolf (Apr 07 2024 at 18:41):

John Baez said:

Subsets of a set form a [[complete boolean algebra]] - this jargon is a way of capturing all the rules that govern intersections, unions and complements in classical logic. Lemmas 1-3 say that f1f^{-1} is a morphism of complete boolean algebras!

That is very cool! I'll plan to think a bit more about this, as well as the puzzle you gave relating to the image operation.

view this post on Zulip John Baez (Apr 07 2024 at 19:27):

Great! I wish someone had told me - way back when I was a youth - that 'inverse image' is better behaved than 'image', and also had explained why. Back then inverse image seemed like a more sneaky concept than image, in part because of its name. So it seemed a bit weird that it was used in the definition of continuity. Of course from another viewpoint this makes perfect sense: this gives a definition of continuity of maps between metric spaces that matches the ϵδ\epsilon-\delta definition! But it's much more satisfying to understand the fundamental role of inverse images.

view this post on Zulip David Egolf (Apr 08 2024 at 18:46):

John Baez said:

Subsets of a set form a [[complete boolean algebra]] - this jargon is a way of capturing all the rules that govern intersections, unions and complements in classical logic. Lemmas 1-3 say that f1f^{-1} is a morphism of complete boolean algebras!

I've reviewed some things relating to Boolean algebras, and I think this is making sense!

One interesting point jumped out to me. When I think about morphisms "preserving the structure", I usually think of equations like this one: g(xy)=g(x)g(y)g(x \cup y) = g(x) \cup g(y). That is, the binary operation \cup only gets applied once on each side of the equation. This is in contrast to something like f1(iVi)=if1(Vi)f^{-1}(\cup_i V_i) = \cup_i f^{-1}(V_i), where potentially we are taking the union of an infinite number of sets on each side of the equation.

I assume that the idea is to "preserve equations". If we know that we are mapping between two complete Boolean algebras, then arbitrary (small) meets and joins always exist in both the source and target Boolean algebras. So then for any collection of ViV_i in some complete Boolean algebra, iVi\lor_i V_i always exists - call it VV. Consequently, we can always write this kind of equation iVi=V\lor_i V_i = V for any collection of elements ViV_i. Asking this equation to be preserved under gg would mean we'd want ig(Vi)=g(V)=g(iVi)\lor_i g(V_i) = g(V) = g(\lor_i V_i).

I guess the moral of the story is this: if we have fancier equations that hold in all the structures of interest, we'll get fancier corresponding requirements for a structure-preserving map.

view this post on Zulip David Egolf (Apr 08 2024 at 18:52):

For f1f^{-1} to be a morphism of complete Boolean algebras, there's another condition we'll want it to meet. It's simple, but I thought it might be nice to note explicitly. In particular, for f:XYf: X \to Y and f1:P(Y)P(X)f^{-1}: \mathcal{P}(Y) \to \mathcal{P}(X), we'll want UVU \subseteq V in P(Y)\mathcal{P}(Y) to imply that f1(U)f1(V)f^{-1}(U) \subseteq f^{-1}(V) in P(X)\mathcal{P}(X).

To see that this holds, let xf1(U)x \in f^{-1}(U). That means that f(x)Uf(x) \in U. Since UVU \subseteq V, that means f(x)Vf(x) \in V and hence xf1(V)x \in f^{-1}(V). Therefore, f1(U)f1(V)f^{-1}(U) \subseteq f^{-1}(V), as desired.

view this post on Zulip Peva Blanchard (Apr 08 2024 at 19:03):

I think the idea of "preserving equations" is right. It can also be rephrased as "preserving limits/colimits". When seeing a complete boolean algebra as a category, then the join (resp. meet) of an arbitrary collection of elements is literally the colimit (resp. limit) of this collection.

view this post on Zulip David Egolf (Apr 08 2024 at 19:25):

I now want to think about the image operator I:P(X)P(Y)I: \mathcal{P}(X) \to \mathcal{P}(Y) for a function f:XYf: X \to Y. We have I(U)={f(x)xU}I(U) = \{f(x) | x \in U\} for any UP(X)U \in \mathcal{P}(X). Let's see in which ways II fails to be a morphism between the complete Boolean algebras P(X)\mathcal{P}(X) and P(Y)\mathcal{P}(Y).

First, notice that I(X)I(X) is not necessarily YY. So the biggest ("top") element is not always mapped to the top element! (This is because ff is not always surjective). However, II does map the empty set to the empty set, and it preserves arbitrary unions. Also, if UUU \subseteq U', then I(U)I(U)I(U) \subseteq I(U').

II doesn't preserve intersections in general. For example, if UU and UU' are disjoint non-empty subsets of XX, but I(U)=I(U)I(U) = I(U'), then I(UU)=I()=I(U \cap U') = I(\emptyset) = \emptyset but I(U)I(U)=I(U)I(U) \cap I(U') = I(U) is not empty. This sort of thing can happen when ff isn't injective.

II also doesn't preserve complements in general. That is, if Uc=VU^c = V, then we don't necessarily have that I(U)c=I(V)=I(Uc)I(U)^c = I(V) = I(U^c). For example, let U=XU = X. Then I(Xc)=I()=I(X^c) = I(\emptyset) = \emptyset. But I(X)I(X) isn't necessarily all of YY (as ff isn't necessarily surjective). Therefore, I(X)cI(X)^c is not always empty.
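
Here's a finite illustration of these two failures, sketched in Python; the function (neither injective nor surjective, by design) and all the particular sets are made-up illustrative choices.

```python
# How the image operation fails where the inverse image succeeds.

X = {1, 2, 3}
Y = {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'a'}              # not injective, and 'b' is never hit

def image(U):
    return {f[x] for x in U}

U1, U2 = {1}, {2}                          # disjoint, non-empty

# Intersections are not preserved: I(U1 n U2) = I(empty) = empty,
# but I(U1) n I(U2) = {'a'}.
assert image(U1 & U2) == set()
assert image(U1) & image(U2) == {'a'}

# Complements are not preserved: I(X^c) = empty, but I(X)^c = {'b'} is non-empty.
assert image(X - X) == set()
assert Y - image(X) == {'b'}

# The top element is not preserved either: I(X) = {'a'} is a proper subset of Y.
assert image(X) != Y
```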

view this post on Zulip John Baez (Apr 08 2024 at 21:23):

Great, now you see exactly how inverse image is better than image! Inverse image sends maps between sets to maps of complete boolean algebras, and indeed gives a functor from Set\mathrm{Set} to the opposite of the category of complete boolean algebras. I like to call this "the duality between set theory and logic".

There's more to say about this, but enough for now.

A tiny typo here:

II doesn't preserve intersections in general. For example, if UU and UU' are disjoint non-empty subsets of XX, but I(U)=I(U)I(U) = I(U'), then I(UU)=I()=I(U \cap U') = I(\emptyset) = \emptyset but I(U)I(U)=I(U)I(U) \cap I(U') = I(U) is not empty.

You meant it may not be empty... as you made clear in your very next sentence.

view this post on Zulip John Baez (Apr 08 2024 at 21:26):

If you can stand one more puzzle about this: do you see the way in which inverse image is 'logically simpler' and 'easier to compute' than image?

view this post on Zulip David Egolf (Apr 08 2024 at 21:31):

Since I assumed that UU and UU' are non-empty, and also that I(U)=I(U)I(U) = I(U'), I think I(U)I(U)I(U) \cap I(U') actually isn't empty in this case. But one wouldn't need to assume I(U)=I(U)I(U) = I(U')! More generally, I think the idea is that even if UU and UU' are disjoint, I(U)I(U) and I(U)I(U') can have some elements in common.

view this post on Zulip David Egolf (Apr 08 2024 at 21:32):

John Baez said:

If you can stand one more puzzle about this: do you see the way in which inverse image is 'logically simpler' and 'easier to compute' than image?

Hmmm, that's interesting. Nothing immediately comes to mind, but I'll give it some thought!

view this post on Zulip David Egolf (Apr 08 2024 at 21:38):

My initial thought is that an inverse image seems harder to compute than an image!

It seems like it requires at least as many evaluations of ff to compute an inverse image, as compared to an image.

I'll give it some more thought and see what else I can think of...

view this post on Zulip John Baez (Apr 08 2024 at 21:39):

David Egolf said:

Since I assumed that UU and UU' are non-empty, and also that I(U)=I(U)I(U) = I(U'), I think I(U)I(U)I(U) \cap I(U') actually isn't empty in this case. But one wouldn't need to assume I(U)=I(U)I(U) = I(U')! More generally, I think the idea is that even if UU and UU' are disjoint, I(U)I(U) and I(U)I(U') can have some elements in common.

Whoops - you're right, I didn't read your comment carefully enough. Sorry.

view this post on Zulip John Baez (Apr 08 2024 at 21:46):

David Egolf said:

My initial thought is that an inverse image seems harder to compute than an image!

It seems like it requires more evaluations of ff to compute an inverse image, as compared to an image.

Maybe there are dual ways of thinking about this!

I was thinking: given f:XYf: X \to Y and a subset of its domain, what do we have to do to check whether a given element of YY is in the image of that subset?

Given ff and a subset of its codomain, what do we have to do to check whether a given element of XX is in the inverse image of that subset?

view this post on Zulip John Baez (Apr 08 2024 at 21:48):

I believe this explains why inverse image is 'better': always a boolean algebra homomorphism.

view this post on Zulip David Egolf (Apr 08 2024 at 21:52):

I have to take a look at what you just said! Here's the idea that came to mind for me, though:

I think we can use the fact that the inverse image interacts nicely with unions, intersections, and complements. For example, let's say I want to compute the inverse image of V=iViV = \cap_i V_i, and I know the inverse image of each ViV_i. Then I can just compute the intersection of the inverse images of the ViV_i.

By contrast, if I know the image of a bunch of UiU_i, I can't directly compute the image of U=iUiU = \cap_i U_i in an analogous way. That's because I(iUi)I(\cap_i U_i) is not necessarily equal to iI(Ui)\cap_i I(U_i).

So, the inverse image operation is computationally nicer than the image operation in this sense: you can compute more things directly, when given some known prior things.

view this post on Zulip David Egolf (Apr 08 2024 at 21:58):

John Baez said:

Maybe there are dual ways of thinking about this!

I was thinking: given f:XYf: X \to Y and a subset of its domain, what do we have to do to check whether a given element of YY is in the image of that subset?

Given ff and a subset of its codomain, what do we have to do to check whether a given element of XX is in the inverse image of that subset?

Assume we have f:XYf: X \to Y and a subset UU of XX. To check if yYy \in Y is in the image of UU, we need to check each element of UU and see if we ever get yy.

Let's now assume we have a subset VV of YY. To check if xXx \in X is in the inverse image of VV, we just need to compute f(x)f(x) and see if it lands in VV!

So, we'll need fewer evaluations of f to see if a particular element is in an inverse image, as compared to what we need to check if an element is in an image!
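Here's a minimal Python sketch of that asymmetry (the function names `in_inverse_image` and `in_image` are just illustrative, not from the blog post):

```python
def in_inverse_image(x, f, V):
    """Is x in f^{-1}(V)?  One evaluation of f plus one membership test."""
    return f(x) in V

def in_image(y, f, U):
    """Is y in f(U)?  In the worst case we evaluate f on every element of U."""
    return any(f(u) == y for u in U)

f = lambda n: 2 * n                       # a sample function
print(in_inverse_image(3, f, {2, 4, 6}))  # True: f(3) = 6 lands in V
print(in_image(7, f, range(1000)))        # False, but only after scanning all of U
```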

view this post on Zulip David Egolf (Apr 08 2024 at 21:59):

It's interesting to compare these two things:

view this post on Zulip David Egolf (Apr 08 2024 at 22:31):

John Baez said:

Great, now you see exactly how inverse image is better than image! Inverse image sends maps between sets to maps of complete boolean algebras, and indeed gives a functor from Set\mathrm{Set} to the opposite of the category of complete boolean algebras. I like to call this "the duality between set theory and logic".

I somehow missed reading this earlier. I like it!

view this post on Zulip Peva Blanchard (Apr 09 2024 at 07:49):

David Egolf said:

It's interesting to compare these two things:

In complexity theory, enumerating elements of a subset and checking that an element is a member of a subset are somehow related, but not really equivalent (this depends on how "easy to compute" is defined).

a. For instance, regarding your second bullet point, it seems that you have the following picture in mind (I may be wrong):

b. Now the related way is to ask for procedures that decide whether an element is a member of a subset. In that case, your first bullet point can be restated as:

I think cases a and b are "as easy as each other". They seem "dual", although I don't know if we can make this statement precise.

view this post on Zulip Peva Blanchard (Apr 09 2024 at 07:50):

Things become complicated if we try to apply one case starting from the other. I mean:

view this post on Zulip Peva Blanchard (Apr 09 2024 at 08:53):

Assuming that one can also check the equality "y = f(x)", and one can enumerate elements of the domain of ff, here are two (informal) ways:

view this post on Zulip Peva Blanchard (Apr 09 2024 at 09:33):

Oh! I think there is a more "categorical" rewording of all of the above. Given any category CC and morphism f:XYf: X \rightarrow Y in CC, we get two functors:

  1. between the over categories C/XC/YC / X \rightarrow C / Y : simply post-compose with ff.
  2. between the under categories X/CY/CX / C \leftarrow Y / C : simply pre-compose with ff.

And each of these functors may have adjoints:

  1. left/right Kan lift C/XC/YC/ X \leftarrow C/Y along ff.
  2. left/right Kan extension X/CY/CX/C \rightarrow Y/C along ff.

view this post on Zulip Peva Blanchard (Apr 09 2024 at 10:08):

(this is the kind of things that makes my head spin: over/under, post/pre, left/right)

view this post on Zulip Peva Blanchard (Apr 09 2024 at 12:11):

(edit: deleted the latest message. I thought I could reformulate the above in terms of adjunctions between pre/post-composition and left/right Kan extension/lift, but it was incorrect.)

view this post on Zulip David Egolf (Apr 09 2024 at 16:14):

Thanks for the link on "dovetailing". That's a neat concept I'd not heard of before!

I'm not quite understanding why you want to enumerate the elements (y,x)(y,x) of V×dom(f)V \times dom(f), though. Wouldn't it be less work to just loop over all the elements in dom(f)dom(f) (which I've been assuming is all of XX) and then apply ff to those, and see if we ever get an element of VV?

I guess my proposed approach here involves two loops:

for all xdom(f)x \in dom(f):
-- y=f(x)y=f(x)
-- for all vVv \in V:
---- check if v=yv=y

view this post on Zulip David Egolf (Apr 09 2024 at 16:18):

I suppose these two loops effectively involve considering each element of V×dom(f)V \times dom(f), unless we add a "break" statement in the second loop. The break statement could let us quit out of the second loop early if we find some vVv \in V that y=f(x)y=f(x) is equal to.

That explains, I think, why you wanted to enumerate all elements of V×dom(f)V \times dom(f). I think we basically had the same idea; I was just struggling to see that the way you formalized the idea matched (at least roughly) what I had in mind!

view this post on Zulip Peva Blanchard (Apr 09 2024 at 16:26):

I think it is the same idea. Dovetailing is relevant when the sets are infinite. For instance, in your pseudo-code example (the two loops), the inner loop can be infinite, so you never enumerate all of V×dom(f)V \times dom(f). But, besides that technical point, you really expressed the same idea.
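For anyone curious, here is one way dovetailing could look in Python (a sketch under my own naming; `dovetail` is not from any particular library): it enumerates the product of two possibly infinite iterables by diagonals, so every pair shows up after finitely many steps.

```python
from itertools import count

def dovetail(xs, ys):
    """Yield pairs from xs x ys so that the pair (x_i, y_j) appears at step i + j."""
    seen_x, seen_y = [], []
    it_x, it_y = iter(xs), iter(ys)
    x_done = y_done = False
    for n in count():
        if not x_done:
            try:
                seen_x.append(next(it_x))
            except StopIteration:
                x_done = True
        if not y_done:
            try:
                seen_y.append(next(it_y))
            except StopIteration:
                y_done = True
        if (x_done and not seen_x) or (y_done and not seen_y):
            return                        # one factor is empty, so the product is empty
        if x_done and y_done and n > len(seen_x) + len(seen_y) - 2:
            return                        # every diagonal has already been emitted
        for i in range(len(seen_x)):      # emit the n-th diagonal
            j = n - i
            if 0 <= j < len(seen_y):
                yield seen_x[i], seen_y[j]

# Example: pairs of naturals and even numbers, both infinite.
pairs = dovetail(count(), count(step=2))
print([next(pairs) for _ in range(6)])    # [(0, 0), (0, 2), (1, 0), (0, 4), (1, 2), (2, 0)]
```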

view this post on Zulip David Egolf (Apr 09 2024 at 16:41):

Today, I want to start on Part 2 of the blog post series! :tada:

The first puzzle is this:

Check that with this choice of restriction maps Γp\Gamma_p is a presheaf, and in fact a sheaf.

This puzzle needs some context! Here it is:

view this post on Zulip Peva Blanchard (Apr 09 2024 at 16:44):

Now, it seems to me that, in maths, unless you work in computability theory, it is unusual to consider structures that can be enumerated. It makes no sense, a priori, for most mathematical structures, e.g., the real numbers. The other way, i.e., testing an element is more common, and, by varying what is meant by "testing", easier to generalize.

This is why I think, the inverse image f1(V)f^{-1}(V) seems "logically easier" to conceive than the image f(U)f(U). More precisely, in SetSet, the mother of all tests is the membership test yVy \in V, i.e., a characteristic function χV:Y{0,1}\chi_V : Y \rightarrow \{0,1\}. You can transport these characteristic functions along ff just by pre-composing with ff.
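In symbols, and in the notation just used (nothing new here, just writing out the pre-composition): the characteristic function of the inverse image is

\chi_{f^{-1}(V)} = \chi_V \circ f : X \rightarrow \{0,1\},

so testing membership in f^{-1}(V) is literally "apply f, then run the test for V".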

view this post on Zulip Peva Blanchard (Apr 09 2024 at 16:45):

oops, you already moved to Part 2. Let's move on then :)

view this post on Zulip David Egolf (Apr 09 2024 at 16:47):

Peva Blanchard said:

oops, you already moved to Part 2. Let's move on then :)

There's no rush! What you are saying is interesting, and I look forward to thinking about it! If there's more you want to say on that topic, please feel free to keep posting about it here. I can always shuffle the "part two" messages to the bottom afterwards.

view this post on Zulip David Egolf (Apr 09 2024 at 17:00):

Here's a picture, to help visualize the concept of a "bundle":

bundle

In this case the blue area is YY and the black line is XX. Our continuous map p:YXp:Y \to X projects each point down to the corresponding point on the black line.

view this post on Zulip Peva Blanchard (Apr 09 2024 at 17:04):

There is another bundle that may be familiar to you. Say we encode RGB pixel values with real values between 0 and 1, i.e., C = [0,1]^3 is the space of colors. Let S = [0,1]^2 be the unit square, thought of as a canvas. Then we have a (trivial) bundle S \times C \rightarrow S (just the projection on the first factor) that represents all the possible RGB-images on the unit square.

view this post on Zulip David Egolf (Apr 09 2024 at 17:10):

That's a neat example! I'm imagining all possible RGB values "floating over" each point of our square. A section of that bundle, I think, corresponds to an RGB image on an open subset of the unit square.
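Here is a tiny discretized Python sketch of that correspondence (the grid, the function `image`, and all the names are made up for illustration):

```python
# A toy, discretized version of the bundle S x C -> S: canvas points are grid
# points of the unit square, and colors are RGB triples in [0,1]^3.
canvas = [(i / 10, j / 10) for i in range(11) for j in range(11)]

def p(point_with_color):
    """The bundle projection S x C -> S: forget the color."""
    point, _color = point_with_color
    return point

def image(x, y):
    """An arbitrary assignment of a color to each canvas point (an 'RGB image')."""
    return (x, y, 0.5)

def s(point):
    """The section of p determined by `image`: send a canvas point to the
    point of S x C sitting over it."""
    return (point, image(*point))

# The defining property of a section: p composed with s is the identity on the canvas.
assert all(p(s(pt)) == pt for pt in canvas)
```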

view this post on Zulip David Egolf (Apr 09 2024 at 17:12):

Here's a picture to visualize the concept of "section", using the bundle I drew above:
section

The orange line is a section of our bundle p:YXp:Y \to X over the yellow subset of XX.

In this case, we might imagine that the black line is an observed noisy signal, and the blue "envelope" describes at each point the possible "actual" (without noise) signal values. A section then is a "point-wise plausible guess" for a portion of a denoised signal.

view this post on Zulip John Baez (Apr 09 2024 at 17:25):

At first you said the black line was a picture of X, but to me it looks like a picture of a section... they're both reasonable interpretations but people tend to use the latter, and draw X as a horizontal line down below the bundle itself, so that p "projects down" from Y to X. Later you seem to have used the latter interpretation because you said "the black line is an observed noisy signal", which sounds like a section to me.

view this post on Zulip John Baez (Apr 09 2024 at 17:26):

Here's a 3d picture of a bundle and a section s from Wikipedia:

(image from Wikipedia: a bundle with a section s)

view this post on Zulip John Baez (Apr 09 2024 at 17:27):

People like to use E instead of Y and B instead of X, and call E the total space and B the base space of the bundle.

view this post on Zulip Peva Blanchard (Apr 09 2024 at 17:49):

I guess we might have weird cases, like this one.

image.png

view this post on Zulip Morgan Rogers (he/him) (Apr 09 2024 at 19:02):

It might also be useful to have some more explicitly defined examples, like the exponential map from C\mathbb{C} to C{0}\mathbb{C} - \{0\} ;)

view this post on Zulip Peva Blanchard (Apr 09 2024 at 19:07):

Morgan Rogers (he/him) said:

It might also be useful to have some more explicitly defined examples, like the exponential map from C\mathbb{C} to C{0}\mathbb{C} - \{0\} ;)

oh yes, another one would be zz2z \mapsto z^2 on the complex numbers, or more generally zznz \mapsto z^n.

(Which makes me wonder, since ez=nznn!e^z = \sum_n \frac{z^n}{n!}, if we can combine multiple bundles together)

view this post on Zulip Morgan Rogers (he/him) (Apr 09 2024 at 19:29):

There are ways to combine bundles over general spaces XX but the kind you're thinking of (taking products and sums) relies on the algebraic structure of C\mathbb{C}. See if you can figure out how you might use the addition and multiplication of the complex numbers to combine bundles! Power series are a bonus challenge ;)

view this post on Zulip John Baez (Apr 09 2024 at 20:18):

Peva Blanchard said:

I guess we might have weird cases, like this one.

Yes, that picture shows a 'bundle' in the extremely general sense introduced here (a continuous map from a space EE to a space BB), but not a [[fiber bundle]]. For a fiber bundle we typically want the 'fibers' p1(b)p^{-1}(b) to be homeomorphic to each other for all bBb \in B, while in the picture some fibers are homeomorphic to [0,1][0,1] and others to [0,1][2,3][0,1] \cup [2,3].

view this post on Zulip Peva Blanchard (Apr 09 2024 at 21:43):

Morgan Rogers (he/him) said:

There are ways to combine bundles over general spaces XX but the kind you're thinking of (taking products and sums) relies on the algebraic structure of C\mathbb{C}. See if you can figure out how you might use the addition and multiplication of the complex numbers to combine bundles! Power series are a bonus challenge ;)

I see. Indeed, if we consider only bundles over C\mathbb{C} (or any other ring actually), we can do something as follows. Let p1:E1Cp_1 : E_1 \rightarrow \mathbb{C} and p2:E2Cp_2 : E_2 \rightarrow \mathbb{C}, then we can define p1+p2:E1×E2Cp_1 + p_2 : E_1 \times E_2 \rightarrow \mathbb{C} as the composite

E1×E2(p1,p2)C×C+C E_1 \times E_2 \xrightarrow{(p_1, p_2)} \mathbb{C} \times \mathbb{C} \xrightarrow{+} \mathbb{C}

We can do the same for any other binary continuous operations like, e.g., multiplication.

Similarly, given a complex number ss and a bundle p:ECp : E \rightarrow \mathbb{C}, I can define the bundle sp:ECs \cdot p : E \rightarrow \mathbb{C} by pointwise multiplication esp(e)e \mapsto s \cdot p(e).

Now, let's use the notation z : \mathbb{C} \rightarrow \mathbb{C} for the bundle corresponding to the identity morphism. Then we have z^n : \mathbb{C}^n \rightarrow \mathbb{C}, and more generally for any polynomial

P=a0+a1X++anXnP = a_0 + a_1 \cdot X + \dots + a_n \cdot X^n

we get a bundle

P(z):0knCkC((zki)1ik)0kn0knak1ikzki\begin{align*} P(z) : \prod_{0 \le k \le n} \mathbb{C}^k &\rightarrow \mathbb{C} \\ ((z_{ki})_{1 \le i \le k})_{0 \le k \le n} &\mapsto \sum_{0 \le k \le n} a_k \cdot \prod_{1 \le i \le k} z_{ki} \end{align*}

view this post on Zulip Peva Blanchard (Apr 09 2024 at 21:49):

(This is a weird entity. I would expect the total space to look more like a polynomial, but here it is just a big cartesian product)

view this post on Zulip Peva Blanchard (Apr 09 2024 at 21:58):

If PP is a power series instead of a mere polynomial, the bundle P(z)P(z) is not well-defined. It is not clear at all if the series converges; plus it is a weird series as it involves infinitely many different variables.

view this post on Zulip Peva Blanchard (Apr 09 2024 at 22:07):

If I assume that the series has a positive radius of convergence, e.g. 1, it might help. We know then that for any complex number ss with s<1|s| < 1, the series kaksk\sum_k a_k \cdot s^k converges to a well-defined (complex) value, and the induced function is continuous.

I could restrict my setting to the situation where the zkiz_{ki} all have modulus less than 11. But yet, as we have infinitely many variables, I'm not sure that the series kak1ikzki\sum_k a_k \prod_{1 \le i \le k} z_{ki} converges. Even less that it depends continuously on the zkiz_{ki}'s.

view this post on Zulip Peva Blanchard (Apr 09 2024 at 22:19):

I will leave it here. I may have taken the wrong turn when defining things.

view this post on Zulip Morgan Rogers (he/him) (Apr 10 2024 at 05:37):

Nice attempt! You got the main ideas I think, but some things to note: first, at the moment you have a lot of variables around; there is a way to reduce them by also precomposing with something. Second, there are indeed a bunch of subtleties to beware of for power series: for it to define a bundle, you not only need to restrict to a subset where the series converges, you need to make sure the function is continuous! It's hard to express those conditions categorically, which is why you'll rarely see analysis and category theory talking to each other. Or to turn that comment into an exercise: one way you might hope to express a power series is as a diagram of bundles over C\mathbb{C} whose colimit would determine the power series. But this can't work because of the way we define morphisms of bundles; can you see why?

view this post on Zulip Peva Blanchard (Apr 10 2024 at 11:42):

Indeed, there is a brutal way to reduce the number of variables. It suffices to precompose what I did with the diagonal map z \mapsto (z, z, z, \dots). This amounts to considering only the bundles of the form \mathbb{C} \rightarrow \mathbb{C}.

More precisely, given a polynomial P, this amounts to considering the function z \mapsto P(z) as a bundle \mathbb{C} \rightarrow \mathbb{C}.

When PP is a (formal) power series, then the domain of PP cannot be the whole complex plane. For instance,

P(z)=11z=n0znP(z) = \frac{1}{1-z} = \sum_{n \ge 0} z^n

which is defined on C{1}\mathbb{C} - \{1\}.

So we must restrict the domain. Let's consider all the ways to restrict this power series. That is we consider all the bundles UCU \rightarrow \mathbb{C} with zP(z)z \mapsto P(z), where UU is an open subset where the series converges and is continuous. My knowledge about complex analysis is a bit rusty, so I'm being a bit sketchy here...

These bundles are objects in the over category Top/\mathbb{C}. Let's consider the category \mathcal{P} with those bundles as objects, and as morphisms the ones induced by inclusion of subsets U \subseteq V. We could take the colimit L : X \rightarrow \mathbb{C} of \mathcal{P} in Top/\mathbb{C}, provided it exists (I don't know any argument in favor of that, off the top of my head).

It looks like XX would be the "maximal" domain of definition of the power series PP. But, thanks to your hint, I think this is wrong. That's because there are other morphisms in Top/CTop/\mathbb{C}, e.g., homeomorphisms. So, there could be issues like the total space XX being homeomorphic to the maximal domain of PP (???).

view this post on Zulip Peva Blanchard (Apr 10 2024 at 11:46):

Oh yes, in particular, the coefficients of the power series are not preserved by homeomorphisms.

view this post on Zulip Peva Blanchard (Apr 10 2024 at 11:47):

For instance:

P(2z)=112z=n02nznP(2\cdot z) = \frac{1}{1 - 2z} = \sum_{n \ge 0} 2^n \cdot z^n

view this post on Zulip Peva Blanchard (Apr 10 2024 at 11:48):

Which means that one cannot hope to recover the power series from the colimit LL, even if it exists.

view this post on Zulip John Baez (Apr 10 2024 at 13:25):

By the way, there's a huge amount to say about analytic functions and power series using sheaves: this is one of the things sheaves were developed for!

view this post on Zulip Morgan Rogers (he/him) (Apr 10 2024 at 15:10):

Peva Blanchard said:

It looks like XX would be the "maximal" domain of definition of the power series PP. But, thanks to your hint, I think this is wrong. That's because there are other morphisms in Top/CTop/\mathbb{C}, e.g., homeomorphisms. So, there could be issues like the total space XX being homeomorphic to the maximal domain of PP (???).

Great work! I was actually trying to hint at something a bit more basic than this: the fact that morphisms of bundles over XX fix the values in XX. I can't express the bundle corresponding to a power series as the colimit of the partial sums because there aren't bundle morphisms between the bundles corresponding to those partial sums in general!

view this post on Zulip David Egolf (Apr 10 2024 at 16:24):

Peva Blanchard said:

Now, it seems to me that, in maths, unless you work in computability theory, it is unusual to consider structures that can be enumerated. It makes no sense, a priori, for most mathematical structures, e.g., the real numbers. The other way, i.e., testing an element is more common, and, by varying what is meant by "testing", easier to generalize.

I was wondering if one could come up with a procedure for enumerating (listing the elements of, I assume?) a subset UXU \subseteq X of interest given a way to test if elements in XX are in UU. But I suppose if XX has infinitely many elements, this could be very impractical - this procedure may often require an infinite number of tests to be run. From that perspective, it does make more sense to focus on testing individual elements - as that is something that we can probably actually do!

view this post on Zulip Peva Blanchard (Apr 10 2024 at 17:14):

Actually, if all you have is a testing procedure for UU, via a characteristic function X{0,1}X \rightarrow \{0,1\} that consumes elements of XX, there is no generic way to build a procedure that produces elements of UU. For that you need another assumption, e.g., that you already have a procedure that produces elements of XX, which you can then "filter" using the characteristic function.
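In Python-ish terms (a one-liner sketch; `enumerate_X` and `chi_U` are just illustrative names): given a producer of elements of X and the membership test for U, filtering gives a producer of elements of U.

```python
from itertools import count

def enumerate_subset(enumerate_X, chi_U):
    """Produce the elements of U = {x in X : chi_U(x)} from a producer of X."""
    return (x for x in enumerate_X if chi_U(x))

evens = enumerate_subset(count(), lambda n: n % 2 == 0)
print([next(evens) for _ in range(5)])   # [0, 2, 4, 6, 8]
```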

view this post on Zulip Peva Blanchard (Apr 10 2024 at 17:17):

I don't want to reveal too much about John's later posts (also because I don't have an expert knowledge in those things), but these "testing procedures", or "characteristic functions", will play a crucial role with respect to an important notion in topos theory, namely that of "subobject classifier".

view this post on Zulip John Baez (Apr 10 2024 at 22:53):

Yes, I was focused on testing procedures. My claim that inverse images are "logically simpler" than images merely meant this:

Say you have a function f:XYf: X \to Y. Then if SXS \subseteq X we have

f(S)={yYxS  f(x)=y} f(S) = \{y \in Y| \exists x \in S \; f(x) = y \}

while if SYS \subseteq Y we have

f1(S)={xX  f(x)S} f^{-1}(S) = \{ x \in X| \; f(x) \in S \}

In this sense, images are defined using an existential quantifier \exists, and checking whether something lies in an image means searching for a witness of that quantifier.

But since inverse images are defined much more simply, without any quantifiers, they preserve unions, intersections and complements!

view this post on Zulip David Egolf (Apr 11 2024 at 17:27):

I'm feeling tired today, but I'd like to try and make at least a little progress on the current puzzle. Here it is again:

Check that with this choice of restriction maps Γp\Gamma_p is a presheaf, and in fact a sheaf.

And here's the context, again:

view this post on Zulip David Egolf (Apr 11 2024 at 17:32):

First, I want to show that \Gamma_p: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} is a functor, and hence a presheaf. Here's what it does on objects and morphisms: it sends an open subset U of X to the set \Gamma_p(U) of sections of p over U, and it sends the unique morphism from U to V (for V \subseteq U) to the function \Gamma_p(U) \to \Gamma_p(V) that restricts a section over U to V.

view this post on Zulip David Egolf (Apr 11 2024 at 17:36):

To show that Γp\Gamma_p is a functor , we first need to show that restricting a section of pp actually gives us a section of pp. Let s:UYs:U \to Y be a section of pp over UU, where UU is an open subset of XX. We want to show that sV:VYs|_V: V \to Y given by sV=siVUs|_V = s \circ i_{V \to U} is a section of pp over VV. (Here VV is an open subset of XX with VUV \subseteq U).

To do this, it suffices to show that psV=1Vp \circ s|_V = 1_V. Checking this at an element vVv \in V, we find psV(v)=p(siVU)(v)=(ps)iVU(v)=1UiVU(v)=1U(v)=vp \circ s|_{V}(v) = p \circ (s \circ i_{V \to U})(v) = (p \circ s) \circ i_{V \to U}(v) = 1_U \circ i_{V \to U}(v) = 1_U(v) = v. We conclude that psV=1Vp \circ s|_V = 1_V, as desired. (Also, sVs|_V is continuous, as it is given by composing continuous functions).

view this post on Zulip David Egolf (Apr 11 2024 at 17:42):

Next, we want to show that Γp(1U)=1Γp(U)\Gamma_p(1_U) = 1_{\Gamma_p(U)} for any object (open subset of XX) UU. By definition, Γp(1U):Γp(U)Γp(U)\Gamma_p(1_U): \Gamma_p(U) \to \Gamma_p(U) is the function that takes a given section s:UYs: U \to Y and restricts its domain to UU, yielding s:UYs: U \to Y. We note that this is the identity function on Γp(U)\Gamma_p(U), so that Γp(1U)=1Γp(U)\Gamma_p(1_U) = 1_{\Gamma_p(U)}.

view this post on Zulip David Egolf (Apr 11 2024 at 17:45):

To finish showing that Γp:O(X)opSet\Gamma_p: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} is a functor (and hence a presheaf), we need to show that Γp\Gamma_p respects composition. That is, if we have an equation of the form rr=rr \circ r' = r'' in O(X)op\mathcal{O}(X)^{\mathrm{op}}, then we need to show that Γp(r)Γp(r)=Γp(r)\Gamma_p(r) \circ \Gamma_p(r') = \Gamma_p(r''). This is true because restricting the domain of a section in two steps, or restricting the domain all at once yields the same result.

We conclude that Γp:O(X)opSet\Gamma_p: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} is a presheaf!

view this post on Zulip David Egolf (Apr 11 2024 at 17:49):

The next order of business is to show that Γp\Gamma_p is not only a presheaf, but also a sheaf! But that's a job for another day, when I have a bit more energy.

view this post on Zulip Peva Blanchard (Apr 11 2024 at 19:41):

I can't resist giving it a try.

spoiler

view this post on Zulip John Baez (Apr 12 2024 at 10:16):

Great! Good luck on your energy levels. I think you'll find that showing that the presheaf of sections of a bundle is a sheaf is similar to the earlier problem where you showed that the presheaf of continuous real-valued functions is a sheaf. Indeed that earlier problem can be seen as a special case of this one if you take Y=X×RY = X \times \mathbb{R}.

view this post on Zulip David Egolf (Apr 14 2024 at 15:50):

I now want to show that the presheaf of sections Γp:O(X)opSet\Gamma_p: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} is in fact a sheaf.

To do this, let's start out with a bunch of siΓp(Ui)s_i \in \Gamma_p(U_i) as ii varies. Let's require that (si)UiUj=(sj)UiUj(s_i)|_{U_i \cap U_j} = (s_j)|_{U_i \cap U_j} for all i,ji,j and iUi=U\cup_i U_i = U. (Here, each UiU_i is an open subset of XX). To conclude that Γp\Gamma_p is a sheaf, we need to show that there always exists a unique sΓpUs \in \Gamma_pU such that sUi=sis|_{U_i} = s_i for all ii.

Recalling that p:YXp:Y \to X, any particular si:UiYs_i:U_i \to Y is a continuous map such that psi=1Uip \circ s_i = 1_{U_i}. Intuitively, we want to "glue together" these sections to get a section sΓp(U)s \in \Gamma_p(U) of pp over UU. If ss exists, it is unique. That is because for any uUu \in U, uUiu \in U_i for some ii, so we must have s(u)=sUi(u)=si(u)s(u) = s|_{U_i}(u) = s_i(u). This produces a function :UY:U \to Y because (si)UiUj=(sj)UiUj(s_i)|_{U_i \cap U_j} = (s_j)|_{U_i \cap U_j} for all i,ji,j.

It remains to show that ss exists. To do that, we need to check that defining ss as s(u)=sUi(u)=si(u)s(u) = s|_{U_i}(u) = s_i(u) (where uUiu \in U_i) gives us a continuous function s:UYs: U \to Y such that ps=1Up \circ s = 1_U.

We start by considering continuity. By the "local criterion for continuity" discussed above, a function s:UYs:U \to Y is continuous exactly if for any point uUu \in U there is a neighborhood UuU_u of uu such that sUu:UuYs|_{U_u}: U_u \to Y is continuous. For any uUu \in U, there is some UiU_i so that uUiu \in U_i, because iUi=U\cup_i U_i = U. And by assumption we know that sUi=sis|_{U_i} = s_i is continuous. We conclude that s:UYs: U \to Y is continuous.

Next, we need to show that p \circ s = 1_U. For any u \in U, there is some i so that u \in U_i. Then, p \circ s(u) = p \circ s_i(u) = 1_{U_i}(u) = u. We conclude that p \circ s = 1_U, as desired.
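As a purely set-level illustration of the gluing step (a toy Python sketch of my own, with finite sets; it deliberately ignores the continuity part of the argument, and the dict-based representation is just for illustration):

```python
def glue(sections):
    """Glue partial functions (dicts) that agree on overlaps into one function
    defined on the union of their domains."""
    s = {}
    for s_i in sections:
        for u, y in s_i.items():
            # the compatibility condition: the pieces agree wherever their domains overlap
            assert s.get(u, y) == y
            s[u] = y
    return s

s1 = {1: 'a', 2: 'b'}
s2 = {2: 'b', 3: 'c'}
print(glue([s1, s2]))   # {1: 'a', 2: 'b', 3: 'c'} -- the unique common extension
```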

view this post on Zulip David Egolf (Apr 14 2024 at 16:08):

John Baez said:

Great! Good luck on your energy levels. I think you'll find that showing that the presheaf of sections of a bundle is a sheaf is similar to the earlier problem where you showed that the presheaf of continuous real-valued functions is a sheaf. Indeed that earlier problem can be seen as a special case of this one if you take Y=X×RY = X \times \mathbb{R}.

Thanks for the good luck! Sometimes hoping for higher energy levels does feel a bit like waiting for a lucky dice roll; I find it's quite difficult to predict my energy levels accurately.

I want to consider the case p:X×RXp: X \times \mathbb{R} \to X, where pp sends (x,a)(x,a) to xx for any aa. Then a section of pp over UU (where UU is an open subset of XX) is a continuous function s:UX×Rs:U \to X \times \mathbb{R} such that ps=1Up \circ s = 1_U. I want to show that a section of pp over UU gives us a real-valued continuous function :UR:U \to \mathbb{R}, and a real-valued continuous function :UR:U \to \mathbb{R} gives us a section of pp over UU.

A section UX×RU \to X \times \mathbb{R} is in particular a continuous function. Therefore, by the universal property of products, it corresponds to two continuous functions: (1) a function :UX:U \to X and (2) a function URU \to \mathbb{R}. So, given a section s:UX×Rs:U \to X \times \mathbb{R}, we get a real-valued continuous function :UR:U \to \mathbb{R}, given by πRs:UR\pi_\mathbb{R} \circ s: U \to \mathbb{R}.

Let's now start with a continuous function f:URf: U \to \mathbb{R}. We want to construct a section of pp over UU from ff. By the universal property of products, to get a continuous function s:UX×Rs: U \to X \times \mathbb{R}, we just need to specify a continuous function from UU to XX and a continuous function from UU to R\mathbb{R}. Let's take our function :UX:U \to X to be the inclusion i:UXi:U \to X (which is continuous because UU has the subspace topology).

We want to show that the induced function s:UX×Rs: U \to X \times \mathbb{R} is in fact a section. Indeed, ps(u)=p(u,f(u))=up \circ s(u) = p(u, f(u)) = u, as desired.
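Here's a small Python sketch of the two procedures for the trivial bundle (all names are illustrative); at this level it really is just bookkeeping with pairs:

```python
def p(pair):
    """The projection X x A -> X."""
    x, _a = pair
    return x

def section_from_function(f):
    """Turn a function f: U -> A into a section s: U -> X x A of p."""
    return lambda u: (u, f(u))

def function_from_section(s):
    """Turn a section s: U -> X x A into a function U -> A."""
    return lambda u: s(u)[1]

f = lambda u: u ** 2                 # a sample function U -> A
s = section_from_function(f)
u = 3.0
assert p(s(u)) == u                                                  # s is a section at u
assert function_from_section(s)(u) == f(u)                           # one round trip
assert section_from_function(function_from_section(s))(u) == s(u)    # the other round trip
```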

view this post on Zulip David Egolf (Apr 14 2024 at 16:15):

I guess what I really wanted to show is that there is a bijection between the set of sections of pp over UU and the set of real-valued continuous functions from UU. I'm running out of steam, but this seems important to note: Since ps=1Up \circ s= 1_U for any section ss, we have that s(u)s(u) is of the form (u,f(u))(u, f(u)) for some ff. That is, the function :UX:U \to X induced by a section ss must be the inclusion.

I'm hoping that one can make use of this fact to show that the procedures I described above ((1) for constructing a continuous real-valued function on UU from a section of pp over UU and (2) for constructing a section of pp over UU from a continuous real-valued function on UU) are in fact inverses of one another.

I'll stop here for now!

view this post on Zulip John Baez (Apr 14 2024 at 16:20):

David Egolf said:

I guess what I really wanted to show is that there is a bijection between the set of sections of pp over UU and the set of real-valued continuous functions from UU. I'm running out of steam, but this seems important to note: Since ps=1Up \circ s= 1_U for any section ss, we have that s(u)s(u) is of the form (u,f(u))(u, f(u)) for some ff. That is, the function from UXU \to X induced by a section ss must be the inclusion.

Yes, that's a really important observation. And there's nothing really special about R\mathbb{R} here. More generally this is how you can take any section of a bundle p:X×AXp: X \times A \to X over UXU \subseteq X and turn it into a continuous function f:UAf: U \to A. And you're right: this gives a bijection between such sections and continuous functions f:UAf: U \to A.

view this post on Zulip John Baez (Apr 14 2024 at 16:21):

So, sections of bundles are a generalization of continuous functions. I'll let you do the work, but I wanted to have the fun of stating the dramatic conclusion!

view this post on Zulip Peva Blanchard (Apr 14 2024 at 20:21):

Just to formalize the statement. Does it mean that there is a natural isomorphism between the sheaf of continuous functions on XX and the sheaf of sections of the bundle X×RXX \times \mathbb{R} \rightarrow X?

view this post on Zulip John Baez (Apr 15 2024 at 08:41):

Yes, that's right. Nice!

It's not much extra work to state (or prove) the idea more generally: for any topological spaces XX and AA, there's a natural isomorphism between the sheaf of continuous AA-valued functions on XX and the sheaf of sections of the bundle X×AXX \times A \to X.

view this post on Zulip Peva Blanchard (Apr 15 2024 at 09:24):

Cool!

It also motivates "fiber bundles". The common fiber somehow acts like the codomain of values of the "functions". The difference is that the total space is not necessarily neatly decomposed as a cartesian product X×AX \times A.

The correspondence seems to go this way:

Also, I understand why we would want to consider "étale spaces": the ultimate form of this correspondence game is the equivalence between the category of sheaves on X and the category of étale spaces over X.

view this post on Zulip Peva Blanchard (Apr 15 2024 at 09:33):

Mmh, I think my mental picture is wrong ... an étale space E \rightarrow X does not seem to generalize fiber bundles.

view this post on Zulip John Baez (Apr 15 2024 at 10:51):

Peva Blanchard said:

The correspondence seems to go this way:

Yes, those are both right. We could make the second one more precise if we wanted, either in lowbrow ways or in highbrow ways using sheaf cohomology. But now is probably not the time to do that, especially since the "course" he's going through does not introduce fiber bundles.

view this post on Zulip John Baez (Apr 15 2024 at 10:52):

Peva Blanchard said:

Mmh, I think my mental picture is wrong ... an étale space EXE \rightarrow X does not seem to generalize fiber bundle.

The course talks more about étale spaces fairly soon, so let's wait a bit and revisit this. But you're right: I would say étale spaces generalize covering spaces, which are fiber bundles with discrete fiber.

view this post on Zulip David Egolf (Apr 15 2024 at 20:13):

In the next puzzle, we work on building a functor Γ:Top/XO(X)^\Gamma: \mathsf{Top}/X \to \widehat{O(X)}. We've already seen how to make a sheaf Γp\Gamma_p on XX from a continuous function p:YXp:Y \to X. It remains to figure out how Γ\Gamma acts on morphisms in Top/X \mathsf{Top}/X, and then to check that our resulting Γ\Gamma really is a functor.

Here's the next puzzle:

Suppose we have two bundles over XX, say p:YXp: Y \rightarrow X and p:YXp': Y' \rightarrow X, and a morphism from the first to the second, say f:YYf: Y \rightarrow Y'. Suppose s:UYs: U \rightarrow Y is a section of the first bundle over the open set UXU \subset X. Show that fsf \circ s is a section of the second bundle over UU. Use this to describe what the functor Γ\Gamma does on morphisms, and check functoriality.

view this post on Zulip David Egolf (Apr 15 2024 at 20:23):

First, we recall that a morphism from a bundle p:YXp:Y \to X to a bundle p:YXp': Y' \to X is a continuous function f:YYf:Y \to Y' such that pf=pp' \circ f = p.

In picture form, this commutative diagram describes a morphism from pp to pp':
a morphism from p to p'

Notice that if y \in Y "sits over" x \in X (so that p(y) = x), then f(y) \in Y' also sits over x (as p'(f(y)) = p(y) = x). We might think of our continuous function f: Y \to Y' as decomposing into pieces, one for each x \in X, where the piece at x maps p^{-1}(x) into (p')^{-1}(x).

view this post on Zulip David Egolf (Apr 15 2024 at 20:37):

With that context in place, I now want to check the first part of the puzzle: that f \circ s is a section of the second bundle over U.

To show that f \circ s: U \to Y' is a section of p': Y' \to X, we need to show that p' \circ (f \circ s): U \to X sends each element of U to itself. Noting that p' \circ f = p, we find p' \circ (f \circ s)(u) = (p' \circ f)(s(u)) = p(s(u)). Since s is a section of p over U, p(s(u)) = u. We conclude that p' \circ (f \circ s)(u) = u for all u \in U, and so f \circ s is indeed a section of p' over U.

view this post on Zulip David Egolf (Apr 15 2024 at 20:39):

The next order of business is to describe what Γ\Gamma does on morphisms. But I'll stop here for today!

view this post on Zulip Peva Blanchard (Apr 15 2024 at 21:05):

I'll just restate, in this setting, an example that we discussed earlier. Let's just look at the case where Y = \mathbb{R} \times X and Y' = [-\pi/2, \pi/2] \times X. The bundles p and p' are just the projection on the second factor.

Then, we get a morphism from pp to pp' in Top/XTop/X with

f:YY(v,x)(arctan v,x)\begin{align*} f : Y &\rightarrow Y' \\ (v, x) &\mapsto (arctan~v, x) \end{align*}

I chose \arctan, but clearly we can replay this game with any continuous function between the fibers.

view this post on Zulip Peva Blanchard (Apr 15 2024 at 21:12):

But there's a funny variant if we take X=RX = \mathbb{R}

\begin{align*} g : \mathbb{R} \times \mathbb{R} &\rightarrow [-\pi/2, \pi/2] \times \mathbb{R} \\ (v, x) &\mapsto (\arctan(v - x), x) \end{align*}

I think this is an example of a morphism in Top/\mathbb{R} which does not arise from a continuous real-valued function \mathbb{R} \rightarrow \mathbb{R} as before.

view this post on Zulip David Egolf (Apr 16 2024 at 16:22):

Hmmm, let me try to understand what you just said.

I'll work in \mathsf{Top}/X. Let us assume we have two bundles of this form: p: A \times X \to X and p': A' \times X \to X, where p(a,x) = x and p'(a',x) = x for all a \in A, a' \in A', and x \in X.

If we have a continuous function f: A \to A', then the function (f, 1_X): A \times X \to A' \times X which sends (a,x) to (f(a),x) is continuous. Is it also a morphism of bundles from p to p'? Let's consider p' \circ (f,1_X)(a,x) for some (a,x) \in A \times X. We get p' \circ (f, 1_X)(a,x) = p'((f(a),x)) = x = p(a,x). We conclude that from any continuous function f: A \to A' we get a morphism of bundles from p to p' given by (f,1_X): A \times X \to A' \times X.

view this post on Zulip David Egolf (Apr 16 2024 at 16:23):

I think then the question is: what other morphisms exist from p:A×XXp:A \times X \to X to p:A×XXp':A' \times X \to X that can not be produced in this way?

The gg you give above is not of the form (f,1X)(f, 1_X), for example.

view this post on Zulip David Egolf (Apr 16 2024 at 16:30):

Any continuous function hh from A×XA \times X to A×XA' \times X induces a continuous function ff from A×XA \times X to AA'. Let's define fx:AAf_x: A \to A' by fx(a)=f(a,x)f_x(a) = f(a,x). In some cases, this fxf_x can vary as xx does!

I'm guessing we can't induce a morphism of bundles where this kind of thing happens when we start out with just a single continuous function from AA to AA'.

view this post on Zulip Peva Blanchard (Apr 16 2024 at 16:33):

Yes exactly! To be complete, we should prove that there is no ff such that g(v,x)=(f(v),x)g(v, x) = (f(v), x) for all v,xv, x.

spoiler

view this post on Zulip David Egolf (Apr 17 2024 at 19:09):

I think I have a good guess regarding how to finish describing Γ\Gamma. Once I get a bit more energy - hopefully soon - I will type that up here. But today I need to rest up!

view this post on Zulip Julius Hamilton (Apr 19 2024 at 14:12):

I used to have energy problems, but then I started taking meds (in case that might help you).

view this post on Zulip David Egolf (Apr 22 2024 at 16:05):

Alright, let me take a stab at describing what Γ:Top/XO(X)^\Gamma: \mathsf{Top}/X \to \widehat{\mathcal{O}(X)} does on morphisms. Let's assume we have two bundles over XX, namely p:YXp:Y \to X and p:YXp':Y' \to X. These induce presheaves (indeed sheaves) on XX, by sending each open subset of XX to an appropriate set of sections over that subset.

Let's call these sheaves \Gamma_p: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} and \Gamma_{p'}: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}. Given a morphism of bundles from p to p' induced by a continuous function f: Y \to Y', we want to define a natural transformation \Gamma(f): \Gamma_p \to \Gamma_{p'}.

Let's set up a naturality square corresponding to the morphism r: U \to V in \mathcal{O}(X)^{\mathrm{op}}, where V and U are open subsets of X and V \subseteq U. We recall that \Gamma_p(U) is the set of sections of p over U, and that \Gamma_p(r): \Gamma_p(U) \to \Gamma_p(V) restricts a section over U to V (and similarly for \Gamma_{p'}).

Given a section ss of pp over UU and a morphism of bundles f:YYf:Y \to Y', we can form fs:UYf \circ s: U \to Y'. We saw earlier that this is indeed a section of pp' over UU. So, post-composing by ff provides a function from sections of pp over UU to sections of pp' over UU.

view this post on Zulip David Egolf (Apr 22 2024 at 16:06):

Based on the above, we now draw a proposed naturality square corresponding to the morphism r: U \to V in \mathcal{O}(X)^{\mathrm{op}}:
square

To show we get a natural transformation from Γp\Gamma_p to Γp\Gamma_{p'} in this way, we still need to show this square commutes for an arbitrary morphism r:UVr: U \to V.

view this post on Zulip David Egolf (Apr 22 2024 at 16:17):

Let's pick an s:UYΓp(U)s:U \to Y \in \Gamma_p(U) and trace it around the diagram. Restricting its domain to VV can be accomplished by precomposing with the (continuous) inclusion map i:VUi:V \to U. Going around the top right side of the square, we get fV(s)=f(si)=f(si)f_* \circ |_V(s) = f_*(s \circ i) = f \circ (s \circ i). Going around the bottom left side of the square, we get Vf(s)=V(fs)=(fs)i|_V \circ f_*(s) = |_V \circ (f \circ s) = (f \circ s) \circ i. By associativity of composition, these two results are equal, and so the square commutes.

We conclude that post-composing with ff at each component describes a natural transformation Γ(f):ΓpΓp\Gamma(f):\Gamma_p \to \Gamma_{p'}.

view this post on Zulip David Egolf (Apr 22 2024 at 16:22):

Next, let's show that Γ:Top/XO(X)^\Gamma:\mathsf{Top}/X \to \widehat{\mathcal{O}(X)} is a functor.

First we need to show that Γ(1p)=1Γ(p)\Gamma(1_p) = 1_{\Gamma(p)} for any bundle p:YXp:Y \to X. The identity morphism 1p1_p of pp is induced by the (continuous) identity function 1Y:YY1_Y: Y \to Y. So, Γ(1p):ΓpΓp\Gamma(1_p): \Gamma_p \to \Gamma_p is the natural transformation which post-composes by 1Y1_Y at each component. This is indeed the identity natural transformation from Γ(p)=Γp\Gamma(p) = \Gamma_p to itself, as desired.

view this post on Zulip David Egolf (Apr 22 2024 at 16:28):

Finally, we need to show that Γ(ff)=Γ(f)Γ(f)\Gamma(f \circ f') = \Gamma(f) \circ \Gamma(f') for two composable bundle morphisms ff and ff'. Let's compare the components of these two natural transformations. The UthU_{th} component of Γ(ff)\Gamma(f \circ f') is a function that post composes fff \circ f' after a section ss, so it is the function s(ff)ss \mapsto (f \circ f') \circ s. The UthU_{th} component of Γ(f)Γ(f)\Gamma(f) \circ \Gamma(f') is given by composing the UthU_{th} component of Γ(f)\Gamma(f) after the UthU_{th} component of Γ(f)\Gamma(f'). That means that the UthU_{th} component of Γ(f)Γ(f)\Gamma(f) \circ \Gamma(f') corresponds to a function sf(fs)s \mapsto f \circ (f' \circ s). By associativity of composition, we conclude that the UthU_{th} component of Γ(ff)\Gamma(f \circ f') and Γ(f)Γ(f)\Gamma(f) \circ \Gamma(f') are equal. So, Γ(ff)=Γ(f)Γ(f)\Gamma(f \circ f') = \Gamma(f) \circ \Gamma(f'), as desired.

We conclude that Γ:Top/XO(X)^\Gamma: \mathsf{Top}/X \to \widehat{\mathcal{O}(X)} is indeed a functor!

view this post on Zulip David Egolf (Apr 23 2024 at 15:18):

I'm excited, because the next section of the current blog post talks about "germs"! The rough intuition I have for germs is that they can be used to describe the different possible "very local behaviours" super close to a point.

For example, "The Rising Sea" (by Vakil) defines germs at a point xx to be equivalence classes of smooth functions defined on open sets containing xx: we say that f:UYf:U \to Y is in the same equivalence class as f:UYf': U' \to Y if there is some VUUV \subseteq U \cap U' such that xVx \in V and fU=fUf|_U = f'|_{U'}. So, intuitively, two functions defined on open sets containing xx are in the same germ at xx if they restrict to the same function when we "zoom in" to some open set that is "close enough" to xx.

view this post on Zulip David Egolf (Apr 23 2024 at 15:23):

Above, we saw how to make a presheaf on XX from a bundle over XX. We now want to go in the other direction: can we make a bundle over XX from a presheaf on XX?

Presheaves on X and bundles over X can both be viewed as "attaching information" to parts of X. Given a bundle f: Y \to X, the data "attached" to some point x \in X is f^{-1}(x) \subseteq Y. Given a presheaf F: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}, the data "attached" to an open subset U is F(U).

So, to make a bundle from a presheaf, we need to figure out how to attach data to individual points of XX given data attached to each open subset of XX.

view this post on Zulip David Egolf (Apr 23 2024 at 15:36):

Assume we have some presheaf F: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}. We can come up with a set \Lambda(F)_x to "attach" to x \in X as follows: let O_x be the category of open subsets of X containing x (with a morphism U \to V exactly when V \subseteq U, as in \mathcal{O}(X)^{\mathrm{op}}), let I: O_x \to \mathcal{O}(X)^{\mathrm{op}} be the inclusion functor, and take \Lambda(F)_x to be the colimit of F \circ I: O_x \to \mathsf{Set}.

view this post on Zulip David Egolf (Apr 23 2024 at 15:40):

Next time, I'd like to think about \Lambda(F)_x in the particular case where F is the presheaf (which is also a sheaf) that sends each open subset U of X to the set of continuous real-valued functions :U \to \mathbb{R}.

view this post on Zulip John Baez (Apr 23 2024 at 15:59):

Good! That's a great example for getting a more concrete picture of germs. I recommend taking X = \mathbb{R} so you can actually graph these continuous functions and visualize these germs. And in this example I also recommend comparing the sheaves of continuous real-valued functions, smooth real-valued functions, and analytic real-valued functions.

Remember, a function from an open set of \mathbb{R} to \mathbb{R} is analytic if at each point it has a Taylor series with a positive radius of convergence.

The reason I bring this up is that derivatives, Taylor series and germs are three famous ways to study how a function looks in an arbitrarily small neighborhood of a point. And there are some revealing differences in the 3 cases listed above!

view this post on Zulip Peva Blanchard (Apr 23 2024 at 16:13):

I remember a funny function f:RRf: \mathbb{R} \rightarrow \mathbb{R}:

f(x)={0if x0e1x2otherwise f(x) = \begin{cases} 0 &\text{if } x \le 0 \\ e^{-\frac{1}{x^2}} &\text{otherwise} \end{cases}

which is smooth (therefore continuous).

One can try to describe the germ of ff at x=0x = 0, when regarded as a continuous, resp. smooth, function.

It also turns out that ff is not analytic, because of what happens at x=0x = 0.
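One standard way to make that precise (a classical computation, not from the post): every derivative of f vanishes at x = 0, so the Taylor series of f at 0 is identically zero; yet f is not the zero function on any neighborhood of 0, so it cannot equal its Taylor series there. In particular, f and the constant function 0 have the same Taylor series at 0 but different germs at 0 in the sheaf of smooth (or continuous) functions - one sense in which germs remember strictly more than Taylor series do.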

view this post on Zulip John Baez (Apr 23 2024 at 16:31):

Yes, this is a great example of how the germs of smooth functions differ from those of analytic functions. There is more to say about this but I'll let David proceed at his own desired pace so he's not "drinking from a firehose".

view this post on Zulip JR (Apr 23 2024 at 20:53):

Peva Blanchard said:

I remember a funny function f:RRf: \mathbb{R} \rightarrow \mathbb{R}:

f(x)={0if x0e1x2otherwise f(x) = \begin{cases} 0 &\text{if } x \le 0 \\ e^{-\frac{1}{x^2}} &\text{otherwise} \end{cases}

which is smooth (therefore continuous).

One can try to describe the germ of ff at x=0x = 0, when regarded as a continuous, resp. smooth, function.

It also turns out that ff is not analytic, because of what happens at x=0x = 0.

FWIW this function is often used (after integrating and introducing radial or similar coordinates) to construct (relatively) explicit partitions of unity.

view this post on Zulip David Egolf (Apr 24 2024 at 15:59):

I next want to understand this part of the blog post, where F is the presheaf which sends each open subset U of X to the set of continuous real-valued functions F(U) = \mathsf{Top}(U, \mathbb{R}):

By the definition of colimit, for any open neighborhood UU of xx we have a map FUΛ(F)xFU \to \Lambda(F)_x.

So any continuous real-valued function defined on any open neighborhood of xx gives a ‘germ’ of a function on xx. But also by the definition of colimit, any two such functions give the same germ iff they become equal when restricted to some open neighborhood of xx.

Specifically I'd like to prove the statement "any two such functions give the same germ iff they become equal when restricted to some open neighborhood of xx".

Once I've done that, then I think it would be good to do the things that John Baez suggests above: take X=RX = \mathbb{R} and make some graphs to visualize germs, and compare the germs we get when considering sheaves of continuous, smooth, or analytic R\mathbb{R}-valued functions. (Then I think it'll be time for the next puzzle, probably!)

view this post on Zulip David Egolf (Apr 24 2024 at 16:51):

I'm not sure where to start, but I think it may be helpful to get some sense for what a cone under our diagram FI:OxSetF \circ I: O_x \to \mathsf{Set} is like.

So, let α:FIΔS\alpha: F \circ I \to \Delta_S be a cone under FIF \circ I. Note that α\alpha is a natural transformation from FIF \circ I to the functor ΔS:OxSet\Delta_S: O_x \to \mathsf{Set} that is constant at the set SS. Since α\alpha is a natural transformation, all its "naturality squares" must commute.

Let's examine the naturality square for the morphism r:UVr:U \to V in OxO_x, where UU and VV are open subsets of XX each containing xx, such that VUV \subseteq U. Here's the corresponding naturality square:
square

Since \alpha is a cone under F \circ I, this diagram commutes. That means \alpha_V \circ |_V = \alpha_U. This will be useful to know in a moment. Intuitively, this tells us that restricting a continuous function (defined on an open subset containing x) to a smaller open subset containing x doesn't change the germ at x it corresponds to.

I'm interested in the case where two continuous functions f:URf:U \to \mathbb{R} and f:URf':U' \to \mathbb{R} get mapped to the same germ (same element of Λ(F)x\Lambda(F)_x). And I want to show that this happens when there is some open VUUV \subseteq U' \cap U so that xVx \in V and fV=fVf|_V = f'|_V. Let's draw part of a cone α:FIΔS\alpha: F \circ I \to \Delta_S (for some set SS) under the diagram FIF \circ I in the situation where there is some open VUUV \subseteq U \cap U' containing xx:
diagram

We have that αVV=αUU\alpha_V \circ |_V = \alpha_{U \cap U'} and αUUUUU=αU\alpha_{U \cap U'} \circ |_{U' \to U \cap U'} = \alpha_{U'}. That implies that αVVUUU=αU\alpha_V \circ |_V \circ |_{U' \to U \cap U'} = \alpha_{U'}. Similarly, αVVUUU=αU\alpha_V \circ |_V \circ |_{U \to U \cap U'} = \alpha_{U}

Let's now assume that we have continuous functions f:URf:U \to \mathbb{R} and f:URf':U' \to \mathbb{R} such that VUUU(f)=VUUU(f)|_V \circ |_{U \to U \cap U'}(f) = |_V \circ |_{U' \to U \cap U'}(f'). Thus, αVVUUU(f)=αVVUUU(f)\alpha_V \circ |_V \circ |_{U \to U \cap U'}(f) = \alpha_V \circ |_V \circ |_{U' \to U \cap U'}(f'). Therefore, αU(f)=αU(f)\alpha_{U}(f) = \alpha_{U'}(f').

So, we see that if f: U \to \mathbb{R} and f': U' \to \mathbb{R} restrict to the same function on some open subset of U \cap U' that contains x, then they get mapped to the same element by any cone under F \circ I: O_x \to \mathsf{Set}. In particular, they must correspond to the same germ at x!

It remains to show that if two continuous functions f: U \to \mathbb{R} and f': U' \to \mathbb{R} correspond to the same germ at x, then they must restrict to the same function on some open V \subseteq U \cap U' containing x. I'll leave that for next time, though.

view this post on Zulip John Baez (Apr 24 2024 at 17:38):

Good work! We'll be able to draw a lot of lessons from what you're doing now, because many of the ideas you're coming up with now (and will come up with next time :upside_down: ) apply in far more general situations than the one you're considering here. But I won't distract you with those lessons until you're done!

view this post on Zulip Jacob Zelko (Apr 24 2024 at 18:51):

I have to say a huge thank you to @David Egolf and @John Baez (and multiple others) for this wonderful discussion here on Topos Theory. I am not yet at the point of getting into these blogs like David has, but I am slowly beginning to catch hints of Topos Theory in some of the readings/investigations I have been doing. At some point I think I shall converge back to this discussion but am very thankful it is on this Zulip for future reference. At any rate, I follow along with great curiosity in silent reflection of these points!

view this post on Zulip David Egolf (Apr 25 2024 at 17:49):

Next, I want to show that if two continuous functions f: U \to \mathbb{R} and f': U' \to \mathbb{R} (with U and U' being open sets containing x) correspond to the same germ at x, then they must restrict to the same function on some open V \subseteq U \cap U' containing x.

To get there, I first want to think conceptually about what it means for our set of germs Λ(F)x\Lambda(F)_x to be (part of the data of) a colimit of FI:OxSetF \circ I: O_x \to \mathsf{Set}. The full data of the colimit of FIF \circ I is some natural transformation α:FIΔΛ(F)x\alpha: F \circ I \to \Delta_{\Lambda(F)_x}. By definition of a colimit, this cocone of FIF \circ I is initial in the category of cocones of FIF \circ I. That is, for every other cocone β:FIΔS\beta: F \circ I \to \Delta_S (where ΔS\Delta_S is the functor :OxSet:O_x \to \mathsf{Set} constant at some set SS), there is a unique natural transformation Δg:ΔΛ(F)xΔS\Delta_g: \Delta_{\Lambda(F)_x} \to \Delta_S so that Δgα=β\Delta_g \circ \alpha = \beta.

Here's a picture illustrating the situation:
picture

The triangle diagram lives in the category [O_x, \mathsf{Set}] of functors from O_x to \mathsf{Set}, together with natural transformations between them. The arrow at the bottom g: \Lambda(F)_x \to S is a function; it is a morphism in \mathsf{Set}. Note that a natural transformation from one constant functor to another is induced by a morphism from the object the first functor is constant at to the object the second functor is constant at.

In this diagram, \alpha: F \circ I \to \Delta_{\Lambda(F)_x} I think can be viewed as an "observation" of the functor F \circ I. I think our goal is to find the "most informative" observation of the functor F \circ I: O_x \to \mathsf{Set} whose target is some constant functor :O_x \to \mathsf{Set}. Indeed, I think that \alpha is the "most informative" observation of F \circ I, in the sense that any other observation \beta of it can be computed as \Delta_g \circ \alpha for some \Delta_g.

Let's think about this a bit more using components. Let \alpha_U: (F \circ I)(U) \to \Lambda(F)_x be the U-th component of \alpha, where U is some open subset of X containing x. The U-th component of \Delta_g is just g: \Lambda(F)_x \to S, and so the commutativity of our diagram implies that g \circ \alpha_U = \beta_U. Taking some particular f \in (F \circ I)(U), so that f: U \to \mathbb{R} is a continuous function defined on the open set U containing x, we learn that g(\alpha_U(f)) = \beta_U(f). So, given the germ that f belongs to, namely \alpha_U(f), we can compute the observation \beta_U(f) using some g. For this reason, I think it makes sense to say that the germ of a particular function f: U \to \mathbb{R} at x is the "most informative" observation of that function "locally about x".
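A concrete instance of such a \beta (my own example, not from the post): for each open U containing x, let \beta_U(f) = f(x), evaluation at x. Restriction doesn't change the value at x, so this really is a cocone, and the factorization through g says that the germ of f at x determines the value f(x). The converse fails - two functions can agree at x without agreeing near x - which matches the idea that the germ is a more informative observation than the value at the point.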

Next time, I want to make use of this intuition to show that if two continuous functions f: U \to \mathbb{R} and f': U' \to \mathbb{R} correspond to the same germ at x, then they must restrict to the same function on some open V \subseteq U \cap U' containing x. To do that, here's my current rough plan: assume, for contradiction, that the two functions do not restrict to the same function on any such V, build (from an explicit description of what the colimit should be) a cocone that separates them, and derive a contradiction with the universal property of \Lambda(F)_x.

view this post on Zulip John Baez (Apr 25 2024 at 23:01):

I don't think a proof by contradiction is necessary here, but you can try it and then perhaps straighten it out to a direct proof.

view this post on Zulip David Egolf (Apr 26 2024 at 15:43):

John Baez said:

I don't think a proof by contradiction is necessary here, but you can try it and then perhaps straighten it out to a direct proof.

This makes me want to find a direct proof! But I'll start out with the (attempted) proof by contradiction, and see what happens.

Let \alpha: F \circ I \to \Delta_{\Lambda(F)_x} be the proposed colimit of F \circ I: O_x \to \mathsf{Set}. And to obtain a contradiction, assume we have two continuous functions f: U \to \mathbb{R} and f': U' \to \mathbb{R} (with U and U' open sets containing x), such that \alpha_U(f) = \alpha_{U'}(f') (they have the same germ at x), but there is no open V \subseteq U \cap U' containing x on which f|_V = f'|_V.

We aim to construct a cocone β:FIΔS\beta: F \circ I \to \Delta_S (for some set SS) of FIF \circ I so that there is no Δg:ΔΛ(F)xΔS\Delta_g: \Delta_{\Lambda(F)_x} \to \Delta_S satisfying Δgα=β\Delta_g \circ \alpha = \beta. That would show that α\alpha can't possibly act in this way if we want it to be a colimit.

view this post on Zulip David Egolf (Apr 26 2024 at 15:51):

To construct \beta my plan is to use what I think is supposed to be the actual colimit. We define an equivalence relation on real-valued functions defined on an open set of X containing x. We decree that h: U \to \mathbb{R} \sim h': U' \to \mathbb{R} exactly if there is some open set V \subseteq U \cap U' containing x on which h|_V = h'|_V. Then, we form the set S by having one element per equivalence class. I'll call the element of S corresponding to the equivalence class of h: U \to \mathbb{R} by the name [h]. Then, we let \beta_U(h: U \to \mathbb{R}) = [h].

Notice that if there is no open V \subseteq U \cap U' containing x where f: U \to \mathbb{R} and f': U' \to \mathbb{R} restrict to the same function, then f and f' are not equivalent. That means that \beta_U(f) \neq \beta_{U'}(f'). We will aim to use this in a minute to obtain a contradiction.

There are a couple of things to show first, though: that \sim really is an equivalence relation, and that \beta really is a cocone of F \circ I.

(I'd love to finish this off today, but I think I'll need to rest up and come back to this hopefully tomorrow!)

view this post on Zulip Peva Blanchard (Apr 26 2024 at 16:58):

Yes, I think an explicit construction of the colimit as a quotient by an equivalence relation is the right way!

Here is a very basic example with finite sets. The bottom right corner is the colimit of the diagram consisting of the three other corners. The square brackets enclose equivalence classes.

image.png

The way I like to see it is in two steps: first we take the disjoint union (the bullets a,\dots,e), and then we glue things together by adding wires (labeled 0,1). The equivalence classes correspond to the connected components of the resulting graph.
image.png
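Here is a toy Python sketch of that two-step recipe for finite sets (the data and names are made up; the gluing is the usual union-find computation of connected components):

```python
def glue_disjoint_union(sets, wires):
    """sets: dict mapping a set-name to its elements.
    wires: pairs of tagged elements ((name1, x1), (name2, x2)) to be identified.
    Returns the equivalence classes of the disjoint union under the wires."""
    # Step 1: the disjoint union, with each element tagged by where it came from.
    parent = {(name, x): (name, x) for name, elems in sets.items() for x in elems}

    def find(a):                       # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    # Step 2: glue along the wires.
    for a, b in wires:
        parent[find(a)] = find(b)

    classes = {}
    for a in parent:
        classes.setdefault(find(a), set()).add(a)
    return list(classes.values())

# Example: glue element 'a' of A to element 'b' of B; 'x' and 'y' stay separate.
print(glue_disjoint_union({'A': ['a', 'x'], 'B': ['b', 'y']},
                          [(('A', 'a'), ('B', 'b'))]))
```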

view this post on Zulip David Egolf (Apr 26 2024 at 17:38):

That's a nice example! Your visualization of the "gluing" is very cool!

view this post on Zulip David Egolf (Apr 28 2024 at 17:24):

I think we're in the home stretch now. The next thing I want to do is to show that the following really defines an equivalence relation on real-valued functions mapping from open subsets of X that contain x \in X:

For any h:URh:U \to \mathbb{R}, hhh \sim h, because hU=hUh|_U = h|_U.

If h: U \to \mathbb{R} \sim h': U' \to \mathbb{R} and h': U' \to \mathbb{R} \sim h'': U'' \to \mathbb{R}, then we want to show that h \sim h''. Since h \sim h', there is some open V \subseteq U \cap U' containing x so that h|_V = h'|_V. And since h' \sim h'' there is some open V' \subseteq U' \cap U'' containing x so that h'|_{V'} = h''|_{V'}. Now, V \cap V' is open, contains x, and is a subset of both V and V'. So on V \cap V', we have that h|_{V \cap V'} = h'|_{V \cap V'} and h'|_{V \cap V'} = h''|_{V \cap V'}. Hence h|_{V \cap V'} = h''|_{V \cap V'} and thus h \sim h''.

If h:URh:URh:U \to \mathbb{R} \sim h':U' \to \mathbb{R}, we want to show that hhh' \sim h. Since hhh \sim h', there is some open VUUV \subseteq U \cap U' containing xx so that hV=hVh|_V = h'|_V. That implies that hV=hVh'|_V = h|_V and hence hhh' \sim h.

We conclude that \sim is indeed an equivalence relation on the set of real-valued functions having some open domain of XX that contains xx.

view this post on Zulip David Egolf (Apr 28 2024 at 17:34):

Next, I want to show that β\beta is a cocone of FIF \circ I. Recall from above that βU:(FI)(U)S\beta_U: (F \circ I)(U) \to S is defined as βU(h:UR)=[h]\beta_U(h:U \to \mathbb{R}) = [h], where [h][h] is the equivalence class of hh according to the equivalence relationship \sim.

To show that β\beta is a cocone of FIF \circ I, it suffices to show that any naturality square of β:FIΔS\beta: F \circ I \to \Delta_S commutes. Given a morphism r:UVr:U \to V in OxO_x (where UU and VV are open subsets of XX containing xx, with VUV \subseteq U), here is the corresponding naturality square:
naturality square

To show this square commutes, it suffices to show that βU=βVV\beta_U = \beta_V \circ |_V. At a particular element of (FI)(U)(F \circ I)(U), say f:URf:U \to \mathbb{R}, that means that βU(f)=βVV(f)\beta_U(f) = \beta_V \circ |_V(f).

To show this is true, it suffices to show that ffVf \sim f|_V. Since VV is an open subset of UVU \cap V containing xx, and fV=(fV)Vf|_V = (f|_V)|_V, we conclude that ffVf \sim f|_V, and hence βU(f)=[f]=[fV]=βVV(f)\beta_U(f) = [f] = [f|_V] = \beta_V \circ |_V(f). This holds for any f(FI)(U)f\in (F \circ I)(U), and so βU=βVV\beta_U = \beta_V \circ |_V for any such UU and VV.

We conclude that an arbitrary naturality square of β\beta commutes, so that β\beta is a natural transformation :FIΔS:F \circ I \to \Delta_S, and thus a cocone.

view this post on Zulip David Egolf (Apr 28 2024 at 17:48):

Now, we are in a good spot to demonstrate a contradiction. Recall that we assumed that:

- αU(f)=αU(f)\alpha_U(f) = \alpha_{U'}(f'), and
- there is no open set VUUV \subseteq U \cap U' containing xx on which ff and ff' restrict to the same function, so that βU(f)βU(f)\beta_U(f) \neq \beta_{U'}(f').

We will now show that there is no natural transformation Δg:ΔΛ(F)xΔS\Delta_g: \Delta_{\Lambda(F)_x} \to \Delta_S such that Δgα=β\Delta_g \circ \alpha = \beta. (Which would be a contradiction, because α\alpha is supposed to be a colimit). For this equation to hold, it must hold at every component. In particular, we must have (Δg)UαU=βU(\Delta_g)_U \circ \alpha_U = \beta_U and (Δg)UαU=βU(\Delta_g)_{U'} \circ \alpha_{U'} = \beta_{U'}. Noting that every component of Δg\Delta_g is just gg, we have that gαU=βUg \circ \alpha_U = \beta_U and gαU=βUg \circ \alpha_{U'} = \beta_{U'}.

We also know that αU(f)=αU(f)\alpha_U(f) = \alpha_{U'}(f'). Using all this, we conclude that βU(f)=gαU(f)=gαU(f)=βU(f)\beta_{U'}(f') = g \circ \alpha_{U'}(f') = g \circ \alpha_{U}(f) = \beta_U(f). But this is a contradiction: by definition of ff and ff' we can't possibly have βU(f)=βU(f)\beta_{U'}(f') = \beta_U(f). (βU(f)=βU(f)\beta_{U'}(f') = \beta_U(f) would imply that fff \sim f', which would imply that there is some open set containing xx in the intersection of the domains of ff and ff' where they restrict to the same function - and we know this is false by the assumptions we have placed on ff and ff').

We conclude that if α:FIΔΛ(F)x\alpha: F \circ I \to \Delta_{\Lambda(F)_x} is to be the colimit of FIF \circ I, then two functions with the same germ at xx must have some open neighborhood in the intersection of their domains containing xx such that they restrict to the same function!

view this post on Zulip David Egolf (Apr 28 2024 at 17:53):

I'll pause here for now, and plan to focus on some examples of germs next time!

view this post on Zulip Peva Blanchard (Apr 29 2024 at 08:46):

In case you are interested, here is, I think, a more direct proof. It amounts to showing that the quotient you suggest satisfies the universal property of the colimit.

spoiler

view this post on Zulip David Egolf (Apr 29 2024 at 16:06):

That is interesting! I'm trying to understand what you just wrote...

I think there might be a typo. If I understand correctly, we have that UVU \subseteq V, but the morphism from FVFV to FUFU is called V-|V in the diagrams above. I would have expected it to be called something more like U-|U, as I think it corresponds to restricting from VV to UU.

view this post on Zulip David Egolf (Apr 29 2024 at 16:13):

So, I think you start out by describing the cocone ι:FIΔΛ(F)x\iota: F \circ I \to \Delta_{\Lambda(F)_x} (to use the notation I was using above), where the component ιU:(FI)(U)Λ(F)x\iota_U:(F \circ I)(U) \to\Lambda(F)_x sends each element of FUFU to its equivalence class under \sim.

Then, to show this cocone satisfies the universal property of the colimit, you introduce another cocone :FIΔA: F \circ I \to \Delta_A having UU-th component aU:(FI)(U)Aa_U:(F \circ I)(U) \to A.

view this post on Zulip David Egolf (Apr 29 2024 at 16:15):

Next, you define an a:S(F)xAa:S(F)_x \to A. Here, S(F)xS(F)_x is the disjoint union of the sets (FI)(U)(F \circ I)(U) as UU varies over open sets containing xx. Because the disjoint union is the coproduct in Set\mathsf{Set}, a collection of functions aU:(FI)(U)Aa_U: (F \circ I)(U) \to A induces a function a:S(F)xAa:S(F)_x \to A.

view this post on Zulip David Egolf (Apr 29 2024 at 16:23):

I think you then note that if f:URg:VRf:U \to \mathbb{R} \sim g:V \to \mathbb{R} then a(f)=a(g)a(f) = a(g). If fgf \sim g, that means there is some WUVW \subseteq U \cap V containing xx where fW=gWf|_W = g|_W. In this situation, this diagram commutes:
diagram

Since W(f)=W(g)|_W(f) = |_W(g), aWW(f)=aWW(g)a_W \circ |_W(f) = a_W \circ |_W(g). By commutativity of the diagram, this implies that aU(f)=aV(g)a_U(f) = a_V(g). Since aa is induced using the universal property of disjoint unions, this implies that indeed a(f)=a(g)a(f) = a(g).

view this post on Zulip David Egolf (Apr 29 2024 at 16:32):

Now, a:S(F)xAa:S(F)_x \to A. We want to use aa to induce a natural transformation from ΔΛ(F)x\Delta_{\Lambda(F)_x} to ΔA\Delta_A. To do this, we just need a morphism α:Λ(F)xA\alpha:\Lambda(F)_x \to A.

At this point, I think we want to use something like the "universal property of quotients" to induce our α\alpha. I don't remember how that stuff goes very well right now... But I assume the basic idea is to set α([f])=a(f)\alpha([f]) = a(f).

We have to show this is well-defined. If fgf \sim g, then α([f])=a(f)\alpha([f]) = a(f) and α([g])=a(g)\alpha([g]) = a(g), but since fg    a(f)=a(g)f \sim g \implies a(f)=a(g), we learn that α([f])=α([g])\alpha([f]) = \alpha([g]). So, α\alpha is indeed well-defined.

view this post on Zulip David Egolf (Apr 29 2024 at 16:48):

I think it just remains to show that α:Λ(F)xA\alpha: \Lambda(F)_x \to A:

- induces a morphism of cocones from our cocone with tip Λ(F)x\Lambda(F)_x to our cocone with tip AA, and
- is the unique morphism that does so.

view this post on Zulip David Egolf (Apr 29 2024 at 16:51):

To show that α\alpha induces a morphism of cocones, we need to show that αιU=aU\alpha \circ \iota_U = a_U for all UOxU \in O_x. For some f(FI)(U)f \in (F \circ I)(U), we have α(ιU(f))=α([f])=a(f)=aU(f)\alpha(\iota_U(f)) = \alpha([f]) = a(f) = a_U(f), as desired.

view this post on Zulip David Egolf (Apr 29 2024 at 16:53):

Finally, we want to show that α:Λ(F)xA\alpha: \Lambda(F)_x \to A is the unique morphism :Λ(F)xA:\Lambda(F)_x \to A that induces a morphism of cocones from our cocone with tip Λ(F)x\Lambda(F)_x to our cocone with tip AA.

view this post on Zulip David Egolf (Apr 29 2024 at 17:25):

So, we just saw that we need αιU(f)=aU(f)=a(f)\alpha \circ \iota_U(f) = a_U(f) = a(f) for all UU. Since ιU\iota_U projects to equivalence classes, this means we need α([f])=a(f)\alpha([f]) = a(f). As UU varies, we'll obtain this condition for all equivalence classes. So, I think α([f])=a(f)\alpha([f]) = a(f) for all [f]Λ(F)x[f] \in \Lambda(F)_x is forced, if α\alpha is to be a morphism of our cocones.

I think that means we can conclude that α\alpha does indeed induce the unique morphism from our cocone with tip Λ(F)x\Lambda(F)_x to our cocone with tip AA. We conclude that our cocone with tip Λ(F)x\Lambda(F)_x is indeed initial, and so it is indeed the colimit of our diagram!

view this post on Zulip David Egolf (Apr 29 2024 at 17:26):

Thanks, @Peva Blanchard , for working out the direct proof! I found it interesting and helpful to review. :smile:

view this post on Zulip David Egolf (Apr 29 2024 at 17:35):

Starting to move in the direction of examples of germs, there is a nice example in the book "An Introduction to Manifolds" (by Tu), on page 12:

The functions f(x)=1/(1x)f(x) = 1/(1-x) with domain R{1}\mathbb{R} - \{1\} and g(x)=1+x+x2+x3+g(x) = 1 + x + x^2 + x^3+\dots with domain the open interval ]1,1[]-1,1[ have the same germ at any point pp in the open interval ]1,1[]-1,1[.
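As a quick sanity check of this example (just a sketch using sympy; the sample point and the number of terms below are my own choices, not from the book):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 - x)

# Taylor expansion of 1/(1-x) at 0: every coefficient is 1, matching the
# geometric series 1 + x + x^2 + x^3 + ...
print(sp.series(f, x, 0, 8))

# At a sample point p in ]-1, 1[ the partial sums of the geometric series
# approach f(p), so the two functions agree near p.
p = 0.3
print(f.subs(x, p), sum(p**k for k in range(60)))
```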

view this post on Zulip Peva Blanchard (Apr 29 2024 at 17:53):

I think there might be a typo. If I understand correctly, we have that UVU \subseteq V, but the morphism from FVFV to FUFU is called V-|V in the diagrams above. I would have expected it to be called something more like U-|U, as I think it corresponds to restricting from VV to UU.

Oh you're right! Yes I made a typo in the diagrams.

view this post on Zulip David Egolf (Apr 30 2024 at 17:25):

Before moving on to the next puzzle, I'd like to try and visualize a germ for the presheaf F:O(R)opSetF: \mathcal{O}(\mathbb{R})^{\mathrm{op}} \to \mathsf{Set}, which sends each open subset UU of R\mathbb{R} to the set of continuous real-valued functions :UR:U \to \mathbb{R}.

To visualize a germ at xRx \in \mathbb{R}, (which is an element of Λ(F)x\Lambda(F)_x), I'll draw a little cartoon of a bunch of continuous functions (defined on different open sets containing xx) that correspond to the same germ. That is, they become the same function when restricted to a "small enough" open set containing xx.

germ visualization

view this post on Zulip David Egolf (Apr 30 2024 at 17:31):

I'd be happy to talk more about examples of germs (e.g. in the continuous vs smooth vs analytic cases), but I don't really know how to go about comparing those. So I'll move on to the next puzzle. But if you have something you'd like to say regarding examples of germs, please feel welcome to share your thoughts here!

view this post on Zulip David Egolf (Apr 30 2024 at 17:37):

Here is the next puzzle, together with some context:

Show that with this topology on Λ(F)\Lambda(F) the map p:Λ(F)Xp:\Lambda(F) \to X is continuous.

Context:

I am still working to understand the proposed topology on Λ(F)\Lambda(F).

view this post on Zulip Kevin Carlson (Apr 30 2024 at 17:38):

David Egolf said:

I'd be happy to talk more about examples of germs (e.g. in the continuous vs smooth vs analytic cases), but I don't really know how to go about comparing those. So I'll move on to the next puzzle. But if you have something you'd like to say regarding examples of germs, please feel welcome to share your thoughts here!

One very important fact about analytic germs is that you know how to name all of them! In fact you probably learned how in a calculus course.

view this post on Zulip David Egolf (Apr 30 2024 at 17:52):

Thanks, @Kevin Carlson for your comment! It's been a while since I took a calculus course, and I can't remember if we ever used the word "analytic". But let me see if I can figure out what you're hinting at.

I'll be referencing John Baez's remark above:

Remember, a function from an open set of R\mathbb{R} to R\mathbb{R} is analytic if at each point it has a Taylor series with a positive radius of convergence.

One way to put an equivalence relationship on a set SS is to use a function f:SPf:S \to P where f(s)f(s) is some property of ss. Then we let ss    f(s)=f(s)s \sim s' \iff f(s) = f(s').
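As a tiny illustration (a made-up toy example, nothing more): the equivalence classes are exactly the fibres of ff.

```python
from collections import defaultdict

def classes_by(f, S):
    """Partition S into equivalence classes, where s ~ s' iff f(s) == f(s')."""
    fibres = defaultdict(list)
    for s in S:
        fibres[f(s)].append(s)
    return list(fibres.values())

# Toy example: the integers 0..9, with f(s) = s mod 3.
print(classes_by(lambda s: s % 3, range(10)))
```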

If SS is the set of real-valued analytic functions, with each element of SS defined in some open set URU \subseteq \mathbb{R} containing xRx \in \mathbb{R}, I want to try setting f(s)f(s) to be the Taylor series of ss about xx. Then I'm hoping that the equivalence relationship induced by ff is the same as the equivalence relationship "belongs to the same germ". If that works out, I am hoping that would imply that the analytic germs at xx are in bijection with the Taylor series about xx that converge in some open subset of R\mathbb{R} containing xx.

view this post on Zulip David Egolf (Apr 30 2024 at 17:59):

If f(s)=f(s)f(s) = f(s'), that implies that ss and ss' have the same Taylor series about xx. Because ss and ss' are analytic, f(s)f(s) and f(s)f(s') both have a positive radius of convergence about xx. I think that means that ss and ss' become equal when restricted to this region of convergence about xx. And this restricted function is still analytic, so I think this implies that ss and ss' belong to the same analytic germ at xx.

view this post on Zulip David Egolf (Apr 30 2024 at 18:01):

If ss and ss' belong to the same analytic germ at xx, then they are both analytic and have some common analytic restriction to some open subset about xx. That restriction, being analytic, can be expressed as a Taylor series in some region with a positive radius of convergence about xx. And so, ss and ss' have the same Taylor series about xx when we are "close enough" to xx. I am hoping that implies that ss and ss' must have the same Taylor series about xx, so that f(s)=f(s)f(s) = f(s').

view this post on Zulip David Egolf (Apr 30 2024 at 18:03):

Well, I feel rather shaky on this stuff. Any corrections or clarifications would be appreciated! :smile:

view this post on Zulip Kevin Carlson (Apr 30 2024 at 18:05):

That’s the idea! Sounds like you’re still just a little stuck on whether having the same Taylor series on a small enough neighborhood of a point means you have the same Taylor series at that point. But there’s no difference between “my Taylor series near aa” and “my Taylor series at aa”, because, recall, the Taylor series is calculated by calculating all the derivatives of ss at a.a. So if two analytic functions agree near aa, they have the same Taylor series there. And conversely, since you compute the functions by actually plugging into the Taylor series where it converges! Hopefully that wasn’t handing you anything it would’ve been more fun to figure out on your own, just trying to help remind you of some old calculus stuff.

view this post on Zulip David Egolf (Apr 30 2024 at 18:13):

Thanks for clarifying! That makes sense: since a Taylor series is computed entirely using information "extremely close" to aa (by computing s(a),s(a),s(a),s(a), s'(a), s''(a), \dots), if two analytic functions agree on some open set containing aa, they must have the same Taylor series at aa. (All the derivatives are computed using limits which only care about behaviour as we get "really close" to aa: we'll eventually get inside the open set where these two functions agree during the limiting process). In particular, if two analytic functions obtain the same Taylor series at aa when we restrict both of them to some open set about aa (which means they agree on some open set containing aa), then the two original analytic functions must have the same Taylor series at aa.

view this post on Zulip Peva Blanchard (Apr 30 2024 at 18:30):

Yes! Now we have everything to explain why this function is not analytic.

f(x)={0if x0e1x2otherwise f(x) = \begin{cases} 0 &\text{if } x \le 0 \\ e^{-\frac{1}{x^2}} &\text{otherwise} \end{cases}

spoiler

view this post on Zulip John Baez (May 01 2024 at 07:18):

David Egolf said:

Context:

Yes, the disjoint union (aka "coproduct"). You'd never want to say two germs at two different points xx are equal.

view this post on Zulip John Baez (May 01 2024 at 07:21):

If my blog post left that unclear, I should fix it.

view this post on Zulip John Baez (May 01 2024 at 07:23):

If you haven't thought much about analytic functions, it might help to know that @Peva Blanchard is giving the standard example to show how the concept is a bit subtle. This is a function that has an nn th derivative at x=0x = 0 for all n=0,1,2,n = 0, 1,2, \dots, which is still not analytic. In fact all these derivatives are zero, yet the germ of this function at x=0x = 0 is nonzero!

view this post on Zulip John Baez (May 01 2024 at 07:34):

Maybe it's good to think about something much less weird:

Puzzle. Find a function that vanishes at x=0x = 0, along with its first million derivatives:

f(0)=0,dfdx(0)=0,d2fdx2(0)=0,,d1,000,000fdx1,000,000(0)=0 f(0) = 0, \frac{df}{dx}(0) = 0, \frac{d^2 f}{dx^2}(0) = 0, \dots, \frac{d^{1,000,000} f}{dx^{1,000,000}}(0) = 0

but is nonzero for all x0x \ne 0.

view this post on Zulip John Baez (May 01 2024 at 07:38):

Peva's example is much stranger, because we don't stop at a million or any finite number - all the derivatives of this function are well-defined for all xx, and they all vanish at x=0x = 0, but the function is nonzero for all x>0x > 0.

view this post on Zulip John Baez (May 01 2024 at 18:02):

The point of Peva's example is that if you have a function f ⁣:RRf \colon \mathbb{R} \to \mathbb{R} that is infinitely differentiable, its germ at x = 0 can contain more information than all its derivatives at x = 0. But for analytic functions, all the information about the germ is contained in the derivatives - since you can recover the function from its power series, at least in some neighborhood of x = 0.

view this post on Zulip David Egolf (May 01 2024 at 18:38):

Thanks to both of you for your comments! I'm taking a little break today from this thread, but I hope to return to it tomorrow. The idea that a smooth function can have more information in its germ at a point (in addition to the values of all its derivatives at that point) is interesting, and I look forward to responding in more detail to your comments soon.

view this post on Zulip David Egolf (May 02 2024 at 16:05):

John Baez said:

Maybe it's good to think about something much less weird:

Puzzle. Find a function that vanishes at x=0x = 0, along with its first million derivatives:

f(0)=0,dfdx(0)=0,d2fdx2(0)=0,,d1,000,000fdx1,000,000(0)=0 f(0) = 0, \frac{df}{dx}(0) = 0, \frac{d^2 f}{dx^2}(0) = 0, \dots, \frac{d^{1,000,000} f}{dx^{1,000,000}}(0) = 0

but is nonzero for all x0x \ne 0.

The first idea that comes to mind for me is to try f(x)=xnf(x)=x^n for nn big enough. Each derivative we take reduces the exponent of xx by 11. I think this implies that the first n1n-1 derivatives all vanish at x=0x=0. (Eventually though, after we take nn derivatives, we get f(n)(x)=n!f^{(n)}(x) = n! which is non-zero at x=0x=0.) I think setting n=1,000,000+1n=1,000,000+1 gives us a function f(x)=x1,000,001f(x)=x^{1,000,001} that meets the requirements of the puzzle.
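Here's a quick check of that pattern with a much smaller exponent (a sketch using sympy, with n=6n=6 standing in for the huge exponent):

```python
import sympy as sp

x = sp.symbols('x')
n = 6            # small stand-in for 1,000,001
f = x**n

# f and its first n-1 derivatives vanish at x = 0; the nth derivative is n!.
for k in range(n + 1):
    print(k, sp.diff(f, x, k).subs(x, 0))
```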

view this post on Zulip David Egolf (May 02 2024 at 16:09):

John Baez said:

David Egolf said:

Context:

Yes, the disjoint union (aka "coproduct"). You'd never want to say two germs at two different points xx are equal.

John Baez said:

If my blog post left that unclear, I should fix it.

I was fairly sure we don't ever want to consider two germs at different points to be equal, but I started slightly worrying about this issue because the \bigcup symbol was used instead of the \coprod symbol in the blog post:
notation

Actually, I suppose that each of Λ(F)x\Lambda(F)_x is only defined up to isomorphism if we just require each Λ(F)x\Lambda(F)_x to be (part of) a colimit of an appropriate diagram. From that perspective, it seems bad to take the union of these Λ(F)x\Lambda(F)_x as xx varies, because the union is an operation that cares about the equality of elements of the different sets we are taking a union of. (And we can change which elements in different Λ(F)x\Lambda(F)_x are equal by swapping out isomorphic copies of some Λ(F)x\Lambda(F)_x).

view this post on Zulip David Egolf (May 02 2024 at 16:14):

Peva Blanchard said:

Yes! Now we have everything to explain why this function is not analytic.

f(x)={0if x0e1x2otherwise f(x) = \begin{cases} 0 &\text{if } x \le 0 \\ e^{-\frac{1}{x^2}} &\text{otherwise} \end{cases}

spoiler


Huh! I suppose this function "takes off" from zero so slowly that all its derivatives at 00 don't even notice! So we have two smooth functions (this one, and the function constant at zero) that have the same Taylor series at 00, but there is no open set containing 00 in which those two functions restrict to the same function!

In this example, we see that computing all the derivatives at a point xx of a smooth function doesn't always determine uniquely which smooth germ at xx that function belongs to.
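Here's a small check of that with sympy (just a sketch, and only for the first few derivatives): the one-sided limits of the derivatives as xx approaches 00 from the right all come out to 00, even though the function itself is positive for every x>0x > 0.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x**2)

# One-sided limits of f and its first few derivatives as x -> 0+.
for k in range(4):
    print(k, sp.limit(sp.diff(f, x, k), x, 0, '+'))
```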

view this post on Zulip David Egolf (May 02 2024 at 16:22):

I find myself wondering what additional information (in addition to the value of all the derivatives) is needed to determine the germ that a smooth function ff belongs to at some point xx. I suppose we'd like to find some information that determines ff on some small enough neighborhood of xx. We just saw that all the values of the derivatives of ff at xx aren't always going to be enough to do this! So we need some additional information.

But I'm unsure how we could go about discovering what that additional information is.

view this post on Zulip John Baez (May 02 2024 at 16:56):

David Egolf said:

John Baez said:

Puzzle. Find a function that vanishes at x=0x = 0, along with its first million derivatives:

f(0)=0,dfdx(0)=0,d2fdx2(0)=0,,d1,000,000fdx1,000,000(0)=0 f(0) = 0, \frac{df}{dx}(0) = 0, \frac{d^2 f}{dx^2}(0) = 0, \dots, \frac{d^{1,000,000} f}{dx^{1,000,000}}(0) = 0

but is nonzero for all x0x \ne 0.

I think setting n=1,000,000+1n=1,000,000+1 gives us a function f(x)=x1,000,001f(x)=x^{1,000,001} that meets the requirements of the puzzle.

Yes, that's the best solution of this puzzle!

view this post on Zulip John Baez (May 02 2024 at 16:59):

David Egolf said:

We just saw that all the values of the derivatives of ff at xx aren't always going to be enough to do this! So we need some additional information.

But I'm unsure how we could go about discovering what that additional information is.

In a sense the difficulty of this question is why the concept of 'germ' is so useful: the germ of a function is the tautological answer to this question!

view this post on Zulip Peva Blanchard (May 02 2024 at 17:03):

The question is interesting.

I'm wondering how we could "measure" the "complexity" of the set of germs at xx. For instance, the analytic germs at xx seem to form a vector space of countable dimension (I think).

view this post on Zulip Kevin Carlson (May 02 2024 at 17:07):

It's not exactly of countable dimension, in the usual linear algebra sense, since the space of infinite sequences has uncountable dimension (you can't get any Taylor series with infinitely many nonzero coefficients as a linear combination of xnx^ns!) But it's "countable-dimensional" in the functional analysis sense, which is that there's a countable "basis" when you allow for convergent infinite sums from that basis, or similarly, the linear span of that countable "basis" is dense. So one way of taking David's interesting question is, whether we can find an explicit basis for the germs of smooth functions at a point. I don't know the answer but I have an intuition that we cannot!

view this post on Zulip Peva Blanchard (May 02 2024 at 17:25):

It is tricky because this requires (at least) a topology on the set Λ(F)x\Lambda(F)_x of germs. There is the coarsest topology making the projection p:xXΛ(F)xXp: \bigsqcup_{x\in X} \Lambda(F)_x \rightarrow X continuous. But, this is not enough: the induced topology on the subset Λ(F)x\Lambda(F)_x is trivial (I think).

view this post on Zulip Peva Blanchard (May 02 2024 at 17:32):

We would need to "topologize" the sheaf of continuous/smooth/analytic functions: each F(U)F(U) is a topological space (instead of just a set) for every open subset UXU \subseteq X.

view this post on Zulip Peva Blanchard (May 02 2024 at 17:32):

Mmh ... this is going too far out of my reach, so I'll stop there.

view this post on Zulip John Baez (May 03 2024 at 07:21):

If you want a vector space of germs that's of countably infinite dimension, the nicest choice is the sheaf of polynomial functions on the real line, or the complex plane...

... or any algebraic variety, which is roughly a space described by a bunch of polynomial equations, like the space of solutions of x2=y3+yx^2 = y^3 + y. But people call polynomial functions on algebraic varieties regular functions.

Algebraic varieties are the traditional object of study of algebraic geometry, and the sheaf of regular functions on an algebraic variety became the star of algebraic geometry: people call it O\mathcal{O}.

You can define algebraic varieties over various fields, but the most traditional case uses C\mathbb{C}. For any 'smooth' nn-dimensional complex algebraic variety, the germ of its sheaf of regular functions at any point is isomorphic to the germ of the sheaf of polynomial functions on Cn.\mathbb{C}^n.

After algebraic varieties were quite well understood and Grothendieck started chafing at their limitations, he defined the concept of 'scheme', which is roughly a topological space equipped with a sheaf that acts like the sheaf of regular functions.

I'm not giving a precise definition here, but it's very notable that the concept of scheme explicitly involves the concept of sheaf! So modern algebraic geometry, which uses schemes, is heavily reliant on sheaves.

view this post on Zulip John Baez (May 03 2024 at 07:26):

If we start exploring sheaves that are like the sheaf of analytic functions, we are moving in a somewhat different direction. There's an important concept of complex manifold, which is a space covered by 'charts' that are copies of Cn\mathbb{C}^n for some nn, with transition functions that are analytic. Any such manifold has a sheaf of analytic functions on it... and the germ of this sheaf at any point is isomorphic to the germ of analytic functions at any point of Cn\mathbb{C}^n.

view this post on Zulip John Baez (May 03 2024 at 07:33):

Just as we can have algebraic varieties that aren't smooth, like the space of solutions of x2=y3x^2 = y^3 (which has a sharp 'cusp' at the origin), we can also define complex analytic varieties, which generalize complex manifolds but don't need to be smooth. I know almost nothing about these, but they're again defined using sheaves.

view this post on Zulip John Baez (May 03 2024 at 08:53):

David Egolf said:

Peva Blanchard said:

f(x)={0if x0e1x2otherwise f(x) = \begin{cases} 0 &\text{if } x \le 0 \\ e^{-\frac{1}{x^2}} &\text{otherwise} \end{cases}

Huh! I suppose this function "takes off" from zero so slowly that all its derivative at 00 don't even notice!

Right. But it's very peculiar. It's like starting your car so smoothly that at first you don't accelerate at all.

For the nth derivative to become bigger than zero, the (n+1)st derivative needs to be bigger than zero first... and here that happens for all n, yet all these derivatives start at zero.

view this post on Zulip John Baez (May 03 2024 at 08:59):

How does it work?

As you increase xx from 0 to ++\infty this function goes from 0 to 1. Its first derivative goes up to about 1/2 and then goes down. But before that, its second derivative goes up to 3... later it goes down. And before that, its third derivative goes up to about 20. And before that, its fourth derivative goes up to about 200. And so on.

view this post on Zulip John Baez (May 03 2024 at 09:01):

So while you may feel the function takes off very gently, because all its derivatives are zero at x=0x = 0, in fact there's a huge flurry of activity going on for arbitrarily small xx.

view this post on Zulip David Egolf (May 03 2024 at 17:23):

Looking at those examples of germs was interesting! And it's cool to learn that sheaves get used in all kinds of places. It's always a nice bonus when learning about one thing makes it a bit easier to learn some other things!

I want to return my attention to the next puzzle, which has to do with putting a topology on Λ(F)=xXΛ(F)x\Lambda(F) = \coprod_{x \in X} \Lambda(F)_x, the set of all our germs for our sheaf FF of continuous real-valued functions on XX. I'm still trying to understand the topology described in the blog post - but I'll write out my understanding so far.

We'd really like to have a "germ bundle" p:Λ(F)Xp: \Lambda(F) \to X that sends each germ to the point xXx \in X that it is associated with. (Each germ in Λ(F)\Lambda(F) is associated with exactly one point xXx \in X, as it belongs to exactly one of Λ(F)x\Lambda(F)_x as xx ranges over XX). If we could construct a bundle from a sheaf in this way, then we'd be able to think about sheaves from the perspective described here ("Sheaves in Geometry and Logic", page 64):

Alternatively, a sheaf FF on XX can be described as a rule which assigns to each point xx of the space a set FxF_x consisting of the "germs" at xx of the functions to be considered, as defined in neighborhoods of the point xx... Viewed in this way, the sheaf FF is a set FxF_x which "varies" (with the point xx) over the space XX.

view this post on Zulip David Egolf (May 03 2024 at 17:27):

Now, for p:Λ(F)Xp: \Lambda(F) \to X to really be a bundle, it needs to be a continuous function. To talk about its continuity, we need to put a topology on Λ(F)\Lambda(F). However, referencing pages 84-85 of "Sheaves in Geometry and Logic", this isn't the only function that we want to be continuous when we select an appropriate topology for Λ(F)\Lambda(F).

We also have some other interesting functions which we'd like to be continuous, so that they can be sections of our bundle. Given some sF(U)s \in F(U), so in our case s:URs:U \to \mathbb{R} in Top\mathsf{Top}, we define a function g(s):UΛ(F)g(s):U \to \Lambda(F) ("g" refers to "germ") defined by g(s)(x)=[s]xg(s)(x) = [s]_x, where by [s]x[s]_x I mean the germ of ss at the point xUx \in U. Note that pg(s)(x)=p([s]x)=xp \circ g(s)(x) = p([s]_x) = x, so that if g(s):UΛ(F)g(s):U \to \Lambda(F) was continuous, it would provide a section of our bundle p:Λ(F)Xp: \Lambda(F) \to X. In this way, we are hoping to associate each element of a sheaf set F(U)F(U) (which is a set of continuous real-valued functions in the case of our FF) to a corresponding section of our germ bundle p:Λ(F)Xp: \Lambda(F) \to X. To make this happen, we need to choose the topology on Λ(F)\Lambda(F) appropriately, so that g(s):UΛ(F)g(s):U \to \Lambda(F) is continuous (for each sF(U)s \in F(U) as UU varies).

view this post on Zulip David Egolf (May 03 2024 at 17:32):

I'll stop here for today. Next time, I'm planning to look at the minimal collection of open sets we need to put in our topology for Λ(F)\Lambda(F) so that p:Λ(F)Xp:\Lambda(F) \to X becomes continuous. Then I think I want to check that a given g(s):UΛ(F)g(s):U \to \Lambda(F) still has a hope of being continuous, even after we've declared those subsets of Λ(F)\Lambda(F) to be open.

view this post on Zulip John Baez (May 03 2024 at 20:57):

John Baez said:

David Egolf said:

We just saw that all the values of the derivatives of ff at xx aren't always going to be enough to do this! So we need some additional information.

But I'm unsure how we could go about discovering what that additional information is.

In a sense the difficulty of this question is why the concept of 'germ' is so useful: the germ of a function is the tautological answer to this question!

By the way, I'd like to know any sort of answer to this question. I was optimistic when I saw this question on Math Stack Exchange:

but the answers were completely useless (except for one answerer who told the original questioner that he was asking about the germ of a smooth function: he hadn't known this concept had a name). I think there should be interesting things to say about this question even if a fully satisfying answer is not known.

view this post on Zulip David Egolf (May 03 2024 at 23:56):

One very rough idea that comes to mind: instead of taking the limit of something like (f(x)f(0))/x(f(x)-f(0))/x as xx approaches 00, maybe we could consider "taking a limit" of the truth values of a bunch of propositions like "f(x)>0f(x)>0" as xx approaches 0.

When f(x)=e1/x2f(x) = e^{-1/x^2} for x>0x>0, we get a sequence of truth values that looks like: true, true, true... as we assess the truth value of "f(x)>0f(x)>0" as xx approaches 00. By contrast, if f(x)=0f(x)=0 for all xx, then the sequence of truth values we get from "f(x)>0f(x)>0" is false, false, false.. as we assess the truth value of "f(x)>0f(x)>0" as xx approaches zero.

I'm not sure how useful this is... I was just trying to think of "measurements really close to 0" that determine that the zero function is different from our "slow takeoff" function which at x=0x=0 is zero and has all derivatives equal to zero.

view this post on Zulip David Egolf (May 04 2024 at 00:00):

John Baez said:

So while you may feel the function takes off very gently, because all its derivatives are zero at x=0x = 0, in fact there's a huge flurry of activity going on for arbitrarily small xx.

I also wonder how one could formalize this "huge flurry of activity". Maybe that could be helpful for distinguishing these functions from one another using some kind of measurement involving a limiting process which approaches x=0x=0?

view this post on Zulip Peva Blanchard (May 04 2024 at 00:04):

Here is a baby step in that direction.

Let CC_{\infty} be the sheaf of smooth functions on the unit interval X=[0,1]X = [0,1], and FF be any sub-sheaf of CC_{\infty}.

Given a smooth function ff, I want to consider its derivatives all at once. So we can consider the N\mathbb{N}-fold power of FF, namely, the sheaf nNF\prod\limits_{n\in\mathbb{N}}F. We have a natural transformation η:FnNF\eta : F \rightarrow \prod\limits_{n\in\mathbb{N}}F whose component over an open subset UXU\subseteq X is given by

(U,f)(f(n))nN (U, f) \mapsto \left(f^{(n)}\right)_{n\in\mathbb{N}}

This induces a linear function on the germs at an arbitrary point xXx \in X

η:Λ(F)xnNΛ(F)x\eta : \Lambda(F)_x \rightarrow \prod\limits_{n\in\mathbb{N}} \Lambda(F)_x

We have another linear function ϵ:Λ(F)xR\epsilon : \Lambda(F)_x\rightarrow \mathbb{R} given by evaluating a function at xx. Then we have a linear function

Λ(F)xηnNΛ(F)xϵNRN\Lambda(F)_x \xrightarrow{\eta} \prod\limits_{n\in\mathbb{N}} \Lambda(F)_x \xrightarrow{\epsilon^{\mathbb{N}}} \mathbb{R}^{\mathbb{N}}

which maps the germ of a function ff at xx to the sequence (f(n)(x))nN(f^{(n)}(x))_{n\in\mathbb{N}} of values of its derivatives at xx. Finally, we can consider the kernel K(F)xK(F)_x of this linear function.

It seems that η\eta is injective, while ϵ\epsilon is surjective (hence ϵN\epsilon^{\mathbb{N}} too).

When FF is a sub-sheaf of the sheaf of analytic functions, the kernel is trivial, K(F)x=0K(F)_x = 0. This is because the germ of an analytic function at xx is entirely determined by the values of its derivatives at the point xx.

Question: Is the converse true? I.e., if K(F)x=0K(F)_x = 0 for all xx then is FF a sub-sheaf of the sheaf of analytic functions?

We can go further and try to describe the kernel K(C)xK(C_{\infty})_x for the sheaf of smooth functions.

To build a germ in K(C)xK(C_{\infty})_x, we must first choose a sequence (Un,fn)nN(U_n, f_n)_{n\in\mathbb{N}} with UnU_n an open neighborhood of xx, and fnf_n a smooth function such that fn(x)=0f_n(x) = 0. This data already gives quite a lot of freedom. The tricky condition is to ensure that:

dfndx=fn+1 on UnUn+1\frac{d f_n}{dx} = f_{n+1} \text{ on } U_n \cap U_{n + 1}

A strategy would be to start from something that does not care about this condition, and iterate so that in the limit the tricky condition holds. (I'm being hand-wavy here because I haven't figured it out yet)

view this post on Zulip John Baez (May 04 2024 at 06:49):

I had an idea that seems related to @Peva Blanchard's. There's a sheaf CC^\infty of smooth real-valued functions on R\mathbb{R}, and its germ at 00 is some vector space Λ(C)0\Lambda(C^\infty)_0. We want to understand this space. Peva has described a map, I'll abbreviate it as

ϕ:Λ(C)0RN, \phi: \Lambda(C^\infty)_0 \to \mathbb{R}^{\mathbb{N}},

sending the germ of any smooth function f:RRf: \mathbb{R} \to \mathbb{R} to the list of derivatives

(f(0),f(0),f(0),) (f(0), f'(0), f''(0), \dots )

This is well-defined and this is the 'understandable aspect' of Λ(C)0\Lambda(C^\infty)_0 . So we really want to understand the kernel ker(ϕ) \mathrm{ker}(\phi) .

This raises the question: can we extract any real numbers from the germ of a smooth function at 00 in a linear way, other than by taking derivatives of that function at 00?

So:

Question. Can we explicitly describe any nonzero linear map :ker(ϕ)R\ell : \mathrm{ker}(\phi) \to \mathbb{R}?

Since we know ker(ϕ)\ker(\phi) is infinite-dimensional, there exist infinitely many linearly independent linear maps :ker(ϕ)R\ell : \mathrm{ker}(\phi) \to \mathbb{R}. But this does not imply that we can get our hands on any of them, because it's possible that my last sentence can only be proved using the axiom of choice (or some weaker nonconstructive principle)! There are some famous examples of this frustrating situation in analysis.

I've asked this question on MathOverflow and will see if it gets any useful answers.

view this post on Zulip John Baez (May 04 2024 at 06:53):

By the way, there's a rather surprising theorem related to all this:

Theorem. The map ϕ:Λ(C)0RN \phi: \Lambda(C^\infty)_0 \to \mathbb{R}^{\mathbb{N}} sending the germ of any smooth function f:RRf: \mathbb{R} \to \mathbb{R} to its list of derivatives (f(0),f(0),f(0),) (f(0), f'(0), f''(0), \dots ) is surjective.

view this post on Zulip John Baez (May 04 2024 at 06:55):

I think to get the idea for how prove this, it's enough to solve this

Puzzle. Find a smooth function f:RRf: \mathbb{R} \to \mathbb{R} whose nth derivative at 00 is 2n!2^{n!}.

At first you might think this is impossible, since the power series

n=02n!n!xn \displaystyle{ \sum_{n = 0}^\infty \frac{2^{n!}}{n!} x^n }

has zero radius of convergence. But such a function does exist! As a clue, I'll say that to construct it, it helps to use the fact that there exists a smooth function that's zero for x1x \ge 1 and x0x \le 0, and positive for 0<x<10 < x < 1.
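For what it's worth, one standard candidate for the function in that clue is e1/(x(1x))e^{-1/(x(1-x))} on 0<x<10 < x < 1, extended by zero elsewhere (this particular formula is my own choice; the clue only asserts that some such function exists). A tiny numeric sketch of how flat it is near the endpoints:

```python
import numpy as np

def bump(x):
    """Zero for x <= 0 and x >= 1, positive on (0, 1)."""
    out = np.zeros_like(x, dtype=float)
    inside = (x > 0) & (x < 1)
    xi = x[inside]
    out[inside] = np.exp(-1.0 / (xi * (1.0 - xi)))
    return out

xs = np.array([-0.5, 0.0, 0.01, 0.1, 0.5, 0.9, 0.99, 1.0, 1.5])
for x, y in zip(xs, bump(xs)):
    print(f"x = {x:5.2f}   bump(x) = {y:.3e}")
```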

view this post on Zulip John Baez (May 04 2024 at 07:00):

By the way, I know I'm digressing from the main theme of this discussion, which is sheaves. But it's hard to resist, because I've spent a lot of time teaching analysis, and the difference between the sheaf of smooth functions and the sheaf of analytic functions is pretty interesting, not only as an example of how different sheaves work differently, but because mathematicians and physicists spend a lot of time working with smooth and analytic functions.

view this post on Zulip Morgan Rogers (he/him) (May 04 2024 at 10:25):

John Baez said:

Question. Can we explicitly describe any nonzero linear map :ker(ϕ)R\ell : \mathrm{ker}(\phi) \to \mathbb{R}?

You could take any sequence tending to 00 and ask about the limit of a sequence derived from the values of the function of that point. For instance, you could ask about the limit of f(1/n)n!f(1/n)\cdot n!. The hard part is guaranteeing that such a functional will converge and isn't simply a function of the derivatives at 00. Or you could ask about the relative measure of points at which the function is 0 on a sequence of intervals tending to 0. That is, take the limit as a0a \to 0 of μ({a<x<af(x)=0})/2a\mu(\{-a < x < a \mid f(x) = 0\})/2a. This is bounded, at least, but there's again no guarantee of convergence (at least a priori; maybe there's some slick analytic argument proving that this converges)

view this post on Zulip John Baez (May 04 2024 at 13:23):

Morgan Rogers (he/him) said:

John Baez said:

Question. Can we explicitly describe any nonzero linear map :ker(ϕ)R\ell : \mathrm{ker}(\phi) \to \mathbb{R}?

You could take any sequence tending to 00 and ask about the limit of a sequence derived from the values of the function of that point. For instance, you could ask about the limit of f(1/n)n!f(1/n)\cdot n!. The hard part is guaranteeing that such a functional will converge and isn't simply a function of the derivatives at 00.

This raises a good issue, namely, how fast can a smooth function f grow for small x if all its derivatives vanish at x=0. Your proposed quantity will be finite if for all such f there exists C with

f(1/n)C/n!f(1/n) \le C/n!

for all large enough n.

This seems unlikely since all I know is that for all such f and all natural numbers k there exists C with

f(1/n)C/nkf(1/n) \le C/n^k

for all large enough n. This follows from the first k derivatives of f vanishing at x=0.

view this post on Zulip John Baez (May 04 2024 at 13:30):

Unfortunately there is no slowest growing function that grows faster than all polynomials! So we can probably show no candidate like what you suggested can work: it'll either be zero for all smooth f whose derivatives all vanish at 0, or infinite for some such f.
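As a tiny numeric illustration on the running example f(x)=e1/x2f(x) = e^{-1/x^2} (extended by 00): for this particular f the proposed quantity f(1/n)n!f(1/n)\cdot n! just tends to 00, since en2e^{-n^2} shrinks far faster than n!n! grows.

```python
import math

# f(x) = exp(-1/x^2) for x > 0, so f(1/n) = exp(-n^2).
for n in range(1, 11):
    print(n, math.exp(-n**2) * math.factorial(n))
```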

view this post on Zulip David Egolf (May 04 2024 at 15:55):

I'm not following in detail, but I just wanted to highlight a strategy that I noticed Peva Blanchard and John Baez use above. (Which I thought was really cool!) We're interested in information besides the derivatives of a function that can help us determine which germ at a point a smooth function belongs to. The strategy - to my understanding - goes like this:

- package the "measurements" we do understand (here, all the derivatives at the point) into a single linear map out of the space of germs;
- then study the kernel of that map, which captures exactly the information those measurements fail to see.

I don't remember seeing this strategy before, and I like it!

view this post on Zulip Morgan Rogers (he/him) (May 04 2024 at 15:56):

Hmm so we need something that converges for all such ff (so that \ell is well-defined) but isn't forced to be 00. Well that's fun to think about. I'll let you get back to sheaves now :)

view this post on Zulip David Egolf (May 04 2024 at 16:15):

In the mathoverflow question, John Baez introduces a vector space:

There's a sheaf of smooth real-valued functions on R\mathbb{R}, and its germ at 00 is some vector space VV.

Now, we saw earlier that f(x)f(x) (which is 00 for x0x \leq 0 and e1/x2e^{-1/x^2} for x>0x > 0) and the zero function 00 are not in the same germ. That means that [f][f] and [0][0] are different elements of VV. But, they have the same derivatives, so that ϕ([f])=ϕ([0])\phi([f]) = \phi([0]) (ϕ\phi is defined in that question to be the function that takes the germ of a smooth function to the derivatives of that function). This means that [f][0][f]-[0] is in the kernel of ϕ\phi.

I'd like to define a non-zero linear real-valued map :ker(ϕ)R\ell:\ker(\phi) \to \mathbb{R} from the kernel of ϕ\phi. To define such a map, I think it suffices to specify the value of the map on each element of a set of basis vectors for ker(ϕ)\ker(\phi). I am hoping that we could find just two linearly independent vectors in ker(ϕ)\ker(\phi) and say what \ell does to those, and then just let \ell send all vectors that aren't a linear combination of those two to zero.

I think we already have one nonzero vector [f][0][f]-[0] in ker(ϕ)\ker(\phi). If we could just find another one, that is linearly independent from this one, maybe we could construct an \ell from that? So, I'm wondering if we can think of more examples of pairs of smooth real-valued functions (defined on some open set containing 00) that have the same derivatives at 00, but don't belong to the same germ at zero.

(I wonder if f2f^2 also has all derivatives equal to zero at zero, and if it belongs to a different germ from ff...)

view this post on Zulip Morgan Rogers (he/him) (May 04 2024 at 18:26):

You can multiply ff by any function which is bounded at 00 to get another potentially linearly independent function, I think.

view this post on Zulip David Egolf (May 04 2024 at 19:12):

I suppose that defining some \ell in the way I sketched above wouldn't really help us that much. That's because such an \ell would assign a real number to each germ at zero, but it wouldn't directly provide this "measurement" for smooth functions. So, although such an \ell I think could tell certain germs apart (which can't be distinguished using derivatives), it seems like we'd need something more to determine which smooth functions with the same derivative values at a point don't belong to the same germ at that point.

view this post on Zulip John Baez (May 04 2024 at 21:00):

David Egolf said:

I'd like to define a non-zero linear real-valued map :ker(ϕ)R\ell:\ker(\phi) \to \mathbb{R} from the kernel of ϕ\phi. To define such a map, I think it suffices to specify the value of the map on each element of a set of basis vectors for ker(ϕ)\ker(\phi). I am hoping that we could find just two linearly independent vectors in ker(ϕ)\ker(\phi) and say what \ell does to those, and then just let \ell send all vectors that aren't a linear combination of those two to zero.

How does this work? Think about a simpler case: trying to define a linear function  ⁣:R3R\ell \colon \mathbb{R}^3 \to \mathbb{R} that maps (1,0,0)(1,0,0) to 11, (0,1,0)(0,1,0) to 22, and all vectors that aren't a linear combination of those two to zero. What should (x,y,z)\ell(x,y,z) be?

view this post on Zulip David Egolf (May 04 2024 at 22:49):

Hmm, well we want \ell to be R\mathbb{R}-linear. So, (x,y,z)=(x,0,0)+(0,y,0)+(0,0,z)=x(1,0,0)+y(0,1,0)+z(0,0,1)=x+2y+z(0,0,1)\ell(x,y,z) = \ell(x,0,0) + \ell(0,y,0) + \ell(0,0,z) = x \ell(1,0,0) + y \ell(0,1,0) + z \ell (0,0,1) = x + 2y + z \ell(0,0,1). Since (0,0,1)(0,0,1) isn't a linear combination of (1,0,0)(1,0,0) and (0,1,0)(0,1,0), we set (0,0,1)=0\ell(0,0,1)=0. So we find (x,y,z)=x+2y\ell(x,y,z) = x + 2y.

view this post on Zulip John Baez (May 05 2024 at 06:36):

Okay, that's a linear function, but it's not doing what you said. You said all vectors that aren't a linear combination of the first two should be sent to zero. But (1,1,1)(1,1,1) is not a linear combination of (1,0,0)(1,0,0) and (0,1,0)(0,1,0), and (1,1,1)\ell(1,1,1) is not zero.

view this post on Zulip John Baez (May 05 2024 at 06:37):

What you in fact did is choose one vector that's not a linear combination of the first two, and decree that \ell of it is zero. You chose the vector (0,0,1)(0,0,1). If you'd chosen the vector (1,1,1)(1,1,1), for example, and decreed that \ell of that is zero, you'd get a different linear map \ell.

view this post on Zulip John Baez (May 05 2024 at 06:42):

Returning to the actual problem, it's this arbitrary choice that makes defining a nonzero linear function from ker(ϕ)\mathrm{ker}(\phi) to R\mathbb{R} so difficult! And ker(ϕ)\mathrm{ker}(\phi) is not just 3-dimensional, it's infinite-dimensional, so the choice requires a lot more thought - and it seems nobody knows how to do it, except by resorting to the axiom of choice.

view this post on Zulip John Baez (May 05 2024 at 06:45):

The problem is of this sort.

You have a vector space VV and you're trying to define a nonzero linear map :VR\ell : V \to \mathbb{R}. You know a couple of vectors v1,v2Vv_1, v_2 \in V (you might know more) and you say you want

(v1)=c1\ell(v_1) = c_1

(v2)=c2\ell(v_2) = c_2

for some numbers c1,c2Rc_1, c_2 \in \mathbb{R}.

You can do this if v1v_1 and v2v_2 are linearly independent. By a general theorem, which relies on the axiom of choice, we know such an \ell exists. If dim(V)>2\mathrm{dim}(V) \gt 2 many such \ell exist. But getting your hands on one is an entirely different matter!

view this post on Zulip John Baez (May 05 2024 at 06:47):

You can get your hands on one if you can find a linear subspace WVW \subset V such that

1) no nonzero vector in WW is a linear combination of v1v_1 and v2v_2, and

2) Every vector in VV is a linear combination of v1,v2v_1, v_2 and some vector in WW.

view this post on Zulip John Baez (May 05 2024 at 06:50):

Then there is a unique :VR\ell : V \to \mathbb{R} such that

(v1)=c1\ell(v_1) = c_1

(v2)=c2\ell(v_2) = c_2

and

(w)=0\ell(w) = 0

for all wWw \in W. At this point we've gotten our hands on \ell. But \ell depends on our choice of WW.

How do we know there always exists WW obeying conditions 1) and 2)? There's a theorem saying that there exists a basis of VV that starts with v1v_1 and v2v_2 and continues with some other vectors wiw_i. Then we can define WW to be the space of all linear combinations of these other vectors wiw_i.

However, to prove this theorem you need a version of the axiom of choice: in general there's no 'procedure' to choose these vectors wiw_i. You chose the vector (0,0,1)(0,0,1) because you only needed one and it was staring you in the face. But in our actual example the dimension of ker(ϕ)\mathrm{ker}(\phi) is uncountably infinite and - to the best of my knowledge - nobody knows a basis for it. That's why my MathOverflow problem seems to be hard.

view this post on Zulip John Baez (May 05 2024 at 07:00):

There might still be some other way to define a nonzero linear map :ker(ϕ)R\ell : \mathrm{ker}(\phi) \to \mathbb{R}.

view this post on Zulip John Baez (May 05 2024 at 07:26):

This discussion may seem like a huge digression from sheaves, and in a way it is. But the issue of how relying on the axiom of choice makes it difficult to get your hands on things you want is a big deal in analysis, so it's nice that we've bumped into an example. And a good way to do math without the axiom of choice is topos theory, which is what we're supposed to be learning here!

Indeed, you'll notice that the really good answers about what version of the axiom of choice you need to prove that every vector space has a basis come from Andreas Blass. Blass is an excellent category theorist, and I think he knows topos theory quite well.

view this post on Zulip David Egolf (May 05 2024 at 15:39):

John Baez said:

What you in fact did is choose one vector that's not a linear combination of the first two, and decree that \ell of it is zero. You chose the vector (0,0,1)(0,0,1). If you'd chosen the vector (1,1,1)(1,1,1), for example, and decreed that \ell of that is zero, you'd get a different linear map \ell.

Ah! I wondered what I was missing. Thanks for pointing that out!

view this post on Zulip David Egolf (May 05 2024 at 15:55):

If I'm understanding correctly, you're saying (among other things):

But, potentially, although there's no procedure in general for completing a basis for an arbitrary infinite dimensional vector space, there could maybe be such a procedure in this particular case (?). It just might be hard to find (if it exists) I guess!

view this post on Zulip David Egolf (May 05 2024 at 16:06):

On a related note, it's a weird feeling to know that many examples of a certain kind of thing exist, but at the same time we may not be able to name any examples :astonished:!

view this post on Zulip John Baez (May 05 2024 at 17:09):

David Egolf said:

On a related note, it's a weird feeling to know that many examples of a certain kind of thing exist, but at the same time we may not be able to name any examples :astonished:!

This is actually quite common in mathematics, and there are even many situations where we know that the probability of a number having some property is 1 yet we don't know if most familiar numbers have that property (though surely they must).

view this post on Zulip John Baez (May 05 2024 at 17:14):

See for example the result about Khinchin's constant.

It's sort of like how knowing there are lots of ants doesn't mean you know any of their names: they are numerous yet anonymous.

view this post on Zulip John Baez (May 05 2024 at 21:07):

David Egolf said:

If I'm understanding correctly, you're saying (among other things):

Right - good summary.

There are other ways to define linear maps than saying what they do on each member of a basis, and often they are easier to work with, e.g. taking the derivative at x is a linear map from germs of smooth functions at x to real numbers, and we don't need to pick a basis to define it! But I don't see how to use an approach like that for this problem, either. It may be lack of cleverness, or it may be a deep issue.

But, potentially, although there's no procedure in general for completing a basis for an arbitrary infinite dimensional vector space, there could maybe be such a procedure in this particular case (?). It just might be hard to find (if it exists) I guess!

Right. I'd say we just don't know yet. And we may never know.

view this post on Zulip Peva Blanchard (May 05 2024 at 21:45):

John Baez said:

See for example the result about Khinchin's constant.

It's sort of like how knowing there are lots of ants doesn't mean you know any of their names: they are numerous yet anonymous.

There is another example that I find fascinating. The set of computable real numbers is countable. This implies that almost all real numbers are not computable: there is no (finitely described) algorithm to enumerate their digits. To put it in more colloquial terms: there is no "reasonable" way to poke inside them.

It does not mean we cannot define one though. For instance, Chaitin's constant is the probability that a random Turing machine will halt. This constant is well defined, and we know it is not computable.

view this post on Zulip John Baez (May 06 2024 at 07:23):

Another related example: we say a real number is normal in base 10 if in its decimal expansion every string of n digits appears with frequency 1/10n1/10^n, which is what you'd expect of a 'random' number. More generally we can talk about normal numbers in any base. A number that's normal in every base is called uniformly normal.

The set of numbers that are not normal in base bb has measure zero, and the countable union of sets of measure zero again has measure zero, so the set of numbers that are not uniformly normal has measure zero.
In simple rough terms: the probability that a number is uniformly normal is 1.

For this reason, and because people have actually done computer calculations to check, everyone believes π,e,2\pi, e, \sqrt{2}, and other famous irrational numbers are uniformly normal. But nobody has been able to show this for any interesting examples.
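Here is the sort of empirical check one can do (just a sketch with mpmath, and of course not remotely a proof): count digit frequencies in the first 20,000 decimal digits of π\pi and see that each digit shows up with frequency close to 1/101/10.

```python
from collections import Counter
from mpmath import mp

mp.dps = 20_010                        # working precision: a bit more than we print
pi_digits = mp.nstr(+mp.pi, 20_000)    # "3." followed by the decimal digits
pi_digits = pi_digits.replace("3.", "", 1)
counts = Counter(pi_digits)
for d in "0123456789":
    print(d, counts[d] / len(pi_digits))
```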

view this post on Zulip John Baez (May 06 2024 at 07:24):

There is some slight hope that people can show π\pi is normal in base 16 (and thus base 2), because there's a cool formula that makes it easy to compute individual base 16 digits of π\pi without computing all the previous digits. But people haven't succeeded yet.

view this post on Zulip John Baez (May 06 2024 at 07:25):

While we're digressing, I got this interesting email about my 'germs of smooth functions' question:

dear john baez
i hope that you don’t object to my getting in touch with you in this manner but this is not directly an answer to your query but might contain material of interest.
the appropriate functional analytic structure of the space of germs of smooth functions has been understood since the 70’s.  it is a complete, conuclear convex bornological space.  the class of convex bornological spaces (cbs’s) was introduced and investigated by the french-belgian school (waelbroeck, buchwalter, hogbe-nlend) and is in a certain sense dual to that of locally convex spaces (lcs’s)—the role of the topology is taken by the bounded sets.  this duality can be formalised in terms of category theory—the category of complete cbs’s (lcs’s) is the ind category (pro category) in the sense of grothendieck generated by that of Banach spaces.  further the dual of each lcs has a natural cbs structure and the duals of cbs’s are lcs’s.
the bounded sets of the space of germs are defined to be the sets of such germs which are represented by smooth functions on a neighbourhood of zero which are bounded there as are the sets of their (higher) derivatives.
as mentioned above, this is a respectable space (even conuclear). for example, suitable forms of the classical results (closed graph, uniform boundedness, banach-steinhaus) hold.  it has a well defined dual, which is a complete lcs.  in fact it is the space of distributions with support at the origin, in other words, linear combinations of the delta distributions and its derivatives.
there is, of course a big but.  there is no hahn-banach theorem for cbs’s.  this means that duality theory can collapse (i.e., the dual space can reduce to the zero vector).  usually (i.e., for the important function and distribution spaces), this doesn’t happen. in your case, we have something intermediate—the dual space is infinite dimensional, in a certain sense (which can be made precise), the smallest such space, but it is not large enough to separate points. in fact, the intersection of the kernels of elements of the dual is precisely the space that you describe.
i could go on but i will stop here with the hope that this helps.  i would be happy to try to answer any questions you might have.
sincerely
jim cooper

view this post on Zulip John Baez (May 06 2024 at 07:28):

In case this is too hard to understand, one thing he's saying is that we can define 'bounded' sets in the vector space I called ker(ϕ)\mathrm{ker}(\phi), so we can talk about linear functions :ker(ϕ)R\ell: \mathrm{ker}(\phi) \to \mathbb{R} that map bounded sets to bounded sets... but the only such \ell is zero.

I guess this means it'll be hard to find an explicit such \ell... though "hard to find" is a touchy-feely concept.

view this post on Zulip Kevin Carlson (May 06 2024 at 17:31):

Another closely related thing he said that might be unclear is that the dual of the whole space of germs is the space of “distributions generated by the Dirac delta and its derivatives”, which means that we weren’t missing any nice linear functions on the space of germs of smooth functions—they’re really only the differentiation operations.

view this post on Zulip Kevin Carlson (May 06 2024 at 17:31):

That’s a relief to know!

view this post on Zulip David Egolf (May 07 2024 at 16:29):

I feel like it's a good time to work a bit on the next puzzle, again. Recall that this involves thinking about the topology on Λ(F)=xXΛ(F)x\Lambda(F) = \coprod_{x \in X}\Lambda(F)_x, where Λ(F)x\Lambda(F)_x is the set of germs at xx for our sheaf FF, which sends an open set UU of XX to the set of continuous real-valued functions :UR:U \to \mathbb{R}.

There is a function p:Λ(F)Xp:\Lambda(F) \to X that sends each germ to the point it is associated with. We would like this function to be continuous. To ensure that, we need p1(U)Λ(F)p^{-1}(U) \subseteq \Lambda(F) to be open in Λ(F)\Lambda(F), for each open set UXU \subseteq X.

However, we're not done yet, as we want certain functions to Λ(F)\Lambda(F) to be continuous as well. Given some sF(U)s \in F(U) (a continuous function from UU to R\mathbb{R}), we define the function g(s):UΛ(F)g(s): U \to\Lambda(F) that acts by g(s):x[s]xg(s): x \mapsto [s]_x, where [s]x[s]_x is the germ at xx that ss belongs to. We have that pg(s)(x)=p([s]x)=xp \circ g(s)(x) = p([s]_x) = x, so that if g(s):UΛ(F)g(s):U \to \Lambda(F) was continuous, it would be a section of our bundle p:Λ(F)Xp:\Lambda(F) \to X.

view this post on Zulip David Egolf (May 07 2024 at 16:41):

How can we ensure that g(s):UΛ(F)g(s): U \to \Lambda(F) is continuous for all sF(U)s \in F(U) and all open UU? We need g(s)1(V)Ug(s)^{-1}(V) \subseteq U to be open for every open set VΛ(F)V \subseteq \Lambda(F). Earlier, we declared that certain subsets of Λ(F)\Lambda(F) need to be open: namely the preimage of the open sets in XX under our projection mapping p:Λ(F)Xp:\Lambda(F) \to X. Given some open subset UXU' \subseteq X, that means that p1(U)=xUΛ(F)xp^{-1}(U') = \coprod_{x \in U'}\Lambda(F)_x needs to be open.

So, let us consider an open set in Λ(F)\Lambda(F) of this form. That is, we let V=xUΛ(F)xV = \coprod_{x \in U'}\Lambda(F)_x for some open subset UU' of XX. What is g(s)1(V)g(s)^{-1}(V)? These are the points in UU that map to VV under g(s)g(s). Since VV consists exactly of germs associated to points in UU', the part of UU that maps to VV under g(s)g(s) is UUU \cap U'. Therefore, g(s)1(V)=g(s)1(p1(U))=UUg(s)^{-1}(V) = g(s)^{-1}(p^{-1}(U')) = U \cap U'. Since UU and UU' are both open in XX, UUU \cap U' is open too. We conclude that declaring enough subsets of Λ(F)\Lambda(F) to be open so that p:Λ(F)Xp:\Lambda(F) \to X becomes continuous is compatible with the "germ assignment" functions g(s):UΛ(F)g(s):U \to \Lambda(F) being continuous.
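Spelling that computation out as a single chain of equalities: g(s)^{-1}(p^{-1}(U')) = \{x \in U \mid p(g(s)(x)) \in U'\} = \{x \in U \mid x \in U'\} = U \cap U'.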

view this post on Zulip David Egolf (May 07 2024 at 16:48):

Now, we actually still aren't done, I believe. I think we want to declare as many subsets of Λ(F)\Lambda(F) to be open as possible, while preserving the continuity of p:Λ(F)Xp:\Lambda(F) \to X and g(s):UΛ(F)g(s):U \to \Lambda(F) (for every sF(U)s \in F(U) and for every UU). (Although I'm not sure why we'd want to do this.)

Let's consider some particular g(s):UΛ(F)g(s): U \to \Lambda(F), which sends xx to [s]x[s]_x. Without knowing anything extra about UU, we know that in the subspace topology on UU these two subsets are open: (1) the empty subset and (2) the subset that is all of UU. Making use of the fact that UU is open, if we declare g(s)(U)g(s)(U) to be open in Λ(F)\Lambda(F), then g(s)1(g(s)(U))=Ug(s)^{-1}(g(s)(U)) = U is open, and so the continuity of g(s)g(s) is not disrupted.

view this post on Zulip David Egolf (May 07 2024 at 16:54):

I'll stop here for now, but it still remains to show that declaring g(s)(U)g(s)(U) to be open for each sF(U)s \in F(U) (as UU varies) preserves the continuity of every "germ assigning function" g(s)g(s).

(I hope to finish up thinking about the topology on Λ(F)\Lambda(F) soon... I recognize it's probably not the easiest thing to have a conversation around!)

view this post on Zulip John Baez (May 07 2024 at 18:14):

For anyone trying to follow along again, here's the puzzle again:

Now that we have the space of germs Λ(F)x\Lambda(F)_x for each point xX,x \in X, we define

Λ(F)=xXΛ(F)x \Lambda(F) = \bigsqcup_{x \in X} \Lambda(F)_x

There is then a unique function

p ⁣:Λ(F)Xp \colon \Lambda(F) \to X

sending everybody in Λ(F)x\Lambda(F)_x to x.x. So we've almost gotten our bundle over X.X. We just need to put a topology on Λ(F).\Lambda(F).

We do this as follows. We'll give a basis for the topology, by describing a bunch of open neighborhoods of each point in Λ(F).\Lambda(F). Remember, any point in Λ(F)\Lambda(F) is a germ. More specifically, any point pΛ(F)p \in \Lambda(F) is in some set Λ(F)x,\Lambda(F)_x, so it's the germ at xx of some sFUs \in FU where UU is an open neighborhood of x.x. But this ss has lots of other germs, too, namely its germs at all points yU.y \in U. We take this collection of all these germs to be an open neighborhood of our point p.p. A general open set in Λ(F)\Lambda(F) will then be an arbitrary union of sets like this.

Puzzle. Show that with this topology on Λ(F),\Lambda(F), the map p ⁣:Λ(F)Xp \colon \Lambda(F) \to X is continuous.

view this post on Zulip John Baez (May 07 2024 at 19:27):

I don't know if this helps, but:

1) In topology I tend to think visually, so I find it hard to start solving this puzzle until I draw a picture of ΛF\Lambda F and the open neighborhoods described here. I'd probably try to take the sheaf of smooth real-valued functions on R\mathbb{R}, and try to draw one of these open sets UU. The picture might not be accurate, but it would somehow help me think about whether pp is continuous.

view this post on Zulip John Baez (May 07 2024 at 19:37):

2) Here's an example of how it helps: in the process of thinking about this picture, I'm instantly led to remember that continuity can be studied locally. A function ff is continuous iff it is continuous at each point aa in its domain, and this in turn is true iff f1f^{-1} of every open set UU contained in some neighborhood VV of f(a)f(a) is open. We discussed the first fact earlier here somewhere, but I forget if we discussed the second fact. It comes to mind now because we're trying to show p:Λ(F)Xp: \Lambda(F) \to X is continuous and our picture of the open sets of Λ(F)\Lambda(F) is a local one.

view this post on Zulip David Egolf (May 08 2024 at 15:42):

John Baez said:

I don't know if this helps, but:

1) In topology I tend to think visually, so I find it hard to start solving this puzzle until I draw a picture of ΛF\Lambda F and the open neighborhoods described here. I'd probably try to take the sheaf of smooth real-valued functions on R\mathbb{R}, and try to draw one of these open sets UU. The picture might not be accurate, but it would somehow help me think about whether pp is continuous.

Thanks for the suggestion! I felt like I was making progress with what I typed out above, but it wasn't feeling very intuitive. Drawing a picture sounds like it may help with gaining some intuition. So, I think I'll shift over to working on this, next. (I may go back to thinking about the continuity of the "germ assigning" functions g(s)g(s) later).

view this post on Zulip David Egolf (May 08 2024 at 15:47):

Alright, let me try to draw a picture of ΛF\Lambda F and its open neighborhoods. The elements of ΛF\Lambda F are germs of FF at various points. And our open neighborhoods in the topology described in the puzzle are unions of the sets g(s)(U)g(s)(U) as sFUs \in FU varies over FUFU and as UU varies over the open sets of XX.

So, let's pick some particular sFUs \in FU to be some s:URs:U \to \mathbb{R}. If I let X=RX = \mathbb{R}, then I can draw a picture of this ss. That seems like a place to start.

view this post on Zulip David Egolf (May 08 2024 at 15:50):

So then, here's a picture of some s:URs:U \to \mathbb{R}. The open set UU is indicated in red.
picture of s

view this post on Zulip David Egolf (May 08 2024 at 15:57):

Now, let's consider g(s)(U)g(s)(U). For each xUx \in U, g(s)(x)=[s]xg(s)(x) = [s]_x, the germ of ss at xx. So, g(s)(U)g(s)(U) is the set of germs of ss. Each germ at xx I think is roughly like a "local shape" that functions can have at xx. In general, a germ of a continuous function contains more information than just its derivatives at that point. But to get a picture, I'll pretend that the germ of ss at xx is determined just by the slope of ss at xx. (I'm assuming that this particular ss is differentiable, too).

I'll organize my drawing of g(s)(U)g(s)(U), which is to be an open set of ΛF\Lambda F, by thinking of ΛF\Lambda F as having a collection of "local shapes" (germs) for each point which "hover over" each point xXx \in X.

view this post on Zulip David Egolf (May 08 2024 at 16:05):

Here's my attempted visualization of g(s)(U)g(s)(U), which picks out the "local shape" (germ) of ss at each point in UU:
visualizing germs of s

This whole 2D region is part of Λ(F)\Lambda(F). So, for each point xXx \in X we have a collection of shapes hovering above (and below) the xx-axis corresponding to different germs at that point. The blue point at some xx is g(s)(x)g(s)(x), which in this simplified drawing is supposed to (partially) describe the local shape of ss at xx using its first derivative there. Notice that the first derivative really doesn't provide enough information to reconstruct our function about some point (in particular it forgets "vertical shifting"), but this is at least some of the information that describes our ss about each point.

This is not what I expected an open set of Λ(F)\Lambda(F) to look like! My picture might be too inaccurate and approximate for it to give good intuition, but maybe not! It seems interesting.

view this post on Zulip David Egolf (May 08 2024 at 16:11):

Now, let's consider our p:Λ(F)Xp: \Lambda(F) \to X in this example (which is the function we wish to show is continuous). We need p1(U)p^{-1}(U) to be open in Λ(F)\Lambda(F) for each open subset UU of XX. Let's take UU to be the open subset of X=RX = \mathbb{R} indicated in red above. Then a point in Λ(F)\Lambda(F) maps to some xUx \in U if it is a germ associated to xUx \in U. In our picture, this will correspond to the points hovering above (and below) UU.

view this post on Zulip David Egolf (May 08 2024 at 16:15):

Here's a picture of p1(U)p^{-1}(U), which is a subset of Λ(F)\Lambda(F):
preimage of an open set

UU is indicated by the red line segments, and p1(U)p^{-1}(U) is indicated by the shaded light red regions. Intuitively, this preimage is the disjoint union of all the "local behaviours/shapes" possible for continuous real-valued functions (as provided by FF) at each point.

If p:Λ(F)Xp: \Lambda(F) \to X is to be continuous, this p1(U)p^{-1}(U) needs to be open. With our proposed topology, that means it needs to be the union of the images of some "germ assigning" functions (one of which we visualized in blue in a drawing above). I guess that means that for any germ λ\lambda at some xUx \in U, there needs to be at least one function which belongs to that germ, so that its behaviour about xx is described by λ\lambda. If that's right, the continuity of p:Λ(F)Xp: \Lambda(F) \to X might correspond roughly to the idea that "every possible local behaviour at xx occurs for at least one element ss of a sheaf set F(U)F(U), for some UU containing xx".

view this post on Zulip David Egolf (May 08 2024 at 16:21):

John Baez said:

2) Here's an example of how it helps: in the process of thinking about this picture, I'm instantly led to remember that continuity can be studied locally. A function ff is continuous iff it is continuous at each point aa in its domain, and this in turn is true iff f1f^{-1} of every open set UU contained in some neighborhood VV of f(a)f(a) is open. We discussed the first fact earlier here somewhere, but I forget if we discussed the second fact. It comes to mind now because we're trying to show p:Λ(F)Xp: \Lambda(F) \to X is continuous and our picture of the open sets of Λ(F)\Lambda(F) is a local one.

I don't think we directly discussed the second fact, although I may just be forgetting. Next time, I'll plan to start by proving that fact! Then I'll try to connect that fact to the pictures I've drawn above.

Although, thinking it over a bit, I think I might have an idea of how to solve the puzzle already... I guess I'll see what I feel like trying out tomorrow!

view this post on Zulip John Baez (May 08 2024 at 19:38):

David Egolf said:

This is not what I expected an open set of Λ(F)\Lambda(F) to look like! My picture might be too inaccurate and approximate for it to give good intuition, but maybe not! It seems interesting.

It seems like a pretty good picture to me. It's really important to realize that for many familiar sheaves FF on R\mathbb{R}, like the sheaf of smooth functions, the corresponding space of germs Λ(F)\Lambda(F) is not very easy to draw or visualize. And I think the best way to realize this is to try to draw it. You've drawn a kind of 'approximation' to it - and by thinking about the information your drawing leaves out, you're starting to get a sense for how peculiar this space is!

One thing that's strange about this space is that Λ(F)\Lambda(F) is not 'Hausdorff'. This means you can find two different germs g,gΛ(F)g, g' \in \Lambda(F) that can't be separated by open sets: i.e., you can't find disjoint open sets U,UΛ(F)U, U' \subseteq \Lambda(F) with gU,gUg \in U, g' \in U'. The germ at zero of the weird function @Peva Blanchard described cannot be separated by open sets from the germ at zero of the constant function 0. That's because these functions are equal at all points slightly left of zero. (For a proof of the similar fact about continuous functions see this).

Well, I'm probably getting ahead of myself here, so I should stop. But my main point is, you're doing a fine job of attempting to draw a space that's impossible to draw in a fully accurate way... and I've found such attempts very useful!

view this post on Zulip Peva Blanchard (May 08 2024 at 21:57):

This is really nice. Thanks to your detailed exposition David, I corrected a very wrong picture I had.

Indeed, I thought that the topology on Λ(F)\Lambda(F) was the coarsest topology making the projection p:Λ(F)Xp : \Lambda(F) \rightarrow X continuous. This means that any open set in Λ(F)\Lambda(F) is a union of sets of the form p1(U)p^{-1}(U) for every open subset UU in XX.

But this topology is not enough (it is too coarse), because we also want each sF(U)s \in F(U) to give a continuous section g(s):UΛ(F)g(s) : U \rightarrow \Lambda(F) of pp.

view this post on Zulip Peva Blanchard (May 08 2024 at 21:57):

Now, maybe I can share the mental picture I have now about the required topology on Λ(F)\Lambda(F). (Hopefully, it is correct). I find it easier to deal with neighborhoods instead of open sets. Given a germ aΛ(F)xa \in \Lambda(F)_x at xx, what does it mean for another germ bΛ(F)yb \in \Lambda(F)_y at a different base point yy to be "in the neighborhood of aa"? We can answer that question by providing a witness of the fact that they are close to each other. Such a witness is a pair (U,s)(U, s) where UU is an open set containing both xx and yy, and sF(U)s \in F(U) is a section such that

[s]x=a[s]y=b \begin{align*} [s]_x &= a \\ [s]_y &= b \\ \end{align*}

I picture this witness as providing a connecting path between aa and bb. With this picture, we see that for every zUz \in U, the germ [s]z[s]_z of ss at zz is in the neighborhood of aa. In other words, a pair (U,s)(U, s) with sF(U)s \in F(U) and Ux=p(a)U \ni x = p(a) encodes a specific neighborhood of aa.
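In symbols, the neighbourhood of a encoded by a witness (U, s) is just the set of all germs of s, namely \{\, [s]_y \mid y \in U \,\} \subseteq \Lambda(F), which is exactly one of the sets written as g(s)(U) earlier in this thread; and b is "close to" a precisely when some set of this form contains both germs.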

view this post on Zulip Peva Blanchard (May 08 2024 at 22:29):

To continue with this picture, we can interpret the separation of two points aa and bb in Λ(F)\Lambda(F) (the "Hausdorff" property as explained by @John Baez).

These points are separated if we can find two disjoint neighborhoods of aa and bb respectively. Informally, this means that we have a neighborhood (U,s)(U, s) of aa, and a neighborhood (V,t)(V, t) of bb, and such that ss and tt never "agree" over UVU \cap V.

For instance, let's consider the germ aa of the funny function that is zero for x \le 0 and equal to e1x2e^{-\frac{1}{x^2}} for x > 0, taken at 00, and the germ bb of the constant zero function at 00. In that case, any witness (U, s) of a neighborhood of aa and any witness (V, t) of a neighborhood of bb are both identically zero on some small interval just to the left of 00, so they have the same germs at those points: the two neighborhoods always overlap, and aa and bb cannot be separated by open sets.

view this post on Zulip Peva Blanchard (May 08 2024 at 22:32):

By the way, does it mean that the topology of Λ(F)\Lambda(F) is Hausdorff when FF is the sheaf of analytic functions? (I'll think about it, no need to answer right away)

view this post on Zulip David Egolf (May 09 2024 at 17:05):

Thanks to both of you for your interesting comments! Thinking about whether Λ(F)\Lambda(F) is Hausdorff is interesting. (Side note: somehow the open sets on Λ(F)\Lambda(F) remind me of the closed sets in the Zariski topology...which I seem to recall is not (usually?) Hausdorff either.)

I'm going to take a break from this thread today, to rest up, but I hope to get back to it tomorrow!

view this post on Zulip Peva Blanchard (May 10 2024 at 09:09):

Peva Blanchard said:

By the way, does it mean that the topology of Λ(F)\Lambda(F) is Hausdorff when FF is the sheaf of analytic functions? (I'll think about it, no need to answer right away)

I think the answer is yes. I wasn't sure if it would digress too much, so I opened another topic.

view this post on Zulip Peva Blanchard (May 10 2024 at 09:48):

By the way, something clicked for me about "evaluating a function at some point".

Because of my SetSet-based math education, I am used to thinking about a function f:XYf : X \rightarrow Y as being a graph, i.e., the set of pairs (x,y)X×Y(x, y) \in X \times Y with y=f(x)y = f(x). In that case, the evaluation of ff at xx is just picking out the second component of this pair, namely, the value f(x)f(x).

But, with our previous discussion, it turns out this evaluation procedure is actually very narrow. When we deal with a continuous map ff, another evaluation procedure is given by "taking the germ [f]x[f]_x of ff at xx". The mental picture I have in mind is a sequence of open neighborhoods UnU_n of xx that converges towards xx, and over which we take the restrictions of ff. This is like distilling to get the most concentrated information about ff at xx.

This reminds me of the way we define a distribution as the dual of a space S\mathcal{S} of test functions. Formally, a distribution TT is a linear map SR\mathcal{S} \rightarrow \mathbb{R}. I picture a test function ϕ\phi as some kind of smooth bump around a point somewhere, so that the value T(ϕ)T(\phi) sums up the "behavior of TT around that point". In a way, test functions play the same role as the open subsets of XX in the previous paragraph, and the map ϕT(ϕ)\phi \mapsto T(\phi) is analogous to the restriction map UfUU \mapsto f_{|U}. We can evaluate a distribution at a point xx by considering a sequence ϕn\phi_n of test functions that "converge towards xx", and taking the limit of the T(ϕn)T(\phi_n)'s.
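To make that last analogy a bit more concrete, here is a rough numerical sketch (the function f and the Gaussian bumps below are made-up test data, just for illustration): pairing a smooth f, viewed as a distribution, with narrower and narrower normalized bumps around 0 recovers f(0).

```python
import numpy as np

def f(x):
    return np.cos(x) + x**2        # an arbitrary smooth function; f(0) = 1

x = np.linspace(-5, 5, 200_001)
dx = x[1] - x[0]
for sigma in [1.0, 0.3, 0.1, 0.03]:
    # a normalized Gaussian bump of width sigma, playing the role of a test function phi_n
    bump = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    print(sigma, float(np.sum(f(x) * bump) * dx))   # tends to f(0) = 1 as sigma -> 0
```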

view this post on Zulip David Egolf (May 10 2024 at 16:29):

Wow, there is a lot of interesting stuff to catch up on here :sweat_smile:! Today, I'll try to understand what you both are saying regarding the fact that Λ(F)\Lambda(F) is not Hausdorff, for FF our sheaf of continuous real-valued functions on open subsets of R\mathbb{R}.

To show that Λ(F)\Lambda(F) is not Hausdorff, we need to find two points (germs) f,fΛ(F)f,f' \in \Lambda(F) so that there are no open sets UU and UU' with fUf \in U, fUf' \in U' and UU=U \cap U' = \emptyset. In other words, for any open set UU containing ff and any open set UU' containing ff', UUU \cap U' is always non-empty.

view this post on Zulip John Baez (May 10 2024 at 16:33):

Right! And you can find an example of this! It's a lot easier to find one for the sheaf of continuous functions than with smooth functions, where a sneaky example Peva described comes to our aid.

view this post on Zulip David Egolf (May 10 2024 at 17:00):

This is feeling tricky for me today. But, referencing this page, I think we want to consider this situation:

We have this situation with U=RU = \mathbb{R}, x=0x=0, ss the zero function, and ss' the function that is zero for x<0x <0 and e1/x2e^{-1/x^2} for x0x \geq 0. ss and ss' have different germs at zero, but any open set VV that contains 00 also contains some negative numbers with some "breathing space" around them. So, we can pick some negative number xx' in VV: then ss and ss' must have the same germ at xx'. (That is because they both restrict to the zero function for sufficiently small open intervals about a negative number xx').
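As a quick, purely illustrative numerical check of that (the helper names below are made up):

```python
import math

def s(x):                  # the zero function
    return 0.0

def s2(x):                 # 0 for x <= 0, e^{-1/x^2} for x > 0
    return math.exp(-1.0 / x**2) if x > 0 else 0.0

# s and s2 agree at (and on a neighborhood of) every negative point ...
print([s(x) == s2(x) for x in (-0.5, -0.01, -1e-6)])   # [True, True, True]
# ... but s2 is nonzero at points to the right of 0, so the germs at 0 differ
print([s2(x) for x in (0.5, 0.25)])                    # small but nonzero values
```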

view this post on Zulip David Egolf (May 10 2024 at 17:09):

So, we can form two sequences of open sets in Λ(F)\Lambda(F), by taking g(s)(Vi)g(s)(V_i) and g(s)(Vi)g(s')(V_i) as ViV_i becomes a smaller and smaller open neighborhood containing 00. We can form a sequence xix'_i such that xix'_i is some negative number present in ViV_i. Applying g(s)g(s) and g(s)g(s') to this sequence gives us two sequences [s]xi[s]_{x'_i} and [s]xi[s']_{x'_i}. But these two sequences are actually equal, because ss and ss' have the same germs at any negative point xix'_i.

Now, intuitively, the sequence [s]xi[s]_{x'_i} should converge to [s]x[s]_x and the sequence [s]xi[s']_{x'_i} should converge to [s]x[s']_x. We just noted that both of these sequences are equal... but our proposed limits of them are different (as [s]x[s]x[s]_x \neq [s']_x)! So it seems like we might have a situation where limits aren't unique, which I think would relate to Λ(F)\Lambda(F) being non-Hausdorff, referencing this page.

There's probably a simpler way to do this, explained here. I'll look at that next.

view this post on Zulip David Egolf (May 10 2024 at 17:37):

Following the proof here, let us assume that we have this situation again:

We will aim to show that [s]x[s]_x and [s]x[s']_x are points of Λ(F)\Lambda(F) that can't be separated by open sets. That is, there are no open subsets λs\lambda_s and λs\lambda_{s'} of Λ(F)\Lambda(F) with [s]xλs[s]_x \in \lambda_s and [s]xλs[s']_x \in \lambda_{s'} with λsλs=\lambda_s \cap \lambda_{s'}= \emptyset.

To obtain a contradiction, let us assume that Λ(F)\Lambda(F) is Hausdorff, so that there are such disjoint open sets λs\lambda_s and λs\lambda_{s'}. By definition of the topology on Λ(F)\Lambda(F), λs=ig(si)(Ui)\lambda_s = \cup_i g(s_i)(U_i) for some sis_i and UiU_i and similarly λs=jg(sj)(Uj)\lambda_{s'} = \cup_j g(s'_j)(U_j') for some sjs'_j and UjU'_j.

Since [s]xλs[s]_x \in \lambda_s, [s]xg(si)(Ui)[s]_x \in g(s_i)(U_i) for some particular ii, where xUix \in U_i. Since germs can only be equal if they are associated to the same point, this implies that [s]x=[si]x[s]_x = [s_i]_x. Similarly, [s]xg(sj)(Uj)[s']_x \in g(s'_j)(U'_j) for some particular jj, where xUjx \in U'_j, so that [s]x=[sj]x[s']_x = [s'_j]_x.

Since λs\lambda_s and λs\lambda_{s'} are assumed disjoint, that means that g(si)(Ui)g(s_i)(U_i) and g(sj)(Uj)g(s'_j)(U'_j) are also disjoint. Thus, for any xUiUjx' \in U_i \cap U'_j, [si]x[sj]x[s_i]_{x'} \neq [s'_j]_{x'}.

view this post on Zulip David Egolf (May 10 2024 at 17:43):

Since sis_i and ss have the same germ at xx, there is some open subset containing xx where these two functions restrict to the same function. Similarly, there is some open subset containing xx where sjs_j' and ss' restrict to the same function. Taking the intersection of these two open sets, we get an open set WW containing xx where for each point xWx' \in W we have [si]x=[s]x[s_i]_{x'} = [s]_{x'} and [sj]x=[s]x[s_j']_{x'}=[s']_{x'}. Now, we know that [s]x[sj]x[s]_{x'} \neq [s_j']_{x'} for any xWx' \in W. Therefore, [s]x[s]x[s]_{x'} \neq [s']_{x'} for any xWx' \in W.

However, we know by assumption that there is always some point in any open subset containing xx where ss and ss' have the same germ. Thus, we have obtained a contradiction. We got this contradiction by assuming that Λ(F)\Lambda(F) was Hausdorff. We conclude that Λ(F)\Lambda(F) must not be Hausdorff!

Whew! Hopefully I did that correctly. There is still a lot more of catching up for me to do in this thread, but I'll stop here for today.

view this post on Zulip Peva Blanchard (May 11 2024 at 22:17):

I think the proof is correct. The proposition can be used to present concrete examples of continuous functions s,ss, s', simpler than 00 and e1x2e^{-\frac{1}{x^2}}, that cannot be separated by open sets in Λ(F)\Lambda(F) (the sheaf of continuous functions).

spoiler

By the way, @John Baez gave a very neat puzzle on the other thread. (spoiler alert, I gave a proof there).

Here it is.

Suppose FF is a sheaf on a topological space XX. Then Λ(F)\Lambda(F) is Hausdorff if and only if for every open UXU \subseteq X and every s1,s2F(U)s_1, s_2 \in F(U), the set of points xUx \in U for which the germ of s1s_1 equals the germ of s2s_2 is closed in UU.

view this post on Zulip David Egolf (May 13 2024 at 17:44):

The next thing I'm hoping to do in this thread is to prove this:
John Baez said:

A function ff is continuous iff it is continuous at each point aa in its domain, and this in turn is true iff f1f^{-1} of every open set UU contained in some neighborhood VV of f(a)f(a) is open.

Once I've done that, I'm hoping to try and solve the current puzzle!

Unfortunately, I don't have the energy in the tank to work on this today. Once I have energy, I hope to return to this thread and work on what I just described.

view this post on Zulip David Egolf (May 14 2024 at 16:48):

Before I start on this topology exercise, I wanted to mention that I rather like @Peva Blanchard's mental picture regarding the topology on Λ(F)\Lambda(F). I think the basic idea is this: two germs a,ba,b are in the same open set g(s)(U)g(s)(U) if they are germs of the function sF(U)s \in F(U) for some points x,xUx,x' \in U. So, in a way, this particular continuous function s:URF(U)s:U \to \mathbb{R} \in F(U) provides a "bridge" that lets us "connect" two germs, in the sense that its set of germs is an open set containing both aa and bb.

view this post on Zulip David Egolf (May 14 2024 at 16:57):

This gets me wondering if we can define a category CC using this intuition. Let the objects of CC be the germs of FF, the elements of Λ(F)\Lambda(F). And let us put a morphism s:URF(U)s:U \to \mathbb{R} \in F(U) from aa to bb if aa and bb are both germs of ss at some points in UU. We'll also want to put a morphism s:URs:U \to \mathbb{R} from bb to aa in this case, because the condition we are checking is symmetric in aa and bb.

To make a category from this, we'd need to define composition. I'm not immediately sure if there's a nice way to do this... and I don't want to get too sidetracked, so I'll stop here.

view this post on Zulip David Egolf (May 14 2024 at 17:00):

Alright, on to this topology exercise:
John Baez said:

A function ff is continuous iff it is continuous at each point aa in its domain, and this in turn is true iff f1f^{-1} of every open set UU contained in some neighborhood VV of f(a)f(a) is open.

We've already seen above that a "function ff is continuous iff it is continuous at each point aa in its domain". We want to show that these conditions are equivalent to the condition that f1f^{-1} of every open set UU contained in some neighborhood VV of f(a)f(a) is open.

These kinds of statements still intimidate me a bit, so I'll try to draw a picture to illustrate what we're trying to prove.

view this post on Zulip David Egolf (May 14 2024 at 17:13):

Here's my picture:
picture

Here, VV is an open set containing f(a)f(a), and UU is an open set with UVU \subseteq V. I could have alternatively drawn UU so that it includes f(a)f(a), but since that isn't required I chose not to.

EDIT: I need to update the picture... the result to be shown is slightly different than what I listed above.

view this post on Zulip John Baez (May 14 2024 at 17:14):

Sorry, I left out a condition!!! I meant to require f(a)Uf(a) \in U.

view this post on Zulip John Baez (May 14 2024 at 17:16):

So, the result to be shown is "f is continuous at aa if for some neighborhood VV of f(a)f(a), the inverse image of every open subset UVU \subseteq V containing f(a)f(a) is an open set containing aa".

view this post on Zulip John Baez (May 14 2024 at 17:18):

Compare this to the definition of "continuous at aa": ff is continuous at aa iff the inverse image of every open set UU containing f(a)f(a) contains an open set containing aa.

view this post on Zulip John Baez (May 14 2024 at 17:19):

So the difference is saying it's enough to look at open sets UU containing f(a)f(a) that "aren't too big".

view this post on Zulip John Baez (May 14 2024 at 17:21):

Intuitively this makes sense, since we're talking continuity "at aa". This should only depend on what's going on near aa, and near f(a)f(a).

view this post on Zulip David Egolf (May 14 2024 at 17:35):

John Baez said:

So the difference is saying it's enough to look at open sets UU containing f(a)f(a) that "aren't too big".

Oh, I like that! That does help make it more intuitive. I'll draw a new picture now, and I'm hopeful that this intuition will be reflected in that picture as well.

view this post on Zulip David Egolf (May 14 2024 at 17:42):

Here's a picture illustrating the condition "for some neighborhood VV of f(a)f(a), the inverse image of every open subset UVU \subseteq V containing f(a)f(a) is an open set containing aa":
picture

Here, UU and VV are open sets, as is f1(U)f^{-1}(U).

view this post on Zulip David Egolf (May 14 2024 at 17:51):

We'd like to show that if ff satisfies this condition in the picture, then ff is continuous at aa. That is, we'd like to show there is some open set NN containing aa such that fNf|_N is continuous. My first guess was to try and set N=f1(V)N= f^{-1}(V). The problem with this is that f1(V)f^{-1}(V) isn't necessarily open.

view this post on Zulip David Egolf (May 14 2024 at 17:51):

Oh, wait, yes f1(V)f^{-1}(V) does have to be open! That's because VVV \subseteq V is an open set containing f(a)f(a), and so f1(V)f^{-1}(V) is an open set containing aa!

view this post on Zulip David Egolf (May 14 2024 at 17:55):

Alright, so let's set N=f1(V)N = f^{-1}(V) and try to show that fNf|_N is continuous. We have that fN:NVf|_N:N \to V, where NN and VV both have the subspace topology. Let's consider some open subset UU of VV. We'd like to show that fN1(U)f_N^{-1}(U) is open. Now, if f(a)Uf(a) \in U, we know that f1(U)f^{-1}(U) is open and hence f1(U)N=f1(U)f^{-1}(U) \cap N = f^{-1}(U) is open in NN.

It remains to consider the case where UVU \subseteq V is an open subset of VV that doesn't contain f(a)f(a). I don't immediately see how to show that fN1(U)f|_{N}^{-1}(U) is still an open subset of NN.

view this post on Zulip David Egolf (May 14 2024 at 18:03):

Well, I think I'm stuck here for the moment, but at least some progress was made. I'll stop here for today!

view this post on Zulip John Baez (May 14 2024 at 18:12):

David Egolf said:

It remains to consider the case where UVU \subseteq V is an open subset of VV that doesn't contain aa. I don't immediately see how to show that fN1(U)f|_{N}^{-1}(U) is still an open subset of NN.

You mean "doesn't contain f(a)f(a)", not "doesn't contain aa". But more importantly....

view this post on Zulip John Baez (May 14 2024 at 18:13):

I don't think this case matters. Only stuff around aa can possibly matter. Today I accidentally wrote down a bogus definition of "continuous at aa", but then I fixed it. Here's the fixed version:

view this post on Zulip John Baez (May 14 2024 at 18:14):

John Baez said:

the definition of "continuous at aa": ff is continuous at aa iff the inverse image of every open set UU containing f(a)f(a) contains an open set containing aa.

view this post on Zulip John Baez (May 14 2024 at 18:19):

So note, we're not demanding that f1(U)f^{-1}(U) is open, which would be too much since parts of UU might be very far from aa. We're just demanding that f1(U)f^{-1}(U) contain an open neighborhood of aa.

view this post on Zulip David Egolf (May 14 2024 at 20:38):

I was working from this definition of "continuous at aa": f:XYf:X \to Y is continuous at aa exactly if there is some open set NXN \subseteq X containing aa such that fN:NYf|_N:N \to Y is continuous.

Maybe next time I'll try to show that the definition I was using is equivalent to the definition which you provided:
John Baez said:

the definition of "continuous at aa": ff is continuous at aa iff the inverse image of every open set UU containing f(a)f(a) contains an open set containing aa.

view this post on Zulip John Baez (May 14 2024 at 20:48):

David Egolf said:

I was working from this definition of "continuous at aa": f:XYf:X \to Y is continuous at aa exactly if there is some open set NXN \subseteq X containing aa such that fN:NYf|_N:N \to Y is continuous.

Okay, that's a fine definition.

view this post on Zulip John Baez (May 14 2024 at 20:50):

It should be equivalent to the one I gave, but I don't mean to be overwhelming you with the task of showing lots of definitions are equivalent!

view this post on Zulip Graham Manuell (May 15 2024 at 03:54):

I don't think these two conditions are equivalent. David's is stronger. It means ff is continuous in a neighbourhood of aa.

view this post on Zulip David Egolf (May 15 2024 at 04:36):

I wondered if Schechter's "Handbook of Analysis and Its Foundations" talked about this. On page 417, it defines a function f:XYf:X \to Y to be continuous at the point x0x_0 if this condition is satisfied: the inverse image of each neighborhood of f(x0)f(x_0) is a neighborhood of x0x_0. This reminds me of the definition that @John Baez gave above. It should be noted that Schechter uses the term "neighborhood" in a way he defines on page 110: SS is a neighborhood of a point zz if zGSz \in G \subseteq S for some open set GG.

Schechter also touches on a condition similar to one I described above, saying on page 418 that a mapping f:XYf:X \to Y is continuous iff ff is "locally continuous" in the sense that each point in XX has a neighborhood NN such that fN:NYf|_N:N \to Y is continuous. He doesn't use the phrase "continuous at a point" in this context.

view this post on Zulip David Egolf (May 15 2024 at 04:41):

Schechter also says (on page 417) that the following two conditions are equivalent for a function f:XYf:X \to Y between two topological spaces:

  1. Inverse images of open sets are open
  2. ff is continuous at each point x0x_0 in XX. [So, for each x0x_0, the inverse image of each neighborhood of f(x0)f(x_0) is a neighborhood of x0x_0.]
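Condition 1 is also the easiest one to check mechanically. Just as a toy sketch (the spaces and the map below are made up, and this approach only works for finite topologies):

```python
# Check "inverse images of open sets are open" for a map between two tiny finite spaces.
X_opens = [frozenset(), frozenset({1}), frozenset({1, 2})]          # a topology on X = {1, 2}
Y_opens = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]    # a topology on Y = {a, b}

f = {1: "a", 2: "b"}   # a function f : X -> Y, written as a dict

def is_continuous(f, domain_opens, codomain_opens):
    preimage = lambda V: frozenset(x for x in f if f[x] in V)
    return all(preimage(V) in domain_opens for V in codomain_opens)

print(is_continuous(f, X_opens, Y_opens))   # True: f^{-1}({"a"}) = {1} is open, and so on
```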

view this post on Zulip David Egolf (May 15 2024 at 04:42):

Although this is all somewhat tangential to sheaves, I am pleased that - I think - I am slowly starting to get some of this topology stuff straight! :sweat_smile:

At this point, I might just assume that everything Schechter says here is true, to better focus on the main topic of this thread. Namely, by assuming the things I just listed above are true, I'd like to see if I can then prove that p:Λ(F)Xp:\Lambda(F) \to X is continuous.

view this post on Zulip John Baez (May 15 2024 at 07:35):

Sure! Schechter is organizing these things better than I am, by the way. I hadn't realized how many subtly different ways there are to say "continuous at a point", all of which are equivalent. Apparently I just make one up each time I need this concept.

view this post on Zulip David Egolf (May 15 2024 at 16:24):

Alright, let's again consider our projection function p:Λ(F)Xp: \Lambda(F) \to X which sends each germ to the point it is associated with. To show pp is continuous, we have a few different equivalent conditions available to us now. If we can prove any of these conditions are true for pp, pp is continuous:

  1. the inverse image p1(U)p^{-1}(U) of any open set UXU \subseteq X is open
  2. pp is continuous at each point [s]xΛ(F)[s]_x \in \Lambda(F), so that the inverse image of any neighborhood of xx is a neighborhood of [s]x[s]_x (where [s]x[s]_x is some germ associated to the point xx)
  3. pp is "locally continuous" in the sense that for each [s]xΛ(F)[s]_x \in \Lambda(F), there is some neighborhood NN of [s]x[s]_x so that pN:NXp|_N:N \to X is continuous

Schechter also provides several more equivalent conditions for the continuity of a function, but hopefully one of the conditions I've listed will be helpful for solving this puzzle.

view this post on Zulip David Egolf (May 15 2024 at 16:34):

I'm going to try using condition (2), because it's least familiar to me and I'm curious about it. :laughing:
So, let's consider some point [s]xΛ(F)[s]_x \in \Lambda(F). This is a germ associated to the point xXx \in X, consisting of an equivalence class of real-valued continuous functions which are each defined on some open set containing xx. (Recall that two such functions are equivalent exactly if they agree on some open set containing xx). In particular, we're considering the equivalence class of some continuous function s:URs:U \to \mathbb{R} with xUx \in U.

Now, let us introduce a neighbourhood NN of p([s]x)=xp([s]_x) = x. This is a subset of XX containing an open set NN' so that xNx \in N'. We wish to show that p1(N)p^{-1}(N) is a neighborhood of [s]x[s]_x.

view this post on Zulip David Egolf (May 15 2024 at 16:38):

By definition, p1(N)={λΛ(F)p(λ)N}p^{-1}(N) = \{\lambda \in \Lambda(F) | p(\lambda) \in N\}. That is, this preimage consists exactly of all the germs associated to points in NN. Since p([s]x)=xNp([s]_x) =x \in N, we do have [s]xp1(N)[s]_x \in p^{-1}(N). It remains to show that we can find some open subset of p1(N)p^{-1}(N) which contains [s]x[s]_x.

view this post on Zulip David Egolf (May 15 2024 at 16:42):

We've already got [s]xp1(N)[s]_x \in p^{-1}(N), and we're looking to build up an open set in Λ(F)\Lambda(F) about [s]x[s]_x consisting only of germs associated to points in NN. To build this open set, we need to find some points in Λ(F)\Lambda(F) that are "near" to [s]x[s]_x. By definition of the topology of Λ(F)\Lambda(F), we know that g(s)(U)g(s)(U) is an open subset of Λ(F)\Lambda(F). I think we can use this to get some germs "nearby" [s]x[s]_x that also sit in p1(N)p^{-1}(N).

view this post on Zulip David Egolf (May 15 2024 at 16:47):

To do this, let's restrict s:URs:U \to \mathbb{R} (with xUx \in U). We know that NNN' \subseteq N is open and contains xx. Hence UNUU \cap N' \subseteq U is an open set containing xx. Then, sNU:NURs|_{N' \cap U}: {N' \cap U} \to \mathbb{R} and g(sNU)(NU)g(s|_{N' \cap U})(N' \cap U) is an open set of Λ(F)\Lambda(F). Since NUN' \cap U is a subset of NNN' \subseteq N, g(sNU)(NU)g(s|_{N' \cap U})(N' \cap U) is an open subset of p1(N)p^{-1}(N) containing [s]x[s]_x (restricting ss to NUN' \cap U doesn't change its germ at xx, so [s]x[s]_x is among the germs of the restriction).

I think we have found an open set containing [s]x[s]_x that is a subset of p1(N)p^{-1}(N)! That is, I think we've shown that p1(N)p^{-1}(N) is a neighbourhood of [s]x[s]_x if NN is a neighbourhood of x=p([s]x)x = p([s]_x). Thus, pp is continuous at any [s]xΛ(F)[s]_x \in \Lambda(F), and hence it is continuous!

view this post on Zulip John Baez (May 15 2024 at 22:14):

I'm a bit confused because I thought you were solving this puzzle, which is not about R\mathbb{R}-valued functions, but rather an arbitrary sheaf FF on a topological space XX:

Now that we have the space of germs Λ(F)x\Lambda(F)_x for each point xX,x \in X, we define

Λ(F)=xXΛ(F)x \Lambda(F) = \bigsqcup_{x \in X} \Lambda(F)_x

There is then a unique function

p ⁣:Λ(F)Xp \colon \Lambda(F) \to X

sending everybody in Λ(F)x\Lambda(F)_x to x.x. So we've almost gotten our bundle over X.X. We just need to put a topology on Λ(F).\Lambda(F).

We do this as follows. We'll give a basis for the topology, by describing a bunch of open neighborhoods of each point in Λ(F).\Lambda(F). Remember, any point in Λ(F)\Lambda(F) is a germ. More specifically, any point pΛ(F)p \in \Lambda(F) is in some set Λ(F)x,\Lambda(F)_x, so it's the germ at xx of some sFUs \in FU where UU is an open neighborhood of x.x. But this ss has lots of other germs, too, namely its germs at all points yU.y \in U. We take this collection of all these germs to be an open neighborhood of our point p.p. A general open set in Λ(F)\Lambda(F) will then be an arbitrary union of sets like this.

Puzzle. Show that with this topology on Λ(F),\Lambda(F), the map p ⁣:Λ(F)Xp \colon \Lambda(F) \to X is continuous.

view this post on Zulip John Baez (May 15 2024 at 22:15):

Are you doing the special case where FF is the sheaf of continuous R\mathbb{R}-valued functions on a topological space XX?

view this post on Zulip David Egolf (May 15 2024 at 22:35):

John Baez said:

Are you doing the special case where FF is the sheaf of continuous R\mathbb{R}-valued functions on a topological space XX?

Yes, I was doing that special case. But I think I'll plan to next give this a try for an arbitrary sheaf FF on a topological space XX! I am hoping that the pattern of the argument will be similar.

view this post on Zulip John Baez (May 16 2024 at 05:31):

I think it should be almost identical! Working with a sheaf of continuous functions makes things easier to visualize, so it's a good test case.

view this post on Zulip David Egolf (May 16 2024 at 16:59):

That's encouraging to hear!

Let FF be a sheaf on a topological space XX. Then we wish to show that our map p:Λ(F)Xp: \Lambda(F) \to X is continuous. (Recall that pp sends each germ in Λ(F)x\Lambda(F)_x to xx). Following the argument above - which was carried out for a special case - let's consider some point [s]xΛ(F)[s]_x \in \Lambda(F), which is a germ associated to the point xXx \in X. This is the germ in Λ(F)x\Lambda(F)_x that some sheaf element sF(U)s \in F(U) belongs to, where xUx \in U.

Now, let us introduce a neighbourhood NN of p([s]x)=xp([s]_x)=x. This is a subset of XX containing an open set NN' so that xNx \in N'. We wish to show that p1(N)p^{-1}(N) is a neighbourhood of [s]x[s]_x.

view this post on Zulip David Egolf (May 16 2024 at 17:04):

By definition p1(N)p^{-1}(N) consists exactly of all the germs associated to points in NN. Since p([s]x)=xNp([s]_x)=x \in N, we do have [s]xp1(N)[s]_x \in p^{-1}(N). It remains to show that we can find some open (in Λ(F)\Lambda(F)) subset of p1(N)p^{-1}(N) which contains [s]x[s]_x.

We've already got [s]xp1(N)[s]_x \in p^{-1}(N), and we're looking to build up an open set in Λ(F)\Lambda(F) about [s]x[s]_x consisting only of germs associated to points in NN. To build this open set, we need to find some points in Λ(F)\Lambda(F) that are "near" to [s]x[s]_x.

view this post on Zulip David Egolf (May 16 2024 at 17:08):

For our sF(U)s \in F(U), let g(s)(U)g(s)(U) denote the set of all germs of ss over the various points of UU. By definition of the topology of Λ(F)\Lambda(F), this is an open set. And we know that g(s)(U)g(s)(U) contains [s]x[s]_x, as xUx \in U.

Now, from this, we wish to construct an open set of Λ(F)\Lambda(F) that contains [s]x[s]_x and is a subset of p1(N)p^{-1}(N).

view this post on Zulip David Egolf (May 16 2024 at 17:13):

To do this, we will "restrict" ss to the open set UNNU \cap N' \subseteq N which contains xx. Since we are working in the general case, this restriction is more abstract than just restricting the domain of a function. However, since FF is a presheaf, we have a restriction function rU(UN):F(U)F(UN)r|_{U \to (U \cap N')}: F(U) \to F(U \cap N') available to us, so our restriction of sF(U)s \in F(U) is simply rU(UN)(s)r|_{U \to (U \cap N')}(s). By definition of the topology of Λ(F)\Lambda(F), g(rU(UN)(s))(UN)g(r|_{U \to (U \cap N')}(s))(U \cap N') is an open set of Λ(F)\Lambda(F).

view this post on Zulip David Egolf (May 16 2024 at 17:20):

It remains to show that g(rU(UN)(s))(UN)g(r|_{U \to (U \cap N')}(s))(U \cap N') (1) is a subset of p1(N)p^{-1}(N) and (2) contains [s]x[s]_x. To show (1), note that this set consists only of germs associated to rU(UN)(s)r|_{U \to (U \cap N')}(s) over the points of UNU \cap N'. Since UNNU \cap N' \subseteq N, all of these germs belong to points in NN, and so g(rU(UN)(s))(UN)g(r|_{U \to (U \cap N')}(s))(U \cap N') is a subset of p1(N)p^{-1}(N).

To show (2), we note that restricting a presheaf element does not change its germ at a point. This is because each germ set Λ(F)x\Lambda(F)_x is the tip of a cocone for the diagram consisting of the various F(V)F(V) with VV varying over the open sets of XX containing xx, together with the restriction functions between them. So, [sF(U)]x=[rU(UN)(s)F(UN)]x[s \in F(U)]_x =[r|_{U \to (U \cap N')}(s) \in F(U \cap N')]_x. Hence [s]xg(rU(UN)(s))(UN)[s]_x \in g(r|_{U \to (U \cap N')}(s))(U \cap N').

view this post on Zulip David Egolf (May 16 2024 at 17:25):

We conclude that g(rU(UN)(s))(UN)g(r|_{U \to (U \cap N')}(s))(U \cap N') is an open set of Λ(F)\Lambda(F) containing [s]x[s]_x, and that it is also a subset of p1(N)p^{-1}(N). Thus, if NN is a neighbourhood of p([s]x)=xp([s]_x)=x, then p1(N)p^{-1}(N) is a neighborhood of [s]x[s]_x.

So, pp is continuous at any point [s]xΛ(F)[s]_x \in \Lambda(F). Hence, p:Λ(F)Xp:\Lambda(F) \to X is continuous!
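To convince myself the argument really only needs presheaf structure (and nothing about R\mathbb{R}), here is a toy computation on a tiny finite space; every name and value in it is made up purely for illustration. It builds the stalks, the space of germs, and the basic open sets g(s)(U), and then checks that the preimage under p of each open set of X is a union of basic opens.

```python
# A toy, finite sanity check (all names and data here are made up for illustration).
# X = {1, 2} with opens {}, {1}, {1,2}; F is a small presheaf on X.

opens = [frozenset(), frozenset({1}), frozenset({1, 2})]
X = {1, 2}

F = {frozenset(): {"*"}, frozenset({1}): {"a"}, frozenset({1, 2}): {"f", "g"}}

def restrict(s, U, V):
    """The restriction map F(U) -> F(V) for V a subset of U (here both f and g restrict to a)."""
    assert V <= U
    if V == U:
        return s
    return "*" if not V else "a"

def germ(x, U, s):
    """The germ [s]_x, encoded as the set of all pairs (V, t) that agree with (U, s) near x."""
    cls = set()
    for V in opens:
        if x not in V:
            continue
        for t in F[V]:
            if any(x in W and W <= U and W <= V and
                   restrict(s, U, W) == restrict(t, V, W) for W in opens):
                cls.add((tuple(sorted(V)), t))
    return frozenset(cls)

# The space of germs Lambda(F), the projection p, and the basic open sets g(s)(U).
Lam = {(x, germ(x, U, s)) for x in X for U in opens if x in U for s in F[U]}

def p(point):
    return point[0]

basic_opens = [frozenset((y, germ(y, U, s)) for y in U) for U in opens if U for s in F[U]]

# p is continuous: p^{-1}(U) is a union of basic opens for every open U of X.
for U in opens:
    preimage = {pt for pt in Lam if p(pt) in U}
    covered = set()
    for B in basic_opens:
        if B <= preimage:
            covered |= B
    print(sorted(U), "p^{-1}(U) open?", covered == preimage)   # True for all three opens
```

In this toy example the stalk at 1 has a single germ (the sections a, f and g all agree after restriction to {1}), the stalk at 2 has two germs, and all three checks print True.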

view this post on Zulip David Egolf (May 16 2024 at 17:31):

I don't think I used the fact that FF was a sheaf anywhere, although I did make use of the fact that FF was a presheaf. So, I think the same result should hold for FF an arbitrary presheaf over some topological space XX.

view this post on Zulip David Egolf (May 16 2024 at 17:34):

Assuming the above is correct, we have now shown that we can make a bundle p:Λ(F)Xp:\Lambda(F) \to X from a presheaf FF on XX! The next puzzle asks us to upgrade this process to get a functor Λ:O(X)^Top/X\Lambda:\widehat{\mathcal{O}(X)} \to \mathsf{Top}/X. Here O(X)^\widehat{\mathcal{O}(X)} is the category of presheaves on XX and Top/X\mathsf{Top}/X is the category of bundles over XX.

To define our functor Λ\Lambda, as a first step we'll need to show how to get a morphism of bundles from a morphism of presheaves. (I'll stop here for today!)

view this post on Zulip John Baez (May 16 2024 at 17:47):

Great! All this looks good, and I especially like how you "psychoanalyzed" your proof and noticed that it works for presheaves. I probably should have posed the puzzle for presheaves.

While I forget exactly what I did in the course notes, I imagine soon we'll do something like this:

1) get a functor from presheaves on XX to bundles over XX, sending each presheaf to its bundle of germs
2) get a functor from bundles on XX to sheaves over XX, sending each bundle to its sheaf of sections
3) compose these functors to get a functor from presheaves to sheaves, called sheafification
4) show that sheafification is left adjoint to the obvious forgetful functor from sheaves on XX to presheaves on XX.

view this post on Zulip Peva Blanchard (May 17 2024 at 05:13):

What a nice thread. I really like how the calm pace of this discussion leads to upbeat non-trivial concepts like sheafification.

view this post on Zulip David Egolf (May 17 2024 at 17:32):

"Sheafification" is such a fun word... it reminds me of another fun word I learned recently: "rectangulation"! Peeking ahead in the blog post, I suppose we're probably going to have an "ètalification" functor too, given by composing our functors in the opposite order (so we get a functor that converts each bundle to an étale bundle).

view this post on Zulip David Egolf (May 17 2024 at 17:39):

Anyways, to get there, we first need to show we really do have a functor G:O(X)^Top/XG: \widehat{\mathcal{O}(X)} \to \mathsf{Top}/X which converts presheaves and presheaf morphisms to bundles and bundle morphisms. (This functor is called Λ\Lambda in the blog post, but I'll call it GG for the moment, to avoid confusion due to the fact that Λ(F)\Lambda(F) means the space of all the germs of FF).

We just saw that we can get a bundle G(F)G(F) from a presheaf FF on a topological space XX by forming the bundle of germs G(F):Λ(F)XG(F):\Lambda(F) \to X, which sends each germ to the point it belongs to. Now, let's assume we have a morphism of presheaves on XX, namely α:FF\alpha: F \to F'. We wish to construct a morphism of bundles from G(F):Λ(F)XG(F):\Lambda(F) \to X to G(F):Λ(F)XG(F'):\Lambda(F') \to X. That means we're looking for a continuous map G(α):Λ(F)Λ(F)G(\alpha): \Lambda(F) \to \Lambda(F') so that G(F)G(α)=G(F)G(F') \circ G(\alpha) = G(F). Strictly speaking, G(α)G(\alpha) has source G(F)G(F) and target G(F)G(F'), but I'll use the same symbol to refer to its underlying continuous map from Λ(F)\Lambda(F) to Λ(F)\Lambda(F'). Hopefully this won't be too confusing!

view this post on Zulip David Egolf (May 17 2024 at 17:52):

Here's the situation in picture form:
induced morphism of induced bundles

We wish to find some G(α)G(\alpha) so that this diagram commutes.

view this post on Zulip David Egolf (May 17 2024 at 17:55):

For this diagram to commute, we must have that G(α)G(\alpha) maps germs of FF associated to xXx \in X to germs of FF' associated to xx. So, we can consider the function G(α)G(\alpha) as being formed from multiple functions, one for each xXx \in X. I'll call the xx-th function G(α)x:Λ(F)xΛ(F)xG(\alpha)_x:\Lambda(F)_x \to \Lambda(F')_x, where Λ(F)x\Lambda(F)_x is the set of germs of FF associated to xx.

I'm not sure how to define G(α)xG(\alpha)_x. But maybe we can start by considering αU:F(U)F(U)\alpha_U:F(U) \to F'(U) as UXU \subseteq X becomes a smaller and smaller open set that contains xx. Intuitively, I'd like to set G(α)xG(\alpha)_x to be some kind of "limit" of αU\alpha_U as UU approaches xx.

view this post on Zulip David Egolf (May 17 2024 at 18:14):

I'm wondering if there is some way to define G(α)xG(\alpha)_x as some colimit, analogous to how Λ(F)x\Lambda(F)_x is (part of) a colimit. This is the picture I've been staring at:
picture

Maybe we could try to define G(α)xG(\alpha)_x as the (hopefully unique) function that makes this diagram commute? I've only drawn part of the full diagram I have in mind; we should have F(U)F(U) and F(U)F'(U) present in the full diagram as UU varies over all open sets containing xx.

view this post on Zulip David Egolf (May 17 2024 at 18:47):

Trying to draw the full picture, I thought of a diagram involving functors and natural transformations:
picture 2

In this picture, the functors map to Set\mathsf{Set} from the full subcategory of O(X)op\mathcal{O}(X)^{\mathrm{op}} given by taking only the open sets containing xx. II is the inclusion functor from this full subcategory. The natural transformations pointing down the page correspond to our colimit co-cones. Finally, ΔΛ(F)x\Delta_{\Lambda(F)_x} is the functor constant at the set Λ(F)x\Lambda(F)_x, and ΔΛ(F)x\Delta_{\Lambda(F')_x} is defined similarly.

The idea is that G(α)xG(\alpha)_x could (hopefully) be defined in terms of the (hopefully) unique natural transformation making this diagram commute.

I'm not sure if this is a good direction to explore... I'll stop here for today. Any hints or thoughts relating to G(α)G(\alpha) or G(α)xG(\alpha)_x would be most welcome!

view this post on Zulip Peva Blanchard (May 17 2024 at 21:13):

Trying to define G(α)xG(\alpha)_x as the "colimit of the αU\alpha_U's" reveals a good mental picture, but maybe too involved for a formal proof.

Instead, I suggest to look at how G(α)G(\alpha) acts on a specific germ [s]x[s]_x at xx, e.g., choosing a representative sFUs \in FU for some open neighborhood UU of xx. How would you define the germ G(α)([s]x)G(\alpha)([s]_x)?

view this post on Zulip Peva Blanchard (May 17 2024 at 22:48):

David Egolf said:

"Sheafification" is such a fun word... it reminds me of another fun word I learned recently: "rectangulation"! Peeking ahead in the blog post, I suppose we're probably going to have an "ètalification" functor too, given by composing our functors in the opposite order (so we get a functor that converts each bundle to an étale bundle).

I got interested in that. I think I found a proof that we have an adjunction ΛΓ\Lambda \dashv \Gamma. If true, this implies that sheafification is a monad, while étalification is a comonad.

view this post on Zulip Peva Blanchard (May 18 2024 at 09:30):

(@Eric M Downes It took me a few seconds to understand the emoji "Grothenwoke" :D)

view this post on Zulip Eric M Downes (May 18 2024 at 09:52):

His eyes shoot stalks!

view this post on Zulip David Egolf (May 18 2024 at 20:05):

Peva Blanchard said:

Trying to define G(α)xG(\alpha)_x as the "colimit of the αU\alpha_U's" reveals a good mental picture, but maybe too involved for a formal proof.

Instead, I suggest to look at how G(α)G(\alpha) acts on a specific germ [s]x[s]_x at xx, e.g., choosing a representative sFUs \in FU for some open neighborhood UU of xx. How would you define the germ G(α)([s]x)G(\alpha)([s]_x)?

I just realized that where I left off above and your hint here are (I think) quite related. If the diagram I drew above is to commute, then it must commute in particular at each component of the natural transformations involved. Requiring commutativity at the UU-th component, we then want this diagram to commute:
diagram at U

If this is to commute, then it must in particular commute at each element. So, pick some s(FI)(U)s \in (F \circ I)(U). (Note that xUx \in U automatically, by definition of the subcategory II is mapping from). Then we need [αU(s)]x=G(α)x([s]x)[\alpha_U(s)]_x = G(\alpha)_x([s]_x). There is still some work left to define G(α)G(\alpha) from this, but this feels like progress.

(In general, I wonder if this kind of thing can provide an interesting strategy for trying to induce a map between two colimits of different diagrams).

view this post on Zulip Peva Blanchard (May 18 2024 at 21:21):

Yes that's right. I think you can already define G(α):Λ(F)Λ(F)G(\alpha) : \Lambda(F) \rightarrow \Lambda(F') point-wise.

You can use the formula you inferred from the naturality condition: for any germ of the form [s]x[s]_x, with sFUs \in FU and UU an open neighborhood of xx,

G(α)([s]x)=def[αU(s)]xG(\alpha)([s]_x) \underset{\text{def}}{=} [\alpha_U(s)]_x

But, first, you need to prove that this is well-defined, i.e., that this definition is invariant when we choose another representative [s]x=[t]x[s]_x = [t]_x for some other tFVt \in FV over another neighborhood VV of xx.

view this post on Zulip John Baez (May 19 2024 at 06:12):

Yes, I would be inclined to define the map between etale spaces over a space XX coming from a map between presheaves on XX by saying what it does to each germ. I'd do that by treating a germ as an equivalence class of sections, then doing the standard trick of choosing a representative of that equivalence class, writing down some formula that parses, and then checking that the answer doesn't depend on the representative.

I consider all of this "follow your nose" mathematics: writing down the only guess you can easily think of given the data available, then checking it works. I never considered David's more thoughtful approach of working explicitly with colimit diagrams. Probably it's because I consider that more "bulky", and harder to do calculations with. So while I applaud David's approach in spirit I would unthinkingly have taken Peva's approach, and I think in practice that's the easier one.

view this post on Zulip David Egolf (May 19 2024 at 18:21):

I'm all for "following my nose"... the only problem with the nose-following approach is that sometimes my nose doesn't know the right way to go :sweat_smile:. But I suppose that mostly comes with experience.

view this post on Zulip David Egolf (May 19 2024 at 18:24):

By the way, after considering this diagram some more...
diagram

...I realized that composing the morphisms along the top and right-hand side of this diagram gives us a natural transformation from FIF \circ I to a functor that is constant at a particular object. That is, these morphisms compose to give us a co-cone under FIF \circ I. Then the unique existence of G(α)xG(\alpha)_x follows by the fact that our set of germs of FF at xx is the tip of a colimit cocone (and hence is initial among cocones under FIF \circ I)!

view this post on Zulip David Egolf (May 19 2024 at 18:28):

I still want to check "by hand" that setting G(α)([s]x)=[αU(s)]xG(\alpha)([s]_x) = [\alpha_U(s)]_x is "well-defined". (Although I suspect it must be, at this point, in light of the paragraph immediately above this one).

We need to check that if [s]x=[t]x[s]_x = [t]_x for some tF(V)t \in F(V) for xVx \in V, then [αU(s)]x=[αV(t)]x[\alpha_U(s)]_x = [\alpha_V(t)]_x.

view this post on Zulip David Egolf (May 19 2024 at 18:52):

I want to use the fact that restricting a sheaf element doesn't change the germ it belongs to. So, I'm aiming to show that αU(s)\alpha_U(s) and αV(t)\alpha_V(t) restrict to the same thing on some open set containing xx.

Now, we know that [s]x=[t]x[s]_x = [t]_x, with sF(U)s \in F(U) and tF(V)t \in F(V). I think we proved a while ago that this implies there is some open set WW containing xx so that sW=tWs|_W = t|_W.

view this post on Zulip David Egolf (May 19 2024 at 18:56):

I'll use this fact, together with the naturality of α\alpha, referencing this diagram:
diagram

view this post on Zulip David Egolf (May 19 2024 at 19:01):

We start with sF(U)s \in F(U) and tF(V)t \in F(V). WW was defined so that we have F(rUW)(s)=F(rVW)(t)F(r_{U \to W})(s) = F(r_{V\to W})(t). Consequently, αWF(rUW)(s)=αWF(rVW)(t)\alpha_W \circ F(r_{U \to W})(s) = \alpha_W \circ F(r_{V\to W})(t). Since the left and right "trapezoids" of our diagram commute (because α\alpha is a natural transformation), we have that F(rVW)αV=αWF(rVW)F'(r_{V \to W}) \circ \alpha_V = \alpha_W \circ F(r_{V\to W}) and similarly F(rUW)αU=αWF(rUW)F'(r_{U\to W}) \circ \alpha_U = \alpha_W \circ F(r_{U \to W}).

Putting this all together, we find that F(rUW)(αU(s))=F(rVW)(αV(t))F'(r_{U \to W})(\alpha_U(s)) =F'(r_{V \to W})(\alpha_V(t)). Thus, αU(s)W=αV(t)W\alpha_U(s)|_W = \alpha_V(t)|_W.
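Written as one chain of equalities (nothing new here, just the two naturality squares combined with the fact that s and t agree on W):

F'(r_{U \to W})(\alpha_U(s)) = \alpha_W(F(r_{U \to W})(s)) = \alpha_W(F(r_{V \to W})(t)) = F'(r_{V \to W})(\alpha_V(t))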

view this post on Zulip David Egolf (May 19 2024 at 19:08):

Since αU(s)F(U)\alpha_U(s) \in F'(U) and αV(t)F(V)\alpha_V(t) \in F'(V) restrict to the same thing on WW (which is an open set containing xx), and since two sheaf elements have the same germ at xx if they restrict to the same thing in some open set containing xx, we conclude that [αU(s)]x=[αV(t)]x[\alpha_U(s)]_x = [\alpha_V(t)]_x.

So, if [s]x=[t]x[s]_x = [t]_x for some tF(V)t \in F(V) with xVx \in V, we have that [αU(s)]x=[αV(t)]x[\alpha_U(s)]_x = [\alpha_V(t)]_x as desired. We conclude that setting G(α)([s]x)=[αU(s)]xG(\alpha)([s]_x) = [\alpha_U(s)]_x actually defines a function!

view this post on Zulip David Egolf (May 19 2024 at 19:15):

It still remains to show that G(α):Λ(F)Λ(F)G(\alpha):\Lambda(F) \to \Lambda(F') is continuous. But I will leave that for another day!

view this post on Zulip David Egolf (May 19 2024 at 19:54):

This is a bit tangential, but the above has helped me realize how we can take the "limit" or "colimit" of a natural transformation between two diagrams with limits or colimits! This seems pretty cool because it lets us "condense" the data of a natural transformation (which could consist of many morphisms) to a single morphism.

Here's a picture illustrating how this works for limits:
picture

view this post on Zulip David Egolf (May 19 2024 at 19:56):

Here DD and DD' are diagrams of the same shape and α:DD\alpha:D' \to D is a natural transformation. uDu_D and uDu_{D'} correspond to the limit cones over DD and DD'. Then composing αuD\alpha \circ u_{D'} gives us a cone over DD. Since uDu_D is the terminal such cone, there is a unique morphism :limDlimD:\lim D' \to \lim D which induces a natural transformation :ΔlimDΔlimD:\Delta_{\lim D'} \to \Delta_{\lim D} so that the diagram commutes.

For example, in a category with products, I expect that the "product" f×gf \times g of two morphisms is a limα\lim \alpha, in the case where DD and DD' are discrete diagrams with two objects. In this case, the data of a natural transformation α\alpha corresponds to two morphisms ff and gg.
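As a quick sanity check, here is that discrete case worked out in \mathsf{Set} (a small example of my own, not taken from the blog post): if D and D' pick out the pairs of sets (A, B) and (A', B'), and \alpha consists of f: A' \to A and g: B' \to B, then \lim \alpha should be the function

A' \times B' \to A \times B, \qquad (a, b) \mapsto (f(a), g(b)),

since this is the unique function making both projection squares commute.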

view this post on Zulip Peva Blanchard (May 19 2024 at 20:08):

Yes, I've been thinking about "condensing a natural transformation" too, and your "colimit of natural transformations" picture.

I will probably open another topic to discuss the details, but here is an overview.

There is a fact about the category of presheaves on a topological space (or more generally, presheaves on any small category): it is cartesian closed.

This means that if you have two presheaves F,FF, F' on XX, there is another presheaf [F,F][F, F']. Intuitively, the presheaf [F,F][F, F'] represents (in the category of presheaves on XX) all the natural transformations from FF to FF'.
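If I remember the standard construction correctly (this isn't spelled out above, so treat it as my assumption), this internal hom can be described concretely on each open set U by

[F, F'](U) \cong \mathrm{Nat}(F|_U, F'|_U),

the set of natural transformations between F and F' restricted to the open subsets of U, with the restriction maps of [F, F'] given by restricting such a transformation to a smaller open set.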

Then, you can look at the associated bundle Λ([F,F])\Lambda([F, F']) over XX. And there are interesting things.

For instance, the function G(α)xG(\alpha)_x you were looking for a few messages above would correspond to a point of this bundle. And G(α)G(\alpha) would correspond to a section of this bundle (a global section, i.e., over the entire space XX).

Actually, any natural transformation from FF to FF' yields a global section of the bundle Λ([F,F])\Lambda([F, F']); in fact, I believe this gives a correspondence between natural transformations from FF to FF' and global sections of Λ([F,F])\Lambda([F, F']).

view this post on Zulip John Baez (May 20 2024 at 10:51):

David Egolf said:

I'm all for "following my nose"... the only problem with the nose-following approach is that sometimes my nose doesn't know the right way to go :sweat_smile:. But I suppose that mostly comes with experience.

That's true. But now that you've had an experience, I hope you see that this strategy counts as following your nose, each step following naturally from the one before:

view this post on Zulip John Baez (May 20 2024 at 11:05):

I would count this as sufficient, though there are certainly details one can unpack here, which you unpacked in your much more careful argument here.

view this post on Zulip David Egolf (May 21 2024 at 16:37):

Given a morphism of presheaves α:FF\alpha: F \to F', where each presheaf is a presheaf on a topological space XX, we were able to define a function from Λ(F)\Lambda(F) to Λ(F)\Lambda(F'), which sends each germ of FF at a point to a germ of FF' at that same point. Namely, we got the function G(α):Λ(F)Λ(F)G(\alpha):\Lambda(F) \to \Lambda(F') which acts by G(α)[s]x=[αU(s)]xG(\alpha)[s]_x = [\alpha_U(s)]_x, where sF(U)s \in F(U) and xXx \in X.

It remains to show that G(α):Λ(F)Λ(F)G(\alpha): \Lambda(F) \to \Lambda(F') is continuous.

view this post on Zulip David Egolf (May 21 2024 at 16:48):

At this point, part of me wishes we had defined the topology on Λ(F)\Lambda(F) (and on Λ(F)\Lambda(F')) in a different but equivalent way, in terms of some universal property. I am guessing that doing that might help make it clearer why G(α)G(\alpha) needs to be continuous.

view this post on Zulip David Egolf (May 21 2024 at 16:50):

Before thinking about that, let me see how far I can get while working with the definition we've used so far. I will try to show that G(α)G(\alpha) is continuous at an arbitrary point [s]xΛ(F)[s]_x \in \Lambda(F), where sF(U)s \in F(U) and xUx \in U. Let NN be a neighborhood of G(α)([s]x)G(\alpha)([s]_x). This is a subset of Λ(F)\Lambda(F') containing an open set NN' so that [αU(s)]xN[\alpha_U(s)]_x \in N'. We wish to show that G(α)1(N)G(\alpha)^{-1}(N) is a neighbourhood of [s]x[s]_x.

To show that G(α)1(N)G(\alpha)^{-1}(N) is a neighbourhood of [s]x[s]_x it suffices to find some open set that contains [s]x[s]_x and is a subset of G(α)1(N)G(\alpha)^{-1}(N).

view this post on Zulip David Egolf (May 21 2024 at 17:12):

To get further, I want to find an open set containing [αU(s)]x[\alpha_U(s)]_x using ss. Since sF(U)s \in F(U) and αU:F(U)F(U)\alpha_U:F(U) \to F'(U), we have that αU(s)F(U)\alpha_U(s) \in F'(U). Hence, the set g(αU(s))(U)g(\alpha_U(s))(U) of all the germs of αU(s)\alpha_U(s) over UU forms an open set of Λ(F)\Lambda(F'). And since xUx \in U, [αU(s)]xg(αU(s))(U)[\alpha_U(s)]_x \in g(\alpha_U(s))(U).

view this post on Zulip David Egolf (May 21 2024 at 17:17):

We've just seen that g(αU(s))(U)g(\alpha_U(s))(U) is an open subset of Λ(F)\Lambda(F') containing [αU(s)]x[\alpha_U(s)]_x. Next, I want to create an open set from this one, aiming to obtain a subset of our neighbourhood NN. Since NN is a neighbourhood of [αU(s)]x[\alpha_U(s)]_x, it contains an open set NN' that contains [αU(s)]x[\alpha_U(s)]_x. Consequently, g(αU(s))(U)Ng(\alpha_U(s))(U) \cap N' is an open set containing [αU(s)]x[\alpha_U(s)]_x that is a subset of NN.

view this post on Zulip David Egolf (May 21 2024 at 17:22):

This set g(αU(s))(U)Ng(\alpha_U(s))(U) \cap N' is some open set that is a subset of g(αU(s))(U)g(\alpha_U(s))(U). Hence it is the union of sets of the form g(s)(V)g(s')(V), by definition of the topology of Λ(F)\Lambda(F'). Each of these sets, being a subset of g(αU(s))(U)g(\alpha_U(s))(U), contains only germs belonging to αU(s)\alpha_U(s) over some subset of UU. So, each of these open sets is really of the form g(αU(s))(V)g(\alpha_U(s))(V) for VUV \subseteq U an open subset of XX.

Since this set contains [αU(s)]x[\alpha_U(s)]_x, there is some open VXV \subseteq X containing xx such that [αU(s)]xg(αU(s))(V)[\alpha_U(s)]_x \in g(\alpha_U(s))(V). This g(αU(s))(V)g(\alpha_U(s))(V) is an open set of Λ(F)\Lambda(F') containing [αU(s)]x[\alpha_U(s)]_x, that is also a subset of NN. Hence, G(α)1(g(αU(s))(V))G(\alpha)^{-1}(g(\alpha_U(s))(V)) is a subset of G(α)1(N)G(\alpha)^{-1}(N) containing [s]x[s]_x. If we can show that G(α)1(g(αU(s))(V))G(\alpha)^{-1}(g(\alpha_U(s))(V)) is open, then I think we will have shown that G(α)G(\alpha) is continuous at [s]x[s]_x.

view this post on Zulip David Egolf (May 21 2024 at 17:39):

G(α)1(g(αU(s))(V))G(\alpha)^{-1}(g(\alpha_U(s))(V)) is a bit of a mouthful, but I'm hoping working with it won't be too bad. I'll stop here for today though!

view this post on Zulip John Baez (May 22 2024 at 10:44):

Hmm, something seems 'heavy' about this discussion so far. Let me see if I can lighten it a bit. I'll follow my nose for a little while and see where it leads, but I won't go too far.

We're trying to show that G(α):Λ(F)Λ(F)G(\alpha): \Lambda(F) \to \Lambda(F') is continuous. So we should think about how we defined the topology on Λ(F)\Lambda(F). In my course notes I said something like this:

We'll give a basis for the topology, by describing a bunch of open neighborhoods of each point in Λ(F).\Lambda(F). Remember, any point in Λ(F)\Lambda(F) is a germ. More specifically any point of pΛ(F)p \in \Lambda(F) is in some stalk Λ(F)x,\Lambda(F)_x, so it's the germ at xx of some sFUs \in FU where UU is an open neighborhood of x.x. But this ss has lots of other germs, too, namely its germs at all points yU.y \in U. We take this collection of all these germs to be an open neighborhood of our point p.p. A general open set in Λ(F)\Lambda(F) will then be an arbitrary union of sets like this.

view this post on Zulip John Baez (May 22 2024 at 10:45):

The description of the topology must determine the strategy for how we'll show G(α)G(\alpha) is continuous. Since inverse images automatically preserve unions, we don't need to check that the inverse image of a general open set under G(α)G(\alpha) is open. It's enough to check it for the open neighborhoods of the form described above. So let's make up some convenient notation for them.

We've already decided to call any point in Λ(F)\Lambda(F) something like [s]x[s]_x where xXx \in X and sFUs \in FU, where UU is any open neighborhood of xx.

Above I described a basis of open neighborhoods of [s]x[s]_x, which are sets like this:

{[s]y    yU} \{ [s]_y \; \vert \; y \in U \}

The vertical bar means "such that". I used to use a colon to mean "such that", but I decided that was confusing.

This is an efficient notation for our basis of open neighborhoods, so we should be able to do computations with it fairly painlessly.

view this post on Zulip John Baez (May 22 2024 at 10:50):

Similarly, any point in Λ(F)\Lambda(F') will be an equivalence class like [t]x[t]_x where xXx \in X and tFUt \in F'U, where UU is any open neighborhood of xx. And we get a basis of open neighborhoods of [t]x[t]_x that are sets like this:

{[t]y    yU} \{ [t]_y \; \vert \; y \in U \}

view this post on Zulip John Baez (May 22 2024 at 10:55):

Since we understand the topology on Λ(F)\Lambda(F) and Λ(F)\Lambda(F') in terms of a basis of open neighborhoods, to show G(α):Λ(F)Λ(F)G(\alpha) : \Lambda(F) \to \Lambda(F') is continuous we should check continuity at a point for every point [s]xΛ(F)[s]_x \in \Lambda(F).

view this post on Zulip John Baez (May 22 2024 at 11:02):

We saw G(α)G(\alpha) maps [s]x[s]_x to [α(s)]x[\alpha(s)]_x. So to check that G(α)G(\alpha) is continuous at the point [s]x[s]_x, in principle we need to check that the inverse image of any open neighborhood of [α(s)]x[\alpha(s)]_x is an open neighborhood of [s]x[s]_x.

But since I'm a master of topology, I know it's enough to check that the inverse image of one of our "basis" open neighborhoods contains a "basis" open neighborhood. I realize now that this may be a potential stumbling block for you, @David Egolf - it's a trick one learns in a topology class.

view this post on Zulip John Baez (May 22 2024 at 11:13):

We've found a slick notation for these "basis" open neighborhoods. So let's write down one of these open neighborhoods of [α(s)]x[\alpha(s)]_x. It will look like this:

{[α(s)]y    yU} \{ [\alpha(s)]_y \; \vert \; y \in U \}

view this post on Zulip John Baez (May 22 2024 at 11:15):

Now let's figure out its inverse image and see if it contains an open neighborhood of [s]x[s]_x!

Well, I had better stop here... I may have already done too much, but I wanted to reach what I called the "potential stumbling block".

view this post on Zulip David Egolf (May 22 2024 at 17:45):

Wow, thanks! It will take me some time to work through what you just said, but it looks to be quite helpful! I didn't expect topology to come up quite so often in these blog posts :sweat_smile:. But it's good - I'm happy to be learning about these practical topology strategies!

view this post on Zulip Todd Trimble (May 22 2024 at 17:57):

But since I'm a master of topology, I know it's enough to check that the inverse image of one of our "basis" open neighborhoods contains a "basis" open neighborhood.

John knows this of course, but this should be changed to read: every element in the inverse image of a basic open has a basic open neighborhood contained in the inverse image.

view this post on Zulip John Baez (May 22 2024 at 18:32):

Here's what I was trying to say. I was using the concept of "basis of open neighborhoods", which this link calls simply a "neighborhood basis":

(Check the link for the definition, David.) The reason is that in my lecture notes I described the topology on etale spaces in terms of a neighborhood basis, while avoiding the use of the jargon "neighborhood basis".

Here's how I was using it:

Say quite generally that we have a function f:XYf: X \to Y between topological spaces XX and YY, and we're trying to show ff is continuous at some point xXx \in X. Say we know a neighborhood basis for every point xXx \in X and every point yYy \in Y. Then to show ff is continuous at xXx \in X, it's enough to check that for every UU in the neighborhood basis of f(x)f(x), the inverse image f1(U)f^{-1}(U) contains a set in the neighborhood basis of xx.
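(A familiar instance, just as a sanity check: for a function f : \mathbb{R} \to \mathbb{R}, with the open intervals around a point as its neighborhood basis, this criterion says that for every \varepsilon > 0 there is a \delta > 0 with

(x - \delta, x + \delta) \subseteq f^{-1}\big( (f(x) - \varepsilon, f(x) + \varepsilon) \big)

which is the usual \varepsilon-\delta definition of continuity at x.)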

view this post on Zulip John Baez (May 22 2024 at 18:37):

By now it seems like I've slipped into acting like David understands the concept of "neighborhood basis", which is unreasonable. I guess I'm being a bad teacher and describing how I'd solve a homework problem by following my nose, without remembering just how long my nose has grown over the years, and how long this has taken.

view this post on Zulip John Baez (May 22 2024 at 18:41):

If I'd left him alone David would have solved the problem in his own way. I may have made the bad teacher's mistake of saying "oh, why don't you just do this?", where "this" is some trick only known to the teacher.

view this post on Zulip David Egolf (May 23 2024 at 15:30):

John Baez said:

If I'd left him alone David would have solved the problem in his own way. I may have made the bad teacher's mistake of saying "oh, why don't you just do this?", where "this" is some trick only known to the teacher.

I'm very glad to learn about new strategies or tricks! People pointing out different ways to think about a problem is one of the things I hoped would happen when I started this thread. I've run across at least some of the topology concepts you're using above, but I'm excited to see how they can make a specific problem easier to solve.

view this post on Zulip David Egolf (May 23 2024 at 15:32):

John Baez said:

The description of the topology must determine the strategy for how we'll show G(α)G(\alpha) is continuous. Since inverse images automatically preserve unions, we don't need to check that the inverse image of a general open set under G(α)G(\alpha) is open. It's enough to check it for the open neighborhoods of the form described above. So let's make up some convenient notation for them.

This makes sense. Let UU be an arbitrary open set of Λ(F)\Lambda(F'). Then we wish to show that its inverse image G(α)1(U)G(\alpha)^{-1}(U) is also open. But since we have a basis for the topology on Λ(F)\Lambda(F'), we know that UU is the union of some bib_i, where each bib_i is in our basis. Then G(α)1(U)=G(α)1(ibi)=iG(α)1(bi)G(\alpha)^{-1}(U) = G(\alpha)^{-1}(\cup_i b_i) = \cup_i G(\alpha)^{-1}(b_i). Since the union of open sets is open, we see that if the inverse image of each basis set is open, then the inverse image of arbitrary open sets is open.

view this post on Zulip John Baez (May 23 2024 at 15:46):

That's the idea! Later, due to a remark by Todd, I started discussing the difference between a 'basis of open sets' and a 'basis of open neighborhoods of a point p'. The open sets I described are both of these, depending on whether you hold p (your germ) fixed or let it vary. The 'basis of open neighborhoods of a point' idea is especially nice for studying continuity at that point. But maybe you don't need to worry about this until you run into it on your own.

view this post on Zulip David Egolf (May 23 2024 at 15:47):

Next up, I'd like to review the concept of a "basis of open neighbourhoods", which you're using above. I think I've actually seen this before, but I haven't used it to solve problems yet. I referenced this and this.

I think this is the definition of a basis of open neighbourhoods at a point pp in some topological space TT: it is a collection BpB_p of open subsets of TT, each containing pp, such that for any open subset UU that contains pp there is some VBpV \in B_p so that VUV \subseteq U. Intuitively, this is a collection of open sets "about pp" that lets us get "arbitrarily close" to pp.
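(A standard example, just for intuition: in a metric space, the open balls

B(p, 1/n) = \{ q \; \vert \; d(p,q) < 1/n \}, \qquad n = 1, 2, 3, \dots

form a basis of open neighbourhoods at p, since any open set containing p contains some small ball around p.)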

view this post on Zulip David Egolf (May 23 2024 at 16:10):

John Baez said:

Say quite generally that we have a function f:XYf: X \to Y between topological spaces XX and YY, and we're trying to show ff is continuous at some point xXx \in X. Say we know a neighborhood basis for every point xXx \in X and every point yYy \in Y. Then to show ff is continuous at xXx \in X, it's enough to check that for every UU in the neighborhood basis of f(x)f(x), the inverse image f1(U)f^{-1}(U) contains a set in the neighborhood basis of xx.

Let me try to relate this condition for continuity at a point to the one I was using earlier. Above, I was trying to show continuity at xx by checking that the inverse image of a (not necessarily open) neighbourhood of f(x)f(x) is a neighbourhood of xx.

I'd like to think about how it is enough to consider the inverse image only of (open) neighbourhood basis sets, instead of the inverse image of arbitrary neighbourhoods of f(x)f(x). Throughout, I assume we have a neighbourhood basis for f(x)f(x) and for xx. Let NN be an arbitrary neighbourhood of f(x)f(x). It contains an open set NN' that contains f(x)f(x). Then, there is some VV in our neighbourhood basis for f(x)f(x) so that VNNV \subseteq N' \subseteq N. If f1(V)f^{-1}(V) contains an open set containing xx, then certainly f1(N)f^{-1}(N) contains an open set containing xx. So, if the inverse image of every (open) neighbourhood basis set for f(x)f(x) is a neighbourhood of xx, then the inverse image of any neighbourhood of f(x)f(x) is a neighbourhood of xx.

To show that the inverse image of a (open) neighbourhood basis set of f(x)f(x) is a neighbourhood of xx, it suffices to show that its inverse image contains an open set containing xx. If its inverse image contains some set in the neighbourhood basis of xx, then its inverse image certainly contains an open set containing xx.

In conclusion: to show that ff is continuous at xx, it suffices to show that the inverse image of each set in a neighbourhood basis of open sets for f(x)f(x) contains some set in a neighbourhood basis of open sets for xx.

view this post on Zulip David Egolf (May 23 2024 at 16:29):

To use the above in our case, we need to figure out a neighbourhood basis for each [s]xΛ(F)[s]_x \in \Lambda(F), and for each G(α)([s]x)=[α(s)]xG(\alpha)([s]_x) = [\alpha(s)]_x. To do that, we need to figure out a strategy for getting "arbitrarily close" to these points.

To get a bunch of open sets containing [s]x[s]_x that "get arbitrarily close" to it, the first idea that comes to mind for me is to take the open set of germs of ss as we restrict its domain to be smaller and smaller open sets about xx. So, the sets in our proposed open neighbourhood basis for [s]x[s]_x are of the form g(s)(U)g(s)(U) as UU becomes a smaller and smaller open set containing xx. In alternate notation, they are of the form {[s]yyU}\{[s]_y | y \in U\}, as UU ranges over all the open sets containing xx. I think @John Baez was indicating that we really do get a (open) neighbourhood basis for [s]xΛ(F)[s]_x \in \Lambda(F) in this way. However, I don't immediately see how to prove this.

I'll stop here for today! Next time, I'm hoping to prove that we do get a neighbourhood basis in this way for a point [s]xΛ(F)[s]_x \in \Lambda(F).

view this post on Zulip John Baez (May 23 2024 at 19:54):

David Egolf said:

So, the sets in our proposed open neighbourhood basis for [s]x[s]_x are of the form g(s)(U)g(s)(U) as UU becomes a smaller and smaller open set containing xx. In alternate notation, they are of the form {[s]yyU}\{[s]_y | y \in U\}, as UU ranges over all the open sets containing xx. I think John Baez was indicating that we really do get a (open) neighbourhood basis for [s]xΛ(F)[s]_x \in \Lambda(F) in this way. However, I don't immediately see how to prove this.

In my course notes I defined the topology on Λ(F)\Lambda(F) in essentially this way, by specifying these neighborhood bases, so I see nothing to prove! If you have some other way to define the topology, then you can try to prove this.

view this post on Zulip John Baez (May 23 2024 at 19:59):

In my course notes I said something like this:

We'll give a basis for the topology, by describing a bunch of open neighborhoods of each point in Λ(F).\Lambda(F). Remember, any point in Λ(F)\Lambda(F) is a germ. More specifically any point of pΛ(F)p \in \Lambda(F) is in some stalk Λ(F)x,\Lambda(F)_x, so it's the germ at xx of some sFUs \in FU where UU is an open neighborhood of x.x. But this ss has lots of other germs, too, namely its germs at all points yU.y \in U. We take this collection of all these germs to be an open neighborhood of our point p.p. A general open set in Λ(F)\Lambda(F) will then be an arbitrary union of sets like this.


view this post on Zulip John Baez (May 23 2024 at 20:00):

That's me informally specifying a neighborhood basis about each point.

view this post on Zulip David Egolf (May 24 2024 at 17:18):

Here's my understanding of the situation: the topology on Λ(F)\Lambda(F) was defined by giving a basis of open sets, namely all sets of the form {[t]yyV}\{[t]_y | y \in V\} for VV an open set of XX and tF(V)t \in F(V). The proposed neighbourhood basis for [s]x[s]_x only uses the sets of this form built from our particular ss (restricted to smaller open sets containing xx), which is a smaller collection. So it doesn't seem immediate to me that every open set containing [s]x[s]_x contains one of these sets.

view this post on Zulip David Egolf (May 24 2024 at 17:23):

This may be a situation where the thing to be proved is very fast and simple to prove once you know how to do it! But it seems to me that there is really something to be proved here.

view this post on Zulip David Egolf (May 24 2024 at 17:27):

Let NN be an arbitrary neighbourhood of [s]x[s]_x in Λ(F)\Lambda(F), where sF(U)s \in F(U). It then contains an open set NN' that contains [s]x[s]_x. By definition of the topology of Λ(F)\Lambda(F), we know that N=ibiN' = \cup_i b_i where each biBb_i \in B. Since [s]xN[s]_x \in N', that implies that there is some specific biNb_i \subseteq N' so that [s]xbi[s]_x \in b_i.

view this post on Zulip David Egolf (May 24 2024 at 17:29):

Now, each basis element is the set of germs of some sheaf element over some open set of XX. Hence bi={[t]yyV}b_i = \{[t]_y |y \in V\} where VV is an open set of XX and tF(V)t \in F(V). Since [s]xbi[s]_x \in b_i, we have that xVx \in V. Hence tF(V)t \in F(V) has the same germ at xx as our sF(U)s \in F(U) does.

We know that two sheaf elements tt and ss have the same germ at a point xx exactly if they restrict to the same sheaf element on some open subset of XX containing xx. Thus, we have tW=sWt|_W = s|_W for some open set WW containing xx. Note that WUW \subseteq U and WVW \subseteq V.

view this post on Zulip David Egolf (May 24 2024 at 17:37):

Now, {[tW]yyW}={[sW]yyW}\{[t|_W]_y | y \in W\}=\{[s|_W]_y | y \in W\} is an open set containing [s]x[s]_x, and further it belongs to our proposed neighbourhood basis BpB_p. Since WVW \subseteq V, we have that {[sW]yyW}biNN\{[s|_W]_y | y \in W\} \subseteq b_i \subseteq N' \subseteq N. Hence, we have found an element of BpB_p that is a subset of an arbitrary neighbourhood of [s]x[s]_x.

We conclude that BpB_p really does form a neighbourhood basis of open sets for [s]x[s]_x!

view this post on Zulip David Egolf (May 24 2024 at 17:44):

This is interesting to me, as it intuitively says that we can "approach arbitrarily close" to a point [s]xΛ(F)[s]_x \in \Lambda(F) just by looking at the germs of various restrictions of ss. This simplifies things: in this context we only have to think about the germs of the restrictions of a single sheaf element ss, instead of all sheaf elements that happen to have the same germ as ss at xx.

I'll stop here for today. Next time, I'm hoping to use this neighbourhood basis to try and show the continuity of G(α):Λ(F)Λ(F)G(\alpha): \Lambda(F) \to \Lambda(F').

view this post on Zulip John Baez (May 24 2024 at 20:43):

David Egolf said:

This may be a situation where the thing to be proved is very fast and simple to prove once you know how to do it! But it seems to me that there is really something to be proved here.

You're right. But you did it!

view this post on Zulip John Baez (May 24 2024 at 20:52):

David Egolf said:

This is interesting to me, as it intuitively says that we can "approach arbitrarily close" to a point [s]xΛ(F)[s]_x \in \Lambda(F) just by looking at the germs of various restrictions of ss. This simplifies things: in this context we only have to think about the germs of the restrictions of a single sheaf element ss, instead of all sheaf elements that happen to have the same germ as ss at xx.

Yes, that's a good thing to keep in mind with these etale spaces. So it's good you did that proof just now.

I just assumed it was obvious that if we have a bunch of open sets {Oα}\{O_\alpha\} forming a basis for a topology, the sets in that basis containing a particular point pp form an open neighborhood basis for pp.

Let's see if I was fooling myself. It suffices to show that if VV is any open set containing pp, there exists α\alpha such that

pOαV p \in O_\alpha \subseteq V

Since {Oα}\{O_\alpha\} is a basis for the topology, VV is a union of some collection of these sets:

V=αSOα V = \bigcup_{\alpha \in S} O_\alpha

so at least one of the OαO_\alpha for αS\alpha \in S contains pp, and for this one we have

pOαV p \in O_\alpha \subseteq V

view this post on Zulip David Egolf (May 24 2024 at 21:01):

John Baez said:

I just assumed it was obvious that if we have a bunch of open sets {Oα}\{O_\alpha\} forming a basis for a topology, the sets in that basis containing a particular point pp form an open neighborhood basis for pp.

I had been thinking along these lines, but then I realized that there can be a lot more open sets in our basis for the topology on Λ(F)\Lambda(F) that include [s]x[s]_x, besides those of the form {[s]yyV}\{[s]_y | y \in V\} for VUV \subseteq U some open set containing xx (where sF(U)s \in F(U) is not allowed to vary). For example, for some tF(V)t \in F(V') with xVx \in V' satisfying [t]x=[s]x[t]_x = [s]_x, we have that {[t]yyV}\{[t]_y | y \in V'\} is an open set in our basis that contains [s]x[s]_x. And this open set is potentially different than the ones we can get just using ss, I think.

So, the collection of sets BpB_p discussed above (which has sets given by the germs of sF(U)s\in F(U) when restricted to various open subsets of UU containing xx) I think is smaller than the collection of sets from our basis that contain [s]x[s]_x.

view this post on Zulip John Baez (May 25 2024 at 05:21):

You're right, so I was being sloppy! I'm glad you caught that. I had to run through an example in my mind to see that this collection BpB_p is really smaller. Luckily it's still a neighborhood basis, and it's a much more convenient neighborhood basis.

(My downfall was wanting to be extremely quick and informal in my course notes, and not use terms like "basis" or "neighborhood basis". I think it may save everyone work if I come out and clearly specify a neighborhood basis for each point. But let's see how things go.)

view this post on Zulip David Egolf (May 27 2024 at 16:32):

Ok, now that I understand this neighbourhood basis, let me see about trying to use it to prove that G(α):Λ(F)Λ(F)G(\alpha):\Lambda(F) \to \Lambda(F') is continuous. Recall that G(α)G(\alpha) is going to be a morphism of bundles induced by a morphism α:FF\alpha:F \to F' of presheaves on XX. And G(α)G(\alpha) acts by [s]x[α(s)]x[s]_x \mapsto [\alpha(s)]_x.

view this post on Zulip David Egolf (May 27 2024 at 16:36):

To show that G(α)G(\alpha) is continuous, we will aim to show it is continuous at an arbitrary point [s]x[s]_x of Λ(F)\Lambda(F). To show this, we need to show that the inverse image of any neighbourhood of G(α)([s]x)=[α(s)]xG(\alpha)([s]_x) = [\alpha(s)]_x is a neighbourhood of [s]x[s]_x. But we recently saw that it suffices to show that the inverse image of any set in a neighbourhood basis of open sets for [α(s)]x[\alpha(s)]_x contains a set in a neighbourhood basis of open sets for [s]x[s]_x.

view this post on Zulip David Egolf (May 27 2024 at 16:45):

We recently saw that for a point [s]x[s]_x (with sF(U)s \in F(U) and xUx \in U) we have a neighbourhood basis of open sets B[s]xB_{[s]_x} having elements of the form {[s]yyU}\{[s]_y | y \in U'\} where UU' is some open set containing xx and contained in UU.

Similarly, we have a neighbourhood basis of open sets B[α(s)]xB_{[\alpha(s)]_x}. An element of this neighbourhood basis is of the form {[α(s)]yyU}\{[\alpha(s)]_y |y \in U'\} for UU' some open set containing xx and contained in UU. Here, α(s)\alpha(s) is shorthand for αU(s)F(U)\alpha_U(s) \in F'(U).

view this post on Zulip David Egolf (May 27 2024 at 16:48):

So, let us consider the inverse image under G(α)G(\alpha) of some arbitrary set in our neighbourhood basis of open sets for [α(s)]x[\alpha(s)]_x. Let's say we pick the neighbourhood basis element {[α(s)]yyU}\{[\alpha(s)]_y | y \in U'\}, where UU' is an open set of XX containing xx and contained in UU. Given that G(α)([s]y)=[α(s)]yG(\alpha)([s]_y) = [\alpha(s)]_y, what can we say about the inverse image of this neighbourhood basis element?

view this post on Zulip David Egolf (May 27 2024 at 16:51):

Well, we see that any [s]y[s]_y with yUy \in U' is in the inverse image. So, the set {[s]yyU}\{[s]_y | y \in U'\} is contained in the inverse image. But, since UU' is an open set containing xx and contained in UU, this set is an element of our neighbourhood basis of open sets for [s]x[s]_x!

We conclude that G(α)G(\alpha) is continuous at an arbitrary point [s]xΛ(F)[s]_x \in \Lambda(F), and hence it is continuous!
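Summarizing the key step as a single containment (this is just the computation above, written compactly):

G(\alpha)^{-1}\big( \{ [\alpha(s)]_y \; \vert \; y \in U' \} \big) \supseteq \{ [s]_y \; \vert \; y \in U' \}

and the right-hand side is an element of our neighbourhood basis of open sets for [s]_x.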

view this post on Zulip John Baez (May 27 2024 at 17:59):

Great! I guess it's clear now why I pushed you into this neighborhood basis idea. It makes this proof into a delicious downhill slide.

view this post on Zulip John Baez (May 27 2024 at 18:00):

There must be other ways to proceed, but this feels to me like the way.

view this post on Zulip John Baez (May 27 2024 at 18:33):

If we'd defined the topology in some other equivalent way from the start, some other proof might be good - but I haven't actually thought about other ways to define the topology on an etale space. You mentioned defining it using some universal property. One way might be to say: we give Λ(F)\Lambda(F) the weakest topology (fewest open sets) such that some class of maps out of it is continuous, or the strongest topology (most open sets) such that some class of maps into it is continuous. Maybe one of these works. But in this theorem we need to show continuity of maps Λ(F)Λ(F)\Lambda(F) \to \Lambda(F') - out of one etale space and into another.

We also want the projection pp from Λ(F)\Lambda(F) to XX to be continuous. I seem to recall you've already shown that? I think that's also easy with this neighborhood basis approach. If we give Λ(F)\Lambda(F) the weakest topology such that p:Λ(F)Xp: \Lambda(F) \to X is continuous, do we get the same topology we're using now? I don't know; it should be easy to figure out but not today.

view this post on Zulip Peva Blanchard (May 27 2024 at 20:44):

If we give Λ(F)\Lambda(F) the weakest topology such that p:Λ(F)Xp: \Lambda(F) \to X is continuous, do we get the same topology we're using now? I don't know; it should be easy to figure out but not today.

Actually, I made the mistake of taking this weakest topology WW as the topology TT on the bundle Λ(F)\Lambda(F). A priori, definition-wise, WW is coarser than TT since TT adds enough open sets to interpret sections of FF as continuous functions. But, I haven't tried to exhibit an actual example where WW would be strictly coarser than TT.

view this post on Zulip John Baez (May 27 2024 at 21:14):

Okay - now I remember those comments of yours. I don't alas know such a counterexample, and when I try to visualize one I instantly think of this: one thing about etale spaces is that p:Λ(F)Xp: \Lambda(F) \to X is not only continuous, it's a [[local homeomorphism]], where a section s:UΛ(F)s: U \to \Lambda(F) provides a continuous inverse to the projection restricted to the open neighborhood {[s]y    yU}\{[s]_y \; \vert \; y \in U\} of the germ [s]x[s]_x. This seems relevant somehow. Maybe it prevents the existence of a counterexample? Or maybe we can use this condition as a kind of requirement that helps specify the topology of the etale space?

view this post on Zulip Peva Blanchard (May 27 2024 at 22:43):

I think I found an example showing that WW (the coarsest topology on Λ(F)\Lambda(F) making the projection Λ(F)X\Lambda(F) \rightarrow X continuous) is strictly coarser than TT (the actual topology defined in John's blog post).

Take X=[0,1]X = [0,1] and FF the presheaf that maps any open subset UXU \subseteq X to the set 2={0,1}2 = \{0,1\}. Then Λ(F)=X×2\Lambda(F) = X \times 2.

An open set in X×2X \times 2, w.r.t. WW, is exactly a set of the form U×2U \times 2, for some open subset UU in XX. This prevents pp from being a local homeomorphism w.r.t. WW.

But we know that pp is a local homeomorphism w.r.t. TT. So WW is strictly coarser than TT.
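To make the comparison concrete (if I've computed this example correctly): with this constant presheaf, two sections over U have the same germ at a point only if they are equal, so the basic open sets of T are the sets

U \times \{0\} \quad \text{and} \quad U \times \{1\}, \qquad U \subseteq X \text{ open},

which make X \times 2 into two disjoint copies of [0,1], whereas W only contains the sets U \times 2.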

view this post on Zulip Peva Blanchard (May 27 2024 at 22:47):

image.png

view this post on Zulip Peva Blanchard (May 27 2024 at 22:52):

This is interesting. The requirement of being a local homeomorphism adds open subsets to the initial topology WW.

But it looks like we cannot express the topology TT as the initial or final topology of some collection of functions.

view this post on Zulip Peva Blanchard (May 27 2024 at 23:11):

Oh, actually, it's quite possible that TT is the final topology on Λ(F)\Lambda(F) making all the functions x[s]xx \mapsto [s]_x continuous. I'll think about that.

view this post on Zulip Peva Blanchard (May 27 2024 at 23:11):

(sorry David, I should have started this digression in another topic)

view this post on Zulip David Egolf (May 28 2024 at 17:00):

John Baez said:

There must be other ways to proceed, but this feels to me like the way.

This approach seemed quite nice! A "delicious downhill slide" indeed.

view this post on Zulip David Egolf (May 28 2024 at 17:02):

John Baez said:

We also want the projection pp from Λ(F)\Lambda(F) to XX to be continuous. I seem to recall you've already shown that?

Yes, I proved that earlier in this thread. (I had to scroll a long ways back to check though!)

view this post on Zulip David Egolf (May 28 2024 at 17:03):

Peva Blanchard said:

(sorry David, I should have started this digression in another topic)

I think reflecting on the topology we put on Λ(F)\Lambda(F) is quite interesting, and that discussion on that topic is a good fit for this thread.

I'm hoping that we can imagine a strategy or goal that would have led us to put the topology on Λ(F)\Lambda(F) that we did. One of my goals in learning math is to better understand how to create nice mathematical situations/structures. (I think this is a bit different than learning how to prove that structures that other people have come up with are quite nice.)

view this post on Zulip David Egolf (May 28 2024 at 17:12):

With the topology we chose to put on Λ(F)\Lambda(F), it will turn out that Λ:O(X)^Top/X\Lambda:\widehat{\mathcal{O}(X)} \to \mathsf{Top}/X is left adjoint to the functor Γ:Top/XO(X)^\Gamma:\mathsf{Top}/X \to \widehat{\mathcal{O}(X)}. In particular, that implies we have a natural isomorphism Top/X(Λ(F),)O(X)^(F,Γ())\mathsf{Top}/X(\Lambda(F),-) \cong \widehat{\mathcal{O}(X)}(F,\Gamma(-)) for any presheaf FF. That implies that for any bundle p:YXp:Y \to X we have a bijection: Top/X(Λ(F),p)O(X)^(F,Γ(p))\mathsf{Top}/X(\Lambda(F),p) \cong \widehat{\mathcal{O}(X)}(F,\Gamma(p)).

view this post on Zulip David Egolf (May 28 2024 at 17:15):

That means if I pick some particular natural transformation α\alpha from FF to Γ(p)\Gamma(p) (which is the sheaf of sections of p:YXp:Y \to X), then there is some unique corresponding bundle morphism α:Λ(F)p\alpha':\Lambda(F) \to p. If we let Λ(F)\Lambda(F) also denote the total space of the bundle Λ(F)X\Lambda(F) \to X, then α\alpha' corresponds to a continuous map from Λ(F)\Lambda(F) to YY.

view this post on Zulip David Egolf (May 28 2024 at 17:17):

Now, imagine that we we hadn't yet set the topology on Λ(F)\Lambda(F), but we want to set things up so that Λ\Lambda is left adjoint to Γ\Gamma. Then we are motivated in our choice of topology of Λ(F)\Lambda(F): we need to choose our topology so that all of the induced α:Λ(F)Y\alpha':\Lambda(F) \to Y are continuous.

view this post on Zulip David Egolf (May 28 2024 at 17:21):

I am not confident in working with adjunctions yet, so actually using this idea to figure out what topology we'd need on Λ(F)\Lambda(F) sounds tricky to me currently. But my rough hope is this: I hope to take inverse images of open sets under the induced α\alpha' to get a basis for the topology on Λ(F)\Lambda(F).

view this post on Zulip David Egolf (May 28 2024 at 17:23):

The ideas @Peva Blanchard and @John Baez sketched above relating to the topology on Λ(F)\Lambda(F) are also interesting. And one of those may be the way to go instead, I'm not sure!

But I like that this approach lets us imagine a goal that could have led us to our choice of topology on Λ(F)\Lambda(F). Namely, this goal: try to define the topology on Λ(F)\Lambda(F) so that Λ\Lambda is left adjoint to Γ\Gamma.

view this post on Zulip Peva Blanchard (May 28 2024 at 21:41):

Oh that's nice.

Indeed, given a natural transformation α:FΓ(p)\alpha : F \rightarrow \Gamma(p), we can define the set-function α:Λ(F)p\alpha' : \Lambda(F) \rightarrow p

α([s]x)=αU(s)(x)\alpha'([s]_x) = \alpha_U(s)(x)

with sFUs \in FU and UU an open neighborhood of xx. Of course, this requires proving that it is well-defined, i.e., that it is independent of the chosen representative.
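For what it's worth, here is a sketch of why it should be independent of the representative (assuming, as earlier in this thread, that [s]_x = [t]_x means s|_W = t|_W for some open W containing x, with t \in FV): naturality of \alpha with respect to the restrictions down to W gives

\alpha_U(s)|_W = \alpha_W(s|_W) = \alpha_W(t|_W) = \alpha_V(t)|_W

and two sections of p that agree on W agree in particular at the point x \in W, so \alpha_U(s)(x) = \alpha_V(t)(x).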

And your perspective leads to another candidate for the topology on Λ(F)\Lambda(F). Namely, the coarsest topology on Λ(F)\Lambda(F) making all those α\alpha' continuous.

view this post on Zulip John Baez (May 28 2024 at 21:58):

One of my goals in learning math is to better understand how to create nice mathematical situations/structures.

That's a good goal, @David Egolf! Not enough people explicitly make this a goal, but I think it's something that can be learned, and category theory can be seen as a huge toolbox of methods for doing exactly this, though every other branch of math is important too.

I like your new project:

I hope to take inverse images of open sets under the induced α\alpha' to get a basis for the topology on Λ(F)\Lambda(F).

I'd try to use this method to get those open sets I love, the sets I call {[s]y    yU}\{ [s]_y \;\vert \; y \in U \}, as inverse images of the sort you're talking about. Then you'd know that your topology has to at least contain those, which would be a big step forward.

view this post on Zulip David Egolf (May 29 2024 at 18:16):

Thanks to both of you for your encouraging words and interesting suggestions!

I am realizing that it will be helpful to learn more about this adjunction before figuring out what topology it requires on Λ(F)\Lambda(F). (For example, I don't yet know enough about the adjunction to check that an α\alpha' ends up matching @Peva Blanchard's description). For that reason, I think I want to progress a bit further on the puzzles of the current blog post (and the next one) before thinking about this in more detail. But I have made a note to return to this question later, once we've worked through the discussion of the adjunction in the blog posts!

view this post on Zulip David Egolf (May 29 2024 at 18:24):

The current puzzle I'm working on is this one:

Puzzle. Describe how a morphism of presheaves on XX gives a morphism of bundles over XX and show that your construction defines a functor Λ:O(X)^Top/X\Lambda: \widehat{\mathcal{O}(X)} \to \mathsf{Top}/X.

Starting with a natural transformation α:FF\alpha: F \to F' between presheaves on XX, we saw above how to form a morphism of bundles from Λ(F)\Lambda(F) to Λ(F)\Lambda(F'). It remains to show that this process actually defines a functor. (However, I need to rest up today, so I will return to this hopefully tomorrow!)

view this post on Zulip David Egolf (May 30 2024 at 16:29):

Given α:FF\alpha:F \to F', the induced morphism of bundles Λ(α):Λ(F)Λ(F)\Lambda(\alpha):\Lambda(F) \to \Lambda(F') corresponds to the continuous function [s]x[αU(s)]x[s]_x \mapsto [\alpha_U(s)]_x, for sF(U)s \in F(U) with UU some open subset of XX. This function maps from the topological space Λ(F)\Lambda(F) to the topological space Λ(F)\Lambda(F'). (Above, we called Λ(α)\Lambda(\alpha) by the name G(α)G(\alpha)).

Here I am using Λ(F)\Lambda(F) to denote both the topological space of germs of FF and the projection from that space to XX, which sends each germ to the point it belongs to. Hopefully context will make it clear which usage I intend.

So, we notice that Λ\Lambda is preserving the source and target of morphisms.

view this post on Zulip David Egolf (May 30 2024 at 16:31):

Next, let 1F:FF1_F: F \to F be the identity natural transformation from FF to FF. Then Λ(1F):Λ(F)Λ(F)\Lambda(1_F):\Lambda(F) \to \Lambda(F) corresponds to the continuous function [s]x[(1F)U(s)]x[s]_x \mapsto [(1_F)_U(s)]_x. But since each component of 1F1_F is an identity function, (1F)U(s)=s(1_F)_U(s)=s. Thus, the induced map is [s]x[s]x[s]_x \mapsto [s]_x, which is the identity map from Λ(F)\Lambda(F) to Λ(F)\Lambda(F). We conclude that Λ\Lambda is preserving identity morphisms.

view this post on Zulip David Egolf (May 30 2024 at 16:41):

It remains to show that Λ\Lambda preserves composition. Let us assume we have α:FF\alpha:F \to F' and β:FF\beta:F' \to F''. We wish to show that Λ(βα)=Λ(β)Λ(α)\Lambda(\beta \circ \alpha) = \Lambda(\beta) \circ \Lambda(\alpha).

Λ(βα):Λ(F)Λ(F)\Lambda(\beta \circ \alpha): \Lambda(F) \to \Lambda(F'') corresponds to the continuous function [s]x[(βα)U(s)]x[s]_x \mapsto [(\beta \circ \alpha)_U(s)]_x. Since (βα)U=βUαU( \beta \circ \alpha)_U = \beta_U \circ \alpha_U, we have that [(βα)U(s)]x=[βU(αU(s))]x[(\beta \circ \alpha)_U(s)]_x=[\beta_U (\alpha_U(s))]_x.

But we notice that Λ(β)Λ(α):Λ(F)Λ(F)\Lambda(\beta) \circ \Lambda(\alpha): \Lambda(F) \to \Lambda(F'') corresponds to the continuous function [s]x[αU(s)]x[βU(αU(s))]x[s]_x \mapsto [\alpha_U(s)]_x \mapsto [\beta_U(\alpha_U(s))]_x. So, we conclude that Λ(βα)=Λ(β)Λ(α)\Lambda(\beta \circ \alpha) = \Lambda(\beta) \circ \Lambda(\alpha) and so Λ\Lambda preserves composition.

Thus, Λ\Lambda is a functor.

view this post on Zulip David Egolf (May 30 2024 at 16:48):

The blog post next begins to discuss the fact that not only can we get a bundle p:Λ(F)Xp:\Lambda(F) \to X from a presheaf FF on XX, but that this bundle has a nice property. Namely, each point [s]x[s]_x in Λ(F)\Lambda(F) has a neighbourhood VV such that pV:Vp(V)Xp|_V:V \to p(V) \subseteq X is a homeomorphism. Intuitively, each point in Λ(F)\Lambda(F) has some little region "near it" that looks (topologically) just like some neighbourhood of its image under pp. This seems like it might be helpful for understanding Λ(F)\Lambda(F) intuitively "around some germ", provided that we know what XX looks like.

Wikipedia gives a nice picture illustrating this kind of situation:
covering space

view this post on Zulip David Egolf (May 30 2024 at 16:51):

To set up the next puzzle, we need a few definitions: let UU be an open set of XX, let sF(U)s \in F(U), and let V={[s]yyU}V = \{[s]_y | y \in U\} be the set of germs of ss at the points of UU, viewed as a subset of Λ(F)\Lambda(F). As before, p:Λ(F)Xp:\Lambda(F) \to X denotes the projection sending each germ to the point it belongs to.

view this post on Zulip David Egolf (May 30 2024 at 16:56):

We can now state the next puzzle:

Puzzle. Show that pp is a homeomorphism from VV to UU.

Technically, we wish to show that the restriction of pp to VV provides a homeomorphism from VV to UU. We define pV:VUp|_V: V \to U by pV([s]y)=yp|_V([s]_y) = y, and aim to show this is a homeomorphism. (Since each germ in VV belongs to a point in UU, this function really does map to the set UU.)

view this post on Zulip David Egolf (May 30 2024 at 16:57):

First, we note that pVp|_V is a bijection. That's because it has an inverse as a function, which I will call (pV)1(p|_V)^{-1}. This inverse function acts by y[s]yy \mapsto [s]_y.

view this post on Zulip David Egolf (May 30 2024 at 17:05):

It remains to show that pVp|_V and (pV)1(p|_V)^{-1} are both continuous. We know that p:Λ(F)Xp:\Lambda(F) \to X is continuous, and that the inclusion function i:VΛ(F)i:V \to \Lambda(F) is continuous. Hence pi:VXp \circ i:V \to X is continuous. I seem to recall that pi:VUp \circ i: V \to U is then also continuous, provided that pip \circ i only takes values in UU. If that's true, then pVp|_V is continuous.

view this post on Zulip David Egolf (May 30 2024 at 17:07):

I'll stop here for today. Next time, I'm hoping to finish showing that pVp|_V is a homeomorphism!

view this post on Zulip John Baez (May 30 2024 at 17:41):

Great, you're moving along nicely here. And your memory is right. The subspace topology on a subset SS of a topological space YY is the one where the open sets of SS are defined to be the sets USU \cap S where UU is open in YY. It then instantly follows that if f:AYf: A \to Y is any continuous function taking values in the subset SS, it gives a continuous function f:ASf: A \to S where SS is given the subspace topology.
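(The one-line reason, in case it helps: if f takes values in S, then for any open U \subseteq Y we have

f^{-1}(U \cap S) = f^{-1}(U) \cap f^{-1}(S) = f^{-1}(U),

which is open in A since f: A \to Y is continuous.)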

view this post on Zulip John Baez (May 30 2024 at 17:43):

By the way, some very careful people distinguish notationally between the function f:AYf: A \to Y in this situation and the function f:ASf: A \to S, and call the latter a corestriction of the former, by analogy with restriction - since we're shrinking the codomain of ff instead of its domain.

view this post on Zulip John Baez (May 30 2024 at 17:44):

So if I were one of those people I'd have said

It then instantly follows that if f:AYf: A \to Y is any continuous function taking values in the subset SS, its corestriction to SS is continuous when SS is given the subspace topology.

view this post on Zulip Peva Blanchard (May 30 2024 at 21:39):

I wanted to visualize a bit more what the étale space looks like in strange cases, and I thought it could be interesting to share.

Let XX be the unit interval, and xXx \in X a point.
Let FF be the presheaf such that, for every open subset UXU\subseteq X

FU={2if x∉U1otherwise FU = \begin{cases} 2 & \text{if } x \not\in U \\ 1 & \text{otherwise} \end{cases}

Then E=Λ(F)E = \Lambda(F) should look like this
image.png

Here EE seems to just be the union of two line segments that cross at a point pp over xx. It seems to me that pp has "essentially" two neighborhoods that are homeomorphic to UU, depending on which line segment you choose.

view this post on Zulip John Baez (May 31 2024 at 06:04):

Nice! And I guess if you change your mind about which line you choose, and get a 'broken line segment', that's not an open neighborhood. This clarifies the original French meaning of the term espace étalé.

view this post on Zulip Peva Blanchard (May 31 2024 at 09:22):

Actually, my conclusion might be wrong. I've been too sketchy when defining the presheaf FF. I need to specify the restriction morphisms.

view this post on Zulip Peva Blanchard (May 31 2024 at 09:59):

Indeed, I was wrong ...

Let's write 2={l,r}2 = \{l, r\} and 1={}1 = \{\bullet\}. Let VUV \subseteq U be two open subsets in XX.

When both U,VU,V contain xx, or when both do not contain xx, we have FU=FVFU = FV and we define the restriction morphism to be the identity.

The interesting case is when xUx \in U and x∉Vx \not\in V. In that case, let's define

FUFV{lif V is on the left of xrif V is on the right of x\begin{align*} FU &\rightarrow FV \\ \bullet &\mapsto \begin{cases} l &\text{if } V \text{ is on the left of } x \\ r &\text{if } V \text{ is on the right of } x \\ \end{cases} \end{align*}

Now the étale space E=Λ(F)E = \Lambda(F) would look more like this
image.png

I.e., we have three line segments: one of them goes through pp, and the other two adhere to pp without touching it.

So we have a unique open neighborhood of pp that maps homeomorphically to UU.

view this post on Zulip David Egolf (May 31 2024 at 14:59):

These are interesting pictures! Even if the first one is not accurate to FF, I like how it illustrates this: it's possible to have multiple regions (in this case, lines) that "look the same about a point pp" while only having the point pp in common.

Regarding the restriction maps, I don't understand how you are defining F(rUV):F(U)F(V)F(r_{U \to V}):F(U) \to F(V). You mention the condition "VV is on the left of xx" and the condition "VV is on the right of xx". But couldn't an open set VV that doesn't contain xx have elements both to the left and right of xx? It wasn't clear to me what the restriction map F(rUV)F(r_{U \to V}) would be in that case.

view this post on Zulip Peva Blanchard (May 31 2024 at 15:24):

Oh yes you're right! I am going too fast with the picture. I tend to draw and then formalize hastily in between two meetings. (I should definitely learn the moral lesson...)

The correct definition for FF, in the case xUx \in U and x∉Vx \not\in V should be (hopefully)

FUFVl\begin{align*} FU &\rightarrow FV \\ \bullet &\mapsto l \end{align*}

And in picture
image.png

view this post on Zulip David Egolf (May 31 2024 at 15:29):

John Baez said:

By the way, some very careful people distinguish notationally between the function f:AYf: A \to Y in this situation and the function f:ASf: A \to S, and call the latter a corestriction of the former, by analogy with restriction - since we're shrinking the codomain of ff instead of its domain.

I like to keep careful track of the source and target of morphisms, so I suppose I aspire to be one of these "careful people". Thanks for reminding me how corestriction interacts with continuity!

view this post on Zulip David Egolf (May 31 2024 at 15:36):

I think we can now finish showing that pV:VUp|_V: V \to U and (pV)1(p|_V)^{-1} are continuous. We saw above that pi:VXp \circ i:V \to X is continuous. Then, since pVp|_V is a corestriction of pip \circ i and UU has the subspace topology, pVp|_V is continuous.

It remains to show that (pV)1:UV(p|_V)^{-1}:U \to V is continuous. We recall that it acts by y[s]yy \mapsto [s]_y.

view this post on Zulip David Egolf (May 31 2024 at 15:48):

To make typing this a little bit easier, I'm going to denote (pV)1(p|_V)^{-1} using the symbol gsg_s. So, gs(y)=[s]yg_s(y) = [s]_y. I will aim to show that gsg_s is continuous at any point yUy \in U. We saw above that the following collection of sets is a neighbourhood basis of open sets for [s]y[s]_y: namely {[s]wwW}\{[s]_w | w \in W\} as WW ranges over the open sets of XX contained in UU and also containing yy. To show that gsg_s is continuous at yy, it suffices to show that the inverse image under gsg_s of any set in our neighbourhood basis for gs(y)=[s]yg_s(y)=[s]_y contains an open set containing yy.

view this post on Zulip David Egolf (May 31 2024 at 15:56):

So, let us consider some arbitrary set b={[s]wwW}b=\{[s]_w|w \in W\} in our neighbourhood basis for [s]y[s]_y. Here WW is an open set containing yy and contained in UU. The inverse image of bb under gsg_s certainly contains WW, and hence contains an open set containing yy.

We conclude that (pV)1=gs:UV(p|_V)^{-1}=g_s:U \to V is continuous at any point in UU, and so (pV)1(p|_V)^{-1} is a continuous function.

Thus, pV:VUp|_V:V \to U is a homeomorphism.

view this post on Zulip David Egolf (May 31 2024 at 16:22):

To remember what VV is, I might instead denote VV by gs(U)g_s(U) - the set of germs of ss associated to the points of UU. Intuitively, "taking germs over UU of a fixed sF(U)s \in F(U)" produces a topological space just like UU. Since there are potentially many different elements in F(U)F(U), there are potentially many "copies" of UU in Λ(F)\Lambda(F), given by gt(U)g_t(U) as tt varies over the elements of F(U)F(U).

(I'll stop here for today!)

view this post on Zulip Peva Blanchard (May 31 2024 at 16:39):

David Egolf said:

Since there are potentially many different elements in F(U)F(U), there are potentially many "copies" of UU in Λ(F)\Lambda(F), given by gt(U)g_t(U) as tt varies over the elements of F(U)F(U).

Yes it is exactly this observation that led me playing with strange cases. The trivial case is when all those copies are disjoint (e.g., like the parallel copies in the helix-like picture you posted before). Then I wondered what happens if we glue some of them at some specific point.

view this post on Zulip David Egolf (Jun 03 2024 at 16:04):

I believe we have now worked through the second blog post in the series! On to Part 3!

view this post on Zulip David Egolf (Jun 03 2024 at 16:19):

Part 3 begins by presenting a different way to specify the "sheaf condition" for a presheaf. Although this is not listed as an official puzzle, I would like to understand why this new formulation of the sheaf condition is equivalent to the formulation we've been using so far.

I was going to start by stating the new formulation of the sheaf condition. However, I don't understand it well enough to do so!

view this post on Zulip David Egolf (Jun 03 2024 at 16:21):

The new formulation involves a diagram that looks like this, where we have a collection of open sets UiUU_i \subseteq U that cover the open set UU in XX, and FF is a presheaf on XX:
diagram

I believe I understand how ff is defined. It is the function induced using the universal property of products via the collection of restriction functions fi:F(U)F(Ui)f_i:F(U) \to F(U_i). So, ff sends each sheaf element on UU to the tuple of its restrictions over the UiU_i that cover UU. Intuitively, we are "decomposing" a sheaf element into smaller pieces.
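Concretely, in terms of elements (just unwinding the universal property with the restriction notation from earlier):

f(s) = \big( s|_{U_i} \big)_{i} = \big( F(r_{U \to U_i})(s) \big)_{i} \in \prod_i F(U_i)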

view this post on Zulip David Egolf (Jun 03 2024 at 16:23):

I don't yet understand how gg and hh are defined. I think we will again be using the universal property of products, but it seems confusing at the moment... I will stop here for today!

view this post on Zulip John Baez (Jun 03 2024 at 16:54):

It may help to think in a low brow way. Think of an element of iF(Ui)\prod_i F(U_i) as a list of elements siF(Ui)s_i \in F(U_i), one for each ii. Think of an element of i,jF(UiUj) \prod_{i,j} F(U_i \cap U_j) as a list of elements sijF(UiUj)s_{ij} \in F(U_i \cap U_j), one for each pair i,ji,j. (Don't take the word 'list' too seriously here: the order doesn't matter, etc.) To get a map

g:iF(Ui)i,jF(UiUj)g : \prod_i F(U_i) \to \prod_{i,j} F(U_i \cap U_j)

we thus need a recipe to take

a list of elements siF(Ui)s_i \in F(U_i), one for each ii,

and extract from it

a list of elements sijF(UiUj)s_{ij} \in F(U_i \cap U_j), one for each pair i,ji,j.

view this post on Zulip John Baez (Jun 03 2024 at 16:55):

There are two 'obvious' recipes, and these are gg and hh.

view this post on Zulip John Baez (Jun 03 2024 at 17:01):

To see why there are two, it may help to notice that

is exactly the same thing as

view this post on Zulip Peva Blanchard (Jun 03 2024 at 17:03):

Here is another hint when we have three open sets U1,U2,U3U_1, U_2, U_3.

When we fix i{1,2,3}i \in \{1, 2, 3\}, we get 3 restriction morphisms

Uiri1UiU1Uiri2UiU2Uiri3UiU3\begin{align*} U_i &\xrightarrow{r_{i1}} U_i \cap U_1 \\ U_i &\xrightarrow{r_{i2}} U_i \cap U_2 \\ U_i &\xrightarrow{r_{i3}} U_i \cap U_3 \\ \end{align*}

On the other hand, if we fix j{1,2,3}j \in \{1, 2, 3\}, we also get 3 restriction morphisms

U1r1jU1UjU2r2jU2UjU3r3jU3Uj\begin{align*} U_1 &\xrightarrow{r_{1j}} U_1 \cap U_j \\ U_2 &\xrightarrow{r_{2j}} U_2 \cap U_j \\ U_3 &\xrightarrow{r_{3j}} U_3 \cap U_j \\ \end{align*}

By taking the relevant products, and using the associativity, you get the two ways.

view this post on Zulip John Baez (Jun 04 2024 at 08:28):

That's a clearer hint than mine. By the way, there's a highbrow way of thinking about this stuff in terms of the 'Cech nerve', 'descent' and the 'bar construction', which we discussed here, but I feel the lowbrow approach we're taking here is a good warmup for that highbrow approach.

view this post on Zulip David Egolf (Jun 04 2024 at 16:01):

Thanks to both of you for the hints! I will be referencing them as I try to better understand g and h.

We want to define some function :\prod_i F(U_i) \to \prod_{i,j} F(U_i \cap U_j). We know by the universal property of products that such functions correspond bijectively to cones with tip \prod_i F(U_i) over the discrete diagram having objects of the form F(U_i \cap U_j). So, to find a morphism :\prod_i F(U_i) \to \prod_{i,j} F(U_i \cap U_j), it suffices to find one morphism from \prod_k F(U_k) to F(U_i \cap U_j) for each (i,j).

view this post on Zulip David Egolf (Jun 04 2024 at 16:04):

Now, to describe a function, it suffices to say what the function does to each element. An arbitrary element of kF(Uk)\prod_kF(U_k) is a tuple of presheaf elements, where we have one element associated to each UkU_k. So, given a tuple of presheaf elements associated to the open sets UkU_k in our cover, we want to determine some corresponding element in each F(UiUj)F(U_i \cap U_j).

What is an element of F(UiUj)F(U_i \cap U_j)? It is a presheaf element associated to UiUjU_i \cap U_j. So, from a tuple of presheaf elements having one element for each open set in our open cover, we want to determine some presheaf element on UiUjU_i \cap U_j.

view this post on Zulip David Egolf (Jun 04 2024 at 16:15):

Let tkF(Uk)t \in \prod_k F(U_k) be such a tuple of presheaf elements. tt has a presheaf element in it associated in particular to UiU_i, and it also has one associated to UjU_j. I will denote the presheaf element of tt associated to UiU_i by tit_i, and the one associated to UjU_j by tjt_j.

If we restrict t_i \in F(U_i) to U_i \cap U_j, we get an element of F(U_i \cap U_j). We can build up a function :\prod_k F(U_k) \to F(U_i \cap U_j) using this idea. Namely, let the function act by t \mapsto F(r_{U_i \to U_i \cap U_j})(t_i), where F(r_{U_i \to U_i \cap U_j}): F(U_i) \to F(U_i \cap U_j) is a restriction function provided by our presheaf.

view this post on Zulip David Egolf (Jun 04 2024 at 16:17):

We can use this approach to build a function :kF(Uk)F(UiUj):\prod_kF(U_k) \to F(U_i \cap U_j) for each (i,j)(i,j). These functions all together then induce a unique function :kF(Uk)i,jF(UiUj):\prod_k F(U_k) \to \prod_{i,j} F(U_i \cap U_j) by the universal property of products.

view this post on Zulip David Egolf (Jun 04 2024 at 16:21):

Intuitively, this function takes a tuple tt of presheaf elements over our open cover, and sends this tuple to a tuple of presheaf elements. The output tuple has one element associated to each pairwise intersection UiUjU_i \cap U_j of our open cover sets. Namely, it associates the restriction of tiF(Ui)t_i \in F(U_i) to UiUjU_i \cap U_j.

view this post on Zulip David Egolf (Jun 04 2024 at 16:25):

Now, we can also define a second function in a similar way. It will also be built up from functions :kF(Uk)F(UiUj):\prod_k F(U_k) \to F(U_i \cap U_j) as (i,j)(i,j) varies. We define the (i,j)(i,j)-th inducing function as tF(rUjUiUj)(tj)t \mapsto F(r_{U_j \to U_i \cap U_j})(t_j). Then, we can use the universal property of products to induce a unique function :kF(Uk)i,jF(UiUj):\prod_k F(U_k) \to \prod_{i,j}F(U_i \cap U_j).

view this post on Zulip David Egolf (Jun 04 2024 at 16:27):

In brief, this function takes a tuple tt of presheaf elements over the sets of our open cover, and for each pairwise intersection UiUjU_i \cap U_j of those open sets it associates the restriction of tjF(Uj)t_j \in F(U_j) to UiUjU_i \cap U_j.
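So, writing the (i,j) component of the output tuple with a subscript, the two functions can be summarized as

g(t)_{i,j} = t_i|_{U_i \cap U_j}, \qquad h(t)_{i,j} = t_j|_{U_i \cap U_j}

(here I'm calling the first one g and the second one h, matching how I use these names below).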

view this post on Zulip David Egolf (Jun 04 2024 at 16:31):

I think that's right. Assuming so, it appears that the functions I was looking for weren't all that complicated! Next time, building on this, I plan to work on understanding what it means for this to be an equalizer diagram:
diagram

view this post on Zulip David Egolf (Jun 04 2024 at 16:45):

(A side note: I just noticed that the phrase "a function is determined by its values on each of its inputs" can be thought of as an implicit reference to the universal property of coproducts in Set\mathsf{Set}. That's kind of fun!)

view this post on Zulip John Baez (Jun 04 2024 at 16:57):

David Egolf said:

I think that's right. Assuming so, it appears that the functions I was looking for weren't all that complicated!

Right!

view this post on Zulip Jean-Baptiste Vienney (Jun 04 2024 at 17:01):

David Egolf said:

(A side note: I just noticed that the phrase "a function is determined by its values on each of its inputs" can be thought of as an implicit reference to the universal property of coproducts in Set\mathsf{Set}. That's kind of fun!)

Be aware it’s not exactly the same thing. The coproduct in \mathbf{Set} is the disjoint union of sets. Therefore a function f:\underset{i \in I}{\bigsqcup}X_i \rightarrow Y corresponds to a family of functions (f_i:X_i\rightarrow Y)_{i \in I}.

“A function is determined by its values on each of its inputs” corresponds to the bijection Set[X,Y]xXY\mathbf{Set}[X,Y] \cong \underset{x \in X}{\prod}Y. By the universal property of products, it gives you that a function f:XYf:X \rightarrow Y can be thought of as a family (f(x))xX(f(x))_{x \in X} of elements of YY.

view this post on Zulip Jean-Baptiste Vienney (Jun 04 2024 at 17:21):

Oh, now I see it. You can also interpret this sentence using the universal property of coproducts. You can write XxX{}X \cong \underset{x \in X}{\bigsqcup}\{*\} and the universal property of the coproduct gives you that a function f:XYf:X\rightarrow Y corresponds to a family (fx:{}Y)xX(f_x:\{*\} \rightarrow Y)_{x \in X} that is a family (f(x)=fx())xX(f(x)=f_x(*))_{x \in X} of elements of YY.

view this post on Zulip John Baez (Jun 04 2024 at 18:14):

Yes, and we're also using a very special feature of the category of sets here, which is that every object is a coproduct of copies of the terminal object.

view this post on Zulip John Baez (Jun 04 2024 at 18:15):

This makes the category of sets "boringly simple" compared to other categories. But of course that boring simplicity is exactly why we like it!

view this post on Zulip Peva Blanchard (Jun 04 2024 at 18:18):

@John Baez Oh but I remember an article of yours, at the n-category café, which presented some very nice characterization of the category of sets. I'm trying to google it, but I can't remember the exact keywords. I think there was some stuff involving comonoids...

view this post on Zulip John Baez (Jun 04 2024 at 18:32):

Most relevant here is that Set\mathsf{Set} is the free category with coproducts on one object, and it then turns out that this object is the terminal object. But it sounds like you're thinking of something else.

view this post on Zulip John Baez (Jun 04 2024 at 18:36):

I don't know what article you're talking about, but maybe you mean that FinSet\mathsf{FinSet} is the free symmetric monoidal category on a cocommutative monoid object; it then turns out that this object is the terminal object.

view this post on Zulip John Baez (Jun 04 2024 at 18:36):

I probably did blog about this ages ago, as part of my "Coffee for Theorems" series.

view this post on Zulip Peva Blanchard (Jun 04 2024 at 18:37):

Yes, I was mostly reacting to "boringly simple" because I remember my impression that the category Set can really be "non-boringly non-simple".

But you're right it's not really related to the present topic.

view this post on Zulip David Egolf (Jun 06 2024 at 15:43):

I wonder if, in a category with coproducts, it makes sense to think of the "elements" in that category as the objects that can't be written as a "non-boring" coproduct of other objects (that is, excluding decompositions like A \cong A \coprod 0). (These would be the "indecomposable" objects, if that's the right term.) I'm motivated in this intuition by this fact relating to the universal property of coproducts: a morphism from some object A = \coprod_i B_i is determined by corresponding morphisms from the B_i.

view this post on Zulip David Egolf (Jun 06 2024 at 15:44):

I was hoping to work out today what it means for our diagram above to be an equalizer. However, it turns out I need to rest up today. So, I hope to get back to that tomorrow!

view this post on Zulip John Baez (Jun 06 2024 at 15:52):

David Egolf said:

I wonder if, in a category with coproducts, it makes sense to think of the "elements" in that category as the objects that can't be written as a "non-boring" coproduct (that is, excluding things like AA0A \cong A \coprod 0) of other objects. (The "indecomposable" objects, if that's the right term).

I don't know if it makes sense to think of indecomposable objects (yes, that's the right term) as "elements" - that's a sort of squishy question, so let me just say that I've never thought of them as "elements". However, they are important. They're especially important in categories where every object is a coproduct of indecomposables: then they serve as "building blocks" for general objects in a very nice way.

view this post on Zulip John Baez (Jun 06 2024 at 15:55):

For example, a functor from a group GG to Vect\mathsf{Vect} is called a representation of GG, and there's a category Rep(G)=VectG\mathsf{Rep}(G) = \mathsf{Vect}^G with representations of GG as objects and natural transformations as morphisms. There's a huge industry devoted to understanding Rep(G)\mathsf{Rep}(G) for various kinds of groups GG.

view this post on Zulip John Baez (Jun 06 2024 at 15:57):

If GG is a finite group, we have a wonderful theorem: every object in Rep(G)\mathsf{Rep}(G) is a coproduct of indecomposable objects! Moreover it's a coproduct in a unique way (up to isomorphism and permuting the guys you're taking the coproduct of).

view this post on Zulip John Baez (Jun 06 2024 at 15:59):

Even better, if we are using vector spaces over an algebraically closed field kk like C\mathbb{C}, then if we have two indecomposable objects R,RRep(G)R, R' \in \mathsf{Rep}(G) they are either isomorphic, in which case

hom(R,R)k\mathrm{hom}(R,R') \cong k

or they're not, in which case

hom(R,R){0}\mathrm{hom}(R,R') \cong \{0\}

meaning there's only one morphism between them.

view this post on Zulip John Baez (Jun 06 2024 at 16:01):

This should remind you of a "Kronecker delta" δx,x\delta_{x,x'} which is 11 if x=xx = x' and 00 otherwise. It's a categorified Kronecker delta, where hom(R,R)\mathrm{hom}(R,R') is 1-dimensional if RRR \cong R' and 0-dimensional otherwise!
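For the simplest interesting example, take G = \mathbb{Z}/2 and k = \mathbb{C}: up to isomorphism there are exactly two indecomposable representations, the trivial representation and the sign representation, both 1-dimensional, and

\mathrm{hom}(\mathrm{triv},\mathrm{triv}) \cong \mathbb{C}, \qquad \mathrm{hom}(\mathrm{sgn},\mathrm{sgn}) \cong \mathbb{C}, \qquad \mathrm{hom}(\mathrm{triv},\mathrm{sgn}) \cong \{0\}.

Every representation of \mathbb{Z}/2 on a complex vector space decomposes as a direct sum of copies of these two.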

view this post on Zulip John Baez (Jun 06 2024 at 16:04):

So these indecomposables are not only building blocks for all objects, they "don't talk to each other" - there are no interesting morphisms between nonisomorphic indecomposables.

view this post on Zulip John Baez (Jun 06 2024 at 16:05):

I'd call this a very "crunchy" situation - it's hard to put into words, but it's very much the opposite of a floppy, sloppy situation: it's very well-disposed to concrete calculations, and it makes the question of finding the indecomposable representations of a finite group incredibly important, because once you know them, you really know a lot!

view this post on Zulip John Baez (Jun 06 2024 at 16:08):

You said you wanted to learn how to create nice mathematical situations and structures. I guess one thing that helps is learning a bit about some of the classic examples of situations that mathematicians really love, and why they're loved. Rep(G)\mathsf{Rep}(G) for a finite group GG is one of these. Any category where every object is a coproduct of indecomposables has a similar crunchy feel to it.

view this post on Zulip John Baez (Jun 06 2024 at 16:25):

I said I don't think of indecomposables as "elements", at least not in a sense similar to set-theoretic elements. But people often do think of them as similar to "atoms"... or elements in the chemical sense!

The discovery that every group representation is a coproduct of indecomposables in a unique way is very similar to the discovery that all molecules are made of atoms which can't be further decomposed (well, that's what they thought anyway) - and moreover, that this decomposition is unique.

The world would be a much more tricky place if I could decompose water into one oxygen atom and two hydrogen atoms but you, using another technique, could decompose it into a phosphorus atom and a carbon atom.

view this post on Zulip Peva Blanchard (Jun 06 2024 at 16:56):

If I understand correctly, in the case of sets, the atomic objects are the empty set and the singleton sets. So, there are only two iso classes of atomic objects, 0 and 1. And there is a unique arrow from 0 to 1. This is in sharp contrast with Rep(G)!

Also, when we fix a singleton set 11, an element of an object AA is usually presented as an arrow 1A1 \rightarrow A. And the set of elements of AA corresponds to the hom set Set(1,A)Set(1, A).

So, in an arbitrary category with coproducts, it makes sense to discuss, on one hand, atomic objects (indecomposable as a coproduct), and, on the other hand, elements of an object. But it's not obvious to me how to relate the two.

view this post on Zulip John Baez (Jun 06 2024 at 17:08):

Yes, Rep(G)\mathsf{Rep}(G) is like Vect\mathsf{Vect}, and very different from Set\mathsf{Set}, in that its initial object is also terminal.

view this post on Zulip John Baez (Jun 06 2024 at 17:09):

It's an [[abelian category]], and the property I'm talking about, that every object is uniquely a coproduct of indecomposables, is essentially the same as it being [[semisimple]].

view this post on Zulip Oscar Cunningham (Jun 06 2024 at 17:14):

Peva Blanchard said:

If I understand correctly, in the case of sets, the atomic objects are the empty set and the singleton sets. So, there are only two iso classes of atomic objects, 00 and 11. And, there is a unique arrow from 00 from 11. This is in sharp contrast with Rep(G)Rep(G) !

I think it's best not to consider 00 to be indecomposable. For the same reason 11 isn't prime.

view this post on Zulip John Baez (Jun 06 2024 at 17:32):

Yes, that's true.

view this post on Zulip John Baez (Jun 06 2024 at 17:45):

You need the initial object not to count as indecomposable if you want to get the kind of result I'm claiming happens in Rep(G)\mathsf{Rep}(G) for a finite group: that every object is uniquely a coproduct of indecomposables. Just like 1 mustn't be prime if you want every positive integer to be uniquely a product of primes.

view this post on Zulip John Baez (Jun 06 2024 at 17:48):

This is under [[too simple to be simple]].

view this post on Zulip David Egolf (Jun 07 2024 at 16:15):

I look forward to resuming posting in this thread (and some others)! There's a lot of interesting posts various people have made that I look forward to responding to.

Unfortunately, I have been struggling more with my chronic fatigue the last few days. So, I think I need to rest up for several days and then hopefully I can return to posting in these threads. Here's wishing everyone a pleasant weekend!

view this post on Zulip Peva Blanchard (Jun 07 2024 at 16:42):

No worries, as far as I am concerned, no need to give reasons for why you are not posting. Rest well :)

view this post on Zulip David Egolf (Jun 16 2024 at 16:05):

John Baez said:

Even better, if we are using vector spaces over an algebraically closed field kk like C\mathbb{C}, then if we have two indecomposable objects R,RRep(G)R, R' \in \mathsf{Rep}(G) they are either isomorphic, in which case

hom(R,R)k\mathrm{hom}(R,R') \cong k

or they're not, in which case

hom(R,R){0}\mathrm{hom}(R,R') \cong \{0\}

meaning there's only one morphism between them.

It's interesting to me that this is a "nice mathematical situation". At first glance, I might have assumed it was "too boring" or "too simple"! But I suppose the idea roughly is that we have a bunch of "independent building blocks" in this category, which sounds handy.

view this post on Zulip David Egolf (Jun 16 2024 at 16:07):

I next want to continue contemplating this diagram:
diagram

Specifically, I am interested in understanding what it means for this diagram to be an equalizer. It's going to take me a minute to remember what we already said about this diagram!

view this post on Zulip David Egolf (Jun 16 2024 at 16:12):

To review: we have a topological space X, an open set U of X, a collection of open sets U_i \subseteq U that cover U, and a presheaf F on X. From this data we built the diagram F(U) \to \prod_i F(U_i) \rightrightarrows \prod_{i,j} F(U_i \cap U_j) pictured above.

view this post on Zulip David Egolf (Jun 16 2024 at 16:22):

Continuing the review, focusing now on the functions f,g,h in our diagram above: f sends s \in F(U) to the tuple of restrictions (s|_{U_i})_i; g sends a tuple t \in \prod_i F(U_i) to the tuple whose (i,j) entry is t_i|_{U_i \cap U_j}; and h sends t to the tuple whose (i,j) entry is t_j|_{U_i \cap U_j}.

view this post on Zulip John Baez (Jun 16 2024 at 16:25):

David Egolf said:

John Baez said:

Even better, if we are using vector spaces over an algebraically closed field kk like C\mathbb{C}, then if we have two indecomposable objects R,RRep(G)R, R' \in \mathsf{Rep}(G) they are either isomorphic, in which case

hom(R,R)k\mathrm{hom}(R,R') \cong k

or they're not, in which case

hom(R,R){0}\mathrm{hom}(R,R') \cong \{0\}

meaning there's only one morphism between them.

It's interesting to me that this is a "nice mathematical situation". At first glance, I might have assumed it was "too boring" or "too simple"! But I suppose the idea roughly is that we have a bunch of "independent building blocks" in this category, which sounds handy.

Your impression that this is "too simple" shows you have good intuitions. The field of homological algebra is heavily focused on how categories deviate from the simple behavior described here, so from that point of view categories of representations of finite groups are boring. But:

1) When you're building a house, you don't want the bricks to be interesting: you want them to behave in simple nice ways. So, like the category of sets or the category of vector spaces, the category of representations of a finite group is a great thing for building further structures!

view this post on Zulip John Baez (Jun 16 2024 at 16:33):

2) Knowing general facts about categories of representations of finite groups is not the end of learning about these categories - it's just the start. We want to know all the indecomposable representations of all finite groups, and we want to know everything about them. That's an endless task... but luckily, it's fascinating and leads to many mind-blowing discoveries.

So: if something is "too simple", you happily move on to the next step!

view this post on Zulip John Baez (Jun 16 2024 at 16:36):

But I'll restrain myself. Back to why the sheaf condition is an equalizer condition!

view this post on Zulip David Egolf (Jun 17 2024 at 18:09):

Continuing to try and understand why the sheaf condition is an equalizer condition, I next want to think about this: what does it mean for some tiF(Ui)t \in \prod_i F(U_i) to satisfy g(t)=h(t)g(t) = h(t)? For g(t)=h(t)g(t)=h(t), we must have g(t)i,j=h(t)i,jg(t)_{i,j} = h(t)_{i,j} for each (i,j)(i,j). We know that g(t)i,jg(t)_{i,j} is the restriction of tiF(Ui)t_i \in F(U_i) to UiUjU_i \cap U_j. And h(t)i,jh(t)_{i,j} is the restriction of tjF(Uj)t_j \in F(U_j) to UiUjU_i \cap U_j. For these two to be equal, we must have that (ti)UiUj=(tj)UiUj(t_i)|_{U_i \cap U_j} = (t_j)|_{U_i \cap U_j}. In other words, the two parts of tt that "overlap" on UiUjU_i \cap U_j must agree on the overlap.

view this post on Zulip David Egolf (Jun 17 2024 at 18:14):

I know how to find an equalizer of a diagram in \mathsf{Set}. In this case, an equalizer of our diagram will be given by the subset of \prod_i F(U_i) on which g and h agree, together with the inclusion function of that subset into \prod_i F(U_i).

view this post on Zulip David Egolf (Jun 17 2024 at 18:19):

Combining the two paragraphs above, the object part of an equalizer of our diagram is given by the subset AA of iF(Ui)\prod_i F(U_i) consisting of elements tiF(Ui)t \in \prod_i F(U_i) such that (ti)UiUj=(tj)UiUj(t_i)|_{U_i \cap U_j} = (t_j)|_{U_i \cap U_j} for all (i,j)(i,j). Intuitively, each element of this set AA consists of a way to assign a presheaf element (provided by FF) to each open subset UiU_i in our open cover for UU, such that data we pick "agrees on overlaps". Together, the entire equalizer set AA corresponds to all the different ways in which we can do this.
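Writing this out as a single formula, and giving the inclusion function a name since I'll want it shortly:

A = \{\, t \in \prod_i F(U_i) \;:\; t_i|_{U_i \cap U_j} = t_j|_{U_i \cap U_j} \text{ for all } (i,j) \,\}, \qquad m: A \hookrightarrow \prod_i F(U_i).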

view this post on Zulip Todd Trimble (Jun 17 2024 at 18:25):

David Egolf said:

Combining the two paragraphs above, the object part of an equalizer of our diagram is given by the subset AA of iF(Ui)\prod_i F(U_i) consisting of elements tiF(Ui)t \in \prod_i F(U_i) such that (ti)Uj=(tj)Ui(t_i)|_{U_j} = (t_j)|_{U_i} for all (i,j)(i,j). Intuitively, each element of this set AA consists of a way to assign a presheaf element (provided by FF) to each open set of XX, such that data we pick "agrees on overlaps". Together, the entire equalizer set AA corresponds to all the different ways in which we can do this.

The only thing I would change in that is to change "to each open set of XX" to "to each open set of the covering" (i.e., to each UiU_i). Otherwise, looks good.

view this post on Zulip David Egolf (Jun 17 2024 at 18:39):

Yes, thanks for pointing that out @Todd Trimble !

view this post on Zulip David Egolf (Jun 17 2024 at 18:50):

It remains to consider the case in which F(U)F(U) together with ff also provides an equalizer for our diagram. Since limits are unique up to isomorphism, if F(U)F(U) is the object part of an equalizer, that implies we have a bijection between F(U)F(U) and the set of all ways AA to pick data from each open set in our cover such that the selected data agrees on overlaps.

Let α:AF(U)\alpha: A \to F(U) be the canonical isomorphism (induced by the universal property of limits). Picking some data tiF(Ui)t_i \in F(U_i) for each UiU_i in our open cover such that tit_i and tjt_j agree on UiUjU_i \cap U_j for all (i,j)(i,j) corresponds to picking an element tt of AA. Then α(t)\alpha(t) is an element α(t)F(U)\alpha(t) \in F(U) such that f(α(t))=tf(\alpha(t))=t. That is, restricting α(t)\alpha(t) to each UiU_i gives us tit_i.
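Said compactly (this is the fact I'll lean on below): since (A, m) and (F(U), f) are equalizer cones over the same pair of functions, the canonical comparison isomorphism satisfies

f \circ \alpha = m.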

view this post on Zulip David Egolf (Jun 17 2024 at 18:51):

Here's the picture I was referencing when writing the above paragraph:
picture

view this post on Zulip David Egolf (Jun 17 2024 at 18:55):

In this case, α\alpha corresponds to a "stitching together" process that takes in presheaf data on the open sets UiU_i covering UU that agrees on overlaps, and returns a "global" presheaf element in F(U)F(U) that restricts to the data we selected on each UiU_i. The existence of α\alpha together with the fact that AA is (the object part of) an equalizer tells us that we can always perform this stitching process given appropriate input data.

(Note that F(U) together with f isn't always an equalizer for our diagram! There will be lots of presheaves for which this is not the case. But I'm considering here the case in which this data does provide an equalizer.)

view this post on Zulip David Egolf (Jun 17 2024 at 18:58):

There is still a bit more to do here, but I'll stop here for today.

view this post on Zulip John Baez (Jun 17 2024 at 22:24):

This is great, @David Egolf! You not only got the idea, you figured out how to explain it well, conveying the intuition. The phrase "stitching together" is very good for how we assemble an element of F(U)F(U) from elements of the F(Ui)F(U_i) that agree when restricted to the UiUjU_i \cap U_j.

view this post on Zulip David Egolf (Jun 19 2024 at 17:26):

Now, to show that FF is acting like a sheaf with respect to UU and its open cover of the UiU_i, there is still a little more work to do. We've seen that if F(U)F(U) and ff are an equalizer (as above), then we can always form an element of F(U)F(U) from elements tit_i of the F(Ui)F(U_i) that "agree on overlaps". It remains to show that in this case there is a unique element of F(U)F(U) that restricts to each of these elements tit_i on the corresponding UiU_i.

view this post on Zulip David Egolf (Jun 19 2024 at 17:32):

Let t \in A be some tuple of t_i \in F(U_i) such that the t_i "agree on overlaps". We want to show there is a unique element x of F(U) that maps to t under f. Let us assume that we have the situation f(x)=f(x')=t for some x,x' \in F(U). Since \alpha is a bijection, there is a unique a \in A so that \alpha(a)=x and similarly a unique a' \in A so that \alpha(a')=x'. Therefore, f(x)=f(x')=t implies that f(\alpha(a)) = f(\alpha(a')). Since m = f \circ \alpha, this implies that m(a) = m(a'). Since m is injective, this implies that a=a' and hence \alpha(a)=x=x'=\alpha(a'). We conclude that there is indeed a unique element of F(U) that restricts to each t_i on U_i.

view this post on Zulip David Egolf (Jun 19 2024 at 17:35):

So, if F(U) and the corresponding f:F(U) \to \prod_i F(U_i) are always an equalizer for the diagram under discussion, given any open set U of X and any open cover of U by open sets U_i (with each U_i \subseteq U), then I think that F is a sheaf.

view this post on Zulip David Egolf (Jun 19 2024 at 17:37):

It remains to show that if FF is a sheaf, then F(U)F(U) and f:F(U)iF(Ui)f:F(U) \to \prod_i F(U_i) provide an equalizer for any version (as UU and its open cover vary) of the diagram discussed above. But I will leave that for next time!

view this post on Zulip David Egolf (Jun 20 2024 at 16:42):

Alright, we're in the home stretch now! Let's assume that FF is a sheaf. We want to show that this diagram is then an equalizer diagram (for any open set UU in XX and any open cover of UU using some UiU_i with each UiUU_i \subseteq U):
diagram

view this post on Zulip David Egolf (Jun 20 2024 at 16:48):

To do this, let's suppose we have some other cone over gg and hh. We want to show there is a unique morphism from this cone to our proposed equalizer cone, in the category of cones over gg and hh:
morphism of cones

view this post on Zulip David Egolf (Jun 20 2024 at 16:52):

Now, each element x of X (here X is the apex of this other cone, not our topological space) corresponds to some element f'(x) \in \prod_i F(U_i), such that g(f'(x)) = h(f'(x)). As we saw earlier, this means that f'(x) picks out some data from each F(U_i) such that the data selected on U_i and U_j agree when restricted to U_i \cap U_j, for all (i,j). So, we can think of each element x of X as being associated to some collection of data f'(x) on the U_i that "agrees on overlaps".

view this post on Zulip David Egolf (Jun 20 2024 at 16:56):

Since FF is a sheaf, we know there is a unique element β(x)F(U)\beta(x) \in F(U) that restricts to each f(x)if'(x)_i on UiU_i, for all ii. So, let's define β\beta in that way: it sends xXx \in X associated to some collection f(x)f'(x) of "compatible on overlaps" data to the unique "stitched-together" element of F(U)F(U) induced by the data f(x)f'(x).

This will ensure that fβ=ff \circ \beta = f'.

view this post on Zulip David Egolf (Jun 20 2024 at 17:00):

We want to show that there is only this one way to define β\beta. We need f(β(x))=f(x)f(\beta(x)) = f'(x) for any xXx \in X. That is, β(x)\beta(x) must restrict on each UiU_i to f(x)if'(x)_i. Since FF is a sheaf, there is only one such element of F(U)F(U) that satisfies this condition. So, indeed there is a unique morphism of cones to our cone involving F(U)F(U) and ff.

view this post on Zulip David Egolf (Jun 20 2024 at 17:03):

We conclude that if FF is a sheaf on XX, UU is an open set in XX, and we have an open cover for UU formed from sets UiU_i with each UiUU_i \subseteq U, then this diagram is always an equalizer diagram:
diagram

view this post on Zulip David Egolf (Jun 20 2024 at 17:07):

Next time, I hope to start the section on sheafification!

view this post on Zulip David Egolf (Jun 26 2024 at 15:30):

We discussed these two functors above: the functor \Lambda, which sends a presheaf on X to its space of germs, a bundle over X; and the functor \Gamma, which sends a bundle over X to its presheaf of sections.

By applying Γ\Gamma after Λ\Lambda, we can make a sheaf from any presheaf! The next order of business is to understand in detail how this works. [I've not had the energy for math the last little while, but when I have a bit more energy I plan to start working this exercise!]

view this post on Zulip John Baez (Jun 26 2024 at 17:48):

I will just say that this business of turning a presheaf into a sheaf is called "sheafification", and there's at least one very nice way to understand it other than merely saying it's the composite \Gamma \circ \Lambda. So this is a very worthwhile thing to think about. For example, I think you can see that it's the left adjoint of the forgetful functor from sheaves to presheaves.

view this post on Zulip David Egolf (Jul 11 2024 at 23:20):

I am currently puzzling over this sentence from the blog post:

So, if you think about it, you’ll see this: to define a section of the sheafification of FF over an open set UU, you can just take a bunch of sections of FF over open sets covering UU that agree when restricted to the overlaps.

view this post on Zulip David Egolf (Jul 11 2024 at 23:21):

I am unsure what is meant here by a section of a sheaf. (The sheafification of FF is a sheaf).

In the blog posts so far, we've talked about sections of bundles, but not sheaves, as far as I can remember. However, we recently saw that there is an equivalence of categories, between the category of sheaves on XX and the category of etale spaces over XX. When we talk about "a section of a sheaf", is this perhaps a way to refer to a section of the sheaf's corresponding etale space (which is a bundle) under this equivalence?

view this post on Zulip David Egolf (Jul 11 2024 at 23:32):

Looking in the comments for Part 2, I notice this comment (from John Baez):

I keep want to call an element of F(U)F(U) a 'section' of the presheaf FF over the open set UU...

I am guessing that this is the intended meaning in the snippet of Part 3 that I quoted above.

view this post on Zulip David Egolf (Jul 11 2024 at 23:39):

Under this assumption, we can rewrite the snippet I quoted above. (I use FF' to refer to the sheafification of FF).

To define an element of F(U)F'(U) over an open set UU, you can just take a bunch of elements, one from each F(Ui)F(U_i) as UiU_i ranges over a collection of open sets that covers UU, provided that the chosen elements agree when restricted to overlaps.

For example, if FF is the presheaf of bounded continuous real-valued functions on open subsets of R\mathbb{R}, then an element of F(R)F'(\mathbb{R}) can be obtained by stitching together a bunch of bounded continuous real-valued functions, defined on various open subsets UiU_i of R\mathbb{R}, provided that the UiU_i cover R\mathbb{R} and the selected bounded continuous real-valued functions agree on overlaps. In particular, such an element is not necessarily bounded. So we see that the sheafification FF' of a presheaf FF can (at least sometimes) assign new data to an open set - data that the original presheaf FF did not assign to that open set!
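To make that last point concrete, here is an explicit example: cover \mathbb{R} by the open sets U_n = (-n, n) for n = 1, 2, 3, \ldots, and take s_n \in F(U_n) to be the function s_n(x) = x. Each s_n is continuous and bounded on U_n, and any two of them agree on the overlap U_n \cap U_m = U_{\min(n,m)}. Stitching them together yields the element of F'(\mathbb{R}) corresponding to the identity function x \mapsto x on all of \mathbb{R}, which is continuous but unbounded - so it was not an element of F(\mathbb{R}).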

view this post on Zulip Peva Blanchard (Jul 12 2024 at 10:27):

Indeed, I was a bit surprised too, when reading papers, to see that an element sF(U)s \in F(U) is also called a section of FF over UU. Because a section is usually defined w.r.t. a bundle EBE \to B.

I think there is a way to make this way of speaking rigorous by noticing that, as presheaves, F is included in its sheafification F'. The inclusion is given, for every open subset U, by s \mapsto [s] where [s] is the continuous function from U to the étale space of F that assigns to every x \in U the germ [s]_x of s at x.

The function [s][s] is a section of the étale space of FF.
And the map s \mapsto [s] is injective (at least when F is a separated presheaf, so that distinct elements of F(U) never have the same germ at every point of U).
This is why I think we can speak of sF(U)s \in F(U) as a section of FF over UU.

view this post on Zulip Kevin Carlson (Jul 12 2024 at 16:59):

Yes, that's right, in the settings where a sheaf can be seen as a local homeomorphism, its sections in F(U)F(U) are literally local sections of that map; in more general settings it's metaphorical.

view this post on Zulip John Baez (Jul 12 2024 at 19:05):

@David Egolf sorry for the sloppy use of language. It's as Peva and Kevin guessed: at some point I started wanting to call an element of F(U), for a sheaf or even a presheaf F, something a bit more evocative than an "element", so I started calling it a "section" - apparently without adequate warning.

view this post on Zulip John Baez (Jul 12 2024 at 19:05):

It's probably safest - as far as clarity goes - if I just don't do this at all.

view this post on Zulip John Baez (Jul 12 2024 at 19:07):

For sheaves, it's quite safe to abuse language after one has internalized the fact that every sheaf is isomorphic to the sheaf of sections of its etale space.

view this post on Zulip John Baez (Jul 12 2024 at 19:10):

And one can even get away with it for presheaves, using the trick Peva explained: given a presheaf GG we have a god-given inclusion G(U)F(U)G(U) \to F(U) where FF is the sheafification of GG, so we can think of elements of G(U)G(U) as some of the elements of F(U)F(U), which we can think of as sections of the etale space of FF.

view this post on Zulip John Baez (Jul 12 2024 at 19:11):

However, since this is supposed to be an introduction to the subject, I should not be forcing my readers into all this "thinking of X as Y" baloney.

view this post on Zulip John Baez (Jul 12 2024 at 19:12):

I will just go back and edit my blog posts to change the offending term "section" to "element" as needed.

view this post on Zulip John Baez (Jul 12 2024 at 19:12):

Okay, on the blog post Topos Theory (Part 3) I've changed

So, if you think about it, you’ll see this: to define a section of the sheafification of FF over an open set UU, you can just take a bunch of sections of FF over open sets covering UU that agree when restricted to the overlaps.

to this:

If you think about it a while, you'll see that sheafification works like this: to define an element of (ΓΛF)(U) (\Gamma \Lambda F)(U) over an open set UU, you can just take a bunch of elements of F(Ui)F(U_i) over open sets UiU_i covering U U that agree when restricted to the overlaps UiUjU_i \cap U_j.

This is somewhat symbol-ridden compared to what I'd written - I was trying to talk like an ordinary bloke - but since I'd just said that ΓΛ\Gamma \Lambda is called 'sheafification', it should make sense in context.

view this post on Zulip David Egolf (Jul 12 2024 at 19:32):

Great! Thanks for clarifying that!

view this post on Zulip John Baez (Jul 12 2024 at 19:34):

And it's spelled out in even more detail in the following puzzle, which I've also corrected:

Puzzle. Prove the above claim. Give a procedure for constructing an element of (ΓΛF)(U)(\Gamma \Lambda F)(U) given open sets UiUU_i \subseteq U covering UU and elements siF(Ui)s_i \in F(U_i) that obey

siUiUj=sjUiUj \displaystyle{ s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j} }

view this post on Zulip John Baez (Jul 12 2024 at 20:38):

If you spot more places where I do this, or any other problems, please let me know and I can fix them.

view this post on Zulip Peva Blanchard (Jul 12 2024 at 21:07):

This "way of thinking about sFUs \in FU" reminds me of something that recently blew my mind.

Urs Schreiber gave a very nice talk at the Zulip CT Seminar, about higher topos theory in physics. I couldn't understand all the details, but he starts with a "way of thinking" that, I think, is accessible for CT beginners. Here it is.

When you have a space X that you want to study, e.g., the surface of the earth, it is easier if you have a plot. A plot could be, for instance, a function p that sends a tuple of coordinates (x,y) (e.g. the latitude and longitude) to a point in X. In other words, a plot can be seen as a function U \xrightarrow{p} X from some "nice" or familiar space U to the space X.

The key point is that the study of the space XX is exactly the study of all the plots UpXU \xrightarrow{p} X for every nice space UU. To study the surface of the earth is the same as setting up all local charts and giving procedures to transform any one of them into another (provided they overlap).

This collection of charts can be described as follows: for every U, we have a set FU of plots p : U \to X. Moreover, these sets should be consistent with one another. If there is a nice morphism U \to V, then there is a function FV \to FU that maps any plot V \to X to its pre-composition with the nice morphism. And if there is a collection of plots that mutually agree on the pairwise overlaps of their domains, then they can be glued together into a single plot.

In other words, the study of XX is exactly the study of a sheaf FF defined on some category CC of "nice" spaces.

And now the twist: if we follow this wanna-be equivalence, then any sheaf FF on CC can be thought of as the sheaf of plots into a generalized space. To emphasize even more the situation, we can use the name FF to refer to this generalized space.

And then, an element sFUs \in FU is to be thought of as a plot s:UFs : U \to F.

view this post on Zulip Peva Blanchard (Jul 12 2024 at 21:11):

ps: I am still processing all the twists involved. I hope I did not over interpret this stuff.

view this post on Zulip Peva Blanchard (Jul 12 2024 at 21:19):

Note that, I am using the symbols UU and XX that have been used in this thread to denote an open subset UU and a topological space XX.

However, in Urs Schreiber's talk, the "nice" category is not the category of open subsets of a topological space. The first example he gave was something like the category of cartesian spaces Rn\mathbb{R}^n. Apparently, this relates to the distinction between a petit and gros topos

view this post on Zulip John Baez (Jul 13 2024 at 06:37):

Right, that's an excellent explanation of an important direction sheaf theory goes after one learns the basic example of sheaves on a topological space! This direction is often attributed to Grothendieck. Unfortunately my blog articles don't get far enough to talk about this direction: I only managed to cover some very basic material. But Mac Lane and Moerdijk do talk about it.

The first step toward this direction is generalizing sheaves on a set with a topology to sheaves on a category with a Grothendieck topology. And this is the kind of sheaf their book is mainly about.

view this post on Zulip David Egolf (Jul 15 2024 at 17:57):

Here is the next puzzle:

Puzzle. Prove the above claim. Give a procedure for constructing an element of (ΓΛF)(U)(\Gamma \Lambda F)(U) given open sets UiUU_i \subseteq U covering UU and elements siF(Ui)s_i \in F(U_i) that obey

siUiUj=sjUiUj \displaystyle{ s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j} }

Since we are wishing to construct an element of (ΓΛF)(U)(\Gamma \Lambda F)(U), perhaps it will help to think about what the elements of (ΓΛF)(U)(\Gamma \Lambda F)(U) are.

ΓΛF\Gamma \Lambda F is the presheaf of sections of ΛF\Lambda F. So, the set (ΓΛF)(U)(\Gamma \Lambda F)(U) is the set of sections of ΛF\Lambda F over UU. Thus, an element of (ΓΛF)(U)(\Gamma \Lambda F)(U) is a section of ΛF\Lambda F over UU.

view this post on Zulip David Egolf (Jul 15 2024 at 17:59):

What is a section of \Lambda F over U? Well, \Lambda F is a bundle over X (where X is the topological space the open sets U_i are from), which I will write as p:\Lambda F \to X. (So I am using the symbol \Lambda F in two different ways now). We recall that each point x of X has some set \Lambda(F)_x "hovering over" it, in the sense that p^{-1}(x) = \Lambda(F)_x for each x \in X. Thus, a section of p = \Lambda F over U is a continuous function s:U \to \Lambda F such that s(x) \in \Lambda(F)_x for each x \in U.

view this post on Zulip David Egolf (Jul 15 2024 at 18:01):

So, if we can construct a map s:U \to \Lambda F with s(x) \in \Lambda(F)_x for each x \in U, we will have constructed an element of (\Gamma \Lambda F)(U). To do this, we first recall that \Lambda(F)_x is the colimit of the diagram given by the F(V_i) (and their restriction functions) as V_i ranges over all the open sets in X that contain x. In particular, this implies that there is a ("cone leg") function from F(V_i) to \Lambda(F)_x whenever x \in V_i.
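In symbols, the way I understand it:

\Lambda(F)_x = \mathrm{colim}_{V \ni x} F(V)

where V ranges over the open sets containing x, and the cone leg F(V) \to \Lambda(F)_x sends t \in F(V) to its germ [t]_x.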

view this post on Zulip David Egolf (Jul 15 2024 at 18:07):

The following procedure now occurs to me: for each x \in U, choose some U_i from our open cover with x \in U_i, and define s(x) = [s_i]_x, the germ at x of the given element s_i \in F(U_i). (This choice shouldn't matter, because the s_i agree on overlaps - I'll spell that out below.)

view this post on Zulip David Egolf (Jul 15 2024 at 18:12):

Intuitively, s:UΛFs:U \to \Lambda F is a "local behaviour" function, that builds up some "global" data on all of UU by describing how it behaves at each point. s(x)=[si]xs(x) = [s_i]_x intuitively says that our global data at xx will behave locally like how siF(Ui)s_i \in F(U_i) does. So, we can view this definition of ss as a sort of "gluing process" that forms global data from local data. This glued together data, ss, is then an element of the set assigned to UU by the sheafification of FF.
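Here is the small check that s(x) is well defined, independent of which cover set we choose: if x \in U_i \cap U_j, then

[s_i]_x = [\, s_i|_{U_i \cap U_j} \,]_x = [\, s_j|_{U_i \cap U_j} \,]_x = [s_j]_x,

where the first and last equalities hold because taking the germ at x is compatible with restricting to a smaller open set containing x, and the middle equality is exactly the assumption that the s_i agree on overlaps.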

view this post on Zulip David Egolf (Jul 15 2024 at 18:14):

It still remains to show that s:UΛFs: U \to \Lambda F defined by s(x)=[si]xs(x)=[s_i]_x (for xUix \in U_i) is continuous. But I will stop here for today!

view this post on Zulip David Egolf (Jul 22 2024 at 16:24):

I will now try to prove that s:UΛFs:U \to \Lambda F defined by s(x)=[si]xs(x) = [s_i]_x (when xUix \in U_i) is continuous. I hope to do this by showing that ss is continuous at any point xUx \in U.

To show that s is continuous at x \in U, we need to show that the inverse image under s of each neighborhood of s(x) is a neighborhood of x. (I use the word "neighborhood" of a point to mean a subset that contains an open set containing that point.)

view this post on Zulip David Egolf (Jul 22 2024 at 16:29):

So, let VΛFV \subseteq \Lambda F be a neighborhood of s(x)s(x). We wish to show that s1(V)s^{-1}(V) is a neighborhood of xx.

view this post on Zulip David Egolf (Jul 22 2024 at 16:42):

After spending some time dredging up my memories from the earlier parts of this thread, I think it suffices to show this: the inverse image of any neighbourhood from a neighbourhood basis of s(x)s(x) is a neighbourhood of xx.

To see why this is sufficient, let V be an arbitrary neighbourhood of s(x) and let B_{s(x)} be a neighbourhood basis for s(x). Then, by the definition of neighbourhood basis, there is some V_b \subseteq V with V_b \in B_{s(x)}, so that s(x) \in V_b. If s^{-1}(V_b) is a neighbourhood of x, then it contains an open set that contains x, and this open set is further a subset of s^{-1}(V). Thus, we can conclude that s^{-1}(V) is a neighbourhood of x.

view this post on Zulip David Egolf (Jul 22 2024 at 16:56):

Earlier in this thread, we discussed a convenient neighbourhood basis for a point p \in \Lambda F. First, since every element of \Lambda F is a germ, we note that p = [s]_x for some s \in F(V), where V \subseteq X is an open set containing x. Then, we can form the collection of sets of the form \{[s]_y \mid y \in V'\} as V' \subseteq V varies over the open subsets of V that contain x.

If I'm reading the earlier part of this thread correctly, we saw that this collection of sets B[s]xB_{[s]_x} forms a neighbourhood basis for p=[s]xp=[s]_x. Intuitively, each set in the collection B[s]xB_{[s]_x} is a set of "local behaviours of ss" on some open set containing xx.

view this post on Zulip David Egolf (Jul 22 2024 at 17:04):

Returning to the current puzzle, to show that s:UΛFs:U \to \Lambda F is continuous at xx, it suffices to show that the inverse image of any neighbourhood from our neighbourhood basis Bs(x)B_{s(x)} is a neighbourhood of xx.

view this post on Zulip David Egolf (Jul 22 2024 at 17:07):

Now, s(x)=[si]xs(x) = [s_i]_x by definition. Here, siF(Ui)s_i \in F(U_i) is one of our provided sheaf set elements such that xUix \in U_i.

In this setting, let us consider an arbitrary neighbourhood from our neighbourhood basis Bs(x)=B[si]xB_{s(x)} = B_{[s_i]_x}. It is of this form: {[si]yyU}\{[s_i]_y | y \in U'\} where UUiU' \subseteq U_i is an open subset of UiU_i that contains xx. I will denote this neighbourhood as NiN_i.

view this post on Zulip David Egolf (Jul 22 2024 at 17:14):

We wish to show that s^{-1}(N_i) is a neighbourhood of x. What does this preimage contain? Well, for any y \in U_i we have s(y) = [s_i]_y (this uses the fact that our chosen elements agree on overlaps, so the germ does not depend on which cover set containing y we used to define s). So s^{-1}(N_i) contains every point y \in U_i whose germ [s_i]_y lies in N_i.

For each yUy \in U', we have that [si]yNi[s_i]_y \in N_i. Hence, s1(Ni)s^{-1}(N_i) contains at least all of UU'. Since UU' is an open set containing xx, we conclude that s1(Ni)s^{-1}(N_i) really is a neighbourhood of xx!

view this post on Zulip David Egolf (Jul 22 2024 at 17:21):

We conclude that the inverse image of an arbitrary set in our neighbourhood basis for s(x)s(x) is a neighbourhood of xx. Therefore, the inverse image of an arbitrary neighbourhood for s(x)s(x) is a neighbourhood for xx. That implies that ss is continuous at xx, for any xUx \in U. Thus, s:UΛFs:U \to \Lambda F is continuous, as desired!

view this post on Zulip David Egolf (Jul 22 2024 at 17:23):

Intuitively, if we think of continuous functions as having output that changes gradually as their input changes gradually, then the "local behaviour" of our data on UU specified by our s:UΛFs: U \to \Lambda F changes gradually as we move around in UU.

(I suppose one way to formalize this intuition is to note that, by making use of s, any path \gamma:\mathbb{I} \to U induces a path in \Lambda F. Namely, we get s \circ \gamma:\mathbb{I} \to U \to \Lambda F.)

view this post on Zulip David Egolf (Jul 22 2024 at 17:33):

Whew! Hopefully I did that correctly. I'll wrap up for today by introducing the next puzzle:

Puzzle. Show that for any presheaf FF there is a morphism of presheaves ηF:FΓΛF\eta_F: F \to \Gamma \Lambda F.

Show that these morphisms are natural in FF, so they define a natural transformation η:1ΓΛ\eta:1 \Rightarrow \Gamma \Lambda.

view this post on Zulip David Egolf (Jul 22 2024 at 17:37):

The puzzle in the blog post actually states this, which I suspect is a typo:

Show that these morphisms are natural in FF, so they define a natural transformation η:1ΛΓ\eta: 1 \Rightarrow \Lambda \Gamma.

view this post on Zulip John Baez (Jul 22 2024 at 20:49):

It looks like you did that last puzzle correctly - great!

Either ΓΛ\Gamma \Lambda or ΛΓ\Lambda \Gamma has to be a typo above, so pick the one that makes sense.

view this post on Zulip John Baez (Jul 22 2024 at 20:58):

If you get stuck... I fixed the typo in my blog - thanks! And I also fixed another mistake where I spoke of a "section" of a presheaf instead of an "element". You've convinced me it's really bad to start using "section" in this other way.

view this post on Zulip David Egolf (Jul 25 2024 at 18:38):

Awesome! It's great you fixed those things! It's nice to know that this thread is helping make the blog posts even better for future readers.

view this post on Zulip David Egolf (Jul 25 2024 at 18:43):

Alright, my next goal is to show that for any presheaf FF there is a morphism of presheaves ηF:FΓΛF\eta_F:F \to \Gamma \Lambda F. All the presheaves I consider here will be on some topological space XX.

So, we are looking for a natural transformation α=ηF\alpha = \eta_F from F:O(X)opSetF:\mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} to ΓΛF:O(X)opSet\Gamma \Lambda F: \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}.

view this post on Zulip David Egolf (Jul 25 2024 at 18:47):

Let r:UUr: U \to U' be a morphism in O(X)op\mathcal{O}(X)^{\mathrm{op}}. Here UU and UU' are open sets of XX, with UUU' \subseteq U. Associated to rr, we have this naturality square:
naturality square

view this post on Zulip David Egolf (Jul 25 2024 at 18:48):

So, we need to specify a function αU:F(U)ΓΛF(U)\alpha_U:F(U) \to \Gamma \Lambda F(U) for each open set UU in XX.

view this post on Zulip David Egolf (Jul 25 2024 at 18:50):

In the previous puzzle, we saw a way to build an element of ΓΛF(U)\Gamma \Lambda F(U) given a cover of open sets UiU_i for UU, together with siF(Ui)s_i \in F(U_i) for each ii so that the sis_i given agree on overlaps.

We can now use this procedure specialized to the case where our cover of open sets for UU consists of just the single set UU. If we are given some sF(U)s \in F(U), we should be able to construct some element of ΓΛF(U)\Gamma \Lambda F(U).

view this post on Zulip David Egolf (Jul 25 2024 at 18:54):

An element of ΓΛF(U)\Gamma \Lambda F(U) is a section s:UΛFs':U \to \Lambda F, which is a continuous function :UΛF:U \to \Lambda F such that s(x)Λ(F)xs'(x) \in \Lambda(F)_x for each xUx \in U. (Here, Λ(F)x\Lambda(F)_x is the set of germs of FF at xx.)

Given sF(U)s \in F(U), our recipe for constructing an s:UΛFs':U \to \Lambda F from the previous puzzle is this: s(x)=[s]xs'(x) = [s]_x for each xUx \in U.

Intuitively, this ss' is just like ss: at each point xUx \in U it specifies a germ s(x)=[s]xs'(x)=[s]_x, which describes the "local behaviour" of ss at that point.

view this post on Zulip David Egolf (Jul 25 2024 at 18:59):

So, we can try setting αU(s)=(x[s]x):UΛF\alpha_U(s) = (x \mapsto [s]_x):U \to \Lambda F for each UU. It remains to check that this specifies a natural transformation.

view this post on Zulip David Egolf (Jul 25 2024 at 19:21):

Picking some s \in F(U), let's see if the naturality square commutes. Going around one way, we apply \alpha_U and then restrict the resulting section to U': this gives the assignment x \mapsto [s]_x, now only for x \in U'. Going around the other way, we restrict first and then apply \alpha_{U'}: this gives x \mapsto [s|_{U'}]_x for x \in U'. These two functions are the same because restricting a presheaf element s \in F(U) from U to U' leaves its germs at points of U' unchanged: [s|_{U'}]_x = [s]_x for each x \in U'.

We conclude that an arbitrary naturality square (of the form pictured above) commutes, so we have indeed found a natural transformation α:FΓΛF\alpha:F \to \Gamma \Lambda F.

view this post on Zulip David Egolf (Jul 25 2024 at 19:29):

I'll stop there for now! But the next part of the puzzle is to show that we can build up a natural transformation η:1ΓΛ\eta : 1 \Rightarrow \Gamma \Lambda in this way.

view this post on Zulip David Egolf (Jul 30 2024 at 16:45):

To zoom out briefly before pressing onward: we have the functor \Lambda, sending presheaves on X to bundles over X, and the functor \Gamma, sending bundles over X back to presheaves on X.

We are working towards showing that there is an adjunction between these two functors. An important feature of an adjunction is a "unit". If L:CDL:C \to D is left adjoint to R:DCR:D \to C, the unit is a natural transformation η:1CRL\eta:1_C \to R \circ L. Intuitively, a unit tells us something about what happens to an object or morphism of CC that takes a "round trip" across the adjunction and back.
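In components, just restating the definition: for each object c of C the unit gives a morphism \eta_c: c \to R(L(c)), and naturality says that

R(L(f)) \circ \eta_c = \eta_{c'} \circ f

for each morphism f: c \to c' in C.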

view this post on Zulip David Egolf (Jul 30 2024 at 16:49):

In our case, we wish to construct a natural transformation:

\eta: 1_{\widehat{\mathcal{O}(X)}} \Rightarrow \Gamma \circ \Lambda

This will relate each presheaf FF to its sheafification ΓΛF\Gamma \Lambda F.

view this post on Zulip David Egolf (Jul 30 2024 at 16:55):

To construct such a natural transformation, we need to say what each of its components are. Any component ηF\eta_F of η\eta is a morphism from some presheaf FF to its sheafification ΓΛ(F)\Gamma \Lambda (F).

So, for each FF, we need a natural transformation ηF:FΓΛ(F)\eta_F:F \to \Gamma \Lambda (F).

view this post on Zulip David Egolf (Jul 30 2024 at 16:59):

Since ηF\eta_F is a natural transformation between presheaves on XX, it has one component for each open set of XX. Using our work from last time, how should we set (ηF)U:F(U)ΓΛ(F)(U)(\eta_F)_U: F(U) \to \Gamma \Lambda (F)(U)?

view this post on Zulip David Egolf (Jul 30 2024 at 17:03):

We can try this:

(\eta_F)_U: s \mapsto (x \mapsto [s]_x)

Note that sF(U)s \in F(U) and (x[s]x):UΛF(x \mapsto [s]_x):U \to \Lambda F. By [s]x[s]_x I mean the germ of sF(U)s \in F(U) at xUx \in U.

(We recall that an element of ΓΛ(F)(U)\Gamma \Lambda (F)(U) is a section :UΛF:U \to \Lambda F with respect to p:ΛFXp:\Lambda F \to X which sends each germ to its associated point in XX).

view this post on Zulip David Egolf (Jul 30 2024 at 17:07):

We saw last time that \eta_F:F \to \Gamma \Lambda(F) defined in this way is indeed a natural transformation from a presheaf to its sheafification. It remains to show that all the \eta_F assemble to form a natural transformation \eta: 1_{\widehat{\mathcal{O}(X)}} \to \Gamma \circ \Lambda.

view this post on Zulip David Egolf (Jul 30 2024 at 17:12):

To show that we get a natural transformation, we need to show that an arbitrary naturality square commutes. Let α:FG\alpha: F \to G be an arbitrary morphism in O(X)^\widehat{\mathcal{O}(X)}. Our corresponding naturality square is:
naturality square

view this post on Zulip David Egolf (Jul 30 2024 at 17:14):

We want to show that ΓΛ(α)ηF=ηGα\Gamma \Lambda(\alpha) \circ \eta_F = \eta_G \circ \alpha is true. Both sides of this equation are natural transformations. To show that two natural transformations are equal, it suffices to show that each of their components are equal.

Hence, we now aim to show that (ΓΛ(α))U(ηF)U=(ηG)UαU(\Gamma \Lambda(\alpha))_U \circ (\eta_F)_U = (\eta_G)_U \circ \alpha_U, where UU is some arbitrary open set of XX.

view this post on Zulip David Egolf (Jul 30 2024 at 17:17):

In picture form, we wish to show that this diagram commutes:
naturality square evaluated at U

view this post on Zulip David Egolf (Jul 30 2024 at 17:19):

To show that this diagram commutes, we can trace around an arbitrary element sF(U)s \in F(U) and see what happens to it. We want to show that ((ΓΛ(α))U(ηF)U)(s)=((ηG)UαU)(s)((\Gamma \Lambda(\alpha))_U \circ (\eta_F)_U)(s) = ((\eta_G)_U \circ \alpha_U)(s) is true.

view this post on Zulip David Egolf (Jul 30 2024 at 17:26):

I start by aiming to expand the right-hand side of this equation.

We don't know anything in particular about \alpha_U, so we can't further simplify \alpha_U(s). However, we do know what (\eta_G)_U is like. From above, we have (\eta_G)_U: t \mapsto (x \mapsto [t]_x) for t \in G(U). Each output is a function :U \to \Lambda G, which is in fact a section of p:\Lambda G \to X.

Thus, ((ηG)UαU)(s)=(ηG)U(αU(s))=(x[αU(s)]x)((\eta_G)_U \circ \alpha_U)(s) = (\eta_G)_U(\alpha_U(s)) = (x \mapsto [\alpha_U(s)]_x).

view this post on Zulip David Egolf (Jul 30 2024 at 17:32):

We next turn our attention to the left-hand side of the equation above.

We know what (\eta_F)_U(s) is. It is (x \mapsto [s]_x):U \to \Lambda F. Thus, the left-hand side is:

(\Gamma \Lambda(\alpha))_U(x \mapsto [s]_x)

view this post on Zulip David Egolf (Jul 30 2024 at 17:36):

Having re-written each side of the equation we wish to show is true, our new goal is to show this equation is true:

(\Gamma \Lambda(\alpha))_U(x \mapsto [s]_x) = (x \mapsto [\alpha_U(s)]_x)

(Here, each side of the equation is a section of p:ΛGXp:\Lambda G \to X over UU. In particular, each side of the equation is a function :UΛG:U \to \Lambda G.)

view this post on Zulip David Egolf (Jul 30 2024 at 17:41):

To go further with this, I think we need to re-write (ΓΛ(α))U(\Gamma \Lambda (\alpha))_U, so that we can figure out what it does to the input (x[s]x):UΛF(x \mapsto [s]_x): U \to \Lambda F.

To start doing that, let's start by recalling what Λ(α):Λ(F)Λ(G)\Lambda(\alpha): \Lambda(F) \to \Lambda(G) is.

view this post on Zulip David Egolf (Jul 30 2024 at 17:52):

Scrolling back in time, I found this:
David Egolf said:

Ok, now that I understand this neighbourhood basis, let me see about trying to use it to prove that G(α):Λ(F)Λ(F)G(\alpha):\Lambda(F) \to \Lambda(F') is continuous. Recall that G(α)G(\alpha) is going to be a morphism of bundles induced by a morphism α:FF\alpha:F \to F' of presheaves on XX. And G(α)G(\alpha) acts by [s]x[α(s)]x[s]_x \mapsto [\alpha(s)]_x.

view this post on Zulip David Egolf (Jul 30 2024 at 17:53):

Translating this to the notation I'm currently using, we have:

\Lambda(\alpha): \Lambda(F) \to \Lambda(G), \qquad \Lambda(\alpha): [s]_x \mapsto [\alpha(s)]_x

\alpha(s) here is shorthand for \alpha_U(s) where s \in F(U). So we are making use of \alpha_U:F(U) \to G(U).

view this post on Zulip David Egolf (Jul 30 2024 at 17:57):

Next, I want to work out what Γ(Λ(α))=Γ([s]x[α(s)]x)\Gamma(\Lambda(\alpha)) = \Gamma([s]_x \mapsto [\alpha(s)]_x) is.

To do this, it will be helpful to recall what \Gamma outputs given a morphism of bundles. After reviewing an earlier part of this thread, it appears that: given a morphism of bundles f from q:Y \to X to q':Y' \to X, the natural transformation \Gamma(f) has U-th component sending a section t:U \to Y to f \circ t:U \to Y'.

view this post on Zulip David Egolf (Jul 30 2024 at 18:14):

So, Γ(Λ(α))\Gamma(\Lambda(\alpha)) is a natural transformation from Γ(Λ(F))\Gamma(\Lambda(F)) to Γ(Λ(G))\Gamma(\Lambda(G)). These are both sheaves on XX. The UU-th component of this natural transformation :Γ(Λ(F))(U)Γ(Λ(G))(U):\Gamma(\Lambda(F))(U) \to \Gamma(\Lambda(G))(U) is then the function tΛ(α)tt \mapsto \Lambda(\alpha) \circ t. Here, t:UΛFt:U \to \Lambda F is a section of p:ΛFXp:\Lambda F \to X. Since Λ(α):Λ(F)Λ(G)\Lambda(\alpha): \Lambda(F) \to \Lambda(G), we have that Λ(α)t:UΛ(G)\Lambda(\alpha) \circ t:U \to \Lambda(G), as desired.

view this post on Zulip David Egolf (Jul 30 2024 at 18:18):

Intuitively, the UU-th component of this natural transformation takes data described on UU in terms of local behaviour at each point in UU, and then transforms each piece of local behaviour (a point in ΛF\Lambda F) to a new piece of local behaviour (a point in ΛG\Lambda G).

view this post on Zulip David Egolf (Jul 30 2024 at 18:21):

In brief, (ΓΛ(α))U:(ΓΛ(F))(U)(ΓΛ(G))(U)(\Gamma \Lambda(\alpha))_U: (\Gamma \Lambda (F))(U) \to (\Gamma \Lambda (G))(U) and (ΓΛ(α))U(t)=Λ(α)t(\Gamma \Lambda(\alpha))_U(t) = \Lambda(\alpha) \circ t.

view this post on Zulip David Egolf (Jul 30 2024 at 18:26):

We are now in a position to rewrite this expression: (\Gamma \Lambda(\alpha))_U(x \mapsto [s]_x). Using the formula above, it can be rewritten as \Lambda(\alpha) \circ (x \mapsto [s]_x) = (x \mapsto \Lambda(\alpha)([s]_x)).

view this post on Zulip David Egolf (Jul 30 2024 at 18:31):

We noted above that \Lambda(\alpha):[s]_x \mapsto [\alpha(s)]_x, where \alpha(s) is shorthand for \alpha_U(s) if s \in F(U). Thus, \Lambda(\alpha)([s]_x) = [\alpha_U(s)]_x, and so (\Gamma \Lambda(\alpha))_U(x \mapsto [s]_x) = (x \mapsto [\alpha_U(s)]_x).

view this post on Zulip David Egolf (Jul 30 2024 at 18:35):

We were able to rewrite each side of ((\Gamma \Lambda(\alpha))_U \circ (\eta_F)_U)(s) = ((\eta_G)_U \circ \alpha_U)(s) to the expression x \mapsto [\alpha_U(s)]_x. Thus the two composites agree at every s \in F(U), and so this diagram commutes:
naturality square evaluated at U

Since UU was arbitrary, we conclude that this diagram commutes for any open set UU of XX.
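
Assembling the steps above into a single chain of equalities (same notation as before):

((\Gamma \Lambda(\alpha))_U \circ (\eta_F)_U)(s) = (\Gamma \Lambda(\alpha))_U(x \mapsto [s]_x) = \Lambda(\alpha) \circ (x \mapsto [s]_x) = (x \mapsto [\alpha_U(s)]_x) = (\eta_G)_U(\alpha_U(s)) = ((\eta_G)_U \circ \alpha_U)(s).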

view this post on Zulip David Egolf (Jul 30 2024 at 18:36):

Thus, ΓΛ(α)ηF=ηGα\Gamma \Lambda(\alpha) \circ \eta_F = \eta_G \circ \alpha is true and this naturality square commutes:
naturality square

Since α:FG\alpha:F \to G was an arbitrary natural transformation between presheaves on XX, we conclude that this diagram commutes for any natural transformation between presheaves on XX. So, any naturality square for η\eta commutes.

view this post on Zulip David Egolf (Jul 30 2024 at 18:37):

Finally, we conclude that η:1O(X)^ΓΛ\eta: 1_{\widehat{\mathcal{O}(X)}} \to \Gamma \circ \Lambda is a natural transformation, as desired!

view this post on Zulip David Egolf (Jul 30 2024 at 18:45):

This means we have a candidate η\eta for the unit of an adjunction ΛΓ\Lambda \dashv \Gamma. Next time, I plan to think about what the counit of such an adjunction could look like!

view this post on Zulip John Baez (Jul 31 2024 at 09:02):

Great work so far!

view this post on Zulip David Egolf (Aug 05 2024 at 17:37):

Next up, we wish to construct a candidate for the counit of an adjunction ΛΓ\Lambda \dashv \Gamma. This will be a natural transformation ϵ:ΛΓ1Top/X\epsilon: \Lambda \circ \Gamma \to 1_{\mathsf{Top}/X}.

To start with, let's pick some bundle p:YXp:Y \to X. The pp-th component of ϵ\epsilon will be a morphism of bundles ϵp:ΛΓ(p)p\epsilon_p: \Lambda\Gamma(p) \to p.

view this post on Zulip David Egolf (Aug 05 2024 at 17:39):

@John Baez I believe there is a typo in this part of the blog post:

Then you want to construct a morphism of bundles ηp\eta_p from your etale space ΛΓp\Lambda \Gamma_p to my original bundle.

I think the ηp\eta_p in this sentence should be ϵp\epsilon_p, instead.

view this post on Zulip David Egolf (Aug 05 2024 at 17:45):

We want to define a morphism of bundles ϵp:ΛΓ(p)p\epsilon_p:\Lambda \Gamma(p) \to p. I don't have the energy to figure out how to do that today. But I will quote the relevant part of the blog post to facilitate doing this on another day:

We get points in ΛΓp\Lambda \Gamma_p over xXx \in X from sections of p:YXp:Y \to X over open sets containing xx. But you can just take one of these sections and evaluate it at xx and get a point in YY.

(Note that the blog post uses the notation ΛΓp\Lambda \Gamma_p where I use the notation ΛΓ(p)\Lambda \Gamma(p).)

view this post on Zulip John Baez (Aug 05 2024 at 20:56):

David Egolf said:

Then you want to construct a morphism of bundles ηp\eta_p from your etale space ΛΓp\Lambda \Gamma_p to my original bundle.

I think the ηp\eta_p in this sentence should be ϵp\epsilon_p, instead.

I think you're right - I'll fix that. Thanks!

view this post on Zulip John Baez (Aug 05 2024 at 21:00):

I also fixed another mistake where I used η\eta instead of ϵ\epsilon. It's both a blessing and a curse that Greek has two e-like letters.

view this post on Zulip Oscar Cunningham (Aug 06 2024 at 07:46):

The real trick is to use ε\varepsilon and ϵ\epsilon for two related variables.

view this post on Zulip David Egolf (Sep 05 2024 at 14:52):

Having taken a break, I think I'm feeling ready to resume my efforts here!

Upon reflection, the recent discussion relating to the adjunction ΛΓ\Lambda \dashv \Gamma has felt rather "heavy". I think it will help if I start by summarizing exactly how Λ\Lambda and Γ\Gamma work. Then, contemplating ΛΓ\Lambda \circ \Gamma should feel a bit easier, I hope!

view this post on Zulip John Baez (Sep 05 2024 at 15:25):

Good! Ideally when you have a clear mental image of \Lambda and \Gamma you can sort of "see" (in a more or less literal sense) what \epsilon: \Lambda \circ \Gamma \to 1 and \eta : 1 \to \Gamma \circ \Lambda should be - i.e., how close taking a round trip between presheaves and bundles comes to getting you back to where you started. To me, seeing what's going on is more important than writing up a proof with lots of symbols, since if I can do the former, I believe I can do the latter when pressed.

(This is after having done 8 years of homework assignments and taught years of courses that kept challenging that belief, in sometimes quite threatening ways. :upside_down:)

view this post on Zulip David Egolf (Sep 05 2024 at 15:47):

I like that perspective @John Baez ! I think I've slowly started to develop some intuition for Λ\Lambda and Γ\Gamma, but I have a ways to go still. I'll keep the goal of developing a clearer picture in mind as I work on this.

view this post on Zulip David Egolf (Sep 05 2024 at 16:05):

Here's a summary for \Gamma. On objects: \Gamma sends a bundle p:Y \to X to its sheaf of sections, where \Gamma(p)(U) is the set of continuous sections s:U \to Y of p over the open set U, and the restriction maps restrict sections to smaller open sets. On morphisms: \Gamma sends a morphism of bundles f from p:Y \to X to p':Y' \to X to the natural transformation whose U-th component sends a section t:U \to Y to f \circ t:U \to Y'.

view this post on Zulip John Baez (Sep 05 2024 at 16:10):

Great! Just for fun, I'll say: I seem to have mentally compacted a lot of stuff to a little picture of a bundle YY over XX (a rectangle sitting over a line), an open set UXU \subseteq X, and a bunch of sections of YY over UU (some graphs of continuous functions defined on UU, drawn in that rectangle I mentioned).

Then there's a fancier picture where I have two bundles Y,YY, Y' over XX and a bundle map f:YYf: Y \to Y'. I can see how it sends sections of YY over UU to sections of YY' over UU.

view this post on Zulip John Baez (Sep 05 2024 at 16:12):

Drawing the pictures would have taken one tenth the time it takes to describe them!

view this post on Zulip David Egolf (Sep 05 2024 at 16:23):

That's helpful! For a given bundle p:YXp:Y \to X, it makes sense to draw all the points that pp sends to xx as "hovering over" xx. And that is nicely visualized using the rectangle you describe!

The low dimension of this picture is also nice, because it lets us easily visualize a section. As a slightly fancier version, one could also imagine a rectangular prism floating over a rectangle. But that would be harder to draw!

view this post on Zulip John Baez (Sep 05 2024 at 16:26):

Yes, it's funny how much advanced work on topology boils down to drawing pathetically simple 2-dimensional pictures and pretending they're higher-dimensional. Our retina is essentially 2-dimensional and we have to live with that.

view this post on Zulip David Egolf (Sep 05 2024 at 16:35):

I made an attempt to draw the second picture you described:
picture

This picture aims to illustrate how we get a section of pp' from a section of pp, given f:YYf:Y \to Y' satisfying pf=pp' \circ f = p. This condition on ff basically ensures that each arrow describing how ff maps points is vertical in this picture.

view this post on Zulip John Baez (Sep 05 2024 at 16:37):

Nice! Yes, this sort of picture is always hovering in my eyeballs as I work with bundles, sheaves and presheaves... guiding me.

view this post on Zulip David Egolf (Sep 05 2024 at 16:42):

I'll stop here for today. Next time, I'd like to do a similar thing for Λ\Lambda. Namely, I hope to review how it acts on objects and morphisms, and maybe to draw a related picture.

view this post on Zulip John Baez (Sep 05 2024 at 16:44):

Great!

view this post on Zulip John Baez (Sep 05 2024 at 16:51):

The trick is to figure out how you want to draw a germ.

view this post on Zulip David Egolf (Sep 06 2024 at 16:57):

I'll start with a summary for \Lambda. On objects: \Lambda sends a presheaf F on X to its bundle of germs \Lambda(F) \to X, where the points sitting over x \in X are the germs [s]_x of elements s \in F(U) for open sets U containing x, and the projection sends each germ [s]_x to x. On morphisms: \Lambda sends a natural transformation \alpha:F \to G to the morphism of bundles \Lambda(\alpha):\Lambda(F) \to \Lambda(G) acting by [s]_x \mapsto [\alpha_U(s)]_x.

view this post on Zulip David Egolf (Sep 06 2024 at 17:00):

Now, the challenge is to try and draw a picture illustrating this!

view this post on Zulip John Baez (Sep 06 2024 at 17:14):

This is much tougher to draw. Do you want to hear how I draw a germ? It uses some 'artistic license', I'm afraid.

view this post on Zulip David Egolf (Sep 06 2024 at 17:15):

John Baez said:

This is much tougher to draw. Do you want to hear how I draw a germ? It uses some 'artistic license', I'm afraid.

Sure! That would be great!

view this post on Zulip David Egolf (Sep 06 2024 at 17:27):

Here's what I've drawn so far:
picture

view this post on Zulip David Egolf (Sep 06 2024 at 17:31):

The horizontal line is our topological space XX. The region indicated by the large oval is an open set UU of XX. Applying FF to UU gives us the set F(U)F(U), which is represented here as a box floating above UU.

The squiggly line in that box indicates an element sF(U)s \in F(U). A germ of ss at xx is indicated by a small circle around part of the part of ss associated to xUx \in U. (Intuitively, this is given in the limit by restricting ss to smaller and smaller open sets contained in UU and containing xx). Finally the bundle Λ(F)\Lambda(F) projects this germ back down to xXx \in X.
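
(For reference, here is the standard way to make "germ" precise, stated in the notation of this thread: two elements s \in F(U) and s' \in F(U'), with x \in U \cap U', have the same germ at x exactly when there is some open set W with x \in W \subseteq U \cap U' and s|_W = s'|_W. So the set of germs at x is the filtered colimit \mathrm{colim}_{U \ni x} F(U), taken over the open neighborhoods of x.)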

view this post on Zulip David Egolf (Sep 06 2024 at 17:33):

The space of germs Λ(F)\Lambda(F) does not appear in this picture, which is somewhat unsatisfying.

view this post on Zulip David Egolf (Sep 06 2024 at 17:36):

However, I like this about the above picture: it illustrates how going from a presheaf to a bundle of germs involves a transition. Namely, we pass from data attached to open sets (the elements of F(U)) to "local" data attached to individual points of X (the germs).

view this post on Zulip David Egolf (Sep 06 2024 at 17:46):

Here's a fancier version of the above picture, aiming to illustrate part of how Λ\Lambda sends a natural transformation of presheaves to a morphism of bundles:
picture

view this post on Zulip David Egolf (Sep 06 2024 at 17:50):

I've added a second box hovering over UU, corresponding to the set G(U)G(U). Since we have a natural transformation α:FG\alpha: F \to G, we have a function αU:F(U)G(U)\alpha_U:F(U) \to G(U). The squiggly line in the G(U)G(U) box indicates αU(s)\alpha_U(s). "Zooming in" on αU(s)\alpha_U(s) at xx gives us the germ [αU(s)]x[\alpha_U(s)]_x. Finally, the bundle Λ(G):Λ(G)X\Lambda(G):\Lambda(G) \to X projects this germ back down to xXx \in X.

view this post on Zulip David Egolf (Sep 06 2024 at 17:55):

These pictures aren't perfect, but I think making them has been helpful for developing some intuition about what's going on!

Next time, I plan to start thinking about the "round trips" ΛΓ\Lambda \circ \Gamma and ΓΛ\Gamma \circ\Lambda. It would be very cool if we could figure out some pictures to illustrate these round-trip functors!

view this post on Zulip John Baez (Sep 06 2024 at 18:46):

David Egolf said:

The squiggly line in that box indicates an element sF(U)s \in F(U). A germ of ss at xx is indicated by a small circle around part of the part of ss associated to xUx \in U. (Intuitively, this is given in the limit by restricting ss to smaller and smaller open sets contained in UU and containing xx).

Hey, that's more or less how I draw a germ. Morally, the germ of s at x is like s restricted to an 'infinitesimal' open set containing x.

view this post on Zulip John Baez (Sep 06 2024 at 18:52):

David Egolf said:

The space of germs Λ(F)\Lambda(F) does not appear in this picture, which is somewhat unsatisfying.

Yes, the space of all germs (say of the sheaf of continuous or smooth real-valued functions) is too large to draw except in a completely oversimplified way. I don't see a way around that.

view this post on Zulip John Baez (Sep 06 2024 at 19:01):

David Egolf said:

These pictures aren't perfect, but I think making them has been helpful for developing some intuition about what's going on!

Good! I find pictures helpful as long as I know their limitations, but maybe I haven't thought enough about how the mere process of trying to draw them makes me think about things in new ways.

Next time, I plan to start thinking about the "round trips" ΛΓ\Lambda \circ \Gamma and ΓΛ\Gamma \circ\Lambda. It would be very cool if we could figure out some pictures to illustrate these round-trip functors!

One challenge is that the etale space of a sheaf of sections of a bundle is usually huge compared to the original bundle. But you can just draw it as a rectangle labeled 'huge'. :upside_down:

view this post on Zulip David Egolf (Sep 08 2024 at 17:16):

Today, I want to try and draw a picture to illustrate the "round-trip" functor ΓΛ:O(X)^Top/XO(X)^\Gamma \circ \Lambda:\widehat{\mathcal{O}(X)} \to \mathsf{Top}/X \to \widehat{\mathcal{O}(X)}.

The intuition I have for this functor is as follows:

view this post on Zulip David Egolf (Sep 08 2024 at 17:16):

view this post on Zulip David Egolf (Sep 08 2024 at 17:16):

view this post on Zulip David Egolf (Sep 08 2024 at 17:18):

I'll next try to draw a picture that illustrates this process.

view this post on Zulip David Egolf (Sep 08 2024 at 17:58):

Here's the first part of the process, given by applying Λ\Lambda:
picture

view this post on Zulip David Egolf (Sep 08 2024 at 18:01):

In the top part of the picture, I visualize part of a presheaf FF on XX. The left box represents the set F(U)F(U) of data attached to the open set UU. Similarly, the right box represents the set F(U)F(U') of data attached to the open set UU'. The wiggly lines in these boxes represent elements of these sets.

view this post on Zulip David Egolf (Sep 08 2024 at 18:02):

Because FF is a presheaf, we can zoom in on these wiggly lines to various points, and get germs. Each germ here is represented by a small circle, intending to convey the idea of "zooming in" on a wiggly line at some point. There are many different ways our data can wiggle, and so there are a huge number of germs. I've drawn a big cloud of little circles to try and represent the space of germs very roughly.

The arrows coming down from the top of the diagram to the bottom intend to illustrate how the germs of the pictured wiggly lines would be included in this huge space of germs.

view this post on Zulip David Egolf (Sep 08 2024 at 18:04):

Next, here is a picture for the second half of the process, which involves applying Γ\Gamma:
picture

view this post on Zulip David Egolf (Sep 08 2024 at 18:19):

Γ\Gamma sends a bundle to its sheaf of sections. A section in this case involves "flowing through" germ space in a continuous way, picking out a local behaviour at each point of some open set of XX. Such a section now describes some data attached to an open set of XX again!

In the picture, I've illustrated how we can "glue together" the two presheaf elements ss and ss' that we started with. We get ss'', which is an element of the set attached to UUU \cup U' by our sheaf of sections of Λ(F):Λ(F)X\Lambda(F):\Lambda(F) \to X.

view this post on Zulip David Egolf (Sep 08 2024 at 18:23):

If F already supported "gluing together" of elements that agree on overlaps (to a unique result), then F was in fact already a sheaf. But if F contained elements agreeing on overlaps that couldn't be glued together, then this "round trip" process will result in a sheaf-version of F! And this (\Gamma \circ \Lambda)(F) will be different from F in this case because it contains some additional "glued together" elements.
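
(A standard illustration, not from the blog post: let A be a set with at least two elements and let F be the "constant presheaf" with F(U) = A for every open U and all restriction maps the identity. If X contains two disjoint nonempty open sets U_1 and U_2, then a \in F(U_1) and b \in F(U_2) with a \neq b agree (vacuously) on the overlap U_1 \cap U_2 = \emptyset, but no single element of F(U_1 \cup U_2) = A restricts to a on U_1 and to b on U_2. Here (\Gamma \circ \Lambda)(F) turns out to be the sheaf of locally constant A-valued functions, which does contain such a glued element.)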

view this post on Zulip David Egolf (Sep 08 2024 at 18:26):

I'll stop here for today. Next time, I'm hoping to draw a picture illustrating the other round trip, namely
ΛΓ:Top/XO(X)^Top/X\Lambda \circ \Gamma: \mathsf{Top}/X \to \widehat{\mathcal{O}(X)}\to \mathsf{Top}/X.

view this post on Zulip John Baez (Sep 08 2024 at 20:19):

That was very nice. You rose to the challenge of drawing countless germs without creating a complete mess!

view this post on Zulip David Egolf (Sep 10 2024 at 15:38):

I'm realizing I don't yet have clear intuition for the round trip functor ΛΓ:Top/XO(X)^Top/X\Lambda \circ \Gamma: \mathsf{Top}/X \to \widehat{\mathcal{O}(X)}\to \mathsf{Top}/X. To my understanding, this process converts any bundle over XX to an étale space over XX. (I will write "etale space" to mean " étale space", for ease of typing).

I think we proved that earlier in the thread, but I would struggle to explain how this happens using a picture.

view this post on Zulip David Egolf (Sep 10 2024 at 15:39):

There's two things I want to do at this point:

view this post on Zulip John Baez (Sep 10 2024 at 15:40):

So for \Lambda \circ \Gamma you start with a bundle over X, then form its sheaf of sections, then look at all the germs of sections and make these into the points of a whopping big new bundle over X.

view this post on Zulip John Baez (Sep 10 2024 at 15:41):

Any point in the whopping big new bundle gives a point in the original bundle, since a germ of a section ss at a point xx gives the point s(x)s(x).

view this post on Zulip John Baez (Sep 10 2024 at 15:42):

I hope I didn't ruin things just now - I usually try to play coy and let you figure out almost everything yourself!

view this post on Zulip David Egolf (Sep 10 2024 at 15:49):

Thanks! I don't fully follow what you said (yet), but I will try to draw a picture of what you just said and see what happens!

John Baez said:

I hope I didn't ruin things just now - I usually try to play coy and let you figure out almost everything yourself!

I do enjoy figuring things out, but in this case a nudge in the right direction is appreciated!

view this post on Zulip David Egolf (Sep 10 2024 at 16:30):

To draw a picture, I need to choose a bundle to start with. I would like to choose a bundle that is not an etale space, so I can see how ΛΓ\Lambda \circ \Gamma upgrades it to an etale space.

view this post on Zulip David Egolf (Sep 10 2024 at 16:33):

Now, a bundle p:Y \to X is an etale space exactly if p:Y \to X is a local homeomorphism. A local homeomorphism f:A \to B is a continuous map such that each point a in the domain has an open neighborhood U whose image f(U) is open in B and for which f|_U:U \to f(U) is a homeomorphism (where U and f(U) are both equipped with the subspace topology).

view this post on Zulip David Egolf (Sep 10 2024 at 16:35):

A homeomorphism is in particular an open map: it sends open sets to open sets. The inverse of a homeomorphism is also a homeomorphism, and so it will also send open sets to open sets. Because homeomorphisms are bijections, if we have a homeomorphism fU:Uf(U)f|_U:U \to f(U), we get an induced bijection of open sets, going between the open sets of UU and the open sets of f(U)f(U).

view this post on Zulip David Egolf (Sep 10 2024 at 16:37):

So, to show a bundle p:YXp:Y \to X is NOT an etale space, it would suffice to find some point yYy \in Y so that any open neighborhood UU of yy has "too many" open sets compared to the image of UU under pp. To be more precise, it would suffice to show that there is no open set UU containing yy such that pU:Up(U)p|_U:U \to p(U) induces a bijection of open sets.

view this post on Zulip David Egolf (Sep 10 2024 at 16:45):

Here's a picture aiming to illustrate such a situation:
not a local homeomorphism

view this post on Zulip David Egolf (Sep 10 2024 at 16:46):

Here, YY is a subspace of R2\mathbb{R}^2, equipped with the subspace topology. XX is the real line, and pp is the continuous map given by composing the inclusion of YY to R2\mathbb{R}^2 and the projection of R2\mathbb{R}^2 down to the xx-axis.

view this post on Zulip David Egolf (Sep 10 2024 at 16:49):

UU is an open set containing yy, where UU is contained in the "vertical section" of YY. Notice that yy has many open neighborhoods in UU, given for example by various "vertical" open intervals containing yy. However, the image of UU under pp is just a single point. Viewed as a subspace of XX, this image only has two open sets: the empty set and the set containing the single point p(U)p(U). Hence, there can be no bijection between the open sets of UU and the open sets of p(U)p(U).

(And I suspect that this is true in fact for any open set UU containing yy: indeed, any such open set contains a small (and hence "vertical") open interval about yy, which is an open set that gets "collapsed" when passing to the image of pp).

EDIT: More simply, pp restricted to any open set containing yy will be non-injective. Hence, this restriction of pp can't be a homeomorphism.

view this post on Zulip David Egolf (Sep 10 2024 at 16:51):

We conclude that p:YXp:Y \to X is not a local homeomorphism, and so YY is not an etale space over XX.

view this post on Zulip David Egolf (Sep 10 2024 at 16:57):

Next time, I'm hoping to draw a picture illustrating the process of applying Γ\Gamma to this bundle p:YXp:Y \to X. I'm curious to see how the resulting sheaf of sections will reflect the fact that pp is not a local homeomorphism!

view this post on Zulip David Egolf (Sep 10 2024 at 17:21):

After writing the above, I realized that the "local non-injectivity" of pp is what stops pp from being a local homeomorphism. With that in mind, I think this bundle is also not an etale space:
picture

view this post on Zulip David Egolf (Sep 10 2024 at 17:22):

The idea is that YY is some space that "branches" at some point. If we pick yYy \in Y right at the branching point, then any open neighborhood of yy will contain points from both the "upper" and "lower" branches. And hence the projection pp won't be injective even when restricted to really small neighborhoods of yy, like the picture UU. Thus, this pp can't be a local homeomorphism.

view this post on Zulip Peva Blanchard (Sep 10 2024 at 17:32):

Interesting. So maybe the following holds: if the bundle p:YXp: Y \to X is étale, then all fibers must be in bijection with one another.

I'll think about it.

edit: a first observation is that we must assume XX to be connected

view this post on Zulip John Baez (Sep 10 2024 at 17:34):

David Egolf said:

After writing the above, I realized that the "local non-injectivity" of pp is what stops pp from being a local homeomorphism. With that in mind, I think this bundle is also not an etale space: picture.

That's right! Most bundles that you can actually draw are not etale spaces! For example the bundle I always draw when someone asks me to draw a bundle:

p:[0,1]×[0,1][0,1]p: [0,1] \times [0,1] \to [0,1]
p(x,y)=x p(x,y) = x

is not an etale space.

The only etale spaces p:EBp: E \to B I can easily draw are the 'covering spaces', where every point bBb \in B has a neighborhood UU where p1UU×Sp^{-1}{U} \cong U \times S for some discrete space SS:

view this post on Zulip John Baez (Sep 10 2024 at 17:54):

Peva Blanchard said:

Interesting. So maybe the following holds: if the bundle p:YXp: Y \to X is étale, then all fibers must be in bijection with one another.

This is certainly true if XX is connected and p:YXp: Y \to X is a covering space. So the challenge is to think about etale spaces that aren't covering spaces.

Hmm, now I see that some such etale spaces are quite easy to draw, like this: XX is the real line, and YY is the open interval (0,1)(0,1), and p:YXp: Y \to X is the inclusion of (0,1)(0,1) in the line.

view this post on Zulip David Egolf (Sep 11 2024 at 15:52):

John Baez said:

Hmm, now I see that some such etale spaces are quite easy to draw, like this: XX is the real line, and YY is the open interval (0,1)(0,1), and p:YXp: Y \to X is the inclusion of (0,1)(0,1) in the line.

Yes! Looking on Wikipedia, I see that if UU is an open subset of XX, then the inclusion i:UXi:U \to X is a local homeomorphism, provided that UU is equipped with the subspace topology. (So in this case UU is an etale space over XX).

That same article also notes that if U is an open subset of \mathbb{R}^n, then these two conditions on a continuous map f:U \to \mathbb{R}^n are equivalent: (1) f is a local homeomorphism; (2) f is locally injective.

(I assume that UU is again to be equipped with the subspace topology).

view this post on Zulip John Baez (Sep 11 2024 at 16:37):

Thanks! I peeked at the Wikipedia page and I see that to prove the second condition implies the first, they use a substantial theorem in algebraic topology, called invariance of domain, proved by Brouwer. This is one of the results they dole out in a first course on homology theory, to show that you can use homology to settle questions that aren't obviously about homology theory.

view this post on Zulip David Egolf (Sep 11 2024 at 16:46):

Peva Blanchard said:

Interesting. So maybe the following holds: if the bundle p:YXp: Y \to X is étale, then all fibers must be in bijection with one another.

I'll think about it.

edit: a first observation is that we must assume XX to be connected

For this condition on fibers to hold, we can also note that we need pp to be surjective, at least if YY is non-empty. Otherwise, we'll have at least one non-empty fiber and at least one empty fiber.

view this post on Zulip David Egolf (Sep 11 2024 at 16:48):

(In the case mentioned above by @John Baez, a covering space p:YXp:Y \to X on a connected space XX is always surjective).

view this post on Zulip John Baez (Sep 11 2024 at 16:55):

Your parenthetical claim actually doesn't follow from Wikipedia's definition of covering space. According to that definition pp can have empty fibers, e.g. we can have Y=Y = \emptyset, because the "discrete space" DxD_x mentioned in that definition can be empty.

view this post on Zulip John Baez (Sep 11 2024 at 16:57):

It's good to allow empty fibers in that definition since ruling out the empty set by fiat tends to produce categories with worse properties: e.g. the empty covering space of XX is the initial object in the category of covering spaces of XX.

view this post on Zulip John Baez (Sep 11 2024 at 16:58):

However, people are vastly more interested in covering spaces where the fibers are nonempty, and then p:YXp: Y \to X is surjective.

view this post on Zulip John Baez (Sep 11 2024 at 16:59):

Wikipedia says "some authors" require surjectivity in the case where XX is not connected. Those authors should probably require it even when XX is connected, since they obviously don't like covering spaces with empty fibers!

view this post on Zulip Kevin Carlson (Sep 11 2024 at 17:02):

Grumble. Reminds me of an old-fashioned professor I TA'd linear algebra for who wasn't quite convinced that the zero vector space has a basis.

view this post on Zulip John Baez (Sep 11 2024 at 17:06):

Born back before they discovered the empty set.

view this post on Zulip David Egolf (Sep 11 2024 at 17:12):

John Baez said:

Your parenthetical claim actually doesn't follow from Wikipedia's definition of covering space. According to that definition pp can have empty fibers, e.g. we can have Y=Y = \emptyset, because the "discrete space" DxD_x mentioned in that definition can be empty.

I admit I didn't try to prove this myself! I just read this in that Wikipedia article:

If XX is connected, it can be shown that π\pi is surjective...

Is that claim in the article incorrect? (As far as I can tell, that Wikipedia article doesn't actually try to prove this claim.)

view this post on Zulip John Baez (Sep 11 2024 at 17:20):

Take the Wikipedia definition of 'covering space' and see if p:YXp: Y \to X is a covering space when Y=Y = \emptyset and pp is the unique map to XX. If this is a covering space by their definition, then it can't be true that

If XX is connected, it can be shown that π\pi is surjective...

This would then be a good time to start your career of correcting Wikipedia pages. :upside_down:

But it's possible I didn't read their definition carefully enough, and for some reason it rules out the case Y=Y = \emptyset.

view this post on Zulip David Egolf (Sep 11 2024 at 17:22):

Alright, let me see what happens when we consider p:YXp:Y \to X when YY is empty! (Time to put Wikipedia to the test!)

view this post on Zulip David Egolf (Sep 11 2024 at 17:28):

Adjusted for our notation, Wikipedia's definition reads as follows:

Let XX be a topological space. A covering of XX is a continuous map p:YXp:Y \to X such that for every xXx \in X there exists an open neighborhood UxU_x of xx and a discrete space DxD_x such that p1(Ux)=dDxVdp^{-1}(U_x) = \coprod_{d \in D_x} V_d and pVd:VdUxp|_{V_d}:V_d \to U_x is a homeomorphism for every dDxd \in D_x.

view this post on Zulip David Egolf (Sep 11 2024 at 17:30):

In the next sentence the article elaborates on this, and indicates that each VdV_d is to be open (presumably as a subset of YY).

view this post on Zulip David Egolf (Sep 11 2024 at 17:39):

Alright. Now, consider the case where Y is empty. Then each V_d \subseteq Y is also empty. Let us assume that X is non-empty, and let U_x be a non-empty open neighborhood of x \in X. For p:Y \to X to be a covering, we need p|_{V_d}:V_d \to U_x to be a homeomorphism for each d \in D_x. However, in this case any p|_{V_d} is mapping from an empty set V_d to a non-empty set U_x.

EDIT:
I think the following is wrong, but I leave it here for context:
[Hence pVdp|_{V_d} can't be a homeomorphism, and pp can't be a covering. We conclude that Wikipedia's definition of a covering p:YXp:Y \to X excludes YY from being empty, at least when XX is non-empty.]

view this post on Zulip David Egolf (Sep 11 2024 at 17:44):

Wait a second! I think I missed something possibly important.

view this post on Zulip David Egolf (Sep 11 2024 at 17:46):

Strictly speaking, we are not required to have a homeomorphism pVd:VdUxp|_{V_d}:V_d \to U_x. Instead, we are required that for every dDxd \in D_x we have a homeomorphism pVd:VdUxp|_{V_d}:V_d \to U_x. So, if DxD_x is empty, this condition can still hold trivially!

view this post on Zulip David Egolf (Sep 11 2024 at 17:49):

Let us consider p1(Ux)p^{-1}(U_x). Since YY is empty, p1(Ux)p^{-1}(U_x) is also empty. We want to write the empty space as a coproduct dDxVd\coprod_{d \in D_x}V_d where DxD_x is empty. What is the empty coproduct in Top\mathsf{Top}?

view this post on Zulip David Egolf (Sep 11 2024 at 17:52):

I expect that the empty coproduct is the colimit of the empty diagram, which is the initial object of Top\mathsf{Top}. So, the empty coproduct should be the empty space.

view this post on Zulip John Baez (Sep 11 2024 at 17:53):

David Egolf said:

Strictly speaking, we are not required to have a homeomorphism pVd:VdUxp|_{V_d}:V_d \to U_x. Instead, we are required that for every dDxd \in D_x we have a homeomorphism pVd:VdUxp_{V_d}:V_d \to U_x. So, if DxD_x is empty, this condition can still hold trivially!

Yes, that's how the empty set tricks people: a lot of things are vacuously true about it.

view this post on Zulip John Baez (Sep 11 2024 at 17:55):

David Egolf said:

I expect that the empty coproduct is the colimit of the empty diagram, which is the initial object of Top\mathsf{Top}. So, the empty coproduct should be the empty space.

Right. Maybe you can see why the editors of this page, who are probably not category theorists, slipped up around here.

view this post on Zulip David Egolf (Sep 11 2024 at 18:04):

So, it seems that Y can indeed be empty in Wikipedia's definition of a covering p:Y \to X. To summarize the case when Y is empty and X is non-empty: for each x \in X we can pick any open neighborhood U_x together with the empty discrete space D_x = \emptyset; then p^{-1}(U_x) = \emptyset is indeed the empty coproduct \coprod_{d \in \emptyset} V_d, and the condition "p|_{V_d}:V_d \to U_x is a homeomorphism for every d \in D_x" holds vacuously. So the empty map p is a covering, even though it is not surjective.

view this post on Zulip David Egolf (Sep 11 2024 at 18:06):

So I indeed stand corrected! When using Wikipedia's definition of a covering, one can have a non-surjective covering p:YXp:Y \to X even when XX is connected. The empty map p:YXp:Y \to X provides such a covering when YY is empty and XX is connected.

view this post on Zulip David Egolf (Sep 11 2024 at 18:08):

(Nothing like a bit of fun with the empty set to start out the day :sweat_smile: !)

view this post on Zulip David Egolf (Sep 11 2024 at 18:21):

Before wrapping up for today, I wanted to draw a picture. Specifically, I want to start thinking about the sheaf of sections Γ(p)\Gamma(p) of this bundle p:YXp:Y \to X:
bundle

I'm quite curious to see how the local non-injectivity of pp at yy gets removed as we apply ΛΓ:Top/XO(X)^Top/X\Lambda \circ \Gamma: \mathsf{Top}/X \to \widehat{\mathcal{O}(X)} \to \mathsf{Top}/X!

view this post on Zulip David Egolf (Sep 11 2024 at 18:23):

Intuitively, the local non-injectivity of this map comes from the following fact: the two "branches" to the right of yy have points arbitrarily close to yy. I am guessing that by applying the above process we will end up with a bunch of germs associated to yy, but where each of these germs will only be "really close" to some germs associated to one of the two branches.

view this post on Zulip David Egolf (Sep 11 2024 at 18:28):

Here's a picture illustrating two sections of p:YXp:Y \to X, namely s:VYs:V \to Y and s:VYs':V \to Y:
picture

Each of ss and ss' is a section of p:YXp:Y \to X over the open set VXV \subseteq X. So, s,sΓ(p)(V)s,s' \in \Gamma(p)(V), where Γ(p):O(X)opSet\Gamma(p):\mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set} is the sheaf of sections of pp.

view this post on Zulip David Egolf (Sep 11 2024 at 18:40):

Notice that the section ss' goes along the upper branch, while ss goes along the lower branch. I strongly suspect that the germ of ss' at yy will be different from that of the germ of ss at yy. Intuitively that would reflect the fact that each section is passing through yy in a certain way!

If this intuition is right, we can begin to see the single point yy split into multiple germs at that point, including the germs I just described [s]y[s']_y and [s]y[s]_{y}. I suspect that [s]y[s']_y is close to germs from ss' near yy - for example, germs of ss' on the upper branch. And similarly [s]y[s]_y I suspect is close to germs from ss near yy - for example germs of ss on the lower branch.

Intuitively, we are getting multiple germs associated to yy. And I suspect that each of these germs is only arbitrarily near to germs from a section that flows along on one of the two branches splitting off from yy. So we can perhaps begin to see how our bundle of germs of sections of pp could be locally injective!

view this post on Zulip David Egolf (Sep 11 2024 at 18:50):

(I'll stop here for today)

view this post on Zulip John Baez (Sep 11 2024 at 20:18):

David Egolf said:

So I indeed stand corrected! When using Wikipedia's definition of a covering, one can have a non-surjective covering p:YXp:Y \to X even when XX is connected.

Okay, so we see eye to eye. Wikipedia's definition looks correct to me, and with this definition a covering space p:Y \to X is surjective if Y is nonempty and X is connected.

view this post on Zulip John Baez (Sep 11 2024 at 20:28):

David Egolf said:

Notice that the section ss' goes along the upper branch, while ss goes along the lower branch. I strongly suspect that the germ of ss' at yy will be different from that of the germ of ss at yy. Intuitively that would reflect the fact that each section is passing through yy in a certain way!

Yes, this and all your other intuitions are right! :tada:

To prove this particular fact, note (or show) that two sections of a bundle, say s and s', give two elements of its sheaf of sections, and these two elements have the same germ at a point x iff s and s' become equal when restricted to some open neighborhood of x. But in your picture s and s' are not equal on any open neighborhood of x = p(y).

view this post on Zulip Peva Blanchard (Sep 12 2024 at 21:56):

Peva Blanchard said:

Maybe the following holds: if the bundle p:YXp: Y \to X is étale, then all fibers must be in bijection with one another.

I've been thinking about this. I did not manage to prove yet that all fibers must be in bijection with one another, but I made a small step: when the bundle p : Y \to X is étale, then every fiber is discrete (w.r.t. the subspace topology).

Indeed, let u \in F_x = p^{-1}(x) be a point in the fiber over x. Since p is étale, there exists an open neighborhood U of u s.t. p_{|U} : U \to p(U) is a homeomorphism. By definition of the subspace topology, U \cap F_x is open in F_x. But p_{|U} is in particular injective, and every point of U \cap F_x maps to x, so U \cap F_x = \{u\}. Hence, every singleton in F_x is open. I.e., F_x is discrete.

This makes the "covering space" mental picture particularly hard not to see.

view this post on Zulip John Baez (Sep 13 2024 at 01:20):

Peva Blanchard said:

Peva Blanchard said:

Maybe the following holds: if the bundle p:YXp: Y \to X is étale, then all fibers must be in bijection with one another.

I've been thinking about this. I did not manage to prove that all fibers must be in bijection with one another yet...

Good! It's not true!

A while ago in this thread I mentioned a simple counterexample that works even when XX is connected. (I didn't say it's a counterexample, but it clearly is.)

You can also find a counterexample that's a covering space when XX is not connected.

view this post on Zulip John Baez (Sep 13 2024 at 01:23):

But yes, I like your proof that the fibers of an etale space are discrete.

view this post on Zulip Peva Blanchard (Sep 13 2024 at 06:18):

Oh indeed! I've been unknowingly assuming that every fiber was not empty. The inclusion (0,1)R(0,1) \subseteq \mathbb{R} provides a counter-example. From there, we can build other counterexamples with fibers of arbitrary different sizes.

E.g., E=F1×(0,1)+F2×(2,3)+F3×(4,5)+E = F_1 \times (0,1) + F_2 \times (2,3) + F_3 \times (4,5) + \dots with p:ERp: E \to \mathbb{R} defined as the second projection on every term of the disjoint sum, and the FiF_i's being arbitrary discrete spaces.

view this post on Zulip Peva Blanchard (Sep 13 2024 at 06:27):

I should probably reformulate my original goal then. "If p:EXp: E \to X is an étale bundle, with XX connected, and pp surjective, then all fibers are in bijection with one another". I'll think about it.

view this post on Zulip John Baez (Sep 13 2024 at 15:24):

Alas, that conjecture is false too - and I think with your ability to create etale spaces with fibers of different sizes, it should be easy to disprove.

view this post on Zulip John Baez (Sep 13 2024 at 15:26):

Maybe you should try to prove that if p:EXp: E \to X is a covering space, and XX is connected, then all fibers are in bijection with one another.

view this post on Zulip John Baez (Sep 13 2024 at 15:32):

Etale spaces seem too flexible for a good result of this sort unless we essentially require that they're covering spaces.

However, I now see, hovering before my eyes, an etale space p:EXp: E \to X where XX is connected and all fibers are in bijection with one another, which is not a covering space.

view this post on Zulip David Egolf (Sep 13 2024 at 17:02):

Peva Blanchard said:

...when the bundle p:YXp : Y \to X is étale, then every fiber is discrete (w.r.t. to the subspace topology).

I think this result provides some good intuition! I saw somewhere an analogy between an etale space and a pastry having many thin layers. I think one could view this result as saying that each point in a given fibre is "apart" from the other points in that fibre. And this could be viewed as saying that each point in any given fibre is in a separate "layer" from the others in that fibre.

view this post on Zulip Peva Blanchard (Sep 13 2024 at 17:02):

Oh yes. Actually, if CC is any collection of open subsets of XX, then we can form the coproduct

E=UCUE = \sum_{U \in C} U

with the obvious projection π:EX\pi : E \to X. It is then an étale bundle.

When CC is the set of all open subsets of XX, it looks like the corresponding bundle is initial among all the étale spaces over XX.

Anyway, this gives indeed a lot of flexibility.

view this post on Zulip Peva Blanchard (Sep 13 2024 at 17:04):

David Egolf said:

Peva Blanchard said:

...when the bundle p:YXp : Y \to X is étale, then every fiber is discrete (w.r.t. to the subspace topology).

I saw somewhere an analogy between an etale space and a layered pastry (one with many thin layers).

I had the exact same picture in mind! It's called "mille-feuille" in french. And it's quite crispy. The only difference is that étale spaces seem to lack cream. Étale spaces taste very dry.

view this post on Zulip David Egolf (Sep 13 2024 at 17:14):

Peva Blanchard said:

Oh yes. Actually, if CC is any collection of open subsets of XX, then we can form the coproduct

E=UCUE = \sum_{U \in C} U

with the obvious projection π:EX\pi : E \to X. It is then an étale bundle.

When CC is the set of all open subsets of XX, it looks like the corresponding bundle is initial among all the étale spaces over XX.

Anyway, this gives indeed a lot of flexibility.

If I understand correctly, in particular this lets us have layers that overlap in terms of their projection:
picture

The projection map is not injective, but it is locally injective. Some fibres have two elements, and some fibres have a single element. So, the fibres aren't in bijection with one another, even though the projection is surjective and XX is connected.

view this post on Zulip John Baez (Sep 13 2024 at 18:52):

Yes, that's the perfect example of that phenomenon!

view this post on Zulip Peva Blanchard (Sep 13 2024 at 20:04):

John Baez said:

Maybe you should try to prove that if p:EXp: E \to X is a covering space, and XX is connected, then all fibers are in bijection with one another.

Let's try to prove this.

Fix a point x0x_0 in the base. There is an open neighborhood U0U_0 of x0x_0 such that the pre-image is homeomorphic to U0×Ex0U_0 \times E_{x_0} where Ex0E_{x_0} is the fiber over x0x_0. In particular, for every xU0x \in U_0, the fiber ExE_x is in bijection with Ex0E_{x_0}.

This suggests to consider

W={xX  ExEx0}W = \{ x \in X ~|~ E_x \cong E_{x_0} \}

We want to prove that W=XW = X. Since XX is connected and WW is not empty, it suffices to show that WW is both open and closed.

Clearly, U0WU_0 \subseteq W. For any xWx \in W, there exists a neighborhood UU of xx such that p1UU×ExU×Ex0p^{-1}U \cong U \times E_x \cong U \times E_{x_0}. Then, UWU \subseteq W. This proves that WW is a neighborhood of every of its points. I.e., WW is open.

Let y \in X - W, i.e., E_y \not\cong E_{x_0}. Using the defining property of the covering, there exists a neighborhood V of y such that p^{-1}V \cong V \times E_y. In particular, for every y' \in V, E_{y'} \cong E_y \not\cong E_{x_0}. Thus, V \subseteq X - W. I.e., X - W is open, and W is closed. qed.

view this post on Zulip John Baez (Sep 13 2024 at 20:21):

Great! I wasn't sure how to prove it, I just knew it was true. :upside_down: But this looks like the best way to prove it.

view this post on Zulip John Baez (Sep 13 2024 at 20:27):

So, I think the idea of "all fibers being in bijection with one another" is not really something we should expect of etale spaces, unless they are covering spaces of connected spaces.

But here's the example I was imagining of an etale space where all fibers are in bijection with each other even though it's not a covering space!

Start with the map p:R2Rp: \mathbb{R}^2 \to \mathbb{R} that projects onto the first coordinate:

p(x,y)=xp(x,y) = x

Then let YR2Y \subseteq \mathbb{R}^2 be the subset that's the union of all horizontal lines

{(x,n):xR} \{(x,n) : x \in \mathbb{R} \}

where nZn \in \mathbb{Z}, together with the open line segment

{(x,12):0<x<1} \{(x,\frac{1}{2}): 0 \lt x \lt 1\}

Restricting pp to YY we get a map I'll abusively call p:YRp: Y \to \mathbb{R}. This is an etale space over a connected space, and all the fibers are in bijection with each other (since they're all countably infinite), but it's not a covering space.
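
(Spelling this out a little, under my reading of the example: each point of Y lies on one of the horizontal pieces, and a small piece of that line or segment is mapped homeomorphically by p onto an open interval in \mathbb{R}, so p is a local homeomorphism. But p is not a covering near x = 0: for any small connected open neighborhood U of 0, the part of p^{-1}(U) coming from the extra segment is (U \cap (0,1)) \times \{\tfrac{1}{2}\}, which is a connected component of p^{-1}(U) mapping onto U \cap (0,1) \subsetneq U rather than onto all of U, so p^{-1}(U) cannot be written as a disjoint union of open pieces each mapping homeomorphically onto U.)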

view this post on Zulip Oscar Cunningham (Sep 13 2024 at 21:32):

Another example would be to take two copies of R\mathbb{R} and quotient them together on (0,)(0,\infty). This gives a space which has fibers of cardinality 22 above (,0](-\infty,0] and fibers of cardinality 11 above (0,)(0,\infty). So if we add a disjoint copy of (0,)(0,\infty) then the fibers will have cardinality 22 everywhere.

view this post on Zulip John Baez (Sep 13 2024 at 21:36):

Thanks - that's a much more exciting example, because it doesn't use the dirty trick 0+1=0\aleph_0+ 1 = \aleph_0.

view this post on Zulip Oscar Cunningham (Sep 13 2024 at 21:38):

sheaf.png

view this post on Zulip Kevin Carlson (Sep 13 2024 at 23:42):

That’s the kind of thing I was thinking of, but then gave up because it isn’t a local homeomorphism at the branch point.

view this post on Zulip Kevin Carlson (Sep 13 2024 at 23:44):

I suspect you can’t actually get an example of a local homeomorphism with fibers of constant finite size over a connected base that isn’t a covering.

view this post on Zulip John Baez (Sep 13 2024 at 23:54):

Kevin Carlson said:

That’s the kind of thing I was thinking of, but then gave up because it isn’t a local homeomorphism at the branch point.

What if we define a topology such that the branch point y has open neighborhoods U containing just some of the lower branch at left (together with some of the stuff at right) and open neighborhoods V containing just some of the upper branch at left (together with some of the stuff at right)? Then their intersection needs to be open, and it doesn't 'look' open, but let's accept that.

A function p : Y → X between two topological spaces is called a local homeomorphism if every y ∈ Y has an open neighborhood U whose image p(U) is open in X and such that the restriction p|_U : U → p(U) is a homeomorphism.

view this post on Zulip Kevin Carlson (Sep 14 2024 at 00:07):

I was wondering about that too but if you intersect two of those neighborhoods you see that there’s an open subset to the right of the branch point that looks like a half-open interval. In other words near the branch point I think the space we’ve specified is just a disjoint union of two open intervals and a half open interval, and so again this isn’t actually a local homeomorphism!

view this post on Zulip Kevin Carlson (Sep 14 2024 at 00:08):

If you allow things continuing into just the upper left to be open but not just the lower left, then maybe…

view this post on Zulip Kevin Carlson (Sep 14 2024 at 00:09):

Oh, eek, Oscar hasn’t specified a branch point in the way I thought since he glued over (0,),(0,\infty), not [0,)[0,\infty)!

view this post on Zulip John Baez (Sep 14 2024 at 00:09):

Whew, this is confusing.

view this post on Zulip Kevin Carlson (Sep 14 2024 at 00:10):

Yeah, I’ve never thought about the line-with-one-and-a-half origins before

view this post on Zulip Kevin Carlson (Sep 14 2024 at 00:13):

Ok now I think Oscar’s example is actually fine.

view this post on Zulip John Baez (Sep 14 2024 at 00:18):

Okay, I hadn't really understood what Oscar's example actually was. I thought there was one point where the two branches merge, but there are two, and that saves the day, apparently.

view this post on Zulip John Baez (Sep 14 2024 at 01:23):

Though of course that's necessary to accomplish what we're looking for: the same number of points in every fiber!

view this post on Zulip Oscar Cunningham (Sep 14 2024 at 05:58):

Kevin Carlson said:

Oh, eek, Oscar hasn’t specified a branch point in the way I thought since he glued over (0,),(0,\infty), not [0,)[0,\infty)!

Right. The way I think about it is that bundles over R\mathbb{R} can split 1n1\to n, merge n1n\to 1, begin 010\to 1 or end 101\to 0. But whichever way they go the bit with 11 above it has to be an open set. That's why I quotiented by (0,)(0,\infty) above, leaving two origins.

view this post on Zulip Kevin Carlson (Sep 14 2024 at 06:16):

Yeah, it’s a nice (and weird) example.

view this post on Zulip John Baez (Sep 14 2024 at 14:40):

Here's another way to think about this. Start with a bundle YXY \to X like this:

This is the bundle Kevin and I originally thought Oscar was talking about: the fiber over the arrow has just one point, while each fiber to the left of that has two, and each fiber to the right has one.

view this post on Zulip John Baez (Sep 14 2024 at 14:44):

Then apply the functor ΛΓ\Lambda \circ \Gamma that @David Egolf has been investigating. So: take the sheaf of sections of our bundle YXY \to X, and then form the bundle of germs of that sheaf. This new bundle ΛΓ(Y)X\Lambda \Gamma(Y) \to X is an etale space!

view this post on Zulip John Baez (Sep 14 2024 at 14:47):

The sheaf of sections of the original bundle has two different germs at the arrow, and two at each point to the left, and one at each point to the right.

view this post on Zulip John Baez (Sep 14 2024 at 14:48):

So our etale space has two points above the arrow, while our original bundle had just one!

view this post on Zulip John Baez (Sep 14 2024 at 14:51):

The counit ΛΓ(Y)Y\Lambda \Gamma(Y) \to Y must collapse those two points down to one.

view this post on Zulip John Baez (Sep 14 2024 at 14:53):

So, it maps what Oscar was trying to talk about, down to what Kevin and I originally thought he was talking about.

view this post on Zulip Peva Blanchard (Sep 15 2024 at 10:19):

Just to be sure that I understand the example correctly, especially the reason why it is not a covering space, I'm drawing it like this
image.png

If it were a covering, then the portion of the total space enclosed in the two yellow dashed lines would be homeomorphic to U×2U \times 2.

From the picture, we can see that it does not look like two disjoint copies of UU. But I have trouble formulating a precise argument that rules out the existence of a homeomorphism to U×2U \times 2.

I thought about connectedness, but if I'm not mistaken the white part is connected, hence π1U\pi^{-1}U has 2 connected components. So the invariant "number of connected components" is not enough to distinguish π1U\pi^{-1}U and U×2U \times 2.

view this post on Zulip Oscar Cunningham (Sep 15 2024 at 13:14):

Just looking at the number of connected components isn't enough, but you can use the fact that there are points in the base space for which both points above them are in the same component. That can't happen with U×2U\times 2.

view this post on Zulip Peva Blanchard (Sep 15 2024 at 15:09):

Ah yes, I see, thank you! My mistake was in thinking of \pi^{-1}U \cong U \times 2 as a homeomorphism between topological spaces, whereas it should be an isomorphism of bundles over U.

view this post on Zulip Peva Blanchard (Sep 15 2024 at 15:13):

Or more precisely. As a mere topological space, π1U\pi^{-1}U is indeed homeomorphic to U×2U \times 2. But, they are not isomorphic as bundles over UU, thanks to your argument.

view this post on Zulip Oscar Cunningham (Sep 15 2024 at 15:48):

They're not isomorphic as bundles over UU. But I don't think they are homeomorphic either. It's just harder to write down an invariant that proves that they're not homeomorphic.

view this post on Zulip Oscar Cunningham (Sep 15 2024 at 15:48):

Oh wait, one is Hausdorff and the other one isn't. That's a simple invariant.

view this post on Zulip Peva Blanchard (Sep 15 2024 at 15:53):

Oh yes, the two origins cannot be separated in your example. I was wrong, π1U\pi^{-1}U and U×2U \times 2 are not even homeomorphic as mere topological spaces

view this post on Zulip John Baez (Sep 15 2024 at 15:56):

That's true. But the really important lesson here is that given bundles p:EBp: E \to B, p:EBp': E' \to B, they are only isomorphic if there's a homeomorphism f:EEf: E \to E' with p=pfp = p ' \circ f. That last equation makes isomorphism of bundles much more than mere homeomorphism of their total spaces.

view this post on Zulip John Baez (Sep 15 2024 at 16:00):

This means that there needs to be a subject of "invariants of bundles" which goes beyond the subject of "invariants of topological spaces". Algebraic topology provides lots of both. For vector bundles, the most famous invariants are the [[Chern classes]] (for complex vector bundles) and [[Pontryagin classes]] and [[Stiefel-Whitney classes]] (for real vector bundles). All these can be defined using [[classifying spaces]], which we've been talking about in another thread.

view this post on Zulip John Baez (Sep 15 2024 at 16:02):

There are also "invariants of sheaves", which are especially well developed for sheaves of vector spaces - sheaves F where the set F(U) attached to any open set U is actually a vector space, and the restriction maps F(U) \to F(V) are linear. (Or, in algebraic geometry, sheaves of modules over the so-called "structure sheaf" - probably not worth explaining here. Grothendieck was especially involved in studying these, and his studies of such sheaves eventually led him to topos theory.)

view this post on Zulip David Egolf (Sep 15 2024 at 18:43):

John Baez said:

So our etale space has two points above the arrow, while our original bundle had just one!

John Baez said:

The counit ΛΓ(Y)Y\Lambda \Gamma(Y) \to Y must collapse those two points down to one.

This is helping me understand why our counit natural transformation needs to go from ΛΓ\Lambda \circ \Gamma to 1Top/X1_{\mathsf{Top}/X} and not from 1Top/X1_{\mathsf{Top}/X} to ΛΓ\Lambda \circ \Gamma. If we focus on a single point "hovering" in some bundle, and then apply ΛΓ\Lambda \circ \Gamma, we get out the germs of sections that go through that point. It's very natural to define a function that collapses each of these germs back to the original point. By contrast, I don't see a nice way to define a function that would send our original point to some particular germ of a section that goes through that point. (I don't see a nice way to pick some distinguished germ that is most deserving of being mapped to by our original point).

view this post on Zulip John Baez (Sep 16 2024 at 06:00):

Indeed, I doubt there's a way to pick a distinguished germ in general. It could be good to try to prove this by showing there exists no natural transformation α:1Top/XΛΓ\alpha: 1_{\mathsf{Top}/X} \to \Lambda \circ \Gamma. I think one can prove this by considering a single bundle that has lots of automorphisms, like the bundle

p:R2R p: \mathbb{R}^2 \to \mathbb{R}

p(x,y)=x p(x,y) = x

Each automorphism gives a naturality square that needs to commute, and I think one can show they can't all commute, no matter how one chooses α\alpha.
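
(One way to flesh this idea out - going beyond what is said above, so take it as a sketch: suppose \alpha were such a natural transformation, and write \alpha_p(x_0, y_0) = [s]_{x_0} for some local section s of p defined near x_0, say s(x) = (x, g(x)) for a continuous real-valued function g. For any continuous f:\mathbb{R} \to \mathbb{R} with f(x_0) = 0, the map \varphi_f(x,y) = (x, y + f(x)) is an automorphism of this bundle fixing (x_0, y_0). Naturality at \varphi_f forces \Lambda\Gamma(\varphi_f)(\alpha_p(x_0,y_0)) = \alpha_p(\varphi_f(x_0,y_0)) = \alpha_p(x_0,y_0), i.e. [x \mapsto (x, g(x)+f(x))]_{x_0} = [x \mapsto (x, g(x))]_{x_0}, which would force f to vanish on a whole neighborhood of x_0. Taking f(x) = x - x_0 gives a contradiction, so no such \alpha can exist.)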

view this post on Zulip David Egolf (Sep 22 2024 at 16:59):

I feel more comfortable now with Γ\Gamma and Λ\Lambda, thanks to the discussion and picture-drawing above. Building on this understanding, I now want to return to the counit for our adjunction ΛΓ\Lambda \dashv \Gamma.

view this post on Zulip David Egolf (Sep 22 2024 at 17:01):

We are looking for a natural transformation ϵ:ΛΓ1Top/X\epsilon: \Lambda \circ \Gamma \to 1_{\mathsf{Top}/X}. Given a particular bundle p:YXp:Y \to X, Γ(p)\Gamma(p) is its sheaf of sections, and then Λ(Γ(p))\Lambda(\Gamma(p)) is the bundle of germs of that sheaf of sections. Thinking of our bundle as some geometry "hovering over" XX, then the sections are ways to "travel through" parts of that geometry, and we can get multiple germs at some point if there are sections with different germs passing through that point.

view this post on Zulip David Egolf (Sep 22 2024 at 17:07):

Given this intuition, intuitively I am hoping we can define a morphism of bundles ϵp:(ΛΓ)(p)p\epsilon_p:(\Lambda \circ \Gamma)(p) \to p that sends each germ of a section associated to some point yYy \in Y back to yy. Intuitively we are "collapsing" the cloud of germs associated to a point back to that point.

view this post on Zulip David Egolf (Sep 22 2024 at 17:08):

Let s:UYs:U \to Y be a section of p:YXp:Y \to X over the open set UXU \subseteq X, having germ [s]x[s]_x at xUx \in U. Then we want ϵp\epsilon_p to send [s]x[s]_x to s(x)s(x).

view this post on Zulip David Egolf (Sep 22 2024 at 17:12):

The first order of business is to check that ϵp:(ΛΓ)(p)p\epsilon_p: (\Lambda \circ \Gamma) (p) \to p really is a morphism of bundles. I will denote the corresponding function as ϵp:(ΛΓ)pY\epsilon_p: (\Lambda \circ \Gamma)_p \to Y, where (ΛΓ)p(\Lambda \circ \Gamma)_p is the space of germs of the sheaf Γ(p)\Gamma(p).

This function needs to preserve fibers and it also needs to be continuous. By "preserving fibers" I mean that it maps any data hovering over xXx \in X to data hovering over xXx \in X, for any xXx \in X.

view this post on Zulip David Egolf (Sep 22 2024 at 17:16):

First, let's check that ϵp:(ΛΓ)pY\epsilon_p: (\Lambda \circ \Gamma)_p \to Y preserves fibers. We have ϵp([s]x)=s(x)\epsilon_p([s]_x) = s(x) for any germ of a section [s]x[s]_x. But both [s]x[s]_x and s(x)s(x) "hover over" xx in their respective bundles, so ϵp\epsilon_p does preserve fibers.

view this post on Zulip David Egolf (Sep 22 2024 at 17:22):

I next want to show that ϵp:(ΛΓ)pY\epsilon_p: (\Lambda \circ \Gamma)_p \to Y is continuous. We know that the projection of germs to their "base" point (ΛΓ)(p):(ΛΓ)pX(\Lambda \circ \Gamma)(p):(\Lambda\circ \Gamma)_p\to X is continuous. And we know that s:UYs:U \to Y is continuous, for any section ss. I'm hoping to somehow use these facts to prove that ϵp\epsilon_p is continuous.

view this post on Zulip David Egolf (Sep 22 2024 at 17:28):

Let [s]x(ΛΓ)p[s]_x \in (\Lambda \circ \Gamma)_p, where s:UYs:U \to Y is a section of p:YXp:Y \to X over the open set UXU \subseteq X. I'd like to find an open set containing [s]x[s]_x such that the projection of the germs in that open set all land in UU.

I seem to recall that the set of all the germs of ss over UU is an open set containing [s]x[s]_x in (ΛΓ)p(\Lambda \circ \Gamma)_p.

view this post on Zulip David Egolf (Sep 22 2024 at 17:31):

I'll use [s]U[s]_U to refer to this open set containing [s]x[s]_x. Then, we get a restriction of our projection (for germs of sections) as (ΛΓ(p))[s]U:[s]UU(\Lambda \circ \Gamma(p))_{[s]_U}:[s]_U \to U. This is a continuous function, because restricting a continuous function in this way always yields a continuous function.

view this post on Zulip David Egolf (Sep 22 2024 at 17:32):

Then, s(ΛΓ(p))[s]U:[s]UUYs \circ (\Lambda \circ \Gamma(p))_{[s]_U}:[s]_U \to U \to Y acts by [s]xxs(x)[s]_x \mapsto x \mapsto s(x). And this function is continuous, because it is the composite of continuous functions.

Further, we note that this function is the restriction of ϵp:(ΛΓ)pY\epsilon_p: (\Lambda \circ \Gamma)_p \to Y to [s]U[s]_U.

view this post on Zulip David Egolf (Sep 22 2024 at 17:36):

Since each germ [s]x[s]_x is the germ of some section s:UYs:U \to Y for some open set UXU \subseteq X, we can perform a similar procedure at each point in (ΛΓ)p(\Lambda \circ \Gamma)_p. The various [s]U[s]_U form an open cover for (ΛΓ)p(\Lambda \circ \Gamma)_p, and our restriction of ϵp\epsilon_p to each of these open sets is continuous.

We conclude that ϵp\epsilon_p is continuous, because we can always "glue together" continuous functions that agree on overlaps to make a continuous function.

view this post on Zulip David Egolf (Sep 22 2024 at 17:44):

I hope I did that right! :sweat_smile: I'll stop here for today, at any rate.

view this post on Zulip David Egolf (Sep 22 2024 at 17:47):

Oh, one last thing!

I just realized I didn't check that :[s]xs(x):[s]_x \mapsto s(x) is really a function. We need to show that if [s]x=[t]x[s]_x = [t]_x then s(x)=t(x)s(x) = t(x). But if [s]x=[t]x[s]_x = [t]_x that implies that ss and tt are equal on some small enough open neighbourhood of xx. In particular, they are equal when evaluated at xx. So, :[s]xs(x):[s]_x \mapsto s(x) really is a function.

view this post on Zulip John Baez (Sep 22 2024 at 19:31):

Yes, all this looks perfect! Congrats!

view this post on Zulip David Egolf (Sep 23 2024 at 16:32):

Awesome! :big_smile:

To wrap up this puzzle, it remains to show that our components ϵp:(ΛΓ)(p)p\epsilon_p:(\Lambda \circ \Gamma)(p) \to p assemble to form a natural transformation ϵ:ΛΓ1Top/X\epsilon: \Lambda \circ \Gamma \to 1_{\mathsf{Top}/X}.

view this post on Zulip David Egolf (Sep 23 2024 at 16:34):

To check this, we consider a naturality square associated to a morphism f:ppf:p \to p' in Top/X\mathsf{Top}/X:
naturality square

view this post on Zulip David Egolf (Sep 23 2024 at 16:36):

This diagram lives in Top/X\mathsf{Top}/X, so each of the morphisms here is a morphism of bundles. To show that this diagram commutes, it suffices to show that the corresponding functions commute. So, we consider this square of topological spaces and continuous functions:
square

view this post on Zulip David Egolf (Sep 23 2024 at 16:39):

Now, two functions with the same source and target are equal iff they agree when evaluated at any element. So, let's trace an element of (ΛΓ)p(\Lambda \circ \Gamma)_p around this square, and see what we get via the top right path as compared to the bottom left path.

view this post on Zulip David Egolf (Sep 23 2024 at 16:41):

Our space (ΛΓ)p(\Lambda \circ\Gamma)_p is the space of germs of sections of the bundle p:YXp:Y \to X. So, an element of this space is of the form [s]x[s]_x, which refers to the germ of some section s:UYs:U \to Y of pp at the point xUx \in U, for some open set UXU \subseteq X.

view this post on Zulip David Egolf (Sep 23 2024 at 16:41):

Going around the bottom left path, [s]xs(x)f(s(x))[s]_x \mapsto s(x) \mapsto f(s(x)), where we have used the definition of ϵp\epsilon_p discussed above.

view this post on Zulip David Egolf (Sep 23 2024 at 16:54):

Going around the top right path, I need to recall how ΛΓ\Lambda \circ \Gamma acts on a morphism f:ppf:p \to p' in Top/X\mathsf{Top}/X.

First, Γ\Gamma converts this morphism ff of bundles to a natural transformation between sheaves. This natural transformation at component UU (where UU is an open subset of XX) sends a section of pp to a section of pp' by post-composition with ff. That is, a section s:UYs:U \to Y gets mapped to the section of pp' given by fs:UYYf \circ s:U \to Y \to Y'. So, the UU-th component function acts by post-composition with ff.

Then we apply Λ\Lambda, which needs to take this natural transformation and produce a morphism of bundles. The bundles in question are specifically bundles of germs of sections. Given a germ [s]x(ΛΓ)p[s]_x \in (\Lambda \circ \Gamma)_p, with s:UYs:U \to Y a section over UU, we want to get a germ "hovering over" xx in (ΛΓ)p(\Lambda \circ \Gamma)_{p'}. We do this by applying our UU-th component function to ss, to get the germ [fs]x[f \circ s]_x.

view this post on Zulip David Egolf (Sep 23 2024 at 16:56):

We can now trace [s]x[s]_x around the top-right path in our square. We get [s]x[fs]x(fs)(x)=f(s(x))[s]_x \mapsto [f \circ s]_x \mapsto (f \circ s)(x) = f(s(x)). We conclude that the square commutes, and so our original square of bundle morphisms also commutes.

Thus, an arbitrary naturality square for ϵ:ΛΓ1Top/X\epsilon: \Lambda\circ \Gamma \to 1_{\mathsf{Top}/X} commutes, and so ϵ\epsilon is indeed a natural transformation!

view this post on Zulip David Egolf (Nov 18 2024 at 17:06):

I want to start on Part 4 of the blog post series today! (My motivation to work through this series remains fairly high; it's just a matter of finding the energy to do so.)

view this post on Zulip David Egolf (Nov 18 2024 at 17:14):

The goal in Part 4 is to learn how to "pull back" sheaves along a continuous map. First, we review how to push them forward along a continuous map.

Given a continuous map f:XYf:X \to Y we get an induced (preimage) map from the open sets of YY to the open sets of XX. This in turn induces a functor from O(Y)op\mathcal{O}(Y)^{\mathrm{op}} to O(X)op\mathcal{O}(X)^{\mathrm{op}}, which we'll call f1f^{-1}. By precomposing with f1f^{-1} we can start with a presheaf on XX and end up with a presheaf on YY.

view this post on Zulip David Egolf (Nov 18 2024 at 17:14):

The resulting presheaf is the "pushforward" of our original presheaf FF along ff, denoted fFf_*F, so that we have fF=Ff1f_*F = F \circ f^{-1}. This process can be extended to morphisms in a functorial way, so we end up with a functor from presheaves on XX to presheaves on YY. In fact, this also gives a functor from sheaves on XX to sheaves on YY.
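
To make this concrete, here is a small Python sketch (all names are made up, finite sets stand in for the spaces, and restriction maps are omitted) of the object part of the formula fF(V)=F(f1(V))f_*F(V) = F(f^{-1}(V)):

```python
# A minimal sketch (hypothetical names; finite sets stand in for the spaces)
# of the pushforward of a presheaf along f : X -> Y, on objects only:
# (f_* F)(V) = F(f^{-1}(V)).
from itertools import product

def preimage(f, V):
    """f is a dict x -> f(x); V is a set of points of Y."""
    return frozenset(x for x, fx in f.items() if fx in V)

def F(U):
    """A toy presheaf on X: F(U) = all {0,1}-valued functions on U."""
    return {frozenset(zip(sorted(U), bits)) for bits in product((0, 1), repeat=len(U))}

def pushforward(F, f):
    """The presheaf f_* F on Y, on objects: V |-> F(f^{-1}(V))."""
    return lambda V: F(preimage(f, V))

# Example: X = {0, 1, 2}, Y = {'a', 'b'}, f collapses 0 and 1 to 'a'.
f = {0: 'a', 1: 'a', 2: 'b'}
fF = pushforward(F, f)
print(len(fF({'a'})))  # 4 = |F({0, 1})|, the {0,1}-valued functions on f^{-1}({'a'})
```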

view this post on Zulip David Egolf (Nov 18 2024 at 17:16):

Now we want to go in the opposite direction: from sheaves on YY to sheaves on XX, given a continuous function f:XYf:X \to Y. As a mini-challenge to myself, I'm going to see if I can guess how we might do this before the blog post gives the answer...

view this post on Zulip David Egolf (Nov 18 2024 at 17:21):

A presheaf on YY, such as F:O(Y)opSetF:\mathcal{O}(Y)^{\mathrm{op}} \to \mathsf{Set}, intuitively attaches some information to each open set of YY. However, we've seen before that we can associate information to each point of YY using a presheaf on YY. Namely, we can form a full subcategory of O(Y)op\mathcal{O}(Y)^{\mathrm{op}} by including exactly the open sets that contain some point yy of interest, apply FF to that to get a diagram in Set\mathsf{Set}, and then take the colimit of that diagram in Set\mathsf{Set}. In this way, we can associate a set to each point of YY using FF.

view this post on Zulip John Baez (Nov 18 2024 at 17:32):

Nice! For those who haven't read all > 1000 comments on this thread, you're now alluding to how any presheaf over a topological space has a 'stalk' at each point of that space, the stalk being the set of 'germs' of the presheaf at that point.

view this post on Zulip David Egolf (Nov 18 2024 at 17:36):

Yes! We saw earlier that this information can be organized as a "bundle" on YY; as a continuous function to YY. Specifically, we get a continuous function p:Λ(F)Yp:\Lambda(F) \to Y where the set (the set of germs, the "stalk") associated to yYy \in Y is given as p1(y)p^{-1}(y).

view this post on Zulip David Egolf (Nov 18 2024 at 17:40):

Now, I want to associate a set to each point of XX. I notice that applying f:XYf:X \to Y to a point of XX gives me a point of YY, and so I could associate to xXx \in X the set which is already associated (using FF) to f(x)f(x).

view this post on Zulip David Egolf (Nov 18 2024 at 17:46):

What I really want is to produce a bundle on XX. (That's because I could then convert that bundle on XX to a presheaf on XX!) To do that, I think we can use a pullback:
getting a bundle on X

view this post on Zulip David Egolf (Nov 18 2024 at 17:48):

Since this diagram lives in Set\mathsf{Set} [edit: this is wrong - see discussion below], we have some hope to explicitly compute this pullback. It should be the subset of X×Λ(F)X \times \Lambda(F) consisting of pairs (x,g)(x, g) such that f(x)=p(g)f(x) =p(g)

view this post on Zulip David Egolf (Nov 18 2024 at 17:50):

What set are we attaching to some point xXx \in X? This is the set of pairs (x,g)(x, g) such that f(x)=p(g)f(x)=p(g). So, we are indeed attaching to xx the data attached by FF to f(x)Yf(x) \in Y. That matches our intuitive guess from above!
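
Ignoring the topologies for a moment, this set-level description is easy to compute directly. Here is a minimal Python sketch (hypothetical names, finite sets in place of the spaces) that just builds the set of pairs (x,g)(x,g) with f(x)=p(g)f(x) = p(g):

```python
# A minimal sketch (made-up names, finite sets) of pulling back a bundle
# p : E -> Y along f : X -> Y, at the level of underlying sets:
# the set of pairs (x, e) with f(x) = p(e).

def pullback(X, E, f, p):
    """f and p are dicts giving the two maps into Y."""
    return {(x, e) for x in X for e in E if f[x] == p[e]}

# Toy example: Y has two points; the "bundle" E has two germs over 'a', one over 'b'.
X = {0, 1, 2}
E = {'g1', 'g2', 'g3'}
f = {0: 'a', 1: 'a', 2: 'b'}
p = {'g1': 'a', 'g2': 'a', 'g3': 'b'}

fstar_E = pullback(X, E, f, p)
# The fiber over a point x of X is everything p attaches to f(x):
fiber_over_0 = {e for (x, e) in fstar_E if x == 0}
print(fiber_over_0)  # {'g1', 'g2'}, i.e. the stalk over f(0) = 'a'
```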

view this post on Zulip David Egolf (Nov 18 2024 at 17:53):

We can then convert this bundle pp' on XX to a presheaf on XX, by taking the presheaf of sections of pp'. So we obtain a presheaf on XX from a presheaf on YY, as was our goal!

view this post on Zulip David Egolf (Nov 18 2024 at 17:57):

Ah, a correction - the diagram I drew above lives in Top\mathsf{Top}, the category of topological spaces. So that complicates things a little bit.

view this post on Zulip John Baez (Nov 18 2024 at 17:59):

This strategy sounds great! Maybe we can polish it up a bit. There are ways to turn presheaves into bundles and bundles back into presheaves. These processes are adjoint functors. But even better, I think we've seen there's an equivalence between the really nice presheaves on a topological space XX, namely the sheaves, and the really nice bundles over XX, namely the etale spaces.

So given a map of spaces f:XYf: X \to Y we can take a sheaf on YY, convert it to an etale space over YY, pull it back to XX, and convert it back to a sheaf. This process exists since we can pull back a bundle and get a bundle. But if we can pull back an etale space and get an etale space, this process will be even nicer, since we'll never be leaving the world of etale spaces and sheaves, which are just two equivalent ways of talking about the same thing.

view this post on Zulip David Egolf (Nov 18 2024 at 18:01):

Oh that's a great point! We want to not only pull back presheaves to get presheaves - we want to pull back sheaves to get sheaves. So it will be even better if we can show that not only does pulling back a bundle give a bundle, but pulling back an etale space (which closely relates to a sheaf) gives us an etale space.

view this post on Zulip David Egolf (Nov 18 2024 at 18:03):

I still think it's a reasonable first step to finish showing that we can pull back a bundle to get a bundle. That basically amounts to showing that Top\mathsf{Top} has pullbacks.

view this post on Zulip John Baez (Nov 18 2024 at 18:04):

Definitely that's the right first step, especially since etale spaces are bundles with a mere extra property - so you can then go ahead and see whether pulling back bundles preserves that property.

view this post on Zulip David Egolf (Nov 18 2024 at 18:05):

I know that the forgetful functor U:TopSetU:\mathsf{Top} \to \mathsf{Set} is a right adjoint, and hence it preserves limits. So if Top\mathsf{Top} has a pullback of some diagram, its underlying set and functions are given by the corresponding pullback of the diagram's image in Set\mathsf{Set}. That immediately gives us a candidate for the underlying set and functions of a pullback in Top\mathsf{Top}.

view this post on Zulip David Egolf (Nov 18 2024 at 18:07):

However, we still need to figure out a topology to put on this pullback.

view this post on Zulip John Baez (Nov 18 2024 at 18:07):

I can't resist giving a hint (which you probably already know): in Set, a pullback is a subset of a product.

view this post on Zulip David Egolf (Nov 18 2024 at 18:16):

Oh, that's a helpful hint! I'll draw a diagram to illustrate the general situation:
pullback in Top

In Set\mathsf{Set}, we have that A×BAA \times_B A' is a subset of A×AA \times A'. So in Top\mathsf{Top}, we could put the subspace topology on A×BAA \times_B A'. That topology is the coarsest one so that the inclusion i:A×BAA×Ai:A \times_B A' \to A \times A' is continuous.

view this post on Zulip David Egolf (Nov 18 2024 at 18:23):

We want to show that for any other cone over the cospan AfBfAA \xrightarrow{f} B \xleftarrow{f'} A', we have a unique continuous function gg that makes this diagram commute:
universal property

view this post on Zulip David Egolf (Nov 18 2024 at 18:25):

If this diagram is to commute in Top\mathsf{Top} its image under U:TopSetU:\mathsf{Top} \to \mathsf{Set} must certainly commute. This determines gg uniquely by the universal property of pullbacks in Set\mathsf{Set}.

view this post on Zulip David Egolf (Nov 18 2024 at 18:28):

Specifically, we must have p(g(c))=q(c)p(g(c)) =q(c) and p(g(c))=q(c)p'(g(c)) = q'(c) for any cCc \in C. So, g(c)=(q(c),q(c))g(c) = (q(c), q'(c)) for any cCc \in C. It remains to show that this function is continuous.
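
At the level of underlying sets (made-up names below), the mediating map forced by these equations is just c(q(c),q(c))c \mapsto (q(c), q'(c)); a tiny Python sketch of that step:

```python
# A sketch of the mediating map into a pullback of sets (all names made up).
# Given a cone q : C -> A, q' : C -> A' with f(q(c)) = f'(q'(c)) for every c,
# the unique map into A x_B A' must be c |-> (q(c), q'(c)).

def mediating(q, qprime):
    return lambda c: (q[c], qprime[c])

f = {0: 'x', 1: 'y'}             # f : A -> B
fprime = {'u': 'x', 'v': 'y'}    # f' : A' -> B
q = {'c1': 0, 'c2': 1}           # q : C -> A
qprime = {'c1': 'u', 'c2': 'v'}  # q' : C -> A'

g = mediating(q, qprime)
assert all(f[q[c]] == fprime[qprime[c]] for c in q)  # the cone condition
assert all(g(c) == (q[c], qprime[c]) for c in q)     # the projections recover q and q'
print(g('c1'))  # (0, 'u')
```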

view this post on Zulip David Egolf (Nov 18 2024 at 18:32):

Using the functions q:CAq:C \to A and q:CAq':C \to A' we get, using the fact that products exist in Top\mathsf{Top}, a corresponding continuous function :CA×A:C \to A \times A' that acts by c(q(c),q(c))c \mapsto (q(c),q'(c)). The function gg is then given by co-restricting this induced function.

view this post on Zulip David Egolf (Nov 18 2024 at 18:33):

I seem to recall that if one has a continuous function f:ABf:A \to B with f(A)CBf(A) \subseteq C \subseteq B and one co-restricts ff to get a function :AC:A\to C where CC has the subspace topology, then this co-restricted function is continuous. If that's true, then gg is continuous, as desired.

view this post on Zulip David Egolf (Nov 18 2024 at 18:36):

(Along the way, I assumed that Top\mathsf{Top} has products. I'm content to assume this for now, unless someone thinks it would be good at this point to prove.)

view this post on Zulip David Egolf (Nov 18 2024 at 18:38):

Since Top\mathsf{Top} has pullbacks, in particular we can pull back a bundle to get another bundle. That ability, combined with the fact that we can convert between bundles and presheaves, gives us the ability to pull back a presheaf on YY along f:XYf:X \to Y to get a presheaf on XX.

view this post on Zulip David Egolf (Nov 18 2024 at 18:40):

We next want to show that pulling back a sheaf gives us not just a presheaf but a sheaf. To do that, it suffices to show that pulling back an etale space gives us an etale space (because we can convert between etale spaces and sheaves).

view this post on Zulip David Egolf (Nov 18 2024 at 18:44):

An etale space amounts to a local homeomorphism f:ABf':A' \to B. Recalling the definition of local homeomorphism, ff' is a continuous map such that each point of AA' has an open neighborhood UU such that f(U)f'(U) is open and the restriction fU:Uf(U)f'|_{U}:U \to f'(U) is a homeomorphism, where UU and f(U)f'(U) are equipped with the subspace topology.

view this post on Zulip David Egolf (Nov 18 2024 at 18:46):

I want to show that pulling back an etale space f:ABf':A' \to B along a continuous map f:ABf:A \to B gives an etale space p:A×BAAp:A \times_B A' \to A. We've already seen that the pulled back map pp is continuous when A×BAA \times_B A' is equipped with the subspace topology induced by A×BAA×AA \times_B A' \subseteq A \times A', but it remains to show that pp is a local homeomorphism.

view this post on Zulip David Egolf (Nov 18 2024 at 18:57):

It might help to draw a picture to visualize the pulling back of an etale space. But I'll leave that for next time.

More broadly, it feels good to be back working on this again!

view this post on Zulip John Baez (Nov 18 2024 at 21:42):

David Egolf said:

I seem to recall that if one has a continuous function f:ABf:A \to B with f(A)CBf(A) \subseteq C \subseteq B and one co-restricts ff to get a function :AC:A\to C where CC has the subspace topology, then this co-restricted function is continuous.

Yes! And it's even better than that. If we give CBC \subseteq B the subspace topology, a function f:ACf: A \to C is continuous if and only if it's the corestriction of some continuous function f:ABf: A \to B whose image lies in CC.

So to get a pullback in Top\mathsf{Top} we just take the pullback of the underlying diagram in Set\mathsf{Set} and give the resulting set the subspace topology coming from the product space (as you explained).

I'm glad you're back in action.

view this post on Zulip David Egolf (Nov 19 2024 at 18:43):

John Baez said:

Yes! And it's even better than that. If we give CBC \subseteq B the subspace topology, a function f:ACf: A \to C is continuous if and only if it's the corestriction of some continuous function f:ABf: A \to B whose image lies in CC.

Ah, this rings a bell! I think you're mentioning what I've seen called the "characteristic property" of the subspace topology.

view this post on Zulip David Egolf (Nov 19 2024 at 18:46):

I next want to show that pulling back a local homeomorphism in Top\mathsf{Top} along any continuous function gives us a local homeomorphism. If we can do this, that'll mean that we can pull back a sheaf to get a sheaf.

view this post on Zulip David Egolf (Nov 19 2024 at 18:47):

This appears to be another example of something I've noticed earlier: learning this stuff has involved more topology than I expected :sweat_smile:! But the topology involved is good to learn too, so I don't mind too much.

view this post on Zulip David Egolf (Nov 19 2024 at 18:54):

Here's the situation:
pullback in Top

I'll assume that f:ABf':A' \to B is a local homeomorphism, and I want to show that p:A×BAAp:A \times_B A' \to A is then a local homeomorphism.

view this post on Zulip David Egolf (Nov 19 2024 at 18:56):

A function g:XYg:X \to Y is a local homeomorphism if these two conditions are met:

  1. g(X)g(X) is an open subset of YY
  2. Every point xXx \in X has some open neighbourhood UU so that g(U)g(U) is open in YY and gU:Ug(U)g|_U:U \to g(U) is a homeomorphism

view this post on Zulip David Egolf (Nov 19 2024 at 18:58):

Let's show that pp meets condition (1).
We recall that A×BAA \times_B A' has as points the pairs (a,a)(a,a') such that f(a)=f(a)f(a)=f'(a'), and that pp projects down to the first coordinate. Thus, p(A×BA)p(A \times_B A') is the subset of AA consisting of points aAa \in A such that there is some aAa' \in A' with f(a)=f(a)f(a) = f'(a').

view this post on Zulip David Egolf (Nov 19 2024 at 18:59):

That is, p(A×BA)p(A \times_B A') is the subset of AA that is mapped by ff to somewhere in the image of f:ABf':A' \to B. This is the preimage under ff of f(A)f'(A').

view this post on Zulip David Egolf (Nov 19 2024 at 19:01):

Since ff' is a local homeomorphism, f(A)f'(A') is open. And since ff is continuous, its preimage of f(A)f'(A') is also open. Since this is exactly the image of pp, we conclude that p(A×BA)p(A \times_B A') is open in AA.
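
This set-level identity is easy to sanity-check on a finite example. A tiny Python sketch (hypothetical names; it ignores the topologies and only checks that the image of pp equals f1(f(A))f^{-1}(f'(A'))):

```python
# Set-level sanity check (made-up finite example) that the image of the
# projection p : A x_B A' -> A equals f^{-1}(f'(A')).

A = {1, 2, 3}
Aprime = {'u', 'v'}
f = {1: 'x', 2: 'y', 3: 'z'}         # f : A -> B
fprime = {'u': 'x', 'v': 'y'}        # f' : A' -> B

pullback = {(a, ap) for a in A for ap in Aprime if f[a] == fprime[ap]}
image_of_p = {a for (a, ap) in pullback}
preimage_of_image_of_fprime = {a for a in A if f[a] in set(fprime.values())}

print(image_of_p == preimage_of_image_of_fprime)  # True
print(image_of_p)  # {1, 2}: the point 3 maps outside the image of f'
```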

view this post on Zulip David Egolf (Nov 19 2024 at 19:03):

It remains to show that pp meets condition (2).
Given some point (a,a)A×BA(a,a') \in A \times_B A', we need to show there's some open neighbourhood UU containing that point so that pp restricts to a homeomorphism on UU.

view this post on Zulip David Egolf (Nov 19 2024 at 19:05):

Since f:ABf':A' \to B is a local homeomorphism, we know there is some open set VAV \subseteq A' containing aa' so that f(V)f'(V) is open and ff' restricts to a homeomorphism on VV. Maybe we can use VV to create our open set UA×BAU \subseteq A \times_B A' containing (a,a)(a,a')?

I feel like I need to draw a picture to help guide this process.

view this post on Zulip David Egolf (Nov 19 2024 at 19:21):

pulling back a local homeomorphism

view this post on Zulip David Egolf (Nov 19 2024 at 19:27):

Referencing the picture, if we have some VV around aa' where ff' restricts to a homeomorphism, we can try forming the analogous open set around (a,a)(a,a'). This still feels tricky.

I think I need to "spread out" in the AA direction as well as the AA' direction. We could try doing this by taking f1(f(V))Af^{-1}(f'(V)) \subseteq A.

view this post on Zulip David Egolf (Nov 19 2024 at 19:36):

Drawing a picture to illustrate that:
picture

view this post on Zulip David Egolf (Nov 19 2024 at 19:38):

For brevity, let Y=f1(f(V))Y = f^{-1}(f'(V)). What are the points "hovering over" YY in A×BAA \times_B A' near (a,a)(a,a')?

view this post on Zulip David Egolf (Nov 19 2024 at 19:41):

To specify a point in A×BAA \times_B A' we need to specify an element of AA and an element of AA'. The AA' coordinates in our subset of interest intuitively should all belong to VV. So let vVv \in V. What is the corresponding element of AA? Presumably it should be f1(f(v))f^{-1}(f'(v)).

However, the problem with this is that ff is not necessarily injective, so f1(f(v))f^{-1}(f'(v)) could have more than one point.

view this post on Zulip David Egolf (Nov 19 2024 at 19:44):

But that's okay, maybe. We can consider points of the form (x,y)(x,y) where yVy \in V and xf1(f(y))x \in f^{-1}(f'(y)).

Is such a point (x,y)(x,y) an element of A×BAA \times_B A'? We just need f(x)=f(y)f(x) = f'(y), which is true. So, such a point is indeed an element of A×BAA \times_B A'.

view this post on Zulip David Egolf (Nov 19 2024 at 19:47):

Next, I would want to show that the collection of such points (x,y)(x,y) forms an open subset of A×BAA \times_B A'. I'm not sure how to do that.

view this post on Zulip David Egolf (Nov 19 2024 at 19:52):

Maybe there is a simpler way to do this. We know that the projection on the second coordinate p:A×BAAp':A \times_B A' \to A' is continuous. Pick some point (a,a)A×BA(a,a') \in A \times_B A' for which we wish to find an open neighbourhood around, such that pp is a homeomorphism when restricted to that neighbourhood.

Then p(a,a)=ap'(a,a') = a' has some open set VAV \subseteq A' where ff' restricts to a homeomorphism. Since pp' is continuous, (p)1(V)(p')^{-1}(V) is an open subset of A×BAA \times_B A'.

view this post on Zulip David Egolf (Nov 19 2024 at 19:57):

What is (p)1(V)(p')^{-1}(V) like? It consists of points (x,y)(x,y) such that yVy \in V and f(x)=f(y)f(x) = f'(y).

view this post on Zulip David Egolf (Nov 19 2024 at 19:59):

Actually, I think (p)1(V)(p')^{-1}(V) is exactly the set I had arrived at by considering the pictures above! That's pretty cool! And now we know that set is an open subset of A×BAA \times_B A'!

view this post on Zulip David Egolf (Nov 19 2024 at 20:09):

Now I think we are in business. To quickly recap:
picture

We let f:ABf':A' \to B be a local homeomorphism and we want to show that this implies that pp is a local homeomorphism too. We already saw that the image of pp is open, and it remains to show that for any point (a,a)A×BA(a,a') \in A \times_B A' there exists an open set UU containing (a,a)(a,a') such that pp restricts to a homeomorphism on UU.

After some thought, we have arrived at a strategy for showing there exists such a UU. Given (a,a)A×BA(a,a') \in A \times_B A', we know that aAa' \in A' has an open set VV containing it such that ff' restricts to a homeomorphism on VV. We then take the preimage of VV with respect to pp' to obtain an open subset of A×BAA \times_B A' containing (a,a)(a,a').

It remains to show that pp restricts to a homeomorphism on U=(p)1(V)U=(p')^{-1}(V).

view this post on Zulip David Egolf (Nov 19 2024 at 20:15):

Since pp is continuous, its restriction to UU is continuous. It remains to show that (1) its restriction to UU is bijective and (2) its inverse as a function is also continuous.

view this post on Zulip David Egolf (Nov 19 2024 at 20:18):

A point in UU is of the form (x,y)(x,y) where yVy \in V and f(x)=f(y)f(x) = f'(y). Our map pp returns the first coordinate. pp is certainly surjective onto its image, but we still need to show that pp is injective. That amounts to showing that if (x,y)(x,y) and (x,y)(x,y') are both in UU, then y=yy=y'.

view this post on Zulip David Egolf (Nov 19 2024 at 20:20):

If (x,y)(x,y) and (x,y)(x,y') are both in UU, that implies that f(x)=f(y)=f(y)f(x) = f'(y) = f'(y'). But note that y,yVy,y' \in V, where ff' restricts to a homeomorphism - and in particular where ff' restricts to an injective function. Thus, y=yy=y' as desired. So, pp is injective when restricted to UU.

We conclude that pp restricts to a continuous bijection on UU. It remains to show that the inverse (as a function) of this restricted function is also continuous.

view this post on Zulip David Egolf (Nov 19 2024 at 20:23):

Let's call the function that is an inverse (as a function) to pp by the name qq. So q:p(U)Uq:p(U) \to U. This sends an xAx \in A to the pair (x,y)(x,y) where yy is the unique yVAy \in V \subseteq A' such that f(y)=f(x)f'(y) = f(x).

view this post on Zulip David Egolf (Nov 19 2024 at 20:28):

We recall that ff' restricts to a homeomorphism on VV. In particular it has a continuous inverse (f)1:f(V)V(f')^{-1}:f'(V) \to V. So we can compute our map q:p(U)Uq:p(U) \to U, which sends x(x,y)x \mapsto (x,y), by using these two functions: the inclusion iA:p(U)Ai_A:p(U) \to A (giving the first coordinate), and the composite iA(f)1f:p(U)Ai_{A'} \circ (f')^{-1} \circ f:p(U) \to A' (giving the second coordinate).

(Here, iAi_A is the inclusion map :p(U)A:p(U) \to A, ff refers to a restricted and corestricted version of ff, and iAi_{A'} is the inclusion map :VA:V \to A').

Each of these two functions is continuous, and thus the induced map to A×AA \times A' is continuous. And, the corestrictions of this map to A×BAA \times_B A' and to UU are both continuous. So pp has a continuous inverse when restricted to UU.

We conclude that pp is indeed a homeomorphism when restricted to UU! (Note also that its image p(U)=f1(f(V))p(U) = f^{-1}(f'(V)) is open in AA, since f(V)f'(V) is open and ff is continuous.)

view this post on Zulip David Egolf (Nov 19 2024 at 20:32):

John Baez said:

A more 'postmodern' approach might dive straight into sheaves on sites, but I prefer explaining math in a way that doesn't cut off the roots.

I like this approach! It is more work in some ways, but it's really nice to have motivation to learn some topology - and it's fun to see the topology in action. (I find it hard to get motivated to work on point set topology unless some other topic I care about makes use of it in a way I know about!)

view this post on Zulip David Egolf (Nov 19 2024 at 20:34):

I think I've shown above that the pullback of a local homeomorphism is a local homeomorphism. So we now have a way to pull back a sheaf to get another sheaf:

  1. convert the sheaf to its corresponding etale space (which is a local homeomorphism)
  2. pull back that local homeomorphism (along some continuous function of interest) to get another local homeomorphism
  3. convert that local homeomorphism back to a sheaf

view this post on Zulip David Egolf (Nov 19 2024 at 20:37):

Consulting the current blog post, I see that we next have this puzzle, which will expand our understanding of how pullbacks of bundles work:

Puzzle. Show that this construction [the pullback] extends to a functor f:Top/YTop/Xf^*:\mathsf{Top}/Y \to \mathsf{Top}/X. [Where f:XYf:X \to Y is a continuous function].

view this post on Zulip David Egolf (Nov 19 2024 at 20:37):

I'll stop here for today!

view this post on Zulip John Baez (Nov 19 2024 at 20:58):

I'll check this out! Let me move my interruption down here:

David Egolf said:

This appears to be another example of something I've noticed earlier: learning this stuff has involved more topology than I expected :sweat_smile:!

Yes, because I wanted to introduce sheaves and topoi through the classical and 'familiar' example of sheaves on topological spaces. All my students had to have taken a year of topology (one quarter of point-set topology, one of differential topology and one of algebraic topology). So, I could build on that. Also, most applications of sheaves in math still use sheaves on topological spaces, though in his work on algebraic geometry (esp. etale cohomology, to prove Weil's conjectures) Grothendieck introduced sheaves on more general sites.

A more 'postmodern' approach might dive straight into sheaves on sites, but I prefer explaining math in a way that doesn't cut off the roots.

view this post on Zulip John Baez (Nov 19 2024 at 20:59):

Whoops, now your reply appears before my interruption. :oh_no: No big deal.

view this post on Zulip John Baez (Nov 19 2024 at 21:01):

(I find it hard to get motivated to work on point set topology unless some other topic I care about makes use of it in a way I know about!)

Some students find point set topology interesting for its own sake, but a lot of it was developed for applications - e.g. to real and complex analysis, and thus to understanding integrals and differential equations and things like that. Developed as a subject in its own right it's like "baby category theory" - the study of a very particular class of posets.

view this post on Zulip David Egolf (Nov 20 2024 at 18:42):

The next goal is to show that pulling back any continuous function f:XYf:X \to Y in Top\mathsf{Top} extends to a functor f:Top/YTop/Xf^*:\mathsf{Top}/Y \to \mathsf{Top}/X.

I am surprised that this is true, and curious as to whether it is the special case of a more general situation. Apparently it is! The nLab notes that in any category CC with pullbacks a morphism f:XYf:X \to Y induces a pullback functor f:C/YC/Xf^*:C/Y \to C/X, which is a sort of "base change".

view this post on Zulip Kevin Carlson (Nov 20 2024 at 18:49):

How come you're surprised?

view this post on Zulip David Egolf (Nov 20 2024 at 18:58):

Kevin Carlson said:

How come you're surprised?

I guess, on first impression, it strikes me as an impressive coincidence that taking pullbacks defines not only one but two functors! (The functor I was previously aware of is the one that maps an appropriately shaped diagram to its pullback.)

I now wonder if other "take the limit" functors have additional functors associated to them in a similar way, or if the "take the pullback" functor is special in this regard. I suppose we should at least expect the "take the pushout" functor to have a corresponding "pushforward" functor.

view this post on Zulip David Egolf (Nov 20 2024 at 19:22):

Upon contemplating this diagram, I had an idea for how ff^* should act on morphisms:
diagram

view this post on Zulip David Egolf (Nov 20 2024 at 19:23):

I think we can define ff^* on an arbitrary morphism of bundles over YY, g:ABg:A \to B, by defining fg:X×YAX×YBf^*g:X \times_Y A \to X \times_Y B as (x,a)(x,g(a))(x,a) \mapsto (x,g(a)). Notice that, referencing our diagram, we have fbfg=faf^*b \circ f^*g = f^*a, because fbf^*b and faf^*a just grab the first coordinate, which is unchanged by fgf^*g.

The identity morphism 1A:AA1_A:A \to A gets mapped to (x,a)(x,1A(a))=(x,a)(x,a) \mapsto (x,1_A(a)) = (x,a), which is the identity morphism between the pulled back bundles involved.

ff^* respects composition: f(h)f(g)f^*(h) \circ f^*(g) acts by (x,a)(x,g(a))(x,h(g(a)))(x,a) \mapsto (x,g(a)) \mapsto (x,h(g(a))), which is the same as (x,a)(x,(hg)(a))(x,a) \mapsto (x,(h \circ g)(a)), i.e. the same as f(hg)f^*(h \circ g).
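
Here is a small Python sketch of this action on objects and morphisms (made-up names, finite sets only, topology ignored), together with a check of the functoriality equation on a toy example:

```python
# Sketch of the pullback functor f^* : Set/Y -> Set/X on finite sets
# (hypothetical names; topology is ignored). An object over Y is a map
# a : A -> Y given as a dict; a morphism g : (A,a) -> (B,b) satisfies b[g[s]] == a[s].

def pull_object(f, a):
    """Pull back a : A -> Y along f : X -> Y; return (pairs, projection-to-X)."""
    pairs = {(x, s) for x in f for s in a if f[x] == a[s]}
    proj = {pt: pt[0] for pt in pairs}
    return pairs, proj

def pull_morphism(f, a, b, g):
    """f^* g sends (x, s) to (x, g(s))."""
    src, _ = pull_object(f, a)
    return {(x, s): (x, g[s]) for (x, s) in src}

# Toy data over Y = {'y0', 'y1'}:
f = {0: 'y0', 1: 'y1'}                       # f : X -> Y
a = {'a0': 'y0', 'a1': 'y1'}                 # a : A -> Y
b = {'b0': 'y0', 'b1': 'y1'}                 # b : B -> Y
c = {'c0': 'y0', 'c1': 'y1'}                 # c : C -> Y
g = {'a0': 'b0', 'a1': 'b1'}                 # g : (A,a) -> (B,b)
h = {'b0': 'c0', 'b1': 'c1'}                 # h : (B,b) -> (C,c)

fg = pull_morphism(f, a, b, g)
fh = pull_morphism(f, b, c, h)
f_hg = pull_morphism(f, a, c, {k: h[g[k]] for k in g})

# Functoriality on this example: f^*(h o g) agrees with f^*(h) o f^*(g)
assert all(fh[fg[pt]] == f_hg[pt] for pt in fg)
print("functoriality holds on this example")
```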

view this post on Zulip David Egolf (Nov 20 2024 at 19:42):

Assuming this is correct (hopefully it is!), we now have our desired functor f:Top/YTop/Xf^*:\mathsf{Top}/Y \to \mathsf{Top}/X induced by a continuous function f:XYf:X \to Y. This also gives us a functor from presheaves on YY to presheaves on XX, and a functor from sheaves on YY to sheaves on XX.

view this post on Zulip John Baez (Nov 20 2024 at 19:52):

David Egolf said:

Upon contemplating this diagram, I had an idea for how ff^* should act on morphisms....

Nice diagram! Contemplating this diagram, I immediately want to define fgf^\ast g using the universal property of the pullback X×YBX \times_Y B. Let's see: we've got morphisms X×YAXX \times_Y A \to X and X×YABX\times_Y A \to B visible in the diagram, and they obey the necessary commutative square condition to make X×YAX \times_Y A into a 'competitor' of X×YBX \times_Y B, so there exists a unique map X×YAX×YBX \times_Y A \to X \times_Y B such that yada yada....

So yes, that works, but it should agree with your 'concrete' description of fgf^\ast g.

view this post on Zulip John Baez (Nov 20 2024 at 19:54):

There is some advantage to avoiding the 'concrete' description of fgf^\ast g in terms of ordered pairs, because this fact - that in any category CC with pullbacks a morphism f:XYf:X \to Y induces a pullback functor f:C/YC/Xf^*:C/Y \to C/X - holds even in contexts where pullbacks have nothing to do with ordered pairs.

view this post on Zulip John Baez (Nov 20 2024 at 19:58):

By the way, this fact is important all over the place, and so is the fact that in many contexts, like any topos, the pullback functor ff^\ast has both a left and a right adjoint. We may even run into those in our course someday.

view this post on Zulip David Egolf (Nov 20 2024 at 20:22):

John Baez said:

There is some advantage to avoiding the 'concrete' description of fgf^\ast g in terms of ordered pairs...

That makes sense. Let me see if I can understand how you used the universal property of pullbacks. A pullback cone is final among all the cones over the diagram involved. So if we can set up X×YAX \times_Y A to be the apex of a cone over the appropriate diagram, the universal property will guarantee a unique morphism of cones exists, which involves a morphism :X×YAX×YB:X \times_Y A \to X \times_Y B.

view this post on Zulip David Egolf (Nov 20 2024 at 20:22):

Attempting to set up the cone discussed above:
cone

For this to really be a cone, we need ffa=b(gpA)f \circ f^*a = b \circ (g \circ p_A). We have b(gpA)=(bg)pA=apA=ffab \circ (g \circ p_A) = (b \circ g) \circ p_A = a \circ p_A = f \circ f^*a as desired!

view this post on Zulip David Egolf (Nov 20 2024 at 20:23):

So then we are set up to use the universal property of pullbacks to find our morphism of interest :X×YAX×YB:X \times_Y A \to X \times_Y B. Great!

view this post on Zulip John Baez (Nov 20 2024 at 20:37):

Good! Whenever you need to map something to a pullback, like X×YBX \times_Y B, you should feel a Pavlovian instinct to find maps from that something to XX and to BB.

view this post on Zulip David Egolf (Nov 21 2024 at 18:20):

I think we've now done all the puzzles/exercises from Part 4. So it's on to Part 5!

In this part, we're going to talk about why the category of presheaves on a given topological space forms an elementary topos. We'll work in a more general setting: apparently the category of presheaves on any category forms an elementary topos!

view this post on Zulip David Egolf (Nov 21 2024 at 18:23):

Let CC be a category, so that the category of presheaves on CC is the functor category [Cop,Set][C^{\mathrm{op}}, \mathsf{Set}]. In this category, each object is a functor :CopSet:C^{\mathrm{op}} \to \mathsf{Set} and the morphisms are natural transformations. So an object of this category attaches a set to each object of CC.

view this post on Zulip David Egolf (Nov 21 2024 at 18:24):

For [Cop,Set][C^{\mathrm{op}}, \mathsf{Set}] to be an elementary topos it needs to have, among other things, finite colimits. I thought it could be a good challenge to try to show that [Cop,Set][C^{\mathrm{op}}, \mathsf{Set}] has finite colimits, before reading the part of the blog post that discusses this.

view this post on Zulip David Egolf (Nov 21 2024 at 18:30):

Roughly speaking, I think the intuition is that this category of presheaves inherits colimits from Set\mathsf{Set} in a way analogous to how a set of functions :AR:A \to \mathbb{R} inherits a notion of addition "pointwise" from R\mathbb{R}. For example, if F,G:CopSetF,G:C^{\mathrm{op}} \to \mathsf{Set}, then I expect that their coproduct F+GF+G satisfies (F+G)(c)F(c)+G(c)(F+G)(c) \cong F(c) +G(c) for each cCc \in C.
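
As a quick sanity check of this pointwise picture, here is a tiny Python sketch (made-up names; objects only, no restriction maps) computing a tagged disjoint union F(c)+G(c)F(c) + G(c) at each object:

```python
# Sketch of the pointwise coproduct of two presheaves, on objects only
# (hypothetical names; restriction maps would act componentwise on the tags).

def coproduct_of_sets(S, T):
    """Tagged disjoint union S + T."""
    return {('left', s) for s in S} | {('right', t) for t in T}

# Two toy presheaves, recorded only on objects c, as dicts:
F = {'c0': {1, 2}, 'c1': {1}}
G = {'c0': {'x'}, 'c1': set()}

F_plus_G = {c: coproduct_of_sets(F[c], G[c]) for c in F}
print(len(F_plus_G['c0']))  # 3 = |F(c0)| + |G(c0)|
```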

view this post on Zulip David Egolf (Nov 21 2024 at 18:38):

Let's consider the general case. Let JJ be a small category, and let D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}] be a JJ-shaped diagram in [Cop,Set][C^{\mathrm{op}}, \mathsf{Set}]. This is a bunch of presheaves related by natural transformations, potentially required to satisfy certain equations. I'll call the colimit of this diagram (should it exist) by the name colim(D):CopSet\mathrm{colim}(D):C^{\mathrm{op}} \to \mathsf{Set}.

view this post on Zulip David Egolf (Nov 21 2024 at 18:40):

For any object cCc \in C we need to determine a set colim(D)(c)\mathrm{colim}(D)(c). Intuitively, we can do this by grabbing the part of our diagram DD concerned with cc. This gives a diagram in Set\mathsf{Set}, and then we can take the colimit of that diagram to get colim(D)(c)\mathrm{colim}(D)(c).

view this post on Zulip David Egolf (Nov 21 2024 at 18:45):

Starting from DD, how can we get a diagram in Set\mathsf{Set} associated to the object cc in CC? Intuitively, we can do it like this: for each object jj of JJ we take the set D(j)(c)D(j)(c), and for each morphism of JJ we take the cc-component of the corresponding natural transformation between presheaves.

view this post on Zulip David Egolf (Nov 21 2024 at 18:47):

I'd like to express this process as a functor from CopC^\mathrm{op} to the category of JJ-shaped diagrams in Set\mathsf{Set}, which I'll call [J,Set][J, \mathsf{Set}]. So, we're looking for some functor F:Cop[J,Set]F:C^\mathrm{op} \to [J, \mathsf{Set}].

view this post on Zulip Peva Blanchard (Nov 21 2024 at 18:53):

Yes, I also find it helpful to think of DD as a functor Cop×JSetC^{op} \times J \to \text{Set}. I.e., I have a set D(c,j)D(c,j) which is contravariant in cc and covariant in jj.

view this post on Zulip David Egolf (Nov 21 2024 at 18:59):

That does sound helpful! If I understand correctly, you are using this adjunction: Cat(A,[B,C])Cat(A×B,C)\mathsf{Cat}(A,[B,C]) \cong \mathsf{Cat}(A \times B, C). In our case, this becomes: Cat(Cop,[J,Set])Cat(Cop×J,Set)\mathsf{Cat}(C^{\mathrm{op}},[J,\mathsf{Set}]) \cong \mathsf{Cat}(C^{\mathrm{op}} \times J, \mathsf{Set}).

So, a functor F:Cop[J,Set]F:C^{\mathrm{op}} \to [J, \mathsf{Set}] is associated by this adjunction to some unique F:Cop×JSetF':C^{\mathrm{op}} \times J \to \mathsf{Set}.

Or, working with D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}] we can similarly get a corresponding functor D:J×CopSetD':J \times C^{\mathrm{op}} \to \mathsf{Set}.
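
This is the same pattern as currying a two-argument function. A throwaway Python analogue (names made up), at the level of plain functions between sets rather than functors:

```python
# Currying / uncurrying, as a down-to-earth analogue of
# Cat(A, [B, C]) ~ Cat(A x B, C). All names here are made up.

def curry(f2):
    """Turn f2 : A x B -> C into A -> (B -> C)."""
    return lambda a: (lambda b: f2(a, b))

def uncurry(f1):
    """Turn f1 : A -> (B -> C) back into A x B -> C."""
    return lambda a, b: f1(a)(b)

def add2(a, b):
    return a + b

add1 = curry(add2)
assert add1(2)(3) == add2(2, 3) == uncurry(add1)(2, 3) == 5
print(add1(2)(3))  # 5
```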

view this post on Zulip Peva Blanchard (Nov 21 2024 at 19:02):

Yes, also, the analogy with linear algebra is interesting.
Let's momentarily think of JJ and CopC^{op} as finite sets.

The following data are all mutually related:

  1. a function J×XRJ \times X \to \mathbb{R} (a "matrix" of real numbers),
  2. a function JRXJ \to \mathbb{R}^X,
  3. a function XRJX \to \mathbb{R}^J.

We also have a function XRXX \to \mathbb{R}^X that sends an element xx to the "vector" that is 1 on xx and 0 everywhere else. And this function looks like the Yoneda embedding CSetCopC \to \text{Set}^{C^{op}}.

(one difference is that there is no action of arrows, so _op\_^{op} means nothing here)

view this post on Zulip David Egolf (Nov 21 2024 at 19:30):

That analogy with linear algebra is pretty cool!

I think we now have the tools in place to find our functor F:Cop[J,Set]F:C^{\mathrm{op}} \to [J, \mathsf{Set}], starting with D:J[Cop,Set]D:J \to [C^{\mathrm{op}},\mathsf{Set}]. Moving DD across the adjunction mentioned above, we get D:J×CopSetD':J \times C^{\mathrm{op}} \to \mathsf{Set}. Precomposing with the isomorphism s:Cop×JJ×Cops:C^{\mathrm{op}} \times J \to J \times C^{\mathrm{op}}, we get Ds:Cop×JSetD' \circ s:C^{\mathrm{op}} \times J \to \mathsf{Set}. Moving this across the adjunction discussed above, we get (Ds):Cop[J,Set](D' \circ s)':C^{\mathrm{op}} \to [J, \mathsf{Set}], which I suspect is the FF I was looking for.

view this post on Zulip David Egolf (Nov 21 2024 at 19:33):

Now, I seem to recall that there is a "take the colimit" functor T:[J,Set]SetT:[J, \mathsf{Set}] \to \mathsf{Set}. Assuming this is the case, we can form T(Ds):CopSetT \circ (D' \circ s)':C^{\mathrm{op}} \to \mathsf{Set}. I suspect that this is the (object part of the) colimit of our diagram in [Cop,Set][C^{\mathrm{op}},\mathsf{Set}] that we started with.
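
For a finite diagram of finite sets, such a colimit can actually be computed by hand: take the disjoint union of all the sets and quotient by the equivalence relation generated by identifying each element with its images under the arrows of the diagram. A minimal Python sketch of that recipe (all names made up):

```python
# Sketch of the colimit of a finite diagram of finite sets: disjoint union of the
# sets, quotiented by the equivalence generated by x ~ f(x) for every arrow f.
# All names are made up; this only illustrates the recipe.

def colimit(sets, arrows):
    """sets: dict j -> finite set; arrows: list of (source j, target j', dict map)."""
    points = {(j, x) for j, S in sets.items() for x in S}   # disjoint union
    parent = {pt: pt for pt in points}                      # union-find structure

    def find(pt):
        while parent[pt] != pt:
            parent[pt] = parent[parent[pt]]
            pt = parent[pt]
        return pt

    def union(a, b):
        parent[find(a)] = find(b)

    for (j, jprime, f) in arrows:
        for x in sets[j]:
            union((j, x), (jprime, f[x]))                   # identify x with f(x)

    classes = {}
    for pt in points:
        classes.setdefault(find(pt), set()).add(pt)
    return list(classes.values())

# Toy pushout-shaped diagram: two maps out of a one-point set.
sets = {'apex': {'*'}, 'left': {0, 1}, 'right': {'a', 'b'}}
arrows = [('apex', 'left', {'*': 0}), ('apex', 'right', {'*': 'a'})]
print(len(colimit(sets, arrows)))  # 3 equivalence classes: {*, 0, a}, {1}, {b}
```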

view this post on Zulip David Egolf (Nov 21 2024 at 19:45):

Returning to the blog post, we have a related puzzle. Changing its notation to match what I'm using here, we have:

Puzzle. Show in the above situation that colimD(,c)\mathrm{colim}D'(-,c) depends functorially on cCopc \in C^{\mathrm{op}} and that the resulting functor is the colimit of the diagram D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}].

Here DD' is the functor D:J×CopSetD':J \times C^{\mathrm{op}} \to \mathsf{Set} corresponding to D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}]. Also, D(,c)D'(-,c) refers to the functor D(,c):JSetD'(-,c):J \to \mathsf{Set}.

The "resulting functor" on objects I believe acts like ccolimD(,c)c \mapsto \mathrm{colim}D'(-,c).

view this post on Zulip David Egolf (Nov 21 2024 at 19:47):

I'll stop here for today!

view this post on Zulip John Baez (Nov 21 2024 at 21:06):

David Egolf said:

Now, I seem to recall that there is a "take the colimit" functor T:[J,Set]SetT:[J, \mathsf{Set}] \to \mathsf{Set}.

That sounds right if JJ is a small category (which is what you typically assume for a category being used as a "diagram shape".)

view this post on Zulip David Egolf (Nov 21 2024 at 22:47):

On a bit of a side note, there's a weird thing about TT. Namely, it seems to require making a choice of colimit for each diagram, even when we have multiple isomorphic options available. This makes me wonder if there could be a nicer way to think about TT or something similar to TT.

view this post on Zulip John Baez (Nov 22 2024 at 00:19):

The same issue happens already when we write down "the" product functor C×CCC \times C \to C when CC is a category with products. One solution is to use an [[anafunctor]], which maps an object not to an object but to the universal property of an object. Another, I believe, is to switch to homotopy type theory.

view this post on Zulip Josselin Poiret (Nov 22 2024 at 10:34):

John Baez said:

The same issue happens already when we write down "the" product functor C×CCC \times C \to C when CC is a category with products. One solution is to use an [[anafunctor]], which maps an object not to an object but to the universal property of an object. Another, I believe, is to switch to homotopy type theory.

yet another solution is to consider that "having colimits" is actually not a property but structure, and that such categories should be equipped with a specific colimit-producing functor. This is the same approach as split vs. non-split Grothendieck fibrations: one shouldn't throw away structure by squashing it

view this post on Zulip Morgan Rogers (he/him) (Nov 22 2024 at 14:35):

David Egolf said:

I now wonder if other "take the limit" functors have additional functors associated to them in a similar way, or if the "take the pullback" functor is special in this regard.

You can come up with variants where this works to an extent, but rarely will you encounter any instances as wide-ranging as pullbacks!

view this post on Zulip David Egolf (Nov 24 2024 at 19:16):

To review, the current goal is to show that colimD(,c):CopSet\mathrm{colim}D'(-,c):C^{\mathrm{op}} \to \mathsf{Set} is a functor, where D:J×CopSetD':J \times C^{\mathrm{op}} \to \mathsf{Set} is the functor corresponding to our JJ-shaped diagram of presheaves D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}], and D(,c):JSetD'(-,c):J \to \mathsf{Set}.

view this post on Zulip David Egolf (Nov 24 2024 at 19:19):

I think I'll start by trying to show that D(,c):JSetD'(-, c):J \to \mathsf{Set} really is a functor.

view this post on Zulip David Egolf (Nov 24 2024 at 19:20):

We have two functors:

  1. the identity functor 1J:JJ1_J:J \to J, and
  2. the constant functor Δc:JCop\Delta_c:J \to C^{\mathrm{op}} sending every object of JJ to cc and every morphism to 1c1_c.

view this post on Zulip David Egolf (Nov 24 2024 at 19:22):

Then we can construct a functor (1J,Δc):JJ×Cop(1_J,\Delta_c):J \to J \times C^{\mathrm{op}} using the fact that Cat\mathsf{Cat} has products. This functor acts on objects by j(j,c)j \mapsto (j,c).

view this post on Zulip David Egolf (Nov 24 2024 at 19:22):

We notice that D(,c):JSetD'(-,c):J \to \mathsf{Set} is the same thing as D(1J,Δc):JJ×CopSetD' \circ (1_J, \Delta_c):J \to J \times C^{\mathrm{op}} \to\mathsf{Set}, and is therefore a functor.

view this post on Zulip David Egolf (Nov 24 2024 at 19:31):

We have obtained a diagram in Set\mathsf{Set}, which is the same shape as the diagram we started out with in [Cop,Set][C^{\mathrm{op}}, \mathsf{Set}]. The jj-th set in our diagram is D(j,c)D'(j,c), which is what? I think D(j,c)=D(j)(c)D'(j,c) = D(j)(c). D(j)D(j) is the jj-th presheaf in our original diagram, and D(j)(c)D(j)(c) is obtained by evaluating that jj-th presheaf at cCc \in C.

So, our diagram in Set\mathsf{Set} has this as its jj-th set: the set attached by the jj-th presheaf to cCc \in C. Intuitively, this diagram is obtained by evaluating our original diagram of presheaves at cc.

view this post on Zulip David Egolf (Nov 24 2024 at 19:35):

What is colimD(,c)\mathrm{colim}D'(-,c)? This is the colimit of the diagram discussed above. So, it is obtained by evaluating each presheaf in our original diagram DD at cc to get a diagram of sets, and then taking the colimit of that resulting diagram in Set\mathsf{Set}.

view this post on Zulip David Egolf (Nov 24 2024 at 19:36):

Given all this context, we want to show that ccolimD(,c)c \mapsto \mathrm{colim} D'(-,c) defines a functor G:CopSetG:C^{\mathrm{op}} \to \mathsf{Set}.

view this post on Zulip David Egolf (Nov 24 2024 at 19:53):

We've already said what GG does on objects: for cCc \in C, it takes in our diagram DD of presheaves, evaluates it at cc, and then takes the colimit of the resulting diagram in Set\mathsf{Set}.

However, we still need to specify what GG does on morphisms.

view this post on Zulip David Egolf (Nov 24 2024 at 19:54):

Let f:ccf:c \to c' be a morphism in CopC^{\mathrm{op}}. We need to dream up some function from the colimit of DD when evaluated at cc, to the colimit of DD when evaluated at cc'.

view this post on Zulip David Egolf (Nov 24 2024 at 20:01):

I want to use the universal property of a colimit to do this. G(c)G(c) is the tip of a cone under the diagram DD when evaluated at cc, and G(c)G(c') is the tip of a cone under the diagram DD when evaluated at cc'. If we can somehow get a cone under DD evaluated at cc with tip G(c)G(c'), we'll be in business.

view this post on Zulip David Egolf (Nov 24 2024 at 20:03):

Here's a picture that illustrates how we can do this:
picture

view this post on Zulip David Egolf (Nov 24 2024 at 20:05):

Each PiP_i is a presheaf in our diagram DD. We have a morphism Pi(f):Pi(c)Pi(c)P_i(f):P_i(c) \to P_i(c') for each ii. I am hoping that we can compose these Pi(f)P_i(f) with the morphisms in the cone under DD evaluated at cc' to get a cone under DD evaluated at cc with tip G(c)G(c').

view this post on Zulip David Egolf (Nov 24 2024 at 20:09):

To do this, I think it suffices to show that a morphism f:ccf:c \to c' in CopC^{\mathrm{op}} induces a morphism of diagrams of shape JJ in Set\mathsf{Set}. Then a cone under a diagram of shape JJ is also a morphism of diagrams of shape JJ, and thus the composition of these two morphisms is as well.

view this post on Zulip David Egolf (Nov 24 2024 at 20:11):

Basically, we want to show that there is a functor :Cop[J,Set]:C^{\mathrm{op}} \to [J, \mathsf{Set}] that acts on objects by sending cCc \in C to a JJ-shaped diagram in Set\mathsf{Set} given by evaluating DD at cc.

view this post on Zulip David Egolf (Nov 24 2024 at 20:14):

We already have D:J×CopSetD':J \times C^{op} \to \mathsf{Set}. We saw above that we can use an adjunction and the "swapping" isomorphism between Cop×JC^{\mathrm{op}} \times J and J×CopJ \times C^{\mathrm{op}} to get such a functor. So we have some functor H:Cop[J,Set]H:C^{\mathrm{op}} \to [J, \mathsf{Set}]. Thus we are assured that any morphism in CopC^{\mathrm{op}} induces a morphism of certain JJ-shaped diagrams in Set\mathsf{Set}.

view this post on Zulip David Egolf (Nov 24 2024 at 20:20):

I just want to double check that HH sends cCc \in C to our diagram DD evaluated at cc. Referencing the adjunction above, we have H(c)(j)=D(j,c)=D(j)(c)H(c)(j)=D'(j,c) = D(j)(c). So, the jj-position in our diagram H(c)H(c) is indeed given by evaluating our original diagram at location jj at cc. Thus, HH indeed sends an object cc to the diagram DD evaluated at cc.

view this post on Zulip David Egolf (Nov 24 2024 at 20:23):

Now we are in business! We can now say what the functor ccolimD(,c)c \mapsto \mathrm{colim}D'(-,c) does on morphisms, recalling that it sends an object cc to the colimit of our diagram DD evaluated at cc. Given f:ccf:c \to c' in CopC^{\mathrm{op}}, we get a morphism of JJ-shaped diagrams namely H(f):D(c)D(c)H(f):D(c) \to D(c'), where D(c)D(c) refers to our diagram DD evaluated at cc and D(c)D(c') is defined similarly.

view this post on Zulip David Egolf (Nov 24 2024 at 20:25):

Then, a colimit G(c)G(c) has an associated cone uc:D(c)G(c)u_c:D(c) \to G(c), and a colimit G(c)G(c') has an associated cone uc:D(c)G(c)u_{c'}:D(c') \to G(c'). Then we can form a cone ucH(f):D(c)D(c)G(c)u_{c'} \circ H(f): D(c) \to D(c') \to G(c').

Then we can use the universal property of colimits to obtain a function from G(c)G(c) to G(c)G(c').

view this post on Zulip David Egolf (Nov 24 2024 at 20:28):

So, our proposed functor G:CopSetG:C^{\mathrm{op}} \to \mathsf{Set} acts like this:

  1. on objects, cc is sent to G(c)=colimD(c)G(c) = \mathrm{colim} D(c), the colimit of our diagram DD evaluated at cc;
  2. on morphisms, f:ccf:c \to c' is sent to the unique function G(f):G(c)G(c)G(f):G(c) \to G(c') induced by the universal property of the colimit G(c)G(c), applied to the cone ucH(f)u_{c'} \circ H(f).

view this post on Zulip David Egolf (Nov 24 2024 at 20:32):

It remains to check that this GG really is a functor, and that it is the colimit of our diagram DD of presheaves.

view this post on Zulip David Egolf (Nov 24 2024 at 20:36):

In diagram form, this is our situation, where we are working in the category of JJ-shaped diagrams in Set\mathsf{Set}:
diagram

view this post on Zulip David Egolf (Nov 24 2024 at 20:39):

We want to show that G(gf)=G(g)G(f)G(g \circ f) = G(g) \circ G(f). By definition, G(gf)G(g \circ f) is the unique morphism that makes the outermost path in this diagram commute:
diagram

view this post on Zulip David Egolf (Nov 24 2024 at 20:41):

Since both of the inner rectangles commute, we can paste them together to get a larger commuting rectangle: G(g)G(f)uc=ucH(g)H(f)G(g) \circ G(f) \circ u_c = u_{c''} \circ H(g) \circ H(f). Since HH is a functor, this implies that (G(g)G(f))uc=ucH(gf)(G(g) \circ G(f)) \circ u_c = u_{c''} \circ H(g \circ f). That is, G(g)G(f)G(g) \circ G(f) satisfies the condition that uniquely determines G(gf)G(g \circ f), so we must have G(gf)=G(g)G(f)G(g \circ f) = G(g) \circ G(f).

view this post on Zulip David Egolf (Nov 24 2024 at 20:43):

The identity morphism 1c:cc1_c:c \to c induces the identity morphism from the diagram D(c)D(c) to itself, and consequently G(1c)=1colimD(c)=1G(c)G(1_c) = 1_{\mathrm{colim} D(c)}= 1_{G(c)}.

view this post on Zulip David Egolf (Nov 24 2024 at 20:45):

Thus, we conclude that G:CopSetG:C^{\mathrm{op}} \to \mathsf{Set} is indeed a functor. It remains to show that GG is really the colimit of our JJ-shaped diagram of presheaves, D:J[Cop,Set]D:J \to [C^\mathrm{op}, \mathsf{Set}].

view this post on Zulip David Egolf (Nov 24 2024 at 20:47):

To show that GG is really the colimit, we can aim to show it satisfies the appropriate universal property. To do that, we first need to think about how we get a colimit cone under DD with tip GG.

view this post on Zulip David Egolf (Nov 24 2024 at 20:50):

The first idea that comes to mind is as follows. To get a natural transformation λ:PG\lambda:P \to G, where PP is some presheaf in our diagram DD, we can try setting λ\lambda by specifying each component. We can try setting λc:P(c)G(c)\lambda_c:P(c) \to G(c) using the corresponding part of the colimit cone (in Set\mathsf{Set}) under D(c)D(c) with tip G(c)G(c).

view this post on Zulip David Egolf (Nov 24 2024 at 20:52):

If we set λc:P(c)G(c)\lambda_c:P(c) \to G(c) for each cCc \in C in this way, do we really get a natural transformation λ:PG\lambda:P \to G?

view this post on Zulip David Egolf (Nov 24 2024 at 21:15):

We want to show that this square commutes for any morphism f:ccf:c \to c' in CopC^{\mathrm{op}}:
naturality square

view this post on Zulip David Egolf (Nov 24 2024 at 21:18):

I am guessing that this is part of the big diagram we saw earlier. If we can figure out how, exactly, then the commutativity of our earlier diagram should imply the commutativity of this one.

view this post on Zulip David Egolf (Nov 24 2024 at 21:22):

Here are the two diagrams I'm comparing:
two diagrams

view this post on Zulip David Egolf (Nov 24 2024 at 21:24):

The left diagram is in Set\mathsf{Set}, and expresses the (hopeful) naturality of λ:PG\lambda:P\to G. The right diagram is in [J,Set][J, \mathsf{Set}] and uses the fact that G(c)G(c) is a colimit of the diagram D(c)D(c) to induce a morphism from G(c)G(c) to G(c)G(c').

view this post on Zulip David Egolf (Nov 24 2024 at 21:26):

We can think of the diagram on the right as a collection of (related) diagrams. For each jJj\in J we get a diagram in Set\mathsf{Set} by evaluating each functor at jj.

Let's assume that PP is in our diagram DD at location jj. So D(j)=PD(j) = P. Then we can form a new diagram from our diagram in [J,Set][J, \mathsf{Set}] by precomposing with j:1Jj:1 \to J. This is the functor from the category with a single object and morphism that sends the single object to jJj \in J.

view this post on Zulip David Egolf (Nov 24 2024 at 21:29):

Our new diagram replaces D(c)D(c) with D(c)j=P(c)D(c)_j = P(c), and similarly D(c)D(c') with D(c)j=P(c)D(c')_j = P(c'). It also sends ucu_c to its jj-th component :P(c)=D(c)jG(c):P(c)=D(c)_j \to G(c), which is λc\lambda_c. Similarly, it sends ucu_{c'} to λc\lambda_{c'}.

view this post on Zulip David Egolf (Nov 24 2024 at 21:32):

In forming this new diagram, we also replace H(f)H(f) with its jj-th component :D(c)jD(c)j:D(c)_j \to D(c')_j, which goes from P(c)P(c) to P(c)P(c'). So, if H(f)j=P(f)H(f)_j = P(f) the new diagram we form from the one on the right is just the diagram on the left.

view this post on Zulip David Egolf (Nov 24 2024 at 21:42):

Intuitively, H:Cop[J,Set]H:C^{\mathrm{op}} \to [J,\mathsf{Set}] takes in an object cc of CC and then creates a diagram in Set\mathsf{Set} by evaluating all our presheaves in DD at cc. So we get a bunch of diagrams, one associated to each object of CC. Each diagram is a functor, and HH maps each morphism to a natural transformation. We must have H(f):H(c)H(c)H(f):H(c) \to H(c'), so that H(f)H(f) is a natural transformation from D(c)D(c) to D(c)D(c') induced by ff.

view this post on Zulip David Egolf (Nov 24 2024 at 21:48):

I could just assume that this works out, but I would prefer to prove that HH acts in the way that I want. To figure out what HH does on morphisms, we can first figure out what DD' does on morphisms.

I'll stop here for today, but perhaps next time I can work this out.

view this post on Zulip John Baez (Nov 25 2024 at 01:26):

I'm feeling a bit too busy to check what you just wrote, David, especially since I bet it's all fine. (If you're worried about something please say so!) Instead I want to add a remark to an earlier conversation:

David Egolf said:

Now, I seem to recall that there is a "take the colimit" functor T:[J,Set]SetT:[J, \mathsf{Set}] \to \mathsf{Set}.

On a bit of a side note, there's a weird thing about TT. Namely, it seems to require making a choice of colimit for each diagram, even when we have multiple isomorphic options available.

I forgot to mention one way people usually deal with this. They show that if you choose colimits for every JJ-shaped diagram and get a functor TT, and I do it some other way and get a functor TT', then there's a natural isomorphism between TT and TT'.

This reassures us that it doesn't matter which choice we make. At least, it doesn't matter if we refrain from doing anything 'evil' - i.e., something that works for one functor but not for some other naturally isomorphic functor!

This is a nice concrete example of why it's good to avoid 'evil'. It's not just a matter of esthetics. It means we can choose a take-the-colimit functor TT without having to decide which one.

view this post on Zulip David Egolf (Nov 25 2024 at 01:53):

John Baez said:

I'm feeling a bit too busy to check what you just wrote, David, especially since I bet it's all fine. (If you're worried about something please say so!)

I think I'm on the right track, it's just taking me a while to get to my destination. But so far so good, I think! (In general, when I write something long like this I don't really expect anyone else to read it all. Despite that, I still like to document the learning process in case it could be useful to someone!)

John Baez said:

I forgot to mention one way people usually deal with this. They show that if you choose colimits for every JJ-shaped diagram and get a functor TT, and I do it some other way and get a functor TT', then there's a natural isomorphism between TT and TT'.

This reassures us that it doesn't matter which choice we make. At least, it doesn't matter if we refrain from doing anything 'evil' - i.e., something that works for one functor but not for some other naturally isomorphic functor!

This is a nice concrete example of why it's good to avoid 'evil'. It's not just a matter of esthetics. It means we can choose a take-the-colimit functor TT without having to decide which one.

That's reassuring! Although there are a bunch of different "take the colimit" functors in this context, they are all basically the same, so it doesn't really matter which one we pick.

I suppose in general we can avoid "evil" in this sense by only identifying an object we're working with up to isomorphism. This "blurriness" then would stop us from using any of the particular features of any specific choice, which might not be invariant across isomorphic alternatives. Although maybe this is often inconvenient!

view this post on Zulip John Baez (Nov 25 2024 at 03:14):

It's often inconvenient to only know the isomorphism class of an object: it's like holding a slippery pig that's twisting around so wildly that you can't point to any specific feature. But it's perfectly fine if you know an object up to a specified isomorphism, which is precisely what happens when the object is defined by a limit or colimit, or any other universal property. In this case, if you make a choice of this object, say XX, and I make a choice, say XX', they are not merely isomorphic, we both get access to a specific isomorphism α ⁣:XX\alpha \colon X \to X'. This allows us to transfer any structure you may have on XX to my XX', and vice versa.

view this post on Zulip Peva Blanchard (Nov 25 2024 at 20:09):

John Baez said:

if you make a choice of this object, say XX, and I make a choice, say XX', they are not merely isomorphic, we both get access to a specific isomorphism α ⁣:XX\alpha \colon X \to X'. This allows us to transfer any structure you may have on XX to my XX', and vice versa.

I'm not sure I understand precisely this point. I'll try to spell it out in the case of the representability of a presheaf FF on a category CC.

Choosing a representative of FF amounts to choosing an object XX and a natural isomorphism α:FC(_,X)\alpha : F \Rightarrow C(\_, X). Now, assume we have two such choices (X,α)(X, \alpha) and (X,α)(X', \alpha'). Then, we have an explicit natural isomorphism

αα1:C(_,X)C(_,X) \alpha' \alpha^{-1} : C(\_, X) \Rightarrow C(\_, X')

By the Yoneda lemma, this yields a specific isomorphism r[αα1]:XXr[\alpha' \alpha^{-1}] : X \to X'.
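(Spelling out the Yoneda step, in case it's useful: a natural transformation out of C(_,X)C(\_, X) is determined by where its XX-component sends 1X1_X, so here the induced isomorphism is r[αα1]=(αXαX1)(1X)r[\alpha'\alpha^{-1}] = (\alpha'_X \circ \alpha_X^{-1})(1_X).)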

Is that the kind of example you have in mind when you mention "having access to a specific isomorphism"?

view this post on Zulip John Baez (Nov 26 2024 at 01:45):

Yes, exactly. And more generally, whenever anything is defined by a universal property like a limit, or colimit, or tensor product of modules, etc., we say X is a universal object with structure S if for any other X' with structure S, there exists a unique morphism from X to X' (or the other way around) making some diagrams commute. Then if both X and X' are universal objects with structure S, there are unique morphisms

α:XX \alpha: X \to X'

and

β:XX \beta: X' \to X

making those diagrams commute, and we can use the uniqueness clause in a clever but completely standard way to show α\alpha and β\beta are inverses!
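(For the record, the standard uniqueness chase goes like this: the composite βα:XX\beta \circ \alpha : X \to X makes the same diagrams commute that 1X1_X does, so the uniqueness clause, applied with XX playing both roles, forces βα=1X\beta \circ \alpha = 1_X; symmetrically αβ=1X\alpha \circ \beta = 1_{X'}. Hence α\alpha and β\beta are inverse isomorphisms.)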

view this post on Zulip John Baez (Nov 26 2024 at 01:47):

Thus, not only are XX and XX' isomorphic - which is enough to transfer any property from XX to XX', or vice versa - but we also get a specific isomorphism between them, which lets us transfer any structure from XX to XX' or vice versa.

view this post on Zulip John Baez (Nov 26 2024 at 01:48):

For example, "having 7 elements" is a property, so if XX is a set with 7 elements and XX' is isomorphic to XX then we know XX' has 7 elements.

But "being a group" is a structure, so if the set XX is made into a group in some particular way and we only know XX' is isomorphic to XX, we don't have enough to make XX' into a group in some particular way. A specific isomorphism α ⁣:XX\alpha \colon X \stackrel{\sim}{\longrightarrow} X' would be enough.

And it takes even more to transfer stuff.
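Here is a minimal Haskell sketch of that point (the names Iso, MonoidOn and transport are made up purely for illustration): a specific isomorphism is exactly the data you need to move a structure from one carrier to another, whereas merely knowing the carriers are isomorphic is not.

```haskell
-- A specified isomorphism between a and b: a pair of mutually inverse maps.
data Iso a b = Iso { to :: a -> b, from :: b -> a }

-- A "structure" on a carrier type: here, a monoid structure given as data.
data MonoidOn a = MonoidOn { unit :: a, op :: a -> a -> a }

-- Merely knowing "a and b are isomorphic" doesn't let you write this;
-- a specific isomorphism does.
transport :: Iso a b -> MonoidOn a -> MonoidOn b
transport (Iso f g) (MonoidOn e m) = MonoidOn (f e) (\x y -> f (m (g x) (g y)))
```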

view this post on Zulip David Egolf (Nov 26 2024 at 03:37):

That's pretty interesting! It's intriguing how the above argument relates these two things: being "extremal and concisely so" (attempting to convey some of the intuition associated with being a universal object) with respect to having certain structure, and being related by isomorphism. It makes me wonder if we could relax the "extremal and concisely so" condition and get a notion of morphism that is weaker than isomorphism but still indicates that some measure of similarity is present.

view this post on Zulip David Egolf (Nov 26 2024 at 03:39):

For example, we could call XX a "weak universal object with structure SS" if for any other XX' with structure SS there exists at least one morphism from XX to XX' making some diagrams commute. Then perhaps we would obtain some notion of induced morphism between weakly universal objects with structure SS, which indicates some degree of similarity?

view this post on Zulip John Baez (Nov 26 2024 at 03:43):

People do talk about weak limits and especially weak pullbacks, which have the weakened sort of universal property you mention. I've never used them. But I sometimes joke about one object being merely "morphic" to another, as opposed to isomorphic.

view this post on Zulip Alex Kreitzberg (Nov 26 2024 at 04:30):

Your argument for transferring structure just needs an isomorphism, not a unique isomorphism right? Is there some other feature of limits that is preserved because it's a unique isomorphism?

My intuition is that there's a contractible groupoid of objects described by the limit, making the limit "unique", so "everything" gets translated even "stuff", because there's only one thing up to equivalence.

But the way you explained the above is making me wonder if that's not quite right, and that I'm confused about some detail.

view this post on Zulip John Baez (Nov 26 2024 at 06:37):

Alex Kreitzberg said:

Your argument for transferring structure just needs an isomorphism, not a unique isomorphism right?

It needs to be a specified isomorphism: that is, an isomorphism you actually know.

The existence and uniqueness clauses in the definition of any universal property guarantee that whenever we have two objects XX and XX' with the same universal property, we get a specified isomorphism between them. Without the existence clause we might not have any isomorphism at all; without uniqueness there wouldn't be a particular one.

There are other ways to specify isomorphisms between objects, but a universal property quickly and efficiently specifies an isomorphism between any two objects with that property!

view this post on Zulip John Baez (Nov 26 2024 at 06:43):

My intuition is that there's a contractible groupoid of objects described by the limit, making the limit "unique",

Yes, in the appropriate categorical sense of "unique".

so "everything" gets translated even "stuff", because there's only one thing up to equivalence.

Let me think about that! That's an interesting point. My intuition was that since a functor U:CDU: \mathsf{C} \to \mathsf{D} that "forgets stuff" is not faithful, even if D\mathsf{D} is a contractible groupoid, C\mathsf{C} may not be.

view this post on Zulip Morgan Rogers (he/him) (Nov 26 2024 at 10:09):

John Baez said:

My intuition is that there's a contractible groupoid of objects described by the limit, making the limit "unique",

Yes, in the appropriate categorical sense of "unique".

The groupoid need not be contractible! There can be multiple isomorphisms between the objects, that's why we need to specify one.

view this post on Zulip Amar Hadzihasanovic (Nov 26 2024 at 13:17):

I guess that the precise statement would be that the category of limit cones over a diagram F:JCF: J \to \mathsf{C} is a contractible groupoid, and its objects are "labelled in objects of C\mathsf{C}" via the functor that sends a cone to its tip. So perhaps "a contractible groupoid labelled in objects" rather than "contractible groupoid of objects" is the accurate rephrasing?

view this post on Zulip Morgan Rogers (he/him) (Nov 26 2024 at 14:01):

Yes, I wasn't trying to be too pedantic, just underlining the point John was making about how much information you need about isomorphisms to transfer properties vs structure.

view this post on Zulip David Egolf (Nov 26 2024 at 19:08):

Returning to the topos theory blog posts, my current goal is as follows. We have an adjunction ×Cop[Cop,]- \times C^{\mathrm{op}} \dashv [C^{\mathrm{op}},-] of functors :CatCat:\mathsf{Cat} \to \mathsf{Cat}. In particular, this implies we have Cat(J×Cop,Set)Cat(J,[Cop,Set])\mathsf{Cat}(J \times C^{\mathrm{op}}, \mathsf{Set}) \cong \mathsf{Cat}(J, [C^{\mathrm{op}}, \mathsf{Set}]).

This means that given a functor D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}] there is a corresponding functor D:J×CopSetD':J \times C^{\mathrm{op}} \to \mathsf{Set}. I want to figure out how DD' acts!

view this post on Zulip David Egolf (Nov 26 2024 at 19:18):

This feels like an important thing to know how to do, but I'm a bit unsure how to get started!

view this post on Zulip David Egolf (Nov 26 2024 at 19:31):

The idea of using the Yoneda lemma somehow vaguely occurs to me, but I don't quite see how that would help.

view this post on Zulip David Egolf (Nov 26 2024 at 19:57):

Maybe this is an idea: because we have the adjunction above, in particular we have a natural isomorphism Cat(×Cop,Set)Cat(,[Cop,Set])\mathsf{Cat}(- \times C^{\mathrm{op}}, \mathsf{Set}) \cong \mathsf{Cat}(-, [C^{\mathrm{op}},\mathsf{Set}]). Expressing this in terms of the opposite category of Cat\mathsf{Cat} we have a natural isomorphism Catop(Set,×Cop)Catop([Cop,Set],)\mathsf{Cat}^{\mathrm{op}}( \mathsf{Set},- \times C^{\mathrm{op}}) \cong \mathsf{Cat}^{\mathrm{op}}([C^{\mathrm{op}},\mathsf{Set}],- ).

view this post on Zulip David Egolf (Nov 26 2024 at 20:01):

The Yoneda lemma then tells us that this natural isomorphism corresponds to an element of Catop(Set,[Cop,Set]×Cop)\mathsf{Cat}^{\mathrm{op}}( \mathsf{Set}, [C^{\mathrm{op}}, \mathsf{Set}] \times C^{\mathrm{op}}), which is an element of Cat([Cop,Set]×Cop,Set)\mathsf{Cat}( [C^{\mathrm{op}}, \mathsf{Set}] \times C^{\mathrm{op}}, \mathsf{Set}).

view this post on Zulip David Egolf (Nov 26 2024 at 20:02):

This special element I bet is going to be an "evaluation" functor, and working out exactly what it is will probably be useful.

view this post on Zulip John Baez (Nov 26 2024 at 20:02):

Are you familiar with [[cartesian closed categories]]? There are lots of categories where the functor "taking the product with the object x" has a right adjoint called [x, -], and Cat is one of those.

view this post on Zulip David Egolf (Nov 26 2024 at 20:05):

John Baez said:

Are you familiar with [[cartesian closed categories]]? There are lots of categories where the functor "taking the product with the object x" has a right adjoint called [x, -], and Cat is one of those.

Somewhat! I've been mostly referring to the article [[closed monoidal category]]. As far as I understand, a cartesian closed category is a closed monoidal category where the monoidal product is given by taking the product.

view this post on Zulip David Egolf (Nov 26 2024 at 20:06):

Scrolling down in the article you linked above, I see a section called "Some basic consequences" which looks like it might be helpful.

view this post on Zulip John Baez (Nov 26 2024 at 20:07):

Right. I think you can find a formula for D' in terms of D just by writing down the simplest thing that parses and then checking it works - that's the way to solve 90% of problems in category theory. :smirk:

I could be wrong... but did you try that? Yoneda feels like overkill to me.

view this post on Zulip David Egolf (Nov 26 2024 at 20:11):

I figured that I could probably guess how DD' should work in the way you describe, but I was somehow feeling that I wanted to understand more generally the process of hopping across an adjunction.

I think the article you linked gives an answer that makes me somewhat happy though: In particular, that article notes that we can get from a morphism ϕ:Z[X,Y]\phi:Z \to [X,Y] to a morphism ϕ:Z×XY\phi':Z \times X \to Y as follows: we apply the evaluation morphism :[X,Y]×XY:[X,Y] \times X \to Y after the morphism ϕ×1X:Z×X[X,Y]×X\phi \times 1_X:Z \times X \to [X,Y] \times X.
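As a sanity check, here is the same recipe one level down, for plain sets and functions, in Haskell (eval and uncurryVia are illustrative names, not from any library):

```haskell
-- The evaluation map [X,Y] × X -> Y, at the level of functions.
eval :: (x -> y, x) -> y
eval (g, x) = g x

-- Given phi : Z -> [X,Y], the corresponding phi' : Z × X -> Y is
-- "eval after (phi × id)", exactly as in the recipe above.
uncurryVia :: (z -> (x -> y)) -> ((z, x) -> y)
uncurryVia phi (z, x) = eval (phi z, x)
```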

view this post on Zulip David Egolf (Nov 26 2024 at 20:13):

So, I can just spell out what the evaluation morphism does, and I should be in business!

I also find it satisfying to note that the evaluation morphism probably comes from applying the Yoneda lemma to the adjunction in question, as I was beginning to investigate above.

view this post on Zulip David Egolf (Nov 26 2024 at 20:19):

Alright, we want some "evaluation morphism" e:[X,Y]×XYe:[X, Y] \times X \to Y. This should be a functor. On objects, intuitively it will map (F,x)F(x)(F,x) \mapsto F(x). On morphisms, it needs to send a pair (α,f)(\alpha, f) where α:FG\alpha:F \to G and f:xyf:x \to y to some morphism from F(x)F(x) to G(y)G(y).

view this post on Zulip David Egolf (Nov 26 2024 at 20:20):

Since α:FG\alpha:F \to G is a natural transformation, we have this commutative square:
naturality square

view this post on Zulip David Egolf (Nov 26 2024 at 20:21):

So I'm going to guess that ee sends (α:FG,f:xy)(\alpha:F \to G, f:x \to y) to G(f)αx=αyF(f)G(f) \circ \alpha_x = \alpha_y \circ F(f). This morphism goes from F(x)F(x) to G(y)G(y) as required.
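For whatever it's worth, this guess has a familiar Haskell shadow, with natural transformations modelled as polymorphic functions (evalMor is a made-up name for illustration):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The action of the evaluation functor on a morphism (alpha, h):
-- alpha is a natural transformation from f to g, and h : x -> y.
-- The two composites agree by the naturality square above.
evalMor :: (Functor f, Functor g)
        => (forall a. f a -> g a)  -- alpha : F -> G
        -> (x -> y)                -- h
        -> f x -> g y
evalMor alpha h = fmap h . alpha   -- i.e. G(h) . alpha_x
-- equivalently: alpha . fmap h    -- i.e. alpha_y . F(h)
```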

view this post on Zulip David Egolf (Nov 26 2024 at 20:29):

Is this proposed e:[X,Y]×XYe:[X, Y] \times X \to Y a functor?

e(1(F,x))=e((1F:FF,1x:xx))=F(1x)(1F)x=F(1x)1F(x)=1F(x)=1e(F,x)e(1_{(F,x)})=e((1_F:F \to F, 1_x:x\to x)) = F(1_x) \circ (1_F)_x = F(1_x)\circ 1_{F(x)} = 1_{F(x)} = 1_{e(F,x)}, so ee acts as it should on identity morphisms.

view this post on Zulip David Egolf (Nov 26 2024 at 20:35):

It remains to show that ee preserves composition. But I will leave that for next time!

view this post on Zulip Peva Blanchard (Nov 26 2024 at 21:43):

This is interesting, I wasn't expecting the use of the Yoneda lemma to highlight the evaluation functor.
When exercising with adjunctions, I found it interesting to describe their units and counits.
I don't want to interrupt your flow here, so I'll just write in a "spoiler" box :)

view this post on Zulip David Egolf (Nov 27 2024 at 18:55):

I want to show that this e:[X,Y]×XYe:[X, Y] \times X \to Y preserves composition.

So let us consider a composite morphism (α,f)(α,f):(F,x)(G,y)(H,z)(\alpha',f') \circ (\alpha, f):(F,x) \to (G,y) \to (H,z). Here f:xyf:x \to y, f:yzf':y \to z, α:FG\alpha:F \to G and α:GH\alpha':G \to H . This morphism is (αα:FH,ff:xz)(\alpha' \circ \alpha:F \to H, f' \circ f:x \to z).

ee maps this to: H(ff)(αα)x=H(f)H(f)(α)xαxH(f' \circ f) \circ (\alpha' \circ \alpha)_x = H(f') \circ H(f) \circ (\alpha')_x \circ \alpha_x

view this post on Zulip David Egolf (Nov 27 2024 at 18:56):

That morphism is the bottom left path from top left to bottom right in the following diagram:
diagram

view this post on Zulip David Egolf (Nov 27 2024 at 18:57):

Every (small) square in this diagram is a naturality square, so any path from the top left to the bottom right composes to form the same morphism.

view this post on Zulip David Egolf (Nov 27 2024 at 18:59):

Now we want to compute e(α,f)e(α,f)e(\alpha',f') \circ e(\alpha, f). We have e(α,f)=G(f)αxe(\alpha, f) = G(f) \circ \alpha_x and e(α:GH,f:yz)=H(f)αye(\alpha':G \to H, f':y \to z) = H(f') \circ \alpha'_y.

So this composite is e(α,f)e(α,f)=(H(f)αy)(G(f)αx)e(\alpha',f') \circ e(\alpha, f) = (H(f') \circ \alpha'_y) \circ (G(f) \circ \alpha_x).

This is another path from top left to bottom right in our diagram, so this is equal to H(ff)(αα)xH(f' \circ f) \circ (\alpha' \circ \alpha)_x. We conclude that ee does indeed preserve composition!

view this post on Zulip David Egolf (Nov 27 2024 at 19:03):

Peva Blanchard said:

This is interesting, I wasn't expecting the use of the Yoneda lemma to highlight the evaluation functor.
When exercising with adjunctions, I found it interesting to describe their units and counits.
I don't want to interrupt your flow here, so I'll just write in a "spoiler" box :)


Thanks for pointing that out! Now I feel more incentivized to think about the unit and counit of adjunctions that I may run across in the future...

view this post on Zulip David Egolf (Nov 27 2024 at 19:07):

Now, the reason I did all this was because I wanted to figure out how D:J×CopSetD':J \times C^{\mathrm{op}} \to \mathsf{Set} acts on morphisms, given a JJ-shaped diagram D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}] of presheaves on CC. We should be able to spell out DD' in detail now!

view this post on Zulip David Egolf (Nov 27 2024 at 19:11):

We start with D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}]. Using DD, we'll first form D×1Cop:J×Cop[Cop,Set]×CopD \times 1_{C^{\mathrm{op}}}: J \times C^{\mathrm{op}} \to [C^{\mathrm{op}}, \mathsf{Set}] \times C^{\mathrm{op}}. Then we'll apply our evaluation functor e:[Cop,Set]×CopSete:[C^{\mathrm{op}}, \mathsf{Set}] \times C^{\mathrm{op}} \to \mathsf{Set} to end up in Set\mathsf{Set}.

view this post on Zulip David Egolf (Nov 27 2024 at 19:28):

So let's consider D=e(D×1Cop):J×CopSetD' = e \circ (D \times 1_{C^{\mathrm{op}}}):J \times C^{\mathrm{op}} \to \mathsf{Set}

On objects, it acts like this:
(j,c)(D(j),c)D(j)(c)(j,c) \mapsto (D(j), c) \mapsto D(j)(c)

On morphisms, it acts like this:
(f:jj,g:cc)(D(f):D(j)D(j),g:cc)(f:j \to j', g:c \to c') \mapsto (D(f):D(j) \to D(j'), g:c \to c')
D(j)(g)D(f)c\mapsto D(j')(g) \circ D(f)_{c}

view this post on Zulip David Egolf (Nov 27 2024 at 19:29):

The situation is pictured here:
picture

view this post on Zulip David Egolf (Nov 27 2024 at 19:30):

Hopefully I did that right. It feels like it would be easy to make a mistake here.

view this post on Zulip David Egolf (Nov 27 2024 at 19:33):

Scrolling way back, I think my reason for spelling out how DD' works was to figure out how H:Cop[J,Set]H: C^{\mathrm{op}} \to [J, \mathsf{Set}] works. This would involve "hopping across" the adjunction again :sweat_smile:!

view this post on Zulip David Egolf (Nov 27 2024 at 19:34):

That sounds like a lot of work, so I might instead take some time to rethink my strategy. I'll stop here for today.

view this post on Zulip John Baez (Nov 27 2024 at 19:43):

It's always useful to start by guessing what the answer must be: in problems of this sort, there is usually an "obvious best guess".

view this post on Zulip John Baez (Nov 27 2024 at 19:44):

The physicist John Wheeler gave some advice that really affected me, even though I'd already half-known it before. Namely: never do a calculation unless you already know the answer.

view this post on Zulip John Baez (Nov 27 2024 at 19:46):

(It's actually enough to think you know the answer - then the calculation will prove you wrong.)

view this post on Zulip John Baez (Nov 27 2024 at 19:52):

So, when you get time you might just write down what you think HH should be, without calculating it.

view this post on Zulip David Egolf (Nov 27 2024 at 19:53):

That's an interesting perspective! I think I sometimes use calculations as a sort of "extension ladder" to reach beyond what my intuition is telling me. But if I start reaching too far beyond it, things get a bit wobbly. So the idea of consistently grounding calculation in a specific guess or intuition sounds potentially quite helpful!

view this post on Zulip David Egolf (Nov 27 2024 at 19:54):

I'll see what guess I can dream up for HH next time I work on this.

view this post on Zulip John Baez (Nov 27 2024 at 19:55):

Yes, Wheeler was exaggerating for effect; both perspectives on calculation are important!

view this post on Zulip Alex Kreitzberg (Nov 28 2024 at 06:42):

John Baez said:

The physicist John Wheeler gave some advice that really affected me, even though I'd already half-known it before. Namely: never do a calculation unless you already know the answer.

Do you have a cute story/anecdote that insists on calculating the answer even when you believe you already know it? The temptation to say "I already know this, what's the point of the calculation?" feels far more lethal to me. (Of course, all advice is tailored to who is receiving it.)

view this post on Zulip John Baez (Nov 28 2024 at 18:25):

I don't have a specially cute story like that: I just know I'm pretty much unable to do a serious calculation correctly unless I have a good idea about where it's going. Here I'm generally talking about calculations that involve integrals, algebraic equations, etc. - since those are the most complicated calculations I've done. So typically what happens is that I calculate rather quickly but make lots of copying mistakes, where, when copying from one line of text to the next and doing some manipulations, a double minus sign turns into a single minus sign, I forget to distribute a factor over all the terms in a sum, and so on. If I know where the calculation should be going, I can tell when something is going wrong, so I can diagnose these errors. But if I have no idea what the result should be, it takes a long time, because I tend to become 'blind' to these errors: I can look at them over and over, and still not see them.

view this post on Zulip John Baez (Nov 28 2024 at 18:27):

The same general principles apply to category-theoretic computations, especially with enormous commutative diagrams.

However, what I like about category theory is that it's harder to make computational mistakes, because in many situations there's only one possible expression that can possibly parse: if you write down the wrong thing, you get something that has the wrong type or is undefined. I like to say that category theory is 'rigid', not flexible: if you accidentally bend things a bit, they tend to break completely, so you can tell.

Physicists try to get themselves into similar situations by relentlessly using dimensional analysis. Then many mistakes can be spotted by noticing that what you wrote has the wrong dimensions. This amounts to replacing the ring of real numbers by a graded commutative ring, graded in some abelian group, where you're not allowed to add things of different grades.

James Dolan noticed that graded commutative rings of this sort can also be seen as categories called 'dimensional categories'... so using dimensional analysis in physics gives it more of the 'rigidity' we expect from category theory.

view this post on Zulip John Baez (Nov 28 2024 at 18:47):

Recently I spent two or three weeks trying to correctly do some computations in statistical mechanics, essentially taking the limit of an integral, and I screwed up about 100 times before getting it right! The problem was that I really didn't know the right answer ahead of time: I had a rough idea of it, but I quickly discovered that rough idea was wrong and then I was lost at sea. In the end I had to do a very concrete example of these calculations, rather than the general abstract calculations, before I discovered a conceptual error I was making.

I actually enjoyed these few weeks very much, since it's been a long time since I've done calculations that were so involved, and so deeply reliant on ideas from physics. When I finally straightened out all the mistakes it was glorious!

view this post on Zulip David Egolf (Nov 28 2024 at 18:57):

That rings true for me as well! Certainly in the context of math it often helps me to try and consider a specific example, but that's also true in the case of writing a program. I've spent weeks slowly debugging a program that is supposed to reconstruct images, where my only clue that something is wrong is that the image just doesn't look right at all. Without that clue, just reading the code, I would have been very hard pressed to identify what was wrong!

view this post on Zulip David Egolf (Nov 28 2024 at 19:03):

Let me see if I can dream up a guess for how the functor H:Cop[J,Set]H: C^{\mathrm{op}} \to [J, \mathsf{Set}] should work. The context is that we have a JJ-shaped diagram D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}] of presheaves on CC.

On objects, I expect HH to send cCc \in C to the JJ-shaped diagram in Set\mathsf{Set} given by evaluating each presheaf in our original diagram at cc.

view this post on Zulip David Egolf (Nov 28 2024 at 19:06):

On morphisms, I don't have a guess for what HH should do yet. If f:ccf:c \to c' in CopC^{\mathrm{op}}, H(f):H(c)H(c)H(f):H(c) \to H(c'). This is to be a natural transformation from H(c):JSetH(c):J \to \mathsf{Set} to H(c):JSetH(c'):J \to \mathsf{Set}. To specify it, it suffices to specify its components.

view this post on Zulip David Egolf (Nov 28 2024 at 19:12):

So let λ:jk\lambda:j \to k in JJ. Here's the naturality square for λ\lambda:
naturality square

view this post on Zulip David Egolf (Nov 28 2024 at 19:14):

Let's see if we can figure out a guess for H(f)j:H(c)(j)H(c)(j)H(f)_j:H(c)(j) \to H(c')(j). Now, H(c):JSetH(c):J \to \mathsf{Set} is a diagram of sets obtained by evaluating each presheaf at cc. If we just grab the jj-th set from that diagram, this should just be what we get when we evaluate the jj-th presheaf at cCc \in C.

view this post on Zulip David Egolf (Nov 28 2024 at 19:15):

So H(f)j:H(c)(j)H(c)(j)H(f)_j:H(c)(j) \to H(c')(j) should be a function from (the set obtained by evaluating the jj-th presheaf in DD at cc) to (the set obtained by evaluating the jj-th presheaf in DD at cc').

view this post on Zulip David Egolf (Nov 28 2024 at 19:21):

The jj-th presheaf is D(j):CopSetD(j):C^{\mathrm{op}} \to \mathsf{Set}. This is a functor, so we have D(j)(f):D(j)(c)D(j)(c)D(j)(f): D(j)(c) \to D(j)(c'). I think we have D(j)(c)=H(c)(j)D(j)(c) = H(c)(j) and D(j)(c)=H(c)(j)D(j)(c') = H(c')(j). So, D(j)(f):H(c)(j)H(c)(j)D(j)(f):H(c)(j) \to H(c')(j).

view this post on Zulip David Egolf (Nov 28 2024 at 19:24):

I can now form a guess for what H:Cop[J,Set]H:C^{\mathrm{op}} \to [J,\mathsf{Set}] does on morphisms. It takes a morphism f:ccf:c \to c' in CopC^{\mathrm{op}} and sends it to the natural transformation from H(c):JSetH(c):J \to \mathsf{Set} to H(c):JSetH(c'):J \to \mathsf{Set} with jj-th component given by D(j)(f):H(c)(j)H(c)(j)D(j)(f):H(c)(j) \to H(c')(j).

view this post on Zulip David Egolf (Nov 28 2024 at 19:28):

I'd next want to check that this guess makes the above naturality square commute. But I'll stop here for today.

view this post on Zulip John Baez (Nov 28 2024 at 23:03):

David Egolf said:

Let me see if I can dream up a guess for how the functor H:Cop[J,Set]H: C^{\mathrm{op}} \to [J, \mathsf{Set}] should work. The context is that we have a JJ-shaped diagram D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}] of presheaves on CC.

I'm losing track of what you're doing. Are you trying to turn a functor

D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}]

into a functor

H:Cop[J,Set]H: C^{\mathrm{op}} \to [J, \mathsf{Set}] ?

When I look at these two I think:

Oh, they're basically just the same thing! DD takes a guy (= an object or a morphism) in JJ and produces something that takes a guy in CopC^{\text{op}} and gives a set. HH takes a guy in CopC^{\text{op}} and produces something that takes a guy in JJ and gives a set.

In other words, for any guys jJj \in J and cCopc \in C^{\text{op}}, which could be objects or morphisms, we have

D(j)(c)=H(c)(j) D(j)(c) = H(c)(j)
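At the level of ordinary curried functions this is just argument swapping, as in Haskell's flip (a toy illustration and nothing more; d and h are made-up names):

```haskell
-- If d takes a j and then a c, the "flipped" h takes a c and then a j,
-- and d j c == h c j for all inputs.
d :: Int -> String -> Bool
d j c = length c == j

h :: String -> Int -> Bool
h = flip d
```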

view this post on Zulip David Egolf (Nov 28 2024 at 23:07):

Yes, @John Baez that is what I'm trying to do! I agree that D(j)(c)=H(c)(j)D(j)(c) = H(c)(j) for objects c,jc,j, but I hadn't considered that the same equation could hold for morphisms!

view this post on Zulip John Baez (Nov 28 2024 at 23:09):

Okay. It's good to notice that when you have a two-variable functor like D()()D(-)(--) or H()()H(--)(-), it makes sense when both - and -- are objects, when both of them are morphisms, and also when one is an object and another is a morphism!

view this post on Zulip John Baez (Nov 28 2024 at 23:10):

So, it's good to do as many computations as you can while remaining noncommital about whether the variables are objects or morphisms. Then you can effectively do multiple computations at once.

view this post on Zulip David Egolf (Nov 28 2024 at 23:13):

That sounds cool! I'm not quite understanding yet how that equation can make sense when both the things we feed in our morphisms.

Let's say f:ccf:c \to c' in CopC^{\mathrm{op}}. Then H(f)H(f) is a natural transformation between two functors in [J,Set][J, \mathsf{Set}]. I'm not seeing how it makes sense to then feed a morphism in JJ to H(f)H(f); I don't think of H(f)H(f) as something that takes in morphisms - it's just a bunch of component functions.

view this post on Zulip John Baez (Nov 28 2024 at 23:13):

That's indeed a good way to make it confusing!

view this post on Zulip John Baez (Nov 28 2024 at 23:16):

Now maybe you're at the stage before you're convinced that Cat is cartesian closed. Once you're convinced, you know that

H ⁣:Cop[J,Set]H \colon C^{\text{op}} \to [J,\mathsf{Set}]

is just another way of talking about

H~ ⁣:Cop×JSet \tilde{H} \colon C^{\text{op}} \times J \to \mathsf{Set}

and this has no trouble eating a morphism in CopC^{\text{op}} and a morphism in JJ and producing a morphism in Set\mathsf{Set}.

view this post on Zulip John Baez (Nov 28 2024 at 23:18):

But what you're wondering is how

H ⁣:Cop[J,Set]H \colon C^{\text{op}} \to [J,\mathsf{Set}]

eats a morphism in CopC^{\text{op}} and a morphism in JJ and produces a morphism in Set\mathsf{Set}.

view this post on Zulip David Egolf (Nov 28 2024 at 23:21):

Scrolling way back, I reached a point where I needed to know what HH does to morphisms. That's what I would like to understand.

I already am convinced that Cat\mathsf{Cat} is cartesian closed. So I agree with you when you talk about how the functor HH is just another way of talking about some functor H~\tilde{H}. I think my problem is that I don't understand exactly how moving across this adjunction actually works.

view this post on Zulip David Egolf (Nov 28 2024 at 23:22):

It's one thing to know that I can exchange one functor for another using this adjunction. It's another thing to understand exactly what functor I get out from this exchange process.

view this post on Zulip David Egolf (Nov 28 2024 at 23:23):

It's possible my original approach was just a needlessly painful way to go about things, and that this could all be avoided with a different strategy.

view this post on Zulip John Baez (Nov 28 2024 at 23:24):

To see "directly" how

H ⁣:Cop[J,Set]H \colon C^{\text{op}} \to [J,\mathsf{Set}]

eats a morphism in CopC^{\text{op}} and a morphism in JJ and produces a morphism in Set\mathsf{Set} - that is, to get myself into trouble as you just have, and get back out - I need to remember more precisely how

H~ ⁣:Cop×JSet \tilde{H} \colon C^{\text{op}} \times J \to \mathsf{Set}

eats a morphism f:ccf : c \to c' in CopC^{\text{op}} and a morphism g:jjg: j \to j' in JJ and produces a morphism in Set\mathsf{Set}.

On the one hand, we just take the morphism (f,g):(c,j)(c,j)(f,g) : (c,j) \to (c',j') and feed it in H~\tilde{H}. But on the other hand, it's good to remember that

(f,g)=(f,1j)(1c,g)=(1c,g)(f,1j) (f,g) = (f , 1_{j'}) \circ (1_c, g) = (1_{c'}, g) \circ (f, 1_j)

so we can play various tricks.

view this post on Zulip John Baez (Nov 28 2024 at 23:26):

This formula I just urged you to "remember", which may have some typos in it, allows you to break things down in a way where you don't bust your brain wondering what does a natural transformation do to a morphism???

view this post on Zulip John Baez (Nov 28 2024 at 23:26):

(What it does to a morphism, ultimately, is give a commutative square, meaning an equation of some sort... so, in a way it doesn't do much!)

view this post on Zulip John Baez (Nov 28 2024 at 23:32):

This formula

(f,g)=(f,1j)(1c,g)=(1c,g)(f,1j) (f,g) = (f , 1_{j'}) \circ (1_c, g) = (1_{c'}, g) \circ (f, 1_j)

is a completely general formula that says: when you've got a morphism in a product of categories, you can always write it as

a morphism that only changes the second coordinate, followed by a morphism that only changes the first coordinate,

or

a morphism that only changes the first coordinate, followed by a morphism that only changes the second coordinate.

This is why, when you have a functor going out of a product category like Cop×JC^{\text{op}} \times J, you never need to think about what it does to a pair of morphisms, one in CopC^{\text{op}} and one in JJ. You can always just think about what it does to a pair consisting of one object and one morphism.
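The Haskell analogue of this observation is how Bifunctor works: acting on a pair of morphisms can always be computed one slot at a time, in either order (a small illustration using the standard Data.Bifunctor API):

```haskell
import Data.Bifunctor (bimap, first, second)

-- For pairs, bimap f g = first f . second g = second g . first f.
example, example' :: (Int, String)
example  = bimap (+1) (++ "!") (41, "hello")             -- (42,"hello!")
example' = (first (+1) . second (++ "!")) (41, "hello")  -- same result
```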

view this post on Zulip John Baez (Nov 28 2024 at 23:33):

And I feel that allows us to avoid the problem you were facing (and ultimately also confront that problem and solve it).

view this post on Zulip David Egolf (Nov 28 2024 at 23:50):

That formula is interesting!

view this post on Zulip David Egolf (Nov 28 2024 at 23:52):

By the way, I think I ran across earlier today a way that one can think of applying a natural transformation to a morphism. One starts by contemplating the naturality square for the morphism in question, which is f:XYf:X \to Y in the picture below:
naturality square

view this post on Zulip David Egolf (Nov 28 2024 at 23:52):

Then one can define α(f)=G(f)αX=αYF(f):F(X)G(Y)\alpha(f) = G(f) \circ \alpha_X = \alpha_Y \circ F(f):F(X) \to G(Y).

view this post on Zulip David Egolf (Nov 28 2024 at 23:53):

I don't know if that will be helpful in this context, but it might be!

view this post on Zulip David Egolf (Nov 29 2024 at 00:01):

I like the formula (f,g)=(f,1j)(1c,g)(f,g) = (f, 1_{j'}) \circ (1_c, g). I don't immediately see how to use it to figure out what HH does to a morphism. Maybe something will occur to me when I give this another proper try, hopefully tomorrow.

view this post on Zulip John Baez (Nov 29 2024 at 00:04):

Well, I think you ran into trouble figuring out what H()()H(-)(--) is when both - and -- are morphisms, and this formula helps with that. If both of them are morphisms, you can use the formula I gave to reduce to the case where just one is an interesting morphism, and the other is an identity morphism (and thus essentially an object). Since

(f,g)=(f,1j)(1c,g)=(1c,g)(f,1j) (f,g) = (f , 1_{j'}) \circ (1_c, g) = (1_{c'}, g) \circ (f, 1_j)

we get

H(f,g)=H(f)(1j)H(1c)(g)=H(1c)(g)H(f)(1j) H(f,g) = H(f )(1_{j'}) \circ H(1_c)(g) = H(1_{c'})(g) \circ H(f)(1_j)

and I would usually write this simply as

H(f,g)=H(f)(j)H(c)(g)=H(c)(g)H(f)(j) H(f,g) = H(f)(j') \circ H(c)(g) = H(c')(g) \circ H(f)(j)

view this post on Zulip John Baez (Nov 29 2024 at 00:04):

I could be mixed up, but I feel this would help you.

view this post on Zulip David Egolf (Nov 29 2024 at 00:11):

That sounds helpful - thanks! Unfortunately the pieces aren't quite coming together for me right now. I'm not even sure I could clearly explain my point of confusion at the moment. I'll plan to sleep on this and give it a solid try tomorrow.

view this post on Zulip John Baez (Nov 29 2024 at 00:12):

Sleep solves many math problems! Good night!

view this post on Zulip Peva Blanchard (Nov 29 2024 at 12:49):

Sometimes, finding the relevant notation can help.

Let's start with H:Cop[J,Set]H : C^{op} \to [J, \text{Set}].

For every object cc, H(c)H(c) is a functor from JJ to Set\text{Set}.
Hence, for every object cCopc \in C^{op} and jJj \in J, we have

H(c)(j)SetH(c)(j) \in \text{Set}

And, for every morphisms f:jjf : j \to j' in JJ we have a morphism

H(c)(f):H(c)(j)H(c)(j)H(c)(f) : H(c)(j) \to H(c)(j')

Now, let g:ccg : c \to c' be a morphism of CC. H(g)H(g) should be a natural transformation from the functor H(c):JSetH(c') : J \to \text{Set} to the functor H(c):JSetH(c) : J \to \text{Set}. I find it useful to denote a natural transformation explicitly as a family of morphisms, here indexed by a variable jj running over the objects of JJ. I.e.,

H(g)(j):H(c)(j)H(c)(j)H(g)(j) : H(c')(j) \to H(c)(j)

In other words, H(g)(j)H(g)(j) is the component of the natural transformation H(g)H(g) at object jj.

I find that these notations, and John's hint, should help in defining the functor H~:Cop×JSet\tilde{H} : C^{op} \times J \to \text{Set} on objects and morphisms.

view this post on Zulip David Egolf (Nov 29 2024 at 19:11):

Thanks to both of you, I feel that I understand this adjunction a lot better now!

view this post on Zulip David Egolf (Nov 29 2024 at 19:13):

I'm going to try and start this puzzle over again, using the recent discussion to help solve it. I'm going to try to keep my discussion of the puzzle much more concise this time.

view this post on Zulip John Baez (Nov 29 2024 at 22:10):

Great!

view this post on Zulip David Egolf (Nov 29 2024 at 22:52):

We start with a JJ-shaped diagram D:J[Cop,Set]D:J \to [C^{\mathrm{op}}, \mathsf{Set}] of presheaves on CC, where JJ is a small category. Our goals are to: (1) describe a candidate colimit for this diagram and (2) show that the candidate colimit really is a colimit.

view this post on Zulip David Egolf (Nov 29 2024 at 22:55):

We will make use of this adjunction: Cat(J×Cop,Set)Cat(J,[Cop,Set])\mathsf{Cat}(J \times C^{\mathrm{op}}, \mathsf{Set}) \cong \mathsf{Cat}(J, [C^{\mathrm{op}}, \mathsf{Set}]). Given DD we can use this bijection to know there is a corresponding D:J×CopSetD':J \times C^{\mathrm{op}} \to \mathsf{Set}.

We have D(j,c)=D(j)(c)D'(j, c) = D(j)(c) on objects.

A morphism in J×CopJ \times C^{\mathrm{op}} is of the form (g:jj,f:cc)(g:j \to j', f:c \to c'). We can rewrite this as (g,1c)(1j,f)(g, 1_{c'}) \circ (1_j, f). So to describe what DD' does on morphisms it suffices to describe D(g,1c)D'(g, 1_{c'}) and D(1j,f)D'(1_j,f).

We have D(g,1c)=D(g)(c)D'(g, 1_{c'}) = D(g)(c') . Here D(g)D(g) is a natural transformation from D(j)D(j) to D(j)D(j'), which are both functors :CopSet:C^{\mathrm{op}} \to \mathsf{Set}. By D(g)(c)D(g)(c') we mean the cc' component of D(g)D(g).

We also have D(1j,f)=D(j)(f)D'(1_j, f) = D(j)(f). Here D(j):CopSetD(j):C^{\mathrm{op}}\to \mathsf{Set} and f:ccf:c \to c' in CopC^{\mathrm{op}}, so it makes sense to directly supply ff to D(j)D(j).

view this post on Zulip David Egolf (Nov 29 2024 at 22:59):

We now have defined D:J×CopSetD':J \times C^{\mathrm{op}} \to \mathsf{Set}.

Next, we introduce a functor Fc:JJ×CopF_c:J \to J \times C^{\mathrm{op}}, defined by Fc=(1J,Δc)F_c = (1_J, \Delta_c), where 1J:JJ1_J:J \to J is the identity functor and Δc:JCop\Delta_c:J \to C^{\mathrm{op}} is the functor constant at cCc \in C.

We can then form DFc:JSetD' \circ F_c:J \to \mathsf{Set}. Since Set\mathsf{Set} has colimits, we can take the colimit of this diagram to get some set, colim (DFc)\mathrm{colim~}(D' \circ F_c).
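(As a concrete reminder, and not needed for the abstract argument: a colimit of a small diagram in Set\mathsf{Set} can always be computed as a quotient of a disjoint union. Under that standard description, we'd have

colim (DFc)(jJD(j)(c))/\mathrm{colim~}(D' \circ F_c) \cong \Big( \coprod_{j \in J} D(j)(c) \Big) \Big/ \sim

where \sim is the equivalence relation generated by xD(g)c(x)x \sim D(g)_c(x) for every morphism g:jjg:j \to j' in JJ and every xD(j)(c)x \in D(j)(c).)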

view this post on Zulip David Egolf (Nov 29 2024 at 23:01):

Intuitively, DFcD' \circ F_c is the diagram obtained by evaluating each of our presheaves at cc, and by taking the cc-th component of each natural transformation D(g)D(g) as gg ranges over morphisms in JJ.

view this post on Zulip David Egolf (Nov 29 2024 at 23:03):

Given a morphism f:ccf:c \to c' we define a natural transformation α(f):DFcDFc\alpha(f):D' \circ F_c \to D' \circ F_{c'}, by setting α(f)j=D(j)(f)\alpha(f)_j = D(j)(f). (To be concise, I won't spell out here the details involved with checking that this really is a natural transformation.)

view this post on Zulip David Egolf (Nov 29 2024 at 23:05):

For each morphism f:ccf:c \to c', we then obtain a natural transformation colim (DFc)colim (DFc)\mathrm{colim~}(D' \circ F_c) \to \mathrm{colim~}(D' \circ F_{c'}) by using the universal property of a colimit in Set\mathsf{Set}:
diagram

view this post on Zulip David Egolf (Nov 29 2024 at 23:06):

Here ucu_c and ucu_{c'} are the natural transformations corresponding to the universal cones with apex colim (DFc)\mathrm{colim~}(D' \circ F_c) and colim (DFc)\mathrm{colim~}(D' \circ F_{c'}) under the diagrams DFcD' \circ F_c and DFcD' \circ F_{c'}, respectively.

I will use (colim D)(f)(\mathrm{colim~}D)(f) to refer to the unique dashed natural transformation that makes the above diagram commute.

view this post on Zulip David Egolf (Nov 29 2024 at 23:10):

We can now complete the object part of goal (1) by specifying our candidate colimit for DD, which I'll call colim D:CopSet\mathrm{colim~}D:C^{\mathrm{op}} \to \mathsf{Set}: on objects it sends cCc \in C to colim (DFc)\mathrm{colim~}(D' \circ F_c), and on morphisms it sends f:ccf:c \to c' to (colim D)(f)(\mathrm{colim~}D)(f) as defined above.

view this post on Zulip David Egolf (Nov 29 2024 at 23:14):

To complete goal (1), we also need to give a universal cone under DD with apex colim D\mathrm{colim~}D. For each position jj in our diagram DD, we need a natural transformation λ(j):D(j)colim D\lambda(j):D(j) \to \mathrm{colim~}D.

We can achieve this by setting λ(j)c=(uc)j\lambda(j)_c = (u_c)_j for each cCc \in C, where (uc)j(u_c)_j is the jj-th component of the natural transformation uc:(DFc)colim (DFc)u_c:(D' \circ F_c) \to \mathrm{colim~}(D' \circ F_c). (To be concise, I will not spell out here the verification that λ(j)\lambda(j) is really a natural transformation.)

view this post on Zulip David Egolf (Nov 29 2024 at 23:28):

Defining each λ(j):D(j)colim D\lambda(j):D(j) \to \mathrm{colim~}D in this way does indeed define a cone under DD with tip colim D\mathrm{colim~}D. (Again, I won't spell out the verification here).

So we have accomplished goal (1); we have found a colimit candidate for our diagram DD.

view this post on Zulip David Egolf (Nov 29 2024 at 23:32):

It remains to show that colim D\mathrm{colim~}D is really a colimit of DD. So, for any other cone under DD, we need to show there is a unique morphism of cones from our candidate cone (the one with tip colim D\mathrm{colim~}D) to that cone.

diagram

view this post on Zulip David Egolf (Nov 29 2024 at 23:34):

Referencing the diagram above, we have a cone under DD with tip XX and "legs" of the form α(j)\alpha(j). We wish to show there is a unique natural transformation f:colim DXf:\mathrm{colim~} D \to X that induces a morphism of cones from our candidate colimit cone to the cone with tip XX.

view this post on Zulip David Egolf (Nov 29 2024 at 23:37):

Earlier, we saw that there is an evaluation functor e:Cop×[Cop,Set]Sete:C^{\mathrm{op}} \times [C^{\mathrm{op}}, \mathsf{Set}] \to \mathsf{Set}. We also have a functor [Cop,Set]Cop×[Cop,Set][C^{\mathrm{op}}, \mathsf{Set}] \to C^{\mathrm{op}} \times [C^{\mathrm{op}}, \mathsf{Set}] induced by the constant functor at cCc \in C and the identity functor of [Cop,Set] [C^{\mathrm{op}}, \mathsf{Set}].

By composing these two, we obtain a functor :[Cop,Set]Set:[C^{\mathrm{op}}, \mathsf{Set}] \to \mathsf{Set} that evaluates any presheaf at cc.

view this post on Zulip David Egolf (Nov 29 2024 at 23:40):

Since applying a functor preserves composition, we can "evaluate at cc" the diagram above to get another commuting diagram. This tells us that the cc-th component of ff is forced to be the function fcf_c that makes the following diagram commute:
diagram

view this post on Zulip David Egolf (Nov 29 2024 at 23:44):

Thus, if ff exists, it is unique. It remains to show that all these fcf_c assemble to form a natural transformation, and that this natural transformation corresponds to the morphism of cones pictured earlier.

view this post on Zulip David Egolf (Nov 30 2024 at 00:06):

We begin by checking that ff is a natural transformation. So, for any h:abh:a \to b in CopC^{\mathrm{op}} we want to show that this naturality square commutes:
naturality square

view this post on Zulip David Egolf (Nov 30 2024 at 00:08):

I'm running out of steam, so I'll stop here for now. I think I got further than I did last time, at least!

Next time, I may try to work out an example, to see how the fcf_c assemble to form a naturality square in practice. Hopefully that will help! But if I can't figure this out next time, I may give up and consult "Categories for the Working Mathematician".

view this post on Zulip David Egolf (Dec 01 2024 at 18:53):

This feels super close to working, and I had an idea. The morphisms colim D(h)\mathrm{colim~}D(h) and fbf_b are both induced by universal properties. I think it could be helpful to see if the morphism they compose to is also given by a universal property.

view this post on Zulip David Egolf (Dec 01 2024 at 19:01):

Using slightly different notation, this is our situation:
diagram

view this post on Zulip David Egolf (Dec 01 2024 at 19:02):

D(a)D(a) means the diagram evaluated at aa, and similarly for D(b)D(b). To avoid overloading α\alpha, I'm now using vv to refer to the cone under DD with tip XX.

view this post on Zulip David Egolf (Dec 01 2024 at 19:03):

This diagram illustrates that we have a cone under D(a)D(a) with tip X(b)X(b), given by composing the natural transformations vbα(h):D(a)X(b)v_b \circ \alpha(h):D(a) \to X(b).

view this post on Zulip David Egolf (Dec 01 2024 at 19:06):

The universal property of colim D(a)\mathrm{colim~}D(a) then ensures that there is a unique morphism !:colim D(a)X(b)!:\mathrm{colim~}D(a) \to X(b) such that !ua=vbα(h)! \circ u_a = v_b \circ \alpha(h). But because the rectangle and the triangle in the above diagram commute, this unique morphism must be fbcolim D(h)f_b \circ \mathrm{colim~}D(h).

view this post on Zulip David Egolf (Dec 01 2024 at 19:09):

My next idea is to try and show that X(h)faX(h) \circ f_a also satisfies this condition, and hence by uniqueness is equal to fb(colim D)(h)f_b \circ (\mathrm{colim~}D)(h).

view this post on Zulip David Egolf (Dec 01 2024 at 19:33):

We obtain a new diagram, where our goal is to show that (X(h)fa)ua=vbα(h)(X(h) \circ f_a) \circ u_a = v_b \circ \alpha(h):
diagram

view this post on Zulip David Egolf (Dec 01 2024 at 19:36):

If we can show that two particular sub-diagrams commute, we can paste them together to conclude that the outermost part of this diagram commutes. We want to show:

  1. X(h)va=vbα(h)X(h) \circ v_a = v_b \circ \alpha(h)
  2. faua=vaf_a \circ u_a = v_a.

view this post on Zulip David Egolf (Dec 01 2024 at 19:40):

We immediately have faua=vaf_a \circ u_a = v_a because faf_a is by definition the unique morphism that makes this triangle commute.

view this post on Zulip David Egolf (Dec 01 2024 at 19:45):

The condition X(h)va=vbα(h)X(h) \circ v_a = v_b \circ \alpha(h) looks a lot like a naturality square condition.

This still feels tricky, but I think it's something I could ask a question about concisely. Maybe I'll start a new thread to ask a specific related question. [edit: I have now done so!]