I'm going to try an experiment, where I work through John Baez's topos theory blog posts while "thinking out loud" in this topic. (For discussion of this idea, see #meta: meta > "learning out loud"? ).
A few disclaimers: I'm not sure how far I'll get! Also, my discussion here is unlikely to be self-contained; if you want to follow along, you may need to reference the blog posts. And finally, I am almost certainly going to make a lot of mistakes!
Please feel free to join in, or just to point out when I'm confused about something! I can't guarantee I'll have the energy to give you a full response if you do so, but please know that I will still appreciate whatever you choose to post.
I'm going to take a puzzle/exercise-based approach. I find it helps me focus my thoughts to have a particular thing I'm trying to figure out. (Sometimes I'll even jump straight to an exercise before reading a section! Then that exercise helps motivate my reading.)
The first exercise I want to contemplate is this:
In many of these examples something nice happens. First, suppose we have $s \in F U$ and an open cover of $U$ by open sets $U_i$. Then we can restrict $s$ to $U_i$ getting something we can call $s|_{U_i}$. We can then further restrict this to $U_i \cap U_j$. And by the definition of presheaf, we have
$(s|_{U_i})|_{U_i \cap U_j} = (s|_{U_j})|_{U_i \cap U_j}$
In other words, if we take a guy in $F U$ and restrict it to a bunch of open sets covering $U$, the resulting guys agree on the overlaps $U_i \cap U_j$. Check that this follows from the definition of functor and some other facts!
The exercise is "Check that...", right? Do you have an idea of where to start? :wink:
Morgan Rogers (he/him) said:
The exercise is "Check that...", right? Do you have an idea of where to start? :wink:
Yes, that's right! And I think I do have an idea of where to start! It'll take me a minute to type it out, though.
We have a functor $F : \mathcal{O}(X)^{\mathrm{op}} \to \mathsf{Set}$. Here, $\mathcal{O}(X)$ is the category of open sets of a topological space $X$, where we have a unique morphism from $U$ to $V$ exactly if $U \subseteq V$. I think of a morphism from $U$ to $V$ as saying "$V$ contains $U$".
The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$! In particular, this diagram commutes for any $i$ and $j$:
diagram
Next, I'll use the fact that functors send commutative diagrams to commutative diagrams. That means that this diagram in $\mathsf{Set}$ also commutes:
diagram
By $r_{U \to V}$ I mean the "restriction function" that restricts things from $U$ to $V$, for $V \subseteq U$. This is the image under $F$ of the unique morphism from $U$ to $V$ in $\mathcal{O}(X)^{\mathrm{op}}$.
Now, let's pick some $s \in F U$. Since this diagram commutes, we have that $r_{U_i \to U_i \cap U_j} \circ r_{U \to U_i}(s) = r_{U_j \to U_i \cap U_j} \circ r_{U \to U_j}(s)$. I believe this is just different notation for the thing we wanted to prove!
(side question: how do you make the diagrams (pictures) so fast?)
Peva Blanchard said:
(side question: how do you make the diagrams (pictures) so fast?)
I use this amazing website to draw the diagrams: https://q.uiver.app/ .
Then all I have to do is take screenshots, and paste them into my draft!
Alright, we next have our first official "puzzle"!
Puzzle. Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a presheaf and in fact a sheaf.
I'll start by seeking to show that $F$ is a presheaf.
To show that $F$ is a presheaf, we need to show that it is a functor $F : \mathcal{O}(\mathbb{R})^{\mathrm{op}} \to \mathsf{Set}$. Now, for each open $U \subseteq \mathbb{R}$, we have that $F U$ is the set of continuous real-valued functions on $U$. To talk about "continuous" functions means that there needs to be some topology on $U$. I think it's reasonable to assume that $U$ is equipped with the subspace topology it inherits from $\mathbb{R}$.
I'm a bit intrigued by the fact that we then have $F U = \mathrm{Hom}_{\mathsf{Top}}(U, \mathbb{R})$, where $U$ is equipped with the subspace topology. This leads me to consider the functor $\mathrm{Hom}_{\mathsf{Top}}(-, \mathbb{R}) : \mathsf{Top}^{\mathrm{op}} \to \mathsf{Set}$.
I'd now like to dream up some functor $G : \mathcal{O}(\mathbb{R})^{\mathrm{op}} \to \mathsf{Top}^{\mathrm{op}}$, so that we can express $F$ as $F = \mathrm{Hom}_{\mathsf{Top}}(-, \mathbb{R}) \circ G$.
The first idea that comes to mind is to let $G$ act as follows: it sends an open set $U \subseteq \mathbb{R}$ to $U$ equipped with the subspace topology, and it sends the morphism corresponding to $V \subseteq U$ to (the morphism in $\mathsf{Top}^{\mathrm{op}}$ corresponding to) the inclusion function $\iota : V \to U$.
It remains to show that $G$ is really a functor, and that $F = \mathrm{Hom}_{\mathsf{Top}}(-, \mathbb{R}) \circ G$.
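Spelling out, in symbols, what this composite should do on an open set $U$ (if it really does equal $F$):
$$(\mathrm{Hom}_{\mathsf{Top}}(-, \mathbb{R}) \circ G)(U) \;=\; \mathrm{Hom}_{\mathsf{Top}}(G(U), \mathbb{R}) \;=\; \{\, g : U \to \mathbb{R} \mid g \text{ is continuous} \,\} \;=\; F U.$$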
I would have gone for a more direct proof that $F$ is a functor.
spoiler
To show that $F$ is a functor, I think the only tricky thing to check (unless I'm missing something) is as follows: We want to show that if $V$ and $U$, with $V \subseteq U$, are open subsets of $\mathbb{R}$ equipped with the subspace topology inherited from $\mathbb{R}$, then the inclusion function $\iota_{V \to U} : V \to U$ is continuous.
To show this, I want to use this property of the subspace topology: "Let $S$ be a subspace of $X$ and let $\iota : S \to X$ be the inclusion map. Then for any topological space $T$, a map $g : T \to S$ is continuous if and only if the composite map $\iota \circ g : T \to X$ is continuous".
In our case, we have the inclusion map $\iota_{V \to U} : V \to U$ and the two inclusions to $\mathbb{R}$, namely $\iota_{V \to \mathbb{R}} : V \to \mathbb{R}$ and $\iota_{U \to \mathbb{R}} : U \to \mathbb{R}$. Since $\iota_{V \to \mathbb{R}} = \iota_{U \to \mathbb{R}} \circ \iota_{V \to U}$ and $\iota_{V \to \mathbb{R}}$ is continuous, we conclude that $\iota_{V \to U}$ is also continuous.
@Peva Blanchard I was strongly considering aiming for a more direct proof! I'll be interested to take a look at the "spoiler" in a bit!
It remains to show that $F = \mathrm{Hom}_{\mathsf{Top}}(-, \mathbb{R}) \circ G$. To show this, it suffices to show that the two functors agree on each object and on each morphism.
Let's consider an object $U$ in $\mathcal{O}(\mathbb{R})^{\mathrm{op}}$. So, $U$ is an open subset of $\mathbb{R}$. First, we note that $\mathrm{Hom}_{\mathsf{Top}}(-, \mathbb{R}) \circ G$ first equips $U$ with the subspace topology and then gives the set of all real-valued continuous functions on $U$. That is exactly $F U$.
Next, we consider a morphism from $U$ to $V$ in $\mathcal{O}(\mathbb{R})^{\mathrm{op}}$ (so $V \subseteq U$). $G$ sends this to the morphism in $\mathsf{Top}^{\mathrm{op}}$ corresponding to the (continuous) inclusion function $\iota : V \to U$. Then, $\mathrm{Hom}_{\mathsf{Top}}(-, \mathbb{R})$ sends that to a function $\mathrm{Hom}_{\mathsf{Top}}(U, \mathbb{R}) \to \mathrm{Hom}_{\mathsf{Top}}(V, \mathbb{R})$. Here, it acts by precomposition, so it sends a continuous function $g : U \to \mathbb{R}$ to the function $g \circ \iota : V \to \mathbb{R}$. We note that $\mathrm{Hom}_{\mathsf{Top}}(-, \mathbb{R}) \circ G$ is indeed acting to restrict functions, as desired.
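To record the key point in one line: precomposition with an inclusion really is restriction. For $g \in \mathrm{Hom}_{\mathsf{Top}}(U, \mathbb{R})$ and the inclusion $\iota : V \to U$,
$$(g \circ \iota)(x) = g(x) \ \text{ for all } x \in V, \qquad \text{i.e.} \qquad g \circ \iota = g|_V.$$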
Peva Blanchard said:
I would have gone for a more direct proof that $F$ is a functor.
spoiler
I've taken a look at this now! Thanks for chiming in! I went for the less direct approach in part because it felt like it made it easier for me to realize that there are topology things going on. (The "reduction ad obvious-um" is great :laughing:, but I wanted to try and work out the details this time!)
I think there's going to be a bit more topology involved in showing that also satisfies the sheaf condition... :upside_down:
:smile: I see, I was not sure which level of details you wanted to work at.
Btw, I find this format very nice. I read John's blog post series on topos not that long ago, but I never took the time to do the puzzles in detail. If you don't mind I'll probably join you on some puzzles, using the "spoiler" feature.
Peva Blanchard said:
:smile: I see, I was not sure which level of details you wanted to work at.
Btw, I find this format very nice. I read John's blog post series on topos not that long ago, but I never took the time to do the puzzles in detail. If you don't mind I'll probably join you on some puzzles, using the "spoiler" feature.
That sounds awesome!
(deleted)
David Egolf said:
I'm going to take a puzzle/exercise-based approach. I find it helps me focus my thoughts to have a particular thing I'm trying to figure out. (Sometimes I'll even jump straight to an exercise before reading a section! Then that exercise helps motivate my reading.)
The first exercise I want to contemplate is this:
In many of these examples something nice happens. First, suppose we have $s \in F U$ and an open cover of $U$ by open sets $U_i$. Then we can restrict $s$ to $U_i$ getting something we can call $s|_{U_i}$. We can then further restrict this to $U_i \cap U_j$. And by the definition of presheaf, we have
$(s|_{U_i})|_{U_i \cap U_j} = (s|_{U_j})|_{U_i \cap U_j}$
In other words, if we take a guy in $F U$ and restrict it to a bunch of open sets covering $U$, the resulting guys agree on the overlaps $U_i \cap U_j$. Check that this follows from the definition of functor and some other facts!
I’ll aim to attempt this exercise today.
Julius Hamilton said:
I’ll aim to attempt this exercise today.
Sounds great! By the way, I think it's very much in the spirit of this topic to "think out loud" a bit on these exercises. So, if you feel like it, please feel free to share some thoughts on the exercise - whether you're stuck on it or whether you have completed it!
Here's the puzzle I was working on above:
Puzzle. Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a presheaf and in fact a sheaf.
We saw above that $F$ is a presheaf. It remains to show that $F$ is a sheaf.
$F$ is a sheaf if we can do the following: given an open set $U$, an open cover $\{U_i\}$ of $U$, and continuous functions $f_i \in F U_i$ that agree on the overlaps $U_i \cap U_j$, we can always glue them together into a unique $f \in F U$ with $f|_{U_i} = f_i$.
To give an analogy to imaging, we might think of each $f_i$ as a picture of some region in space. Then we would like to be able to "stitch together" a bunch of pictures that agree on their overlaps to get one picture of a larger area. Depending on what conditions we place on the pictures in question, this may or may not be possible! For example if we care about "plausibility" in some sense, note that we can't always stitch together plausible images of small areas to make a plausible image of some larger area.
Let us assume that we have selected some open set $U$, a bunch of open sets $U_i$ so that $\bigcup_i U_i = U$, and for each $i$ an $f_i \in F U_i$ so that these selected continuous real-valued functions "agree on overlaps" in the sense mentioned above. We wish to show that these can be glued together to form a real-valued continuous function $f \in F U$ that restricts to $f_i$ on each $U_i$, and further that this resulting function is unique.
Now, a function $f \in F U$ is completely determined by the value that it takes at each point in $U$. Since $\bigcup_i U_i = U$, an arbitrary point $x$ of $U$ is in some $U_i$. And since $f|_{U_i} = f_i$ we have $f(x) = f_i(x)$. So, the value of $f$ is determined at each point once we pick $f_i$ for each $i$. So, if $f$ exists, it is certainly unique.
Let's define $f$ as follows. For $x \in U$, if $x \in U_i$, let $f(x) = f_i(x)$. This definition gives us a function, because if $x \in U_i$ and $x \in U_j$, then $f_i(x) = f_j(x)$. It remains to show that $f$ is continuous.
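To spell out why this is well-defined, using the fact that the $f_i$ agree on overlaps: if $x \in U_i$ and $x \in U_j$, then $x \in U_i \cap U_j$, so
$$f_i(x) \;=\; (f_i|_{U_i \cap U_j})(x) \;=\; (f_j|_{U_i \cap U_j})(x) \;=\; f_j(x).$$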
At this point, I recall the "gluing lemma" from topology (the quote below is from "Introduction to Topological Manifolds" by Lee):
Let $X$ and $Y$ be topological spaces, and let $\{A_i\}$ be either an arbitrary open cover of $X$ or a finite closed cover of $X$. Suppose that we are given continuous maps $f_i : A_i \to Y$ that agree on overlaps: $f_i|_{A_i \cap A_j} = f_j|_{A_i \cap A_j}$. Then there exists a unique continuous map $f : X \to Y$ whose restriction to each $A_i$ is equal to $f_i$.
This lemma assures us that $f$ is indeed continuous! I think that concludes this puzzle. (If I was working this out offline, I'd consider trying to prove the relevant part of this "gluing lemma". But to keep this topic a bit more focused, I'll not do that here).
Here's another (possibly tricky) puzzle for you. When you proved that $F$ was a presheaf, you introduced another functor, $G$. Does this proof that $F$ is a sheaf have some formulation that involves $G$?
(To make things a bit simpler, I suggest getting rid of the "op"s in the definition of $G$.)
I would like to see a proof of the gluing lemma! To my mind this is the most interesting part of the whole puzzle. It's also not hard to prove. In fact, I never knew anyone had stated it formally as a "lemma" in some book.
The reason it's important is this: to show that $\mathbb{R}$-valued continuous functions form a sheaf, it turns out that the crucial step - the only step where something about continuity matters - is the step where you show that continuity is a "local" property.
That is, you can check if a function is continuous by running around your space, looking at each point, and asking "is the function continuous here?" Your function is continuous iff the answer is "yes" at each point.
Later I give an example of a property that's not local: namely, for an $\mathbb{R}$-valued function to be bounded is not local. So there's no sheaf of bounded $\mathbb{R}$-valued functions.
So, by outsourcing this "gluing lemma" to some textbook, I think you're missing out on an insight that this puzzle was designed to deliver.
David Egolf said:
This lemma assures us that $f$ is indeed continuous! I think that concludes this puzzle. (If I was working this out offline, I'd consider trying to prove the relevant part of this "gluing lemma". But to keep this topic a bit more focused, I'll not do that here).
This gluing business is really what clicked for me when trying to understand sheaves. Continuity is a "local" property, so it goes well with gluing. I'll try to prove the gluing lemma.
spoiler
Thanks for pointing that out, @John Baez ! (And @Reid Barton , thanks for suggesting another puzzle - I'll potentially take a look at it in a bit! It sounds intriguing.)
I was already a bit "spoiled" on the proof of the gluing lemma, when I looked it up in my book earlier. Lee uses what he calls the "Local Criterion for Continuity" to prove the lemma in the case where the $A_i$ form an open cover of $X$. Here is a statement of the "Local Criterion for Continuity" from Lee's book:
A map $f : X \to Y$ between topological spaces is continuous if and only if each point of $X$ has a neighborhood on which (the restriction of $f$) is continuous.
If we accept this, then we can use this to prove that our $f$ is continuous (we recall that $f|_{U_i} = f_i$). Let us pick an arbitrary point $x$ from $U$. We want to show that there is a neighborhood of $x$ on which the restriction of $f$ is continuous. Since $\bigcup_i U_i = U$, we know that $x \in U_i$ for some $i$. The restriction of $f$ to $U_i$ is $f_i$, which we know is continuous. By the "Local Criterion for Continuity", we conclude that $f$ is continuous.
It might be good to take this a step further and prove the "Local Criterion for Continuity" described above. I'll probably give this a try in a bit!
Yes, that "local criterion for continuity" is the key step. It's actually a wonderful fact: while continuity is defined in a somewhat "global" way for a function , there turns out to be a concept of a function being "continuous at a point", such that is continuous iff it is continuous at each point .
If you know how to prove this in your sleep, then of course there's no need to prove it here! But otherwise, it's worth thinking about.
David Egolf said:
Julius Hamilton said:
I’ll aim to attempt this exercise today.
Sounds great! By the way, I think it's very much in the spirit of this topic to "think out loud" a bit on these exercises. So, if you feel like it, please feel free to share some thoughts on the exercise - whether you're stuck on it or whether you have completed it!
That’s exactly what I want to do. I get self conscious that my amateur sloppiness feels like fluff. But I will. One sec.
David Egolf said:
I'm going to take a puzzle/exercise-based approach. I find it helps me focus my thoughts to have a particular thing I'm trying to figure out. (Sometimes I'll even jump straight to an exercise before reading a section! Then that exercise helps motivate my reading.)
The first exercise I want to contemplate is this:
In many of these examples something nice happens. First, suppose we have $s \in F U$ and an open cover of $U$ by open sets $U_i$. Then we can restrict $s$ to $U_i$ getting something we can call $s|_{U_i}$. We can then further restrict this to $U_i \cap U_j$. And by the definition of presheaf, we have
$(s|_{U_i})|_{U_i \cap U_j} = (s|_{U_j})|_{U_i \cap U_j}$
In other words, if we take a guy in $F U$ and restrict it to a bunch of open sets covering $U$, the resulting guys agree on the overlaps $U_i \cap U_j$. Check that this follows from the definition of functor and some other facts!
I live under time constraints (like all of us) so if it seems like I could easily get the answers to these questions just by reading more, I’m just trying to make it clear that I was encouraged by David egolf to “think out loud” and it is helpful for me to be able to learn on-the-go like this. Thanks.
I took a real analysis class as an undergraduate but did not take a lot of other standard math classes. I never really studied topology.
The definition seems simple (I already looked it up). I want to make my thinking super rigorous which is why I am trying to formulate everything in Coq lately. That’s been fun but means I also need to learn more about Coq itself.
(This is meant to be on-topic, I’m saying I prefer to learn by expressing math in Coq).
Set is a fundamental keyword in Coq - I think a Type. I know there is the meta-type Sort as well. I am not clear in some regards how Coq treats types and sets differently. For example, I don’t know if Types have zero implication of their size, ie how many terms are assumed to exist in any given type.
I need to express "some specific element / term of type Set, but I don't know which one (because it's meant to be arbitrary)". I think this is the Parameter keyword but not sure yet.
Parameter X : Set.
A topology is arguably of Coq Sort “Record”. I understand Record to be yet another of these mathematical ways of describing “collections” of things. A Record tries to have no commitment to any theories of mathematics, like Sets do. A record is a Type, but it can contain “multiple things”. So, a topology is:
A Record T
Consisting of a set X
And a set O of subsets of X
Such that the 3 topology conditions hold. They are:
Empty set and X are in S.
Arbitrary unions.
Finite intersections.
(I never took the time to think about why unions can be arbitrary but intersections have to be finite).
Actually, Topology should not be a record, it should use keyword Definition, since a Record is better for a single instance of an object? Not sure.
Record T : Type :=
| Parameter X : Set
| Parameter S : Set :=
(assume we just import a “power set function” until I have time to define one myself)
| (I haven’t learn how to state constraints on a type yet, but I need to state here the topology requirements. I guess it’s of type Prop, since they are Boolean. Some like Definition has_empty : S -> Prop := (assume we import a definition of empty set and set membership).
And so on.
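For what it's worth, here is a rough sketch of the kind of Record I have in mind - the names (Topology, is_open, and so on) are just my own choices, the topology requirements are stated as Prop-valued fields, and binary intersection plus the whole space stands in for "finite intersections":
(* A sketch, not checked against any particular Coq development. *)
Record Topology (X : Type) : Type := {
  (* which predicates on X count as open sets *)
  is_open : (X -> Prop) -> Prop;
  (* the empty set and the whole space are open *)
  open_empty : is_open (fun _ => False);
  open_full  : is_open (fun _ => True);
  (* arbitrary unions of open sets are open *)
  open_union : forall (I : Type) (f : I -> (X -> Prop)),
      (forall i : I, is_open (f i)) ->
      is_open (fun x => exists i : I, f i x);
  (* binary (hence finite) intersections of open sets are open *)
  open_inter : forall U V : X -> Prop,
      is_open U -> is_open V ->
      is_open (fun x => U x /\ V x)
}.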
There’s my thinking aloud before I head off to work. I wanted to work up to questions I had about presheafs. I was stuck on thinking about inclusion mappings.
Baez’s post mentions that we need to reverse the direction of the arrows (O(X)^{op}) and I was trying to fully understand.
I have actually been spending a lot of time thinking about what the real nature of being “functional” is. I know the common definition. But I want to see clearly why the properties of categories come from mappings being functional (sometimes).
An inclusion mapping is functional. It maps an element of a subset of S to the same element in the set S. How do you define that mathematically?
I guess we can express “mapped to itself” with an expression like f: S1 -> S such that S1 \subset S and f(x) = x. This is allowed by the axiom of extensionality? Though they are elements in different sets, there is already an equality relation defined on the elements of the two respective sets.
How can we reverse an inclusion mapping? It wouldn’t be functional. An inclusion mapping is injective but not surjective.
Thanks!
Thanks for joining in @Julius Hamilton ! I don't know enough about Coq to understand your questions relating to it (hopefully someone else here does, though!).
I can talk a bit about how I understand $\mathcal{O}(X)$ and $\mathcal{O}(X)^{\mathrm{op}}$ though, in case that is helpful to you.
Yeah please do
In my understanding, $\mathcal{O}(X)$ is a category that we can create when we're given a topology on the set $X$. To create this category, we need to say what its objects and morphisms are.
As you mentioned above, any topology on $X$ has a collection of subsets of $X$ - the "open sets". The open sets of our topology are the objects of $\mathcal{O}(X)$.
And given two open sets $U$ and $V$ in our topology on $X$, we can ask this question: Is $U$ a subset of $V$? If the answer to this question is "yes!", then we put a morphism from $U$ to $V$. Otherwise, we don't put a morphism from $U$ to $V$.
Then $\mathcal{O}(X)^{\mathrm{op}}$ is another category: it's just like $\mathcal{O}(X)$ except we turn all the arrows around. So now we put a morphism from $U$ to $V$ exactly if $V \subseteq U$.
I don't really think of these morphisms as functions. I think of them more like "yes!" answers to a yes/no question.
I'm not sure I was really addressing your point of confusion :sweat_smile:. Hopefully this is still somewhat helpful!
Ok. Yeah that simplifies it dramatically.
I think I was confused regarding how simple a morphism can be. I’ll think more about that.
In many of these examples something nice happens. First, suppose we have $s \in F U$ and an open cover of $U$ by open sets $U_i$.
s is some arbitrary set.
U is an open set in O(X).
Does it matter if it turned out that s was a set in O(X)? Since O(X) is surely a sub-category of Set?
I’ll assume an open cover of U is a union of sets in O(X) such that U \subset of the union.
Then we can restrict $s$ to $U_i$ getting something we can call $s|_{U_i}$.
Baez’s post said we need to flip the morphisms precisely so we can “restrict” the functor. So this is saying, “only those elements of s such that there exists an element x in U_i for which F(x) \in s”. ?
We can’t do this without flipping the morphisms? I’d like to think about how. (Thinking out loud :wink:)
The notation $s \in F U$ means that $s$ is an element of the set $F U$. For example, depending on what $F$ does, $s$ might be a continuous real-valued function with domain $U$.
@Julius Hamilton - I think it's very helpful to focus on a specific example of a sheaf when trying to understand the definition of sheaf, and (unsurprisingly) I recommend the example David is talking about, where $FU$ is the set of continuous functions $f : U \to \mathbb{R}$.
Or, if "continuous" is distracting to you, think about the sheaf where $FU$ is the set of all functions $f : U \to \mathbb{R}$.
Or, if $\mathbb{R}$ is distracting to you, replace it with some other set.
If you read all the sheaf axioms keeping some example like this in mind, they should make more sense.
(The reason I picked a sheaf of continuous functions is because topos theory originated as a generalization of topology, as the name suggests - so ideas from topology can help explain what people are doing in topos theory. There are also other ways to get into topos theory, but my course notes - and the book they're based on - start with topology. Luckily you only need to know a small amount of topology.)
In the spirit of trying to better understand why we can detect continuity of a function in a local way, I'll now try to prove this:
A map $f : X \to Y$ between topological spaces is continuous if and only if each point of $X$ has a neighborhood on which (the restriction of $f$) is continuous.
I got stuck along the way. To get unstuck, I referenced Lee's book on topological manifolds. So, what I wrote below closely follows the proof in Lee's book.
If $f$ is continuous, then for any point $x \in X$, we have that $X$ itself is a neighborhood of $x$ on which the restriction of $f$ (which is just $f$) is continuous.
For the other direction, we assume that each point of $X$ has a neighborhood (an open set) on which the restriction of $f$ is continuous. We wish to show that $f$ must then be continuous. To show that $f$ is continuous, we consider an arbitrary open subset $V$ of $Y$. We wish to show that the preimage of $V$ under $f$ is open in $X$.
I'll call this preimage $R = f^{-1}(V)$. Now, a set $R$ is open exactly if every point in $R$ is contained in an open set contained in $R$. Thinking of openness in this way seems likely helpful, as it involves a condition that can be checked at each point - and we are trying to understand why continuity involves a condition that can be checked at each point.
Now, we know that any $x \in R$ has some neighborhood $U_x$ such that $f|_{U_x}$ is continuous. Since $f|_{U_x}$ is continuous and $V$ is open, $(f|_{U_x})^{-1}(V)$ is also open in the subspace topology on $U_x$. Thus, it is the intersection of some open set (in the topology on $X$) with $U_x$. Since that set is open and $U_x$ is open, $(f|_{U_x})^{-1}(V)$ is also open (in the topology on $X$).
We are hoping that $(f|_{U_x})^{-1}(V)$ is an open subset of $R$ that contains $x$. This would provide the neighborhood of "breathing room" about $x$ in $R$ that we need for $R$ to be open.
By definition, $(f|_{U_x})^{-1}(V)$ is the subset of $U_x$ that maps into $V$ under $f$. So, its elements are exactly those points that are: (1) in $U_x$ and (2) map into $V$ under $f$. Thus, $(f|_{U_x})^{-1}(V) = U_x \cap f^{-1}(V)$. Note that $x \in U_x \cap f^{-1}(V)$. We also note that $U_x \cap f^{-1}(V)$ is a subset of $R = f^{-1}(V)$.
We conclude that an arbitrary point $x$ of $R$ has a neighborhood contained in $R$. Therefore $R$ is open, and $f$ is continuous.
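Compressing the argument into one line (with the names used above): for each $x \in R$,
$$x \;\in\; U_x \cap f^{-1}(V) \;=\; (f|_{U_x})^{-1}(V) \;\subseteq\; f^{-1}(V) = R,$$
and $U_x \cap f^{-1}(V)$ is open in $X$, so every point of $R$ has an open neighborhood inside $R$.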
John Baez said:
Julius Hamilton - I think it's very helpful to focus on a specific example of a sheaf when trying to understand the definition of sheaf, and (unsurprisingly) I recommend the example David is talking about, where $FU$ is the set of continuous functions $f: U \to \mathbb{R}$.
Or, if "continuous" is distracting to you, think about the sheaf where $FU$ is the set of all functions $f: U \to \mathbb{R}$.
Or, if $\mathbb{R}$ is distracting to you, replace it with some other set.
If you read all the sheaf axioms keeping some example like this in mind, they should make more sense.
A presheaf is a functor from $\mathcal{O}(X)^{\mathrm{op}}$ to $\mathsf{Set}$. (Just restating definitions to exercise myself).
Baez says that we can see why we would want to take the opposite category of $\mathcal{O}(X)$ if we think of one possible presheaf sending each open set $U$ to the set in $\mathsf{Set}$ that contains all real-valued functions over $U$.
Let’s consider some examples.
If $X$ is the real numbers, let's consider a common topology over the reals. (Which is?)
Topologies are a way to express geometric concepts. Why are they so fundamentally defined in terms of “open sets”?
Perhaps it has to do with continuity and limits. Maybe it allows us to define the epsilon-delta condition without recourse to a distance metric?
David Egolf said:
The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$!
Why?
Julius Hamilton said:
David Egolf said:
The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$!
Why?
I'm sure there are more sophisticated ways of thinking about this, but that is how I approach it.
Julius Hamilton said:
David Egolf said:
The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$!
Why?
Here's how I think of it ("it" being "all diagrams commute in a poset"):
When we say (part of) a diagram "commutes", we mean that two different sequences of morphisms compose to the same morphism. If a category is a poset, it has at most one morphism from to , for any objects and . Therefore, if I have two different sequences of morphisms from to , when I compose the morphisms in each sequence there's only one possible morphism for me to get as a result!
Consequently, these two sequences of morphisms must compose to the same morphism. And hence the corresponding (part of a) diagram must commute.
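As a concrete instance of this: if $A \xrightarrow{f} B \xrightarrow{g} C$ and $A \xrightarrow{h} D \xrightarrow{k} C$ are two paths in a poset, then $g \circ f$ and $k \circ h$ are both morphisms from $A$ to $C$, and since a poset has at most one morphism from $A$ to $C$,
$$g \circ f = k \circ h.$$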
David Egolf said:
Julius Hamilton said:
David Egolf said:
The first thing I want to note is that $\mathcal{O}(X)^{\mathrm{op}}$ is a poset. Consequently, all diagrams commute in $\mathcal{O}(X)^{\mathrm{op}}$!
Why?
Here's how I think of it ("it" being "all diagrams commute in a poset"):
When we say (part of) a diagram "commutes", we mean that two different sequences of morphisms compose to the same morphism. If a category is a poset, it has at most one morphism from $A$ to $B$, for any objects $A$ and $B$. Therefore, if I have two different sequences of morphisms from $A$ to $B$, when I compose the morphisms in each sequence there's only one possible morphism for me to get as a result!
Consequently, these two sequences of morphisms must compose to the same morphism. And hence the corresponding (part of a) diagram must commute.
That is such a beautifully simple explanation. You have a knack for clear simple understanding.
Eric M Downes said:
- Can you distinguish between the "generator arrows" of a poset, and the morphisms only in the closure?
I’d never thought of that before. I’m curious to know what those elements might be called in an abstract algebra setting. I’ve been thinking about how a one object category is a monoid, and if there is a corresponding abstract algebraic structure for a “multi-object category”. The thing is, not all the arrows (elements of the “structure”) compose with one another. All algebraic structures I know of are defined by closure and by “totality”.
I think your point is that all the arrows in a thin category are generators. Now I have to think about how categories can have non-generator arrows.
I think the most basic way of expressing the compositional requirement of the arrows in a category is, “if they can compose, they do.” “If they are composable, then compose.”
I’ll come to the other points you made in a bit.
David Egolf said:
Now, let's pick some $s \in FU$. Since this diagram commutes, we have that $r_{U_i \to U_i \cap U_j} \circ r_{U \to U_i}(s) = r_{U_j \to U_i \cap U_j} \circ r_{U \to U_j}(s)$. I believe this is just different notation for the thing we wanted to prove!
I believe I follow your argument piece by piece but want to digest it more.
Every diagram in a poset commutes.
Functors preserve commutative diagrams (why)?
In $\mathcal{O}(X)^{\mathrm{op}}$, the morphisms essentially say "contains".
If we think of $F$ as mapping an open set to the functions defined on that set, we can think of the "contains" morphism in $\mathcal{O}(X)^{\mathrm{op}}$ as corresponding, under $F$, to a restriction of some function to some subset of its domain.
Baez basically just asks us to show that if you take some $s \in F U$ (which can be thought of as a function) and restrict it to an open subset $U_i$, and also restrict it to some other subset $U_j$, you can further restrict both of those restricted functions to $U_i \cap U_j$, and they are the same. Which David showed.
Questions:
David Egolf said:
Alright, we next have our first official "puzzle"!
Puzzle. Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a presheaf and in fact a sheaf.
I'll start by seeking to show that $F$ is a presheaf.
In order to show it is a presheaf, I think we have to show $\mathbb{R}$ has a natural topology and $\mathcal{O}(\mathbb{R})$ forms a thin category, and then that $F$ fulfills the functor axioms (if we reverse the direction of the arrows). I've been trying to tell myself "a functor is a morphism in the category of categories" as a single idea to remind myself of the definition. I think the important thing is if two arrows in $\mathcal{O}(\mathbb{R})^{\mathrm{op}}$ compose, then their images must compose in such a way that $F(g \circ f) = F(g) \circ F(f)$. Which basically says, "when you map the morphisms over, you can take the composition before, or you can take it after".
I’ll follow the rest of David’s proof a little later.
I don't have energy right now to respond in detail to your comments, @Julius Hamilton. But I did notice that above you asked "why?", regarding the fact that functors send commutative diagrams to commutative diagrams. You might find it a helpful exercise to pick a particular (simple) commutative diagram, and then aim to show that applying a functor to that diagram yields a commutative diagram.
All good brotha rest up. Yes I will think about that.
To prove it, I need to first formalize what it means for a diagram to be commutative.
As an aside, in an alternative definition, one could take preserving diagram commutativity as a defining characteristic of a functor; then recover the usual composition law by applying this to a simple diagram. Not as efficient as a technical definition, but seems conceptually useful.
Before moving on to the next puzzle, I want to think about this for a bit:
Reid Barton said:
Here's another (possibly tricky) puzzle for you. When you proved that $F$ was a presheaf, you introduced another functor, $G$. Does this proof that $F$ is a sheaf have some formulation that involves $G$?
The words "possibly tricky" strike some fear into my heart, but this sounds fun to think about - so I'll see what I can figure out...
First, let's recall the context:
Imagine we have two open sets $U_1$ and $U_2$ (in the standard topology on $\mathbb{R}$) with $U = U_1 \cup U_2$. For $F$ to be a sheaf, I think we need to be able to construct a unique real-valued continuous function $f \in F U$ from a pair of real-valued continuous functions $f_1 \in F U_1$ and $f_2 \in F U_2$, provided that $f_1$ and $f_2$ agree on $U_1 \cap U_2$. [Or should they agree in $F(U_1 \cap U_2)$? Still figuring this out...]
We also want to be able to go the other way: given the $f$ constructed from $f_1$ and $f_2$ above, we want to be able to recover $f_1$ and $f_2$ from $f$ by appropriately restricting $f$.
When we have two batches of information that we want to be equivalent, that makes me start to think of limits or colimits!
I'm pretty sure what I just wrote above isn't quite right. Or, at least, I'm quite confused about it.
But here's a picture illustrating the very rough idea I have in mind, that I'm still working to clearly spell out:
picture
This diagram is in $\mathcal{O}(\mathbb{R})$.
Very roughly, I'm starting to wonder if $F$ should do something like "preserving pullbacks", if $F$ is to be a sheaf. But it will take me some thinking to express this idea more clearly!
I think you're on the right track. I think all this can be simplified a bit.
I go into this in a bit more detail in Part 3 of my course. Just two short paragraphs. But you seem to be enjoying discovering this stuff on your own, which is really better.
If you try to develop a subject on your own, the way you're doing, it can become much easier to understand what people are doing when you read the 'official' treatment.
By the way, another tiny point:
David Egolf said:
Here is a statement of the "Local Criterion for Continuity" from Lee's book:
A map $f : X \to Y$ between topological spaces is continuous if and only if each point of $X$ has a neighborhood on which (the restriction of $f$) is continuous.
I'm losing track of who said what where, but I think I saw someone derive this "local criterion for continuity" from something more basic, which might be called the "local criterion for openness":
A subset $S$ of a topological space is open $\iff$ each point of $S$ is contained in some open set contained in $S$.
This is amusingly easy to prove. For the $\Rightarrow$ direction just take the open set to be $S$ itself. For the $\Leftarrow$ direction just note $S = \bigcup_{x \in S} U_x$, where $U_x$ is an open set with $x \in U_x \subseteq S$, and use the fact that a union of open sets is open.
So we can say openness is a 'local' condition: to see if a set is open, you can run around checking some condition about all its points, and the set is open iff this condition holds for all its points.
And this implies that continuity is also a local condition: to see if a function is continuous, you can run around checking some condition at all points of its domain, and the function is continuous iff this condition holds for all those points.
Yes, I made implicit use of this "local criterion for openness" above! Before doing so, I had never noticed this connection between continuity being "locally detectable" and openness being "locally detectable"! Cool stuff! :smile:
Yes!
And there's a "for all" in both of these "local criteria". I haven't thought about it hard, but I bet this is connected to the fact that the sheaf condition can be stated in terms of limits. (A "for all" is a limit, and the pullback you were looking at is also a limit.)
I guess for now the main moral is that sheaves are all about "locally detectable" properties.
I'm wondering then what makes a property "local".
It is as if it was something that is trivially parallelizable: I imagine a (possibly uncountable) set of agents that would check for each point whether the property holds "around" that point, and, most importantly, they don't need to communicate/synchronize. And the agents are indistinguishable (each of them runs the same check procedure).
That sounds right! It's a nice thought. I can try to make it slightly more precise. There's an agent for each point. Each agent runs the same check procedure, which can be checked in an arbitrarily small neighborhood of their point. Then at the end we report the answer "true" if and only if they all get the answer "true".
Important properties of functions like continuity, differentiability, smoothness, analyticity, upper and lower continuity, and measurability all work like this.
Later in my posts I talk about the idea of a 'germ', which is connected to this stuff.
Oh yes, I remember now the formal definition of 'germ', but it's the first time that I get an inverse "tree-like" mental picture of it. Here's what I have in mind.
Initially, we have one agent that checks whether the property holds over the open set $U$. The agent can spawn (possibly uncountably many) new agents, that are clones of their genitor, each of them being responsible for an open subset of $U$. The parent agent reports true iff all its children report true. And the process continues like this.
This is a very poor algorithm: depending on the topological space, this could take a transfinite number of steps and a transfinite number of agents.
What I find amusing is that the opposite category of open subsets of $X$ somehow describes this big clone-spawning branching process. The points of $X$ are exactly the (infinitely long) branches.
ps: To be more precise, I should force agents to merge when they work on the same open subset.
The idea of "local agents that can work in parallel" reminds me a whole lot of an ultrasound reconstruction technique I know of, where the reconstruction at each point can be computed in isolation of the reconstruction at different points. But this is not quite analogous because the agents in this case would report a number, not just a "yes!" or "no!" regarding whether some property holds.
One could also consider checking a reconstructed image point by point, and at each point asking if the reconstruction at that point is "plausible" (in some sense) given the observations. (This would probably involve assessing whether the observed data "relevant to this point" is similar enough relative to what we'd expect if our reconstruction as this point reflects the true object).
However, just because a reconstructed image agrees (in some sense) with the observed data at each point does not imply that the entire reconstructed image agrees with the observed data. So, in this example, "reconstruction plausibility" is not "locally detectable".
This has been a fruitful bonus question to think about!
However, starting next week, I'm hoping to move on to the next puzzle in the blog post, which is this:
Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of bounded continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a separated presheaf but not a sheaf.
David Egolf said:
The idea of "local agents that can work in parallel" reminds me a whole lot of an ultrasound reconstruction technique I know of, where the reconstruction at each point can be computed in isolation of the reconstruction at different points. But this is not quite analogous because the agents in this case would report a number, not just a "yes!" or "no!" regarding whether some property holds.
I'll cheat a bit (because I remember later posts from John's series). I think these Boolean agents actually are the truth values of the topos of sheaves over $X$. When the root agent reports "yes!", then the property holds everywhere. When one of its children does, then the property holds on the associated subset. I think we can think of this "collection of boolean agents over $X$" as a specific sheaf.
spoiler
Now regarding agents that would report numbers. I think we should remember that the parent agent aggregates the results of its children. In the "yes/no" case, the aggregation is simply a big conjunction. If, on the other hand, "point"-children report numbers, then the parent can aggregate just by making a tuple out of them, indexed by their location. In other words, each agent really reports a real-valued function, and the parent aggregation process amounts to gluing the functions reported by its children, i.e., taking a categorical limit.
I will stop there, as otherwise, it's just going to be another burrito tutorial.
Anyway, a lot of things clicked today, so thank you :)
Good stuff, Peva! I'm too tired to think hard about what you said, so I'll just report the slogan "topos theory is the study of local truth".
Do presheaf categories still have subobject classifiers if Set does not have a subobject classifier because one is a constructive predicativist?
Did you already see the nLab page [[predicative topos]]?
Julius Hamilton said:
Eric M Downes said:
- Can you distinguish between the "generator arrows" of a poset, and the morphisms only in the closure?
I’d never thought of that before. I’m curious to know what those elements might be called in an abstract algebra setting. I’ve been thinking about how a one object category is a monoid, and if there is a corresponding abstract algebraic structure for a “multi-object category”.
"Generators" is most common for such elements in most contexts; famously the symmetric group can be generated by just two maps and . What is the operation under which a permutation group specifically can be said to be closed?
A family of subsets can generate a topology . The topology is the closure under arbitrary unions and finite intersections. Is a topology a category?
There is such a thing as a delooping; simplest context is a finite commutative monoid, in which every element is an object. Draw an arrow just when (Green's relations). Take your favorite finite commutative monoid (without inverses if you want to deal with fewer arrows), how few arrows can you specify, such that asserting closure under composition of arrows fills in the rest of the cayley table?
The above arrow drawing requires associativity of elements. "Magmas" are the non-associative binops. For a familiar structure that is closed in an elemental sense but not closed in another very meaningful sense, consider the rock-paper-scissors magma
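Concretely, with elements $r$, $p$, $s$ and $x \cdot y$ defined to be whichever of $x$ and $y$ wins (and $x \cdot x = x$):
$$r \cdot s = s \cdot r = r, \qquad p \cdot r = r \cdot p = p, \qquad s \cdot p = p \cdot s = s,$$
so for instance $(r \cdot p) \cdot s = p \cdot s = s$ while $r \cdot (p \cdot s) = r \cdot s = r$.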
This binary operator is not associative. You can rephrase the associativity condition as a kind of closure (or lack-thereof) under a certain familiar operation. What is it? How many elements must the closed structure have?
Julius Hamilton said:
I think your point is that all the arrows in a thin category are generators. Now I have to think about what how categories can have non-generator arrows.
No, you can have non-generator arrows in a thin category. Consider a poset with $a \le b \le c$: there are two "generator" arrows, $a \to b$ and $b \to c$;
what third arrow must also be present?
(And, having answered that question, and recalling there is at most one arrow between any two objects in a thin category, you should understand why all diagrams* in a thin category commute.)
Here's the next puzzle I want to work through:
Let $X = \mathbb{R}$ and for each open set $U \subseteq \mathbb{R}$ take $F U$ to be the set of bounded continuous real-valued functions on $U$. Show that with the usual concept of restriction of functions, $F$ is a separated presheaf but not a sheaf.
We start by showing $F$ is a functor $F : \mathcal{O}(\mathbb{R})^{\mathrm{op}} \to \mathsf{Set}$, which means it is a presheaf. On objects, $F$ sends an open set $U$ to the set of bounded continuous real-valued functions on $U$. Note: to determine if a function from $U$ is continuous, we need to put a topology on $U$. To talk about continuity, we equip $U$ with the subspace topology it inherits from $\mathbb{R}$.
On morphisms, $F$ sends a morphism from $U$ to $V$ in $\mathcal{O}(\mathbb{R})^{\mathrm{op}}$ (so $V \subseteq U$) to the corresponding restriction function, which sends a bounded continuous real-valued function $g : U \to \mathbb{R}$ to $g \circ \iota$, where $\iota : V \to U$ is the inclusion map. We saw earlier that inclusion maps like $\iota$ are continuous, and therefore the restriction of a continuous function is continuous. Further, restricting a bounded function yields a bounded function.
For each object $U$ in $\mathcal{O}(\mathbb{R})^{\mathrm{op}}$, $F$ sends the identity morphism to the identity function on $F U$. This is because restricting a function to its own domain leaves the function unchanged.
If we have the situation $W \subseteq V \subseteq U$ in $\mathcal{O}(\mathbb{R})$, then we have $F(V \to W) \circ F(U \to V) = F(U \to W)$. That is because restricting a function to some domain in two steps yields the same result as restricting it to that domain all at once.
We conclude that $F$ is a functor and hence a presheaf.
Next, we show that $F$ is a separated presheaf but not a sheaf. If $F$ was a sheaf, we'd always be able to do the following: given an open set $U$, an open cover $\{U_i\}$ of $U$, and bounded continuous functions $f_i \in F U_i$ that agree on the overlaps, glue them together into a function $f \in F U$ with $f|_{U_i} = f_i$.
If this $f$ always exists and is unique, then $F$ is a sheaf. If $f$ doesn't always exist, but is unique when it exists, then $F$ is a separated presheaf.
In this puzzle, if $f$ exists it is unique. For any $x \in U$, since $\bigcup_i U_i = U$, we have that $x \in U_i$ for some $i$. Then, since $f|_{U_i} = f_i$, we have that $f(x) = f_i(x)$. So, the value of $f$ at every point is fixed (if it exists) once we pick all our $f_i$.
But $f$ doesn't always exist! That's because if you glue together an infinite number of bounded real-valued continuous functions that agree on overlaps, you don't always get a bounded function! Intuitively, if you run around and check that each little bit of a function is locally bounded, you can't conclude that the whole thing is bounded.
We conclude that $F$ is a separated presheaf and not a sheaf.
There is quite a bit of discussion before the next puzzle! So, I'll try to introduce the next puzzle a little.
To my understanding, part of the goal of the next puzzle is to work towards a notion of morphism between categories of sheaves. And since each category of sheaves is an "elementary topos", this is relevant for thinking about morphisms between elementary topoi.
And why do we care about morphisms between topoi? Here are a couple possible reasons:
At any rate, here is the next puzzle:
Show that $f_* F$ is a presheaf. That is, explain how we can restrict an element of $(f_* F)(U)$ to any open set contained in $U$, and check that we get a presheaf this way.
Here, $f_* F$ is defined as $(f_* F)(U) = F(f^{-1}(U))$ for each open subset $U$ of a topological space $Y$. Note that $f : X \to Y$ is a continuous function and $F$ is a presheaf on $X$. We also have $f^{-1}(U) \subseteq X$. Note that $f^{-1}(U)$ is open because $U$ is open and $f$ is continuous.
Roughly, our goal here is to make a presheaf on $Y$ given a continuous function $f : X \to Y$ and a presheaf on $X$.
David Egolf said:
In this puzzle, if $f$ exists it is unique. For any $x \in U$, since $\bigcup_i U_i = U$, we have that $x \in U_i$ for some $i$. Then, since $f|_{U_i} = f_i$, we have that $f(x) = f_i(x)$. So, the value of $f$ at every point is fixed (if it exists) once we pick all our $f_i$.
But $f$ doesn't always exist! That's because if you glue together an infinite number of bounded real-valued continuous functions that agree on overlaps, you don't always get a bounded function! Intuitively, if you run around and check that each little bit of a function is locally bounded, you can't conclude that the whole thing is bounded.
We conclude that $F$ is a separated presheaf and not a sheaf.
I wanted to provide a concrete counter-example.
Consider the topological space $X = (0, 1]$, the half-open unit interval, with the induced topology from $\mathbb{R}$. For all $n \geq 1$, let $U_n = (1/n, 1]$ and $f_n(x) = 1/x$ on $U_n$. The $U_n$ cover $X$, i.e. $\bigcup_n U_n = X$, and each $f_n$ is bounded. Moreover, for any $m \leq n$, $U_m \subseteq U_n$ and $f_m$ and $f_n$ obviously match on $U_m \cap U_n = U_m$. If the presheaf of bounded functions were a sheaf, there would exist a bounded function $f$ defined on $X$ such that $f|_{U_n} = f_n$. In particular, for all $x \in X$
$$f(x) = 1/x,$$
whence a contradiction, i.e., $F$ is not a sheaf.
There are variants of this idea: presheaf of Lipschitz functions (take or in the example above) (edit: works as well), presheaf of functions with bounded derivatives of order (just integrate times the previous examples).
Thanks for sharing some specific examples! I hadn't thought of those!
The counterexample I had in mind looks like this in picture form:
counterexample picture
The idea is to consider the identity function $\mathrm{id} : \mathbb{R} \to \mathbb{R}$ that sends $x$ to $x$. Then, we can get each $f_n$ by restricting $\mathrm{id}$ to, say, $U_n = (-n, n)$. Each $f_n$ is then bounded, and the collection of $f_n$ agrees on overlaps, but when we try to glue together all the $f_n$, our resulting function isn't bounded anymore.
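In symbols, with that choice of cover: each $f_n = \mathrm{id}|_{(-n, n)}$ satisfies $|f_n| < n$, and any two of them agree on overlaps because they are restrictions of one and the same function; but the only candidate for the glued function is
$$\mathrm{id}_{\mathbb{R}} : \mathbb{R} \to \mathbb{R},$$
which is not bounded, so no element of $F \mathbb{R}$ restricts to all the $f_n$.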
oh yes, nice! The common pattern to get a "non-sheafy presheaf" is to start from an invalid candidate defined globally such that the restrictions of this candidate to specific open subsets satisfy a condition.
Each invalid candidate has a sort of singularity at some point (e.g., $0$ in my example, and the point at infinity in yours), and then we just restrict to open subsets that avoid "just enough" this point.
Mmh, these examples do not work any more if $X$ is compact. Compact means that from any covering we can extract a finite cover of $X$. If $X$ is compact, is the presheaf of bounded functions on $X$ a sheaf? It seems so.
If $X$ is compact, couldn't it still have a non-compact open subset $U$? And then we could maybe set up an unbounded continuous function $g$ defined on $U$ to show that we don't have a sheaf. (Restrict $g$ to a bunch of $U_i$ where $\bigcup_i U_i = U$, and where $g|_{U_i}$ is bounded for each $i$. Then these glue together to $g$, which is unbounded. So then, we can't always glue together a bunch of compatible $f_i \in F U_i$ to make an element of $F U$.)
I'm not sure that a compact set can contain a non-compact open subset. (I'm thinking about the closed unit interval $[0, 1]$). (edit: oh my brain ...)
My initial thought was that something like $(0, 1)$ would be a non-compact open subset of the topological space $[0, 1]$. But I am a bit shaky on compactness, so maybe I'm just confused. (I'd need to review this stuff!)
Oh yes you're right!
I focused on the total space, but yes we can reproduce the example on any open subset.
Now, I'm looking for a topological space $X$ such that the presheaf of bounded functions is indeed a sheaf. From our discussion, it suffices that any open subset of $X$ be compact, right? I'm wondering what kind of space is that.
One example: any set $X$ equipped with the trivial topology ($\emptyset$, $X$ are the only open sets). Topologically, such a space behaves like a space with one point.
Another example: a finite set $X$, with any topology.
In other words: any set with a topology admitting a finite number of open sets.
I wonder if there are any examples where $X$ has an infinite number of open sets.
(Interesting stuff! I need to take a break for today - I have to manage my energy carefully - but of course please feel free to keep posting here.)
Sure! I'll probably continue under another topic, so as not to divert the purpose of yours.
Digression: here's another example of a separated presheaf that's not a sheaf, which I just thought of. Take $\mathbb{N}$ with its usual topology, where all subsets are open, and let $F$ be the presheaf where for any $S \subseteq \mathbb{N}$, $F S$ is the set of computable partial functions whose domain includes $S$.
So, simply put, $F S$ consists of all partially defined functions $f$ from the natural numbers to the natural numbers such that you can write a computer program which halts and spits out $f(n)$ when you input any $n \in S$.
For any finite $S$, these are all the functions from $S$ to $\mathbb{N}$.
But for $S = \mathbb{N}$ there are lots of functions from $\mathbb{N}$ to $\mathbb{N}$ that aren't computable.
Mmh, it's not as easy to come up with an explicit example (i.e., a witness of the non-sheafiness of $F$).
Here's my attempt.
Let $U_n = \{n\}$, with $\bigcup_n U_n = \mathbb{N}$.
Define $f_n : U_n \to \mathbb{N}$ as the function that outputs $1$ if the $n$-th Turing machine halts, and $0$ otherwise. All the $f_n$'s agree on the intersections (the $U_n$'s are disjoint).
If $F$ were a sheaf, then there would be a total computable function on $\mathbb{N}$ that would solve Turing's halting problem, whence a contradiction.
It's a bit weird, I had to convince myself that $f_n$ does belong to $F U_n$ (still not 100% sure). It seems trivially true since $U_n$ is finite. The strangeness comes from the fact that I am invoking the halting problem's oracle to define $f_n$.
In a sense, the covering of $\mathbb{N}$ by the $U_n$ acts like an oracle.
John Baez said:
So, simply put, $F S$ consists of all partially defined functions $f$ from the natural numbers to the natural numbers such that you can write a computer program which halts and spits out $f(n)$ when you input any $n \in S$.
This sounds cool! But I'm having a hard time wrapping my mind around it. Is the idea that we need a single program that takes in any $n \in S$ and then produces the corresponding $f(n)$? Or are we allowed to have different programs, say one for each $n$, to calculate $f(n)$?
I'm guessing it's the first - we need a single program that can handle any $n \in S$. Then if $f \in F S$ and $S$ is a singleton $\{m\}$, given $f(m)$ we can write a very short program that outputs $f(m)$ given $m$: just output $f(m)$.
Then if $S$ is finite, and we know $f(n)$ for each $n \in S$, we can still write a single program that will run in a finite amount of time, and that outputs $f(n)$ given $n$. We can just create a bunch of if/then statements that check to see if the given input value $n$ corresponds to the output value $f(n)$ as $n$ varies. Since $S$ is finite, there will be a finite number of if/then statements that run, and so the run-time will be finite.
But if $S$ is infinite, and there's no clever trick to figure out the values of $f$ quickly, then I was going to say that the approach I outlined above wouldn't always run in a finite amount of time, because we'd need an infinite number of if/then statements. But, for any finite $n$, I think we'd only have to run a finite number of if/then statements to look up the appropriate value for $f(n)$. So it seems like the runtime would be finite for any input $n$?
I think I must be confused on something!
Maybe the problem with the program I outline above (in the case where $S$ is infinite) is that it would need to be infinite in length (even though its runtime for any input $n$ would always be finite). That doesn't sound like a legitimate "computer program"!
David Egolf said:
John Baez said:
So, simply put, $F S$ consists of all partially defined functions $f$ from the natural numbers to the natural numbers such that you can write a computer program which halts and spits out $f(n)$ when you input any $n \in S$.
This sounds cool! But I'm having a hard time wrapping my mind around it. Is the idea that we need a single program that takes in any $n \in S$ and then produces the corresponding $f(n)$? Or are we allowed to have different programs, say one for each $n$, to calculate $f(n)$?
You need one program that takes in any $n \in S$ and halts after printing out $f(n)$.
The other option, one program for each $n$, would say that every function is computable. For any $n$ you can write a program which prints $f(n)$ when you input $n$.
This fact is exactly the reason why computable functions don't form a sheaf!
I can take $\mathbb{N}$ and cover it with singletons. The restriction of any function $f : \mathbb{N} \to \mathbb{N}$ to any singleton is computable, even if $f$ is not computable.
In simple rough terms: we're failing to get a sheaf because you can't always glue together infinitely many programs into one program.
Or even more tersely: computability of functions is not a local property.
From math overflow, another example of presheaves that are not sheaves: presheaves of constant functions
i.e., generally one can't glue compatible constant functions, defined on the open sets of a cover, into a global constant function
Nice! We were talking about examples of separated presheaves that are not sheaves, and the presheaf of constant functions is actually a separated presheaf.
Reminder: a presheaf $F$ is a sheaf if given sections $s_i \in F U_i$ on open sets $U_i$ covering $U$ which agree when restricted to the overlaps $U_i \cap U_j$, there exists a unique section $s$ on $U$ that restricts to each of the $s_i$. If we have uniqueness but perhaps not existence, then our presheaf is called separated. As David showed a while back, the presheaf of bounded real-valued functions on a space $X$ is separated but usually not a sheaf.
Btw, one reason this concept is important is that there's a trick called 'sheafification' that turns a presheaf into a sheaf. One way to do it involves doing a certain maneuver twice. The first pass turns the presheaf into a separated presheaf, and then second pass turns it into a sheaf! It's kind of amazing.
It's probably too technical to get into now, but in case anyone cares, this maneuver is called the "plus construction", and you can read about it on the nLab.
Cool, thanks for the reminder about the separated aspect. All these examples put a good spotlight on the glueing condition. Now that we've solidly established the definition of a sheaf, which feels rather substantive, I will somewhat naively now ask: what are a couple of cool things that we can do with sheaves, in at least a semi-applied sense? I'm sure there are many; just fishing around here for some favorites.
p.s. I know that we're headed towards the topos side of town; in this question I'm fishing around for some good immediate / semi-concrete applications. For example, they're somehow going to give us insight into the structure of manifolds? Or stuff in computer science, ...
(If this would go beyond a few high level points, it could be spun off into a separate topic)
Here is one example of application in network dynamic theory: Opinion dynamics on discourse sheaves.
This paper, which I want to read someday, also comes to mind: Sheaves are the canonical data structure for sensor integration
A sensor integration framework should be sufficiently general to accurately represent all information sources, and also be able to summarize information in a faithful way that emphasizes important, actionable information. Few approaches adequately address these two discordant requirements. The purpose of this expository paper is to explain why sheaves are the canonical data structure for sensor integration and how the mathematics of sheaves satisfies our two requirements. We outline some of the powerful inferential tools that are not available to other representational frameworks.
I want to start thinking a little bit about the next puzzle. Here it is again:
Show that $f_* F$ is a presheaf. That is, explain how we can restrict an element of $(f_* F)(U)$ to any open set contained in $U$, and check that we get a presheaf this way.
Here, $f_* F$ is defined as $(f_* F)(U) = F(f^{-1}(U))$ for each open subset $U$ of a topological space $Y$. Note that $f : X \to Y$ is a continuous function and $F$ is a presheaf on $X$.
We also have $f^{-1}(U) \subseteq X$. Note that $f^{-1}(U)$ is open because $U$ is open and $f$ is continuous.
Roughly, our goal here is to make a presheaf on $Y$ given a continuous function $f : X \to Y$ and a presheaf on $X$.
I wonder if a continuous function $f : X \to Y$ induces a functor $\mathcal{O}(Y) \to \mathcal{O}(X)$. If it does, then we could form $f_* F$ as a composite involving $F$.
Let's see. If $f : X \to Y$ is a continuous function, let's try to define a functor $f^{-1} : \mathcal{O}(Y) \to \mathcal{O}(X)$ as follows: on objects, it sends an open set $U \subseteq Y$ to the open set $f^{-1}(U) \subseteq X$; on morphisms, it sends the morphism corresponding to $U \subseteq V$ to the morphism corresponding to $f^{-1}(U) \subseteq f^{-1}(V)$ (preimages preserve inclusions).
Our proposed functor automatically respects composition, because all diagrams commute in a poset. And if a morphism is the identity morphism for $U$, then this gets mapped to the identity morphism on $f^{-1}(U)$, as desired.
So, I think that $f^{-1} : \mathcal{O}(Y) \to \mathcal{O}(X)$ is indeed a functor! (Hopefully I didn't miss something!)
It seems that a continuous function $f : X \to Y$ does in fact induce a functor $\mathcal{O}(Y) \to \mathcal{O}(X)$.
If that is true, then I think that $f_* F$ is just $F \circ f^{-1}$ (reading $f^{-1}$ as a functor between the opposite categories, which is the same data). For an open set $U \subseteq Y$, it spits out $F(f^{-1}(U))$, which is what $f_* F$ is supposed to do. And it is indeed a functor, because composing two functors yields a functor.
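And the restriction maps come along for free: if $V \subseteq U$ are open in $Y$, then $f^{-1}(V) \subseteq f^{-1}(U)$ are open in $X$, and the restriction map of $f_* F$ is just $F$ applied to that inclusion:
$$(f_* F)(U) = F(f^{-1}(U)) \;\longrightarrow\; F(f^{-1}(V)) = (f_* F)(V).$$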
David Tanzer said:
Cool, thanks for the reminder about the separated aspect. All these examples put a good spotlight on the glueing condition. Now that we've solidly established the definition of a sheaf, which feels rather substantive, I'll ask: what are a couple of cool things that we can do with sheaves, in at least a semi-applied sense? I'm sure there are many; just fishing around here for some favorites.
My own favorite applications of sheaves are the ones that made people invent sheaves in the first place - applications to algebraic geometry and topology. I don't know how deeply we want to get into those here. But it's not surprising that some of the most exciting applications of a concept are the ones that made people take the trouble to develop it in the first place!
Briefly, since a bounded analytic function must be constant, there are no everywhere defined analytic functions on the Riemann sphere except constants - all the interesting ones have poles. This issue affects all of complex analysis and algebraic geometry. This puts pressure on us to either accept 'partially defined' functions as full-fledged mathematical objects or work with sheaves of functions, e.g. work with lots of different open sets in the Riemann sphere and let be the set of analytic functions everywhere defined on .
Mathematicians took the second course, because partially defined functions where you haven't specified the domain of definition are a pain to work with. So nowadays all of algebraic geometry (subsuming chunks of complex analysis, and much much more) is founded on sheaves. In this subject one can do a lot of amazing things with sheaves. Later on these tricks expanded to algebraic topology. And this is how a typical math grad student (like me) is likely to encounter sheaves.
Needless to say, I'm happy to get into more detail about what we actually do with sheaves. But it's quite extensive: the proof of Fermat's Last Theorem and pretty much all the other big results in algebraic geometry rely heavily on sheaves.
Digressing a bit, I found this video pretty amusing, even though it's serious:
This is near the start of a series of over a hundred videos that works through the proof of Fermat's Last Theorem step by step.
But this list of prerequisites is very intimidating. Sheaves have a lot of exciting applications in pure math that are infinitely easier to explain.
I created a new topic to continue the discussion of applications of sheaves #learning: reading & references > Applications of sheaves
Here's the next puzzle in the first blog post:
Show that taking direct images gives a functor from the category of presheaves on to the category of presheaves on .
In the previous puzzle, we showed that the "direct image" of a presheaf on is a presheaf on . As a first step in showing that this gives us a functor, we still need to figure out how our direct image functor acts on morphisms between presheaves (which are natural transformations).
For my easy reference, I'll note that and .
This one is going to take me some thought. I don't have any intuition for natural transformations between presheaves yet. I think what I'll do to start with is to draw a naturality square describing part of a natural transformation between two presheaves. Hopefully that will help me find some intuition!
If , we have a (unique) morphism from to in . Let be presheaves on . Then, to have a natural transformation , we need this square to commute for all pairs where and are open sets of such that :
Intuitively, for any , the natural transformation component tells us how to view that -data on as some -data on . Further, this process needs to respect restriction.
So we can expect there to be a natural transformation, for example, from the presheaf of bounded and continuous functions on to the presheaf of continuous functions on . In this case, each is an inclusion function.
But I wouldn't expect there to be a natural transformation from the presheaf of continuous functions on to the presheaf of continuous and bounded functions on . That's because I can't think of a nice way of converting an arbitrary continuous function to a corresponding continuous and bounded function.
I am not so sure about the latter. If is the singleton set consisting of the zero function on , then we can define as the unique map that sends every element of to zero.
That sounds natural.
Puzzle. Is there a natural transformation from the presheaf of continuous functions to the presheaf of continuous and bounded functions that sends some functions to non-constant functions?
puzzle answer
I don't think that's "almost" the answer. I think it's exactly the answer! If there are any non-constant continuous functions on your space, your natural transformation will convert all continuous functions to bounded continuous functions, and send some to non-constant continuous functions.
John Baez said:
Puzzle. Is there a natural transformation from the presheaf of continuous functions to the presheaf of continuous and bounded functions that sends some functions to non-constant functions?
The first idea that comes to mind is:
I think this doesn't work though. That's because a restriction of an unbounded function might be bounded. If that happens, then the naturality square doesn't commute if one tries to follow the approach I described above.
Peeking at @Peva Blanchard 's answer... Huh, I did not expect arctan to show up! I guess its virtue is that it takes in any input, and squishes it down to a fixed finite range. Further, it does this without sending two inputs to the same output. And you can "squish" a function down and then restrict it, or you can restrict it first and then squish it down, and you'll get the same answer. So our naturality square will commute!
Anything defined using cases is going to have trouble being natural. It sometimes works - but when I try to do something natural, I avoid methods that involve different cases, because the spirit of naturality is to do something that works uniformly for all cases.
What Peva did is postcompose with a bounded continuous function; this turns any continuous function into a bounded continuous function. People who take real analysis use arctan as their go-to guy for this purpose, because this is also 1-1, so postcomposing with it doesn't lose any information, but they could equally well use tanh or lots of other things.
For this puzzle postcomposing with sin or cos would work fine too.
That's actually a really cool point! If is any continuous function, and is continuous and bounded, then is also continuous and bounded! And if this post-composition doesn't lose information (which I think corresponds to being a monomorphism), then we've managed to produce a continuous bounded function that "still remembers" the original unbounded function that it came from!
This way of "applying the same procedure pointwise". I think it relates to the way one checks the sheaf condition.
(I'm trying to make this statement clearer)
It may already show up from the presheaf condition.
To get a natural transformation between presheaves what you do to sections needs to be "local": you can restrict a section to a smaller open set and do the operation, or do the operation and then restrict, and these need to agree.
The most local of local operations are the "pointwise" ones.
But notice that there are other local operations: for example differentiation gives a map from the presheaf of smooth real-valued functions on the real line to itself.
(I'm saying "presheaf" a lot here. Each time I could have said "sheaf", but I don't think I'm using the sheaf condition in what I'm saying.)
(I should add that a map between sheaves is just defined to be a map between presheaves that happen to be sheaves.)
I have the feeling that, in the case of sheaves, a natural transformation is uniquely determined by what it does "pointwise" (more precisely, on the germs).
Something like: the set of natural transformations from an -valued sheaf to a -valued sheaf is equivalently described by a sheaf with values in (the functions from to ).
Analysts would never say differentiation is done "pointwise" - so yes, I think the correct word should be something like "germwise".
This could become a theorem once we (= David) officially study germs; then we could show a map between sheaves is determined by what it does to germs.
Building on the discussion above, I think I can now start to work out how we can get a natural transformation between two direct image functors and . If we have a natural transformation , then for each open subset of , we need to figure out how to compute -data on from -data on . This will serve as the -th component of a natural transformation from to .
So, let's assume we have the two sets and . We're looking for a function . Let . Our goal is to figure out .
Since , . From this, we need to get some element . To do this, we can use our natural transformation . Since , we can just provide to this function and get out .
We've arrived at the following idea: Let our direct image functor send a natural transformation to the natural transformation having -th component
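In symbols (with my own names: $\alpha \colon F \Rightarrow G$ the natural transformation between presheaves on $X$), I believe that component is
$$ (f_* \alpha)_V := \alpha_{f^{-1}(V)} \colon (f_* F)(V) \to (f_* G)(V) \quad \text{for each open } V \subseteq Y. $$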
Next, let's check that really is a natural transformation. Evaluating these functors at some morphism in , we get this square:
This square can be rewritten as:
square 2
And this square diagram commutes, because is a natural transformation. We conclude that any naturality square for commutes, and hence is a natural transformation. So, is sending natural transformations to natural transformations, as it should.
It only remains to show that is a functor!
First, we need to check that for any identity morphism in . By our definition of , we have . Since each component of is an identity function, is the identity function from to . So, we see that is the identity natural transformation from to itself. (Indeed, the identity natural transformation from to itself has as its -th component the identity function on ).
Lastly, we need to check that respects composition. That is, we need to show that for two composable morphisms . To show that two natural transformations are equal, it suffices to show that each of their components are equal. So, we wish to show that , for any .
By definition of , we have . We also have and . So, . By definition of vertical composition of natural transformations, we have that .
We conclude that respects composition! And now we can conclude that taking direct images using a continuous function yields a functor !
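Spelling out those two checks in symbols (with $\alpha$ and $\beta$ my names for the composable natural transformations), I think they amount to
$$ (f_* 1_F)_V = (1_F)_{f^{-1}(V)} = 1_{F(f^{-1}(V))} = 1_{(f_* F)(V)}, $$
$$ (f_* (\beta \circ \alpha))_V = (\beta \circ \alpha)_{f^{-1}(V)} = \beta_{f^{-1}(V)} \circ \alpha_{f^{-1}(V)} = (f_* \beta)_V \circ (f_* \alpha)_V. $$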
Whew, that felt like a lot. I suppose this sort of thing gets quicker with practice! But I wonder if there is a faster (more abstract?) way to work this out, as well.
It would be cool if there were a higher level / more systematic way of proving such things. A proof assistant? I haven't used them. But that wouldn't seem to help with the basic understanding. It's hard to see a way around needing to unpack definitions and verify them in detail. I appreciate the clarity, detail and completeness of your posts here!
I think there is a higher-level way of doing it, but, at some point, we still need to work out the details.
Here is my attempt.
We have a continuous function . This function is equivalently described as a functor , viewing the poset of open sets as a category.
Consider the functor given by the composition
of the functor "taking the opposite category", and the functor "hom-ing into Set".
In particular, maps the functor to a functor:
It remains to show that is indeed the direct image functor. (I'll skip that part for now)
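In symbols (my own notation): applying that composite to the functor $f^{-1} \colon O(Y) \to O(X)$ should give precomposition,
$$ f_* = (-) \circ (f^{-1})^{\mathrm{op}} \colon [O(X)^{\mathrm{op}}, \mathbf{Set}] \to [O(Y)^{\mathrm{op}}, \mathbf{Set}], $$
and precomposition with a fixed functor is automatically functorial.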
The tedious details are still present: I haven't proved that the functor "hom-ing into " is well-defined and a functor. This is, I think, proven exactly as @David Egolf did.
David Egolf said:
Whew, that felt like a lot.
For what it's worth, it doesn't feel like a lot to me. I think if this were part of a book it would be less than a page. Some of the work is coming up with the ideas: that's the fun part. But a lot of the work in writing these arguments is just formatting things in LaTeX. I'm very glad you're doing it, because you're helping other people. But it's less work on paper.
Once you do this kind of argument for a few years, the standard moves become so ingrained that they're almost automatic... except when they're not, meaning that some brand new move is required.
David Tanzer said:
It would be cool if there were a higher level / more systematic way of proving such things. A proof assistant?
I'm the opposite: what I really want is some software that will go out to dinner and talk to my friends so I can stay home and prove theorems.
As for "systematic", I think @David Egolf's approach to this question was perfectly systematic. To prove P implies Q, he expanded out P using definitions to get a short list of things to check, and then checked each of these using Q, which he expanded out just enough to get this done.
Category theory is full of proofs like this; many mathematicians look down on it because it's not tricky enough, but to me that's a virtue. The main hard part is keeping track of the nested layers of structure involved... and a main reason for doing lots of proofs like this is to get good at keeping a lot of structures in mind.
The number theorist Serge Lang has an exercise in his book Algebra that goes like this:
Take any book on homological algebra, and prove all the theorems without looking at the proofs given in that book.
Homological algebra was invented by Eilenberg-Mac Lane. General category theory (i. e. the theory of arrow-theoretic results) is generally known as abstract nonsense (the terminology is due to Steenrod).
He is of course joking to some extent, and definitely showing off. But the hard part in homological algebra - or other kinds of category theory - is developing an intuition for the structures involved so you can guess what's true. The proofs of theorems are often easy in comparison.
(But sometimes they're not - there are some really deep results too.)
We have now arrived at the final puzzle in the first blog post! For context, recall that is a continuous function, that is a presheaf on , and is the corresponding direct image presheaf on . Here's the puzzle:
Puzzle. Show that if is a sheaf on , its direct image is a sheaf on .
We saw above that is a presheaf. So, we only need to check that we can "glue together" things appropriately: if we start with a bunch of that agree on overlaps, so that for all and , then there is always a unique that restricts to on for each . Here, each is an open subset of and .
Let's start out with a bunch of , which agree on overlaps and where . We want to show there is a unique that restricts to each on .
Note that for each , by definition of . I want to use the fact that is a sheaf to glue together these to get some .
First, let's show that , making use of the fact that .
Let . That means that there is some so that . Since , that means that for some . Thus, for some . Hence, . We conclude that .
Next, let . That means that there is some so that . Thus, there is some so that . Since , we know that . Hence and thus . Therefore, .
We conclude that .
The next order of business is to talk about "agreeing on overlaps". With respect to , we know that for any and . A particular element of is restricted to as follows: note that is an element of and then restrict it (using the fact that is a presheaf, and so provides a notion of restriction) using to an element of .
So, if with respect to , then this means that restricting (using ) from an element of to an element of yields the same result as restricting (using ) from an element of to an element of .
Now, we'd like to show that if and agree on overlaps with respect to , then they agree on overlaps with respect to , where we view as an element of and as an element of . To show they agree on overlaps with respect to , we need to show that restricting from an element of to an element of yields the same result as restricting from an element of to an element of .
By the above discussion, this follows provided that .
We now show that . It suffices to show that .
Let . That means there is some so that . Since , we know that and . Hence, and . Thus, , and so .
Next, let . That means that and . Hence and therefore . Therefore, .
We conclude that , and so .
From the above discussion, if (where each is an open subset of ) and we have a bunch of (as varies) that agree on overlaps with respect to , then we have:
Since is a sheaf, there is then a unique that restricts (using ) to on each . We hope that this restricts to each on with respect to . This is true, because restricting to an element of with respect to is done by restricting to an element of with respect to , and we know this yields (by definition of ).
So, I think we have managed to show there is at least one "gluing together" of our to get a that restricts to on each . It remains to show that there is only one way to do this, so that is the unique "gluing" of our .
Let's imagine we've got some that restricts (with respect to ) to on , for each . That means it restricts from to (with respect to ) for each . That is, is a valid "gluing" of all the . Since is a sheaf, there is only one such gluing, namely . Therefore, .
Consequently, there is exactly one way to "glue together" our to get a . We conclude that is indeed a sheaf!
Great! Your proof looks perfect!
The only change I might make is to pull out a few facts as "lemmas", since they don't really involve presheaves per se: they are properties of the inverse image of a subset along a function .
Of course the inverse image of an open subset along a continuous map is open, but these properties are even more fundamental: they work for any subset and any map.
So suppose is any function.
Lemma 1. If are subsets of then
Lemma 2. If are subsets of then
Just for good measure, let denote the operation of taking the complement of a subset.
Lemma 3. If is a subset of then
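In symbols, with $\{S_i\}$ an arbitrary family of subsets of $Y$ and $S \subseteq Y$, these lemmas say
$$ f^{-1}\Big(\bigcap_i S_i\Big) = \bigcap_i f^{-1}(S_i), \qquad f^{-1}\Big(\bigcup_i S_i\Big) = \bigcup_i f^{-1}(S_i), \qquad f^{-1}(\neg S) = \neg f^{-1}(S). $$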
I think you used Lemma 1 only in the case of the intersection of two subsets. You used the general case of Lemma 2. And you didn't need Lemma 3 at all.
Subsets of a set form a [[complete boolean algebra]] - this jargon is a way of capturing all the rules that govern intersections, unions and complements in classical logic. Lemmas 1-3 say that is a morphism of complete boolean algebras!
If someone out there has never thought about this, it's worth comparing the 'image' operation. The image of a subset $S \subseteq X$ under a function $f \colon X \to Y$ is defined by $f(S) = \{ f(x) : x \in S \}$.
The image operation does not obey analogues of all three of Lemmas 1-3. I.e. it doesn't preserve all unions, intersections and complements.
Puzzle. Which lemmas fail?
Moral: inverse image is 'better' than image. This is one reason it's nice that we use inverse image, not image, to define continuity.
All these simple thoughts will get refined more and more as one digs deeper into topos theory.
John Baez said:
Subsets of a set form a [[complete boolean algebra]] - this jargon is a way of capturing all the rules that govern intersections, unions and complements in classical logic. Lemmas 1-3 say that is a morphism of complete boolean algebras!
That is very cool! I'll plan to think a bit more about this, as well as the puzzle you gave relating to the image operation.
Great! I wish someone had told me - way back when I was a youth - that 'inverse image' is better behaved than 'image', and also had explained why. Back then inverse image seemed like a more sneaky concept than image, in part because of its name. So it seemed a bit weird that it was used in the definition of continuity. Of course from another viewpoint this makes perfect sense: this gives a definition of continuity of maps between metric spaces that matches the definition! But it's much more satisfying to understand the fundamental role of inverse images.
John Baez said:
Subsets of a set form a [[complete boolean algebra]] - this jargon is a way of capturing all the rules that govern intersections, unions and complements in classical logic. Lemmas 1-3 say that is a morphism of complete boolean algebras!
I've reviewed some things relating to Boolean algebras, and I think this is making sense!
One interesting point jumped out to me. When I think about morphisms "preserving the structure", I usually think of equations like this one: . That is, the binary operation only gets applied once on each side of the equation. This is in contrast to something like , where potentially we are taking the union of an infinite number of sets on each side of the equation.
I assume that the idea is to "preserve equations". If we know that we are mapping between two complete Boolean algebras, then arbitrary (small) meets and joins always exist in both the source and target Boolean algebras. So then for any collection of in some complete Boolean algebra, always exists - call it . Consequently, we can always write this kind of equation for any collection of elements . Asking this equation to be preserved under would mean we'd want .
I guess the moral of the story is this: if we have fancier equations that hold in all the structures of interest, we'll get fancier corresponding requirements for a structure-preserving map.
For to be a morphism of complete Boolean algebras, there's another condition we'll want it to meet. It's simple, but I thought it might be nice to note explicitly. In particular, for and , we'll want in to imply that in .
To see that this holds, let . That means that . Since , that means and hence . Therefore, , as desired.
I think the idea of "preserving equations" is right. It can also be rephrased as "preserving limits/colimits". When seeing a complete boolean algebra as a category, then the join (resp. meet) of an arbitrary collection of elements is literally the colimit (resp. limit) of this collection.
I now want to think about the image operator for a function . We have for any . Let's see in which ways fails to be a morphism between the complete Boolean algebras and .
First, notice that is not necessarily . So the biggest ("top") element is not always mapped to the top element! (This is because is not always surjective). However, does map the empty set to the empty set, and it preserves arbitrary unions. Also, if , then .
doesn't preserve intersections in general. For example, if and are disjoint non-empty subsets of , but , then but is not empty. This sort of thing can happen when isn't injective.
also doesn't preserve complements in general. That is, if , then we don't necessarily have that . For example, let . Then . But isn't necessarily all of (as isn't necessarily surjective). Therefore, is not always empty.
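Just to make that concrete for myself, here's a tiny sanity check in Python, with toy sets of my own choosing (nothing from the blog post):

```python
X = {0, 1, 2}
Y = {'a', 'b', 'c'}
f = {0: 'a', 1: 'a', 2: 'b'}          # not injective, not surjective

def image(S):
    """Direct image f(S) of a subset S of X."""
    return {f[x] for x in S}

def preimage(T):
    """Inverse image of a subset T of Y."""
    return {x for x in X if f[x] in T}

A, B = {0}, {1}                        # disjoint, but f(A) = f(B) = {'a'}
print(image(A & B), image(A) & image(B))   # set() vs {'a'}: intersections not preserved
print(image(X - A), Y - image(A))          # {'a', 'b'} vs {'b', 'c'}: complements not preserved

T, U = {'a'}, {'b'}
print(preimage(T & U) == preimage(T) & preimage(U))   # True
print(preimage(T | U) == preimage(T) | preimage(U))   # True
print(preimage(Y - T) == X - preimage(T))             # True
```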
Great, now you see exactly how inverse image is better than image! Inverse image sends maps between sets to maps of complete boolean algebras, and indeed gives a functor from to the opposite of the category of complete boolean algebras. I like to call this "the duality between set theory and logic".
There's more to say about this, but enough for now.
A tiny typo here:
doesn't preserve intersections in general. For example, if and are disjoint non-empty subsets of , but , then but is not empty.
You meant it may not be empty... as you made clear in your very next sentence.
If you can stand one more puzzle about this: do you see the way in which inverse image is 'logically simpler' and 'easier to compute' than image?
Since I assumed that and are non-empty, and also that , I think actually isn't empty in this case. But one wouldn't need to assume ! More generally, I think the idea is that even if and are disjoint, and can have some elements in common.
John Baez said:
If you can stand one more puzzle about this: do you see the way in which inverse image is 'logically simpler' and 'easier to compute' than image?
Hmmm, that's interesting. Nothing immediately comes to mind, but I'll give it some thought!
My initial thought is that an inverse image seems harder to compute than an image!
It seems like it requires at least as many evaluations of to compute an inverse image, as compared to an image.
I'll give it some more thought and see what else I can think of...
David Egolf said:
Since I assumed that and are non-empty, and also that , I think actually isn't empty in this case. But one wouldn't need to assume ! More generally, I think the idea is that even if and are disjoint, and can have some elements in common.
Whoops - you're right, I didn't read your comment carefully enough. Sorry.
David Egolf said:
My initial thought is that an inverse image seems harder to compute than an image!
- For , to compute an image of , I just need to compute for each
- To compute the inverse image of some , I think I need to check each element and see if
It seems like it requires more evaluations of to compute an inverse image, as compared to an image.
Maybe there are dual ways of thinking about this!
I was thinking: given and a subset of its domain, what do we have to do to check whether a given element of is in the image of that subset?
Given and a subset of its codomain, what do we have to do to check whether a given element of is in the inverse image of that subset?
I believe this explains why inverse image is 'better': it is always a boolean algebra homomorphism.
I have to take a look at what you just said! Here's the idea that came to mind for me, though:
I think we can use the fact that the inverse image interacts nicely with unions, intersections, and complements. For example, let's say I want to compute the inverse image of , and I know the inverse image of each . Then I can just compute the intersection of the inverse images of the .
By contrast, if I know the image of a bunch of , I can't directly compute the image of in an analogous way. That's because is not necessarily equal to .
So, the inverse image operation is computationally nicer than the image operation in this sense: you can compute more things directly, when given some known prior things.
John Baez said:
Maybe there are dual ways of thinking about this!
I was thinking: given and a subset of its domain, what do we have to do to check whether a given element of is in the image of that subset?
Given and a subset of its codomain, what do we have to do to check whether a given element of is in the inverse image of that subset?
Assume we have and a subset of . To check if is in the image of , we need to check each element of and see if we ever get .
Let's now assume we have a subset of . To check if is in the inverse image of , we just need to compute and see if it lands in !
So, we'll need fewer evaluations of to see if a particular element is in an inverse image, as compared to what we need to check if an element is in an image!
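Here's a little Python sketch of that comparison, with a toy function of my own (not from the blog):

```python
def f(x):
    return x * x

A = range(1000)        # a subset of the domain
S = {4, 9, 25}         # a subset of the codomain

def in_image(y):
    # may have to evaluate f on many (in the worst case, all) elements of A
    return any(f(x) == y for x in A)

def in_preimage(x):
    # a single evaluation of f suffices
    return f(x) in S

print(in_image(49))     # True  (found after evaluating f at 0, 1, ..., 7)
print(in_preimage(7))   # False (after exactly one evaluation of f)
```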
It's interesting to compare these two things:
John Baez said:
Great, now you see exactly how inverse image is better than image! Inverse image sends maps between sets to maps of complete boolean algebras, and indeed gives a functor from to the opposite of the category of complete boolean algebras. I like to call this "the duality between set theory and logic".
I somehow missed reading this earlier. I like it!
David Egolf said:
It's interesting to compare these two things:
- it's easier to check if a given element is in an inverse image, as compared to an image
- it's easier to compute an entire image, as compared to an entire inverse image
In complexity theory, enumerating elements of a subset and checking that an element is a member of a subset are somehow related, but not really equivalent (this depends on how "easy to compute" is defined).
a. For instance, regarding your second bullet point, it seems that you have the following picture in mind (I may be wrong):
b. Now the related way is to ask for procedures that decide on whether an element is a member of a subset. In that case, your first bullet point can be restated as:
I think cases a and b are "as easy as each other". They seem "dual", although I don't know if we can make this statement precise.
Things become complicated if we try to apply one case starting from the other. I mean:
Assuming that one can also check the equality "y = f(x)", and one can enumerate elements of the domain of , here are two (informal) ways:
Oh! I think there is a more "categorical" rewording of all of the above. Given any category and morphism in , we get two functors:
And each of these functors may have adjoints:
(this is the kind of thing that makes my head spin: over/under, post/pre, left/right)
(edit: deleted the latest message. I thought I could reformulate the above in terms of adjunctions between pre/post-composition and left/right Kan extension/lift, but it was incorrect.)
Thanks for the link on "dovetailing". That's a neat concept I'd not heard of before!
I'm not quite understanding why you want to enumerate the elements of , though. Wouldn't it be less work to just loop over all the elements in (which I've been assuming is all of ) and then apply to those, and see if we ever get an element of ?
I guess my proposed approach here involves two loops:
for all :
--
-- for all :
---- check if
I suppose these two loops effectively involve considering each element of , unless we add a "break" statement in the second loop. The break statement could let us quit out of the second loop early if we find some that is equal to.
That explains, I think, why you wanted to enumerate all elements of . I think we basically had the same idea; I was just struggling to see that the way you formalized the idea matched (at least roughly) what I had in mind!
I think it is the same idea. Dovetailing is relevant when the sets are infinite. For instance, in your pseudo-code example (the two loops), the inner loop can be infinite, so you never enumerate all of . But, besides that technical point, you really expressed the same idea.
Today, I want to start on Part 2 of the blog post series! :tada:
The first puzzle is this:
Check that with this choice of restriction maps is a presheaf, and in fact a sheaf.
This puzzle needs some context! Here it is:
Now, it seems to me that, in maths, unless you work in computability theory, it is unusual to consider structures that can be enumerated. It makes no sense, a priori, for most mathematical structures, e.g., the real numbers. The other way, i.e., testing an element is more common, and, by varying what is meant by "testing", easier to generalize.
This is why I think, the inverse image seems "logically easier" to conceive than the image . More precisely, in , the mother of all tests is the membership test , i.e., a characteristic function . You can transport these characteristic functions along just by pre-composing with .
oops, you already moved to Part 2. Let's move on then :)
Peva Blanchard said:
oops, you already moved to Part 2. Let's move on then :)
There's no rush! What you are saying is interesting, and I look forward to thinking about it! If there's more you want to say on that topic, please feel free to keep posting about it here. I can always shuffle the "part two" messages to the bottom afterwards.
Here's a picture, to help visualize the concept of a "bundle":
In this case the blue area is and the black line is . Our continuous map projects each point down to the corresponding point on the black line.
There is another bundle that may be familiar to you. Say we encode RGB pixel values with real values between and , i.e., is the space of colors. Let be the unit square, thought of as a canvas. Then we have a (trivial) bundle (just the projection on the first factor) that represent all the possible RGB-images on the unit square.
That's a neat example! I'm imagining all possible RGB values "floating over" each point of our square. A section of that bundle I think corresponds to an RGB image over (an open subset) of the unit square.
Here's a picture to visualize the concept of "section", using the bundle I drew above:
section
The orange line is a section of our bundle over the yellow subset of .
In this case, we might imagine that the black line is an observed noisy signal, and the blue "envelope" describes at each point the possible "actual" (without noise) signal values. A section then is a "point-wise plausible guess" for a portion of a denoised signal.
At first you said the black line was a picture of X, but to me it looks like a picture of a section... they're both reasonable interpretations but people tend to use the latter, and draw X as a horizontal line down below the bundle itself, so that p "projects down" from Y to X. Later you seem to have used the latter interpretation because you said "the black line is an observed noisy signal", which sounds like a section to me.
Here's a 3d picture of a bundle and a section s from Wikipedia:
People like to use E instead of X and B instead of Y, and call E the total space and B the base space of the bundle.
I guess we might have weird cases, like this one.
It might also be useful to have some more explicitly defined examples, like the exponential map from to ;)
Morgan Rogers (he/him) said:
It might also be useful to have some more explicitly defined examples, like the exponential map from to ;)
oh yes, another one would be on the complex numbers, or more generally .
(Which makes me wonder, since , if we can combine multiple bundles together)
There are ways to combine bundles over general spaces but the kind you're thinking of (taking products and sums) relies on the algebraic structure of . See if you can figure out how you might use the addition and multiplication of the complex numbers to combine bundles! Power series are a bonus challenge ;)
Peva Blanchard said:
I guess we might have weird cases, like this one.
Yes, that picture shows a 'bundle' in the extremely general sense introduced here (a continuous map from a space to a space ), but not a [[fiber bundle]]. For a fiber bundle we typically want the 'fibers' to be homeomorphic to each other for all , while in the picture some fibers are homeomorphic to and others to .
Morgan Rogers (he/him) said:
There are ways to combine bundles over general spaces but the kind you're thinking of (taking products and sums) relies on the algebraic structure of . See if you can figure out how you might use the addition and multiplication of the complex numbers to combine bundles! Power series are a bonus challenge ;)
I see. Indeed, if we consider only bundles over (or any other ring actually), we can do something as follows. Let and , then we can define as the composite
We can do the same for any other continuous binary operation, e.g., multiplication.
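In symbols (my notation): for bundles $p \colon Y \to \mathbb{C}$ and $q \colon Z \to \mathbb{C}$, the composite I have in mind is
$$ p + q \colon Y \times Z \xrightarrow{\; p \times q \;} \mathbb{C} \times \mathbb{C} \xrightarrow{\; + \;} \mathbb{C}. $$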
Similarly, given a complex number and a bundle , I can define the bundle by pointwise multiplication .
Now, let's use the notation for the bundle corresponding to the identity morphisms. Then we have , and more generally for any polynomial
we get a bundle
(This is a weird entity. I would expect the total space to look more like a polynomial, but here it is just a big cartesian product)
If is a power series instead of a mere polynomial, the bundle is not well-defined. It is not clear at all if the series converges; plus it is a weird series as it involves infinitely many different variables.
If I assume that the series has a positive radius of convergence, e.g. 1, it might help. We know then that for any complex number with , the series converges to a well-defined (complex) value, and the induced function is continuous.
I could restrict my setting to the situation where the all have modulus less than . But yet, as we have infinitely many variables, I'm not sure that the series converges. Even less that it depends continuously on the 's.
I will leave it here. I may have taken the wrong turn when defining things.
Nice attempt! You got the main ideas I think, but some things to note: first, at the moment you have a lot of variables around; there is a way to reduce them by also precomposing with something. Second, there are indeed a bunch of subtleties to beware of for power series: for it to define a bundle, you not only need to restrict to a subset where the series converges, you need to make sure the function is continuous! It's hard to express those conditions categorically, which is why you'll rarely see analysis and category theory talking to each other. Or to turn that comment into an exercise: one way you might hope to express a power series is as a diagram of bundles over whose colimit would determine the power series. But this can't work because of the way we define morphisms of bundles; can you see why?
Indeed, there is a brutal way to reduce the number of variables. It suffices to precompose what I did with the diagonal map . This amounts to consider only the bundles of the form .
More precisely, given a polynomial , this amounts to consider the function as a bundle .
When is a (formal) power series, then the domain of cannot be the whole complex plane. For instance,
which is defined on .
So we must restrict the domain. Let's consider all the ways to restrict this power series. That is we consider all the bundles with , where is an open subset where the series converges and is continuous. My knowledge about complex analysis is a bit rusty, so I'm being a bit sketchy here...
These bundles are objects in the over category . Let's consider the category with those bundles as objects, and as morphisms the ones induced by inclusion of subsets . We could take the colimit of in , provided it exists (I don't know any argument in favor of that, on the top of my mind).
It looks like would be the "maximal" domain of definition of the power series . But, thanks to your hint, I think this is wrong. That's because there are other morphisms in , e.g., homeomorphisms. So, there could be issues like the total space being homeomorphic to the maximal domain of (???).
Oh yes, in particular, the coefficients of the power series are not preserved by homeomorphisms.
For instance:
Which means that one cannot hope to recover the power series from the colimit , even if it exists.
By the way, there's a huge amount to say about analytic functions and power series using sheaves: this is one of the things sheaves were developed for!
Peva Blanchard said:
It looks like would be the "maximal" domain of definition of the power series . But, thanks to your hint, I think this is wrong. That's because there are other morphisms in , e.g., homeomorphisms. So, there could be issues like the total space being homeomorphic to the maximal domain of (???).
Great work! I was actually trying to hint at something a bit more basic than this: the fact that morphisms of bundles over fix the values in . I can't express the bundle corresponding to a power series as the colimit of the partial sums because there aren't bundle morphisms between the bundles corresponding to those partial sums in general!
Peva Blanchard said:
Now, it seems to me that, in maths, unless you work in computability theory, it is unusual to consider structures that can be enumerated. It makes no sense, a priori, for most mathematical structures, e.g., the real numbers. The other way, i.e., testing an element is more common, and, by varying what is meant by "testing", easier to generalize.
I was wondering if one could come up with a procedure for enumerating (listing the elements of, I assume?) a subset of interest given a way to test if elements in are in . But I suppose if has infinitely many elements, this could be very impractical - this procedure may often require an infinite number of tests to be run. From that perspective, it does make more sense to focus on testing individual elements - as that is something that we can probably actually do!
Actually, if all you have is a testing procedure for , via a characteristic function that consumes elements of , there is no generic way to build a procedure that produces elements of . For that you need another assumption, e.g., that you already have a procedure that produces elements of , which you can then "filter" using the characteristic function.
I don't want to reveal too much about John's later posts (also because I don't have an expert knowledge in those things), but these "testing procedures", or "characteristic functions", will play a crucial role with respect to an important notion in topos theory, namely that of "subobject classifier".
Yes, I was focused on testing procedures. My claim that inverse images are "logically simpler" than images merely meant this:
Say you have a function $f \colon X \to Y$. Then $y \in f(S)$ if we have
$$\exists x \; \big( x \in S \text{ and } f(x) = y \big),$$
while $x \in f^{-1}(T)$ if we have
$$f(x) \in T.$$
In this sense, images are defined using an existential quantifier . So:
Since unions are also defined using an existential quantifier, and existential quantifiers commute, images preserve unions.
Since intersections are defined using a universal quantifier, and existential and universal quantifiers don't commute, images don't preserve intersections.
Since complements are defined using negation, and existential quantifiers don't commute with negation, images don't preserve complements.
But since inverse images are defined much more simply, without any quantifiers, they preserve unions, intersections and complements!
I'm feeling tired today, but I'd like to try and make at least a little progress on the current puzzle. Here it is again:
Check that with this choice of restriction maps is a presheaf, and in fact a sheaf.
And here's the context, again:
First, I want to show that is a functor, and hence a presheaf. Here's what it does on objects and morphisms:
To show that is a functor , we first need to show that restricting a section of actually gives us a section of . Let be a section of over , where is an open subset of . We want to show that given by is a section of over . (Here is an open subset of with ).
To do this, it suffices to show that . Checking this at an element , we find . We conclude that , as desired. (Also, is continuous, as it is given by composing continuous functions).
Next, we want to show that for any object (open subset of ) . By definition, is the function that takes a given section and restricts its domain to , yielding . We note that this is the identity function on , so that .
To finish showing that is a functor (and hence a presheaf), we need to show that respects composition. That is, if we have an equation of the form in , then we need to show that . This is true because restricting the domain of a section in two steps, or restricting the domain all at once yields the same result.
We conclude that is a presheaf!
The next order of business is to show that is not only a presheaf, but also a sheaf! But that's a job for another day, when I have a bit more energy.
I can't resist giving it a try.
spoiler
Great! Good luck on your energy levels. I think you'll find that showing that the presheaf of sections of a bundle is a sheaf is similar to the earlier problem where you showed that the presheaf of continuous real-valued functions is a sheaf. Indeed that earlier problem can be seen as a special case of this one if you take .
I now want to show that the presheaf of sections is in fact a sheaf.
To do this, let's start out with a bunch of as varies. Let's require that for all and . (Here, each is an open subset of ). To conclude that is a sheaf, we need to show that there always exists a unique such that for all .
Recalling that , any particular is a continuous map such that . Intuitively, we want to "glue together" these sections to get a section of over . If exists, it is unique. That is because for any , for some , so we must have . This produces a function because for all .
It remains to show that exists. To do that, we need to check that defining as (where ) gives us a continuous function such that .
We start by considering continuity. By the "local criterion for continuity" discussed above, a function is continuous exactly if for any point there is a neighborhood of such that is continuous. For any , there is some so that , because . And by assumption we know that is continuous. We conclude that is continuous.
Next, we need to show that . For any , there is some so that . Then, . We conclude that , as desired.
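So, to summarize in symbols (using my own names $s_i$ for the given sections over the opens $U_i$), the glued section should just be
$$ s(x) := s_i(x) \quad \text{whenever } x \in U_i, $$
which is well defined precisely because the $s_i$ agree on the overlaps $U_i \cap U_j$.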
John Baez said:
Great! Good luck on your energy levels. I think you'll find that showing that the presheaf of sections of a bundle is a sheaf is similar to the earlier problem where you showed that the presheaf of continuous real-valued functions is a sheaf. Indeed that earlier problem can be seen as a special case of this one if you take .
Thanks for the good luck! Sometimes hoping for higher energy levels does feel a bit like waiting for a lucky dice roll; I find it's quite difficult to predict my energy levels accurately.
I want to consider the case , where sends to for any . Then a section of over (where is an open subset of ) is a continuous function such that . I want to show that a section of over gives us a real-valued continuous function , and a real-valued continuous function gives us a section of over .
A section is in particular a continuous function. Therefore, by the universal property of products, it corresponds to two continuous functions: (1) a function and (2) a function . So, given a section , we get a real-valued continuous function , given by .
Let's now start with a continuous function . We want to construct a section of over from . By the universal property of products, to get a continuous function , we just need to specify a continuous function from to and a continuous function from to . Let's take our function to be the inclusion (which is continuous because has the subspace topology).
We want to show that the induced function is in fact a section. Indeed, , as desired.
I guess what I really wanted to show is that there is a bijection between the set of sections of over and the set of real-valued continuous functions from . I'm running out of steam, but this seems important to note: Since for any section , we have that is of the form for some . That is, the function induced by a section must be the inclusion.
I'm hoping that one can make use of this fact to show that the procedures I described above ((1) for constructing a continuous real-valued function on from a section of over and (2) for constructing a section of over from a continuous real-valued function on ) are in fact inverses of one another.
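In symbols (with $U \subseteq X$ open), the two procedures I have in mind are
$$ s \mapsto g := \pi_{\mathbb{R}} \circ s, \qquad g \mapsto s \text{ defined by } s(x) := (x, g(x)), $$
and the observation that the first component of a section must be the inclusion is exactly what should make these two assignments mutually inverse.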
I'll stop here for now!
David Egolf said:
I guess what I really wanted to show is that there is a bijection between the set of sections of over and the set of real-valued continuous functions from . I'm running out of steam, but this seems important to note: Since for any section , we have that is of the form for some . That is, the function from induced by a section must be the inclusion.
Yes, that's a really important observation. And there's nothing really special about here. More generally this is how you can take any section of a bundle over and turn it into a continuous function . And you're right: this gives a bijection between such sections and continuous functions .
So, sections of bundles are a generalization of continuous functions. I'll let you do the work, but I wanted to have the fun of stating the dramatic conclusion!
Just to formalize the statement. Does it mean that there is a natural isomorphism between the sheaf of continuous functions on and the sheaf of sections of the bundle ?
Yes, that's right. Nice!
It's not much extra work to state (or prove) the idea more generally: for any topological spaces and , there's a natural isomorphism between the sheaf of continuous -valued functions on and the sheaf of sections of the bundle .
Cool!
It also motivates "fiber bundles". The common fiber somehow acts like the codomain of values of the "functions". The difference is that the total space is not necessarily neatly decomposed as a cartesian product .
The correspondence seems to go this way:
Also, I understand why we would want to consider "étale spaces": the ultimate form of this correspondence game is the equivalence between the category of sheaves on and the category of étale spaces over .
Mmh, I think my mental picture is wrong ... an étale space does not seem to generalize fiber bundle.
Peva Blanchard said:
The correspondence seems to go this way:
- bundles of the form naturally correspond to sheaves of -valued continuous functions
- fiber bundles with fiber naturally correspond to sheaves of sections that locally look like a -valued continuous function (?)
Yes, those are both right. We could make the second one more precise if we wanted, either in lowbrow ways or in highbrow ways using sheaf cohomology. But now is probably not the time to do that, especially since the "course" he's going through does not introduce fiber bundles.
Peva Blanchard said:
Mmh, I think my mental picture is wrong ... an étale space does not seem to generalize fiber bundle.
The course talks more about étale spaces fairly soon, so let's wait a bit and revisit this. But you're right: I would say étale spaces generalize covering spaces, which are fiber bundles with discrete fiber.
In the next puzzle, we work on building a functor . We've already seen how to make a sheaf on from a continuous function . It remains to figure out how acts on morphisms in , and then to check that our resulting really is a functor.
Here's the next puzzle:
Suppose we have two bundles over , say and , and a morphism from the first to the second, say . Suppose is a section of the first bundle over the open set . Show that is a section of the second bundle over . Use this to describe what the functor does on morphisms, and check functoriality.
First, we recall that a morphism from a bundle to a bundle is a continuous function such that .
In picture form, this commutative diagram describes a morphism from to :
a morphism from p to p'
Notice that if "sits over" (so that ), then also sits over (as ). We might think of our continuous function as being possible to decompose into several pieces, where the piece of maps to some subset of .
With that context in place, I now want to check this:
To show that is a section of , we need to show that sends each element of to itself. Noting that , we find . Since is a section of over , . We conclude that for all , and so is indeed a section of over .
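In symbols (writing $\tau$ for the bundle morphism, $p$ and $p'$ for the two bundles, and $s$ for the section over $U$, since I haven't fixed names above), I believe this check is just
$$ p' \circ (\tau \circ s) = (p' \circ \tau) \circ s = p \circ s = 1_U. $$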
The next order of business is to describe what does on morphisms. But I'll stop here for today!
I'll just restate, in this setting, an example that we discussed earlier. Let's just look at the case where and . The bundles and are just the projection on the second factor.
Then, we get a morphism from to in with
I choose , but clearly we can replay this game with any function .
But there's a funny variant if we take
I think this is an example of a morphism in which does not arise from a continuous real-valued function as before.
Hmmm, let me try to understand what you just said.
I'll work in . Let us assume we have two bundles of this form: and , where and for all , , and .
If we have a continuous function , then the function which sends to is continuous. Is it also a morphism of bundles from to ? Let's consider for some . We get . We conclude that from any continuous function we get a morphism of bundles from to given by .
I think then the question is: what other morphisms exist from to that can not be produced in this way?
The you give above is not of the form , for example.
Any continuous function from to induces a continuous function from to . Let's define by . In some cases, this can vary as does!
I'm guessing we can't induce a morphism of bundles where this kind of thing happens when we start out with just a single continuous function from to .
Yes exactly! To be complete, we should prove that there is no such that for all .
spoiler
I think I have a good guess regarding how to finish describing . Once I get a bit more energy - hopefully soon - I will type that up here. But today I need to rest up!
I used to have energy problems, but then I started taking meds (in case that might help you).
Alright, let me take a stab at describing what does on morphisms. Let's assume we have two bundles over , namely and . These induce presheaves (indeed sheaves) on , by sending each open subset of to an appropriate set of sections over that subset.
Let's call these sheaves and . Given a morphism of bundles from to induced by a continuous function , we want to define a natural transformation .
Let's set up a naturality square corresponding to the morphism in where and are open subsets of and . We recall that:
Given a section of over and a morphism of bundles , we can form . We saw earlier that this is indeed a section of over . So, post-composing by provides a function from sections of over to sections of over .
Based on the above, we now draw a proposed naturality square corresponding to the morphism in :
square
To show we get a natural transformation from to in this way, we still need to show this square commutes for an arbitrary morphism .
Let's pick an and trace it around the diagram. Restricting its domain to can be accomplished by precomposing with the (continuous) inclusion map . Going around the top right side of the square, we get . Going around the bottom left side of the square, we get . By associativity of composition, these two results are equal, and so the square commutes.
We conclude that post-composing with at each component describes a natural transformation .
Next, let's show that is a functor.
First we need to show that for any bundle . The identity morphism of is induced by the (continuous) identity function . So, is the natural transformation which post-composes by at each component. This is indeed the identity natural transformation from to itself, as desired.
Finally, we need to show that for two composable bundle morphisms and . Let's compare the components of these two natural transformations. The component of is a function that post-composes after a section , so it is the function . The component of is given by composing the component of after the component of . That means that the component of corresponds to a function . By associativity of composition, we conclude that the components of and are equal. So, , as desired.
We conclude that is indeed a functor!
I'm excited, because the next section of the current blog post talks about "germs"! The rough intuition I have for germs is that they can be used to describe the different possible "very local behaviours" super close to a point.
For example, "The Rising Sea" (by Vakil) defines germs at a point to be equivalence classes of smooth functions defined on open sets containing : we say that is in the same equivalence class as if there is some such that and . So, intuitively, two functions defined on open sets containing are in the same germ at if they restrict to the same function when we "zoom in" to some open set that is "close enough" to .
Above, we saw how to make a presheaf on from a bundle over . We now want to go in the other direction: can we make a bundle over from a presheaf on ?
Presheaves on and bundles over can both be viewed as "attaching information" to parts of . Given a bundle , the data "attached" to some point is . Given a presheaf , the data "attached" to an open subset is .
So, to make a bundle from a presheaf, we need to figure out how to attach data to individual points of given data attached to each open subset of .
Assume we have some presheaf . We can come up with a set to "attach" to as follows:
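I believe the construction is to take the colimit of $F$ over the open sets containing the point:
$$ F_x := \operatorname{colim}_{U \ni x} F(U), $$
where $U$ ranges over the open subsets containing $x$, ordered by reverse inclusion. An element of $F_x$ is then a "germ" at $x$.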
Next time, I'd like to think about in the particular case where is the presheaf (which is also a sheaf) that sends each open subset of to the set of continuous real-valued functions .
Good! That's a great example for getting a more concrete picture of germs. I recommend taking so you can actually graph these continuous functions and visualize these germs. And in this example I also recommend comparing the sheaves of continuous, smooth, and analytic real-valued functions.
Remember, a function from an open set of to is analytic if at each point it has a Taylor series with a positive radius of convergence.
The reason I bring this up is that derivatives, Taylor series and germs are three famous ways to study how a function looks in an arbitrarily small neighborhood of a point. And there are some revealing differences in the 3 cases listed above!
I remember a funny function :
which is smooth (therefore continuous).
One can try to describe the germ of at , when regarded as a continuous, resp. smooth, function.
It also turns out that is not analytic, because of what happens at .
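A standard example of this kind (maybe not exactly the formula I wrote above) is $f(x) = e^{-1/x}$ for $x > 0$ and $f(x) = 0$ for $x \le 0$: all its derivatives at $0$ vanish, so its Taylor series at $0$ is identically zero, even though $f$ is not zero on any neighborhood of $0$.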
Yes, this is a great example of how the germs of smooth functions differ from those of analytic functions. There is more to say about this but I'll let David proceed at his own desired pace so he's not "drinking from a firehose".
Peva Blanchard said:
I remember a funny function :
which is smooth (therefore continuous).
One can try to describe the germ of at , when regarded as a continuous, resp. smooth, function.
It also turns out that is not analytic, because of what happens at .
FWIW this function is often used (after integrating and introducing radial or similar coordinates) to construct (relatively) explicit partitions of unity.
I next want to understand this part of the blog post, where is the presheaf which sends each open subset of to the set of continuous real-valued functions :
By the definition of colimit, for any open neighborhood of we have a map .
So any continuous real-valued function defined on any open neighborhood of gives a ‘germ’ of a function on . But also by the definition of colimit, any two such functions give the same germ iff they become equal when restricted to some open neighborhood of .
Specifically I'd like to prove the statement "any two such functions give the same germ iff they become equal when restricted to some open neighborhood of ".
Once I've done that, then I think it would be good to do the things that John Baez suggests above: take and make some graphs to visualize germs, and compare the germs we get when considering sheaves of continuous, smooth, or analytic -valued functions. (Then I think it'll be time for the next puzzle, probably!)
I'm not sure where to start, but I think it may be helpful to get some sense for what a cone under our diagram is like.
So, let be a cone under . Note that is a natural transformation from to the functor that is constant at the set . Since is a natural transformation, all its "naturality squares" must commute.
Let's examine the naturality square for the morphism in , where and are open subsets of each containing , such that . Here's the corresponding naturality square:
square
Since is a cone under , this diagram commutes. That means . This will be useful to know in a moment. Intuitively, this tells us that restricting a continuous function (from an open subset containing to an open subset containing ) doesn't change the germ of it corresponds to.
I'm interested in the case where two continuous functions and get mapped to the same germ (same element of ). And I want to show that this happens when there is some open so that and . Let's draw part of a cone (for some set ) under the diagram in the situation where there is some open containing :
diagram
We have that and . That implies that . Similarly,
Let's now assume that we have continuous functions and such that . Thus, . Therefore, .
So, we see that if and restrict to the same function on some open subset of that contains , then they get mapped to the same element by any cone under . In particular, they must correspond to the same germ of !
It remains to show that if two continuous functions and correspond to the same germ of , then they must restrict to the same function on some open containing . I'll leave that for next time, though.
Good work! We'll be able to draw a lot of lessons from what you're doing now, because many of the ideas you're coming up with now (and will come up with next time :upside_down: ) apply in far more general situations than the one you're considering here. But I won't distract you with those lessons until you're done!
I have to say a huge thank you to @David Egolf and @John Baez (and multiple others) for this wonderful discussion here on Topos Theory. I am not yet at the point of getting into these blogs like David has, but I am slowly beginning to catch hints of Topos Theory in some of the readings/investigations I have been doing. At some point I think I shall converge back to this discussion but am very thankful it is on this Zulip for future reference. At any rate, I follow along with great curiosity in silent reflection of these points!
Next, I want to show that if two continuous functions and (with and being open sets containing ) correspond to the same germ of , then they must restrict to the same function on some open containing .
To get there, I first want to think conceptually about what it means for our set of germs to be (part of the data of) a colimit of . The full data of the colimit of is some natural transformation . By definition of a colimit, this cocone of is initial in the category of cocones of . That is, for every other cocone (where is the functor constant at some set ), there is a unique natural transformation so that .
Here's a picture illustrating the situation:
picture
The triangle diagram lives in the category of functors from to , together with natural transformations between them. The arrow at the bottom is a function; it is a morphism in . Note that a natural transformation from one constant functor to another is induced by a morphism from the object the first functor is constant at to the object the second functor is constant at.
In this diagram, I think can be viewed as an "observation" of the functor . I think our goal is to find the "most informative observation" of the functor having a target of some constant functor . Indeed, I think that is the "most informative" observation of , in the sense that any other observation of it can be computed as for some .
Let's think about this a bit more using components. Let be the -th component of , where is some open subset of containing . The -th component of is just , and so the commutativity of our diagram implies that . Taking some particular , so that is a continuous function defined on the open set containing , we learn that . So, given the germ that belongs to, namely , we can compute the observation using some . For this reason, I think it makes sense to say that the germ of a particular function at is the "most informative" observation of that function "locally about ".
Next time, I want to make use of this intuition to show that if two continuous functions and correspond to the same germ of , then they must restrict to the same function on some open containing . To do that, here's my current rough plan:
I don't think a proof by contradiction is necessary here, but you can try it and then perhaps straighten it out to a direct proof.
John Baez said:
I don't think a proof by contradiction is necessary here, but you can try it and then perhaps straighten it out to a direct proof.
This makes me want to find a direct proof! But I'll start out with the (attempted) proof by contradiction, and see what happens.
Let be the proposed colimit of . And to obtain a contradiction, assume we have two continuous functions and (with and open sets containing ), such that:
We aim to construct a cocone (for some set ) of so that there is no satisfying . That would show that can't possibly act in this way if we want it to be a colimit.
To construct this cocone, my plan is to use what I think is supposed to be the actual colimit. We define an equivalence relation on real-valued functions defined on an open set of containing . We decree that exactly if there is some open set containing on which . Then, we form the set by having one element per equivalence class. I'll call the element of corresponding to the equivalence class of by the name . Then, we let .
Notice that if there is no open containing where and restrict to the same function, then and are not equivalent. That means that . We will aim to use this in a minute to obtain a contradiction.
There's a couple things to show first, though:
(I'd love to finish this off today, but I think I'll need to rest up and come back to this hopefully tomorrow!)
Yes, I think an explicit construction of the colimit as a quotient by an equivalence relation is the right way!
Here is a very basic example with finite sets. The bottom right corner is the colimit of the diagram consisting of the three other corners. The square brackets enclose equivalence classes.
The way I like to see it is in two steps: first we take the disjoint union (the bullets ), and then we glue things together by adding wires (labeled ). The equivalence classes correspond to the connected components of the resulting graph.
image.png
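Here is a rough computational sketch of that two-step picture (the set names and identifications below are made up purely for illustration): form the disjoint union, glue along the wires with a union-find structure, and read off the connected components as the equivalence classes.

```python
# A toy sketch of the two-step colimit construction for finite sets:
# disjoint union first, then gluing; the elements and identifications
# below are invented for illustration.
def colimit(sets, identifications):
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path compression
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    # Step 1: disjoint union, tagging each element by the set it came from.
    for name, elems in sets.items():
        for e in elems:
            parent[(name, e)] = (name, e)
    # Step 2: glue along the "wires".
    for a, b in identifications:
        union(a, b)

    classes = {}
    for v in parent:
        classes.setdefault(find(v), []).append(v)
    return list(classes.values())

# Two 2-element sets, glued along one element each: three equivalence classes.
print(colimit({'A': [1, 2], 'B': [1, 2]}, [(('A', 2), ('B', 1))]))
```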
That's a nice example! Your visualization of the "gluing" is very cool!
I think we're in the home stretch now. The next thing I want to do is to show that the following really defines an equivalence relation on real-valued functions mapping from open subsets of that contain :
For any , , because .
If and , then we want to show that . Since , there is some open containing so that . And since , there is some open containing so that . Now, is open, and is a subset of both and . So on , we have that and . Hence and thus .
If , we want to show that . Since , there is some open containing so that . That implies that and hence .
We conclude that is indeed an equivalence relation on the set of real-valued functions having some open domain of that contains .
Next, I want to show that is a cocone of . Recall from above that is defined as , where is the equivalence class of according to the equivalence relation .
To show that is a cocone of , it suffices to show that any naturality square of commutes. Given a morphism in (where and are open subsets of containing , with ), here is the corresponding naturality square:
naturality square
To show this square commutes, it suffices to show that . At a particular element of , say , that means that .
To show this is true, it suffices to show that . Since is an open subset of containing , and , we conclude that . Hence for any , and so for any and .
We conclude that an arbitrary naturality square of commutes, so that is a natural transformation , and thus a cocone.
Now, we are in a good spot to demonstrate a contradiction. Recall that we assumed that:
We will now show that there is no natural transformation such that . (Which would be a contradiction, because is supposed to be a colimit). For this equation to hold, it must hold at every component. In particular, we must have and . Noting that every component of is just , we have that and .
We also know that . Using all this, we conclude that . But this is a contradiction: by definition of and we can't possibly have . ( would imply that , which would imply that there is some open set containing in the intersection of the domains of and where they restrict to the same function - and we know this is false by the assumptions we have placed on and ).
We conclude that if is to be the colimit of , then two functions with the same germ at must have some open neighborhood in the intersection of their domains containing such that they restrict to the same function!
I'll pause here for now, and plan to focus on some examples of germs next time!
In case you are interested, here is, I think, a more direct proof. It amounts to showing that the quotient you suggest satisfies the universal property of the colimit.
spoiler
That is interesting! I'm trying to understand what you just wrote...
I think there might be a typo. If I understand correctly, we have that , but the morphism from to is called in the diagrams above. I would have expected it to be called something more like , as I think it corresponds to restricting from to .
So, I think you start out by describing the cocone (to use the notation I was using above), where the component sends each element of to its equivalence class under .
Then, to show this cocone satisfies the universal property of the colimit, you introduce another cocone having -th component .
Next, you define an . Here, is the disjoint union of the sets as varies over open sets containing . Because the disjoint union is the coproduct in , a collection of functions induces a function .
You then note, I think, that if then . If , that means there is some containing where . In this situation, this diagram commutes:
diagram
Since , . By commutativity of the diagram, this implies that . Since is induced using the universal property of disjoint unions, this implies that indeed .
Now, . We want to use to induce a natural transformation from to . To do this, we just need a morphism .
At this point, I think we want to use something like the "universal property of quotients" to induce our . I don't remember how that stuff goes very well right now... But I assume the basic idea is to set .
We have to show this is well-defined. If , then and , but since , we learn that . So, is indeed well-defined.
I think it just remains to show that :
To show that induces a morphism of cocones, we need to show that for all . For some , we have , as desired.
Finally, we want to show that is the unique morphism that induces a morphism of cocones from our cocone with tip to our cocone with tip .
So, we just saw that we need for all . Since projects to equivalence classes, this means we need . As varies, we'll obtain this condition for all equivalence classes. So, I think for all is forced, if is to be a morphism of our cocones.
I think that means we can conclude that does indeed induce the unique morphism from our cocone with tip to our cocone with tip . We conclude that our cocone with tip is indeed initial, and so it is indeed the colimit of our diagram!
Thanks, @Peva Blanchard, for working out the direct proof! I found it interesting and helpful to review. :smile:
Starting to move in the direction of examples of germs, there is a nice example in the book "An Introduction to Manifolds" (by Tu), on page 12:
The functions with domain and with domain the open interval have the same germ at any point in the open interval .
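As a quick symbolic check of that kind of example (the particular pair below is my own choice, so it may not be exactly the one in Tu's book): a rational function defined away from a point and a power series defined only on an interval can agree on that interval, and so share a germ at each of its points.

```python
# Comparing 1/(1 - x), defined for x != 1, with the geometric series, defined
# on (-1, 1): they agree at sample points of (-1, 1), illustrating that two
# functions with different domains can have the same germ at a point.
import sympy as sp

x, n = sp.symbols('x n')
f = 1 / (1 - x)
g = sp.summation(x**n, (n, 0, sp.oo))    # evaluates to a Piecewise expression
for val in [sp.Rational(-1, 2), sp.Rational(1, 3), sp.Rational(9, 10)]:
    print(val, f.subs(x, val), g.subs(x, val))   # the two values always agree
```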
I think there might be a typo. If I understand correctly, we have that , but the morphism from to is called in the diagrams above. I would have expected it to be called something more like , as I think it corresponds to restricting from to .
Oh you're right! Yes I made a typo in the diagrams.
Before moving on to the next puzzle, I'd like to try and visualize a germ for the presheaf , which sends each open subset of to the set of continuous real-valued functions .
To visualize a germ at , (which is an element of ), I'll draw a little cartoon of a bunch of continuous functions (defined on different open sets containing ) that correspond to the same germ. That is, they become the same function when restricted to a "small enough" open set containing .
I'd be happy to talk more about examples of germs (e.g. in the continuous vs smooth vs analytic cases), but I don't really know how to go about comparing those. So I'll move on to the next puzzle. But if you have something you'd like to say regarding examples of germs, please feel welcome to share your thoughts here!
Here is the next puzzle, together with some context:
Show that with this topology on the map is continuous.
Context:
I am still working to understand the proposed topology on .
David Egolf said:
I'd be happy to talk more about examples of germs (e.g. in the continuous vs smooth vs analytic cases), but I don't really know how to go about comparing those. So I'll move on to the next puzzle. But if you have something you'd like to say regarding examples of germs, please feel welcome to share your thoughts here!
One very important fact about analytic germs is that you know how to name all of them! In fact you probably learned how in a calculus course.
Thanks, @Kevin Carlson for your comment! It's been a while since I took a calculus course, and I can't remember if we ever used the word "analytic". But let me see if I can figure out what you're hinting at.
I'll be referencing John Baez's remark above:
Remember, a function from an open set of to is analytic if at each point it has a Taylor series with a positive radius of convergence.
One way to put an equivalence relationship on a set is to use a function where is some property of . Then we let .
If is the set of real-valued analytic functions, with each element of defined in some open set containing , I want to try setting to be the Taylor series of about . Then I'm hoping that the equivalence relationship induced by is the same as the equivalence relationship "belongs to the same germ". If that works out, I am hoping that would imply that the analytic germs at are in bijection with the Taylor series about that converge in some open subset of containing .
If , that implies that and have the same Taylor series about . Because and are analytic, and both have a positive radius of convergence about . I think that means that and become equal when restricted to this region of convergence about . And this restricted function is still analytic, so I think this implies that and belong to the same analytic germ at .
If and belong to the same analytic germ at , then they are both analytic and have some common analytic restriction to some open subset about . That restriction, being analytic, can be expressed as a Taylor series in some region with a positive radius of convergence about . And so, and have the same Taylor series about when we are "close enough" to . I am hoping that implies that and must have the same Taylor series about , so that .
Well, I feel rather shaky on this stuff. Any corrections or clarifications would be appreciated! :smile:
That's the idea! Sounds like you're still just a little stuck on whether having the same Taylor series on a small enough neighborhood of a point means you have the same Taylor series at that point. But there's no difference between "my Taylor series near " and "my Taylor series at ", because, recall, the Taylor series is calculated by calculating all the derivatives of at . So if two analytic functions agree near , they have the same Taylor series there. And conversely, since you compute the functions by actually plugging into the Taylor series where it converges! Hopefully that wasn't handing you anything it would've been more fun to figure out on your own, just trying to help remind you of some old calculus stuff.
Thanks for clarifying! That makes sense: since a Taylor series is computed entirely using information "extremely close" to (by computing ), if two analytic functions agree on some open set containing , they must have the same Taylor series at . (All the derivatives are computed using limits which only care about behaviour as we get "really close" to : we'll eventually get inside the open set where these two functions agree during the limiting process). In particular, if two analytic functions obtain the same Taylor series at when we restrict both of them to some open set about (which means they agree on some open set containing ), then the two original analytic functions must have the same Taylor series at .
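As a small sanity check of this (the two functions below are my own choice of example): two analytic expressions with the same Taylor series at 0 really do agree near 0, and hence define the same germ there.

```python
# sin(x)**2 and (1 - cos(2x))/2 have the same Taylor series at 0; sympy also
# confirms their difference simplifies to 0, so they agree near 0 (in fact
# everywhere) and therefore have the same germ at 0.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)**2
g = (1 - sp.cos(2*x)) / 2

print(sp.series(f, x, 0, 8))   # x**2 - x**4/3 + 2*x**6/45 + O(x**8)
print(sp.series(g, x, 0, 8))   # the same expansion
print(sp.simplify(f - g))      # 0
```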
Yes! Now we have everything to explain why this function is not analytic.
spoiler
David Egolf said:
Context:
- is the (disjoint?) union of all the sets of germs as varies, where is the presheaf which sends each open subset of to the set of continuous functions
Yes, the disjoint union (aka "coproduct"). You'd never want to say two germs at two different points are equal.
If my blog post left that unclear, I should fix it.
If you haven't thought much about analytic functions, it might help to know that @Peva Blanchard is giving the standard example to show how the concept is a bit subtle. This is a function that has an th derivative at for all , which is still not analytic. In fact all these derivatives are zero, yet the germ of this function at is nonzero!
Maybe it's good to think about something much less weird:
Puzzle. Find a function that vanishes at , along with its first million derivatives:
but is nonzero for all .
Peva's example is much stranger, because we don't stop at a million or any finite number - all the derivatives of this function are well-defined for all , and they all vanish at , but this function is nonzero for all .
The point of Peva's example is that if you have a function that is infinitely differentiable, its germ at x = 0 can contain more information than all its derivatives at x = 0. But for analytic functions, all the information about the germ is contained in the derivatives - since you can recover the function from its power series, at least in some neighborhood of x = 0.
Thanks to both of you for your comments! I'm taking a little break today from this thread, but I hope to return to it tomorrow. The idea that a smooth function can have more information in its germ at a point (in addition to the values of all its derivatives at that point) is interesting, and I look forward to responding in more detail to your comments soon.
John Baez said:
Maybe it's good to think about something much less weird:
Puzzle. Find a function that vanishes at , along with its first million derivatives:
but is nonzero for all .
The first idea that comes to mind for me is to try for big enough. Each derivative we take reduces the exponent of by . I think this implies that the first derivatives are all zero. (Eventually though, after we take derivatives, we get which is non-zero at .) I think setting gives us a function that meets the requirements of the puzzle.
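Here is a quick symbolic check of that idea, with a much smaller exponent (7, so only the function and its first six derivatives vanish) just to keep the output readable; an exponent of a million and one (or more) answers the original puzzle.

```python
# x**7 vanishes at 0 together with its first six derivatives, but is nonzero
# for every x != 0.
import sympy as sp

x = sp.symbols('x')
f = x**7
print([sp.diff(f, x, n).subs(x, 0) for n in range(8)])
# [0, 0, 0, 0, 0, 0, 0, 5040]: the last entry is the 7th derivative, 7! = 5040.
```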
John Baez said:
David Egolf said:
Context:
- is the (disjoint?) union of all the sets of germs as varies, where is the presheaf which sends each open subset of to the set of continuous functions
Yes, the disjoint union (aka "coproduct"). You'd never want to say two germs at two different points are equal.
John Baez said:
If my blog post left that unclear, I should fix it.
I was fairly sure we don't ever want to consider two germs at different points to be equal, but I started slightly worrying about this issue because the symbol was used instead of the symbol in the blog post:
notation
Actually, I suppose that each of is only defined up to isomorphism if we just require each to be (part of) a colimit of an appropriate diagram. From that perspective, it seems bad to take the union of these as varies, because the union is an operation that cares about the equality of elements of the different sets we are taking a union of. (And we can change which elements in different are equal by swapping out isomorphic copies of some ).
Peva Blanchard said:
Yes! Now we have everything to explain why this function is not analytic.
spoiler
Huh! I suppose this function "takes off" from zero so slowly that all its derivatives at don't even notice! So we have two smooth functions (this one, and the function constant at zero) that have the same Taylor series at , but there is no open set containing in which those two functions restrict to the same function!
In this example, we see that computing all the derivatives at a point of a smooth function doesn't always determine uniquely which smooth germ of that function belongs to.
I find myself wondering what additional information (in addition to the value of all the derivatives) is needed to determine the germ that a smooth function belongs to at some point . I suppose we'd like to find some information that determines on some small enough neighborhood of . We just saw that all the values of the derivatives of at aren't always going to be enough to do this! So we need some additional information.
But I'm unsure how we could go about discovering what that additional information is.
David Egolf said:
John Baez said:
Puzzle. Find a function that vanishes at , along with its first million derivatives:
but is nonzero for all .
I think setting gives us a function that meets the requirements of the puzzle.
Yes, that's the best solution of this puzzle!
David Egolf said:
We just saw that all the values of the derivatives of at aren't always going to be enough to do this! So we need some additional information.
But I'm unsure how we could go about discovering what that additional information is.
In a sense the difficulty of this question is why the concept of 'germ' is so useful: the germ of a function is the tautological answer to this question!
The question is interesting.
I'm wondering how we could "measure" the "complexity" of the set of germs at . For instance, the analytic germs at seem to form a vector space of countable dimension (I think).
It's not exactly of countable dimension, in the usual linear algebra sense, since the space of infinite sequences has uncountable dimension (you can't get any Taylor series with infinitely many nonzero coefficients as a linear combination of $$x^n$$s!) But it's "countable-dimensional" in the functional analysis sense, which is that there's a countable "basis" when you allow for convergent infinite sums from that basis, or similarly, the linear span of that countable "basis" is dense. So one way of taking David's interesting question is, whether we can find an explicit basis for the germs of smooth functions at a point. I don't know the answer but I have an intuition that we cannot!
It is tricky because this requires (at least) a topology on the set of germs. There is the coarsest topology making the projection continuous. But, this is not enough: the induced topology on the subset is trivial (I think).
We would need to "topologize" the sheaf of continuous/smooth/analytic functions: each is a topological space (instead of just a set) for every open subset .
Mmh ... this is going too far out of my reach, so I'll stop there.
If you want a vector space of germs that's of countably infinite dimension, the nicest choice is the sheaf of polynomial functions on the real line, or the complex plane...
... or any algebraic variety, which is roughly a space described by a bunch of polynomial equations, like the space of solutions of . But people call polynomial functions on algebraic varieties regular functions.
Algebraic varieties are the traditional object of study of algebraic geometry, and the sheaf of regular functions on an algebraic variety became the star of algebraic geometry: people call it .
You can define algebraic varieties over various fields, but the most traditional case uses . For any 'smooth' -dimensional complex algebraic variety, the germ of its sheaf of regular functions at any point is isomorphic to the germ of the sheaf of polynomial functions on
After algebraic varieties were quite well understood and Grothendieck started chafing at their limitations, he defined the concept of 'scheme', which is roughly a topological space equipped with a sheaf that acts like the sheaf of regular functions.
I'm not giving a precise definition here, but it's very notable that the concept of scheme explicitly involves the concept of sheaf! So modern algebraic geometry, which uses schemes, is heavily reliant on sheaves.
If we start exploring sheaves that are like the sheaf of analytic functions, we are moving in a somewhat different direction. There's an important concept of complex manifold, which is a space covered by 'charts' that are copies of for some , with transition functions that are analytic. Any such manifold has a sheaf of analytic functions on it... and the germ of this sheaf at any point is isomorphic to the germ of analytic functions at any point of .
Just as we can have algebraic varieties that aren't smooth, like the space of solutions of (which has a sharp 'cusp' at the origin), we can also define complex analytic varieties, which generalize complex manifolds but don't need to be smooth. I know almost nothing about these, but they're again defined using sheaves.
David Egolf said:
Peva Blanchard said:
Huh! I suppose this function "takes off" from zero so slowly that all its derivatives at don't even notice!
Right. But it's very peculiar. It's like starting your car so smoothly that at first you don't accelerate at all.
For the nth derivative to become bigger than zero, the (n+1)st derivative needs to be bigger than zero first... and here that happens for all n, yet all these derivatives start at zero.
How does it work?
As you increase from 0 to this function goes from 0 to 1. Its first derivative goes up to about 1/2 and then goes down. But before that, its second derivative goes up to 3... later it goes down. And before that, its third derivative goes up to about 20. And before that, its fourth derivative goes up to about 200. And so on.
So while you may feel the function takes off very gently, because all its derivatives are zero at , in fact there's a huge flurry of activity going on for arbitrarily small .
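One can get a numerical feel for this flurry as well. Again assuming the function is f(x) = exp(-1/x) for x > 0, this sketch estimates the maximum of its first few derivatives on (0, 1):

```python
# Estimate the max of the nth derivative of exp(-1/x) on (0, 1) by sampling;
# the maxima grow quickly with n even though every derivative tends to 0 at 0.
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1/x)

for n in range(1, 5):
    dn = sp.lambdify(x, sp.diff(f, x, n))
    xs = [k / 5000 for k in range(1, 5000)]    # sample points in (0, 1)
    print(n, max(dn(t) for t in xs))
# The printed maxima increase rapidly with n, in line with the figures above.
```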
Looking at those examples of germs was interesting! And it's cool to learn that sheaves get used in all kinds of places. It's always a nice bonus when learning about one thing makes it a bit easier to learn some other things!
I want to return my attention to the next puzzle, which has to do with putting a topology on , the set of all our germs for our sheaf of continuous real-valued functions on . I'm still trying to understand the topology described in the blog post - but I'll write out my understanding so far.
We'd really like to have a "germ bundle" that sends a particular germ to the point that it is associated with. (Each germ in is associated with exactly one point , as it belongs to exactly one of as ranges over ). If we could construct a bundle from a sheaf in this way, then we'd be able to think about sheaves from the perspective described here ("Sheaves in Geometry and Logic", page 64):
Alternatively, a sheaf on can be described as a rule which assigns to each point of the space a set consisting of the "germs" at of the functions to be considered, as defined in neighborhoods of the point ... Viewed in this way, the sheaf is a set which "varies" (with the point ) over the space .
Now, for to really be a bundle, it needs to be a continuous function. To talk about its continuity, we need to put a topology on . However, referencing pages 84-85 of "Sheaves in Geometry and Logic", this isn't the only function that we want to be continuous when we select an appropriate topology for .
We also have some other interesting functions which we'd like to be continuous, so that they can be sections of our bundle. Given some , so in our case in , we define a function ("g" refers to "germ") defined by , where by I mean the germ of at the point . Note that , so that if was continuous, it would provide a section of our bundle . In this way, we are hoping to associate each element of a sheaf set (which is a set of continuous real-valued functions in the case of our ) to a corresponding section of our germ bundle . To make this happen, we need to choose the topology on appropriately, so that is continuous (for each as varies).
I'll stop here for today. Next time, I'm planning to look at the minimal collection of open sets we need to put in our topology for so that becomes continuous. Then I think I want to check that a given still has a hope of being continuous, even after we've declared those subsets of to be open.
John Baez said:
David Egolf said:
We just saw that all the values of the derivatives of at aren't always going to be enough to do this! So we need some additional information.
But I'm unsure how we could go about discovering what that additional information is.
In a sense the difficulty of this question is why the concept of 'germ' is so useful: the germ of a function is the tautological answer to this question!
By the way, I'd like to know any sort of answer to this question. I was optimistic when I saw this question on Math Stack Exchange:
but the answers were completely useless (except for one answerer who told the original questioner that he was asking about the germ of a smooth function: he hadn't known this concept had a name). I think there should be interesting things to say about this question even if a fully satisfying answer is not known.
One very rough idea that comes to mind: instead of taking the limit of something like as approaches , maybe we could consider "taking a limit" of the truth values of a bunch of propositions like "" as approaches 0.
When for , we get a sequence of truth values that looks like: true, true, true... as we assess the truth value of "" as approaches . By contrast, if for all , then the sequence of truth values we get from "" is false, false, false... as we assess the truth value of "" as approaches zero.
I'm not sure how useful this is... I was just trying to think of "measurements really close to 0" that determine that the zero function is different from our "slow takeoff" function which at is zero and has all derivatives equal to zero.
John Baez said:
So while you may feel the function takes off very gently, because all its derivatives are zero at , in fact there's a huge flurry of activity going on for arbitrarily small .
I also wonder how one could formalize this "huge flurry of activity". Maybe that could be helpful for distinguishing these functions from one another using some kind of measurement involving a limiting process which approaches ?
Here is a baby step in that direction.
Let be the sheaf of smooth functions on the unit interval , and be any sub-sheaf of .
Given a smooth function , I want to consider its derivatives all at once. So we can consider the -fold power of , namely, the sheaf . We have a natural transformation whose component over an open subset is given by
This induces a linear function on the germs at an arbitrary point
We have another linear function given by evaluating a function at . Then we have a linear function
which maps the germ of a function at to the sequence of values of its derivatives at . Finally, we can consider the kernel of this linear function.
It seems that is injective, while is surjective (hence too).
When is a sub-sheaf of the sheaf of analytic functions, the kernel is trivial, . This is because the germ of an analytic function at is entirely determined by the values of its derivatives at the point .
Question: Is the converse true? I.e., if for all then is a sub-sheaf of the sheaf of analytic functions?
We can go further and try to describe the kernel for the sheaf of smooth functions.
To build a germ in , we must first choose a sequence with an open neighborhood of , and a smooth function such that . This data already gives quite a lot of freedom. The tricky condition is to ensure that:
A strategy would be to start from something that does not care about this condition, and iterate so that in the limit the tricky condition holds. (I'm being hand-wavy here because I haven't figured it out yet)
I had an idea that seems related to @Peva Blanchard's. There's a sheaf of smooth real-valued functions on , and its germ at is some vector space . We want to understand this space. Peva has described a map, I'll abbreviate it as
sending the germ of any smooth function to the list of derivatives
This is well-defined and this is the 'understandable aspect' of . So we really want to understand the kernel .
This raises the question: can we extract any real numbers from the germ of a smooth function at in a linear way, other than by taking derivatives of that function at ?
So:
Question. Can we explicitly describe any nonzero linear map ?
Since we know is infinite-dimensional, there exist infinitely many linearly independent linear maps . But this does not imply that we can get our hands on any of them, because it's possible that my last sentence can only be proved using the axiom of choice (or some weaker nonconstructive principle)! There are some famous examples of this frustrating situation in analysis.
I've asked this question on MathOverflow and will see if it gets any useful answers.
By the way, there's a rather surprising theorem related to all this:
Theorem. The map sending the germ of any smooth function to its list of derivatives is surjective.
I think to get the idea for how to prove this, it's enough to solve this
Puzzle. Find a smooth function whose nth derivative at is .
At first you might think this is impossible, since the power series
has zero radius of convergence. But such a function does exist! As a clue, I'll say that to construct it, it helps to use the fact that there exists a smooth function that's zero for and , and positive for .
By the way, I know I'm digressing from the main theme of this discussion, which is sheaves. But it's hard to resist, because I've spent a lot of time teaching analysis, and the difference between the sheaf of smooth functions and the sheaf of analytic function is pretty interesting, not only as example of how different sheaves work differently, but because mathematicians and physicists spend a lot of time working with smooth and analytic functions.
John Baez said:
Question. Can we explicitly describe any nonzero linear map ?
You could take any sequence tending to and ask about the limit of a sequence derived from the values of the function at those points. For instance, you could ask about the limit of . The hard part is guaranteeing that such a functional will converge and isn't simply a function of the derivatives at . Or you could ask about the relative measure of points at which the function is 0 on a sequence of intervals tending to 0. That is, take the limit as of . This is bounded, at least, but there's again no guarantee of convergence (at least a priori; maybe there's some slick analytic argument proving that this converges).
Morgan Rogers (he/him) said:
John Baez said:
Question. Can we explicitly describe any nonzero linear map ?
You could take any sequence tending to and ask about the limit of a sequence derived from the values of the function at those points. For instance, you could ask about the limit of . The hard part is guaranteeing that such a functional will converge and isn't simply a function of the derivatives at .
This raises a good issue, namely, how fast a smooth function f can grow for small x if all its derivatives vanish at x=0. Your proposed quantity will be finite if for all such f there exists C with
for all large enough n.
This seems unlikely since all I know is that for all such f and all natural numbers k there exists C with
for all large enough n. This follows from the first k derivatives of f vanishing at x=0.
Unfortunately there is no slowest growing function that grows faster than all polynomials! So we can probably show no candidate like what you suggested can work: it'll either be zero for all smooth f whose derivatives all vanish at 0, or infinite for some such f.
I'm not following in detail, but I just wanted to highlight a strategy that I noticed Peva Blanchard and John Baez use above. (Which I thought was really cool!) We're interested in information besides the derivatives of a function that can help us determine which germ at a point a smooth function belongs to. The strategy - to my understanding - goes like this:
I don't remember seeing this strategy before, and I like it!
Hmm so we need something that converges for all such (so that is well-defined) but isn't forced to be . Well that's fun to think about. I'll let you get back to sheaves now :)
In the mathoverflow question, John Baez introduces a vector space:
There's a sheaf of smooth real-valued functions on , and its germ at is some vector space .
Now, we saw earlier that (which is for and for ) and the zero function are not in the same germ. That means that and are different elements of . But, they have the same derivatives, so that ( is defined in that question to be the function that takes the germ of a smooth function to the derivatives of that function). This means that is in the kernel of .
I'd like to define a non-zero linear real-valued map from the kernel of . To define such a map, I think it suffices to specify the value of the map on each element of a set of basis vectors for . I am hoping that we could find just two linearly independent vectors in and say what does to those, and then just let send all vectors that aren't a linear combination of those two to zero.
I think we already have one nonzero vector in . If we could just find another one, that is linearly independent from this one, maybe we could construct an from that? So, I'm wondering if we can think of more examples of pairs of smooth real-valued functions (defined on some open set containing ) that have the same derivatives at , but don't belong to the same germ at zero.
(I wonder if also has all derivatives equal to zero at zero, and if it belongs to a different germ from ...)
You can multiply by any function which is bounded at to get another potentially linearly independent function, I think.
I suppose that defining some in the way I sketched above wouldn't really help us that much. That's because such an would assign a real number to each germ at zero, but it wouldn't directly provide this "measurement" for smooth functions. So, although such an I think could tell certain germs apart (which can't be distinguished using derivatives), it seems like we'd need something more to determine which smooth functions with the same derivative values at a point don't belong to the same germ at that point.
David Egolf said:
I'd like to define a non-zero linear real-valued map from the kernel of . To define such a map, I think it suffices to specify the value of the map on each element of a set of basis vectors for . I am hoping that we could find just two linearly independent vectors in and say what does to those, and then just let send all vectors that aren't a linear combination of those two to zero.
How does this work? Think about a simpler case: trying to define a linear function that maps to , to , and all vectors that aren't a linear combination of those two to zero. What should be?
Hmm, well we want to be -linear. So, . Since isn't a linear combination of and , we set . So we find .
Okay, that's a linear function, but it's not doing what you said. You said all vectors that aren't a linear combination of the first two should be sent to zero. But is not a linear combination of and , and is not zero.
What you in fact did is choose one vector that's not a linear combination of the first two, and decree that of it is zero. You chose the vector . If you'd chosen the vector , for example, and decreed that of that is zero, you'd get a different linear map .
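A tiny numerical version of this point (all specific vectors and values below are my own, chosen just for illustration): prescribing L on the first two vectors does not determine L; it also depends on which third vector we use to complete them to a basis.

```python
# Two different completions of {e1, e2} to a basis of R^3 give two different
# linear maps L, even though both satisfy L(e1) = 2 and L(e2) = 5.
import numpy as np

e1, e2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
prescribed = [2.0, 5.0]                 # the chosen values L(e1), L(e2)

def make_L(third_vector):
    B = np.column_stack([e1, e2, third_vector])   # the completed basis
    coeffs = np.array(prescribed + [0.0])         # send the third basis vector to 0
    return lambda v: coeffs @ np.linalg.solve(B, v)

L_a = make_L(np.array([0, 0, 1.0]))
L_b = make_L(np.array([1.0, 1.0, 1.0]))
v = np.array([0, 0, 1.0])
print(L_a(v), L_b(v))    # 0.0 versus -7.0: same prescribed data, different maps
```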
Returning to the actual problem, it's this arbitrary choice that makes defining a nonzero linear function from to so difficult! And is not just 3-dimensional, it's infinite-dimensional, so the choice requires a lot more thought - and it seems nobody knows how to do it, except by resorting to the axiom of choice.
The problem is of this sort.
You have a vector space and you're trying to define a nonzero linear map . You know a couple of vectors (you might know more) and you say you want
for some numbers .
You can do this if and are linearly independent. By a general theorem, which relies on the axiom of choice, we know such an exists. If many such exist. But getting your hands on one is an entirely different matter!
You can get your hands on one if you can find a linear subspace such that
1) no vector in is a linear combination of and and
2) Every vector in is a linear combination of and some vector in .
Then there is a unique such that
and
for all . At this point we've gotten our hands on . But depends on our choice of .
How do we know there always exists obeying conditions 1) and 2)? There's a theorem saying that there exists a basis of that starts with and and continues with some other vectors . Then we can define to be the space of all linear combinations of these other vectors .
However, to prove this theorem you need a version of the axiom of choice: in general there's no 'procedure' to choose these vectors . You chose the vector because you only needed one and it was staring you in the face. But in our actual example the dimension of is uncountably infinite and - to the best of my knowledge - nobody knows a basis for it. That's why my MathOverflow problem seems to be hard.
There might still be some other way to define a nonzero linear map .
This discussion may seem like a huge digression from sheaves, and in a way it is. But the issue of how relying on the axiom of choice makes it difficult to get your hands on things you want is a big deal in analysis, so it's nice that we've bumped into an example. And a good way to do math without the axiom of choice is topos theory, which is what we're supposed to be learning here!
Indeed, you'll notice that the really good answers about what version of the axiom of choice you need to prove that every vector space has a basis come from Andreas Blass. Blass is an excellent category theorist, and I think he knows topos theory quite well.
John Baez said:
What you in fact did is choose one vector that's not a linear combination of the first two, and decree that of it is zero. You chose the vector . If you'd chosen the vector , for example, and decreed that of that is zero, you'd get a different linear map .
Ah! I wondered what I was missing. Thanks for pointing that out!
If I'm understanding correctly, you're saying (among other things):
But, potentially, although there's no procedure in general for completing a basis for an arbitrary infinite dimensional vector space, there could maybe be such a procedure in this particular case (?). It just might be hard to find (if it exists) I guess!
On a related note, it's a weird feeling to know that many examples of a certain kind of thing exist, but at the same time we may not be able to name any examples :astonished:!
David Egolf said:
On a related note, it's a weird feeling to know that many examples of a certain kind of thing exist, but at the same time we may not be able to name any examples :astonished:!
This is actually quite common in mathematics, and there are even many situations where we know that the probability of a number having some property is 1 yet we don't know if most familiar numbers have that property (though surely they must).
See for example the result about Khinchin's constant.
It's sort of like how knowing there are lots of ants doesn't mean you know any of their names: they are numerous yet anonymous.
David Egolf said:
If I'm understanding correctly, you're saying (among other things):
- to specify what does to all the vectors in requires making some choices: you have to choose what it does to enough vectors besides our and , being careful to not get any contradictions resulting from these choices, while ensuring that the choices we make actually specify on all of
- if we were able to complete our to some basis for , then we could safely set for all the other basis vectors besides and in
- however, there's no "procedure" in general to get a basis for - so we're stuck for now.
Right - good summary.
There are other ways to define linear maps than saying what they do on each member of a basis, and often they are easier to work with, e.g. taking the derivative at x is a linear map from germs of smooth functions at x to real numbers, and we don't need to pick a basis to define it! But I don't see how to use an approach like that for this problem, either. It may be lack of cleverness, or it may be a deep issue.
But, potentially, although there's no procedure in general for completing a basis for an arbitrary infinite dimensional vector space, there could maybe be such a procedure in this particular case (?). It just might be hard to find (if it exists) I guess!
Right. I'd say we just don't know yet. And we may never know.
John Baez said:
See for example the result about Khinchin's constant.
It's sort of like how knowing there are lots of ants doesn't mean you know any of their names: they are numerous yet anonymous.
There is another example that I find fascinating. The set of computable real numbers is countable. This implies that almost all real numbers are not computable: there is no (finitely described) algorithm to enumerate their digits. To put it in more colloquial terms: there is no "reasonable" way to poke inside them.
It does not mean we cannot define one though. For instance, Chaitin's constant is the probability that a random Turing machine will halt. This constant is well defined, and we know it is not computable.
Another related example: we say a real number is normal in base 10 if in its decimal expansion every string of n digits appears with frequency , which is what you'd expect of a 'random' number. More generally we can talk about normal numbers in any base. A number that's normal in every base is called uniformly normal.
The set of numbers that are not normal in base has measure zero, and the countable union of sets of measure zero again has measure zero, so the set of numbers that are not uniformly normal has measure zero.
In simple rough terms: the probability that a number is uniformly normal is 1.
For this reason, and because people have actually done computer calculations to check, everyone believes , and other famous irrational numbers are uniformly normal. But nobody has been able to show this for any interesting examples.
There is some slight hope that people can show is normal in base 16 (and thus base 2), because there's a cool formula that makes it easy to compute individual base 16 digits of without computing all the previous digits. But people haven't succeeded yet.
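For what it's worth, here is the kind of empirical check people do (it proves nothing, of course): the digit frequencies in the first 10,000 decimal digits of pi all come out close to 1/10, consistent with the unproven belief that pi is normal in base 10.

```python
# Count digit frequencies in the first 10,000 decimal digits of pi, using the
# mpmath library for the high-precision value of pi.
from collections import Counter
from mpmath import mp

mp.dps = 10_010                       # a bit more precision than we need
digits = str(mp.pi)[2:10_002]         # the first 10,000 digits after "3."
counts = Counter(digits)
for d in "0123456789":
    print(d, counts[d] / len(digits))   # each frequency is roughly 0.1
```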
While we're digressing, I got this interesting email about my 'germs of smooth functions' question:
dear john baez
i hope that you don’t object to my getting in touch with you in this manner but this is not directly an answer to your query but might contain material of interest.
the appropriate functional analytic structure of the space of germs of smooth functions has been understood since the 70’s. it is a complete, conuclear convex bornological space. the class of convex bornological spaces (cbs’s) was introduced and investigated by the french-belgian school (waelbroeck, buchwalter, hogbe-nlend) and is in a certain sense dual to that of locally convex spaces (lcs’s)—the role of the topology is taken by the bounded sets. this duality can be formalised in terms of category theory—the category of complete cbs’s (lcs’s) is the ind category (pro category) in the sense of grothendieck generated by that of Banach spaces. further the dual of each lcs has a natural cbs structure and the duals of cbs’s are lcs’s.
the bounded sets of the space of germs are defined to be the sets of such germs which are represented by smooth functions on a neighbourhood of zero which are bounded there as are the sets of their (higher) derivatives.
as mentioned above, this is a respectable space (even conuclear). for example, suitable forms of the classical results (closed graph, uniform boundedness, banach-steinhaus) hold. it has a well defined dual, which is a complete lcs. in fact it is the space of distributions with support at the origin, in other words, linear combinations of the delta distributions and its derivatives.
there is, of course a big but. there is no hahn-banach theorem for cbs’s. this means that duality theory can collapse (i.e., the dual space can reduce to the zero vector). usually (i.e., for the important function and distribution spaces), this doesn’t happen. in your case, we have something intermediate—the dual space is infinite dimensional, in a certain sense (which can be made precise), the smallest such space, but it is not large enough to separate points. in fact, the intersection of the kernels of elements of the dual is precisely the space that you describe.
i could go on but i will stop here with the hope that this helps. i would be happy to try to answer any questions you might have.
sincerely
jim cooper
In case this is too hard to understand, one thing he's saying is that we can define 'bounded' sets in the vector space I called , so we can talk about linear functions that map bounded sets to bounded sets... but the only such is zero.
I guess this means it'll be hard to find an explicit such ... though "hard to find" is a touchy-feely concept.
Another closely related thing he said that might be unclear is that the dual of the whole space of germs is the space of “distributions generated by the Dirac delta and its derivatives”, which means that we weren’t missing any nice linear functions on the space of germs of smooth functions—they’re really only the differentiation operations.
That’s a relief to know!
I feel like it's a good time to work a bit on the next puzzle, again. Recall that this involves thinking about the topology on , where is the set of germs at for our sheaf , which sends an open set of to the set of continuous real-valued functions .
There is a function that sends each germ to the point it is associated with. We would like this function to be continuous. To ensure that, we need to be open in , for each open set .
However, we're not done yet, as we want certain functions to to be continuous as well. Given some (a continuous function from to ), we define the function that acts by , where is the germ at that belongs to. We have that , so that if was continuous, it would be a section of our bundle .
How can we ensure that is continuous for all and all open ? We need to be open for every open set . Earlier, we declared that certain subsets of need to be open: namely the preimage of the open sets in under our projection mapping . Given some open subset , that means that needs to be open.
So, let us consider an open set in of this form. That is, we let for some open subset of . What is ? These are the points in that map to under . Since consists exactly of germs associated to points in , the part of that maps to under is . Therefore, . Since and are both open in , is open too. We conclude that declaring enough subsets of to be open so that becomes continuous is compatible with the "germ assignment" functions being continuous.
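In symbols (the letters here are my own choices for this sketch: p for the projection from the space of germs, W for the open subset of the base space, V for the domain of the section s, and g_s for the germ-assigning function), the computation reads:

```latex
g_s^{-1}\bigl(p^{-1}(W)\bigr)
  \;=\; \{\, x \in V \mid p(g_s(x)) \in W \,\}
  \;=\; \{\, x \in V \mid x \in W \,\}
  \;=\; V \cap W
```

and V ∩ W is open because V and W both are.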
Now, we actually still aren't done, I believe. I think we want to declare as many subsets of to be open as possible, while preserving the continuity of and (for every and for every ). (Although I'm not sure why we'd want to do this.)
Let's consider some particular , which sends to . Without knowing anything extra about , we know that in the subspace topology on these two subsets are open: (1) the empty subset and (2) the subset that is all of . Making use of the fact that is open, if we declare to be open in , then is open, and so the continuity of is not disrupted.
I'll stop here for now, but it still remains to show that declaring to be open for each (as varies) preserves the continuity of every "germ assigning function" .
(I hope to finish up thinking about the topology on soon... I recognize it's probably not the easiest thing to have a conversation around!)
For anyone trying to follow along again, here's the puzzle again:
Now that we have the space of germs for each point we define
There is then a unique function
sending everybody in to . So we've almost gotten our bundle over . We just need to put a topology on
We do this as follows. We'll give a basis for the topology, by describing a bunch of open neighborhoods of each point in . Remember, any point in is a germ. More specifically, any point is in some set , so it's the germ at of some , where is an open neighborhood of . But this has lots of other germs, too, namely its germs at all points . We take this collection of all these germs to be an open neighborhood of our point . A general open set in will then be an arbitrary union of sets like this.
Puzzle. Show that with this topology on the map is continuous.
I don't know if this helps, but:
1) In topology I tend to think visually, so I find it hard to start solving this puzzle until I draw a picture of and the open neighborhoods described here. I'd probably try to take the sheaf of smooth real-valued functions on , and try to draw one of these open sets . The picture might not be accurate, but it would somehow help me think about whether is continuous.
2) Here's an example of how it helps: in the process of thinking about this picture, I'm instantly led to remember that continuity can be studied locally. A function is continuous iff it is continuous at each point in its domain, and this in turn is true iff of every open set contained in some neighborhood of is open. We discussed the first fact earlier here somewhere, but I forget if we discussed the second fact. It comes to mind now because we're trying to show is continuous and our picture of the open sets of is a local one.
John Baez said:
I don't know if this helps, but:
1) In topology I tend to think visually, so I find it hard to start solving this puzzle until I draw a picture of and the open neighborhoods described here. I'd probably try to take the sheaf of smooth real-valued functions on , and try to draw one of these open sets . The picture might not be accurate, but it would somehow help me think about whether is continuous.
Thanks for the suggestion! I felt like I was making progress with what I typed out above, but it wasn't feeling very intuitive. Drawing a picture sounds like it may help with gaining some intuition. So, I think I'll shift over to working on this, next. (I may go back to thinking about the continuity of the "germ assigning" functions later).
Alright, let me try to draw a picture of and its open neighborhoods. The elements of are germs of at various points. And our open neighborhoods in the topology described in the puzzle are unions of the sets as varies over and as varies over the open sets of .
So, let's pick some particular to be some . If I let , then I can draw a picture of this . That seems like a place to start.
So then, here's a picture of some . The open set is indicated in red.
picture of s
Now, let's consider . For each , , the germ of at . So, is the set of germs of . Each germ at I think is roughly like a "local shape" that functions can have at . In general, a germ of a continuous function contains more information than just its derivatives at that point. But to get a picture, I'll pretend that the germ of at is determined just by the slope of at . (I'm assuming that this particular is differentiable, too).
I'll organize my drawing of , which is to be an open set of , by thinking of as having a collection of "local shapes" (germs) for each point which "hover over" each point .
Here's my attempted visualization of , which picks out the "local shape" (germ) of at each point in :
visualizing germs of s
This whole 2D region is part of . So, for each point we have a collection of shapes hovering above (and below) the -axis corresponding to different germs at that point. The blue point at some is , which in this simplified drawing is supposed to (partially) describe the local shape of at using its first derivative there. Notice that the first derivative really doesn't provide enough information to reconstruct our function about some point (in particular it forgets "vertical shifting"), but this is at least some of the information that describes our about each point.
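Here's a quick plotting sketch of this simplified picture (a toy illustration of my own: I'm using sin as the particular function and its first derivative as the stand-in for its germ at each point):

```python
# Toy visualization: stand in for the germ of s at x by the single number s'(x).
# (A real germ carries far more information than this.)
import numpy as np
import matplotlib.pyplot as plt

s = np.sin                                # a particular continuous (even smooth) function on U
s_slope = np.cos                          # its derivative: our crude "local shape" at each point
U = np.linspace(0.0, 3.0, 300)[1:-1]      # sample points of the open interval U = (0, 3)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(U, s(U), color="red")
ax1.set_title("a section s over the open set U")
ax2.plot(U, s_slope(U), color="blue")
ax2.set_title("x -> slope of s at x  (crude stand-in for the germ of s at x)")
plt.tight_layout()
plt.show()
```

The blue curve is then (a very lossy picture of) the image of the germ-assigning map, i.e. one of the proposed basic open sets.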
This is not what I expected an open set of to look like! My picture might be too inaccurate and approximate for it to give good intuition, but maybe not! It seems interesting.
Now, let's consider our in this example (which is the function we wish to show is continuous). We need to be open in for each open subset of . Let's take to be the open subset of indicated in red above. Then a point in maps to some if it is a germ associated to . In our picture, this will correspond to the points hovering above (and below) .
Here's a picture of , which is a subset of :
preimage of an open set
is indicated by the red line segments, and is indicated by the shaded light red regions. Intuitively, this preimage is the disjoint union of all the "local behaviours/shapes" possible for continuous real-valued functions (as provided by ) at each point.
If is to be continuous, this needs to be open. With our proposed topology, that means it needs to be the union of the images of some "germ assigning" functions (one of which we visualized in blue in a drawing above). I guess that means that for any germ at some , there needs to be at least one function which belongs to that germ, so that its behaviour about is described by . If that's right, the continuity of might correspond roughly to the idea that "every possible local behaviour at occurs for at least one element of a sheaf set , for some containing ".
John Baez said:
2) Here's an example of how it helps: in the process of thinking about this picture, I'm instantly led to remember that continuity can be studied locally. A function is continuous iff it is continuous at each point in its domain, and this in turn is true iff of every open set contained in some neighborhood of is open. We discussed the first fact earlier here somewhere, but I forget if we discussed the second fact. It comes to mind now because we're trying to show is continuous and our picture of the open sets of is a local one.
I don't think we directly discussed the second fact, although I may just be forgetting. Next time, I'll plan to start by proving that fact! Then I'll try to connect that fact to the pictures I've drawn above.
Although, thinking it over a bit, I think I might have an idea of how to solve the puzzle already... I guess I'll see what I feel like trying out tomorrow!
David Egolf said:
This is not what I expected an open set of to look like! My picture might be too inaccurate and approximate for it to give good intuition, but maybe not! It seems interesting.
It seems like a pretty good picture to me. It's really important to realize that for many familiar sheaves on , like the sheaf of smooth functions, the corresponding space of germs is not very easy to draw or visualize. And I think the best way to realize this is to try to draw it. You've drawn a kind of 'approximation' to it - and by thinking about the information your drawing leaves out, you're starting to get a sense for how peculiar this space is!
One thing that's strange about this space is that is not 'Hausdorff'. This means you can find two different germs that can't be separated by open sets: i.e., you can't find disjoint open sets with . That germ at zero of the weird function @Peva Blanchard described cannot be separated by open sets from the germ at zero of the constant function 0. That's because these functions are equal at all points slightly left of zero. (For a proof of the similar fact about continuous functions see this).
Well, I'm probably getting ahead of myself here, so I should stop. But my main point is, you're doing a fine job of attempting to draw a space that's impossible to draw in a fully accurate way... and I've found such attempts very useful!
This is really nice. Thanks to your detailed exposition David, I corrected a very wrong picture I had.
Indeed, I thought that the topology on was the coarsest topology making the projection continuous. This means that any open set in is a union of sets of the form for every open subset in .
But this topology is not enough (it is too coarse), because we also want to think about a section of as a continuous function .
Now, maybe I can share the mental picture I have now about the required topology on . (Hopefully, it is correct). I find it easier to deal with neighborhoods instead of open sets. Given a germ at , what does it mean for another germ at a different base point to be "in the neighborhood of "? We can answer that question by providing a witness of the fact that they are close to each other. Such a witness is a pair where is an open set containing both and , and is a section such that
I picture this witness as providing a connecting path between and . With this picture, we see that for every , the germ of at is in the neighborhood of . In other words, a pair with and encodes a specific neighborhood of .
To continue with this picture, we can interpret the separation of two points and in (the "Hausdorff" property as explained by @John Baez )
These points are separated if we can find two disjoint neighborhoods of and respectively. Informally, this means that we have a neighborhood of , and a neighborhood of , and such that and never "agree" over .
For instance, let's consider the germ of the funny function at , and the germ of the constant zero function at . In that case,
By the way, does it mean that the topology of is Hausdorff when is the sheaf of analytic functions? (I'll think about it, no need to answer right away)
Thanks to both of you for your interesting comments! Thinking about whether is Hausdorff is interesting. (Side note: somehow the open sets on remind me of the closed sets in the Zariski topology...which I seem to recall is not (usually?) Hausdorff either.)
I'm going to take a break from this thread today, to rest up, but I hope to get back to it tomorrow!
Peva Blanchard said:
By the way, does it mean that the topology of is Hausdorff when is the sheaf of analytic functions? (I'll think about it, no need to answer right away)
I think the answer is yes. I wasn't sure if it would digress too much, so I opened another topic.
By the way, something clicked for me about "evaluating a function at some point".
Because of my -based math education, I am used to thinking about a function as being a graph, i.e., the set of pairs with . In that case, the evaluation of at is just picking out the second component of this pair, namely, the value .
But, with our previous discussion, it turns out this evaluation procedure is actually very narrow. When we deal with continuous map , another evaluation procedure is given by "taking the germ of at ". The mental picture I have in mind, is a sequence of open neighborhoods of that converges towards , and over which we take the restrictions of . This is like distilling to get the most concentrated information about at .
This reminds me of the way we define a distribution as the dual of a space of test functions. Formally, a distribution is a linear map . I picture a test function as some kind of smooth bump around a point somewhere, so that the value sums up the "behavior of around that point". In a way, test functions play the same role as the open subsets of in the previous paragraph, and the map is analogous to the restriction map . We can evaluate a distribution at a point by considering a sequence of test functions that "converge towards ", and taking the limit of the 's.
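Here's a small numerical illustration of that last picture (a toy of my own; the particular function, the Gaussian bumps, and all the names are just choices for this sketch): pairing a function with narrower and narrower bumps centred at 0 recovers the value of the function at 0, which is the "limit of test-function evaluations" idea.

```python
# As the test bumps phi_eps concentrate at 0, the pairing <f, phi_eps> tends to f(0).
import numpy as np

def f(x):
    return np.cos(x) + 0.5 * x            # the "distribution" here is just integration against f

def bump(x, eps):
    """A normalized Gaussian bump of width eps centred at 0 (a stand-in for a test function)."""
    return np.exp(-(x / eps) ** 2 / 2) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
for eps in [0.5, 0.1, 0.02, 0.004]:
    pairing = np.sum(f(x) * bump(x, eps)) * dx    # simple Riemann-sum approximation
    print(f"eps = {eps:<6}  <f, phi_eps> ~ {pairing:.6f}")
print("f(0) =", f(0.0))
```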
Wow, there is a lot of interesting stuff to catch up on here :sweat_smile:! Today, I'll try to understand what you both are saying regarding the fact that is not Hausdorff, for our sheaf of continuous real-valued functions on open subsets of .
To show that is not Hausdorff, we need to find two points (germs) so that there are no open sets and with , and . In other words, for any open set containing and any open set containing , is always non-empty.
Right! And you can find an example of this! It's a lot easier to find one for the sheaf of continuous functions than with smooth functions, where a sneaky example Peva described comes to our aid.
This is feeling tricky for me today. But, referencing this page, I think we want to consider this situation:
We have this situation with , , the zero function, and the function that is zero for and for . and have different germs at zero, but any open set that contains also contains some negative numbers with some "breathing space" around them. So, we can pick some negative number in : then and must have the same germ at . (That is because they both restrict to the zero function for sufficiently small open intervals about a negative number ).
So, we can form two sequences of open sets in , by taking and as becomes a smaller and smaller open neighborhood containing . We can form a sequence such that is some negative number present in both and . Applying and to this sequence gives us two sequences and . But these two sequences are actually equal, because and have the same germs at any negative point .
Now, intuitively, the sequence should converge to and the sequence should converge to . We just noted that both of these sequences are equal... but our proposed limits of them are different (as )! So it seems like we might have a situation where limits aren't unique, which I think would relate to being non-Hausdorff, referencing this page.
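Here's a quick numerical sanity check of this (just a toy script of my own; I'm taking the two functions to be f(x) = 0 and g(x) = max(x, 0), which matches the description above up to my choice of the positive part): on small intervals around a negative point the two functions agree everywhere, while on every interval around 0, however small, they disagree somewhere.

```python
# f and g agree near any negative point, but differ on every neighbourhood of 0.
import numpy as np

def f(x):
    return np.zeros_like(x)

def g(x):
    return np.maximum(x, 0.0)   # 0 for x <= 0, x for x >= 0

def agree_on_interval(a, b, n=1001):
    """Check (by sampling) whether f and g coincide on the open interval (a, b)."""
    x = np.linspace(a, b, n)[1:-1]
    return bool(np.all(f(x) == g(x)))

x0 = -0.3   # a negative base point
for r in [0.1, 0.01, 0.001]:
    print(f"agree on ({x0 - r:+.3f}, {x0 + r:+.3f})?", agree_on_interval(x0 - r, x0 + r))
    print(f"agree on ({-r:+.3f}, {+r:+.3f})?        ", agree_on_interval(-r, r))
```

So f and g have the same germ at every negative point, but different germs at 0.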
There's probably a simpler way to do this, explained here. I'll look at that next.
Following the proof here, let us assume that we have this situation again:
We will aim to show that and are points of that can't be separated by open sets. That is, there are no open subsets and of with and with .
To obtain a contradiction, let us assume that is Hausdorff, so that there are such disjoint open sets and . By definition of the topology on , for some and and similarly for some and .
Since , for some particular , where . Since germs can only be equal if they are associated to the same point, this implies that . Similarly, for some particular , where , so that .
Since and are assumed disjoint, that means that and are also disjoint. Thus, for any , .
Since and have the same germ at , there is some open subset containing where these two functions restrict to the same function. Similarly, there is some open subset containing where and restrict to the same function. Taking the intersection of these two open sets, we get an open set containing where for each point we have and . Now, we know that for any . Therefore, for any .
However, we know by assumption that there is always some point in any open subset containing where and have the same germ. Thus, we have obtained a contradiction. We got this contradiction by assuming that was Hausdorff. We conclude that must not be Hausdorff!
Whew! Hopefully I did that correctly. There is still a lot more of catching up for me to do in this thread, but I'll stop here for today.
I think the proof is correct. The proposition can be used to present concrete examples of continuous functions , simpler than and , that cannot be separated by open sets in (the sheaf of continuous functions).
spoiler
By the way, @John Baez gave a very neat puzzle on the other thread. (spoiler alert, I gave a proof there).
Here it is.
Suppose is sheaf on a topological space . Then is Hausdorff if and only if for every open and every , the set of points for which the germ of equals the germ of is closed in .
The next thing I'm hoping to do in this thread is to prove this:
John Baez said:
A function is continuous iff it is continuous at each point in its domain, and this in turn is true iff of every open set contained in some neighborhood of is open.
Once I've done that, I'm hoping to try and solve the current puzzle!
Unfortunately, I don't have the energy in the tank to work on this today. Once I have energy, I hope to return to this thread and work on what I just described.
Before I start on this topology exercise, I wanted to mention that I rather like @Peva Blanchard's mental picture regarding the topology on . I think the basic idea is this: two germs are in the same open set if they are germs of the function for some points . So, in a way, this particular continuous function provides a "bridge" that lets us "connect" two germs, in the sense that its set of germs is an open set containing both and .
This gets me wondering if we can define a category using this intuition. Let the objects of be the germs of , the elements of . And let us put a morphism from to if and are both germs of at some points in . We'll also want to put a morphism from to in this case, because the condition we are checking is symmetric in and .
To make a category from this, we'd need to define composition. I'm not immediately sure if there's a nice way to do this... and I don't want to get too sidetracked, so I'll stop here.
Alright, on to this topology exercise:
John Baez said:
A function is continuous iff it is continuous at each point in its domain, and this in turn is true iff of every open set contained in some neighborhood of is open.
We've already seen above that a "function is continuous iff it is continuous at each point in its domain". We want to show that these conditions are equivalent to the condition that of every open set contained in some neighborhood of is open.
These kinds of statements still intimidate me a bit, so I'll try to draw a picture to illustrate what we're trying to prove.
Here's my picture:
picture
Here, is an open set containing , and is an open set with . I could have alternatively drawn so that it includes , but since that isn't required I chose not to.
EDIT: I need to update the picture... the result to be shown is slightly different than what I listed above.
Sorry, I left out a condition!!! I meant to require .
So, the result to be shown is "f is continuous at if for some neighborhood of , the inverse image of every open subset containing is an open set containing ".
Compare this to the definition of "continuous at ": is continuous at iff the inverse image of every open set containing contains an open set containing of .
So the difference is saying it's enough to look at open sets containing that "aren't too big".
Intuitively this makes sense, since we're talking continuity "at ". This should only depend on what's going on near , and near .
John Baez said:
So the difference is saying it's enough to look at open sets containing that "aren't too big".
Oh, I like that! That does help make it more intuitive. I'll draw a new picture now, and I'm hopeful that this intuition will be reflected in that picture as well.
Here's a picture illustrating the condition "for some neighborhood of , the inverse image of every open subset containing is an open set containing ":
picture
Here, and are open sets, as is .
We'd like to show that if satisfies this condition in the picture, then is continuous at . That is, we'd like to show there is some open set containing such that is continuous. My first guess was to try and set . The problem with this is that isn't necessarily open.
Oh, wait, yes does have to be open! That's because is an open set containing , and so is an open set containing !
Alright, so let's set and try to show that is continuous. We have that , where and both have the subspace topology. Let's consider some open subset of . We'd like to show that is open. Now, if , we know that is open and hence is open in .
It remains to consider the case where is an open subset of that doesn't contain . I don't immediately see how to show that is still an open subset of .
Well, I think I'm stuck here for the moment, but at least some progress was made. I'll stop here for today!
David Egolf said:
It remains to consider the case where is an open subset of that doesn't contain . I don't immediately see how to show that is still an open subset of .
You mean "doesn't contain ", not "doesn't contain ". But more importantly....
I don't think this case matters. Only stuff around can possibly matter. Today I accidentally wrote down a bogus definition of "continuous at ", but then I fixed it. Here's the fixed version:
John Baez said:
the definition of "continuous at ": is continuous at iff the inverse image of every open set containing contains an open set containing of .
So note, we're not demanding that is open, which would be too much since parts of might be very far from . We're just demanding that contain an open neighborhood of .
I was working from this definition of "continuous at ": is continuous at exactly if there is some open set containing such that is continuous.
Maybe next time I'll try to show that the definition I was using is equivalent to the definition which you provided:
John Baez said:
the definition of "continuous at ": is continuous at iff the inverse image of every open set containing contains an open set containing of .
David Egolf said:
I was working from this definition of "continuous at ": is continuous at exactly if there is some open set containing such that is continuous.
Okay, that's a fine definition.
It should be equivalent to the one I gave, but I don't mean to be overwhelming you with the task of showing lots of definitions are equivalent!
I don't think these two conditions are equivalent. David's is stronger. It means is continuous in a neighbourhood of .
I wondered if Schechter's "Handbook of Analysis and Its Foundations" talked about this. On page 417, it defines a function to be continuous at the point if this condition is satisfied: the inverse image of each neighborhood of is a neighborhood of . This reminds me of the definition that @John Baez gave above. It should be noted that Schechter uses the term "neighborhood" in a way he defines on page 110: is a neighborhood of a point if for some open set .
Schechter also touches on a condition similar to one I described above, saying on page 418 that a mapping is continuous iff is "locally continuous" in the sense that each point in has a neighborhood such that is continuous. He doesn't use the phrase "continuous at a point" in this context.
Schechter also says (on page 417) that the following two conditions are equivalent for a function between two topological spaces:
Although this is all somewhat tangential to sheaves, I am pleased that - I think - I am slowly starting to get some of this topology stuff straight! :sweat_smile:
At this point, I might just assume that everything Schechter says here is true, to better focus on the main topic of this thread. Namely, by assuming the things I just listed above are true, I'd like to see if I can then prove that is continuous.
Sure! Schechter is organizing these things better than I am, by the way. I hadn't realized how many subtly different ways there are to say "continuous at a point", all of which are equivalent. Apparently I just make one up each time I need this concept.
Alright, let's again consider our projection function which sends each germ to the point it is associated with. To show is continuous, we have a few different equivalent conditions available to us now. If we can prove any of these conditions are true for , is continuous:
Schechter also provides several more equivalent conditions for the continuity of a function, but hopefully one of the conditions I've listed will be helpful for solving this puzzle.
I'm going to try using condition (2), because it's least familiar to me and I'm curious about it. :laughing:
So, let's consider some point . This is a germ associated to the point , consisting of an equivalence class of real-valued continuous functions which are each defined on some open set containing . (Recall that two such functions are equivalent exactly if they agree on some open set containing ). In particular, we're considering the equivalence class of some continuous function with .
Now, let us introduce a neighbourhood of . This is a subset of containing an open set so that . We wish to show that is a neighborhood of .
By definition, . That is, this preimage consists exactly of all the germs associated to points in . Since , we do have . It remains to show that we can find some open subset of which contains .
We've already got , and we're looking to build up an open set in about consisting only of germs associated to points in . To build this open set, we need to find some points in that are "near" to . By definition of the topology of , we know that is an open subset of . I think we can use this to get some germs "nearby" that also sit in .
To do this, let's restrict (with ). We know that is open and contains . Hence is an open set containing . Then, and is an open set of . Since is a subset of , is an open subset of containing .
I think we have found an open set containing that is a subset of ! That is, I think we've shown that is a neighbourhood of if is a neighbourhood of . Thus, is continuous at any , and hence it is continuous!
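In case it helps, here is the key step of that argument compressed into one line, in shorthand of my own choosing (so none of the symbols below are taken from the blog post):

```latex
% Shorthand: [f]_x = the germ of f at x;  \hat{g}(W) = the set of germs of g at the points of W.
% Setup: f is continuous on the open set U, x \in U, N is a neighbourhood of x in X,
%        and V is an open set with x \in V \subseteq N.
[f]_x \;\in\; \widehat{f|_{U \cap V}}\,(U \cap V)
      \;\subseteq\; p^{-1}(U \cap V)
      \;\subseteq\; p^{-1}(N),
\qquad \widehat{f|_{U \cap V}}\,(U \cap V) \text{ open in the space of germs.}
```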
I'm a bit confused because I thought you were solving this puzzle, which is not about -valued functions, but rather an arbitrary sheaf on a topological space :
Now that we have the space of germs for each point we define
There is then a unique function
sending everybody in to So we've almost gotten our bundle over We just need to put a topology on
We do this as follows. We'll give a basis for the topology, by describing a bunch of open neighborhoods of each point in Remember, any point in is a germ. More specifically, any point is in some set so it's the germ at of some where is an open neighborhood of But this has lots of other germs, too, namely its germs at all points We take this collection of all these germs to be an open neighborhood of our point A general open set in will then be an arbitrary union of sets like this.
Puzzle. Show that with this topology on the map is continuous.
Are you doing the special case where is the sheaf of continuous -valued functions on a topological space ?
John Baez said:
Are you doing the special case where is the sheaf of continuous -valued functions on a topological space ?
Yes, I was doing that special case. But I think I'll plan to next give this a try for an arbitrary sheaf on a topological space ! I am hoping that the pattern of the argument will be similar.
I think it should be almost identical! Working with a sheaf of continuous functions makes things easier to visualize, so it's a good test case.
That's encouraging to hear!
Let be a sheaf on a topological space . Then we wish to show that our map is continuous. (Recall that sends each germ in to ). Following the argument above - which was carried out for a special case - let's consider some point , which is a germ associated to the point . This is the germ in that some sheaf element belongs to, where .
Now, let us introduce a neighbourhood of . This is a subset of containing an open set so that . We wish to show that is a neighbourhood of .
By definition consists exactly of all the germs associated to points in . Since , we do have . It remains to show that we can find some open (in ) subset of which contains .
We've already got , and we're looking to build up an open set in about consisting only of germs associated to points in . To build this open set, we need to find some points in that are "near" to .
For our , let denote the set of all germs of over the various points of . By definition of the topology of , this is an open set. And we know that contains , as .
Now, from this, we wish to construct an open set of that contains and is a subset of .
To do this, we will "restrict" to the open set which contains . Since we are working in the general case, this restriction is more abstract than just restricting the domain of a function. However, since is a presheaf, we have a restriction function available to us, so our restriction of is simply . By definition of the topology of , is an open set of .
It remains to show that (1) is a subset of and (2) contains . To show (1), note that this set consists only of germs associated to over the points of . Since , all of these germs belong to points in , and so is a subset of .
To show (2), we note that restricting a presheaf element does not change its germ at a point. This is because each germ set is the tip of a cocone for the diagram consisting of the various with varying over the open sets of containing , together with the restriction functions between them. So, . Hence .
We conclude that is an open set of containing , and that it is also a subset of . Thus, if is a neighbourhood of , then is a neighborhood of .
So, is continuous at any point . Hence, is continuous!
I don't think I used the fact that was a sheaf anywhere, although I did make use of the fact that was a presheaf. So, I think the same result should hold for an arbitrary presheaf over some topological space .
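Just to double-check my understanding, I also tried coding up a tiny finite example (entirely a toy of my own; the space, the presheaf, and all the names in the script are made up for this sketch). I take X = {0, 1} with open sets {}, {0}, {0, 1}, and the presheaf F(U) = all functions U -> {0, 1}. Because the space is finite, every point has a smallest open neighbourhood, so a germ at x can be represented by a section on that smallest neighbourhood. The script then checks that the preimage under the projection of every open set of X is open in the germ-space topology.

```python
# Toy finite check that the projection from the space of germs down to X is continuous.
# X = {0, 1} with open sets {}, {0}, {0, 1};  F(U) = all functions U -> {0, 1}.
from itertools import product

opens = [frozenset(), frozenset({0}), frozenset({0, 1})]

def sections(U):
    """F(U): every function U -> {0, 1}, encoded as a frozenset of (point, value) pairs."""
    pts = sorted(U)
    return [frozenset(zip(pts, vals)) for vals in product([0, 1], repeat=len(pts))]

def restrict(s, V):
    """The restriction map F(U) -> F(V), for V an open subset of U."""
    return frozenset((x, v) for (x, v) in s if x in V)

def min_open(x):
    """The smallest open set containing x (this exists because X is finite)."""
    return min((U for U in opens if x in U), key=len)

def germ(s, x):
    """The germ of s at x: here, just s restricted to the smallest open set around x."""
    return (x, restrict(s, min_open(x)))

def project(g):
    """The projection, sending a germ to the point it is associated with."""
    return g[0]

# Basic open sets of the germ space: for each nonempty open U and each s in F(U),
# the set of germs of s at the points of U.
basic_opens = [frozenset(germ(s, x) for x in U) for U in opens if U for s in sections(U)]
all_germs = frozenset(g for B in basic_opens for g in B)

def is_open(S):
    """S is open iff every germ in S has a basic open neighbourhood contained in S."""
    return all(any(g in B and B <= S for B in basic_opens) for g in S)

for V in opens:
    preimage = frozenset(g for g in all_germs if project(g) in V)
    print("preimage of", set(V), "open in the germ space?", is_open(preimage))
```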
Assuming the above is correct, we have now shown that we can make a bundle from a presheaf on ! The next puzzle asks us to upgrade this process to get a functor . Here is the category of presheaves on and is the category of bundles over .
To define our functor , as a first step we'll need to show how to get a morphism of a bundles from a morphism of presheaves. (I'll stop here for today!)
Great! All this looks good, and I especially like how you "psychoanalyzed" your proof and noticed that it works for presheaves. I probably should have posed the puzzle for presheaves.
While I forget exactly what I did in the course notes, I imagine soon we'll do something like this:
1) get a functor from presheaves on to bundles over , sending each presheaf to its bundle of germs
2) get a functor from bundles on to sheaves over , sending each bundle to its sheaf of sections
3) compose these functors to get a functor from presheaves to sheaves, called sheafification
4) show that sheafification is left adjoint to the obvious forgetful functor from sheaves on to presheaves on .
What a nice thread. I really like how the calm pace of this discussion leads to upbeat non-trivial concepts like sheafification.
"Sheafification" is such a fun word... it reminds me of another fun word I learned recently: "rectangulation"! Peeking ahead in the blog post, I suppose we're probably going to have an "ètalification" functor too, given by composing our functors in the opposite order (so we get a functor that converts each bundle to an étale bundle).
Anyways, to get there, we first need to show we really do have a functor which converts presheaves and presheaf morphisms to bundles and bundle morphisms. (This functor is called in the blog post, but I'll call it for the moment, to avoid confusion due to the fact that means the space of all the germs of ).
We just saw that we can get a bundle from a presheaf on a topological space by forming the bundle of germs , which sends each germ to the point it belongs to. Now, let's assume we have a morphism of presheaves on , namely . We wish to construct a morphism of bundles from to . That means we're looking for a continuous map so that . Strictly speaking, has source of and target of , but I'll use the same symbol to refer to its underlying continuous map from to . Hopefully this won't be too confusing!
Here's the situation in picture form:
induced morphism of induced bundles
We wish to find some so that this diagram commutes.
For this diagram to commute, we must have that maps germs of associated to to germs of associated to . So, we can consider the function as being formed from multiple functions, one for each . I'll call the -th function , where is the set of germs of associated to .
I'm not sure how to define . But maybe we can start by considering as becomes a smaller and smaller open set that contains . Intuitively, I'd like to set to be some kind of "limit" of as approaches .
I'm wondering if there is some way to define as some colimit, analogous to how is (part of) a colimit. This is the picture I've been staring at:
picture
Maybe we could try to define as the (hopefully unique) function that makes this diagram commute? I've only drawn part of the full diagram I have in mind; we should have and present in the full diagram as varies over all open sets containing .
Trying to draw the full picture, I thought of a diagram involving functors and natural transformations:
picture 2
In this picture, the functors map to from the full subcategory of given by taking only the open sets containing . is the inclusion functor from this full subcategory. The natural transformations pointing down the page correspond to our colimit co-cones. Finally, is the functor constant at the set , and is defined similarly.
The idea is that could (hopefully) be defined in terms of the (hopefully) unique natural transformation making this diagram commute.
I'm not sure if this is a good direction to explore... I'll stop here for today. Any hints or thoughts relating to or would be most welcome!
Trying to define as the "colimit of the 's" reveals a good mental picture, but maybe too involved for a formal proof.
Instead, I suggest to look at how acts on a specific germ at , e.g., choosing a representative for some open neighborhood of . How would you define the germ ?
David Egolf said:
"Sheafification" is such a fun word... it reminds me of another fun word I learned recently: "rectangulation"! Peeking ahead in the blog post, I suppose we're probably going to have an "ètalification" functor too, given by composing our functors in the opposite order (so we get a functor that converts each bundle to an étale bundle).
I got interested into that. I think I found a proof that we have an adjunction . If true, this implies that sheafification is a monad, while étalification is a comonad.
(@Eric M Downes It took me a few seconds to understand the emoji "Grothenwoke" :D)
His eyes shoot stalks!
Peva Blanchard said:
Trying to define as the "colimit of the 's" reveals a good mental picture, but maybe too involved for a formal proof.
Instead, I suggest to look at how acts on a specific germ at , e.g., choosing a representative for some open neighborhood of . How would you define the germ ?
I just realized that where I left off above and your hint here are (I think) quite related. If the diagram I drew above is to commute, then it must commute in particular at each component of the natural transformations involved. Requiring commutativity at the -th component, we then want this diagram to commute:
diagram at U
If this is to commute, then it must in particular commute at each element. So, pick some . (Note that automatically, by definition of the subcategory is mapping from). Then we need . There is still some work left to define from this, but this feels like progress.
(In general, I wonder if this kind of thing can provide an interesting strategy for trying to induce a map between two colimits of different diagrams).
Yes that's right. I think you can already define point-wise.
You can use the formula you inferred from the naturality condition: for any germ of the form , with and an open neighborhood of ,
But, first, you need to prove that this is well-defined, i.e., that this definition is invariant when we choose another representative for some other over another neighborhood of .
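In symbols (my own notation, not necessarily the blog post's: I'll write the two presheaves as F and G, the components of the presheaf morphism as alpha_U : F(U) -> G(U), and the germ of s at x as [s]_x), the proposed definition and the well-definedness requirement would read:

```latex
(\Lambda\alpha)\bigl([s]_x\bigr) \;=\; \bigl[\alpha_U(s)\bigr]_x
    \qquad \text{for } s \in F(U),\ x \in U,
\\[6pt]
\text{well-definedness: } \quad
[s]_x = [t]_x \ \text{ with } s \in F(U),\ t \in F(V)
    \;\Longrightarrow\;
    \bigl[\alpha_U(s)\bigr]_x \;=\; \bigl[\alpha_V(t)\bigr]_x .
```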
Yes, I would be inclined to define the map between etale spaces over a space coming from a map between presheaves on by saying what it does to each germ. I'd do that by treating a germ as an equivalence class of sections, then doing the standard trick of choosing a representative of that equivalence class, writing down some formula that parses, and then checking that the answer doesn't depend on the representative.
I consider all of this "follow your nose" mathematics: writing down the only guess you can easily think of given the data available, then checking it works. I never considered David's more thoughtful approach of working explicitly with colimit diagrams. Probably it's because I consider that more "bulky", and harder to do calculations with. So while I applaud David's approach in spirit I would unthinkingly have taken Peva's approach, and I think in practice that's the easier one.
I'm all for "following my nose"... the only problem with the nose-following approach is that sometimes my nose doesn't know the right way to go :sweat_smile:. But I suppose that mostly comes with experience.
By the way, after considering this diagram some more...
diagram
...I realized that composing the morphisms along the top and right-hand side of this diagram gives us a natural transformation from to a functor that is constant at a particular object. That is, these morphisms compose to give us a co-cone under . Then the unique existence of follows by the fact that our set of germs of at is the tip of a colimit cocone (and hence is initial among cocones of )!
I still want to check "by hand" that setting is "well-defined". (Although I suspect it must be, at this point, in light of the paragraph immediately above this one).
We need to check that if for some for , then .
I want to use the fact that restricting a sheaf element doesn't change the germ it belongs to. So, I'm aiming to show that and restrict to the same thing on some open set containing .
Now, we know that , with and . I think we proved a while ago that this implies there is some open set containing so that .
I'll use this fact, together with the naturality of , referencing this diagram:
diagram
We start with and . was defined so that we have . Consequently, . Since the left and right "trapezoids" of our diagram commute (because is a natural transformation), we have that and similarly .
Putting this all together, we find that . Thus, .
Since and restrict to the same thing on (which is an open set containing ), and since two sheaf elements have the same germ at if they restrict to the same thing in some open set containing , we conclude that .
So, if for some with , we have that as desired. We conclude that setting actually defines a function!
It still remains to show that is continuous. But I will leave that for another day!
This is a bit tangential, but the above has helped me realize how we can take the "limit" or "colimit" of a natural transformation between two diagrams with limits or colimits! This seems pretty cool because it lets us "condense" the data of a natural transformation (which could consist of many morphisms) to a single morphism.
Here's a picture illustrating how this works for limits:
picture
Here and are diagrams of the same shape and is a natural transformation. and correspond to the limit cones over and . Then composing gives us a cone over . Since is the terminal such cone, there is a unique morphism which induces a natural transformation so that the diagram commutes.
For example, in a category with products, I expect that the "product" of two morphisms is a , in the case where and are discrete diagrams with two objects. In this case, the data of a natural transformation corresponds to two morphisms and .
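For instance, spelling out the binary product case in my own notation: given two morphisms f : A -> A' and g : B -> B', viewed as a natural transformation between two discrete two-object diagrams, the induced morphism between the limits is the usual map f x g, characterized by the commuting projection squares:

```latex
f \times g \;:\; A \times B \;\longrightarrow\; A' \times B',
\qquad
\pi_{A'} \circ (f \times g) \;=\; f \circ \pi_A,
\qquad
\pi_{B'} \circ (f \times g) \;=\; g \circ \pi_B .
```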
Yes, I've been thinking about "condensing a natural transformation" too, and your "colimit of natural transformations" picture.
I will probably open another topic to discuss the details, but here is an overview.
There is a fact about presheaves on a topological space (or, more generally, on any category): they form a closed category.
This means that if you have two presheaves on , there is another presheaf . Intuitively, the presheaf represents (in the category of presheaves on ) all the natural transformations from to .
Then, you can look at the associated bundle over . And there are interesting things.
For instance, the function you were looking for a few messages above would correspond to a point of this bundle. And the to a section of this bundle (a global section, i.e., over the entire space ).
Actually, there is a correspondence between natural transformations from to and global sections of the bundle .
Actually, any natural transformation from to yields a global section of the bundle .
David Egolf said:
I'm all for "following my nose"... the only problem with the nose-following approach is that sometimes my nose doesn't know the right way to go :sweat_smile:. But I suppose that mostly comes with experience.
That's true. But now that you've had an experience, I hope you see that this strategy counts as following your nose, each step following naturally from the one before:
I would count this as sufficient, though there are certainly details one can unpack here, which you unpacked in your much more careful argument here.
Given a morphism of presheaves , where each presheaf is a presheaf on a topological space , we were able to define a function from to , which sends each germ of at a point to a germ of at that same point. Namely, we got the function which acts by , where and .
It remains to show that is continuous.
At this point, part of me wishes we had defined the topology on (and on ) in a different but equivalent way, in terms of some universal property. I am guessing that doing that might help make it clearer why needs to be continuous.
Before thinking about that, let me see how far I can get while working with the definition we've used so far. I will try to show that is continuous at an arbitrary point , where and . Let be a neighborhood of . This is a subset of containing an open set so that . We wish to show that is a neighbourhood of .
To show that is a neighbourhood of it suffices to find some open set that contains and is a subset of .
To get further, I want to find an open set containing using . Since and , we have that . Hence, the set of all the germs of over forms an open set of . And since , .
We've just seen that is an open subset of containing . Next, I want to create an open set from this one, aiming to obtain a subset of our neighbourhood . Since is a neighbourhood of , it contains an open set that contains . Consequently, is an open set containing that is a subset of .
This set is some open set that is a subset of . Hence it is the union of sets of the form , by definition of the topology of . Each of these sets, being a subset of , contains only germs belonging to over some subset of . So, each of these open sets is really of the form for an open subset of .
Since this set contains , there is some open containing such that . This is an open set of containing , that is also a subset of . Hence, is a subset of containing . If we can show that is open, then I think we will have shown that is continuous at .
is a bit of a mouthful, but I'm hoping working with it won't be too bad. I'll stop here for today though!
Hmm, something seems 'heavy' about this discussion so far. Let me see if I can lighten it a bit. I'll follow my nose for a little while and see where it leads, but I won't go too far.
We're trying to show that is continuous. So we should think about how we defined the topology on . In my course notes I said something like this:
We'll give a basis for the topology, by describing a bunch of open neighborhoods of each point in Remember, any point in is a germ. More specifically any point of is in some stalk so it's the germ at of some where is an open neighborhood of But this has lots of other germs, too, namely its germs at all points We take this collection of all these germs to be an open neighborhood of our point A general open set in will then be an arbitrary union of sets like this.
The description of the topology must determine the strategy for how we'll show is continuous. Since inverse images automatically preserve unions, we don't need to check that the inverse image of a general open set under is open. It's enough to check it for the open neighborhoods of the form described above. So let's make up some convenient notation for them.
We've already decided to call any point in something like where and , where is any open neighborhood of .
Above I described a basis of open neighborhoods of , which are sets like this:
The vertical bar means "such that". I used to use a colon to mean "such that", but I decided that was confusing.
This is an efficient notation for our basis of open neighborhoods, so we should be able to do computations with it fairly painlessly.
Similarly, any point in will be an equivalence class like where and , where is any open neighborhood of . And we get a basis of open neighborhoods of that are sets like this:
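For reference, in one common notation (my own choice of symbols: the two presheaves written F and G, the germ of s at y written [s]_y), the basic neighbourhoods being described here are sets of the form:

```latex
\hat{s}(U) \;=\; \{\, [s]_y \;\mid\; y \in U \,\}
    \qquad \text{for } s \in F(U), \text{ a basic neighbourhood of each } [s]_x \text{ with } x \in U,
\\[6pt]
\hat{t}(U) \;=\; \{\, [t]_y \;\mid\; y \in U \,\}
    \qquad \text{for } t \in G(U), \text{ a basic neighbourhood of each } [t]_x \text{ with } x \in U.
```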
Since we understand the topology on and in terms of a basis of open neighborhoods, to show is continuous we should check continuity at a point for every point .
We saw maps to . So to check that is continuous at the point , in principle we need to check that the inverse image of any open neighborhood of is an open neighborhood of .
But since I'm a master of topology, I know it's enough to check that the inverse image of one of our "basis" open neighborhoods contains a "basis" open neighborhood. I realize now that this may be a potential stumbling block for you, @David Egolf - it's a trick one learns in a topology class.
We've found a slick notation for these "basis" open neighborhoods. So let's write down one of these open neighborhoods of . It will look like this:
Now let's figure out its inverse image and see if it contains an open neighborhood of !
Well, I had better stop here... I may have already done too much, but I wanted to reach what I called the "potential stumbling block".
Wow, thanks! It will take me some time to work through what you just said, but it looks to be quite helpful! I didn't expect topology to come up quite so often in these blog posts :sweat_smile:. But it's good - I'm happy to be learning about these practical topology strategies!
But since I'm a master of topology, I know it's enough to check that the inverse image of one of our "basis" open neighborhoods contains a "basis" open neighborhood.
John knows this of course, but this should be changed to read: every element in the inverse image of a basic open has a basic open neighborhood contained in the inverse image.
Here's what I was trying to say. I was using the concept of "basis of open neighborhoods", which this link calls simply a "neighborhood basis":
(Check the link for the definition, David.) The reason is that in my lecture notes I described the topology on etale spaces in terms of a neighborhood basis, while avoiding the use of the jargon "neighborhood basis".
Here's how I was using it:
Say quite generally that we have a function between topological spaces and , and we're trying to show is continuous at some point . Say we know a neighborhood basis for every point and every point in . Then to show is continuous at , it's enough to check that for every in the neighborhood basis of , the inverse image contains a set in the neighborhood basis of .
By now it seems like I've slipped into acting like David understands the concept of "neighborhood basis", which is unreasonable. I guess I'm being a bad teacher and describing how I'd solve a homework problem by following my nose, without remembering just how long my nose has grown over the years, and how long this has taken.
If I'd left him alone David would have solved the problem in his own way. I may have made the bad teacher's mistake of saying "oh, why don't you just do this?", where "this" is some trick only known to the teacher.
John Baez said:
If I'd left him alone David would have solved the problem in his own way. I may have made the bad teacher's mistake of saying "oh, why don't you just do this?", where "this" is some trick only known to the teacher.
I'm very glad to learn about new strategies or tricks! People pointing out different ways to think about a problem is one of the things I hoped would happen when I started this thread. I've run across at least some of the topology concepts you're using above, but I'm excited to see how they can make a specific problem easier to solve.
John Baez said:
The description of the topology must determine the strategy for how we'll show is continuous. Since inverse images automatically preserve unions, we don't need to check that the inverse image of a general open set under is open. It's enough to check it for the open neighborhoods of the form described above. So let's make up some convenient notation for them.
This makes sense. Let be an arbitrary open set of . Then we wish to show that its inverse image is also open. But since we have a basis for the topology on , we know that is the union of some , where each is in our basis. Then . Since the union of open sets is open, we see that if the inverse image of each basis set is open, then the inverse image of arbitrary open sets is open.
That's the idea! Later, due to a remark by Todd, I started discussing the difference between a 'basis of open sets' and a 'basis of open neighborhoods of a point p'. The open sets I described are both of these, depending on whether you hold p (your germ) fixed or let it vary. The 'basis of open neighborhoods of a point' idea is especially nice for studying continuity at that point. But maybe you don't need to worry about this until you run into it on your own.
Next up, I'd like to review the concept of a "basis of open neighbourhoods", which you're using above. I think I've actually seen this before, but I haven't used it to solve problems yet. I referenced this and this.
I think this is the definition of a basis of open neighbourhoods at a point in some topological space : it is a collection of open subsets of , each containing , such that for any open subset that contains there is some so that . Intuitively, this is a collection of open sets "about " that lets us get "arbitrarily close" to .
John Baez said:
Say quite generally that we have a function between topological spaces and , and we're trying to show is continuous at some point . Say we know a neighborhood basis for every point and every point in . Then to show is continuous at , it's enough to check that for every in the neighborhood basis of , the inverse image contains a set in the neighborhood basis of .
Let me try to relate this condition for continuity at a point to the one I was using earlier. Above, I was trying to show continuity at by checking that the inverse image of a (not necessarily open) neighbourhood of is a neighbourhood of .
I'd like to think about how it is enough to consider the inverse image only of (open) neighbourhood basis sets, instead of the inverse image of arbitrary neighbourhoods of . Throughout, I assume we have a neighbourhood basis for and for . Let be an arbitrary neighbourhood of . It contains an open set that contains . Then, there is some in our neighbourhood basis for so that . If contains an open set containing , then certainly contains an open set containing . So, if the inverse image of every (open) neighbourhood basis set for is a neighbourhood of , then the inverse image of any neighbourhood of is a neighbourhood of .
To show that the inverse image of a (open) neighbourhood basis set of is a neighbourhood of , it suffices to show that its inverse image contains an open set containing . If its inverse image contains some set in the neighbourhood basis of , then its inverse image certainly contains an open set containing .
In conclusion:
To use the above in our case, we need to figure out a neighbourhood basis for each , and for each . To do that, we need to figure out a strategy for getting "arbitrarily close" to these points.
To get a bunch of open sets containing that "get arbitrarily close" to it, the first idea that comes to mind for me is to take the open set of germs of as we restrict its domain to be smaller and smaller open sets about . So, the sets in our proposed open neighbourhood basis for are of the form as becomes a smaller and smaller open set containing . In alternate notation, they are of the form , as ranges over all the open sets containing . I think @John Baez was indicating that we really do get a (open) neighbourhood basis for in this way. However, I don't immediately see how to prove this.
I'll stop here for today! Next time, I'm hoping to prove that we do get a neighbourhood basis in this way for a point .
David Egolf said:
So, the sets in our proposed open neighbourhood basis for are of the form as becomes a smaller and smaller open set containing . In alternate notation, they are of the form , as ranges over all the open sets containing . I think John Baez was indicating that we really do get a (open) neighbourhood basis for in this way. However, I don't immediately see how to prove this.
In my course notes I defined the topology on in essentially this way, by specifying these neighborhood bases, so I see nothing to prove! If you have some other way to define the topology, then you can try to prove this.
In my course notes I said something like this:
We'll give a basis for the topology, by describing a bunch of open neighborhoods of each point in Remember, any point in is a germ. More specifically any point of is in some stalk so it's the germ at of some where is an open neighborhood of But this has lots of other germs, too, namely its germs at all points We take this collection of all these germs to be an open neighborhood of our point A general open set in will then be an arbitrary union of sets like this.
That's me informally specifying a neighbohood basis about each point.
Here's my understanding of the situation:
This may be a situation where the thing to be proved is very fast and simple to prove once you know how to do it! But it seems to me that there is really something to be proved here.
Let be an arbitrary neighbourhood of in , where . It then contains an open set that contains . By definition of the topology of , we know that where each . Since , that implies that there is some specific so that .
Now, each basis element is the set of germs of some sheaf set element over some open set of . Hence where is an open set of and . Since , we have that . Hence has the same germ at as our does.
We know that two sheaf set elements and have the same germ at a point exactly if they restrict to the same sheaf set element on some open subset of containing . Thus, we have for some open set containing . Note that and .
Now, is an open set containing and further it belongs to our proposed neighbourhood basis . Since , we have that . Hence, we have found an element of that is a subset of an arbitrary neighbourhood of .
We conclude that really does form a neighbourhood basis of open sets for !
This is interesting to me, as it intuitively says that we can "approach arbitrarily close" to a point just by looking at the germs of various restrictions of . This simplifies things: in this context we only have to think about the germs of the restrictions of a single sheaf element , instead of all sheaf elements that happen to have the same germ as at .
I'll stop here for today. Next time, I'm hoping to use this neighbourhood basis to try and show the continuity of .
David Egolf said:
This may be a situation where the thing to be proved is very fast and simple to prove once you know how to do it! But it seems to me that there is really something to be proved here.
You're right. But you did it!
David Egolf said:
This is interesting to me, as it intuitively says that we can "approach arbitrarily close" to a point just by looking at the germs of various restrictions of . This simplifies things: in this context we only have to think about the germs of the restrictions of a single sheaf element , instead of all sheaf elements that happen to have the same germ as at .
Yes, that's a good thing to keep in mind with these etale spaces. So it's good you did that proof just now.
I just assumed it was obvious that if we have a bunch of open sets forming a basis for a topology, the sets in that basis containing a particular point form an open neighborhood basis for .
Let's see if I was fooling myself. It suffices to show that if is any open set containing , there exists such that
Since is a basis for the topology, is a union of some collection of these sets:
so at least one of the for contains , and for this one we have
. ∎
John Baez said:
I just assumed it was obvious that if we have a bunch of open sets forming a basis for a topology, the sets in that basis containing a particular point form an open neighborhood basis for .
I had been thinking along these lines, but then I realized that there can be a lot more open sets in our basis for the topology on that include , besides those of the form for some open set containing (where is not allowed to vary). For example, for some with satisfying , we have that is an open set in our basis that contains . And this open set is potentially different than the ones we can get just using , I think.
So, the collection of sets discussed above (which has sets given by the germs of when restricted to various open subsets of containing ) I think is smaller than the collection of sets from our basis that contain .
You're right, so I was being sloppy! I'm glad you caught that. I had to run through an example in my mind to see that this collection is really smaller. Luckily it's still a neighborhood basis, and it's a much more convenient neighborhood basis.
(My downfall was wanting to be extremely quick and informal in my course notes, and not use terms like "basis" or "neighborhood basis". I think it may save everyone work if I come out and clearly specify a neighborhood basis for each point. But let's see how things go.)
Ok, now that I understand this neighbourhood basis, let me see about trying to use it to prove that is continuous. Recall that is going to be a morphism of bundles induced by a morphism of presheaves on . And acts by .
To show that is continuous, we will aim to show it is continuous at an arbitrary point of . To show this, we need to show that the inverse image of any neighbourhood of is a neighbourhood of . But we recently saw that it suffices to show that the inverse image of any set in a neighbourhood basis of open sets for contains a set in a neighbourhood basis of open sets for .
We recently saw that for a point (with and ) we have a neighbourhood basis of open sets having elements of the form where is some open set containing and contained in .
Similarly, we have a neighbourhood basis of open sets . An element of this neighbourhood basis is of the form for some open set containing and contained in . Here, is shorthand for ,
So, let us consider the inverse image under of some arbitrary set in our neighbourhood basis of open sets for . Let's say we pick the neighbourhood basis element , where is an open set of containing and contained in . Given that , what can we say about the inverse image of this neighbourhood basis element?
Well, we see that any with is in the inverse image. So, the set is contained in the inverse image. But, since is an open set containing and contained in , this set is an element of our neighbourhood basis of open sets for !
We conclude that is continuous at an arbitrary point , and hence it is continuous!
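In the shorthand from before (again my own symbols: s in F(U), V an open set with x in V and V contained in U, and [s]_y for germs), the computation in the last two paragraphs amounts to this one-line check:

```latex
(\Lambda\alpha)\bigl([\,s|_V\,]_y\bigr) \;=\; \bigl[\alpha_V(s|_V)\bigr]_y
    \quad \text{for every } y \in V
\;\;\Longrightarrow\;\;
\widehat{s|_V}\,(V) \;\subseteq\; (\Lambda\alpha)^{-1}\Bigl(\widehat{\alpha_V(s|_V)}\,(V)\Bigr),
```

so the preimage of a basic neighbourhood of the image germ contains a basic neighbourhood of the original germ, which is exactly what continuity at a point asks for.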
Great! I guess it's clear now why I pushed you into this neighborhood basis idea. It makes this proof into a delicious downhill slide.
There must be other ways to proceed, but this feels to me like the way.
If we'd defined the topology in some other equivalent way from the start, some other proof might be good - but I haven't actually thought about other ways to define the topology on an etale space. You mentioned defining it using some universal property. One way might be to say: we give the weakest topology (fewest open sets) such that some class of maps out of it is continuous, or the strongest topology (most open sets) such that some class of maps into it is continuous. Maybe one of these works. But in this theorem we need to show continuity of maps - out of one etale space and into another.
We also want the projection from to to be continuous. I seem to recall you've already shown that? I think that's also easy with this neighborhood basis approach. If we give the weakest topology such that is continuous, do we get the same topology we're using now? I don't know; it should be easy to figure out but not today.
If we give the weakest topology such that is continuous, do we get the same topology we're using now? I don't know; it should be easy to figure out but not today.
Actually, I made the mistake of taking this weakest topology as the topology on the bundle . A priori, definition-wise, is coarser than since adds enough open sets to interpret sections of as continuous functions. But, I haven't tried to exhibit an actual example where would be strictly coarser than .
Okay - now I remember those comments of yours. Alas, I don't know such a counterexample, and when I try to visualize one I instantly think of this: one thing about etale spaces is that is not only continuous, it's a [[local homeomorphism]], where a section provides a continuous inverse to the projection restricted to the open neighborhood of the germ . This seems relevant somehow. Maybe prevents the existence of a counterexample? Or maybe we can use this condition as a kind of requirement that helps specify the topology of the etale space?
I think I found an example showing that (the coarsest topology on making the projection continuous) is strictly coarser than (the actual topology defined in John's blog post).
Take and the presheaf that maps any open subset to the set . Then .
An open set in , w.r.t. , is exactly a set of the form , for some open subset in . This prevents from being a local homeomorphism w.r.t. .
But we know that is a local homeomorphism w.r.t. . So is strictly coarser than .
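To spell a version of this example out in symbols (my reconstruction; I'm assuming a constant presheaf with a two-element value set): take $X = [0,1]$ and $F(U) = \{0,1\}$ for every open $U$, with identity restrictions. Then $\Lambda F \cong X \times \{0,1\}$ as a set, and
$$ \tau_1 = \big\{\, p^{-1}(V) = V \times \{0,1\} \;:\; V \subseteq X \text{ open} \,\big\}, \qquad \tau_2 = \text{the product topology, with } \{0,1\} \text{ discrete}. $$
No nonempty $\tau_1$-open set meets only one of the two copies of $X$, so $p$ is not injective on any nonempty open set and hence not a local homeomorphism w.r.t. $\tau_1$; it clearly is one w.r.t. $\tau_2$.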
This is interesting. The requirement of being a local homeomorphism adds open subsets to the initial topology .
But it looks like we cannot express the topology as the initial or final topology of some collection of functions.
Oh, actually, it's quite possible that is the final topology on making all the functions continuous. I'll think about that.
(sorry David, I should have started this digression in another topic)
John Baez said:
There must be other ways to proceed, but this feels to me like the way.
This approach seemed quite nice! A "delicious downhill slide" indeed.
John Baez said:
We also want the projection from to to be continuous. I seem to recall you've already shown that?
Yes, I proved that earlier in this thread. (I had to scroll a long ways back to check though!)
Peva Blanchard said:
(sorry David, I should have started this digression in another topic)
I think reflecting on the topology we put on is quite interesting, and discussion of that topic is a good fit for this thread.
I'm hoping that we can imagine a strategy or goal that would have led us to put the topology on that we did. One of my goals in learning math is to better understand how to create nice mathematical situations/structures. (I think this is a bit different than learning how to prove that structures that other people have come up with are quite nice.)
With the topology we chose to put on , it will turn out that is left adjoint to the functor . In particular, that implies we have a natural isomorphism for any presheaf . That implies that for any bundle we have a bijection: .
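In symbols, the adjunction being claimed (with $\Lambda$ for "presheaf to bundle of germs" and $\Gamma$ for "bundle to sheaf of sections"; my notation) would give, naturally in both arguments,
$$ \mathrm{hom}_{\mathrm{Top}/X}\big(\Lambda F,\; E\big) \;\cong\; \mathrm{hom}_{\mathrm{PSh}(X)}\big(F,\; \Gamma E\big) $$
for any presheaf $F$ on $X$ and any bundle $E \to X$.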
That means if I pick some particular natural transformation from to (which is the sheaf of sections of ), then there is some unique corresponding bundle morphism . If we let also denote the topological space that (the bundle) maps down to , then corresponds to a continuous map from to .
Now, imagine that we hadn't yet set the topology on , but we want to set things up so that is left adjoint to . Then we are motivated in our choice of topology of : we need to choose our topology so that all of the induced are continuous.
I am not confident in working with adjunctions yet, so actually using this idea to figure out what topology we'd need on sounds tricky to me currently. But my rough hope is this:
The ideas @Peva Blanchard and @John Baez sketched above relating to the topology on are also interesting. And one of those may be the way to go instead, I'm not sure!
But I like that this approach lets us imagine a goal that could have led us to our choice of topology on . Namely, this goal: try to define the topology on so that is left adjoint to .
Oh that's nice.
Indeed, given a natural transformation , we can define the set-function
with and an open neighborhood of . Of course, this requires proving that it is well-defined, i.e., that it is independent of the chosen representative.
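For the record, here is how I'd write that set-function and the well-definedness check (notation as before, with $\alpha \colon F \to \Gamma E$ for a bundle $p \colon E \to X$):
$$ f_\alpha \colon \Lambda F \to E, \qquad f_\alpha\big([s]_x\big) = \alpha_U(s)(x) \quad \text{for } s \in F(U),\ x \in U. $$
If $[s]_x = [s']_x$ then $s|_W = s'|_W$ for some open $W \ni x$, and naturality of $\alpha$ gives $\alpha_U(s)|_W = \alpha_W(s|_W) = \alpha_W(s'|_W) = \alpha_{U'}(s')|_W$, so both sections take the same value at $x$.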
And your perspective leads to another candidate for the topology on . Namely, the coarsest topology on making all those continuous.
One of my goals in learning math is to better understand how to create nice mathematical situations/structures.
That's a good goal, @David Egolf! Not enough people explicitly make this a goal, but I think it's something that can be learned, and category theory can be seen as a huge toolbox of methods for doing exactly this, though every other branch of math is important too.
I like your new project:
I hope to take inverse images of open sets under the induced to get a basis for the topology on .
I'd try to use this method to get those open sets I love, the sets I call , as inverse images of the sort you're talking about. Then you'd know that your topology has to at least contain those, which would be a big step forward.
Thanks to both of you for your encouraging words and interesting suggestions!
I am realizing that it will be helpful to learn more about this adjunction before figuring out what topology it requires on . (For example, I don't yet know enough about the adjunction to check that an ends up matching @Peva Blanchard's description). For that reason, I think I want to progress a bit further on the puzzles of the current blog post (and the next one) before thinking about this in more detail. But I have made a note to return to this question later, once we've worked through the discussion of the adjunction in the blog posts!
The current puzzle I'm working on is this one:
Puzzle. Describe how a morphism of presheaves on gives a morphism of bundles over and show that your construction defines a functor .
Starting with a natural transformation between presheaves on , we saw above how to form a morphism of bundles from to . It remains to show that this process actually defines a functor. (However, I need to rest up today, so I will return to this hopefully tomorrow!)
Given , then the induced morphism of bundles corresponds to the continuous function , for for some open subset of . This function maps from the topological space to the topological space . (Above, we called by the name ).
Here I am using to denote both the topological space of germs of and the projection from that space to , which sends each germ to the point it belongs to. Hopefully context will make it clear which usage I intend.
So, we notice that is preserving the source and target of morphisms.
Next, let be the identity natural transformation from to . Then corresponds to the continuous function . But since each component of is an identity function, . Thus, the induced map is , which is the identity map from to . We conclude that is preserving identity morphisms.
It remains to show that preserves composition. Let us assume we have and . We wish to show that .
corresponds to the continuous function . Since , we have that .
But we notice that corresponds to the continuous function . So, we conclude that and so preserves composition.
Thus, is a functor.
The blog post next begins to discuss the fact that not only can we get a bundle from a presheaf on , but that this bundle has a nice property. Namely, each point in has a neighbourhood such that is a homeomorphism. Intuitively, each point in has some little region "near it" that looks (topologically) just like some neighbourhood of its image under . This seems like it might be helpful for understanding intuitively "around some germ", provided that we know what looks like.
Wikipedia gives a nice picture illustrating this kind of situation:
covering space
To set up the next puzzle, we need a few definitions:
We can now state the next puzzle:
Puzzle. Show that is a homeomorphism from to .
Technically, we wish to show that the restriction of to provides a homeomorphism from to . We define by , and aim to show this is a homeomorphism. (Since each germ in belongs to a point in , this function really does map to the set .)
First, we note that is a bijection. That's because it has an inverse as a function, which I will call . This inverse function acts by .
It remains to show that and are both continuous. We know that is continuous, and that the inclusion function is continuous. Hence is continuous. I seem to recall that is then also continuous, provided that only takes values in . If that's true, then is continuous.
I'll stop here for today. Next time, I'm hoping to finish showing that is a homeomorphism!
Great, you're moving along nicely here. And your memory is right. The subspace topology on a subset of a topological space is the one where the open sets of are defined to be the sets where is open in . It then instantly follows that if is any continuous function taking values in the subset , it gives a continuous function where is given the subspace topology.
By the way, some very careful people distinguish notationally between the function in this situation and the function , and call the latter a corestriction of the former, by analogy with restriction - since we're shrinking the codomain of instead of its domain.
So if I were one of those people I'd have said
It then instantly follows that if is any continuous function taking values in the subset , its corestriction to is continuous when is given the subspace topology.
I wanted to visualize a bit more what the étale space looks like in strange cases, and I thought it could be interesting to share .
Let be the unit interval, and a point.
Let be the presheaf such that, for every open subset
Then should look like this
image.png
Here seems to just be the union of two line segments that cross at a point over . It seems to me that has "essentially" two neighborhoods that are homeomorphic to , depending on which line segment you choose.
Nice! And I guess if you change your mind about which line you choose, and get a 'broken line segment', that's not an open neighborhood. This clarifies the original French meaning of the term espace étalé.
Actually, my conclusion might be wrong. I've been too sketchy when defining the presheaf . I need to specify the restriction morphisms.
Indeed, I was wrong ...
Let's write and . Let be two open subsets in .
When both contain , or when both do not contain , we have and we define the restriction morphism to be the identity.
The interesting case is when and . In that case, let's define
Now the étale space would look more like this
image.png
I.e., we have three line segments: one of them goes through , and the other two adhere to without touching it.
So we have a unique open neighborhood of that maps homeomorphically to .
These are interesting pictures! Even if the first one is not accurate to , I like how it illustrates this: it's possible to have multiple regions (in this case, lines) that "look the same about a point " while only having the point in common.
Regarding the restriction maps, I don't understand how you are defining . You mention the condition " is on the left of " and the condition " is on the right of ". But couldn't an open set that doesn't contain have elements both to the left and right of ? It wasn't clear to me what the restriction map would be in that case.
Oh yes you're right! I am going too fast with the picture. I tend to draw and then formalize hastily in between two meetings. (I should definitely learn the moral lesson...)
The correct definition for , in the case and should be (hopefully)
And in picture
image.png
John Baez said:
By the way, some very careful people distinguish notationally between the function in this situation and the function , and call the latter a corestriction of the former, by analogy with restriction - since we're shrinking the codomain of instead of its domain.
I like to keep careful track of the source and target of morphisms, so I suppose I aspire to be one of these "careful people". Thanks for reminding me how corestriction interacts with continuity!
I think we can now finish showing that and are continuous. We saw above that is continuous. Then, since is a corestriction of and has the subspace topology, is continuous.
It remains to show that is continuous. We recall that it acts by .
To make typing this a little bit easier, I'm going to denote using the symbol . So, . I will aim to show that is continuous at any point . We saw above that the following collection of sets is a neighbourhood basis of open sets for : namely as ranges over the open sets of contained in and also containing . To show that is continuous at , it suffices to show that the inverse image under of any set in our neighbourhood basis for contains an open set containing .
So, let us consider some arbitrary set in our neighbourhood basis for . Here is an open set containing and contained in . The inverse image of under certainly contains , and hence contains an open set containing .
We conclude that is continuous at any point in , and so is a continuous function.
Thus, is a homeomorphism.
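Putting the pieces together in symbols (same notation as before): for $s \in F(U)$ the two mutually inverse continuous maps are
$$ \tilde{s} \colon U \to V(s,U), \quad \tilde{s}(x) = [s]_x, \qquad \text{and} \qquad p\big|_{V(s,U)} \colon V(s,U) \to U, $$
so $\tilde{s}$ is a homeomorphism of $U$ onto the open set $V(s,U) \subseteq \Lambda F$.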
To remember what is, I might instead denote by - the set of germs of associated to the points of . Intuitively, "taking germs over of a fixed " produces a topological space just like . Since there are potentially many different elements in , there are potentially many "copies" of in , given by as varies over the elements of .
(I'll stop here for today!)
David Egolf said:
Since there are potentially many different elements in , there are potentially many "copies" of in , given by as varies over the elements of .
Yes, it is exactly this observation that led me to play with strange cases. The trivial case is when all those copies are disjoint (e.g., like the parallel copies in the helix-like picture you posted before). Then I wondered what happens if we glue some of them at some specific point.
I believe we have now worked through the second blog post in the series! On to Part 3!
Part 3 begins by presenting a different way to specify the "sheaf condition" for a presheaf. Although this is not listed as an official puzzle, I would like to understand why this new formulation of the sheaf condition is equivalent to the formulation we've been using so far.
I was going to start by stating the new formulation of the sheaf condition. However, I don't understand it well enough to do so!
The new formulation involves a diagram that looks like this, where we have a collection of open sets that cover the open set in , and is a presheaf on :
diagram
I believe I understand how is defined. It is the function induced using the universal property of products via the collection of restriction functions . So, sends each sheaf element on to the tuple of its restrictions over the that cover . Intuitively, we are "decomposing" a sheaf element into smaller pieces.
I don't yet understand how and are defined. I think we will again be using the universal property of products, but it seems confusing at the moment... I will stop here for today!
It may help to think in a lowbrow way. Think of an element of as a list of elements , one for each . Think of an element of as a list of elements , one for each pair . (Don't take the word 'list' too seriously here: the order doesn't matter, etc.) To get a map
we thus need a recipe to take
and extract from it
There are two 'obvious' recipes, and these are and .
To see why there are two, it may help to notice that
is exactly the same thing as
Here is another hint when we have three open sets .
When we fix , we get 3 restriction morphisms
On the other hand, if we fix , we also get 3 restriction morphisms
By taking the relevant products, and using the associativity, you get the two ways.
That's a clearer hint than mine. By the way, there's a highbrow way of thinking about this stuff in terms of the 'Cech nerve', 'descent' and the 'bar construction', which we discussed here, but I feel the lowbrow approach we're taking here is a good warmup for that highbrow approach.
Thanks to both of you for the hints! I will be referencing them as I try to better understand and .
We want to define some function . We know by the universal property of products that such functions correspond bijectively to cones with tip over the discrete diagram having objects of the form . So, to find a morphism , it suffices to find one morphism from to for each .
Now, to describe a function, it suffices to say what the function does to each element. An arbitrary element of is a tuple of presheaf elements, where we have one element associated to each . So, given a tuple of presheaf elements associated to the open sets in our cover, we want to determine some corresponding element in each .
What is an element of ? It is a presheaf element associated to . So, from a tuple of presheaf elements having one element for each open set in our open cover, we want to determine some presheaf element on .
Let be such a tuple of presheaf elements. has a presheaf element in it associated in particular to , and it also has one associated to . I will denote the presheaf element of associated to by , and the one associated to by .
If we restrict to , we get an element of . We can build up a function using this idea. Namely, let the function act by , where is a restriction function provided by our presheaf.
We can use this approach to build a function for each . These functions all together then induce a unique function by the universal property of products.
Intuitively, this function takes a tuple of presheaf elements over our open cover, and sends this tuple to a tuple of presheaf elements. The output tuple has one element associated to each pairwise intersection of our open cover sets. Namely, it associates the restriction of to .
Now, we can also define a second function in a similar way. It will also be built up from functions as varies. We define the -th inducing function as . Then, we can use the universal property of products to induce a unique function .
In brief, this function takes a tuple of presheaf elements over the sets of our open cover, and for each pairwise intersection of those open sets it associates the restriction of to .
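In case it helps to see the whole construction in one place (my labels $e$, $p$, $q$ for the three maps), the diagram is
$$ F(U) \;\xrightarrow{\ e\ }\; \prod_i F(U_i) \;\rightrightarrows\; \prod_{i,j} F(U_i \cap U_j), $$
$$ e(s) = \big( s|_{U_i} \big)_i, \qquad p\big( (s_i)_i \big)_{i,j} = s_i\big|_{U_i \cap U_j}, \qquad q\big( (s_i)_i \big)_{i,j} = s_j\big|_{U_i \cap U_j}. $$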
I think that's right. Assuming so, it appears that the functions I was looking for weren't all that complicated! Next time, building on this, I plan to work on understanding what it means for this to be an equalizer diagram:
diagram
(A side note: I just noticed that the phrase "a function is determined by its values on each of its inputs" can be thought of as an implicit reference to the universal property of coproducts in . That's kind of fun!)
David Egolf said:
I think that's right. Assuming so, it appears that the functions I was looking for weren't all that complicated!
Right!
David Egolf said:
(A side note: I just noticed that the phrase "a function is determined by its values on each of its inputs" can be thought of as an implicit reference to the universal property of coproducts in . That's kind of fun!)
Be aware it’s not exactly the same thing. The coproduct in is the disjoint union of sets. Therefore a function corresponds to family of functions .
“A function is determined by its values on each of its inputs” corresponds to the bijection . By the universal property of products, it gives you that a function can be thought of as a family of elements of .
Oh, now I see it. You can also interpret this sentence using the universal property of coproducts. You can write and the universal property of the coproduct gives you that a function corresponds to a family that is a family of elements of .
Yes, and we're also using a very special feature of the category of sets here, which is that every object is a coproduct of copies of the terminal object.
This makes the category of sets "boringly simple" compared to other categories. But of course that boring simplicity is exactly why we like it!
@John Baez Oh but I remember an article of yours, at the n-category café, which presented some very nice characterization of the category of sets. I'm trying to google it, but I can't remember the exact keywords. I think there was some stuff involving comonoids...
Most relevant here is that is the free category with coproducts on one object, and it then turns out that this object is the terminal object. But it sounds like you're thinking of something else.
I don't know what article you're talking about, but maybe you mean that is the free symmetric monoidal category on a cocommutative monoid object; it then turns out that this object is the terminal object.
I probably did blog about this ages ago, as part of my "Coffee for Theorems" series.
Yes, I was mostly reacting to "boringly simple" because I remember my impression that the category Set can really be "non-boringly non-simple".
But you're right it's not really related to the present topic.
I wonder if, in a category with coproducts, it makes sense to think of the "elements" in that category as the objects that can't be written as a "non-boring" coproduct (that is, excluding things like ) of other objects. (The "indecomposable" objects, if that's the right term). I'm motivated in this intuition by this fact relating to the universal property of coproducts: a morphism from some object is determined by corresponding morphisms from the .
I was hoping to work out today what it means for our diagram above to be an equalizer. However, it turns out I need to rest up today. So, I hope to get back to that tomorrow!
David Egolf said:
I wonder if, in a category with coproducts, it makes sense to think of the "elements" in that category as the objects that can't be written as a "non-boring" coproduct (that is, excluding things like ) of other objects. (The "indecomposable" objects, if that's the right term).
I don't know if it makes sense to think of indecomposable objects (yes, that's the right term) as "elements" - that's a sort of squishy question, so let me just say that I've never thought of them as "elements". However, they are important. They're especially important in categories where every object is a coproduct of indecomposables: then they serve as "building blocks" for general objects in a very nice way.
For example, a functor from a group to is called a representation of , and there's a category with representations of as objects and natural transformations as morphisms. There's a huge industry devoted to understanding for various kinds of groups .
If is a finite group, we have a wonderful theorem: every object in is a coproduct of indecomposable objects! Moreover it's a coproduct in a unique way (up to isomorphism and permuting the guys you're taking the coproduct of).
Even better, if we are using vector spaces over an algebraically closed field like , then if we have two indecomposable objects they are either isomorphic, in which case
or they're not, in which case
meaning there's only one morphism between them.
This should remind you of a "Kronecker delta" which is if and otherwise. It's a categorified Kronecker delta, where is 1-dimensional if and 0-dimensional otherwise!
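In symbols this is Schur's lemma: over an algebraically closed field $k$ (and, say, in characteristic zero, so that indecomposable = irreducible),
$$ \hom(\rho, \rho') \;\cong\; \begin{cases} k & \text{if } \rho \cong \rho', \\ 0 & \text{otherwise}, \end{cases} $$
so $\dim \hom(\rho, \rho')$ is literally the Kronecker delta $\delta_{\rho\rho'}$.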
So these indecomposables are not only building blocks for all objects, they "don't talk to each other" - there are no interesting morphisms between nonisomorphic indecomposables.
I'd call this a very "crunchy" situation - it's hard to put into words, but it's very much the opposite of a floppy, sloppy situation, it's very well-disposed to concrete calculations, and it makes the question of finding the indecomposable representations of a finite group incredibly important, because once you know them, you really know a lot!
You said you wanted to learn how to create nice mathematical situations and structures. I guess one thing that helps is learning a bit about some of the classic examples of situations that mathematicians really love, and why they're loved. for a finite group is one of these. Any category where every object is a coproduct of indecomposables has a similar crunchy feel to it.
I said I don't think of indecomposables as "elements", at least not in a sense similar to set-theoretic elements. But people often do think of them as similar to "atoms"... or elements in the chemical sense!
The discovery that every group representation is a coproduct of indecomposables in a unique way is very similar to the discovery that all molecules are made of atoms which can't be further decomposed (well, that's what they thought anyway) - and moreover, that this decomposition is unique.
The world would be a much more tricky place if I could decompose water into one oxygen atom and two hydrogen atoms but you, using another technique, could decompose it into a phosphorus atom and a carbon atom.
If I understand correctly, in the case of sets, the atomic objects are the empty set and the singleton sets. So, there are only two iso classes of atomic objects, and . And, there is a unique arrow from to . This is in sharp contrast with !
Also, when we fix a singleton set , an element of an object is usually presented as an arrow . And the set of elements of corresponds to the hom set .
So, in an arbitrary category with coproducts, it makes sense to discuss, on one hand, atomic objects (indecomposable as a coproduct), and, on the other hand, elements of an object. But it's not obvious to me how to relate the two.
Yes, is like , and very different from , in that its initial object is also terminal.
It's an [[abelian category]], and the property I'm talking about, that every object is uniquely a coproduct of indecomposables, is essentially the same as it being [[semisimple]].
Peva Blanchard said:
If I understand correctly, in the case of sets, the atomic objects are the empty set and the singleton sets. So, there are only two iso classes of atomic objects, and . And, there is a unique arrow from from . This is in sharp contrast with !
I think it's best not to consider to be indecomposable. For the same reason isn't prime.
Yes, that's true.
You need the initial object not to count as indecomposable if you want to get the kind of result I'm claiming happens in for a finite group: that every object is uniquely a coproduct of indecomposables. Just like 1 mustn't be prime if you want every positive integer to be uniquely a product of primes.
This is under [[too simple to be simple]].
I look forward to resuming posting in this thread (and some others)! There's a lot of interesting posts various people have made that I look forward to responding to.
Unfortunately, I have been struggling more with my chronic fatigue the last few days. So, I think I need to rest up for several days and then hopefully I can return to posting in these threads. Here's wishing everyone a pleasant weekend!
No worries, as far as I am concerned, no need to give reasons for why you are not posting. Rest well :)
John Baez said:
Even better, if we are using vector spaces over an algebraically closed field like , then if we have two indecomposable objects they are either isomorphic, in which case
or they're not, in which case
meaning there's only one morphism between them.
It's interesting to me that this is a "nice mathematical situation". At first glance, I might have assumed it was "too boring" or "too simple"! But I suppose the idea roughly is that we have a bunch of "independent building blocks" in this category, which sounds handy.
I next want to continue contemplating this diagram:
diagram
Specifically, I am interested in understanding what it means for this diagram to be an equalizer. It's going to take me a minute to remember what we already said about this diagram!
To review:
Continuing the review, focusing now on the functions in our diagram above:
David Egolf said:
John Baez said:
Even better, if we are using vector spaces over an algebraically closed field like , then if we have two indecomposable objects they are either isomorphic, in which case
or they're not, in which case
meaning there's only one morphism between them.
It's interesting to me that this is a "nice mathematical situation". At first glance, I might have assumed it was "too boring" or "too simple"! But I suppose the idea roughly is that we have a bunch of "independent building blocks" in this category, which sounds handy.
Your impression that this is "too simple" shows you have good intuitions. The field of homological algebra is heavily focused on how categories deviate from the simple behavior described here, so from that point of view categories of representations of finite groups are boring. But:
1) When you're building a house, you don't want the bricks to be interesting: you want them to behave in simple nice ways. So, like the category of sets or the category of vector spaces, the category of representations of a finite group is a great thing for building further structures!
2) Knowing general facts about categories of representations of finite groups is not the end of learning about these categories - it's just the start. We want to know all the indecomposable representations of all finite groups, and we want to know everything about them. That's an endless task... but luckily, it's fascinating and leads to many mind-blowing discoveries.
So: if something is "too simple", you happily move on to the next step!
But I'll restrain myself. Back to why the sheaf condition is an equalizer condition!
Continuing to try and understand why the sheaf condition is an equalizer condition, I next want to think about this: what does it mean for some to satisfy ? For , we must have for each . We know that is the restriction of to . And is the restriction of to . For these two to be equal, we must have that . In other words, the two parts of that "overlap" on must agree on the overlap.
I know how to find an equalizer of a diagram in . In this case, an equalizer of our diagram will be given by:
Combining the two paragraphs above, the object part of an equalizer of our diagram is given by the subset of consisting of elements such that for all . Intuitively, each element of this set consists of a way to assign a presheaf element (provided by ) to each open subset in our open cover for , such that data we pick "agrees on overlaps". Together, the entire equalizer set corresponds to all the different ways in which we can do this.
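Written out (same notation as before), the object part is
$$ E \;=\; \Big\{\, (s_i)_i \in \prod_i F(U_i) \;:\; s_i\big|_{U_i \cap U_j} = s_j\big|_{U_i \cap U_j} \text{ for all } i,j \,\Big\}, $$
with the equalizer map being the inclusion $E \hookrightarrow \prod_i F(U_i)$.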
David Egolf said:
Combining the two paragraphs above, the object part of an equalizer of our diagram is given by the subset of consisting of elements such that for all . Intuitively, each element of this set consists of a way to assign a presheaf element (provided by ) to each open set of , such that data we pick "agrees on overlaps". Together, the entire equalizer set corresponds to all the different ways in which we can do this.
The only thing I would change in that is to change "to each open set of " to "to each open set of the covering" (i.e., to each ). Otherwise, looks good.
Yes, thanks for pointing that out @Todd Trimble !
It remains to consider the case in which together with also provides an equalizer for our diagram. Since limits are unique up to isomorphism, if is the object part of an equalizer, that implies we have a bijection between and the set of all ways to pick data from each open set in our cover such that the selected data agrees on overlaps.
Let be the canonical isomorphism (induced by the universal property of limits). Picking some data for each in our open cover such that and agree on for all corresponds to picking an element of . Then is an element such that . That is, restricting to each gives us .
Here's the picture I was referencing when writing the above paragraph:
picture
In this case, corresponds to a "stitching together" process that takes in presheaf data on the open sets covering that agrees on overlaps, and returns a "global" presheaf element in that restricts to the data we selected on each . The existence of together with the fact that is (the object part of) an equalizer tells us that we can always perform this stitching process given appropriate input data.
(Note that together with isn't always an equalizer for our diagram! There will be lots of presheaves for which this is not the case. But I'm considering here the case in which this data does provide an equalizer.)
There is still a bit more to do here, but I'll stop here for today.
This is great, @David Egolf! You not only got the idea, you figured out how to explain it well, conveying the intuition. The phrase "stitching together" is very good for how we assemble an element of from elements of the that agree when restricted to the .
Now, to show that is acting like a sheaf with respect to and its open cover of the , there is still a little more work to do. We've seen that if and are an equalizer (as above), then we can always form an element of from elements of the that "agree on overlaps". It remains to show that in this case there is a unique element of that restricts to each of these elements on the corresponding .
Let be some tuple of such that the "agree on overlaps". We want to show there is a unique element of that maps to under . Let us assume that we have the situation for some . Since is a bijection, there is a unique so that and similarly a unique so that . Therefore, implies that . Since , this implies that . Since is injective, this implies that and hence . We conclude that there is indeed a unique element of that restricts to each on .
So, if and the corresponding are always an equalizer for the diagram under discussion, given any open set of and any open cover of of (with each ), then I think that is a sheaf.
It remains to show that if is a sheaf, then and provide an equalizer for any version (as and its open cover vary) of the diagram discussed above. But I will leave that for next time!
Alright, we're in the home stretch now! Let's assume that is a sheaf. We want to show that this diagram is then an equalizer diagram (for any open set in and any open cover of using some with each ):
diagram
To do this, let's suppose we have some other cone over and . We want to show there is a unique morphism from this cone to our proposed equalizer cone, in the category of cones over and :
morphism of cones
Now, each element of corresponds to some element , such that . As we saw earlier, this means that picks out some data from each such that the data selected on and agree when restricted to , for all . So, we can think of each element of as being associated to some collection of data on the that "agrees on overlaps".
Since is a sheaf, we know there is a unique element that restricts to each on , for all . So, let's define in that way: it sends associated to some collection of "compatible on overlaps" data to the unique "stitched-together" element of induced by the data .
This will ensure that .
We want to show that there is only this one way to define . We need for any . That is, must restrict on each to . Since is a sheaf, there is only one such element of that satisfies this condition. So, indeed there is a unique morphism of cones to our cone involving and .
We conclude that if is a sheaf on , is an open set in , and we have an open cover for formed from sets with each , then this diagram is always an equalizer diagram:
diagram
Next time, I hope to start the section on sheafification!
We discussed these two functors above:
By applying after , we can make a sheaf from any presheaf! The next order of business is to understand in detail how this works. [I've not had the energy for math the last little while, but when I have a bit more energy I plan to start working this exercise!]
I will just say that this business of turning a presheaf into a sheaf is called "sheafification", and there's at least one very nice way to understand it other than merely saying it's the composite . So this is a very worthwhile thing to think about. For example, I think you can see that it's the left adjoint of the forgetful functor from sheaves to presheaves.
I am currently puzzling over this sentence from the blog post:
So, if you think about it, you’ll see this: to define a section of the sheafification of over an open set , you can just take a bunch of sections of over open sets covering that agree when restricted to the overlaps.
I am unsure what is meant here by a section of a sheaf. (The sheafification of is a sheaf).
In the blog posts so far, we've talked about sections of bundles, but not sheaves, as far as I can remember. However, we recently saw that there is an equivalence of categories, between the category of sheaves on and the category of etale spaces over . When we talk about "a section of a sheaf", is this perhaps a way to refer to a section of the sheaf's corresponding etale space (which is a bundle) under this equivalence?
Looking in the comments for Part 2, I notice this comment (from John Baez):
I keep wanting to call an element of a 'section' of the presheaf over the open set ...
I am guessing that this is the intended meaning in the snippet of Part 3 that I quoted above.
Under this assumption, we can rewrite the snippet I quoted above. (I use to refer to the sheafification of ).
To define an element of over an open set , you can just take a bunch of elements, one from each as ranges over a collection of open sets that covers , provided that the chosen elements agree when restricted to overlaps.
For example, if is the presheaf of bounded continuous real-valued functions on open subsets of , then an element of can be obtained by stitching together a bunch of bounded continuous real-valued functions, defined on various open subsets of , provided that the cover and the selected bounded continuous real-valued functions agree on overlaps. In particular, such an element is not necessarily bounded. So we see that the sheafification of a presheaf can (at least sometimes) assign new data to an open set - data that the original presheaf did not assign to that open set!
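A concrete instance of this (my example, just to make the unboundedness vivid): cover $\mathbb{R}$ by the open sets $U_n = (-n, n)$ and take the bounded continuous functions
$$ f_n = \mathrm{id}\big|_{U_n} \in F(U_n), \qquad f_m\big|_{U_m \cap U_n} = f_n\big|_{U_m \cap U_n} \text{ for all } m, n, $$
which agree on overlaps and so glue to an element of the sheafification over $\mathbb{R}$ — namely the identity function, which is continuous but unbounded, hence not in $F(\mathbb{R})$.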
Indeed, I was a bit surprised too, when reading papers, to see that an element is also called a section of over . Because a section is usually defined w.r.t. a bundle .
I think there is a way to make this way of speaking rigorous by noticing that, as presheaves, is included in its sheafification . The inclusion is given, for every open subset , by where is the continuous function from to the étale space of that assigns to every the germ of at .
The function is a section of the étale space of .
And the map is injective.
This is why I think we can speak of as a section of over .
Yes, that's right, in the settings where a sheaf can be seen as a local homeomorphism, its sections in are literally local sections of that map; in more general settings it's metaphorical.
@David Egolf sorry for the sloppy use of language. Peva and Kevin guessed: at some point I started wanting to call an element of of a sheaf or even a presheaf something a bit more evocative than an "element", so I started calling it a "section" - apparently without adequate warning.
It's probably safest - as far as clarity goes - if I just don't do this at all.
For sheaves, it's quite safe to abuse language after one has internalized the fact that every sheaf is (isomorphic to) the sheaf of sections of its etale space.
And one can even get away with it for presheaves, using the trick Peva explained: given a presheaf we have a god-given inclusion where is the sheafification of , so we can think of elements of as some of the elements of , which we can think of as sections of the etale space of .
However, since this is supposed to be an introduction to the subject, I should not be forcing my readers into all this "thinking of X as Y" baloney.
I will just go back and edit my blog posts to change the offending term "section" to "element" as needed.
Okay, on the blog post Topos Theory (Part 3) I've changed
So, if you think about it, you’ll see this: to define a section of the sheafification of over an open set , you can just take a bunch of sections of over open sets covering that agree when restricted to the overlaps.
to this:
If you think about it a while, you'll see that sheafification works like this: to define an element of over an open set , you can just take a bunch of elements of over open sets covering that agree when restricted to the overlaps .
This is somewhat symbol-ridden compared to what I'd written - I was trying to talk like an ordinary bloke - but since I'd just said that is called 'sheafification', it should make sense in context.
Great! Thanks for clarifying that!
And it's spelled out in even more detail in the following puzzle, which I've also corrected:
Puzzle. Prove the above claim. Give a procedure for constructing an element of given open sets covering and elements that obey
If you spot more places where I do this, or any other problems, please let me know and I can fix them.
This "way of thinking about " reminds me of something that recently blew my mind.
Urs Schreiber gave a very nice talk at the Zulip CT Seminar, about higher topos theory in physics. I couldn't understand all the details, but he starts with a "way of thinking" that, I think, is accessible for CT beginners. Here it is.
When you have a space that you want to study, e.g., the surface of the earth, it is easier if you have a plot. A plot could be, for instance, a function that sends a tuple of coordinates (e.g. the latitude and longitude) to a point in . In other words, a plot can be seen as a function from some "nice" or familiar space to the space .
The key point is that the study of the space is exactly the study of all the plots for every nice space . To study the surface of the earth is the same as setting up all local charts and giving procedures to transform any one of them into another (provided they overlap).
This collection of charts can be described as follows: for every , we have a set of plots . Moreover, these sets should be consistent with one another. If there is a nice morphism , then there is a function that maps any plot to its pre-composition with the nice morphism. And if there is a collection of nice morphisms that mutually agree on the pairwise intersections of their domains, then they can be glued together.
In other words, the study of is exactly the study of a sheaf defined on some category of "nice" spaces.
And now the twist: if we follow this wanna-be equivalence, then any sheaf on can be thought of as the sheaf of plots into a generalized space. To emphasize the situation even more, we can use the name to refer to this generalized space.
And then, an element is to be thought of as a plot .
ps: I am still processing all the twists involved. I hope I did not over interpret this stuff.
Note that I am using the symbols and that have been used in this thread to denote an open subset and a topological space .
However, in Urs Schreiber's talk, the "nice" category is not the category of open subsets of a topological space. The first example he gave was something like the category of cartesian spaces . Apparently, this relates to the distinction between a petit and a gros topos.
Right, that's an excellent explanation of an important direction sheaf theory goes after one learns the basic example of sheaves on a topological space! This direction is often attributed to Grothendieck. Unfortunately my blog articles don't get far enough to talk about this direction: I only managed to cover some very basic material. But Mac Lane and Moerdijk do talk about it.
The first step toward this direction is generalizing sheaves on a set with a topology to sheaves on a category with a Grothendieck topology. And this is the kind of sheaf their book is mainly about.
Here is the next puzzle:
Puzzle. Prove the above claim. Give a procedure for constructing an element of given open sets covering and elements that obey
Since we are wishing to construct an element of , perhaps it will help to think about what the elements of are.
is the presheaf of sections of . So, the set is the set of sections of over . Thus, an element of is a section of over .
What is a section of over ? Well, is a bundle over (where is the topological space the open sets are from), which I will write as . (So I am using the symbol in two different ways now). We recall that each point of has some set "hovering over" it, in the sense that for each . Thus, a section of over is a continuous function such that for each .
So, if we can construct a map with for each , we will have constructed an element of . To do this, we first recall that is the colimit of the diagram given by the (and their restriction functions) as ranges over all the open sets in that contain . In particular, this implies that there is a ("cone leg") function from to if .
The following procedure now occurs to me:
Intuitively, is a "local behaviour" function, that builds up some "global" data on all of by describing how it behaves at each point. intuitively says that our global data at will behave locally like how does. So, we can view this definition of as a sort of "gluing process" that forms global data from local data. This glued together data, , is then an element of the set assigned to by the sheafification of .
It still remains to show that defined by (for ) is continuous. But I will stop here for today!
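In symbols (notation as before), the glued section is
$$ g \colon U \to \Lambda F, \qquad g(x) = [s_i]_x \quad \text{whenever } x \in U_i, $$
which is well defined because $s_i|_{U_i \cap U_j} = s_j|_{U_i \cap U_j}$ forces $[s_i]_x = [s_j]_x$ for every $x \in U_i \cap U_j$.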
I will now try to prove that defined by (when ) is continuous. I hope to do this by showing that is continuous at any point .
To show that is continuous at , we need to show that the inverse image under of each neighborhood of is a neighborhood of . (I use the word "neighborhood" to mean a subset that contains the point in question and that contains an open set which also contains that point.)
So, let be a neighborhood of . We wish to show that is a neighborhood of .
After spending some time dredging up my memories from the earlier parts of this thread, I think it suffices to show this: the inverse image of any neighbourhood from a neighbourhood basis of is a neighbourhood of .
To see why this is sufficient, let be an arbitrary neighbourhood of and let be a neighbourhood basis for . Then, by the definition of neighbourhood basis, there is some with , so that . If is a neighbourhood of , then that means it contains an open set that contains , which is further a subset of . Thus, we can conclude that is a neighbourhood of .
Earlier in this thread, we discussed a convenient neighbourhood basis for a point . First, since every element of is a germ, we note that for some , where is an open set containing . Then, we can form the collection of sets of the form as varies over the open subsets of that contain .
If I'm reading the earlier part of this thread correctly, we saw that this collection of sets forms a neighbourhood basis for . Intuitively, each set in the collection is a set of "local behaviours of " on some open set containing .
Returning to the current puzzle, to show that is continuous at , it suffices to show that the inverse image of any neighbourhood from our neighbourhood basis is a neighbourhood of .
Now, by definition. Here, is one of our provided sheaf set elements such that .
In this setting, let us consider an arbitrary neighbourhood from our neighbourhood basis . It is of this form: where is an open subset of that contains . I will denote this neighbourhood as .
We wish to show that is a neighbourhood of . What is this preimage? Well, . So, if we have some , its preimage is the collection of points such that .
For each , we have that . Hence, contains at least all of . Since is an open set containing , we conclude that really is a neighbourhood of !
We conclude that the inverse image of an arbitrary set in our neighbourhood basis for is a neighbourhood of . Therefore, the inverse image of an arbitrary neighbourhood for is a neighbourhood for . That implies that is continuous at , for any . Thus, is continuous, as desired!
Intuitively, if we think of continuous functions as having output that changes gradually as their input changes gradually, then the "local behaviour" of our data on specified by our changes gradually as we move around in .
(I suppose one way to formalize this intuition is to note that, by making use of , any path induces a path in . Namely , we get .)
Whew! Hopefully I did that correctly. I'll wrap up for today by introducing the next puzzle:
Puzzle. Show that for any presheaf there is a morphism of presheaves .
Show that these morphisms are natural in , so they define a natural transformation .
The puzzle in the blog post actually states this, which I suspect is a typo:
Show that these morphisms are natural in , so they define a natural transformation .
It looks like you did that last puzzle correctly - great!
Either or has to be a typo above, so pick the one that makes sense.
If you get stuck... I fixed the typo in my blog - thanks! And I also fixed another mistake where I spoke of a "section" of a presheaf instead of an "element". You've convinced me it's really bad to start using "section" in this other way.
Awesome! It's great you fixed those things! It's nice to know that this thread is helping make the blog posts even better for future readers.
Alright, my next goal is to show that for any presheaf there is a morphism of presheaves . All the presheaves I consider here will be on some topological space .
So, we are looking for a natural transformation from to .
Let be a morphism in . Here and are open sets of , with . Associated to , we have this naturality square:
naturality square
So, we need to specify a function for each open set in .
In the previous puzzle, we saw a way to build an element of given a cover of open sets for , together with for each so that the given agree on overlaps.
We can now use this procedure specialized to the case where our cover of open sets for consists of just the single set . If we are given some , we should be able to construct some element of .
An element of is a section , which is a continuous function such that for each . (Here, is the set of germs of at .)
Given , our recipe for constructing an from the previous puzzle is this: for each .
Intuitively, this is just like : at each point it specifies a germ , which describes the "local behaviour" of at that point.
So, we can try setting for each . It remains to check that this specifies a natural transformation.
Picking some , let's see if the naturality square commutes. and will be for . These two functions are the same because restricting a presheaf element from to leaves each of its germs in unchanged.
We conclude that an arbitrary naturality square (of the form pictured above) commutes, so we have indeed found a natural transformation .
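To record the component and the naturality check in symbols (my reconstruction, with $V \subseteq U$ open):
$$ (\eta_F)_U(s) \;=\; \big( x \mapsto [s]_x \big) \in \Gamma\Lambda F(U), \qquad (\eta_F)_V\big( s|_V \big) \;=\; \big( x \mapsto [\,s|_V\,]_x \big) \;=\; (\eta_F)_U(s)\big|_V, $$
since $[\,s|_V\,]_x = [s]_x$ for every $x \in V$.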
I'll stop there for now! But the next part of the puzzle is to show that we can build up a natural transformation in this way.
To zoom out briefly before pressing onward:
We are working towards showing that there is an adjunction between these two functors. An important feature of an adjunction is a "unit". If is left adjoint to , the unit is a natural transformation . Intuitively, a unit tells us something about what happens to an object or morphism of that takes a "round trip" across the adjunction and back.
In our case, we wish to construct a natural transformation:
This will relate each presheaf to its sheafification .
To construct such a natural transformation, we need to say what each of its components are. Any component of is a morphism from some presheaf to its sheafification .
So, for each , we need a natural transformation .
Since is a natural transformation between presheaves on , it has one component for each open set of . Using our work from last time, how should we set ?
We can try this:
Note that and . By I mean the germ of at .
(We recall that an element of is a section with respect to which sends each germ to its associated point in ).
We saw last time that defined in this way is indeed a natural transformation from a presheaf to its sheafification. It remains to show that all the assemble to form a natural transformation .
To show that we get a natural transformation, we need to show that an arbitrary naturality square commutes. Let be an arbitrary morphism in . Our corresponding naturality square is:
naturality square
We want to show that is true. Both sides of this equation are natural transformations. To show that two natural transformations are equal, it suffices to show that each of their components are equal.
Hence, we now aim to show that , where is some arbitrary open set of .
In picture form, we wish to show that this diagram commutes:
naturality square evaluated at U
To show that this diagram commutes, we can trace around an arbitrary element and see what happens to it. We want to show that is true.
I start by aiming to expand the right-hand side of this equation.
We don't know anything in particular about , so we can't further simplify . However, we do know what is like. From above, we have . Each output is a function , which is in fact a section of .
Thus, .
We next turn our attention to the left-hand side of the equation above.
We know what is. It is . Thus, the left-hand side is:
Having re-written each side of the equation we wish to show is true, our new goal is to show this equation is true:
(Here, each side of the equation is a section of over . In particular, each side of the equation is a function .)
To go further with this, I think we need to re-write , so that we can figure out what it does to the input .
To start doing that, let's start by recalling what is.
Scrolling back in time, I found this:
David Egolf said:
Ok, now that I understand this neighbourhood basis, let me see about trying to use it to prove that is continuous. Recall that is going to be a morphism of bundles induced by a morphism of presheaves on . And acts by .
Translating this to the notation I'm currently using, we have:
here is shorthand for where . So we are making use of .
Next, I want to work out what is.
To do this, it will be helpful to recall what outputs given a morphism of bundles. After reviewing an earlier part of this thread, it appears that:
So, is a natural transformation from to . These are both sheaves on . The -th component of this natural transformation is then the function . Here, is a section of . Since , we have that , as desired.
Intuitively, the -th component of this natural transformation takes data described on in terms of local behaviour at each point in , and then transforms each piece of local behaviour (a point in ) to a new piece of local behaviour (a point in ).
In brief, and .
We are now in the position to try and rewrite this expression: . This can now be rewritten as:
We noted above that , where is shorthand for if . Thus, , and so:
We were able to rewrite each side of to the expression . Thus, this diagram commutes at any , and so it commutes:
naturality square evaluated at U
Since was arbitrary, we conclude that this diagram commutes for any open set of .
Thus, is true and this naturality square commutes:
naturality square
Since was an arbitrary natural transformation between presheaves on , we conclude that this diagram commutes for any natural transformation between presheaves on . So, any naturality square for commutes.
Finally, we conclude that is a natural transformation, as desired!
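Summarizing the computation in symbols (my reconstruction of the stripped formulas): for any $\alpha \colon F \to G$, $s \in F(U)$ and $x \in U$,
$$ \big( \Gamma\Lambda(\alpha) \circ \eta_F \big)_U(s)(x) \;=\; \Lambda\alpha\big( [s]_x \big) \;=\; \big[ \alpha_U(s) \big]_x \;=\; (\eta_G)_U\big( \alpha_U(s) \big)(x) \;=\; \big( \eta_G \circ \alpha \big)_U(s)(x), $$
so both sides of the naturality square send $s$ to the same section.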
This means we have a candidate for the unit of an adjunction . Next time, I plan to think about what the counit of such an adjunction could look like!
Great work so far!
Next up, we wish to construct a candidate for the counit of an adjunction . This will be a natural transformation .
To start with, let's pick some bundle . The -th component of will be a morphism of bundles .
@John Baez I believe there is a typo in this part of the blog post:
Then you want to construct a morphism of bundles from your etale space to my original bundle.
I think the in this sentence should be , instead.
We want to define a morphism of bundles . I don't have the energy to figure out how to do that today. But I will quote the relevant part of the blog post to facilitate doing this on another day:
We get points in over from sections of over open sets containing . But you can just take one of these sections and evaluate it at and get a point in .
(Note that the blog post uses the notation where I use the notation .)
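In symbols (my guess at the formula the blog post is describing, for a bundle $p \colon E \to X$): a point of $\Lambda\Gamma E$ over $x$ is the germ $[s]_x$ of a section $s$ of $p$ over some open $U \ni x$, and the counit component just evaluates it,
$$ \epsilon_E \colon \Lambda\Gamma E \to E, \qquad \epsilon_E\big( [s]_x \big) = s(x), $$
which is well defined because two sections with the same germ at $x$ agree on a neighbourhood of $x$, and in particular at $x$ itself.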
David Egolf said:
Then you want to construct a morphism of bundles from your etale space to my original bundle.
I think the in this sentence should be , instead.
I think you're right - I'll fix that. Thanks!
I also fixed another mistake where I used instead of . It's both a blessing and a curse that Greek has two e-like letters.
The real trick is to use and for two related variables.
Having taken a break, I think I'm feeling ready to resume my efforts here!
Upon reflection, the recent discussion relating to the adjunction has felt rather "heavy". I think it will help if I start by summarizing exactly how and work. Then, contemplating should feel a bit easier, I hope!
Good! Ideally when you have a clear mental image of and you can sort of "see" (in a more or less literal sense) what and should be - i.e., how close taking a round trip around between presheaves and bundles comes to getting you back to where you started. To me, seeing what's going on is more important than writing up a proof with lots of symbols, since if I can do the former, I believe I can do the latter when pressed.
(This is after having done 8 years of homework assignments and taught years of courses that kept challenging that belief, in sometimes quite threatening ways. :upside_down:)
I like that perspective @John Baez ! I think I've slowly started to develop some intuition for and , but I have a ways to go still. I'll keep the goal of developing a clearer picture in mind as I work on this.
Here's a summary for :
Great! Just for fun, I'll say: I seem to have mentally compacted a lot of stuff to a little picture of a bundle over (a rectangle sitting over a line), an open set , and a bunch of sections of over (some graphs of continuous functions defined on , drawn in that rectangle I mentioned).
Then there's a fancier picture where I have two bundles over and a bundle map . I can see how it sends sections of over to sections of over .
Drawing the pictures would have taken one tenth the time it takes to describe them!
That's helpful! For a given bundle , it makes sense to draw all the points that sends to as "hovering over" . And that is nicely visualized using the rectangle you describe!
The low dimension of this picture is also nice, because it lets us easily visualize a section. As a slightly fancier version, one could also imagine a rectangular prism floating over a rectangle. But that would be harder to draw!
Yes, it's funny how much advanced work on topology boils down to drawing pathetically simple 2-dimensional pictures and pretending they're higher-dimensional. Our retina is essentially 2-dimensional and we have to live with that.
I made an attempt to draw the second picture you described:
picture
This picture aims to illustrate how we get a section of from a section of , given satisfying . This condition on basically ensures that each arrow describing how maps points is vertical in this picture.
Nice! Yes, this sort of picture is always hovering in my eyeballs as I work with bundles, sheaves and presheaves... guiding me.
I'll stop here for today. Next time, I'd like to do a similar thing for . Namely, I hope to review how it acts on objects and morphisms, and maybe to draw a related picture.
Great!
The trick is to figure out how you want to draw a germ.
I'll start with a summary for :
Now, the challenge is to try and draw a picture illustrating this!
This is much tougher to draw. Do you want to hear how I draw a germ? It uses some 'artistic license', I'm afraid.
John Baez said:
This is much tougher to draw. Do you want to hear how I draw a germ? It uses some 'artistic license', I'm afraid.
Sure! That would be great!
Here's what I've drawn so far:
picture
The horizontal line is our topological space . The region indicated by the large oval is an open set of . Applying to gives us the set , which is represented here as a box floating above .
The squiggly line in that box indicates an element . A germ of at is indicated by a small circle around part of the part of associated to . (Intuitively, this is given in the limit by restricting to smaller and smaller open sets contained in and containing ). Finally the bundle projects this germ back down to .
The space of germs does not appear in this picture, which is somewhat unsatisfying.
However, I like this about the above picture: it illustrates how going from a presheaf to a bundle of germs involves a transition. Namely,
Here's a fancier version of the above picture, aiming to illustrate part of how sends a natural transformation of presheaves to a morphism of bundles:
picture
I've added a second box hovering over , corresponding to the set . Since we have a natural transformation , we have a function . The squiggly line in the box indicates . "Zooming in" on at gives us the germ . Finally, the bundle projects this germ back down to .
These pictures aren't perfect, but I think making them has been helpful for developing some intuition about what's going on!
Next time, I plan to start thinking about the "round trips" and . It would be very cool if we could figure out some pictures to illustrate these round-trip functors!
David Egolf said:
The squiggly line in that box indicates an element . A germ of at is indicated by a small circle around part of the part of associated to . (Intuitively, this is given in the limit by restricting to smaller and smaller open sets contained in and containing ).
Hey, that's more or less how I draw a germ. Morally, the germ of at is like restricted to an 'infinitesimal' open set containing .
David Egolf said:
The space of germs does not appear in this picture, which is somewhat unsatisfying.
Yes, the space of all germs (say of the sheaf of continuous or smooth real-valued functions) is too large to draw except in a completely oversimplified way. I don't see a way around that.
David Egolf said:
These pictures aren't perfect, but I think making them has been helpful for developing some intuition about what's going on!
Good! I find pictures helpful as long as I know their limitations, but maybe I haven't thought enough about how the mere process of trying to draw them makes me think about things in new ways.
Next time, I plan to start thinking about the "round trips" and . It would be very cool if we could figure out some pictures to illustrate these round-trip functors!
One challenge is that the etale space of a sheaf of sections of a bundle is usually huge compared to the original bundle. But you can just draw it as a rectangle labeled 'huge'. :upside_down:
Today, I want to try and draw a picture to illustrate the "round-trip" functor .
The intuition I have for this functor is as follows:
I'll next try to draw a picture that illustrates this process.
Here's the first part of the process, given by applying :
picture
In the top part of the picture, I visualize part of a presheaf on . The left box represents the set of data attached to the open set . Similarly, the right box represents the set of data attached to the open set . The wiggly lines in these boxes represent elements of these sets.
Because is a presheaf, we can zoom in on these wiggly lines to various points, and get germs. Each germ here is represented by a small circle, intending to convey the idea of "zooming in" on a wiggly line at some point. There are many different ways our data can wiggle, and so there are a huge number of germs. I've drawn a big cloud of little circles to try and represent the space of germs very roughly.
The arrows coming down from the top of the diagram to the bottom intend to illustrate how the germs of the pictured wiggly lines would be included in this huge space of germs.
Next, here is a picture for the second half of the process, which involves applying :
picture
sends a bundle to its sheaf of sections. A section in this case involves "flowing through" germ space in a continuous way, picking out a local behaviour at each point of some open set of . Such a section now describes some data attached to an open set of again!
In the picture, I've illustrated how we can "glue together" the two presheaf elements and that we started with. We get , which is an element of the set attached to by our sheaf of sections of .
If already supported "gluing together" of elements that agree on overlaps (to a unique result), then was in fact already a sheaf. But if contained elements agreeing on overlaps that couldn't be glued together, then this "round trip" process will result in a sheaf-version of ! And this will be different than in this case because it contains some additional "glued together" elements.
I'll stop here for today. Next time, I'm hoping to draw a picture illustrating the other round trip, namely
.
That was very nice. You rose to the challenge of drawing countless germs without creating a complete mess!
I'm realizing I don't yet have clear intuition for the round trip functor . To my understanding, this process converts any bundle over to an étale space over . (I will write "etale space" to mean "étale space", for ease of typing).
I think we proved that earlier in the thread, but I would struggle to explain how this happens using a picture.
There's two things I want to do at this point:
So for you start with a bundle over , then form its sheaf of sections, then look all the germs of sections and make these into the points of a whopping big new bundle over .
Any point in the whopping big new bundle gives a point in the original bundle, since a germ of a section at a point gives the point .
I hope I didn't ruin things just now - I usually try to play coy and let you figure out almost everything yourself!
Thanks! I don't fully follow what you said (yet), but I will try to draw a picture of what you just said and see what happens!
John Baez said:
I hope I didn't ruin things just now - I usually try to play coy and let you figure out almost everything yourself!
I do enjoy figuring things out, but in this case a nudge in the right direction is appreciated!
To draw a picture, I need to choose a bundle to start with. I would like to choose a bundle that is not an etale space, so I can see how upgrades it to an etale space.
Now, a bundle is an etale space exactly if is a local homeomorphism. A local homeomorphism is a continuous map so that about each point in the domain there is some open set so that is a homeomorphism (where and are both equipped with the subspace topology).
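In symbols (the letters p : Y → X are just labels I'm choosing here, since the notation above may not display):

$$ p : Y \to X \text{ is a local homeomorphism} \iff \forall\, y \in Y\ \exists\, U \ni y \text{ open such that } p(U) \text{ is open in } X \text{ and } p|_U : U \to p(U) \text{ is a homeomorphism,} $$

where U and p(U) carry the subspace topologies.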
A homeomorphism is in particular an open map: it sends open sets to open sets. The inverse of a homeomorphism is also a homeomorphism, and so it will also send open sets to open sets. Because homeomorphisms are bijections, if we have a homeomorphism , we get an induced bijection of open sets, going between the open sets of and the open sets of .
So, to show a bundle is NOT an etale space, it would suffice to find some point so that any open neighborhood of has "too many" open sets compared to the image of under . To be more precise, it would suffice to show that there is no open set containing such that induces a bijection of open sets.
Here's a picture aiming to illustrate such a situation:
not a local homeomorphism
Here, is a subspace of , equipped with the subspace topology. is the real line, and is the continuous map given by composing the inclusion of to and the projection of down to the -axis.
is an open set containing , where is contained in the "vertical section" of . Notice that has many open neighborhoods in , given for example by various "vertical" open intervals containing . However, the image of under is just a single point. Viewed as a subspace of , this image only has two open sets: the empty set and the set containing the single point . Hence, there can be no bijection between the open sets of and the open sets of .
(And I suspect that this is true in fact for any open set containing : indeed, any such open set contains a small (and hence "vertical") open interval about , which is an open set that gets "collapsed" when passing to the image of ).
EDIT: More simply, restricted to any open set containing will be non-injective. Hence, this restriction of can't be a homeomorphism.
We conclude that is not a local homeomorphism, and so is not an etale space over .
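For concreteness, here is one instance of this kind of situation; the specific subspace below is just an illustrative choice of mine, not necessarily the one in the drawing:

$$ Y = (\mathbb{R} \times \{0\}) \cup (\{0\} \times [0,1]) \subseteq \mathbb{R}^2, \qquad p(a,b) = a, \qquad y = (0, \tfrac{1}{2}). $$

Any sufficiently small open neighborhood of y in Y lies entirely in the vertical segment, and p sends that whole segment to the single point 0, so p is not injective on any such neighborhood, and hence not a local homeomorphism at y.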
Next time, I'm hoping to draw a picture illustrating the process of applying to this bundle . I'm curious to see how the resulting sheaf of sections will reflect the fact that is not a local homeomorphism!
After writing the above, I realized that the "local non-injectivity" of is what stops from being a local homeomorphism. With that in mind, I think this bundle is also not an etale space:
picture
The idea is that is some space that "branches" at some point. If we pick right at the branching point, then any open neighborhood of will contain points from both the "upper" and "lower" branches. And hence the projection won't be injective even when restricted to really small neighborhoods of , like the picture . Thus, this can't be a local homeomorphism.
Interesting. So maybe the following holds: if the bundle is étale, then all fibers must be in bijection with one another.
I'll think about it.
edit: a first observation is that we must assume to be connected
David Egolf said:
After writing the above, I realized that the "local non-injectivity" of is what stops from being a local homeomorphism. With that in mind, I think this bundle is also not an etale space: picture.
That's right! Most bundles that you can actually draw are not etale spaces! For example the bundle I always draw when someone asks me to draw a bundle:
is not an etale space.
The only etale spaces I can easily draw are the 'covering spaces', where every point has a neighborhood where for some discrete space :
Peva Blanchard said:
Interesting. So maybe the following holds: if the bundle is étale, then all fibers must be in bijection with one another.
This is certainly true if is connected and is a covering space. So the challenge is to think about etale spaces that aren't covering spaces.
Hmm, now I see that some such etale spaces are quite easy to draw, like this: is the real line, and is the open interval , and is the inclusion of in the line.
John Baez said:
Hmm, now I see that some such etale spaces are quite easy to draw, like this: is the real line, and is the open interval , and is the inclusion of in the line.
Yes! Looking on Wikipedia, I see that if is an open subset of , then the inclusion is a local homeomorphism, provided that is equipped with the subspace topology. (So in this case is an etale space over ).
That same article also notes that if is an open subset of , then these two conditions on a continuous map are equivalent:
(I assume that is again to be equipped with the subspace topology).
Thanks! I peeked at the Wikipedia page and I see that to prove the second condition implies they use a substantial theorem in algebraic topology, called invariance of domain, proved by Brouwer. This is one of the results they dole out in a first course on homology theory, to prove you can use it to settle questions that aren't obviously about homology theory.
Peva Blanchard said:
Interesting. So maybe the following holds: if the bundle is étale, then all fibers must be in bijection with one another.
I'll think about it.
edit: a first observation is that we must assume to be connected
For this condition on fibers to hold, we can also note that we need to be surjective, at least if is non-empty. Otherwise, we'll have at least one non-empty fiber and at least one empty fiber.
(In the case mentioned above by @John Baez, a covering space on a connected space is always surjective).
Your parenthetical claim actually doesn't follow from Wikipedia's definition of covering space. According to that definition can have empty fibers, e.g. we can have , because the "discrete space" mentioned in that definition can be empty.
It's good to allow empty fibers in that definition since ruling out the empty set by fiat tends to produce categories with worse properties: e.g. the empty covering space of is the initial object in the category of covering spaces of .
However, people are vastly more interested in covering spaces where the fibers are nonempty, and then is surjective.
Wikipedia says "some authors" require surjectivity in the case where is not connected. Those authors should probably require it even when is connected, since they obviously don't like covering spaces with empty fibers!
Grumble. Reminds me of an old-fashioned professor I TA'd linear algebra for who wasn't quite convinced that the zero vector space has a basis.
Born back before they discovered the empty set.
John Baez said:
Your parenthetical claim actually doesn't follow from Wikipedia's definition of covering space. According to that definition can have empty fibers, e.g. we can have , because the "discrete space" mentioned in that definition can be empty.
I admit I didn't try to prove this myself! I just read this in that Wikipedia article:
If is connected, it can be shown that is surjective...
Is that claim in the article incorrect? (As far as I can tell, that Wikipedia article doesn't actually try to prove this claim.)
Take the Wikipedia definition of 'covering space' and see if is a covering space when and is the unique map to . If this is a covering space by their definition, then it can't be true that
If is connected, it can be shown that is surjective...
This would then be a good time to start your career of correcting Wikipedia pages. :upside_down:
But it's possible I didn't read their definition carefully enough, and for some reason it rules out the case .
Alright, let me see what happens when we consider when is empty! (Time to put Wikipedia to the test!)
Adjusted for our notation, Wikipedia's definition reads as follows:
Let be a topological space. A covering of is a continuous map such that for every there exists an open neighborhood of and a discrete space such that and is a homeomorphism for every .
In the next sentence the article elaborates on this, and indicates that each is to be open (presumably as a subset of ).
Alright. Now, consider the case where is empty. Then each is also empty. Let us assume that is non-empty, and let be a non-empty open neighborhood of . For to be a covering, we need to be a homeomorphism for each . However, in this case any is mapping from an empty set to a non-empty set .
EDIT:
I think the following is wrong, but I leave it here for context:
[Hence can't be a homeomorphism, and can't be a covering. We conclude that Wikipedia's definition of a covering excludes from being empty, at least when is non-empty.]
Wait a second! I think I missed something possibly important.
Strictly speaking, we are not required to have a homeomorphism . Instead, it is only required that for every we have a homeomorphism . So, if is empty, this condition can still hold trivially!
Let us consider . Since is empty, is also empty. We want to write the empty space as a coproduct where is empty. What is the empty coproduct in ?
I expect that the empty coproduct is the colimit of the empty diagram, which is the initial object of . So, the empty coproduct should be the empty space.
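Spelling out the vacuous case in symbols (with p : ∅ → X the unique map and D = ∅ the empty discrete space):

$$ p^{-1}(U) = \varnothing = \coprod_{d \in \varnothing} V_d, $$

and the condition "p restricted to each V_d is a homeomorphism onto U" holds vacuously, since there are no d to check.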
David Egolf said:
Strictly speaking, we are not required to have a homeomorphism . Instead, it is only required that for every we have a homeomorphism . So, if is empty, this condition can still hold trivially!
Yes, that's how the empty set tricks people: a lot of things are vacuously true about it.
David Egolf said:
I expect that the empty coproduct is the colimit of the empty diagram, which is the initial object of . So, the empty coproduct should be the empty space.
Right. Maybe you can see why the editors of this page, who are probably not category theorists, slipped up around here.
So, it seems that can indeed be empty in Wikipedia's definition of a covering . To summarize the case when is empty and is non-empty:
So I indeed stand corrected! When using Wikipedia's definition of a covering, one can have a non-surjective covering even when is connected. The empty map provides such a covering when is empty and is connected.
(Nothing like a bit of fun with the empty set to start out the day :sweat_smile: !)
Before wrapping up for today, I wanted to draw a picture. Specifically, I want to start thinking about the sheaf of sections of this bundle :
bundle
I'm quite curious to see how the local non-injectivity of at gets removed as we apply !
Intuitively, the local non-injectivity of this map comes from the following fact: the two "branches" to the right of have points arbitrarily close to . I am guessing that by applying the above process we will end up with a bunch of germs associated to , but where each of these germs will only be "really close" to some germs associated to one of the two branches.
Here's a picture illustrating two sections of , namely and :
picture
Each of and is a section of over the open set . So, , where is the sheaf of sections of .
Notice that the section goes along the upper branch, while goes along the lower branch. I strongly suspect that the germ of at will be different from that of the germ of at . Intuitively that would reflect the fact that each section is passing through in a certain way!
If this intuition is right, we can begin to see the single point split into multiple germs at that point, including the germs I just described and . I suspect that is close to germs from near - for example, germs of on the upper branch. And similarly I suspect is close to germs from near - for example germs of on the lower branch.
Intuitively, we are getting multiple germs associated to . And I suspect that each of these germs is only arbitrarily near to germs from a section that flows along on one of the two branches splitting off from . So we can perhaps begin to see how our bundle of germs of sections of could be locally injective!
(I'll stop here for today)
David Egolf said:
So I indeed stand corrected! When using Wikipedia's definition of a covering, one can have a non-surjective covering even when is connected.
Okay, so we see eye to eye. Wikipedia's definition looks correct to me, and with this definition a covering space is surjective if is nonempty and is connected.
David Egolf said:
Notice that the section goes along the upper branch, while goes along the lower branch. I strongly suspect that the germ of at will be different from that of the germ of at . Intuitively that would reflect the fact that each section is passing through in a certain way!
Yes, this and all your other inuitions are right! :tada:
To prove this particular fact, note (or show) that two sections of a bundle, say and , give two elements of its sheaf of sections, and these two elements have the same germ at a point iff and become equal when restricted to some open neighborhood of . But in your picture and are not equal on any open neighborhood of .
Peva Blanchard said:
Maybe the following holds: if the bundle is étale, then all fibers must be in bijection with one another.
I've been thinking about this. I did not manage to prove that all fibers must be in bijection with one another yet, but I made a small step: when the bundle is étale, then every fiber is discrete (w.r.t. the subspace topology).
Indeed, let be a point in the fiber over . Since is étale, there exists an open neighborhood of s.t. is a homeomorphism. By definition of the subspace topology, is open in . But being a homeomorphism implies that . Hence, every singleton in is open. I.e., is discrete.
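In explicit notation (writing the étale bundle as p : Y → X and taking y ∈ p⁻¹(x), just as labels for the objects above):

$$ \exists\, V \ni y \text{ open with } p|_V : V \to p(V) \text{ a homeomorphism} \;\Longrightarrow\; V \cap p^{-1}(x) = \{y\}, $$

since p|_V is injective and sends every point of V ∩ p⁻¹(x) to x. So {y} is open in the subspace topology on the fiber, i.e. the fiber is discrete.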
This makes the "covering space" mental picture particularly hard not to see.
Peva Blanchard said:
Peva Blanchard said:
Maybe the following holds: if the bundle is étale, then all fibers must be in bijection with one another.
I've been thinking about this. I did not manage to prove that all fibers must be in bijection with one another yet...
Good! It's not true!
A while ago in this thread I mentioned a simple counterexample that works even when is connected. (I didn't say it's a counterexample, but it clearly is.)
You can also find a counterexample that's a covering space when is not connected.
But yes, I like your proof that the fibers of an etale space are discrete.
Oh indeed! I've been unknowingly assuming that every fiber was nonempty. The inclusion provides a counterexample. From there, we can build other counterexamples with fibers of arbitrarily different sizes.
E.g., with defined as the second projection on every term of the disjoint sum, and the 's being arbitrary discrete spaces.
I should probably reformulate my original goal then. "If is an étale bundle, with connected, and surjective, then all fibers are in bijection with one another". I'll think about it.
Alas, that conjecture is false too - and I think with your ability to create etale spaces with fibers of different sizes, it should be easy to disprove.
Maybe you should try to prove that if is a covering space, and is connected, then all fibers are in bijection with one another.
Etale spaces seem too flexible for a good result of this sort unless we essentially require that they're covering spaces.
However, I now see, hovering before my eyes, an etale space where is connected and all fibers are in bijection with one another, which is not a covering space.
Peva Blanchard said:
...when the bundle is étale, then every fiber is discrete (w.r.t. the subspace topology).
I think this result provides some good intuition! I saw somewhere an analogy between an etale space and a pastry having many thin layers. I think one could view this result as saying that each point in a given fibre is "apart" from the other points in that fibre. And this could be viewed as saying that each point in any given fibre is in a separate "layer" from the others in that fibre.
Oh yes. Actually, if is any collection of open subsets of , then we can form the coproduct
with the obvious projection . It is then an étale bundle.
When is the set of all open subsets of , it looks like the corresponding bundle is initial among all the étale spaces over .
Anyway, this gives indeed a lot of flexibility.
David Egolf said:
Peva Blanchard said:
...when the bundle is étale, then every fiber is discrete (w.r.t. the subspace topology).
I saw somewhere an analogy between an etale space and a layered pastry (one with many thin layers).
I had the exact same picture in mind! It's called "mille-feuille" in french. And it's quite crispy. The only difference is that étale spaces seem to lack cream. Étale spaces taste very dry.
Peva Blanchard said:
Oh yes. Actually, if is any collection of open subsets of , then we can form the coproduct
with the obvious projection . It is then an étale bundle.
When is the set of all open subsets of , it looks like the corresponding bundle is initial among all the étale spaces over .
Anyway, this gives indeed a lot of flexibility.
If I understand correctly, in particular this lets us have layers that overlap in terms of their projection:
picture
The projection map is not injective, but it is locally injective. Some fibres have two elements, and some fibres have a single element. So, the fibres aren't in bijection with one another, even though the projection is surjective and is connected.
Yes, that's the perfect example of that phenomenon!
John Baez said:
Maybe you should try to prove that if is a covering space, and is connected, then all fibers are in bijection with one another.
Let's try to prove this.
Fix a point in the base. There is an open neighborhood of such that the pre-image is homeomorphic to where is the fiber over . In particular, for every , the fiber is in bijection with .
This suggests considering
We want to prove that . Since is connected and is not empty, it suffices to show that is both open and closed.
Clearly, . For any , there exists a neighborhood of such that . Then, . This proves that is a neighborhood of each of its points. I.e., is open.
Let , i.e., . Using the defining property of the covering, there exists a neighborhood of such that . In particular, for every , . Thus, . I.e., is open, and is closed. qed.
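The same argument with explicit labels (covering p : Y → X, basepoint x₀ ∈ X, fiber F = p⁻¹(x₀)):

$$ A = \{\, x \in X \;:\; p^{-1}(x) \cong F \ \text{as sets} \,\}. $$

Every point of X has an evenly covered neighborhood U with p⁻¹(U) ≅ U × D for some discrete D, so all fibers over U are in bijection with D; hence either U ⊆ A or U ⊆ X ∖ A. So A is open, closed, and nonempty (it contains x₀), and connectedness of X gives A = X.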
Great! I wasn't sure how to prove it, I just knew it was true. :upside_down: But this looks like the best way to prove it.
So, I think the idea of "all fibers being in bijection with one another" is not really something we should expect of etale spaces, unless they are covering spaces of connected spaces.
But here's the example I was imagining of an etale space where all fibers are in bijection with each other even though it's not a covering space!
Start with the map that projects onto the first coordinate:
Then let be the subset that's the union of all horizontal lines
where , together with the open line segment
Restricting to we get a map I'll abusively call . This is an etale space over a connected space, and all the fibers are in bijection with each other (since they're all countably infinite), but it's not a covering space.
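Concretely, one instance of this kind of example (the specific heights and segment below are just one possible choice):

$$ E = \{\, (x, n) \;:\; x \in \mathbb{R},\ n \in \{1, 2, 3, \dots\} \,\} \;\cup\; \bigl( (0,1) \times \{0\} \bigr) \subseteq \mathbb{R}^2, \qquad p(x, t) = x. $$

Each horizontal piece is open in E and maps homeomorphically to its image, so p is a local homeomorphism, and every fiber is countably infinite; but no neighborhood of x = 0 is evenly covered, since the component coming from the segment only sits over part of any such neighborhood.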
Another example would be to take two copies of and quotient them together on . This gives a space which has fibers of cardinality above and fibers of cardinality above . So if we add a disjoint copy of then the fibers will have cardinality everywhere.
Thanks - that's a much more exciting example, because it doesn't use the dirty trick .
That’s the kind of thing I was thinking of, but then gave up because it isn’t a local homeomorphism at the branch point.
I suspect you can’t actually get an example of a local homeomorphism with fibers of constant finite size over a connected base that isn’t a covering.
Kevin Carlson said:
That’s the kind of thing I was thinking of, but then gave up because it isn’t a local homeomorphism at the branch point.
What if we define a topology such that the branch point y has open neighborhoods U containing just some of the lower branch at left (together with some of the stuff at right) and open neighborhoods V containing just some of the upper branch at left (together with some of the stuff at right)? Then their intersection needs to be open, and it doesn't 'look' open, but let's accept that.
A function p : Y → X between two topological spaces is called a local homeomorphism if every y ∈ Y has an open neighborhood U whose image p(U) is open in X and the restriction p|U : U → p(U) is a homeomorphism.
I was wondering about that too but if you intersect two of those neighborhoods you see that there’s an open subset to the right of the branch point that looks like a half-open interval. In other words near the branch point I think the space we’ve specified is just a disjoint union of two open intervals and a half open interval, and so again this isn’t actually a local homeomorphism!
If you allow things continuing into just the upper left to be open but not just the lower left, then maybe…
Oh, eek, Oscar hasn’t specified a branch point in the way I thought since he glued over not !
Whew, this is confusing.
Yeah, I’ve never thought about the line-with-one-and-a-half origins before
Ok now I think Oscar’s example is actually fine.
Okay, I hadn't really understood what Oscar's example actually was. I thought there was one point where the two branches merge, but there are two, and that saves the day, apparently.
Though of course that's necessary to accomplish what we're looking for: the same number of points in every fiber!
Kevin Carlson said:
Oh, eek, Oscar hasn’t specified a branch point in the way I thought since he glued over not !
Right. The way I think about it is that bundles over can split , merge , begin or end . But whichever way they go the bit with above it has to be an open set. That's why I quotiented by above, leaving two origins.
Yeah, it’s a nice (and weird) example.
Here's another way to think about this. Start with a bundle like this:
This is the bundle Kevin and I originally thought Oscar was talking about: the fiber over the arrow has just one point, while each fiber to the left of that has two, and each fiber to the right has one.
Then apply the functor that @David Egolf has been investigating. So: take the sheaf of sections of our bundle , and then form the bundle of germs of that sheaf. This new bundle is an etale space!
The sheaf of sections of the original bundle has two different germs at the arrow, and two at each point to the left, and one at each point to the right.
So our etale space has two points above the arrow, while our original bundle had just one!
The counit must collapse those two points down to one.
So, it maps what Oscar was trying to talk about, down to what Kevin and I originally thought he was talking about.
Just to be sure that I understand the example correctly, especially the reason why it is not a covering space, I'm drawing it like this
image.png
If it were a covering, then the portion of the total space enclosed in the two yellow dashed lines would be homeomorphic to .
From the picture, we can see that it does not look like two disjoint copies of . But I have trouble formulating a precise argument that rules out the existence of a homeomorphism to .
I thought about connectedness, but if I'm not mistaken the white part is connected, hence has 2 connected components. So the invariant "number of connected components" is not enough to distinguish and .
Just looking at the number of connected components isn't enough, but you can use the fact that there are points in the base space for which both points above them are in the same component. That can't happen with .
Ah yes, I see, thank you! My mistake was in thinking of as a homeomorphism between topological spaces, whereas it should be an isomorphism of bundles over .
Or, more precisely: as a mere topological space, is indeed homeomorphic to . But they are not isomorphic as bundles over , thanks to your argument.
They're not isomorphic as bundles over . But I don't think they are homeomorphic either. It's just harder to write down an invariant that proves that they're not homeomorphic.
Oh wait, one is Hausdorff and the other one isn't. That's a simple invariant.
Oh yes, the two origins cannot be separated in your example. I was wrong, and are not even homeomorphic as mere topological spaces
That's true. But the really important lesson here is that given bundles , , they are only isomorphic if there's a homeomorphism with . That last equation makes isomorphism of bundles much more than mere homeomorphism of their total spaces.
This means that there needs to be a subject of "invariants of bundles" which goes beyond the subject of "invariants of topological spaces". Algebraic topology provides lots of both. For vector bundles, the most famous invariants are the [[Chern classes]] (for complex vector bundles) and [[Pontryagin classes]] and [[Stiefel-Whitney classes]] (for real vector bundles). All these can be defined using [[classifying spaces]], which we've been talking about in another thread.
There are also "invariants of sheaves", which are especially well developed for sheaves of vector spaces - sheaves where the set attached to any open set is actually a vector space, and the restriction maps are linear. (Or, in algebraic geometry, sheaves of modules of the so-called "structure sheaf" - probably not worth explaining here. Grothendieck was especially involved in studying these, and his studies of such sheaves eventually led him to topos theory.)
John Baez said:
So our etale space has two points above the arrow, while our original bundle had just one!
John Baez said:
The counit must collapse those two points down to one.
This is helping me understand why our counit natural transformation needs to go from to and not from to . If we focus on a single point "hovering" in some bundle, and then apply , we get out the germs of sections that go through that point. It's very natural to define a function that collapses each of these germs back to the original point. By contrast, I don't see a nice way to define a function that would send our original point to some particular germ of a section that goes through that point. (I don't see a nice way to pick some distinguished germ that is most deserving of being mapped to by our original point).
Indeed, I doubt there's a way to pick a distinguished germ in general. It could be good to try to prove this by showing there exists no natural transformation . I think one can prove this by considering a single bundle that has lots of automorphisms, like the bundle
Each automorphism gives a naturality square that needs to commute, and I think one can show they can't all commute, no matter how one chooses .
I feel more comfortable now with and , thanks to the discussion and picture-drawing above. Building on this understanding, I now want to return to the counit for our adjunction .
We are looking for a natural transformation . Given a particular bundle , is its sheaf of sections, and then is the bundle of germs of that sheaf of sections. Thinking of our bundle as some geometry "hovering over" , then the sections are ways to "travel through" parts of that geometry, and we can get multiple germs at some point if there are sections with different germs passing through that point.
Given this intuition, I am hoping we can define a morphism of bundles that sends each germ of a section associated to some point back to . Intuitively we are "collapsing" the cloud of germs associated to a point back to that point.
Let be a section of over the open set , having germ at . Then we want to send to .
The first order of business is to check that really is a morphism of bundles. I will denote the corresponding function as , where is the space of germs of the sheaf .
This function needs to preserve fibers and it also needs to be continuous. By "preserving fibers" I mean that it maps any data hovering over to data hovering over , for any .
First, let's check that preserves fibers. We have for any germ of a section . But both and "hover over" in their respective bundles, so does preserve fibers.
I next want to show that is continuous. We know that the projection of germs to their "base" point is continuous. And we know that is continuous, for any section . I'm hoping to somehow use these facts to prove that is continuous.
Let , where is a section of over the open set . I'd like to find an open set containing such that the projection of the germs in that open set all land in .
I seem to recall that the set of all the germs of over is an open set containing in .
I'll use to refer to this open set containing . Then, we get a restriction of our projection (for germs of sections) as . This is a continuous function, because restricting a continuous function in this way always yields a continuous function.
Then, acts by . And this function is continuous, because it is the composite of continuous functions.
Further, we note that this function is the restriction of to .
Since each germ is the germ of some section for some open set , we can perform a similar procedure at each point in . The various form an open cover for , and our restriction of to each of these open sets is continuous.
We conclude that is continuous, because we can always "glue together" continuous functions that agree on overlaps to make a continuous function.
I hope I did that right! :sweat_smile: I'll stop here for today, at any rate.
Oh, one last thing!
I just realized I didn't check that is really a function. We need to show that if then . But if that implies that and are equal on some small enough open neighborhood of . In particular, they are equal when evaluated at . So, really is a function.
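To summarize the component in symbols (I'll write Γ for the sheaf-of-sections functor, Λ for the bundle-of-germs functor, and p : Y → X for the bundle, as labels for the things above):

$$ \epsilon_Y : \Lambda\Gamma(Y) \to Y, \qquad \epsilon_Y(\mathrm{germ}_x s) = s(x), $$

which is well defined because equality of germs at x means the sections agree on some open neighborhood of x, and in particular agree at x itself.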
Yes, all this looks perfect! Congrats!
Awesome! :big_smile:
To wrap up this puzzle, it remains to show that our components assemble to form a natural transformation .
To check this, we consider a naturality square associated to a morphism in :
naturality square
This diagram lives in , so each of the morphisms here is a morphism of bundles. To show that this diagram commutes, it suffices to show that the corresponding functions commute. So, we consider this square of topological spaces and continuous functions:
square
Now, two functions with the same source and target are equal iff they agree when evaluated at any element. So, let's trace an element of around this square, and see what we get via the top right path as compared to the bottom left path.
Our space is the space of germs of sections of the bundle . So, an element of this space is of the form , which refers to the germ of some section of at the point , for some open set .
Going around the bottom left path, , where we have used the definition of discussed above.
Going around the top right path, I need to recall how acts on a morphism in .
First, converts this morphism of bundles to a natural transformation between sheaves. This natural transformation at component (where is an open subset of ) sends a section of to a section of by post-composition with . That is, a section gets mapped to a section of given by . So, the -th component function acts by post-composition with .
Then we apply , which needs to take this natural transformation and produce a morphism of bundles. The bundles in question are specifically bundles of germs of sections. Given a germ , with a section over , we want to get a germ "hovering over" in . We do this by applying our -th component function to , to get the germ .
We can now trace around the top-right path in our square. We get . We conclude that the square commutes, and so our original square of bundle morphisms also commutes.
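In the same notation, for a morphism of bundles f : Y → Y' over X and a section s of Y over an open set U containing x, the two paths around the square agree on a typical germ:

$$ \epsilon_{Y'}\bigl( \Lambda\Gamma(f)(\mathrm{germ}_x s) \bigr) = \epsilon_{Y'}\bigl( \mathrm{germ}_x (f \circ s) \bigr) = (f \circ s)(x) = f(s(x)) = f\bigl( \epsilon_Y(\mathrm{germ}_x s) \bigr). $$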
Thus, an arbitrary naturality square for commutes, and so is indeed a natural transformation!
I want to start on Part 4 of the blog post series today! (My motivation to work through this series remains fairly high; it's just a matter of finding the energy to do so.)
The goal in Part 4 is to learn how to "pull back" sheaves along a continuous map. First, we review how to push them forward along a continuous map.
Given a continuous map we get an induced (preimage) map from the open sets of to the open sets of . This in turn induces a functor from to , which we'll call . By precomposing with we can start with a presheaf on and end up with a presheaf on .
The resulting presheaf is the "pushforward" of our original presheaf along , denoted , so that we have . This process can be extended to morphisms in a functorial way, so we end up with a functor from presheaves on to presheaves on . In fact, this also gives a functor from sheaves on to sheaves on .
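In symbols (a sketch, writing f : X → Y for the continuous map and F for a presheaf on X):

$$ (f_* F)(U) = F\bigl( f^{-1}(U) \bigr) \quad \text{for open } U \subseteq Y, $$

with the restriction map (f_*F)(U) → (f_*F)(V) for V ⊆ U given by the restriction map of F along f⁻¹(V) ⊆ f⁻¹(U).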
Now we want to go in the opposite direction: from sheaves on to sheaves on , given a continuous function . As a mini-challenge to myself, I'm going to see if I can guess how we might do this before the blog post gives the answer...
A presheaf on , such as , intuitively attaches some information to each open set of . However, we've seen before that we can associate information to each point of using a presheaf on . Namely, we can form a full subcategory of by including exactly the open sets that contain some point of interest, apply to that subcategory to get a diagram in , and then take the colimit of that diagram in . In this way, we can associate a set to each point of using .
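In symbols (writing F for the presheaf and x for the point of interest):

$$ F_x = \operatorname*{colim}_{U \ni x,\ U \text{ open}} F(U), $$

the colimit of F over the open sets containing x, taken along the restriction maps F(U) → F(V) for V ⊆ U; its elements are the germs of F at x.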
Nice! For those who haven't read all > 1000 comments on this thread, you're now alluding to how any presheaf over a topological space has a 'stalk' at each point of that space, the stalk being the set of 'germs' of the presheaf at that point.
Yes! We saw earlier that this information can be organized as a "bundle" on ; as a continuous function to . Specifically, we get a continuous function where the set (the set of germs, the "stalk") associated to is given as .
Now, I want to associate a set to each point of . I notice that applying to a point of gives me a point of , and so I could associate to the set which is already associated (using ) to .
What I really want is to produce a bundle on . (That's because I could then convert that bundle on to a presheaf on !) To do that, I think we can use a pullback:
getting a bundle on X
Since this diagram lives in [edit: this is wrong - see discussion below], we have some hope to explicitly compute this pullback. It should be the subset of consisting of pairs such that
What set are we attaching to some point ? This is the set of pairs such that . So, we are indeed attaching to the data attached by to . That matches our intuitive guess from above!
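Written out (with f : X → Y the continuous map and p : E → Y the bundle, as labels for the maps in the diagram):

$$ f^* E = \{\, (x, e) \in X \times E \;:\; f(x) = p(e) \,\}, \qquad \text{with fiber over } x \text{ equal to } \{x\} \times p^{-1}(f(x)) \cong p^{-1}(f(x)). $$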
We can then convert this bundle on to a presheaf on , by taking the presheaf of sections of . So we obtain a presheaf on from a presheaf on , as was our goal!
Ah, a correction - the diagram I drew above lives in , the category of topological spaces. So that complicates things a little bit.
This strategy sounds great! Maybe we can polish it up a bit. There are ways to turn presheaves into bundles and bundles back into presheaves. These processes are adjoint functors. But even better, I think we've seen there's an equivalence between the really nice presheaves on a topological space , namely the sheaves, and the really nice bundles over , namely the etale spaces.
So given a map of spaces we can take a sheaf on , convert it to an etale space over , pull it back to , and convert it back to a sheaf. This process exists since we can pull back a bundle and get a bundle. But if we can pull back an etale space and get an etale space, this process will be even nicer, since we'll never be leaving the world of etale spaces and sheaves, which are just two equivalent ways of talking about the same thing.
Oh that's a great point! We want to not only pull back presheaves to get presheaves - we want to pull back sheaves to get sheaves. So it will be even better if we can show that not only does pulling back a bundle give a bundle, but pulling back an etale space (which closely relates to a sheaf) gives us an etale space.
I still think it's a reasonable first step to finish showing that we can pull back a bundle to get a bundle. That basically amounts to showing that has pullbacks.
Definitely that's the right first step, especially since etale spaces are bundles with a mere extra property - so you can then go ahead and see whether pulling back bundles preserves that property.
I know that the forgetful functor is a right adjoint, and hence it preserves limits. So if has a pullback of some diagram, its underlying set and functions are given by the corresponding pullback of the diagram's image in . That immediately gives us a candidate for the underlying set and functions of a pullback in .
However, we still need to figure out a topology to put on this pullback.
I can't resist giving a hint (which you probably already know): in Set, a pullback is a subset of a product.
Oh, that's a helpful hint! I'll draw a diagram to illustrate the general situation:
pullback in Top
In , we have that is a subset of . So in , we could put the subspace topology on . That topology is the coarsest one so that the inclusion is continuous.
We want to show that for any other cone over the diagram, we have a unique continuous function that makes this diagram commute:
universal property
If this diagram is to commute in its image under must certainly commute. This determines uniquely by the universal property of pullbacks in .
Specifically, we must have and for any . So, for any . It remains to show that this function is continuous.
Using the functions and , and the fact that products exist in , we get a corresponding continuous function that acts by . The function is then given by co-restricting this induced function.
I seem to recall that if one has a continuous function with and one co-restricts to get a function where has the subspace topology, then this co-restricted function is continuous. If that's true, then is continuous, as desired.
(Along the way, I assumed that has products. I'm content to assume this for now, unless someone thinks it would be good at this point to prove.)
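To summarize, here is a sketch of the pullback of a cospan f : A → C ← B : g in Top (the letters are just labels I'm choosing):

$$ A \times_C B = \{\, (a, b) \in A \times B \;:\; f(a) = g(b) \,\} \subseteq A \times B \quad \text{with the subspace topology.} $$

Given any cone (h : W → A, k : W → B) with f ∘ h = g ∘ k, the unique mediating map is w ↦ (h(w), k(w)), which is continuous because it is the corestriction of the continuous map (h, k) : W → A × B to this subspace.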
Since has pullbacks, in particular we can pull back a bundle to get another bundle. That ability, combined with the fact that we can convert between bundles and presheaves, gives us the ability to pull back a presheaf on along to get a presheaf on .
We next want to show that we can pull back sheaves to get not just a presheaf but a sheaf. To do that, it suffices to show that pulling back an etale space gives us an etale space (because we can convert between etale spaces and sheaves).
An etale space amounts to a local homeomorphism . Recalling the definition of local homeomorphism, is a continuous map such that each point of has an open neighborhood such that is open and the restriction is a homeomorphism, where and are equipped with the subspace topology.
I want to show that pulling back an etale space along a continuous map gives an etale space . We've already seen that the pulled back map is continuous when is equipped with the subspace topology induced by , but it remains to show that is a local homeomorphism.
It might help to draw a picture to visualize the pulling back of an etale space. But I'll leave that for next time.
More broadly, it feels good to be back working on this again!
David Egolf said:
I seem to recall that if one has a continuous function with and one co-restricts to get a function where has the subspace topology, then this co-restricted function is continuous.
Yes! And it's even better than that. If we give the subspace topology, a function is continuous if and only if it's the corestriction of some function whose image lies in .
So to get a pullback in we just take the pullback of the underlying diagram in and give the resulting set the subspace topology coming from the product space (as you explained).
I'm glad you're back in action.
John Baez said:
David Egolf said:
Yes! And it's even better than that. If we give the subspace topology, a function is continuous if and only if it's the corestriction of some function whose image lies in .
Ah, this rings a bell! I think you're mentioning what I've seen called the "characteristic property" of the subspace topology.
I next want to show that pulling back a local homeomorphism in along any continuous function gives us a local homeomorphism. If we can do this, that'll mean that we can pull back a sheaf to get a sheaf.
This appears to be another example of something I've noticed earlier: learning this stuff has involved more topology than I expected :sweat_smile:! But the topology involved is good to learn too, so I don't mind too much.
Here's the situation:
pullback in Top
I'll assume that is a local homeomorphism, and I want to show that is then a local homeomorphism.
A function is a local homeomorphism if these two conditions are met:
Let's show that meets condition (1).
We recall that has as points the pairs such that , and that projects down to the first coordinate. Thus, is the subset of consisting of points such that there is some with .
That is, is the subset of that is mapped by to somewhere in the image of . This is the preimage under of .
Since is a local homeomorphism, is open. And since is continuous, its preimage of is also open. Since this is exactly the image of , we conclude that is open in .
It remains to show that meets condition (2).
Given some point , we need to show there's some open neighbourhood containing that point so that restricts to a local homeomorphism on .
Since is a local homeomorphism, we know there is some open set containing so that restricts to a local homeomorphism on . Maybe we can use to create our open set containing ?
I feel like I need to draw a picture to help guide this process.
pulling back a local homeomorphism
Referencing the picture, if we have some around where restricts to a local homeomorphism, we can try forming the analogous open set around . This still feels tricky.
I think I need to "spread out" in the direction as well as the direction. We could try doing this by taking .
Drawing a picture to illustrate that:
picture
For brevity, let . What are the points "hovering over" in near ?
To specify a point in we need to specify an element of and an element of . The coordinates in our subset of interest intuitively should all belong to . So let . What is the corresponding element of ? Presumably it should be .
However, the problem with this is that is not necessarily injective, so could have more than one point.
But that's okay, maybe. We can consider points of the form where and .
Is such a point an element of ? We just need , which is true. So, such a point is indeed an element of .
Next, I would want to show that the collection of such points forms an open subset of . I'm not sure how to do that.
Maybe there is a simpler way to do this. We know that the projection on the second coordinate is continuous. Pick some point around which we wish to find an open neighbourhood, such that is a homeomorphism when restricted to that neighbourhood.
Then has some open set where restricts to a homeomorphism. Since is continuous, is an open subset of .
What is like? It consists of points such that and .
Actually, I think is exactly the set I had arrived at by considering the pictures above! That's pretty cool! And now we know that set is an open subset of !
Now I think we are in business. To quickly recap:
picture
We let be a local homeomorphism and we want to show that this implies that is a local homeomorphism too. We already saw that the image of is open, and it remains to show that for any point there exists an open set containing such that restricts to a homeomorphism on .
After some thought, we have arrived at a strategy for showing there exists such a . Given , we know that has an open set containing it such that restricts to a homeomorphism on . We then take the preimage of with respect to to obtain an open subset of containing .
It remains to show that restricts to a homeomorphism on .
Since is continuous, its restriction to is continuous. It remains to show that (1) its restriction to is bijective and (2) its inverse as a function is also continuous.
A point in is of the form where and . Our map returns the first coordinate. is certainly surjective onto its image, but we still need to show that is injective. That amounts to showing that if and are both in , then .
If and are both in , that implies that . But note that , where restricts to a homeomorphism - and in particular where restricts to an injective function. Thus, as desired. So, is injective when restricted to .
We conclude that restricts to a continuous bijection on . It remains to show that the inverse (as a function) of this restricted function is also continuous.
Let's call the function that is an inverse (as a function) to by the name . So . This sends an to the pair where is the unique such that .
We recall that restricts to a homeomorphism on . In particular it has a continuous inverse . So we can compute our map which sends by using these two functions:
(Here, is the inclusion map , refers to a restricted and corestricted version of , and is the inclusion map ).
Each of these two functions is continuous, and thus the induced map to is continuous. And, the corestrictions of this map to and to are both continuous. So has a continuous inverse when restricted to .
We conclude that is indeed a homeomorphism when restricted to !
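Here is a compact summary of the whole argument, with explicit labels (f : X → Y continuous, p : E → Y a local homeomorphism, f*E = {(x, e) : f(x) = p(e)} with projections π₁ to X and π₂ to E): given (x, e) in f*E, choose an open U ∋ e with p|_U : U → p(U) a homeomorphism, and set V = π₂⁻¹(U). Then

$$ \pi_1(V) = f^{-1}\bigl( p(U) \bigr) \ \text{is open}, \qquad \pi_1|_V \ \text{is injective}, \qquad (\pi_1|_V)^{-1}(x') = \bigl( x',\ (p|_U)^{-1}(f(x')) \bigr), $$

and this inverse is continuous, so π₁|_V is a homeomorphism onto its (open) image.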
John Baez said:
A more 'postmodern' approach might dive straight into sheaves on sites, but I prefer explaining math in a way that doesn't cut off the roots.
I like this approach! It is more work in some ways, but it's really nice to have motivation to learn some topology - and it's fun to see the topology in action. (I find it hard to get motivated to work on point set topology unless some other topic I care about makes use of it in a way I know about!)
I think I've shown above that the pullback of a local homeomorphism is a local homeomorphism. So we now have a way to pull back a sheaf to get another sheaf:
Consulting the current blog post, I see that we next have this puzzle, which will expand our understanding of how pullbacks of bundles work:
Puzzle. Show that this construction [the pullback] extends to a functor . [Where is a continuous function].
I'll stop here for today!
I'll check this out! Let me move my interruption down here:
David Egolf said:
This appears to be another example of something I've noticed earlier: learning this stuff has involved more topology than I expected :sweat_smile:!
Yes, because I wanted to introduce sheaves and topoi through the classical and 'familiar' example of sheaves on topological spaces. All my students had to have taken a year of topology (one quarter of point-set topology, one of differential topology and one of algebraic topology). So, I could build on that. Also, most applications of sheaves in math still use sheaves on topological spaces, though in his work on algebraic geometry (esp. etale cohomology, to prove Weil's conjectures) Grothendieck introduced sheaves on more general sites.
A more 'postmodern' approach might dive straight into sheaves on sites, but I prefer explaining math in a way that doesn't cut off the roots.
Whoops, now your reply appears before my interruption. :oh_no: No big deal.
(I find it hard to get motivated to work on point set topology unless some other topic I care about makes use of it in a way I know about!)
Some students find point set topology interesting for its sake, but a lot of it was developed for applications - e.g. to real and complex analysis, and thus to understanding integrals and differential equations and things like that. Developed as a subject in its own right it's like "baby category theory" - the study of a very particular class of posets.
The next goal is to show that pulling back any continuous function in extends to a functor .
I am surprised that this is true, and curious as to whether it is the special case of a more general situation. Apparently it is! The nLab notes that in any category with pullbacks a morphism induces a pullback functor , which is a sort of "base change".
How come you're surprised?
Kevin Carlson said:
How come you're surprised?
I guess, on first impression, it strikes me as an impressive coincidence that taking pullbacks defines not only one but two functors! (The functor I was previously aware of is the one that maps an appropriately shaped diagram to its pullback.)
I now wonder if other "take the limit" functors have additional functors associated to them in a similar way, or if the "take the pullback" functor is special in this regard. I suppose we should at least expect the "take the pushout" functor to have a corresponding "pushforward" functor.
Upon contemplating this diagram, I had an idea for how should act on morphisms:
diagram
I think we can define on an arbitrary morphism of bundles over , , by defining as . Notice that, referencing our diagram, we have , because and just grab the first coordinate, which is unchanged by .
The identity morphism gets mapped to , which is the identity morphism between the pulled back bundles involved.
respects composition because .
Assuming this is correct (hopefully it is!), we now have our desired functor induced by a continuous function . This also gives us a functor from presheaves on to presheaves on , and a functor from sheaves on to sheaves on .
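To convince myself that the formula for the induced map does what I think it does, here's a tiny Python sketch of the pullback construction on finite sets (topology ignored; the names like `pullback_of_morphism` are just my own, for illustration):

```python
# A toy version (my own encoding; topology ignored) of pulling back a bundle
# p : E -> B along f : Y -> B, and of the induced map on a morphism of bundles.

def pullback(f, p, E, Y):
    """The pullback f*E = {(y, e) : f(y) = p(e)}, with its projection to Y implicit."""
    return {(y, e) for y in Y for e in E if f[y] == p[e]}

def pullback_of_morphism(alpha, f, p, E1, Y):
    """The induced map f*E1 -> f*E2 sending (y, e) to (y, alpha(e))."""
    return {(y, e): (y, alpha[e]) for (y, e) in pullback(f, p, E1, Y)}

# Tiny example over the base B = {0, 1}:
Y = {'a', 'b'}
f = {'a': 0, 'b': 0}
E1, p = {'x', 'y'}, {'x': 0, 'y': 1}      # bundle p : E1 -> B
E2, q = {'u', 'v'}, {'u': 0, 'v': 1}      # bundle q : E2 -> B
alpha = {'x': 'u', 'y': 'v'}              # a morphism of bundles over B
assert all(q[alpha[e]] == p[e] for e in E1)   # i.e. q after alpha equals p

print(pullback(f, p, E1, Y))              # {('a', 'x'), ('b', 'x')}
print(pullback_of_morphism(alpha, f, p, E1, Y))
```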
David Egolf said:
Upon contemplating this diagram, I had an idea for how should act on morphisms....
Nice diagram! Contemplating this diagram, I immediately want to define using the universal property of the pullback . Let's see: we've got morphisms and visible in the diagram, and they obey the necessary commutative square condition to make into a 'competitor' of , so there exists a unique map such that yada yada....
So yes, that works, but it should agree with your 'concrete' description of .
There is some advantage to avoiding the 'concrete' description of in terms of ordered pairs, because this fact - that in any category with pullbacks a morphism induces a pullback functor - holds even in contexts where pullbacks have nothing to do with ordered pairs.
By the way, this fact is important all over the place, and so is the fact that in many contexts, like any topos, the pullback functor has both a left and a right adjoint. We may even run into those in our course someday.
John Baez said:
There is some advantage to avoiding the 'concrete' description of in terms of ordered pairs...
That makes sense. Let me see if I can understand how you used the universal property of pullbacks. A pullback cone is final among all the cones over the diagram involved. So if we can set up to be the apex of a cone over the appropriate diagram, the universal property will guarantee a unique morphism of cones exists, which involves a morphism .
Attempting to set up the cone discussed above:
cone
For this to really be a cone, we need . We have as desired!
So then we are set up to use the universal property of pullbacks to find our morphism of interest . Great!
Good! Whenever you need to map something to a pullback, like , you should feel a Pavlovian instinct to find maps from that something to and to .
I think we've now done all the puzzles/exercises from Part 4. So it's on to Part 5!
In this part, we're going to talk about why the category of presheaves on a given topological space forms an elementary topos. We'll work in a more general setting: apparently the category of presheaves on any category forms an elementary topos!
Let be a category, so that the category of presheaves on is the functor category . In this category, each object is a functor and the morphisms are natural transformations. So an object of this category attaches a set to each object of .
For to be an elementary topos it needs to have, among other things, finite colimits. I thought it could be a good challenge to try to show that has finite colimits, before reading the part of the blog post that discusses this.
Roughly speaking, I think the intuition is that this category of presheaves inherits colimits from in a way analogous to how a set of functions inherits a notion of addition "pointwise" from . For example, if , then I expect that their coproduct satisfies for each .
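Here's a small sketch of that pointwise intuition with finite data (my own toy encoding of a presheaf as a dictionary of sets; the restriction maps are omitted, but they would also be built componentwise):

```python
# A toy check (my own encoding) of the pointwise intuition: the coproduct of two
# presheaves F and G should satisfy (F + G)(U) = F(U) disjoint-union G(U) at each
# open set U.

F = {'U': {'f1', 'f2'}, 'V': {'f1'}}
G = {'U': {'g1'}, 'V': {'g1'}}

def coproduct(F, G):
    # tag elements so the union is genuinely disjoint
    return {obj: {('F', x) for x in F[obj]} | {('G', x) for x in G[obj]}
            for obj in F}

print(coproduct(F, G)['U'])   # {('F', 'f1'), ('F', 'f2'), ('G', 'g1')}
```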
Let's consider the general case. Let be a small category, and let be a -shaped diagram in . This is a bunch of presheaves related by natural transformations, potentially required to satisfy certain equations. I'll call the colimit of this diagram (should it exist) by the name .
For any object we need to determine a set . Intuitively, we can do this by grabbing the part of our diagram concerned with . This gives a diagram in , and then we can take the colimit of that diagram to get .
Starting from , how can we get a diagram in associated to the object in ? Intuitively, we can do it like this:
I'd like to express this process as a functor from to the category of -shaped diagrams in , which I'll call . So, we're looking for some functor .
Yes, I find it helpful also to think of as a functor . I.e., I have a set which is contravariant in and covariant in .
That does sound helpful! If I understand correctly, you are using this adjunction: . In our case, this becomes: .
So, a functor is associated by this adjunction to some unique .
Or, working with we can similarly get a corresponding functor .
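Forgetting the functoriality for a moment, I believe this adjunction is just currying, which is maybe easiest for me to see with plain functions standing in for functors (a toy sketch, not the full categorical statement):

```python
def curry(F):
    """Turn F : (c, d) -> value into a function d -> (c -> value)."""
    return lambda d: (lambda c: F(c, d))

def uncurry(G):
    """Turn G : d -> (c -> value) back into (c, d) -> value."""
    return lambda c, d: G(d)(c)

F = lambda c, d: (c, d)
assert uncurry(curry(F))('c0', 'd0') == F('c0', 'd0')
```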
Yes, also, the analogy with linear algebra is interesting.
Let's momentarily think of and as finite sets.
The following data are all mutually related:
We also have a function that sends an element to the "vector" that is 1 on and 0 everywhere else. And this function looks like the Yoneda embedding .
(one difference is that there is no action of arrows, so means nothing here)
That analogy with linear algebra is pretty cool!
I think we now have the tools in place to find our functor , starting with . Moving across the adjunction mentioned above, we get . Precomposing with the isomorphism , we get . Moving this across the adjunction discussed above, we get , which I suspect is the I was looking for.
Now, I seem to recall that there is a "take the colimit" functor . Assuming this is the case, we can form . I suspect that this is the (object part of the) colimit of our diagram in that we started with.
Returning to the blog post, we have a related puzzle. Changing its notation to match what I'm using here, we have:
Puzzle. Show in the above situation that depends functorially on and that the resulting functor is the colimit of the diagram .
Here is the functor corresponding to . Also, refers to the functor .
The "resulting functor" on objects I believe acts like .
I'll stop here for today!
David Egolf said:
Now, I seem to recall that there is a "take the colimit" functor .
That sounds right if is a small category (which is what you typically assume for a category being used as a "diagram shape".)
On a bit of a side note, there's a weird thing about . Namely, it seems to require making a choice of colimit for each diagram, even when we have multiple isomorphic options available. This makes me wonder if there could be a nicer way to think about or something similar to .
The same issue happens already when we write down "the" product functor when is a category with products. One solution is to use an [[anafunctor]], which maps an object not to an object but to the universal property of an object. Another, I believe, is to switch to homotopy type theory.
John Baez said:
The same issue happens already when we write down "the" product functor when is a category with products. One solution is to use an [[anafunctor]], which maps an object not to an object but to the universal property of an object. Another, I believe, is to switch to homotopy type theory.
Yet another solution is to consider that "having colimits" is actually not a property but structure, and that such categories should be equipped with a specific colimit-producing functor. This is the same approach as split vs. non-split Grothendieck fibrations: one shouldn't throw away structure by squashing it.
David Egolf said:
I now wonder if other "take the limit" functors have additional functors associated to them in a similar way, or if the "take the pullback" functor is special in this regard.
You can come up with variants where this works to an extent, but rarely will you encounter any instances as wide-ranging as pullbacks!
To review, the current goal is to show that is a functor, where is the functor corresponding to our -shaped diagram of presheaves , and .
I think I'll start by trying to show that really is a functor.
We have two functors:
Then we can construct a functor using the fact that has products. This functor acts on objects by .
We notice that is the same thing as , and is therefore a functor.
We have obtained a diagram in , which is the same shape as the diagram we started out with in . The -th set in our diagram is , which is what? I think . is the -th presheaf in our original diagram, and is obtained by evaluating that -presheaf at .
So, our diagram in has this as its -th set: the set attached by the -th presheaf to . Intuitively, this diagram is obtained by evaluating our original diagram of presheaves at .
What is ? This is the colimit of the diagram discussed above. So, it is obtained by evaluating each presheaf in our original diagram at to get a diagram of sets, and then taking the colimit of that resulting diagram in .
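Just to make that last step concrete: a colimit of a finite diagram of finite sets can be computed by taking the disjoint union of all the sets and then gluing each element to its images under the arrows of the diagram. Here's a rough sketch (my own encoding, using union-find for the gluing):

```python
def colimit(sets, functions):
    """sets: dict name -> finite set; functions: list of (src, tgt, dict) arrows."""
    elems = [(name, x) for name, s in sets.items() for x in s]
    parent = {e: e for e in elems}          # union-find forest for the gluing

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]   # path halving
            e = parent[e]
        return e

    for src, tgt, f in functions:           # impose x ~ f(x) for each arrow
        for x in sets[src]:
            parent[find((src, x))] = find((tgt, f[x]))

    classes = {}                            # the colimit: the equivalence classes
    for e in elems:
        classes.setdefault(find(e), set()).add(e)
    return list(classes.values())

# Example: two parallel arrows A => B (a coequalizer-shaped diagram).
A, B = {'a1', 'a2'}, {'b1', 'b2', 'b3'}
f = {'a1': 'b1', 'a2': 'b2'}
g = {'a1': 'b1', 'a2': 'b3'}
print(colimit({'A': A, 'B': B}, [('A', 'B', f), ('A', 'B', g)]))
# two classes: {('A','a1'), ('B','b1')} and {('A','a2'), ('B','b2'), ('B','b3')}
```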
Given all this context, we want to show that defines a functor .
We've already said what does on objects: for , it takes in our diagram of presheaves, evaluates it at , and then takes the colimit of the resulting diagram in .
However, we still need to specify what does on morphisms.
Let be a morphism in . We need to dream up some function from the colimit of when evaluated at , to the colimit of when evaluated at .
I want to use the universal property of a colimit to do this. is the tip of a cone under the diagram when evaluated at , and is the tip of a cone under the diagram when evaluated at . If we can somehow get a cone under evaluated at with tip , we'll be in business.
Here's a picture that illustrates how we can do this:
picture
Each is a presheaf in our diagram . We have a morphism for each . I am hoping that we can compose these with the morphisms in the cone under evaluated at to get a cone under evaluated at with tip .
To do this, I think it suffices to show that a morphism in induces a morphism of diagrams of shape in . Then a cone under a diagram of shape is also a morphism of diagrams of shape , and thus the composition of these two morphisms is as well.
Basically, we want to show that there is a functor that acts on objects by sending to a -shaped diagram in given by evaluating at .
We already have . We saw above that we can use an adjunction and the "swapping" isomorphism between and to get such a functor. So we have some functor . Thus we are assured that any morphism in induces a morphism of certain -shaped diagrams in .
I just want to double check that sends to our diagram evaluated at . Referencing the adjunction above, we have . So, the -position in our diagram is indeed given by evaluating our original diagram at location at . Thus, indeed sends an object to the diagram evaluated at .
Now we are in business! We can now say what the functor does on morphisms, recalling that it sends an object to the colimit of our diagram evaluated at . Given in , we get a morphism of -shaped diagrams namely , where refers to our diagram evaluated at and is defined similarly.
Then, a colimit has an associated cone , and a colimit has an associated cone . Then we can form a cone .
Then we can use the universal property of colimits to obtain a function from to .
So, our proposed functor acts like this:
It remains to check that this really is a functor, and that it is the colimit of our diagram of presheaves.
In diagram form, this is our situation, where we are working in the category of -shaped diagrams in :
diagram
We want to show that . By definition, is the unique morphism that makes the outermost path in this diagram commute:
diagram
Since both of the inner rectangles commute, we can paste them together to get a larger commuting rectangle: . Since is a functor, this implies that . That is, satisfies the condition that uniquely determines , so we must have .
The identity morphism induces the identity morphism from the diagram to itself, and consequently .
Thus, we conclude that is indeed a functor. It remains to show that is really the colimit of our -shaped diagram of presheaves, .
To show that is really the colimit, we can aim to show it satisfies the appropriate universal property. To do that, we first need to think about how we get a colimit cone under with tip .
The first idea that comes to mind is as follows. To get a natural transformation , where is some presheaf in our diagram , we can try setting by specifying each component. We can try setting using the corresponding part of the colimit cone (in ) under with tip .
If we set for each in this way, do we really get a natural transformation ?
We want to show that this square commutes for any morphism in :
naturality square
I am guessing that this is part of the big diagram we saw earlier. If we can figure out how, exactly, then the commutativity of our earlier diagram should imply the commutativity of this one.
Here are the two diagrams I'm comparing:
two diagrams
The left diagram is in , and expresses the (hopeful) naturality of . The right diagram is in and uses the fact that is a colimit of the diagram to induce a morphism from to .
We can think of the diagram on the right as a collection of (related) diagrams. For each we get a diagram in by evaluating each functor at .
Let's assume that is in our diagram at location . So . Then we can form a new diagram for our diagram in by precomposing with . This is the functor from the category with a single object and morphism that sends the single object to .
Our new diagram replaces with , and similarly with . It also sends to its -th component , which is . Similarly, it sends to .
In forming this new diagram, we also replace with its -th component , which goes from to . So, if the new diagram we form from the one on the right is just the diagram on the left.
Intuitively, takes in an object of and then creates a diagram in by evaluating all our presheaves in at . So we get a bunch of diagrams, one associated to each object of . Each diagram is a functor, and maps each morphism to a natural transformation. We must have , so that is a natural transformation from to induced by .
I could just assume that this works out, but I would prefer to prove that acts in the way that I want. To figure out what does on morphisms, we can first figure out what does on morphisms.
I'll stop here for today, but perhaps next time I can work this out:
I'm feeling a bit too busy to check what you just wrote, David, especially since I bet it's all fine. (If you're worried about something please say so!) Instead I want to add a remark to an earlier conversation:
David Egolf said:
Now, I seem to recall that there is a "take the colimit" functor .
On a bit of a side note, there's a weird thing about . Namely, it seems to require making a choice of colimit for each diagram, even when we have multiple isomorphic options available.
I forgot to mention one way people usually deal with this. They show that if you choose colimits for every -shaped diagram and get a functor , and I do it some other way and get a functor , then there's a natural isomorphism between and .
This reassures us that it doesn't matter which choice we make. At least, it doesn't matter if we refrain from doing anything 'evil' - i.e., something that works for one functor but not for some other naturally isomorphic functor!
This is a nice concrete example of why it's good to avoid 'evil'. It's not just a matter of esthetics. It means we can choose a take-the-colimit functor without having to decide which one.
John Baez said:
I'm feeling a bit too busy to check what you just wrote, David, especially since I bet it's all fine. (If you're worried about something please say so!)
I think I'm on the right track, it's just taking me a while to get to my destination. But so far so good, I think! (In general, when I write something long like this I don't really expect anyone else to read it all. Despite that, I still like to document the learning process in case it could be useful to someone!)
John Baez said:
I forgot to mention one way people usually deal with this. They show that if you choose colimits for every -shaped diagram and get a functor , and I do it some other way and get a functor , then there's a natural isomorphism between and .
This reassures us that it doesn't matter which choice we make. At least, it doesn't matter if we refrain from doing anything 'evil' - i.e., something that works for one functor but not for some other naturally isomorphic functor!
This is a nice concrete example of why it's good to avoid 'evil'. It's not just a matter of esthetics. It means we can choose a take-the-colimit functor without having to decide which one.
That's reassuring! Although there are a bunch of different "take the colimit" functors in this context, they are all basically the same, so it doesn't really matter which one we pick.
I suppose in general we can avoid "evil" in this sense by only identifying an object we're working with up to isomorphism. This "blurriness" then would stop us from using any of the particular features of any specific choice, which might not be invariant across isomorphic alternatives. Although maybe this is often inconvenient!
It's often inconvenient to only know the isomorphism class of an object: it's like holding a slippery pig that's twisting around so wildly that you can't point to any specific feature. But it's perfectly fine if you know an object up to a specified isomorphism, which is precisely what happens when the object is defined by a limit or colimit, or any other universal property. In this case, if you make a choice of this object, say , and I make a choice, say , they are not merely isomorphic, we both get access to a specific isomorphism . This allows us to transfer any structure you may have on to my , and vice versa.
John Baez said:
if you make a choice of this object, say , and I make a choice, say , they are not merely isomorphic, we both get access to a specific isomorphism . This allows us to transfer any structure you may have on to my , and vice versa.
I'm not sure I understand precisely this point. I'll try to spell it out in the case of the representability of a presheaf on a category .
Choosing a representative of amounts to choosing an object and a natural isomorphism . Now, assume we have two such choices and . Then, we have an explicit natural isomorphism
By Yoneda lemma, this yields a specific isomorphism .
Is that the kind of examples you have in mind when you mention "having access to a specific isomorphism"?
Yes, exactly. And more generally, whenever anything is defined by a universal property like a limit, or colimit, or tensor product of modules, etc., we say X is a universal object with structure S if for any other X' with structure S, there exists a unique morphism from X to X' (or the other way around) making some diagrams commute. Then if both X and X' are universal objects with structure S, there are unique morphisms
and
making those diagrams commute, and we can use the uniqueness clause in a clever but completely standard way to show and are inverses!
Thus, not only are and isomorphic - which is enough to transfer any property from to , or vice versa - but we also get a specific isomorphism between them, which lets us transfer any structure from to or vice versa.
For example, "having 7 elements" is a property, so if is a set with 7 elements and is isomorphic to then we know has 7 elements.
But "being a group" is a structure, so if the set is made into a group in some particular way and we only know is isomorphic to , we don't have enough to make into a group in some particular way. A specific isomorphism would be enough.
And it takes even more to transfer stuff.
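Here's a toy sketch of what transferring structure along a specified isomorphism looks like concretely: a group structure on one set is carried across a chosen bijection, which we couldn't do knowing only that some bijection exists.

```python
# Transfer a group structure along a *specified* bijection phi : X -> Y by
# conjugating the multiplication on X.  Merely knowing that some bijection
# exists would not tell us which multiplication table to put on Y.

X = [0, 1, 2]                             # Z/3 under addition mod 3
mult_X = lambda a, b: (a + b) % 3

Y = ['r0', 'r120', 'r240']                # say, rotations of a triangle
phi = dict(zip(X, Y))                     # a specific bijection X -> Y
phi_inv = {v: k for k, v in phi.items()}

def mult_Y(a, b):
    """The transferred multiplication: phi(phi^{-1}(a) * phi^{-1}(b))."""
    return phi[mult_X(phi_inv[a], phi_inv[b])]

print(mult_Y('r120', 'r240'))             # 'r0', since 1 + 2 = 0 (mod 3)
```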
That's pretty interesting! It is interesting how the above argument relates these two things: being "extremal and concisely so" (attempting to convey some of the intuition associated with being a universal object) with respect to having certain structure, and being related by isomorphism. It makes me wonder if we could relax the "extremal and concisely so" condition and get a notion of morphism that is weaker than isomorphism but still indicates some measure of similarity is present.
For example, we could call a "weak universal object with structure " if for any other with structure there exists at least one morphism from to making some diagrams commute. Then perhaps we would obtain some notion of induced morphism between weakly universal objects with structure , which indicates some degree of similarity?
People do talk about weak limits and especially weak pullbacks, which have the weakened sort of universal property you mention. I've never used them. But I sometimes joke about one object being merely "morphic" to another, as opposed to isomorphic.
Your argument for transferring structure just needs an isomorphism, not a unique isomorphism, right? Is there some other feature of limits that is preserved because it's a unique isomorphism?
My intuition is that there's a contractible groupoid of objects described by the limit, making the limit "unique", so "everything" gets translated even "stuff", because there's only one thing up to equivalence.
But the way you explained the above is making me wonder if that's not quite right - that I'm confused about some detail.
Alex Kreitzberg said:
Your argument for transferring structure just needs an isomorphism, not a unique isomorphism right?
It needs to be a specified isomorphism: that is, an isomorphism you actually know.
The existence and uniqueness clauses in the definition of any universal property guarantee that whenever we have two objects and with the same universal property, we get a specified isomorphism between them. Without the existence clause we might not have any isomorphism at all; without uniqueness there wouldn't be a particular one.
There are other ways to specify isomorphisms between objects, but a universal property quickly and efficiently specifies an isomorphism between any two objects with that property!
My intuition is that there's a contractible groupoid of objects described by the limit, making the limit "unique",
Yes, in the appropriate categorical sense of "unique".
so "everything" gets translated even "stuff", because there's only one thing up to equivalence.
Let me think about that! That's an interesting point. My intuition was that since a functor that "forgets stuff" is not faithful, even if is a contractible groupoid, may not be.
John Baez said:
My intuition is that there's a contractible groupoid of objects described by the limit, making the limit "unique",
Yes, in the appropriate categorical sense of "unique".
The groupoid need not be contractible! There can be multiple isomorphisms between the objects, that's why we need to specify one.
I guess that the precise statement would be that the category of limit cones over a diagram is a contractible groupoid, and its objects are "labelled in objects of " via the functor that sends a cone to its tip. So perhaps "a contractible groupoid labelled in objects" rather than "contractible groupoid of objects" is the accurate rephrasing?
Yes, I wasn't being too deliberately pedantic, just underlining the point John was making about how much information you need about isomorphisms to transfer properties vs structure.
Returning to the topos theory blog posts, my current goal is as follows. We have an adjunction of functors . In particular, this implies we have .
This means that given a functor there is a corresponding functor . I want to figure out how acts!
This feels like an important thing to know how to do, but I'm a bit unsure how to get started!
The idea of using the Yoneda lemma somehow vaguely occurs to me, but I don't quite see how that would help.
Maybe this is an idea: because we have the adjunction above, in particular we have a natural isomorphism . Expressing this in terms of the opposite category of we have a natural isomorphism .
The Yoneda lemma then tells us that this natural isomorphism corresponds to an element of , which is an element of .
This special element I bet is going to be an "evaluation" functor, and working out exactly what it is will probably be useful.
Are you familiar with [[cartesian closed categories]]? There are lots of categories where the functor "taking the product with the object x" has a right adjoint called [x, -], and Cat is one of those.
John Baez said:
Are you familiar with [[cartesian closed categories]]? There are lots of categories where the functor "taking the product with the object x" has a right adjoint called [x, -], and Cat is one of those.
Somewhat! I've been mostly referring to the article [[closed monoidal category]]. As far as I understand, a cartesian closed category is a closed monoidal category where the monoidal product is given by taking the product.
Scrolling down in the article you linked above, I see a section called "Some basic consequences" which looks like it might be helpful.
Right. I think you can find a formula for D' in terms of D just by writing down the simplest thing that parses and then checking it works - that's the way to solve 90% of problems in category theory. :smirk:
I could be wrong... but did you try that? Yoneda feels like overkill to me.
I figured that I could probably guess how should work in the way you describe, but I was somehow feeling that I wanted to understand more generally the process of hopping across an adjunction.
I think the article you linked gives an answer that makes me somewhat happy though: In particular, that article notes that we can get from a morphism to a morphism as follows: we apply the evaluation morphism after the morphism .
So, I can just spell out what the evaluation morphism does, and I should be in business!
I also find it satisfying to note that the evaluation morphism probably comes from applying the Yoneda lemma to the adjunction in question, as I was beginning to investigate above.
Alright, we want some "evaluation morphism" . This should be a functor. On objects, intuitively it will map . On morphisms, it needs to send a pair where and to some morphism from to .
Since is a natural transformation, we have this commutative square:
naturality square
So I'm going to guess that sends to . This morphism goes from to as required.
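To keep track of what this proposed evaluation functor is doing, here's a toy encoding in Python (my own notation: a functor to Set is a pair of dictionaries, and a natural transformation is a dictionary of component functions):

```python
def ev_obj(F, c):
    """eval on objects: (F, c) |-> F(c)."""
    F_obj, _ = F
    return F_obj[c]

def ev_mor(alpha, G, f):
    """eval on morphisms: (alpha, f : c -> c') |-> G(f) composed with alpha_c
    (by naturality this equals alpha_{c'} composed with F(f))."""
    _, G_mor = G
    Gf = G_mor[f['name']]
    alpha_c = alpha[f['src']]
    return {x: Gf[alpha_c[x]] for x in alpha_c}

# Tiny example: one non-identity arrow f : c -> c'.
F = ({'c': {1, 2}, "c'": {1}},       {'f': {1: 1, 2: 1}})
G = ({'c': {'a', 'b'}, "c'": {'a'}}, {'f': {'a': 'a', 'b': 'a'}})
alpha = {'c': {1: 'a', 2: 'b'}, "c'": {1: 'a'}}   # a natural transformation F => G
f = {'name': 'f', 'src': 'c', 'tgt': "c'"}

print(ev_mor(alpha, G, f))   # {1: 'a', 2: 'a'} : a function F(c) -> G(c')
```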
Is this proposed a functor?
, so acts as it should on identity morphisms.
It remains to show that preserves composition. But I will leave that for next time!
This is interesting, I wasn't expecting the use of the Yoneda lemma to highlight the evaluation functor.
When exercising with adjunctions, I found it interesting to describe their units and counits.
I don't want to interrupt your flow here, so I'll just write in a "spoiler" box :)
I want to show that this preserves composition.
So let us consider a composite morphism . Here , , and . This morphism is .
maps this to:
That morphism is the bottom left path from top left to bottom right in the following diagram:
diagram
Every (small) square in this diagram is a naturality square, so any path from the top left to the bottom right composes to form the same morphism.
Now we want to compute . We have and .
So this composite is .
This is another path from top left to bottom right in our diagram, so this is equal to . We conclude that does indeed preserve composition!
Peva Blanchard said:
This is interesting, I wasn't expecting the use of the Yoneda lemma to highlight the evaluation functor.
When exercising with adjunctions, I found it interesting to describe their units and counits.
I don't want to interrupt your flow here, so I'll just write in a "spoiler" box :)
Thanks for pointing that out! Now I feel more incentivized to think about the unit and counit of adjunctions that I may run across in the future...
Now, the reason I did all this was because I wanted to figure out how acts on morphisms, given a -shaped diagram of presheaves on . We should be able to spell out in detail now!
We start with . Using , we'll first form . Then we'll apply our evaluation functor to end up in .
So let's consider
On objects, it acts like this:
On morphisms, it acts like this:
The situation is pictured here:
picture
Hopefully I did that right. It feels like it would be easy to make a mistake here.
Scrolling way back, I think my reason for spelling out how works was to figure out how works. This would involve "hopping across" the adjunction again :sweat_smile:!
That sounds like a lot of work, so I might instead take some time to rethink my strategy. I'll stop here for today.
It's always useful to start by guessing what the answer must be: in problems of this sort, there is usually an "obvious best guess".
The physicist John Wheeler gave some advice that really affected me, even though I'd already half-known it before. Namely: never do a calculation unless you already know the answer.
(It's actually enough to think you know the answer - then the calculation will prove you wrong.)
So, when you get time you might just write down what you think should be, without calculating it.
That's an interesting perspective! I think I sometimes use calculations as a sort of "extension ladder" to reach beyond what my intuition is telling me. But if I start reaching too far beyond that, things get a bit wobbly. So the idea of consistently grounding calculation in a specific guess or intuition sounds potentially quite helpful!
I'll see what guess I can dream up for next time I work on this.
Yes, Wheeler was exaggerating for effect; both perspectives on calculation are important!
John Baez said:
The physicist John Wheeler gave some advice that really affected me, even though I'd already half-known it before. Namely: never do a calculation unless you already know the answer.
Do you have a cute story/anecdote about insisting on calculating the answer even when you believe you already know it? The temptation to say "I already know this, what's the point of the calculation?" feels far more lethal to me. (Of course all advice is tailored to who is receiving it.)
I don't have a specially cute story like that: I just know I'm pretty much unable to do a serious calculation correctly unless I have a good idea about where it's going. Here I'm generally talking about calculations that involve integrals, algebraic equations, etc. - since those are the most complicated calculations I've done. So typically what happens is that I calculate rather quickly but make lots of copying mistakes, where, when copying from one line of text to the next and doing some manipulations, a double minus sign turns into a single minus sign, I forget to distribute a factor over all the terms in a sum, and so on. If I know where the calculation should be going, I can tell when something is going wrong, so I can diagnose these errors. But if I have no idea what the result should be, it takes a long time, because I tend to become 'blind' to these errors: I can look at them over and over, and still not see them.
The same general principles apply to category-theoretic computations, especially with enormous commutative diagrams.
However, what I like about category theory is that it's harder to make computational mistakes, because in many situations there's only one possible expression that can possibly parse: if you write down the wrong thing, you get something that has the wrong type or is undefined. I like to say that category theory is 'rigid', not flexible: if you accidentally bend things a bit, they tend to break completely, so you can tell.
Physicists try to get themselves into similar situations by relentlessly using dimensional analysis. Then many mistakes can be spotted by noticing that what you wrote has the wrong dimensions. This amounts to replacing the ring of real numbers by a graded commutative ring, graded in some abelian group, where you're not allowed to add things of different grades.
James Dolan noticed that graded commutative rings of this sort can also be seen as categories called 'dimensional categories'... so using dimensional analysis in physics gives it more of the 'rigidity' we expect from category theory:
Recently I spent two or three weeks trying to correctly do some computations in statistical mechanics, essentially taking the limit of an integral, and I screwed up about 100 times before getting it right! The problem was that I really didn't know the right answer ahead of time: I had a rough idea of it, but I quickly discovered that rough idea was wrong and then I was lost at sea. In the end I had to do a very concrete example of these calculations, rather than the general abstract calculations, before I discovered a conceptual error I was making:
I actually enjoyed these few weeks very much, since it's been a long time since I've done calculations that were so involved, and so deeply reliant on ideas from physics. When I finally straightened out all the mistakes it was glorious!
That rings true for me as well! Certainly in the context of math it often helps me to try and consider a specific example, but that's also true in the case of writing a program. I've spent weeks slowly debugging a program that is supposed to reconstruct images, where my only clue that something is wrong is that the image just doesn't look right at all. Without that clue, just reading the code, I would have been very hard pressed to identify what was wrong!
Let me see if I can dream up a guess for how the functor should work. The context is that we have a -shaped diagram of presheaves on .
On objects, I expect to send to the -shaped diagram in given by evaluating each presheaf in our original diagram at .
On morphisms, I don't have a guess for what should do yet. If in , . This is to be a natural transformation from to . To specify it, it suffices to specify its components.
So let in . Here's the naturality square for :
naturality square
Let's see if we can figure out a guess for . Now, is a diagram of sets obtained by evaluating each presheaf at . If we just grab the -th set from that diagram, this should just be what we get when we evaluate the -th presheaf at .
So should be a function from (the set obtained by evaluating the -th presheaf in at ) to (the set obtained by evaluating the -th presheaf in at ).
The -th presheaf is . This is a functor, so we have . I think we have and . So, .
I can now form a guess for what does on morphisms. It takes a morphism in and sends it to the natural transformation from to with -th component given by .
I'd next want to check that this guess makes the above naturality square commute. But I'll stop here for today.
David Egolf said:
Let me see if I can dream up a guess for how the functor should work. The context is that we have a -shaped diagram of presheaves on .
I'm losing track of what you're doing. Are you trying to turn a functor
into a functor
?
When I look at these two I think:
Oh, they're basically just the same thing! takes a guy (= an object or a morphism) in and produces something that takes a guy in and gives a set. takes a guy in and produces something that takes a guy in and gives a set.
In other words, for any guys and , which could be objects or morphisms, we have
Yes, @John Baez that is what I'm trying to do! I agree that for objects , but I hadn't considered that the same equation could hold for morphisms!
Okay. It's good to notice that when you have a two-variable functor like or , it makes sense when both and are objects, when both of them are morphisms, and also when one is an object and another is a morphism!
So, it's good to do as many computations as you can while remaining noncommittal about whether the variables are objects or morphisms. Then you can effectively do multiple computations at once.
That sounds cool! I'm not quite understanding yet how that equation can make sense when both of the things we feed in are morphisms.
Let's say in . Then is a natural transformation between two functors in . I'm not seeing how it makes sense to then feed a morphism in to ; I don't think of as something that takes in morphisms - it's just a bunch of component functions.
That's indeed a good way to make it confusing!
Now maybe you're at the stage before you're convinced that Cat is cartesian closed. Once you're convinced, you know that
is just another way of talking about
and this has no trouble eating a morphism in and a morphism in and producing a morphism in .
But what you're wondering is how
eats a morphism in and a morphism in and produces a morphism in .
Scrolling way back, I reached a point where I needed to know what does to morphisms. That's what I would like to understand.
I already am convinced that is cartesian closed. So I agree with you when you talk about how the functor is just another way of talking about some functor . I think my problem is that I don't understand exactly how moving across this adjunction actually works.
It's one thing to know that I can exchange one functor for another using this adjunction. It's another thing to understand exactly what functor I get out from this exchange process.
It's possible that my original approach was just a needlessly painful way to go about things, and that this could all be avoided with a different strategy.
To see "directly" how
eats a morphism in and a morphism in and produces a morphism in - that is, to get myself into trouble as you just have, and get back out - I need to remember more precisely how
eats a morphism in and a morphism in and produces a morphism in .
On the one hand, we just take the morphism and feed it in . But on the other hand, it's good to remember that
so we can play various tricks.
This formula I just urged you to "remember", which may have some typos in it, allows you to break things down in a way where you don't bust your brain wondering what does a natural transformation do to a morphism???
(What it does to a morphism, ultimately, is give a commutative square, meaning an equation of some sort... so, in a way it doesn't do much!)
This formula
is a completely general formula that says: when you've got a morphism in a product of categories, you can always write it as
or
This is why, when you have a functor going out of a product category like , you never need to think about what it does to a pair of morphisms, one in and one in . You can always just think about what it does to a pair consisting of one object and one morphism.
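Spelled out explicitly (writing 1 for identity morphisms), the factorization is:

```latex
% For f : c -> c' in C and g : d -> d' in D, the morphism (f, g) in C x D factors as
(f, g) \;=\; (f, 1_{d'}) \circ (1_{c}, g) \;=\; (1_{c'}, g) \circ (f, 1_{d}).
```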
And I feel that allows us to avoid the problem you were facing (and ultimately also confront that problem and solve it).
That formula is interesting!
By the way, I think I ran across earlier today a way that one can think of applying a natural transformation to a morphism. One starts by contemplating the naturality square for the morphism in question, which is in the picture below:
naturality square
Then one can define .
I don't know if that will be helpful in this context, but it might be!
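Writing out my guess for that definition (a reconstruction on my part, read off from the naturality square above):

```latex
% For a natural transformation \alpha : F \Rightarrow G and a morphism f : c -> c',
% the naturality square lets us define
\alpha \cdot f \;:=\; G(f) \circ \alpha_{c} \;=\; \alpha_{c'} \circ F(f)
\;:\; F(c) \longrightarrow G(c').
```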
I like the formula . I don't immediately see how to use it to figure out what does to a morphism. Maybe something will occur to me when I give this another proper try, hopefully tomorrow.
Well, I think you ran into trouble figuring out what is when both and are morphisms, and this formula helps with that. If both of them are morphisms, you can use the formula I gave to reduce to the case where just one is an interesting morphism, and the other is an identity morphism (and thus essentially an object). Since
we get
and I would usually write this simply as
I could be mixed up, but I feel this would help you.
That sounds helpful - thanks! Unfortunately the pieces aren't quite coming together for me right now. I'm not even sure I could clearly explain my point of confusion at the moment. I'll plan to sleep on this and give it a solid try tomorrow.
Sleep solves many math problems! Good night!
Sometimes, finding the relevant notation can help.
Let's start with .
For every object , is a functor from to .
Hence, for every object and , we have
And, for every morphisms in we have a morphism
Now, let be a morphism of . should be a natural transformation from the functor to the functor . I find it useful to denote a natural transformation explicitly as a family of morphisms, here indexed by a variable running over the objects of . I.e.,
In other words, is the component of the natural transformation at object .
I find that these notations, and John's hint, should help in defining the functor on objects and morphisms.
Thanks to both of you, I feel that I understand this adjunction a lot better now!
I'm going to try and start this puzzle over again, using the recent discussion to help solve it. I'm going to try to keep my discussion of the puzzle much more concise this time.
Great!
We start with a -shaped diagram of presheaves on , where is a small category. Our goals are to: (1) describe a candidate colimit for this diagram and (2) show that the candidate colimit really is a colimit.
We will make use of this adjunction: . Given we can use this bijection to know there is a corresponding .
We have on objects.
A morphism in is of the form . We can rewrite this as . So to describe what does on morphisms it suffices to describe and .
We have . Here is a natural transformation from to , which are both functors . By we mean the component of .
We also have . Here and in so it makes sense to directly supply to .
We now have defined .
Next, we introduce a functor , defined by , where is the identity functor and is the functor constant at .
We can then form . Since has colimits, we can take the colimit of this diagram to get some set, .
Intuitively, is the diagram obtained by evaluating each of our presheaves at , and by taking the -th component of each natural transformation as ranges over morphisms in .
Given a morphism we define a natural transformation , by setting . (To be concise, I won't spell out here the details involved with checking that this really is a natural transformation.)
For each morphism , we then obtain a natural transformation by using the universal property of a colimit in :
diagram
Here and are the natural transformations corresponding to the universal cones with apex and under the diagrams and , respectively.
I will use to refer to the unique dashed natural transformation that makes the above diagram commute.
We can now complete the object part of goal (1). Here is the object part of our candidate colimit for , which I'll call :
To complete goal (1), we also need to give a universal cone under with apex . For each position in our diagram , we need a natural transformation .
We can achieve this by setting for each , where is the -th component of the natural transformation . (To be concise, I will not spell out here the verification that is really a natural transformation.)
Defining each in this way does indeed define a cone under with tip . (Again, I won't spell out the verification here).
So we have accomplished goal (1); we have found a colimit candidate for our diagram .
It remains to show that is really a colimit of . So, for any other cone under , we need to show there is a unique morphism to that from our candidate universal cone with tip .
Referencing the diagram above, we have a cone under with tip and "legs" of the form . We wish to show there is a unique natural transformation that induces a morphism of cones from our candidate colimit cone to the cone with tip .
Earlier, we saw that there is an evaluation functor . We also have a functor induced by the constant functor at and the identity functor of .
By composing these two, we obtain a functor that evaluates any presheaf at .
Since applying a functor preserves composition, we can "evaluate at " the diagram above to get another commuting diagram. This tells us that the -th component of is forced to be the function that makes the following diagram commute:
diagram
Thus, if exists, it is unique. It remains to show that all these assemble to form a natural transformation, and that this natural transformation corresponds to the morphism of cones pictured earlier.
We begin by checking that is a natural transformation. So, for any in we want to show that this naturality square commutes:
naturality square
I'm running out of steam, so I'll stop here for now. I think I got further than I did last time, at least!
Next time, I may try to work out an example, to see how the assemble to form a naturality square in practice. Hopefully that will help! But if I can't figure this out next time, I may give up and consult "Categories for the Working Mathematician".
This feels super close to working, and I had an idea. The morphisms and are both induced by universal properties. I think it could be helpful to see if the morphism they compose to is also given by a universal property.
Using slightly different notation, this is our situation:
diagram
means the diagram evaluated at , and similarly for . To avoid overloading , I'm now using to refer to the cone under with tip .
This diagram illustrates that we have a cone under with tip , given by composing the natural transformations .
The universal property of then ensures that there is a unique morphism such that . But because the rectangle and the triangle in the above diagram commute, this unique morphism must be .
My next idea is to try and show that also satisfies this condition, and hence by uniqueness is equal to .
We obtain a new diagram, where our goal is to show that :
diagram
If we can show that two particular sub-diagrams commute, we can paste them together to conclude that the outermost part of this diagram commutes. We want to show:
We immediately have because is by definition the unique morphism that makes this triangle commute.
The condition looks a lot like a naturality square condition.
This still feels tricky, but I think it's something I could ask a question about concisely. Maybe I'll start a new thread to ask a specific related question. [edit: I have now done so!]