You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
The nLab page on forcing takes a category-theoretic attitude toward it which makes it sound approachable to me... but I would like to be able to read textbooks on model theory that use forcing as a tool, and they describe it quite differently.
The category-theoretic attitude is the correct one though: you can do all the steps of forcing in toposes and they all make sense separately, whereas with the classical approach you have to use all the steps together to turn all sorts of exotic objects into each other.
The outline of the process becomes simple:
Yes, sort of like it's better to use an electric car, but in large portions of the US it's much easier to find a gas station. :cry: Until someone translates a lot of interesting model theory done the old way into a better categorical framework, I'd have to learn the old way to understand the proofs, even if my main goal were to translate it into the new framework.
In practice what happens is that I learn some statements of results, find them interesting, but don't try to learn the proofs.
James Deikun said:
The outline of the process becomes simple:
- You start with a Boolean, two-valued topos.
- You freely add an object with desired properties, making it no longer Boolean or two-valued.
- You take the smallest dense subtopos to make it Boolean again.
- You arbitrarily choose a typical point of the topos and trim away the rest, to make it two-valued again.
I have been known to say things like this myself. But to be honest, there's still an aspect of the classical story I don't completely understand, namely the reason that set theorists usually start with a countable model. I think maybe this has something to do with ensuring that the topos has at least one point in the last step? Or is it ensuring that you can get a model in which ∈ is interpreted by the "real" ∈ of ZFC?
Ah, the countable model. I think it's mostly it being easier to be sure the topos has points, yeah. To be sure you can get an inner model, it suffices for the "worlds" or "locations" to be well-quasi-ordered I think.
(You can get an inner model as long as the relation after step 2 is well-founded according to your step 1 ZFC.)
(Although it's a slightly weird "well-founded as a 3-place relation"...)
Anyway, even with a countable model it can still fail to be well-founded; being countable doesn't really help with that. I think most of what the countable model is for, though, is really just so size issues don't come up. We need to make the thing set-sized, so why not make it as small as possible?
And that does figure in to making sure the topos has points, tying things back: the thing (generic ultrafilter) that represents a point of the topos won't exist if your starting model is "the" universe. (Sorry for the mess while shaking the rust off my forcing knowledge; I haven't used it in a while.)
I thought countability of the model (not just smallness) was actually necessary for there to exist a generic filter.
Apparently countability is always sufficient but whether it's necessary depends on the particular forcing you are doing.
https://mathoverflow.net/questions/48522/forcing-over-an-arbitrary-model-of-zfc
Essentially you want to use https://en.wikipedia.org/wiki/Rasiowa%E2%80%93Sikorski_lemma, and a countable model ensures a generic filter exists in the ambient set theory. Since the forcing notion is externally countable, and various other bits and pieces you get stay countable, you get the generic filter, then perform the filterquotient construction to squash the subobject lattice of 1 from a big Boolean algebra down to {0, 1}.
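To make the Rasiowa–Sikorski construction concrete, here is a toy sketch in Python (all function names and the choice of poset are mine, purely for illustration): the poset is finite binary strings ordered by extension, standing in for the simplest Cohen-style forcing, and we greedily build a descending chain meeting each of finitely many dense sets. A real generic filter meets countably many dense sets at once; the finite version just shows the mechanism.

```python
# Toy Rasiowa-Sikorski construction. Conditions are finite binary strings,
# and a longer string is a stronger condition (it extends the shorter one).
# A subset D is dense if every condition has an extension lying in D.
# Given an enumeration of dense sets, we build a chain meeting each one;
# its upward closure is then a filter generic for those dense sets.

def extends(p, q):
    """p is a stronger condition than q (the string p extends q)."""
    return p.startswith(q)

def meet_dense(p, dense):
    """Return some extension of p lying in the dense set."""
    for q in dense:
        if extends(q, p):
            return q
    raise ValueError("set was not dense below p")

def generic_chain(dense_sets, p0=""):
    """Chain p0, p1, ... with each p_{n+1} extending p_n and in dense_sets[n]."""
    chain = [p0]
    for D in dense_sets:
        chain.append(meet_dense(chain[-1], D))
    return chain

# All binary strings of length < 8 (format(i, "b") with the leading 1 dropped).
strings = [format(i, "b")[1:] for i in range(1, 2**8)]
# Dense sets D_n = {conditions of length >= n}: meeting every D_n would
# force the generic filter to describe an infinite binary sequence.
dense = [[s for s in strings if len(s) >= n] for n in range(1, 6)]
chain = generic_chain(dense)
print(chain)  # each condition extends the previous and meets the next D_n
```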
Note that it's independent of ZFC + ¬CH whether stronger versions of Rasiowa–Sikorski (i.e. for larger cardinals ℵ₀ < κ < 2^ℵ₀) hold; these are captured by https://en.wikipedia.org/wiki/Martin%27s_axiom
And the analogue for kappa=the continuum is outright false.
26 messages were moved here from #meta: off-topic > Hard learning by Morgan Rogers (he/him).
It's always a fun coincidence when things you discuss irl independently show up here! @Dianthe Basak @Ivan Di Liberti
Lawvere discusses his view of forcing, and how it inspired his and Tierney's discovery of the notion of "elementary topos", on the first pages of [1] (but he doesn't comment on the fact that one has to assume countability of the ground model for a generic filter to exist).
@John Baez The book "Set theory: an introduction to independence proofs" by Ken Kunen gives a clear treatment of forcing
It's unfortunate that Kunen's attitude toward forcing was "incorrect", as he was a set theorist, but I suspect you'll muddle through regardless. It's something of a miracle that set theorists keep proving all these theorems in spite of their "incorrect" attitudes towards mathematics. /s
I'm thinking of the recent posts on the categories mailing list, where topologists spent decades knowing about homology groups but only talking about things like Betti numbers, seemingly because they didn't think some other "complex" mathematical object like a group counted as an "invariant". They still proved a bunch of theorems, and many of them probably proved more theorems about topology than Emmy Noether did. Just because people are succeeding doesn't mean they're doing everything right.
I've never felt in my bones the drive others have to make the Boolean topos furthermore two-valued or well-pointed, if the goal is simply to prove independence/relative consistency results, for which this is unnecessary (thus, no need for step 4 or for starting from a countable model). Why make arbitrary choices when they are irrelevant?
My impression is that the answer is that that isn't the only goal. It was the original goal of forcing, but since then set theorists have started using it for other things too. But I don't completely understand.
I have a newbie question about forcing.
Since the standard open-interval topology on ℝ forms a complete Boolean algebra, can I not use the space ℝ under the inclusion of open intervals as a forcing poset? (This is what I think is shown in Quanta magazine's article How Many Numbers Exist? Infinity Proof Moves Math Closer to an Answer.)
But to me, this just looks like we are extending the reals to the hyperreals by adding in all the infinitesimals.
Likewise, the Wikipedia page on the hyperreals states that the hyperreal field is unique up to order isomorphism iff the continuum hypothesis holds (though it doesn't specify further).
Is adding infinitesimals to ℝ (and likewise infinitesimals of infinitesimals, and infinitesimals of infinitesimals of infinitesimals, etc.) a kind of forcing?
This does seem to make some intuitive sense to me, but I usually doubt my intuitions.
That's really a bit of a cartoon. When set theorists talk about adding a real, they mean adding a function ℕ → {0, 1}. And the forcing poset consists of partial functions ℕ ⇀ {0, 1} with finite domain. It's a bit like adding a new binary expansion, but of course you can't actually know what this is, since the new real you add depends on a choice of generic filter.
This generic filter exists in the standard version of forcing over a countable model of ZFC because you can enumerate (in the ambient set theory) all the dense open subsets in the partial order (open here using the order topology, where basic opens are the sets of all elements below a given element), then apply the Rasiowa–Sikorski lemma, which constructs a generic filter using the axiom of dependent choice (I believe).
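To illustrate which dense sets are being enumerated here, a toy Python sketch (the helper names and the representation of conditions as dicts are my own, purely illustrative): for Cohen forcing with finite partial functions ℕ ⇀ {0, 1}, the dense sets D_n = {p : n ∈ dom p} make the generic real total, and for each ground-model real f the dense set E_f = {p : ∃n, p(n) ≠ f(n)} forces the generic real to differ from f. Meeting finitely many of them gives a finite approximation to the generic real.

```python
# Toy Cohen forcing. Conditions are finite partial functions from N to {0,1},
# represented as dicts; p extends q iff p agrees with q on all of dom(q).

def extend_into_Dn(p, n):
    """Extend p into D_n = {conditions with n in their domain} (dense)."""
    q = dict(p)
    q.setdefault(n, 0)
    return q

def extend_into_Ef(p, f):
    """Extend p into E_f = {conditions disagreeing with f somewhere} (dense)."""
    q = dict(p)
    n = 0
    while n in q and q[n] == f(n):  # look for an existing disagreement
        n += 1
    if n not in q:
        q[n] = 1 - f(n)             # none found: create a disagreement
    return q

# Build a finite approximation to a "generic" real by meeting finitely many
# dense sets (a true generic filter meets countably many simultaneously).
p = {}
ground_model_reals = [lambda n: 0, lambda n: n % 2]  # two "old" reals
for f in ground_model_reals:
    p = extend_into_Ef(p, f)
for n in range(8):
    p = extend_into_Dn(p, n)

new_real = [p[n] for n in range(8)]
print(new_real)  # differs from the all-zeros real and from 0,1,0,1,...
```

This is why the new real is genuinely new: a generic filter meets E_f for *every* ground-model real f, so the real it determines is in none of their graphs.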
So the picture really doesn't help you know how the forcing really works.
I've not seen a source that tries to use the Euclidean topology on the reals as the partial order. In principle, one could try to use Baire space instead (which is homeomorphic to the irrationals), and add a new function ℕ → ℕ, by considering the forcing poset of partial such functions with finite domain. But please take this with a grain of salt, I've not sat down and checked anything like this works!
Mike Shulman said:
My impression is that the answer is that that isn't the only goal. It was the original goal of forcing, but since then set theorists have started using it for other things too. But I don't completely understand.
Hi, my work is in set theory so maybe I should say something. To my mind there are two main advantages of collapsing to a two-valued model V[G]. The first is that repeatedly expanding quantifiers in the forcing extension as meets and joins in the Boolean algebra very quickly becomes cumbersome, especially if we recall that the usual forcing poset is not a complete Boolean algebra, and its completion is not easy to describe. If V[G] is made of the same "stuff" as V, then we can reason as in any model, and only replace one or two key steps with an argument from the poset (an argument involving the dense topology).
This is of course a slightly subjective issue, and certainly a holdover from set theorists' intuition of "model" as being inherently two-valued. However it helps set theorists come up with posets for adding complicated sets on the fly (without having to worry about reasoning in a Boolean algebra that might be unreasonably cumbersome to describe).
A less subjective problem is that it is a very useful property that "V is a transitive submodel of V[G]", in order to transfer logical properties from one to the other. The word "transitive" is hard (though not impossible, due to the work of people in this thread!) to express topos theoretically.
Keith Elliott Peterson said:
I have a newbie question about forcing.
Since the standard open-interval topology on ℝ forms a complete Boolean algebra, can I not use the space ℝ under the inclusion of open intervals as a forcing poset? (This is what I think is shown in Quanta magazine's article How Many Numbers Exist? Infinity Proof Moves Math Closer to an Answer.)
But to me, this just looks like we are extending the reals to the hyperreals by adding in all the infinitesimals.
Likewise, the Wikipedia page on the hyperreals states that the hyperreal field is unique up to order isomorphism iff the continuum hypothesis holds (though it doesn't specify further).
Is adding infinitesimals to ℝ (and likewise infinitesimals of infinitesimals, and infinitesimals of infinitesimals of infinitesimals, etc.) a kind of forcing?
This does seem to make some intuitive sense to me, but I usually doubt my intuitions.
Hi! Thank you for this question, I should point out two things:
First, the open interval topology on ℝ isn't a complete Boolean algebra, but a complete Heyting algebra (for instance the negation of ℝ ∖ {0} is empty, so its double negation is all of ℝ, but this set is not ℝ ∖ {0}). However, we can still force with it using the double negation sublocale (if locales are not your thing, this is just the poset of open sets which are fixed by double negation (the interior-of-closure operation), with intersection and union altered to stay within this poset).
Second, forcing with this poset has some key differences from the hyperreals. To understand them let's call the original set of reals the Old Reals and the set of reals obtained by forcing the New Reals.
In the traditional forcing picture:
a) The new reals remain archimedean, so it's not exactly that the new reals are in the infinitesimal gaps between the old ones (if a new real is < 1/n for all n, it is still 0).
b) It is more like the new reals are the undefinable cousins of the old ones. For instance, if you were working with the computable/constructible reals as the old reals, this is similar to the new reals being "constructible from a Turing jump" or "constructible from the halting problem". So perplexingly, we've not introduced infinitesimal gaps, just more reals densely packed in with the old ones.
However, to complicate things, one can also do forcing with a non-generic ultrafilter U:
V -> W -> W[G]
where W is the "boolean ultrapower by U" and W[G] is a generic extension of W (by a G that is canonically determined by U). In this picture the reals of W are exactly a hyperreal line, and then W[G] extends them by "a few more undefinable reals". In this case the reals of W[G] are not archimedean from the perspective of V, but are archimedean if we use the integers of W (although W doesn't know this, because of the transfer principle).
I should add that we can also force directly with the poset of open sets of ℝ, if we always remember to use the dense topology/use Kripke–Joyal semantics for the dense topology. This does indeed directly add a real as we commonly understand the term, not just a function ℕ → {0, 1} (although of course you can obtain a new such function by looking at the binary expansion of the added real).
Keith Elliott Peterson said:
Since the standard open-interval topology on ℝ forms a complete Boolean algebra,
It doesn't. The complement of an open set is not open, and it's not true that open sets are closed under arbitrary intersections.
Dianthe definitely said that @John Baez :sweat_smile:
Yeah, I don't always read all comments in a long thread before responding to something that catches my eye.
The regular opens form a boolean algebra though, as Dianthe pointed out, under the name of the "double negation sublocale". But forcing doesn't need a complete boolean algebra as input....
Perhaps I should stop trusting ChatGPT (or me misunderstanding, or misrepeating what it outputs).
Either way, whether the forcing poset is a regular-open topology (to make sure the semantics of the new set theory are sufficiently like standard set theory) or a different topology (e.g. an Alexandroff topology from some poset) just seems to be a matter of taste.
However (and I might be wrong here) the use of a regular-open topology "looks" like we are defining a neighborhood filter of these "new" reals, where taking intersections gets us more and more information about the location of these "new" reals. Or in ultrafilter terminology, we are treating the free ultrafilter that corresponds to the regular-open topology as if it actually is a principal ultrafilter.
To me, this seems to be a lot like describing ℝ via Dedekind cuts of ℚ. In fact, could the Dedekind cuts be seen as an ultrafilter construction on ℚ?
As for my comments on the hyperreals, my intuition was about the set ℝ^ℝ, which certainly has (in the standard set theory) size greater than ℝ.
Putting on my SGA cap, I'm thinking of ℝ^ℝ as a real number line sitting above each real number in the base space. So we have ℝ-many real number lines, each indexed by a real number,
with relations such as,
So we now have things like . We could see this as a real number line centered around each real number, which we can view as an infinitesimal halo. So, we can think of a number like as being another name for , for some infinitesimal .
And indeed, to specify a number in this new continuum, we must first give a Cauchy sequence of rationals/Dedekind cuts/whatever to specify the index, and then another infinite Cauchy sequence of rationals/Dedekind cuts/whatever in the fiber over that index to get a "new" real. The union of all these new reals should be our new real number line,
Now I suppose the real (heh) trick here is to consider which functions will count towards the construction.
For a single real, we can certainly think of it as being given by the corresponding constant function over ℝ.
Things get trickier when we consider two reals, but it is still manageable if we remember that open intervals of ℝ are order isomorphic to all of ℝ, so we can specify two open intervals,
of course with . Thus given a , we have as specifying . Likewise for
The other finite cases and the countable case proceed similarly.
For the uncountable case, things are much trickier, and you'll have to make infinitely many choices as to which interval maps to which real number in the base, each choice specifying whether we are using a constant partial function for a given interval or an identity partial function. This is probably where a forcing poset is going to be needed.
Sorry if I'm math rambling.
The collection of regular-open sets in the real line does not form a topology. More generally, in _any_ space, the regular open sets are not closed even under binary union (https://en.wikipedia.org/wiki/Regular_open_set). So "regular open topology" is a misnomer.
You need to stop saying "topology" and start saying "lattice", I think. The join operation on regular opens is not union, though, but the interior of the closure of the union. So in the lattice of regular opens in the real numbers, the join of (-1,0) and (0,1) is not (-1,1) \ {0} (the naive union, the join in the lattice that is the topology), but (-1,1).
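That join can be sanity-checked mechanically. A toy Python sketch (functions and the interval-list representation are mine, for illustration only) for finite unions of open intervals: the lattice-of-opens join is plain union, while the regular-open join is the interior of the closure, which additionally glues intervals touching at a single point:

```python
# Joins of finite unions of open intervals on the real line, represented as
# sorted lists of disjoint (a, b) pairs.

def topology_join(intervals):
    """Plain union (the join of opens): merge only genuinely overlapping intervals."""
    out = []
    for a, b in sorted(intervals):
        if out and a < out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

def regular_join(intervals):
    """int(cl(union)) (the join of regular opens): also merge intervals
    touching at an endpoint, since closure glues them and interior reopens."""
    out = []
    for a, b in sorted(intervals):
        if out and a <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

print(topology_join([(-1, 0), (0, 1)]))  # [(-1, 0), (0, 1)]: the gap at 0 stays
print(regular_join([(-1, 0), (0, 1)]))   # [(-1, 1)]: the regular-open join fills it
```

The only difference between the two functions is `<` versus `<=` in the merge test, which is exactly the difference between unioning opens and taking interior-of-closure.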
Also, by assuming the reals have size aleph_1 and the set of functions from the reals to the reals has size aleph_2, you are assuming more than ZFC, namely a fragment of the generalised continuum hypothesis.
Also, you can't think of R^R as a real number line sitting above each real number in the base space, that's merely RxR, which has the same size as R.
More generally, in _any_ space, the regular open sets are not closed even under binary union
I might be missing something, but as stated, this looks false to me. In a discrete space, the regular opens are closed under binary union, since all opens are regular, and the opens are closed under union. The Alexandroff topology of a bounded linear order constitutes an extreme example of the opposite kind: the only regular opens are the whole space and the empty set, and these trivially form a complete Boolean algebra.
Generally, regular opens are closed under union in an extremally disconnected space.
Sorry, that was bad wording. I should have said: in an arbitrary space, you aren't guaranteed regular opens are closed under binary union.
Keith Elliott Peterson said:
Perhaps I should stop trusting ChatGPT (or me misunderstanding, or misrepeating what it outputs).
You should never have started trusting ChatGPT...
I find it interesting that someone would try to learn math from ChatGPT. I guess it's like talking to a chatty grad student who doesn't really know when they're bullshitting. I actually did that when I was a grad student. It probably was helpful overall. But one of the things I learned was to mentally flag everything interesting they said with mental note: "big iff true" - and then check it. And I tried to switch to talking to grad students I could trust.
This response might be better as an off-topic post.
I'm becoming concerned about the current impact of ChatGPT in mathematics education. As a professional computer scientist I find the mathematics and technology behind ChatGPT to be fascinating. But as the moderator of a mathematics bulletin board I became so overwhelmed with odd postings from new members that I made the board read only. Now educators have to contend with bright students using ChatGPT to help with their assignments. The educational process is being hacked and there needs to be a helpful response to the situation. I have no idea what that might be, but I should research how others are responding.
John Baez said:
I find it interesting that someone would try to learn math from ChatGPT. I guess it's like talking to a chatty grad student who doesn't really know when they're bullshitting. I actually did that when I was a grad student. It probably was helpful overall. But one of the things I learned was to mentally flag everything interesting they said with mental note: "big iff true" - and then check it. And I tried to switch to talking to grad students I could trust.
In my experience, it's more like chatting with a middle schooler who knows lots of fancy math words but has zero idea what they mean. Poor grad students are better than this.
Keith Elliott Peterson said:
Perhaps I should stop trusting ChatGPT (or me misunderstanding, or misrepeating what it outputs).
With my mod hat on: please do some personal research/checks between asking chatgpt and posting here, given that chatgpt is not a reliable source of information. Let's not repeat Daniel's experience and erode people's trust in this space.