Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: understanding "dual" statements


view this post on Zulip David Egolf (Dec 07 2023 at 19:10):

I'm trying to understand this idea: "If a statement $\Sigma$ is true for a partial order $P$, then the 'dual' of that statement, $\Sigma^*$, is true in the dual partial order $P^*$." (For context, I'm reading "Introduction to Lattices and Order" by Davey and Priestley. I'm also referencing this webpage.)

By the dual $P^*$ of a partial order $P$, I mean that $P^*$ and $P$ have the same elements, and $a \leq b$ in $P^*$ exactly if $b \leq a$ in $P$. By the dual of a statement, I mean the same statement where every occurrence of $\leq$ is replaced by $\geq$.
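This definition of the dual order can be sketched in a few lines of Python, representing a finite poset as the set of its related pairs (the divisibility example below is illustrative, not from the thread):

```python
# A finite poset represented as the set of pairs (a, b) with a <= b.
def dual(leq):
    """The dual order P*: a <= b in P* exactly when b <= a in P."""
    return {(b, a) for (a, b) in leq}

# Divisibility order on {1, 2, 4}: a <= b iff a divides b.
P = {(1, 1), (2, 2), (4, 4), (1, 2), (1, 4), (2, 4)}
P_star = dual(P)

assert (2, 1) in P_star   # 2 <= 1 holds in P* because 1 <= 2 holds in P
assert dual(P_star) == P  # dualizing twice recovers P
```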

view this post on Zulip David Egolf (Dec 07 2023 at 19:10):

For example, assume that the following statement $\Sigma$ is true in $P$: "there is an element $a$ so that $a \leq b$ for all $b$". Then I think the dual statement $\Sigma^*$ is: "there is an element $a$ so that $a \geq b$ for all $b$." I think that $\Sigma^*$ is true in $P^*$: we know that $a \leq b$ for all $b$ in $P$, thus $b \leq a$ for all $b$ in $P^*$. So this works out!

view this post on Zulip David Egolf (Dec 07 2023 at 19:12):

I would like to prove in general that if $\Sigma$ is true in $P$, then $\Sigma^*$ is true in $P^*$. But I don't know where to start! I feel like I don't even really know what a "statement" is, or exactly what it means for a statement $\Sigma$ to be "true for a partial order $P$".

Any thoughts are appreciated!

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:18):

What you have is basically correct, but it should be added that if the statement includes operations on a partial order, such as $\wedge$ and $\vee$, then those have to be dualized as well. Let's talk about this.

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:21):

So one very categorically minded definition of $\wedge$ is that it is the cartesian product in a poset. When you spell this out, it means:

$$x \leq a \wedge b \qquad \text{iff} \qquad x \leq a \;\text{and}\; x \leq b$$

(Note: this is a very "Yoneda-esque" description: it uniquely characterizes $a \wedge b$ in terms of "maps", i.e., instances of $\leq$, coming out of arbitrary objects/elements $x$.)

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:24):

Similarly, or rather dually, the characterization of joins $\vee$ is:

$$x \geq a \vee b \qquad \text{iff} \qquad x \geq a \;\text{and}\; x \geq b$$

or (of course)

$$a \vee b \leq x \qquad \text{iff} \qquad a \leq x \;\text{and}\; b \leq x.$$
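As a quick sanity check of the "Yoneda-esque" characterization above, here is a small Python sketch over the lattice of subsets of {1, 2} ordered by inclusion (the poset and helper names are invented for illustration):

```python
# The poset of subsets of {1, 2} ordered by inclusion; frozensets
# compare with <= as the subset relation.
elements = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]

def is_meet(m, a, b):
    """m is the meet of a and b iff, for every x:
    x <= m  iff  (x <= a and x <= b)."""
    return all((x <= m) == (x <= a and x <= b) for x in elements)

a, b = frozenset({1}), frozenset({2})
assert is_meet(a & b, a, b)                  # intersection is the meet
assert not is_meet(frozenset({1, 2}), a, b)  # the top element is not
```

Note that the characterization pins down the meet by quantifying over all elements $x$, just as the universal property of a product does.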

view this post on Zulip David Egolf (Dec 07 2023 at 19:25):

Thanks for pointing this out!

And I think I see that the first bijection you have provided is a particular component of the natural isomorphism which declares $a \land b$ to be a product: $\alpha : P(-, a \land b) \cong P(-, a) \times P(-, b)$. So, $\alpha_x : P(x, a \land b) \cong P(x, a) \times P(x, b)$.

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:26):

That's right.

So $\vee, \wedge$ can be expressed in the core language of posets, where they emerge as concepts dual to one another, and therefore when you dualize statements which involve them, you have to interchange them.

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:32):

Now: the language of posets is (depending on terminological conventions) the collection of all first-order formulas that you can build up using the core relations, constants, and primitive operations of the theory. For posets there is only one core predicate, namely $\leq$. There are no primitive constants in the language of posets per se, although for some extensions of the language you might bring in constant symbols like $0$ (bottom) and $1$ (top), which by the way are dual to one another as well, being the initial/terminal elements. Finally, the core language of posets has no primitive operations per se either, but again, some extensions may include operations like $\vee$ and $\wedge$, and there may be others besides, depending on the language extension.

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:33):

So to make use of this, you have to understand "first-order formulas", but that's a recursive matter, much like algebraic expressions are built up recursively.

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:33):

A "statement" is a first-order formula in which every variable is quantified, either universally or existentially.

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:34):

Finally, to take the dual of a statement (in the core language), you reverse all instances of $\leq$ in your statement, but otherwise leave its logical structure unchanged.

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:41):

Duality is an absolutely fundamental theme in category theory, and this was recognized from the very beginning by Eilenberg and Mac Lane: I think they might have mentioned the analogy with duality in projective geometry, or duality in the theory of Boolean algebras, but I don't remember for sure. Anyway, like the theory of posets, the theory of categories is "self-dual". In essence this means the following: in the core language, which specifies function symbols $\mathrm{dom}$ and $\mathrm{cod}$, identity function symbols, and the composition symbol, the "dual" of a statement is obtained by swapping $\mathrm{dom}$ and $\mathrm{cod}$, swapping instances of $f \circ g$ with $g \circ f$, and leaving everything else unchanged; then the dual of any theorem in the theory of categories is again a theorem. This is because the dual of an axiom in the theory of categories is again an axiom in this theory.

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:43):

I think I will stop there for now.

view this post on Zulip David Egolf (Dec 07 2023 at 19:45):

Thanks, that is all quite helpful! That gives me a lot of things to look up and review. (I looked at "first-order formulas" a while ago, but I've forgotten most of the details! This gives me a good reason to review that.)

Todd Trimble said:

This is because the dual of an axiom in the theory of categories is again an axiom in this theory.

I bet an analogous statement is important for proving what I want to prove for partial orders!

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:45):

Absolutely. The theory of posets is also self-dual (exercise!).

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:51):

Some of this can be confusing at times. When we teach Boolean algebra, for example, and dualize a statement, the symbols $\wedge$ and $\vee$ which belong to the object language are interchanged, but not the words "and" and "or" in the meta-language that we use to talk about logical formulas in the theory!

view this post on Zulip Todd Trimble (Dec 07 2023 at 19:52):

Those meta-usages are unchanged in the process of dualizing.

view this post on Zulip David Egolf (Dec 07 2023 at 20:09):

Todd Trimble said:

Those meta-usages are unchanged in the process of dualizing.

That makes sense! I was just thinking about this, trying to imagine an inductive procedure to show that more and more statements stay true when dualized. We might first show that if $a$ is true in $P$ then $a^*$ is true in $P^*$, and if $b$ is true in $P$ then $b^*$ is true in $P^*$.

view this post on Zulip David Egolf (Dec 07 2023 at 20:09):

Then we might assume that "$a$ and $b$" is true in $P$ and ask if "$a^*$ and $b^*$" is true in $P^*$. Since "$a$ and $b$" is true, then $a$ is true and $b$ is true (all in $P$), therefore $a^*$ is true and $b^*$ is true (in $P^*$), and hence "$a^*$ and $b^*$" is true in $P^*$.

view this post on Zulip David Egolf (Dec 07 2023 at 20:11):

So the "and" didn't need to get changed as we moved from $P$ to $P^*$.
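The inductive procedure being discussed here can be sketched as a recursive transformation on formulas; the tuple-based syntax below is invented purely for illustration:

```python
# Formulas as nested tuples: ('leq', s, t), ('and', f, g),
# ('forall', var, f), ('exists', var, f).
def dualize(phi):
    """Swap <= for >= (i.e. reverse its arguments); leave the
    meta-level connectives and quantifiers untouched."""
    tag = phi[0]
    if tag == 'leq':
        _, s, t = phi
        return ('leq', t, s)  # a <= b becomes b <= a
    if tag == 'and':
        _, f, g = phi
        return ('and', dualize(f), dualize(g))
    if tag in ('forall', 'exists'):
        _, v, f = phi
        return (tag, v, dualize(f))
    raise ValueError(tag)

# "there is an a with a <= b for all b" dualizes to
# "there is an a with b <= a for all b"
bottom = ('exists', 'a', ('forall', 'b', ('leq', 'a', 'b')))
assert dualize(bottom) == ('exists', 'a', ('forall', 'b', ('leq', 'b', 'a')))
assert dualize(dualize(bottom)) == bottom  # dualizing is an involution
```

The `'and'` and quantifier cases recurse without changing the connective, which is exactly the point that meta-level "and" is untouched.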

view this post on Zulip Todd Trimble (Dec 07 2023 at 20:25):

Yes, exactly.

view this post on Zulip Leopold Schlicht (Dec 07 2023 at 21:17):

Todd Trimble said:

A "statement" is a first-order formula

Why first-order?

view this post on Zulip Todd Trimble (Dec 07 2023 at 21:33):

Doesn't have to be, necessarily. But typical texts on logic often stop at this point, for a variety of reasons, so it's a useful stopgap that seems appropriate for this level of conversation. Did you want to say something more on this?

view this post on Zulip John Baez (Dec 07 2023 at 22:06):

@David Egolf wrote:

I would like to prove in general that if $\Sigma$ is true in $P$, then $\Sigma^*$ is true in $P^*$. But I don't know where to start! I feel like I don't even really know what a "statement" is, or exactly what it means for a statement $\Sigma$ to be "true for a partial order $P$".

There are lots of levels of generality for what we can take a "statement" to be, and the precise details of how we dualize a statement depend on what framework we use to define "statements".

view this post on Zulip John Baez (Dec 07 2023 at 22:08):

I was going to recommend starting with the approach of first-order logic, since this is a classic approach to working with "statements", but I see now that Todd Trimble beat me to it:

A "statement" is a first-order formula in which every variable is quantified, either universally or existentially.

view this post on Zulip John Baez (Dec 07 2023 at 22:09):

where "first-order formula" needs to be defined inductively. I'm actually glad Todd beat me to it, because it takes a fair amount of work.

view this post on Zulip John Baez (Dec 07 2023 at 22:14):

We could then move on to second-order logic, etcetera, but many of the classic results in logic (like Gödel's completeness theorem, soundness, compactness, etc.) work nicely for first-order logic, so I think it's good to get comfy with first-order logic before moving on.

view this post on Zulip Leopold Schlicht (Dec 07 2023 at 22:36):

John Baez said:

We could then move on to second-order logic, etcetera, but many of the classic results in logic (like Gödel's completeness theorem, soundness, compactness, etc.) work nicely for first-order logic

These results are completely unrelated to the duality principle under discussion.

view this post on Zulip John Baez (Dec 07 2023 at 22:37):

I know. I'm just trying to explain to David Egolf why it's good to learn first-order logic.

view this post on Zulip John Baez (Dec 07 2023 at 22:40):

Actually I think it's going too far to say anything in logic is completely unrelated to anything else: I feel confident that with a little work I could find a fun application of soundness or completeness to the first-order theory of posets, and how this theory is "self-dual".

But I was really just trying to get David more interested in logic.

view this post on Zulip Leopold Schlicht (Dec 07 2023 at 22:42):

Todd Trimble said:

Doesn't have to be, necessarily. But typical texts on logic often stop at this point, for a variety of reasons, so it's a useful stopgap that seems appropriate for this level of conversation. Did you want to say something more on this?

I just wondered why stop at first-order when discussing the duality principle David mentioned. Many properties of partial orders like "well-ordered" are higher-order.

view this post on Zulip John Baez (Dec 07 2023 at 22:43):

If you're teaching someone first-order logic, as Todd just did, maybe you'd want to wait a few days before explaining second-order logic.

view this post on Zulip David Egolf (Dec 07 2023 at 22:50):

John Baez said:

We could then move on to second-order logic, etcetera, but many of the classic results in logic (like Gödel's completeness theorem, soundness, compactness, etc.) work nicely for first-order logic, so I think it's good to get comfy with first-order logic before moving on.

Yes, I hope to get a bit more comfortable with some first-order logic by thinking about the topic of this thread. I am hopeful that having a specific question I want to answer (as described above) will be helpful motivation!

view this post on Zulip David Egolf (Dec 07 2023 at 22:52):

One thing that I'm noticing right away is that we have two "layers" going on:

view this post on Zulip David Egolf (Dec 07 2023 at 22:54):

Referencing the below definition of an alphabet of a first-order language, I am wondering if the "inside the poset" expressions need to be our variables. Or maybe instead our variables are the elements of the poset and we have a "relation symbol" $\leq$ in our first-order alphabet. Probably this second approach is simpler!

alphabet of a first-order language

view this post on Zulip Leopold Schlicht (Dec 07 2023 at 23:05):

David Egolf said:

I would like to prove in general that if $\Sigma$ is true in $P$, then $\Sigma^*$ is true in $P^*$. But I don't know where to start!

I don't think you will be able to formulate such a theorem in the first place! Here is a related MathOverflow question.

view this post on Zulip Leopold Schlicht (Dec 07 2023 at 23:19):

I wonder how people would use the duality principle in a proof assistant. Is that what "metaprogramming" is?

view this post on Zulip John Baez (Dec 07 2023 at 23:21):

David Egolf said:

One thing that I'm noticing right away is that we have two "layers" going on:

This is because you're studying the theory of a poset using first-order logic, which has its own poset structure (called implication).

That's a bit more confusing than if you were studying, say, the theory of a group using first-order logic.

view this post on Zulip John Baez (Dec 07 2023 at 23:22):

If you were studying that, the concept you'd be studying would not be so similar to the framework you were studying it in.

But if you want to study duality, the theory of posets is a nice example. (The theory of projective planes also has a nice duality, of a rather different sort.)

view this post on Zulip John Baez (Dec 07 2023 at 23:25):

Leopold Schlicht said:

David Egolf said:

I would like to prove in general that if $\Sigma$ is true in $P$, then $\Sigma^*$ is true in $P^*$. But I don't know where to start!

I don't think you will be able to formulate such a theorem in the first place!

Of course he can formulate such a theorem. Todd Trimble began outlining how. There are many such theorems along these lines, but Todd is leading him to a nice simple one.

view this post on Zulip Leopold Schlicht (Dec 07 2023 at 23:32):

Note that David in his question didn't mention that the "statements" he is talking about are first-order. First-order sentences almost surely don't capture what he meant by "statement" (since, for instance, the duality principle should apply to the property of being well-ordered).

view this post on Zulip David Egolf (Dec 07 2023 at 23:40):

Leopold Schlicht said:

Note that David in his question didn't mention that the "statements" he is talking about are first-order. First-order sentences almost surely don't capture what he meant by "statement" (since, for instance, the duality principle should apply to the property of being well-ordered).

I'm happy to start with simpler statements, like first-order sentences! If I can understand how those behave with respect to "duality" maybe I can start thinking about more complex ones. But I appreciate you pointing out that more complex statements are important!

view this post on Zulip Leopold Schlicht (Dec 07 2023 at 23:52):

That's a nice exercise if you want to familiarize yourself with first-order logic! I didn't know that that's what you are interested in from reading your question.

However, if you are interested in making precise the duality principle you mentioned, I claim that no formal logic captures what you mean by "statement", though. So the only way I see to make the duality principle precise would be to investigate how such a situation is handled in a proof assistant.

view this post on Zulip Mike Shulman (Dec 08 2023 at 00:17):

If we're just talking about duality of a single poset (or category), then it seems to me that any sort of statement you like should be captured by saying there is an involution on the syntax of the theory "T + a poset", where T is whatever foundational theory of mathematics you like (ZFC, MLTT, HoTT, etc.), which intertwines with the duality construction on models of that theory. The MO question you linked to is about duality statements involving multiple categories and functors between them, which is somewhat more complicated, but is probably still captured by involutions of a theory like "T + a diagram of posets/categories, functors, and natural transformations between them".

view this post on Zulip Ralph Sarkis (Dec 08 2023 at 06:18):

I really like to rephrase statements about partial orders in the calculus of relations. It is a language without variables (which to me makes it feel more categorical), and it makes duality very easy to see (almost too trivial) because there is a converse operation changing $\leq$ into $\geq$. One caveat is that it is not as expressive as FO logic.

view this post on Zulip Ralph Sarkis (Dec 08 2023 at 06:18):

Here is a simple example (first in FO logic). (EDIT: ok, I wrote these right after waking up and I didn't realize any poset is dense and reflexive, so the properties are not that interesting...)
Def: We say a poset $(X, \leq)$ is dense if

$$\forall x.\, \forall y.\, x \leq y \implies \exists z.\, x \leq z \wedge z \leq y.$$

In words, you can always find an element between two others, like inside the real or rational numbers ($\mathbb{Q}$ and $\mathbb{R}$ with the strict orders $<$ are dense).

view this post on Zulip Ralph Sarkis (Dec 08 2023 at 06:18):

Def: We say a poset $(X, \leq)$ is reflexive if

$$\forall x.\, x \leq x.$$

In words, every element is related to itself in the order.

view this post on Zulip Ralph Sarkis (Dec 08 2023 at 06:18):

Thm: If $(X, \leq)$ is reflexive, then it is dense.
Proof: For any $x \leq y$ in $X$, let $z = x$ (or $z = y$); by reflexivity, we have $x \leq z \leq y$.

view this post on Zulip Ralph Sarkis (Dec 08 2023 at 06:19):

Now the same example with the calculus of relations (I can go into the details of why these are equivalent formulations).
Def: We say that a poset $(X, \leq)$ is dense if ${\leq} \subseteq {\leq}\,;{\leq}$.
Def: We say that a poset $(X, \leq)$ is reflexive if $\mathrm{id}_X \subseteq {\leq}$.
Proof of Thm: ${\leq} \subseteq \mathrm{id}_X\,;{\leq} \subseteq {\leq}\,;{\leq}$.
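These point-free definitions can be checked mechanically in Python, with a relation as a set of pairs and `;` as relational composition (this encoding is an illustrative sketch):

```python
def compose(R, S):
    """Relational composition R ; S = {(x, z) : exists y with xRy and ySz}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

# A small reflexive order: the usual <= on {0, 1, 2}.
X = range(3)
leq = {(a, b) for a in X for b in X if a <= b}
identity = {(a, a) for a in X}

assert identity <= leq           # reflexive:  id_X  subseteq  <=
assert leq <= compose(leq, leq)  # hence dense:  <=  subseteq  <= ; <=
```

Here Python's `<=` on sets plays the role of $\subseteq$, so the two assertions are literally the two point-free definitions.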

view this post on Zulip Morgan Rogers (he/him) (Dec 08 2023 at 09:27):

Leopold Schlicht said:

However, if you are interested in making precise the duality principle you mentioned, I claim that no formal logic captures what you mean by "statement", though. So the only way I see to make the duality principle precise would be to investigate how such a situation is handled in a proof assistant.

In order to formally discuss statements, you need to fix a language (you need to decide whether you allow joins and meets in your statements, say). Once you've made that choice, I would argue that you in fact can prove a duality meta-theorem by arguing by substitution into proofs. That is, let $\pi$ be a proof that $P$ satisfies $\Sigma$; then the proof $\pi^*$ obtained by substituting every formula by its syntactic dual is a proof that $P^*$ satisfies $\Sigma^*$. The only requirement here is that of invariance of deduction rules under dualization; you don't need to have decided in advance what your deduction rules are (i.e. which fragment of logic you're working in).

view this post on Zulip David Egolf (Dec 08 2023 at 18:46):

I've just learned about induction in the calculus of formulas for a first-order language. I think this will be a helpful tool for proving that if $\Sigma$ is any formula in the first-order language for a poset $P$, then $\Sigma^*$ is true in $P^*$ exactly when $\Sigma$ is true in $P$.

To do this, we first need to define what the formulas are for the poset $P$. To do that, we first define what terms are. And to define what terms are, we need to first define the alphabet for $P$. The alphabet for $P$ is a set of symbols containing exactly the following:

view this post on Zulip David Egolf (Dec 08 2023 at 18:51):

The terms for the poset $P$ are exactly the variables of our alphabet, the elements of $P$.

The formulas for $P$ are exactly the strings obtained by finitely many applications of the following rules:

view this post on Zulip David Egolf (Dec 08 2023 at 18:51):

view this post on Zulip Todd Trimble (Dec 08 2023 at 20:39):

I had begun writing a long response, but everything got screwed up because one of the comments was "too long" for this stupid zulip to handle -- and then everything got fouled up.

If I have the time and energy, then I'll reconstruct the series of messages. But maybe someone else would like to intervene (I hope so -- that would save me from having to do it all over again).

view this post on Zulip David Egolf (Dec 08 2023 at 20:51):

Todd Trimble said:

I had begun writing a long response, but everything got screwed up because one of the comments was "too long" for this stupid zulip to handle -- and then everything got fouled up.

If I have the time and energy, then I'll reconstruct the series of messages. But maybe someone else would like to intervene (I hope so -- that would save me from having to do it all over again).

I have been struggling a lot with this problem! I noticed that long messages tend to (thankfully) still get saved to "drafts", which might help in your case too, depending on what exactly went wrong.

view this post on Zulip Todd Trimble (Dec 08 2023 at 20:57):

No, there are no drafts of mine, alas.

view this post on Zulip John Baez (Dec 09 2023 at 11:22):

David Egolf said:

I've just learned about induction in the calculus of formulas for a first-order language. I think this will be a helpful tool for proving that if $\Sigma$ is any formula in the first-order language for a poset $P$, then $\Sigma^*$ is true in $P^*$ exactly when $\Sigma$ is true in $P$.

I agree with your main point here: since formulae are defined inductively, to prove a claim about all formulae it's very natural (and maybe even inevitable?) that you'll use induction.

view this post on Zulip John Baez (Dec 09 2023 at 11:23):

But I have a nitpick to point out:

Note that a formula can have free variables, while a statement is a formula with no free variables. It makes plenty of sense to say a statement is true in a poset $P$, but not that a formula is true in a poset $P$. Here's a formula:

$$x \le y$$

Is this true in the poset $(\mathbb{N}, \le)$? Is it false? Not really either. It does however define a function from $\mathbb{N}^2$ to truth values, sending each pair $(x, y) \in \mathbb{N}^2$ to the truth value of $x \le y$. Or in other words, it defines a binary relation on $\mathbb{N}$.

view this post on Zulip John Baez (Dec 09 2023 at 11:23):

So, instead of saying

if $\Sigma$ is any formula in the first-order language for a poset $P$, then $\Sigma^*$ is true in $P^*$ exactly when $\Sigma$ is true in $P$.

it might be better to say it a bit differently.

Of course, it's possible you meant exactly what I'm trying to make explicit here! And in fact I don't even know the standard jargon for what I'm trying to say.

view this post on Zulip John Baez (Dec 09 2023 at 11:29):

But anyway, an induction over formulae should make it really easy to prove what you're trying to prove.

view this post on Zulip John Baez (Dec 09 2023 at 14:05):

So here's how I might rather clumsily state what you're trying to prove:

If $\Sigma$ is a formula with $n$ free variables in the first-order language of posets, then for any poset $P$ it defines a subset $S(\Sigma, P) \subseteq P^n$, namely the set of $n$-tuples for which the formula holds. There is a "dual" formula $\Sigma^\ast$, defined inductively, and also a dual poset $P^\ast$ (usually called the opposite poset), and

$$S(\Sigma^\ast, P^\ast) = S(\Sigma, P) \subseteq P^n$$

where I'm using the fact that the underlying set of $P^\ast$ equals that of $P$.
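This equation can be spot-checked in Python for the formula $x \le y$ on a three-element chain (the encoding and names below are an illustrative sketch, not a general formula evaluator):

```python
from itertools import product

def truth_set(leq, elements, formula):
    """S(Sigma, P): the tuples satisfying a formula with free
    variables, here specialized to binary formulas phi(x, y)."""
    return {(x, y) for x, y in product(elements, repeat=2)
            if formula(leq, x, y)}

phi      = lambda leq, x, y: (x, y) in leq  # Sigma  : x <= y
phi_dual = lambda leq, x, y: (y, x) in leq  # Sigma* : x >= y

X = range(3)
P      = {(a, b) for a in X for b in X if a <= b}
P_star = {(b, a) for (a, b) in P}           # the opposite poset

# S(Sigma*, P*) = S(Sigma, P), as subsets of the common underlying set.
assert truth_set(P_star, X, phi_dual) == truth_set(P, X, phi)
```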

view this post on Zulip Todd Trimble (Dec 09 2023 at 15:03):

Yesterday I had started a series of comments regarding the use of variables in logic, with some appropriate categorical glosses on the subject, because "variables, which are exactly the elements of $P$" (back here) had triggered me. But then Zulip punished me by carping about a too-long comment, and I couldn't put my comments back in order, and I gave up.

view this post on Zulip Todd Trimble (Dec 09 2023 at 15:04):

Variables, which are usually introduced to kids starting around middle school age, are usually explained not too well in that time period, and can cause a lot of puzzlement (in what sense does a letter "vary"?). In later courses before a more sophisticated audience, a certain amount of wind has to be expended on the syntax of free variables, bound variables, and so on. The notion of a truth set of a formula (which John denoted as $S(\Sigma, P)$), a semantic concept, becomes pretty important around this point.

view this post on Zulip Todd Trimble (Dec 09 2023 at 15:04):

For someone who likes category theory, I think one can get a lot of explanation-mileage by talking about the uses of free structures in mathematics and the role that so-called "variables" play there. At some point the yoga of "quantifiers as adjoints" also comes into play, rather beautifully.

view this post on Zulip John Baez (Dec 09 2023 at 15:42):

Yes, @Todd Trimble, I can easily imagine you getting lured into a quite precise account of how variables in first-order logic are connected to - but certainly not the same as! - elements in a specific model of a specific theory! Getting this right indeed requires "expending a certain amount of wind" - yet it's worthwhile, because it's part of the whole story about syntax versus semantics... a story that's key to understanding mathematical logic. There are old-fashioned versions of this story still worth hearing, and also modern versions that use category theory.

So it's a pity you were punished by Zulip, the goddess of brevity.

view this post on Zulip Mike Shulman (Dec 09 2023 at 16:11):

John Baez said:

Note that a formula can have free variables, while a statement is a formula with no free variables.

A formula can definitely have free variables, but I'm more used to the word "sentence" for a formula with no free variables. I'm not very familiar with "statement" being given a formal meaning at all, and the Internet didn't help me find examples. Do you have a reference?

view this post on Zulip John Baez (Dec 09 2023 at 16:17):

No, I just assumed that when @David Egolf said "statement" he meant "sentence", so I went along with his terminology.

view this post on Zulip John Baez (Dec 09 2023 at 16:19):

If by "statement" he meant "formula", then a lot of my comments were a bit off the mark. But he said stuff like "a statement is true in $P$", which made me think he was talking about a sentence being satisfied in some model of the theory of posets.

view this post on Zulip Todd Trimble (Dec 09 2023 at 17:19):

Some introductory textbooks use "statement" to mean a sentence (i.e., a grammatical form with nouns and verbs) that has a definite truth value (when interpreted in a model), and I've gotten used to that usage.

view this post on Zulip Mike Shulman (Dec 09 2023 at 20:00):

Ah, I think I've seen that before too.

view this post on Zulip David Egolf (Dec 10 2023 at 06:13):

John Baez said:

But I have a nitpick to point out:

Note that a formula can have free variables, while a statement is a formula with no free variables. It makes plenty of sense to say a statement is true in a poset $P$, but not that a formula is true in a poset $P$. Here's a formula:

$$x \le y$$

Is this true in the poset $(\mathbb{N}, \le)$? Is it false? Not really either. It does however define a subset of $\mathbb{N}^2$, the set of pairs $(x, y)$ for which $x \le y$.

Thanks for raising this point! Let me look back at what I said: "if $\Sigma$ is any formula in the first-order language for a poset $P$, then $\Sigma^*$ is true in $P^*$ exactly when $\Sigma$ is true in $P$".

I think what I meant is this: if the formula $\Sigma$ has a truth value in the context of $P$, and that truth value is "true", then the formula $\Sigma^*$ is also assigned the truth value "true" in the context of $P^*$. So I was trying to keep in mind the idea that some formulas don't necessarily have a truth value assigned to them.

view this post on Zulip David Egolf (Dec 10 2023 at 06:13):

On a related note, I haven't said how I'm assigning truth values to a formula. That seems like something important I should try to understand! Considering your example $x \leq y$: if $x$ and $y$ are both variables, that means that (by my definition above for the variables in the alphabet for $P$) both $x$ and $y$ are specific elements of $P$. [I now realize that this definition is not consistent with the definition of "alphabet" I am referencing.] So it seems like $x \leq y$ is the sort of formula I would want to assign a truth value to: it should be true if in the poset $P$ we really have $x \leq y$ (and it should be false otherwise).

Interestingly, your discussion of the formula $x \leq y$ seems to indicate that this formula specifies something that can be true or false, and only takes on a particular truth value once we assign specific poset elements to $x$ and $y$. We then end up having some subset of $P^2$ for which $x \leq y$ is true. So it seems like we are considering how to assign a truth value to $x \leq y$ in quite different ways! Upon reflection, I think this is because I was confused about how to use "variables".

view this post on Zulip David Egolf (Dec 10 2023 at 06:17):

I think my (incorrect) understanding of "variables" was that they are a set of symbols that one is free to define as one wishes. I was employing this freedom to define the set of variables in terms of the set of elements in $P$. But I think now that I actually don't have this freedom according to the definition of an alphabet I am trying to use. Instead, I believe that "variables" are supposed to be a fixed set of symbols that I don't get to choose, and in particular I don't get to define this set in terms of the elements of a chosen poset.

view this post on Zulip David Egolf (Dec 10 2023 at 06:24):

Here's the definition of an alphabet for a first-order language that I want to use:
alphabet

I see that the set of variables $v_0, v_1, v_2, \dots$ is fixed in this definition. So, I was not working with this definition correctly when I defined the set of variables to be the set of elements of the poset $P$!

In summary, I think your comments helped me realize that I was confused about what a "variable" is in the context of the above definition for an alphabet of a first-order language. That should help me as I continue to think about this!

view this post on Zulip Mike Shulman (Dec 10 2023 at 07:03):

Fundamentally, a variable (in mathematics) is a placeholder, like $\underline{\;\;}$ or $\Box$, that can be filled with anything (of the appropriate type). So if I ask you what the value of $x^2 + y^2$ is, you can't answer (with a number) until I tell you what numbers to plug in for $x$ and $y$. Similarly, if I ask you whether $x \le y$, you can't answer until I tell you what elements of the poset to plug in for $x$ and $y$.

view this post on Zulip Mike Shulman (Dec 10 2023 at 07:05):

By contrast with variables, a formal language can also contain constants, which are symbols that have a particular fixed meaning (once a particular model is chosen). So for instance in the theory of the real numbers, there are constants like $0$ and $1$ and $\pi$ and so on. So if you start from some poset and are constructing a first-order theory for it, you could choose to include constants corresponding to some or all of the elements of that poset.

view this post on Zulip John Onstead (Dec 10 2023 at 08:40):

I'm not sure if this is relevant at all, and I know almost no categorical logic, but I wanted to mention it as it involves logic, statements, and posets. What if there was a category where the objects were true statements and morphisms were a form of implication? For instance, if statement A is true implies that statement B is true, then there is a morphism A -> B. And if you need statements A, B, and C to show that D is true, then you can have some sort of product (like a tensor product) A x B x C -> D. I believe this would form a poset? Please correct me if I'm wrong!

view this post on Zulip Mike Shulman (Dec 10 2023 at 08:50):

If you change "the objects are true statements" to "the objects are all statements", then what you have is the [[Lindenbaum algebra]] of the theory. This is the "0-categorical" version of the [[syntactic category]] that plays an essential role in categorical logic.

view this post on Zulip Mike Shulman (Dec 10 2023 at 08:52):

Hmm, the nLab page [[Lindenbaum algebra]] redirects to [[propositional theory]], which is not very helpful. But [[Lindenbaum-Tarski algebra]] is not much better, nor is the wikipedia page on Lindenbaum-Tarski algebra. Someone should improve the situation...

view this post on Zulip John Onstead (Dec 10 2023 at 09:17):

True, the actually relevant information I found was under "type theory", of all places: [[type theory]]

view this post on Zulip David Egolf (Dec 10 2023 at 20:31):

With my better understanding of variables, I now define the alphabet for our first-order language to be:

The terms are then the variables and the constants.
The formulas are then defined inductively essentially as before (except now our terms are different).

view this post on Zulip David Egolf (Dec 10 2023 at 20:34):

Next, to start to interpret these formulas, we can define a "structure" consisting of:

This structure will help us interpret formulas in PP, and I think will help us talk about whether formulas (or sentences in particular) are true or false.

view this post on Zulip Mike Shulman (Dec 10 2023 at 20:38):

That's looking good! However, it's a bit archaic to describe formulas as untyped strings involving explicit parentheses and so on. When one is actually implementing a proof assistant, there is of course a parsing step that has to take an untyped string of characters as input and know how to reject malformed strings like )x¬y\le)x\neg\wedge y, but when doing mathematics it's generally better to leave that step for the programmers and work with abstract syntax trees.

view this post on Zulip Todd Trimble (Dec 10 2023 at 20:42):

Building a language (or theory) based on a given fixed poset PP, in particular positing a constant term for each element of PP, is something you can do, and it's also a maneuver that is used for certain purposes in model theory, but I'm not quite clear on why you feel a need for this move here.

(If it were me doing the exercise, I think I'd work with the language of posets plain and simple, and not deal with all these constants running around. The language of posets doesn't have constants.)

view this post on Zulip Mike Shulman (Dec 10 2023 at 20:44):

An AST is a rooted tree where each vertex is labeled by a symbol of the alphabet (now without parentheses) and has the correct number of children for that operator, labeled by things of the correct type. So a node labeled \le has two children labeled by terms and is itself a formula, a node labeled \wedge has two children labeled by formulas and is itself another formula, a node labeled by a variable has no children and is a term, etc.
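
This tree picture can be sketched in code. A minimal illustration in Python, assuming class names of my own choosing (`Var`, `Leq`, `And`, `Not`, `Forall`); a sketch of the idea, not notation from the thread:

```python
from dataclasses import dataclass

# Terms: in the plain language of posets, the only terms are variables.
@dataclass(frozen=True)
class Var:
    name: str              # e.g. "v0", "v1", ...

# Formulas: each node records its operator and has the correct
# number of children for that operator -- no parentheses needed.
@dataclass(frozen=True)
class Leq:                 # atomic formula  t1 <= t2: two term children
    left: Var
    right: Var

@dataclass(frozen=True)
class And:                 # conjunction: two formula children
    left: object
    right: object

@dataclass(frozen=True)
class Not:                 # negation: one formula child
    body: object

@dataclass(frozen=True)
class Forall:              # quantifier: binds a variable in one formula child
    var: Var
    body: object

# The string "v0 <= v1 and v1 <= v0" becomes a tree:
phi = And(Leq(Var("v0"), Var("v1")), Leq(Var("v1"), Var("v0")))
```

Here well-formedness is enforced by the shape of the tree itself, which is the point of working with abstract syntax rather than raw strings.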

view this post on Zulip David Egolf (Dec 10 2023 at 23:57):

Todd Trimble said:

Building a language (or theory) based on a given fixed poset PP, in particular positing a constant term for each element of PP, is something you can do, and it's also a maneuver that is used for certain purposes in model theory, but I'm not quite clear on why you feel a need for this move here.

(If it were me doing the exercise, I think I'd work with the language of posets plain and simple, and not deal with all these constants running around. The language of posets doesn't have constants.)

I think I want to be able to consider formulas where we are comparing (using \leq) a specific element of the poset to a variable that can range over different elements in the poset. I thought that introducing these constants would make it easy to consider such formulas. But maybe these constants aren't needed to achieve this! I can try proceeding without the constants, and see what happens...

view this post on Zulip David Egolf (Dec 11 2023 at 00:02):

Mike Shulman said:

An AST is a rooted tree where each vertex is labeled by a symbol of the alphabet (now without parentheses) and has the correct number of children for that operator, labeled by things of the correct type. So a node labeled \le has two children labeled by terms and is itself a formula, a node labeled \wedge has two children labeled by formulas and is itself another formula, a node labeled by a variable has no children and is a term, etc.

Thanks for telling me about abstract syntax trees! If we were to define formulas using these trees instead of strings of symbols, I am guessing the process is similar? We presumably define some small "basic" trees (that correspond to certain simple formulas) and then describe some operations that assemble smaller trees into bigger trees (presumably using a symbol like \land as the root of the new tree). So we'll again end up with an inductive definition for formulas, but now in terms of trees.

view this post on Zulip Todd Trimble (Dec 11 2023 at 00:31):

David Egolf said:

Todd Trimble said:

Building a language (or theory) based on a given fixed poset PP, in particular positing a constant term for each element of PP, is something you can do, and it's also a maneuver that is used for certain purposes in model theory, but I'm not quite clear on why you feel a need for this move here.

(If it were me doing the exercise, I think I'd work with the language of posets plain and simple, and not deal with all these constants running around. The language of posets doesn't have constants.)

I think I want to be able to consider formulas where we are comparing (using \leq) a specific element of the poset to a variable that can range over different elements in the poset. I thought that introducing these constants would make it easy to consider such formulas. But maybe these constants aren't needed to achieve this! I can try proceeding without the constants, and see what happens...

There's nothing (that I can see) that is particularly wrong with doing what you are doing, so don't worry about my comment too much. Whatever sees the problem through to completion is fine. I was just wondering what you had in mind, and you've explained, so go ahead and carry on.

view this post on Zulip John Baez (Dec 11 2023 at 10:01):

There's a conceptual difference, of course, between using "the language of posets", which is essentially the bare minimum version of first-order logic needed to write down the axioms for posets, and a language that also has a constant for each element of a specific poset PP. In the former there's a standard way to get an nn-ary relation on any poset from a formula with nn free variables, whereas in the latter it's only reasonable to hope for such relations on a single poset, namely PP. So the former allows for some tricks that the latter does not.

For example, we can check that given an isomorphism of posets f:PPf: P \to P', any sentence holds in PP iff the same sentence holds in PP'. (Remember, sentences are formulae with no free variables.)

view this post on Zulip John Baez (Dec 11 2023 at 10:07):

But for the question you're considering, you are lucky: PP and PP^\ast have the same underlying set, so you'll give each one the same language even if your language includes a constant for each element of your poset.

view this post on Zulip Mike Shulman (Dec 11 2023 at 16:41):

John Baez said:

PP and PP^\ast have the same underlying set, so you'll give each one the same language even if your language includes a constant for each element of your poset.

Well, you'll have the same set of formulas, but not the same set of axioms: the axiom aba\le b for PP becomes the axiom bab\le a for PP^\ast. So yes if by "language" you mean the set of formulas, but I think usually the whole phrase "internal language" refers to the whole theory, including the axioms.

view this post on Zulip John Baez (Dec 11 2023 at 17:36):

I was using "language" in the sense of (old-fashioned?) first-order classical logic, e.g.:

view this post on Zulip John Baez (Dec 11 2023 at 17:39):

That's the sense I grew up with: first you choose a language, which specifies "what you can say", and then you choose axioms, which specify "what's true".

view this post on Zulip John Baez (Dec 11 2023 at 17:41):

I think "internal language" is a later concept - I don't know who started talking about that, but maybe it came along after the "Mitchell-Benabou language of a topos"?

view this post on Zulip David Egolf (Dec 11 2023 at 19:03):

Now that I've defined a structure (although I probably want to change it, because I suspect I don't need the constants I introduced!), we can introduce the concept of an "assignment". An assignment β\beta is a function from the set of variables {v0,v1,}\{v_0, v_1, \dots\} to the "domain" set in our structure. In this case, β:{v0,v1,}A\beta: \{v_0, v_1, \dots\} \to A, where AA is the underlying set of the poset PP.

view this post on Zulip David Egolf (Dec 11 2023 at 19:03):

Then, an "interpretation" JJ of our terms and formulas is given by a pair: a structure and an assignment in that structure. For a formula ϕ\phi, we say that JϕJ \models \phi if applying our structure and assignment to ϕ\phi yields a true statement in the context of our structure. (The definition in the book I'm following is much longer, but I hope this is the basic idea).

view this post on Zulip David Egolf (Dec 11 2023 at 19:04):

I hope I can state the thing I want to prove this way:

view this post on Zulip David Egolf (Dec 11 2023 at 19:04):

view this post on Zulip David Egolf (Dec 11 2023 at 19:16):

To give an example, if our original formula is v0v1v_0 \leq v_1, our structure is the underlying set of PP and the P\leq^P from PP, and our assignment is β:v0a\beta: v_0 \mapsto a and β:v1b\beta: v_1 \mapsto b for a,bPa,b \in P, then this formula becomes aPba \leq^P b. If JJ is the interpretation we just used, then Jv0v1J \models v_0 \leq v_1 exactly if aPba \leq^P b is true.
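
This example can be computed mechanically: the structure supplies a carrier set and a relation interpreting the symbol ≤, the assignment supplies values for the variables, and the atomic formula holds exactly when the pair of assigned values lies in the relation. A small sketch (the name `satisfies_atomic` is my own):

```python
# A toy structure: carrier set A and the relation interpreting <=,
# stored extensionally as a set of pairs. Here P is the chain a <= b <= c.
A = {"a", "b", "c"}
leq_P = {("a", "a"), ("b", "b"), ("c", "c"),
         ("a", "b"), ("b", "c"), ("a", "c")}

# An assignment beta: variables -> A.
beta = {"v0": "a", "v1": "b"}

def satisfies_atomic(relation, beta, t1, t2):
    """J |= (t1 <= t2) iff (beta(t1), beta(t2)) is in the relation."""
    return (beta[t1], beta[t2]) in relation

print(satisfies_atomic(leq_P, beta, "v0", "v1"))  # True, since ("a","b") is in leq_P
```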

view this post on Zulip David Egolf (Dec 11 2023 at 19:24):

To finish this example, let JJ^* be the same as JJ except that it uses the structure which contains P\geq^{P^*} instead of P\leq^P. Then the interpretation of v0v1v_0 \leq v_1 under JJ^* is aPba \geq^{P^*} b, which is true exactly if bPab \leq^{P^*} a, which is true exactly if aPba \leq^P b.

So, if Jv0v1J \models v_0 \leq v_1, then Jv0v1J^* \models v_0 \leq v_1.

view this post on Zulip John Onstead (Dec 12 2023 at 02:42):

As I am learning categorical logic, I find it immensely helpful to always keep what I believe to be the central concept of categorical logic in mind: the syntax-semantics duality. That is, the adjunctions between categories of theories (like type theories) and a certain category of categories, which send a theory to its syntactic category and a category to its internal or "underlying" type theory, which then corresponds to its internal logic. My main question, relative to dual posets and more generally opposite categories, is where the internal-logic functor in this adjunction sends opposite categories within the category of type theories. My conjecture is that there is an isomorphism (or at least an equivalence, if isomorphism is too strong) between the underlying type theories of opposite categories. If this is the case, it would perfectly explain why proving something in one category automatically proves the dual statement in the opposite category. Alternatively, if there is no such isomorphism or equivalence, there should at the very least be some relationship between these underlying type theories that could explain this, right? So, is my "conjecture" or its alternative true? Thanks!

view this post on Zulip John Baez (Dec 12 2023 at 09:23):

David Egolf said:

I hope I can state the thing I want to prove this way:

view this post on Zulip John Baez (Dec 12 2023 at 09:24):

So you're saying the relation P\geq^{P^*} is the same as the relation P\le^P?

view this post on Zulip John Baez (Dec 12 2023 at 09:27):

If so, what's the difference between UPU_P and UPU_{P^\ast}? If I understand you correctly, they have the same underlying set AA and the same relation on it.

view this post on Zulip David Egolf (Dec 12 2023 at 18:18):

Yes, @John Baez, I think P\geq^{P^*} and P\leq^P specify the same relationship on AA. I think that makes sense because aba \leq b in PP exactly when aba \geq b in PP^* (by the definition of the dual of a partial order).

So, I suppose the only difference between UPU_P and UPU_{P^*} is that they use a different symbol for the specified relation on AA.
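
Taking relations extensionally as sets of pairs, this observation can be checked mechanically on a toy example (the variable names below are my own):

```python
# Toy poset P on {1, 2, 3}: the usual chain order, as a set of pairs.
leq_P = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a <= b}

# The dual poset P*:  a <=^{P*} b  iff  b <=^P a.
leq_Pstar = {(b, a) for (a, b) in leq_P}

# The flipped symbol in P*:  a >=^{P*} b  iff  b <=^{P*} a.
geq_Pstar = {(a, b) for (b, a) in leq_Pstar}

# Extensionally, >=^{P*} and <=^P are the same relation on the same set:
print(geq_Pstar == leq_P)  # True
```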

view this post on Zulip David Egolf (Dec 12 2023 at 18:28):

I notice that JJ \models is defined inductively on formulas in "Mathematical Logic" (by Ebbinghaus):
definition

So if we can show that Jt1t2J \models t_1 \equiv t_2 iff Jt1t2J^* \models t_1 \equiv t_2, and that Jt1t2J \models \leq t_1 t_2 iff Jt1t2J^* \models \leq t_1 t_2, then I am guessing that implies that JϕJ \models \phi iff JϕJ^* \models \phi. (In our case the only nn-ary relationship RR is the 2-ary relationship \leq).

view this post on Zulip David Egolf (Dec 12 2023 at 18:32):

And since JJ and JJ^* are extremely similar - they just use a different symbol for the same relationship on AA in their structures - it seems that we should be able to show this quickly. Since the assignment β\beta is the same in JJ and JJ^*, we immediately have that Jt1t2J \models t_1 \equiv t_2 iff Jt1t2J^* \models t_1 \equiv t_2. That is because Jt1t2J \models t_1 \equiv t_2 iff J(t1)=J(t2)J(t_1) = J(t_2) and in our case J(t1)=β(t1)J(t_1) = \beta(t_1) and J(t2)=β(t2)J(t_2) = \beta(t_2) for variables t1t_1 and t2t_2.

view this post on Zulip David Egolf (Dec 12 2023 at 18:37):

Further, Jt1t2J \models \leq t_1 t_2 iff PJ(t1)J(t2)\leq^P J(t_1) J(t_2) which in our case can be rewritten as β(t1)Pβ(t2)\beta(t_1) \leq^P \beta(t_2). Similarly, Jt1t2J^* \models \leq t_1 t_2 iff β(t1)Pβ(t2)\beta(t_1) \geq^{P^*} \beta(t_2). Since P\leq^P and P\geq^{P^*} specify the same relationship, we have Jt1t2    Jt1t2J \models \leq t_1 t_2 \iff J^* \models \leq t_1 t_2.

Because of how JJ \models is defined inductively on formulas, I am hoping this means that Jϕ    JϕJ \models \phi \iff J^* \models \phi for any formula ϕ\phi in our first order language.

view this post on Zulip David Egolf (Dec 12 2023 at 18:45):

Reflecting on this process:

view this post on Zulip John Baez (Dec 12 2023 at 21:06):

At the end you claimed, or at least hoped to show, that

Jϕ    JϕJ \models \phi \iff J^* \models \phi

for any formula ϕ\phi. This would be automatic if we had J=JJ = J^*. But as far as I can tell you're not claiming J=JJ = J^\ast; instead you say "JJ and JJ^* are extremely similar - they just use a different symbol for the same relationship on AA in their structures". If this is what's going on, a formula ϕ\phi that contains the first symbol (P\le^P) will not contain the second symbol (P\ge^{P^*}), so I don't think

Jϕ J^* \models \phi

literally makes sense.

view this post on Zulip John Baez (Dec 12 2023 at 21:10):

Personally I would go about this a bit differently. For example, I might define a way of transforming the formula ϕ\phi into a formula ϕ\phi^\ast by replacing every instance of the symbol P\le^P with the symbol P\ge^{P^\ast}, and then show

Jϕ    JϕJ \models \phi \iff J^* \models \phi^\ast

view this post on Zulip John Baez (Dec 12 2023 at 21:12):

Or, I might try something else: I might define a "language for posets" with a symbol \le that doesn't mention the name of the poset, and define a way of transforming any formula ϕ\phi in this language into a formula ϕ\phi^\dagger by replacing every expression of the form xyx \le y with an expression of the form yxy \le x, and then show

Jϕ    JϕJ \models \phi \iff J^\dagger \models \phi^\dagger

view this post on Zulip John Baez (Dec 12 2023 at 21:15):

for some suitable JJ^\dagger. The slight advantage here is that ϕ\phi and ϕ\phi^\dagger are in the same language, and this language applies to all posets.

view this post on Zulip John Baez (Dec 12 2023 at 21:18):

It's possible I'm not understanding you or your setup exactly, so take everything I say with suitable grains of salt.

view this post on Zulip David Egolf (Dec 12 2023 at 21:23):

John Baez said:

At the end you claimed, or at least hoped to show, that

Jϕ    JϕJ \models \phi \iff J^* \models \phi

for any formula ϕ\phi. This would be automatic if we had J=JJ = J^*. But as far as I can tell you're not claiming J=JJ = J^\ast; instead you say "JJ and JJ^* are extremely similar - they just use a different symbol for the same relationship on AA in their structures". If this is what's going on, a formula ϕ\phi that contains the first symbol (P\le^P) will not contain the second symbol (P\ge^{P^*}), so I don't think

Jϕ J^* \models \phi

literally makes sense.

A formula in our first order language doesn't contain either P\le^P or P\ge^{P^*}. It might contain \leq, though. This symbol gets swapped out for the appropriate symbol once we pick a particular structure though, and interpret that formula using that structure.

view this post on Zulip David Egolf (Dec 12 2023 at 21:25):

For example, once we pick a structure and an assignment, the formula t1t2\leq t_1 t_2 gets turned into something like Pab\leq^P ab, for some a,bAa,b \in A. If we pick a different structure (but the same assignment), interpreting the same formula could yield Pab\geq^{P^*} ab. (I prefer to write these expressions more intuitively as aPba \leq^P b and aPba \geq^{P^*} b.)

view this post on Zulip David Egolf (Dec 12 2023 at 21:28):

John Baez said:

for some suitable JJ^\dagger. The slight advantage here is that ϕ\phi and ϕ\phi^\dagger are in the same language, and this language applies to all posets.

I like the idea of having only one language. I think there is only one language (for formulas) in the approach that I am describing, as well.

view this post on Zulip David Egolf (Dec 12 2023 at 21:33):

In case having another example would be useful, here is an example (from the book I'm referencing) of using a structure and an assignment to interpret a formula:
example

In this example, the fancy UU is the structure and β\beta is the assignment. We start with the formula v2(v1+v2)v4v_2 \cdot (v_1 + v_2) \equiv v_4, and by making use of the specified structure and assignment we end up with 4(2+4)=84 \cdot (2 + 4) = 8. Somewhat confusingly, the ++ in the original formula is just a function symbol, while the ++ in 4(2+4)=84 \cdot (2 + 4) = 8 is referring to the addition function +:N×NN+: \mathbb{N} \times \mathbb{N} \to \mathbb{N}.

view this post on Zulip David Egolf (Dec 12 2023 at 21:39):

Similarly, returning to the main example we care about, this "interpretation" process moves us from formulas with the relation symbol \leq to expressions involving a relation on AA (either P\leq^P or P\geq^{P^*}).

(I hope I explained this correctly! I just read about these concepts a couple days ago, and it's possible I'm misunderstanding something.)

view this post on Zulip David Egolf (Dec 13 2023 at 00:01):

John Baez said:

But as far as I can tell you're not claiming J=JJ = J^\ast; instead you say "JJ and JJ^* are extremely similar - they just use a different symbol for the same relationship on AA in their structures".

I was reflecting on this a bit more. I've realized that I'm not 100% sure what it means for two interpretations (say JJ and JJ^*) to be equal. JJ and JJ^* are the same in the sense that they contain all the same sets, functions, and relations. But they are not the same in the sense that JJ and JJ^* yield syntactically different expressions when used to interpret a formula. So if we view JJ and JJ^* as specifying functions from formulas (like t1t2\leq t_1 t_2) to expressions (like Pab\leq^P a b with a,bAa,b \in A), then the resulting functions are not equal.

view this post on Zulip David Egolf (Dec 13 2023 at 00:26):

My current guess is that JJ and JJ^* are equal, but the "interpretation functions" they induce are not equal.

I don't know a formal definition of what it means for two things like JJ and JJ^* to be equal though, so I'm not sure about this. (Is a tuple of things equal to another tuple if (1) the two tuples have the same number of elements and (2) the ithi_{th} element in one tuple is equal to the ithi_{th} element in the other tuple, for all ii? If that is true, then I think J=JJ = J^*.)

view this post on Zulip John Baez (Dec 13 2023 at 13:58):

David Egolf said:

A formula in our first order language doesn't contain either P\le^P or P\ge^{P^*}. It might contain \leq, though. This symbol gets swapped out for the appropriate symbol once we pick a particular structure though, and interpret that formula using that structure.

Okay. I guess I don't completely understand your attitude toward syntax and semantics. The traditional approach to model theory in first-order logic doesn't think of an interpretation as "swapping out one symbol for another": instead, it maps each formula (which is a syntactic entity, a thing built from symbols) to a subset of XNX^N, where XX is some set (a semantic entity, not just the letter XX) and NN is the set of free variables in the formula.

view this post on Zulip John Baez (Dec 13 2023 at 14:00):

I suppose if you are taking an ultra-syntactic approach, basically denying that semantics has a separate life from syntax, you may think of this subset of XNX^N as itself described by symbols! This is a coherent viewpoint, and I've actually advocated it at times, but it's not the traditional one I learned in school. I figure if you're going to be radical, you should at least know you're being radical. :upside_down:

view this post on Zulip John Baez (Dec 13 2023 at 14:01):

Returning to your example, here's a more traditional way to say what's going on. We have a formula in the language of posets. To keep things simple, take the formula

xyx \le y

view this post on Zulip John Baez (Dec 13 2023 at 14:02):

A model of the theory of posets is a set - let me call it PP instead of XX, just to be friendly - together with a recipe for assigning a subset of PNP^N to each formula in the language of posets whose set of free variables is NN. This recipe needs to obey some rules.

In the end, a model of the theory of posets amounts to a poset: a set PP together with a binary relation which I'll call xPyx \le^P y. Note: the symbol P\le^P is part of my meta-mathematical chat here. I'm giving it a symbol just because I need symbols to talk to you: we say it's part of the metalanguage. It exists on a different plane than our friend \le.

view this post on Zulip John Baez (Dec 13 2023 at 14:04):

In our example formula, N={x,y}N = \{x,y\}.

view this post on Zulip John Baez (Dec 13 2023 at 14:11):

So, a model of the formula

xyx \le y

view this post on Zulip John Baez (Dec 13 2023 at 14:12):

is a subset of PNP2P^N \cong P^2 consisting of pairs a,bPa, b \in P such that aPba \le^P b.

view this post on Zulip John Baez (Dec 13 2023 at 14:13):

So, you see, the model is sending a formula to a certain subset of a certain set, not another formula with different symbols in it.
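
This recipe can be computed directly for a small poset: the formula x ≤ y, with free variables N = {x, y}, is sent to a subset of P^2. A minimal sketch (the names below are mine):

```python
from itertools import product

# Toy poset P: the chain a <= b, with its order as a set of pairs.
P = ["a", "b"]
leq_P = {("a", "a"), ("b", "b"), ("a", "b")}

# The formula  x <= y  has free variables N = {x, y}; the model sends it
# to the subset of P^N (here P^2) of tuples standing in the relation.
N = ("x", "y")
interpretation = {
    (vx, vy) for vx, vy in product(P, repeat=len(N))
    if (vx, vy) in leq_P
}

print(sorted(interpretation))  # [('a', 'a'), ('a', 'b'), ('b', 'b')]
```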

view this post on Zulip John Baez (Dec 13 2023 at 14:14):

The difference may seem a petty one, and indeed I'm starting to feel petty writing so much about this. But logicians argue endlessly about nuances like this - one reason I'm not a logician. :upside_down:

view this post on Zulip John Baez (Dec 13 2023 at 14:15):

David Egolf said:

Is a tuple of things equal to another tuple if (1) the two tuples have the same number of elements and (2) the ithi_{th} element in one tuple is equal to the ithi_{th} element in the other tuple, for all ii? If that is true, then I think J=JJ = J^*.)

Yes, that's the usual definition of equality for tuples.

view this post on Zulip John Baez (Dec 13 2023 at 14:17):

So my (mild) complaint was that in your discussion of duality, you wound up giving a rather elaborate discussion leading up to a proof that

Jϕ    JϕJ \models \phi \iff J^* \models \phi

followed ultimately by an admission that J=JJ = J^*, which makes the proof exceedingly easy: you are just proving that

Jϕ    JϕJ \models \phi \iff J \models \phi

view this post on Zulip John Baez (Dec 13 2023 at 14:18):

I was offering a way to think about duality where duality changes a formula ϕ\phi, say

xy x \le y

into a different formula, say ϕ\phi^\dagger, namely

yx y \le x

view this post on Zulip John Baez (Dec 13 2023 at 14:20):

In my approach duality also changes the interpretation JJ to a different interpretation, say JJ^\dagger. And then we can show

Jϕ    JϕJ \models \phi \iff J^\dagger \models \phi^\dagger

view this post on Zulip John Baez (Dec 13 2023 at 14:21):

This seems a bit more exciting to me than showing

Jϕ    JϕJ \models \phi \iff J \models \phi

view this post on Zulip John Baez (Dec 13 2023 at 14:22):

By the way, I'm using daggers instead of stars just so we can talk about your approach and mine together without fighting over what notation means. Secretly I use stars too. :upside_down:

view this post on Zulip David Egolf (Dec 13 2023 at 17:07):

Awesome, thanks for explaining that @John Baez ! I was not appreciating that a structure (and apparently a "model" as well) is really about specifying certain functions and relations. And although we use new symbols in the resulting interpreted formulas, these symbols can be viewed as shorthand (I think) for the full ordered-pair information of the functions and relations. From this perspective, aPba \leq^P b and aPba \geq^{P^*} b specify the exact same thing: the claim that (a,b)(a,b) is in the subset of A2A^2 specified by both P\leq^P and P\geq^{P^*}.

view this post on Zulip David Egolf (Dec 13 2023 at 17:08):

John Baez said:

This seems a bit more exciting to me than showing

Jϕ    JϕJ \models \phi \iff J \models \phi

Now that I understand that my JJ and JJ^* are equal, I agree! I will plan to give your approach a try, hopefully later today.

view this post on Zulip David Egolf (Dec 15 2023 at 17:14):

I now begin again, with a different approach. (I'm trying to follow the approach John Baez sketched above, although I'm not sure I'll actually manage to do so right away!)
These things stay the same:

But we now define our two structures in a different way:

view this post on Zulip David Egolf (Dec 15 2023 at 17:16):

Notice that \leq and \leq^\dagger are different relationships in general.

Then, for each assignment β\beta, we get two different interpretations:

view this post on Zulip David Egolf (Dec 15 2023 at 17:25):

Next, we wish to define a function ff on formulas that intuitively "swaps" the variables provided to our 2-ary relationship symbol \leq. If we start with the formula v0v1v_0 \leq v_1, then f(v0v1)=v1v0f(v_0 \leq v_1) = v_1 \leq v_0.

We want to define what ff does for any formula ϕ\phi. Once we have done so, we will then denote f(ϕ)f(\phi) by ϕ\phi^\dagger.
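
One way the recursion for f could go, encoding formulas as nested tuples: swap the arguments of each ≤ atom and pass through connectives and quantifiers unchanged. This is my own illustrative encoding, a sketch rather than the definition from the thread:

```python
def dagger(phi):
    """Swap the arguments of every <= atom, recursing through the formula.

    Formulas are nested tuples, e.g. ("leq", "v0", "v1"),
    ("and", p, q), ("not", p), ("forall", "v0", p).
    """
    op = phi[0]
    if op == "leq":                      # atomic:  t1 <= t2  becomes  t2 <= t1
        _, t1, t2 = phi
        return ("leq", t2, t1)
    if op == "not":
        return ("not", dagger(phi[1]))
    if op in ("and", "or", "implies"):
        return (op, dagger(phi[1]), dagger(phi[2]))
    if op in ("forall", "exists"):       # quantifiers pass through unchanged
        return (op, phi[1], dagger(phi[2]))
    raise ValueError(f"unknown operator: {op}")

phi = ("and", ("leq", "v0", "v1"), ("forall", "v2", ("leq", "v2", "v0")))
print(dagger(phi))
# ('and', ('leq', 'v1', 'v0'), ('forall', 'v2', ('leq', 'v0', 'v2')))
```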

view this post on Zulip David Egolf (Dec 15 2023 at 17:35):

I will now attempt to define ff on all formulas:

view this post on Zulip David Egolf (Dec 15 2023 at 17:35):

view this post on Zulip David Egolf (Dec 15 2023 at 17:38):

view this post on Zulip David Egolf (Dec 15 2023 at 17:38):

view this post on Zulip David Egolf (Dec 15 2023 at 17:44):

Denoting f(ϕ)f(\phi) by ϕ\phi^\dagger, we now wish to show that Jϕ    JϕJ \models \phi \iff J^\dagger \models \phi^\dagger.

As a small check on this setup, I first try this out for the formula ϕ\phi equal to v0v1v_0 \leq v_1. Let β\beta be the assignment that sends v0v_0 to aAa \in A and v1v_1 to bAb \in A. Then JϕJ \models \phi is true exactly if aba \leq b in PP.

view this post on Zulip David Egolf (Dec 15 2023 at 17:44):

Since ϕ=v1v0\phi^\dagger = v_1 \leq v_0, we have that JϕJ^\dagger \models \phi^\dagger is true exactly if bab \leq^\dagger a. Since ab    baa \leq b \iff b \leq^\dagger a we see that Jϕ    JϕJ \models \phi \iff J^\dagger \models \phi^\dagger for this particular ϕ\phi.

I think the next step is probably to do a kind of induction over formulas to show that Jϕ    JϕJ \models \phi \iff J^\dagger \models \phi^\dagger for any formula ϕ\phi in our first-order language.
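
As a rehearsal of that induction, one can define satisfaction recursively over formulas and check J ⊨ φ ⟺ J† ⊨ φ† exhaustively on a small poset. A self-contained sketch under my own tuple encoding of formulas (all function and variable names are mine):

```python
def dagger(phi):
    """The map f: swap the arguments of every <= atom."""
    op = phi[0]
    if op == "leq":
        return ("leq", phi[2], phi[1])
    if op == "not":
        return ("not", dagger(phi[1]))
    if op == "and":
        return ("and", dagger(phi[1]), dagger(phi[2]))
    if op == "forall":
        return ("forall", phi[1], dagger(phi[2]))
    raise ValueError(op)

def sat(A, leq, beta, phi):
    """J |= phi, defined by induction on the formula."""
    op = phi[0]
    if op == "leq":
        return (beta[phi[1]], beta[phi[2]]) in leq
    if op == "not":
        return not sat(A, leq, beta, phi[1])
    if op == "and":
        return sat(A, leq, beta, phi[1]) and sat(A, leq, beta, phi[2])
    if op == "forall":
        v, body = phi[1], phi[2]
        return all(sat(A, leq, {**beta, v: x}, body) for x in A)
    raise ValueError(op)

# Toy poset P: the chain 1 <= 2 <= 3, and the dual order for J^dagger.
A = [1, 2, 3]
leq_P = {(a, b) for a in A for b in A if a <= b}
leq_dag = {(b, a) for (a, b) in leq_P}

# phi: "v0 is below everything":  forall v1. v0 <= v1
phi = ("forall", "v1", ("leq", "v0", "v1"))

# Check  J |= phi  iff  J^dagger |= phi^dagger  for every assignment beta.
ok = all(
    sat(A, leq_P, {"v0": x}, phi) == sat(A, leq_dag, {"v0": x}, dagger(phi))
    for x in A
)
print(ok)  # True
```

Under the dual order, φ† says "v0 is above everything", and both sides hold exactly when v0 is the bottom element 1, as expected.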

view this post on Zulip John Baez (Dec 16 2023 at 11:29):

This looks great to me! You'll need to do the induction for formulae, not just sentences (= formulae without free variables), since sentences are made of formulae, not just sentences. Thus, in the induction you'll need to talk about something other than just

JϕJ \models \phi

since this makes sense only when ϕ\phi is a sentence. I think you'll need to talk, for each formula with a set NN of free variables, about the subset of PNP^N that it defines.

view this post on Zulip John Baez (Dec 16 2023 at 11:31):

For a sentence ϕ\phi, N=N = \emptyset so PNP^N has one element, so a subset of it has either 0 or 1 elements, which correspond to the two options: JϕJ \models \phi is either true or false.

view this post on Zulip David Egolf (Dec 16 2023 at 17:35):

That's great to hear!

I think that JϕJ \models \phi makes sense for any formula ϕ\phi, not just when ϕ\phi is a sentence. That's because an interpretation JJ includes an assignment β\beta, which assigns particular values (in AA) to all free variables.

At any rate, I hope to give some version of the induction a try in the next few days!

view this post on Zulip David Egolf (Dec 17 2023 at 19:41):

I'm going to be busy spending time with family for probably the next couple weeks. So, it may be a while before I get back to this! But I do look forward to finishing this up.

view this post on Zulip John Baez (Dec 17 2023 at 20:49):

David Egolf said:

That's great to hear!

I think that JϕJ \models \phi makes sense for any formula ϕ\phi, not just when ϕ\phi is a sentence. That's because an interpretation JJ includes an assignment β\beta, which assigns particular values (in AA) to all free variables.

Okay, if that's how it works you can induct over formulae, not just sentences.