I'm trying to understand this idea: "If a statement is true for a partial order $P$, then the 'dual' of that statement is true in the dual partial order $P^{\partial}$." (For context, I'm reading "Introduction to Lattices and Order" by Davey and Priestley. I'm also referencing this webpage.)
By the dual $P^{\partial}$ of a partial order $P$, I mean that $P$ and $P^{\partial}$ have the same elements, and $x \leq y$ in $P^{\partial}$ exactly if $y \leq x$ in $P$. By the dual of a statement, I mean the same statement where every occurrence of $\leq$ is replaced by $\geq$.
For example, assume that the following statement is true in $P$: "there is an element $a$ so that $a \leq x$ for all $x$". Then the dual statement, I think, is: "there is an element $a$ so that $a \geq x$ for all $x$." I think that is true in $P^{\partial}$. We know that $a \leq x$ for all $x$ in $P$. Thus $a \geq x$ for all $x$ in $P^{\partial}$. So this works out!
I would like to prove in general that if a statement $\varphi$ is true in $P$, then its dual $\varphi^{\partial}$ is true in $P^{\partial}$. But I don't know where to start! I feel like I don't even really know what a "statement" is, or exactly what it means for a statement to be "true for a partial order $P$".
Any thoughts are appreciated!
What you have is basically correct, but it should be added that if the statement includes operations on a partial order, such as $\wedge$ (meet) and $\vee$ (join), then those have to be dualized as well. Let's talk about this.
So one very categorically minded definition of $x \wedge y$ is that it is the cartesian product in a poset. When you spell this out, it means: $z \leq x \wedge y$ if and only if $z \leq x$ and $z \leq y$, for all $z$.
(Note: this is a very "Yoneda-esque" description: it uniquely characterizes $x \wedge y$ in terms of "maps", i.e., instances of $\leq$, coming out of arbitrary objects/elements $z$.)
Similarly, or rather dually, the characterization of joins is: $x \vee y \leq z$ if and only if $x \leq z$ and $y \leq z$, for all $z$,
or (of course) $z \geq x \vee y$ if and only if $z \geq x$ and $z \geq y$.
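As a concrete illustration of how this characterization pins the meet down, here is a small Python sketch (a brute-force search over a finite poset; the divisibility example is just for illustration and not from the book):

```python
# A brute-force check of the characterization "z <= (x meet y) iff z <= x and z <= y".
def meet(x, y, elements, leq):
    """Return the element m with: for all z, z <= m iff (z <= x and z <= y), if it exists."""
    for m in elements:
        if all(((z, m) in leq) == (((z, x) in leq) and ((z, y) in leq)) for z in elements):
            return m
    return None  # a meet need not exist in an arbitrary poset

def join(x, y, elements, leq):
    """Dually: x join y is the element j with, for all z, (j <= z) iff (x <= z and y <= z)."""
    for j in elements:
        if all(((j, z) in leq) == (((x, z) in leq) and ((y, z) in leq)) for z in elements):
            return j
    return None

# Example: the divisibility order on {1, ..., 12}; meets are gcds, joins are lcms (when present).
elements = set(range(1, 13))
leq = {(a, b) for a in elements for b in elements if b % a == 0}
print(meet(4, 6, elements, leq), join(4, 6, elements, leq))  # 2 12
```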
Thanks for pointing this out!
And I think I see that the first bijection you have provided is a particular component of the natural isomorphism which declares $x \wedge y$ to be a product: $\mathrm{Hom}(z, x \wedge y) \cong \mathrm{Hom}(z, x) \times \mathrm{Hom}(z, y)$. So, ...
That's right.
So $\wedge$ and $\vee$ can be expressed in the core language of posets, where they emerge as concepts dual to one another, and therefore when you dualize statements which involve them, you have to interchange them.
Now: the language of posets is (depending on terminological conventions) the collection of all first-order formulas that you can build up using any core relations of the theory (for posets, there's only one such predicate, which is $\leq$) and constants of the theory (there are no primitive constants for the language of posets per se, although for some extensions of the language, you might bring in constant symbols like $\bot$ (bottom) and $\top$ (top), which by the way are dual to one another as well, being the initial/terminal elements), and finally any primitive operations (again, for the core language of posets, there are no primitive operations per se, but again, some extensions may include operations like $\wedge$ and $\vee$, and there may be others besides, depending on the language extension).
So to make use of this, you have to understand "first-order formulas", but that's a recursive matter, much like algebraic expressions are built up recursively.
A "statement" is a first-order formula in which every variable is quantified, either universally or existentially.
Finally, to take the dual of a statement (in the core language), you reverse all instances of $\leq$ in your statement, but otherwise leave its logical structure unchanged.
Duality is an absolutely fundamental theme in category theory, and this was recognized from the very beginning by Eilenberg and Mac Lane: I think they might have mentioned the analogy with duality in projective geometry, or duality in the theory of Boolean algebras, but I don't remember for sure. Anyway, like the theory of posets, the theory of categories is "self-dual". In essence this means that if we work in the core language which specifies function symbols $\mathrm{dom}$ and $\mathrm{cod}$, identity function symbols, and the composition symbol, and the "dual" of a statement in this language is obtained by swapping $\mathrm{dom}$ and $\mathrm{cod}$, swapping instances of $g \circ f$ with $f \circ g$, and leaving everything else unchanged, then the dual of any theorem in the theory of categories is again a theorem. This is because the dual of an axiom in the theory of categories is again an axiom in this theory.
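To make the recursive structure of this operation concrete, here is a minimal Python sketch (the datatype and names are invented for illustration, not anything standard) of the core language of posets together with the dualization operation that reverses every instance of $\leq$:

```python
from dataclasses import dataclass
from typing import Union

# A made-up miniature syntax for the core language of posets:
# atomic formulas t1 <= t2, plus negation, conjunction, and the two quantifiers.
@dataclass(frozen=True)
class Leq:
    left: str    # a variable name
    right: str

@dataclass(frozen=True)
class Not:
    body: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Forall:
    var: str
    body: "Formula"

@dataclass(frozen=True)
class Exists:
    var: str
    body: "Formula"

Formula = Union[Leq, Not, And, Forall, Exists]

def dual(phi: Formula) -> Formula:
    """Reverse every instance of <= (replace t1 <= t2 by t2 <= t1),
    leaving the logical structure -- negation, conjunction, quantifiers -- unchanged."""
    if isinstance(phi, Leq):
        return Leq(phi.right, phi.left)
    if isinstance(phi, Not):
        return Not(dual(phi.body))
    if isinstance(phi, And):
        return And(dual(phi.left), dual(phi.right))
    if isinstance(phi, Forall):
        return Forall(phi.var, dual(phi.body))
    return Exists(phi.var, dual(phi.body))

# "There is a bottom element": exists b. forall x. b <= x.
has_bottom = Exists("b", Forall("x", Leq("b", "x")))
has_top = dual(has_bottom)   # exists b. forall x. x <= b, i.e. "there is a top element"
print(has_top)
```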
I think I will stop there for now.
Thanks, that is all quite helpful! That gives me a lot of things to look up and review. (I looked at "first-order formulas" a while ago, but I've forgotten most of the details! This gives me a good reason to review that.)
Todd Trimble said:
This is because the dual of an axiom in the theory of categories is again an axiom in this theory.
I bet an analogous statement is important for proving what I want to prove for partial orders!
Absolutely. The theory of posets is also self-dual (exercise!).
Some of this can be confusing at times. When we teach Boolean algebra, for example, and dualize a statement, the symbols $\wedge$ and $\vee$, which belong to the object language, are interchanged, but not the words "and" and "or" in the meta-language that we use to talk about logical formulas in the theory!
Those meta-usages are unchanged in the process of dualizing.
Todd Trimble said:
Those meta-usages are unchanged in the process of dualizing.
That makes sense! I was just thinking about this, trying to imagine an inductive procedure to show that more and more statements stay true when dualized. We might first show that if $x \leq y$ is true in $P$ then $x \geq y$ is true in $P^{\partial}$, and if $x \geq y$ is true in $P$ then $x \leq y$ is true in $P^{\partial}$.
Then we might assume that "$\varphi$ and $\psi$" is true in $P$ and ask if "$\varphi^{\partial}$ and $\psi^{\partial}$" is true in $P^{\partial}$. Since "$\varphi$ and $\psi$" is true, $\varphi$ is true and $\psi$ is true (all in $P$), therefore $\varphi^{\partial}$ is true and $\psi^{\partial}$ is true (in $P^{\partial}$), and hence "$\varphi^{\partial}$ and $\psi^{\partial}$" is true in $P^{\partial}$.
So the "and" didn't need to get changed as we moved from $P$ to $P^{\partial}$.
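In symbols, one step of that induction might look like this (the inner "and"s are the metalanguage "and", which stays put; the middle equivalence uses the hypothesis for $\varphi$ and $\psi$ separately):

$$
P \models (\varphi \text{ and } \psi)
\;\iff\; (P \models \varphi) \text{ and } (P \models \psi)
\;\iff\; (P^{\partial} \models \varphi^{\partial}) \text{ and } (P^{\partial} \models \psi^{\partial})
\;\iff\; P^{\partial} \models (\varphi^{\partial} \text{ and } \psi^{\partial}).
$$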
Yes, exactly.
Todd Trimble said:
A "statement" is a first-order formula
Why first-order?
Doesn't have to be, necessarily. But typical texts on logic often stop at this point, for a variety of reasons, so it's a useful stopgap that seems appropriate for this level of conversation. Did you want to say something more on this?
@David Egolf wrote:
I would like to prove in general that if a statement $\varphi$ is true in $P$, then its dual $\varphi^{\partial}$ is true in $P^{\partial}$. But I don't know where to start! I feel like I don't even really know what a "statement" is, or exactly what it means for a statement to be "true for a partial order $P$".
There are lots of levels of generality for what we can take a "statement" to be, and the precise details of how we dualize a statement depends on what framework we use to define "statements".
I was going to recommend starting with the approach of first-order logic, since this is a classic approach to working with "statements", but I see now that Todd Trimble beat me to it:
A "statement" is a first-order formula in which every variable is quantified, either universally or existentially.
where "first-order formula" needs to be defined inductively. I'm actually glad Todd beat me to it, because it takes a fair amount of work.
We could then move on to second-order logic, et cetera, but many of the classic results in logic (like Gödel's completeness theorem, soundness, compactness, etc.) work nicely for first-order logic, so I think it's good to get comfy with first-order logic before moving on.
John Baez said:
We could then move on to second-order logic, et cetera, but many of the classic results in logic (like Gödel's completeness theorem, soundness, compactness, etc.) work nicely for first-order logic
These results are completely unrelated to the duality principle under discussion.
I know. I'm just trying to explain to David Egolf why it's good to learn first-order logic.
Actually I think it's going too far to say anything in logic is completely unrelated to anything else: I feel confident that with a little work I could find a fun application of soundness or completeness to the first-order theory of posets, and how this theory is "self-dual".
But I was really just trying to get David more interested in logic.
Todd Trimble said:
Doesn't have to be, necessarily. But typical texts on logic often stop at this point, for a variety of reasons, so it's a useful stopgap that seems appropriate for this level of conversation. Did you want to say something more on this?
I just wondered why stop at first-order when discussing the duality principle David mentioned. Many properties of partial orders like "well-ordered" are higher-order.
If you're teaching someone first-order logic, as Todd just did, maybe you'd want to wait a few days before explaining second-order logic.
John Baez said:
We could then move on to second-order logic, et cetera, but many of the classic results in logic (like Gödel's completeness theorem, soundness, compactness, etc.) work nicely for first-order logic, so I think it's good to get comfy with first-order logic before moving on.
Yes, I hope to get a bit more comfortable with some first-order logic by thinking about the topic of this thread. I am hopeful that having a specific question I want to answer (as described above) will be helpful motivation!
One thing that I'm noticing right away is that we have two "layers" going on:
- we can form expressions "inside" a partial order, like $x \leq y$ or $x \wedge y$
- we can form expressions "one level up", like "$x \leq y$ and $y \leq z$"
Referencing the below definition of an alphabet of a first-order language, I am wondering if the "inside the poset" expressions need to be our variables. Or maybe instead our variables are the elements of the poset and we have a "relation symbol" in our first-order alphabet. Probably this second approach is simpler!
[image: definition of the alphabet of a first-order language]
David Egolf said:
I would like to prove in general that if a statement $\varphi$ is true in $P$, then its dual $\varphi^{\partial}$ is true in $P^{\partial}$. But I don't know where to start!
I don't think you will be able to formulate such a theorem in the first place! Here is a related MathOverflow question.
I wonder how people would use the duality principle in a proof assistant. Is that what "metaprogramming" is?
David Egolf said:
One thing that I'm noticing right away is that we have two "layers" going on:
- we can form expressions "inside" a partial order, like $x \leq y$ or $x \wedge y$
- we can form expressions "one level up", like "$x \leq y$ and $y \leq z$"
This is because you're studying the theory of a poset using first-order logic, which has its own poset structure (called implication).
That's a bit more confusing than if you were studying, say, the theory of a group using first-order logic.
If you were studying that, the concept you'd be studying would not be so similar to the framework you were studying it in.
But if you want to study duality, the theory of posets is a nice example. (The theory of projective planes also has a nice duality, of a rather different sort.)
Leopold Schlicht said:
David Egolf said:
I would like to prove in general that if a statement $\varphi$ is true in $P$, then its dual $\varphi^{\partial}$ is true in $P^{\partial}$. But I don't know where to start!
I don't think you will be able to formulate such a theorem in the first place!
Of course he can formulate such a theorem. Todd Trimble began outlining how. There are many such theorems along these lines, but Todd is leading him to a nice simple one.
Note that David in his question didn't mention that the "statements" he is talking about are first-order. First-order sentences almost surely don't capture what he meant by "statement" (since, for instance, the duality principle should apply to the property of being well-ordered).
Leopold Schlicht said:
Note that David in his question didn't mention that the "statements" he is talking about are first-order. First-order sentences almost surely don't capture what he meant by "statement" (since, for instance, the duality principle should apply to the property of being well-ordered).
I'm happy to start with simpler statements, like first-order sentences! If I can understand how those behave with respect to "duality" maybe I can start thinking about more complex ones. But I appreciate you pointing out that more complex statements are important!
That's a nice exercise if you want to familiarize yourself with first-order logic! I didn't know that that's what you are interested in from reading your question.
However, if you are interested in making precise the duality principle you mentioned, I claim that no formal logic captures what you mean by "statement", though. So the only way I see to make the duality principle precise would be to investigate how such a situation is handled in a proof assistant.
If we're just talking about duality of a single poset (or category), then it seems to me that any sort of statement you like should be captured by saying there is an involution on the syntax of the theory "T + a poset", where T is whatever foundational theory of mathematics you like (ZFC, MLTT, HoTT, etc.), which intertwines with the duality construction on models of that theory. The MO question you linked to is about duality statements involving multiple categories and functors between them, which is somewhat more complicated, but is probably still captured by involutions of a theory like "T + a diagram of posets/categories, functors, and natural transformations between them".
I really like to rephrase statements about partial orders in the calculus of relations. It is a language without variables (which to me makes it feel more categorical), and it makes duality very easy to see (almost too trivial) because there is a converse operation changing $\leq$ into $\geq$. One caveat is that it is not as expressive as FO logic.
Here is a simple example (first in FO logic) (EDIT: ok I wrote these right after waking up and I didn't realize any poset is dense and reflexive so the properties are not that interesting...).
Def: We say a poset is dense if $\forall x\, \forall y\, (x \leq y \Rightarrow \exists z\, (x \leq z \wedge z \leq y))$.
In words, you can always find an element between two others, like inside the real or rational numbers ($\mathbb{R}$ and $\mathbb{Q}$ with the strict orders are dense).
Def: We say a poset is reflexive if $\forall x\, (x \leq x)$.
In words, every element is related to itself in the order.
Thm: If a poset is reflexive, then it is dense.
Proof: For any $x \leq y$, let $z = x$ (or $z = y$), and by reflexivity we have $x \leq z$ and $z \leq y$.
Now the same example with the calculus of relations (I can go into the details of why these are equivalent formulations).
Def: We say that a poset is dense if $(\leq) \subseteq (\leq);(\leq)$.
Def: We say that a poset is reflexive if $\mathrm{id} \subseteq (\leq)$.
Proof of Thm: $(\leq) = (\leq);\mathrm{id} \subseteq (\leq);(\leq)$.
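In case it helps, here is a tiny Python sketch of that relational calculus (relations as finite sets of pairs; `compose` is the relational composition written ";" above; the divisibility example is just an illustration):

```python
# Relations on a finite set, represented as sets of ordered pairs.
def compose(r, s):
    """Relational composition r ; s = {(a, c) | exists b with (a, b) in r and (b, c) in s}."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def identity(elements):
    return {(a, a) for a in elements}

def is_reflexive(leq, elements):
    # id is contained in <=
    return identity(elements) <= leq

def is_dense(leq):
    # <= is contained in <= ; <=
    return leq <= compose(leq, leq)

# The divisibility order on {1, 2, 4}: reflexive, hence dense.
elements = {1, 2, 4}
leq = {(a, b) for a in elements for b in elements if b % a == 0}
print(is_reflexive(leq, elements), is_dense(leq))  # True True
```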
Leopold Schlicht said:
However, if you are interested in making precise the duality principle you mentioned, I claim that no formal logic captures what you mean by "statement", though. So the only way I see to make the duality principle precise would be to investigate how such a situation is handled in a proof assistant.
In order to formally discuss statements, you need to fix a language (you need to decide whether you allow joins and meets in your statements, say). Once you've made that choice, I would argue that you in fact can prove a duality meta-theorem by arguing by substitution into proofs. That is, let $\pi$ be a proof that $P$ satisfies $\varphi$; then the proof obtained by substituting every formula by its syntactic dual is a proof that $P^{\partial}$ satisfies $\varphi^{\partial}$. The only requirement here is that of invariance of deduction rules under dualization; you don't need to have decided in advance what your deduction rules are (i.e., which fragment of logic you're working in).
I've just learned about induction on the calculus of formulas for a first-order language. I think this will be a helpful tool for proving that if $\varphi$ is any formula in the first-order language for a poset $P$, then $\varphi$ is true in $P$ exactly when $\varphi^{\partial}$ is true in $P^{\partial}$.
To do this, we first need to define what the formulas are for the poset $P$. To do that, we first define what terms are. And to define what terms are, we need to first define the alphabet for $P$. The alphabet for $P$ is a set of symbols containing exactly the following: the variables, which are the elements of $P$; the relation symbol $\leq$; the logical connectives and quantifiers; and parentheses.
The terms for the poset $P$ are exactly the variables of our alphabet, i.e., the elements of $P$.
The formulas for $P$ are exactly the strings obtained by finitely many applications of the following rules: if $t_1$ and $t_2$ are terms, then $t_1 \leq t_2$ is a formula; if $\varphi$ and $\psi$ are formulas, then so are $\neg \varphi$, $(\varphi \wedge \psi)$, $(\varphi \vee \psi)$, and $(\varphi \to \psi)$; and if $\varphi$ is a formula and $x$ is a variable, then $\forall x\, \varphi$ and $\exists x\, \varphi$ are formulas.
I had begun writing a long response, but everything got screwed up because one of the comments was "too long" for this stupid zulip to handle -- and then everything got fouled up.
If I have the time and energy, then I'll reconstruct the series of messages. But maybe someone else would like to intervene (I hope so -- that would save me from having to do it all over again).
Todd Trimble said:
I had begun writing a long response, but everything got screwed up because one of the comments was "too long" for this stupid zulip to handle -- and then everything got fouled up.
If I have the time and energy, then I'll reconstruct the series of messages. But maybe someone else would like to intervene (I hope so -- that would save me from having to do it all over again).
I have been struggling a lot with this problem! I noticed that long messages tend to (thankfully) still get saved to "drafts", which might help in your case too, depending on what exactly went wrong.
No, there are no drafts of mine, alas.
David Egolf said:
I've just learned about induction on the calculus of formulas for a first-order language. I think this will be a helpful tool for proving that if $\varphi$ is any formula in the first-order language for a poset $P$, then $\varphi$ is true in $P$ exactly when $\varphi^{\partial}$ is true in $P^{\partial}$.
I agree with your main point here: since formulae are defined inductively, to prove a claim about all formulae it's very natural (and maybe even inevitable?) that you'll use induction.
But I have a nitpick to point out:
Note that a formula can have free variables, while a statement is a formula with no free variables. It makes plenty of sense to say a statement is true in a poset $P$, but not that a formula is true in a poset $P$. Here's a formula: $x \leq y$.
Is this true in the poset $P$? Is it false? Not really either. It does however define a function from $P \times P$ to truth values, sending each pair $(x, y)$ to the truth value of $x \leq y$. Or in other words, it defines a binary relation on $P$.
So, instead of saying
if $\varphi$ is any formula in the first-order language for a poset $P$, then $\varphi$ is true in $P$ exactly when $\varphi^{\partial}$ is true in $P^{\partial}$,
it might be better to say it a bit differently.
Of course, it's possible you meant exactly what I'm trying to make explicit here! And in fact I don't even know the standard jargon for what I'm trying to say.
But anyway, an induction over formulae should make it really easy to prove what you're trying to prove.
So here's how I might rather clumsily state what you're trying to prove:
If $\varphi$ is a formula with free variables $x_1, \dots, x_n$ in the first-order language of posets, then for any poset $P$ it defines a subset $S(\varphi, P) \subseteq P^n$, namely the set of $n$-tuples for which the formula holds. There is a "dual" formula $\varphi^{\ast}$, defined inductively, and also a dual poset $P^{\mathrm{op}}$ (usually called the opposite poset), and $S(\varphi, P) = S(\varphi^{\ast}, P^{\mathrm{op}})$,
where I'm using the fact that the underlying set of $P$ equals that of $P^{\mathrm{op}}$.
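To make this concrete, here is a small Python sketch (the representation and all names are made up for illustration) that computes the truth set $S(\varphi, P)$ of a formula over a finite poset and checks the equation $S(\varphi, P) = S(\varphi^{\ast}, P^{\mathrm{op}})$ on a toy example:

```python
from itertools import product

# Formulas as nested tuples: ("leq", v1, v2), ("not", f), ("and", f, g),
# ("forall", v, f), ("exists", v, f).  Terms are just variable names.
def holds(phi, leq, elements, assignment):
    """Satisfaction: does phi hold in the poset (elements, leq) under the assignment?"""
    tag = phi[0]
    if tag == "leq":
        return (assignment[phi[1]], assignment[phi[2]]) in leq
    if tag == "not":
        return not holds(phi[1], leq, elements, assignment)
    if tag == "and":
        return holds(phi[1], leq, elements, assignment) and holds(phi[2], leq, elements, assignment)
    # quantifiers: try every element of the poset for the bound variable
    _, var, body = phi
    results = [holds(body, leq, elements, {**assignment, var: a}) for a in elements]
    return all(results) if tag == "forall" else any(results)

def dual(phi):
    """The dual formula: swap the two sides of every leq, leave the logical structure alone."""
    tag = phi[0]
    if tag == "leq":
        return ("leq", phi[2], phi[1])
    if tag == "not":
        return ("not", dual(phi[1]))
    if tag == "and":
        return ("and", dual(phi[1]), dual(phi[2]))
    return (tag, phi[1], dual(phi[2]))   # forall / exists

def truth_set(phi, free_vars, leq, elements):
    """S(phi, P): the tuples of elements (one per free variable) at which phi holds."""
    return {tup for tup in product(sorted(elements), repeat=len(free_vars))
            if holds(phi, leq, elements, dict(zip(free_vars, tup)))}

# The poset P = {0, 1, 2} with the usual order, and its opposite.
elements = {0, 1, 2}
leq = {(a, b) for a in elements for b in elements if a <= b}
op = {(b, a) for (a, b) in leq}

# phi(x, y) = "x <= y and not (y <= x)", i.e. x is strictly below y.
phi = ("and", ("leq", "x", "y"), ("not", ("leq", "y", "x")))
print(truth_set(phi, ["x", "y"], leq, elements) == truth_set(dual(phi), ["x", "y"], op, elements))  # True
```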
Yesterday I had started a series of comments regarding the use of variables in logic, with some appropriate categorical glosses on the subject, because "variables, which are exactly the elements of $P$" (back here) had triggered me. But then Zulip punished me by carping about a too-long comment, and I couldn't put my comments back in order, and I gave up.
Variables, which are usually introduced to kids starting around middle school age, are usually explained not too well in that time period, and can cause a lot of puzzlement (in what sense does a letter "vary"?). In later courses before a more sophisticated audience, a certain amount of wind has to be expended on the syntax of free variables, bound variables, and so on. The notion of a truth set of a formula (which John denoted as $S(\varphi, P)$), a semantic concept, becomes pretty important around this point.
For someone who likes category theory, I think one can get a lot of explanation-mileage by talking about the uses of free structures in mathematics and the role that so-called "variables" play there. At some point the yoga of "quantifiers as adjoints" also comes into play, rather beautifully.
Yes, @Todd Trimble, I can easily imagine you getting lured into a quite precise account of how variables in first-order logic are connected to - but certainly not the same as! - elements in a specific model of a specific theory! Getting this right indeed requires "expending a certain amount of wind" - yet it's worthwhile, because it's part of the whole story about syntax versus semantics... a story that's key to understanding mathematical logic. There are old-fashioned versions of this story still worth hearing, and also modern versions that use category theory.
So it's a pity you were punished by Zulip, the goddess of brevity.
John Baez said:
Note that a formula can have free variables, while a statement is a formula with no free variables.
A formula can definitely have free variables, but I'm more used to the word "sentence" for a formula with no free variables. I'm not very familiar with "statement" being given a formal meaning at all, and the Internet didn't help me find examples. Do you have a reference?
No, I just assumed that when @David Egolf said "statement" he meant "sentence", so I went along with his terminology.
If by "statement" he meant "formula", then a lot of my comments were a bit off the mark. But he said stuff like "a statement is true in ", which made me think he was talking about a sentence being satisfied in some model of the theory of posets.
Some introductory textbooks use "statement" to mean a sentence (i.e., a grammatical form with nouns and verbs) that has a definite truth value (when interpreted in a model), and I've gotten used to that usage.
Ah, I think I've seen that before too.
John Baez said:
But I have a nitpick to point out:
Note that a formula can have free variables, while a statement is a formula with no free variables. It makes plenty of sense to say a statement is true in a poset $P$, but not that a formula is true in a poset $P$. Here's a formula: $x \leq y$.
Is this true in the poset $P$? Is it false? Not really either. It does however define a subset of $P \times P$, the set of pairs $(x, y)$ for which $x \leq y$.
Thanks for raising this point! Let me look back at what I said: "if $\varphi$ is any formula in the first-order language for a poset $P$, then $\varphi$ is true in $P$ exactly when $\varphi^{\partial}$ is true in $P^{\partial}$".
I think what I meant is this: if the formula $\varphi$ has a truth value in the context of $P$, and that truth value is "true", then the formula $\varphi^{\partial}$ is also assigned the truth value "true" in the context of $P^{\partial}$. So I was trying to keep in mind the idea that some formulas don't necessarily have a truth value assigned to them.
On a related note, I haven't said how I'm assigning truth values to a formula. That seems like something important I should try to understand! Considering your example $x \leq y$, if $x$ and $y$ are both variables, that means that (by my definition above for the variables in the alphabet for $P$) both $x$ and $y$ are specific elements of $P$. [I now realize that this definition is not consistent with the definition of "alphabet" I am referencing.] So it seems like $x \leq y$ is the sort of formula I would want to assign a truth value to: it should be true if in the poset $P$ we really have $x \leq y$ (and it should be false otherwise).
Interestingly, your discussion of the formula $x \leq y$ seems to indicate that this formula specifies something that can be true or false, and only takes on a particular truth value once we assign specific poset elements to $x$ and $y$. We then end up having some subset of $P \times P$ for which $x \leq y$ is true. So it seems like we are considering how to assign a truth value to $x \leq y$ in quite different ways! Upon reflection, I think this is because I was confused about how to use "variables".
I think my (incorrect) understanding of "variables" was that they are a set of symbols that one is free to define as one wishes. I was employing this freedom to define the set of variables in terms of the set of elements in $P$. But I think now that I actually don't have this freedom, according to the definition of an alphabet I am trying to use. Instead, I believe that "variables" are supposed to be a fixed set of symbols that I don't get to choose, and in particular I don't get to define this set in terms of the elements of a chosen poset.
Here's the definition of an alphabet for a first-order language that I want to use:
[image: definition of an alphabet of a first-order language]
I see that the set of variables is fixed in this definition. So, I was not working with this definition correctly when I defined the set of variables to be the set of elements of the poset $P$!
In summary, I think your comments helped me realize that I was confused about what a "variable" is in the context of the above definition for an alphabet of a first-order language. That should help me as I continue to think about this!
Fundamentally, a variable (in mathematics) is a placeholder, like $x$ or $y$, that can be filled with anything (of the appropriate type). So if I ask you what the value of $x + y$ is, you can't answer (with a number) until I tell you what numbers to plug in for $x$ and $y$. Similarly, if I ask you whether $x \leq y$, you can't answer until I tell you what elements of the poset to plug in for $x$ and $y$.
By contrast with variables, a formal language can also contain constants, which are symbols that have a particular fixed meaning (once a particular model is chosen). So for instance in the theory of the real numbers, there are constants like $0$, $1$, $\pi$, and so on. So if you start from some poset $P$ and are constructing a first-order theory for it, you could choose to include constants corresponding to some or all of the elements of that poset.
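For instance (a made-up illustration, with $c$ a constant symbol naming a fixed element of the poset):

$$\forall x\, (c \leq x)$$

is a sentence - it is simply true or false in $P$ once we know which element $c$ names - whereas

$$\forall x\, (y \leq x)$$

has the free variable $y$ and only acquires a truth value after an element of $P$ is plugged in for $y$.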
I'm not sure if this is relevant at all, and I know almost no categorical logic, but I wanted to mention it as it involves logic, statements, and posets. What if there was a category where the objects were true statements and morphisms were a form of implication? For instance, if statement A being true implies that statement B is true, then there is a morphism A -> B. And if you need statements A, B, and C to show that D is true, then you can have some sort of product (like a tensor product) A x B x C -> D. I believe this would form a poset? Please correct me if I'm wrong!
If you change "the objects are true statements" to "the objects are all statements", then what you have is the [[Lindenbaum algebra]] of the theory. This is the "0-categorical" version of the [[syntactic category]] that plays an essential role in categorical logic.
Hmm, the nLab page [[Lindenbaum algebra]] redirects to [[propositional theory]], which is not very helpful. But [[Lindenbaum-Tarski algebra]] is not much better, nor is the wikipedia page on Lindenbaum-Tarski algebra. Someone should improve the situation...
True, the actually relevant information I found under "type theory" of all places: [[type theory]]
With my better understanding of variables, I now define the alphabet for our first-order language to be: a fixed set of variables $v_0, v_1, v_2, \dots$; a constant symbol for each element of $P$; the relation symbol $\leq$; the logical connectives and quantifiers; and parentheses.
The terms are then the variables and the constants.
The formulas are then defined inductively essentially as before (except now our terms are different).
Next, to start to interpret these formulas, we can define a "structure" consisting of: a "domain" set (here, the underlying set $A$ of $P$), an element of $A$ for each constant symbol, and a relation on $A$ interpreting the symbol $\leq$.
This structure will help us interpret formulas in $P$, and I think it will help us talk about whether formulas (or sentences in particular) are true or false.
That's looking good! However, it's a bit archaic to describe formulas as untyped strings involving explicit parentheses and so on. When one is actually implementing a proof assistant, there is of course a parsing step that has to take an untyped string of characters as input and know how to reject malformed strings, but when doing mathematics it's generally better to leave that step for the programmers and work with abstract syntax trees.
Building a language (or theory) based on a given fixed poset $P$, in particular positing a constant term for each element of $P$, is something you can do, and it's also a maneuver that is used for certain purposes in model theory, but I'm not quite clear on why you feel a need for this move here.
(If it were me doing the exercise, I think I'd work with the language of posets plain and simple, and not deal with all these constants running around. The language of posets doesn't have constants.)
An AST is a rooted tree where each vertex is labeled by a symbol of the alphabet (now without parentheses) and has the correct number of children for that operator, labeled by things of the correct type. So a node labeled $\leq$ has two children labeled by terms and is itself a formula, a node labeled $\wedge$ has two children labeled by formulas and is itself another formula, a node labeled by a variable has no children and is a term, etc.
Todd Trimble said:
Building a language (or theory) based on a given fixed poset $P$, in particular positing a constant term for each element of $P$, is something you can do, and it's also a maneuver that is used for certain purposes in model theory, but I'm not quite clear on why you feel a need for this move here.
(If it were me doing the exercise, I think I'd work with the language of posets plain and simple, and not deal with all these constants running around. The language of posets doesn't have constants.)
I think I want to be able to consider formulas where we are comparing (using $\leq$) a specific element of the poset to a variable that can range over different elements in the poset. I thought that introducing these constants would make it easy to consider such formulas. But maybe these constants aren't needed to achieve this! I can try proceeding without the constants, and see what happens...
Mike Shulman said:
An AST is a rooted tree where each vertex is labeled by a symbol of the alphabet (now without parentheses) and has the correct number of children for that operator, labeled by things of the correct type. So a node labeled $\leq$ has two children labeled by terms and is itself a formula, a node labeled $\wedge$ has two children labeled by formulas and is itself another formula, a node labeled by a variable has no children and is a term, etc.
Thanks for telling me about abstract syntax trees! If we were to define formulas using these trees instead of strings of symbols, I am guessing the process is similar? We presumably define some small "basic" trees (that correspond to certain simple formulas) and then describe some operations that assemble smaller trees into bigger trees (presumably using a symbol like $\wedge$ as the root of the new tree). So we'll again end up with an inductive definition for formulas, but now in terms of trees.
David Egolf said:
Todd Trimble said:
Building a language (or theory) based on a given fixed poset $P$, in particular positing a constant term for each element of $P$, is something you can do, and it's also a maneuver that is used for certain purposes in model theory, but I'm not quite clear on why you feel a need for this move here.
(If it were me doing the exercise, I think I'd work with the language of posets plain and simple, and not deal with all these constants running around. The language of posets doesn't have constants.)
I think I want to be able to consider formulas where we are comparing (using $\leq$) a specific element of the poset to a variable that can range over different elements in the poset. I thought that introducing these constants would make it easy to consider such formulas. But maybe these constants aren't needed to achieve this! I can try proceeding without the constants, and see what happens...
There's nothing (that I can see) that is particularly wrong doing what you are doing, so don't worry about my comment too much. Whatever sees the problem through to completion is fine. I was just wondering what you had in mind, and you've explained, so go ahead and carry on.
There's a conceptual difference, of course, between using "the language of posets", which is essentially the bare minimum version of first-order logic needed to write down the axioms for posets, and a language that also has a constant for each element of a specific poset $P$. In the first there's a standard way to get an $n$-ary relation on any poset from a formula with $n$ free variables, whereas in the latter it's only reasonable to hope for such relations on a single poset, namely $P$. So the former allows for some tricks that the latter does not.
For example, we can check that given an isomorphism of posets $f : P \to Q$, any sentence holds in $P$ iff the same sentence holds in $Q$. (Remember, sentences are formulae with no free variables.)
But for the question you're considering, you are lucky: $P$ and $P^{\mathrm{op}}$ have the same underlying set, so you'll give each one the same language even if your language includes a constant for each element of your poset.
John Baez said:
$P$ and $P^{\mathrm{op}}$ have the same underlying set, so you'll give each one the same language even if your language includes a constant for each element of your poset.
Well, you'll have the same set of formulas, but not the same set of axioms: the axiom for $P$ becomes the axiom for $P^{\mathrm{op}}$. So yes, if by "language" you mean the set of formulas, but I think usually the whole phrase "internal language" refers to the whole theory, including the axioms.
I was using "language" in the sense of (old-fashioned?) first-order classical logic, e.g.:
That's the sense I grew up with: first you choose a language, which specifies "what you can say", and then you choose axioms, which specify "what's true".
I think "internal language" is a later concept - I don't know who started talking about that, but maybe it came along after the "Mitchell-Benabou language of a topos"?
Now that I've defined a structure (although I probably want to change it, because I suspect I don't need the constants I introduced!), we can introduce the concept of an "assignment". An assignment is a function from the set of variables to the "domain" set in our structure. In this case, it is a function $\beta$ from the set of variables to $A$, where $A$ is the underlying set of the poset $P$.
Then, an "interpretation" of our terms and formulas is given by a pair: a structure and an assignment in that structure. For a formula $\varphi$, we say that $\mathcal{I} \models \varphi$ if applying our structure and assignment to $\varphi$ yields a true statement in the context of our structure. (The definition in the book I'm following is much longer, but I hope this is the basic idea.)
I hope I can state the thing I want to prove this way:
- let $\mathcal{A}$ be the structure for the poset $P$, which specifies the underlying set $A$ of $P$ and the relation $\leq_P$ (from the poset $P$) on $A$
- let $\mathcal{A}^{\partial}$ be the structure for the poset $P^{\partial}$, which specifies the underlying set $A$ of $P^{\partial}$ and the relation $\leq_{P^{\partial}}$ on $A$. We have $\leq_{P^{\partial}} = \geq_P$, where $\geq_P$ is the relation from the poset $P$.
To give an example, if our original formula is $v_0 \leq v_1$, our structure is the underlying set of $P$ together with the relation $\leq_P$ from $P$, and our assignment sends $v_0$ to $a$ and $v_1$ to $b$ for some $a, b \in A$, then this formula becomes $a \leq_P b$. If $\mathcal{I}$ is the interpretation we just used, then $\mathcal{I} \models v_0 \leq v_1$ exactly if $a \leq_P b$ is true.
To finish this example, let $\mathcal{I}^{\partial}$ be the same as $\mathcal{I}$ except that it uses the structure which contains $\leq_{P^{\partial}}$ instead of $\leq_P$. Then the interpretation of $v_0 \leq v_1$ under $\mathcal{I}^{\partial}$ is $a \leq_{P^{\partial}} b$, which is true exactly if $b \leq_P a$, which is true exactly if $a \geq_P b$.
So, if $a \geq_P b$, then $\mathcal{I}^{\partial} \models v_0 \leq v_1$.
As I am learning categorical logic, I find it immensely helpful to always keep what I believe to be the central concept of categorical logic in mind: the syntax-semantics duality. I.e., the adjunction between categories of theories (like type theories) and certain categories of categories, which assigns to a type theory its syntactic category and to a category its internal or "underlying" type theory, which then corresponds to its internal logic. My main question, relative to dual posets and more generally opposite categories, is where the internal logic functor within this adjunction maps opposite categories to in the category of type theories. My conjecture is that there exists an isomorphism (or at least an equivalence, if isomorphism is too strong) between the corresponding underlying type theories of opposite categories within that category of type theories. If this is the case, it would perfectly explain why proving something in one category automatically proves the dual case in the opposite category. As an alternative conjecture, if it isn't true that there exists such an isomorphism or equivalence, there should at the very least be some relationship between these underlying type theories that could explain this, right? So, is my "conjecture" or its alternatives true? Thanks!
David Egolf said:
I hope I can state the thing I want to prove this way:
- let $\mathcal{A}$ be the structure for the poset $P$, which specifies the underlying set $A$ of $P$ and the relation $\leq_P$ (from the poset $P$) on $A$
- let $\mathcal{A}^{\partial}$ be the structure for the poset $P^{\partial}$, which specifies the underlying set $A$ of $P^{\partial}$ and the relation $\leq_{P^{\partial}}$ on $A$. We have $\leq_{P^{\partial}} = \geq_P$, where $\geq_P$ is the relation from the poset $P$.
So you're saying the relation $\leq_{P^{\partial}}$ is the same as the relation $\geq_P$?
If so, what's the difference between $\mathcal{A}$ and $\mathcal{A}^{\partial}$? If I understand you correctly, they have the same underlying set and the same relation on it.
Yes, @John Baez, I think $\leq_{P^{\partial}}$ and $\geq_P$ specify the same relation on $A$. I think that makes sense because $x \leq y$ in $P^{\partial}$ exactly when $y \leq x$ in $P$ (by the definition of the dual of a partial order).
So, I suppose the only difference between $\mathcal{A}$ and $\mathcal{A}^{\partial}$ is that they use a different symbol for the specified relation on $A$.
I notice that $\models$ is defined inductively on formulas in "Mathematical Logic" (by Ebbinghaus):
[image: the inductive definition of the satisfaction relation]
So if we can show that $\mathcal{I} \models t_1 \equiv t_2$ iff $\mathcal{I}^{\partial} \models t_1 \equiv t_2$, and that $\mathcal{I} \models t_1 \leq t_2$ iff $\mathcal{I}^{\partial} \models t_1 \leq t_2$, then I am guessing that implies that $\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\partial} \models \varphi$. (In our case the only $n$-ary relation is the 2-ary relation $\leq$.)
And since $\mathcal{A}$ and $\mathcal{A}^{\partial}$ are extremely similar - they just use a different symbol for the same relationship on $A$ in their structures - it seems that we should be able to show this quickly. Since the assignment is the same in $\mathcal{I}$ and $\mathcal{I}^{\partial}$, we immediately have that $\mathcal{I} \models t_1 \equiv t_2$ iff $\mathcal{I}^{\partial} \models t_1 \equiv t_2$. That is because $\mathcal{I} \models t_1 \equiv t_2$ iff $\mathcal{I}(t_1) = \mathcal{I}(t_2)$, and in our case $\mathcal{I}(t_i) = \beta(t_i) = \mathcal{I}^{\partial}(t_i)$ for variables $t_1$ and $t_2$.
Further, $\mathcal{I} \models t_1 \leq t_2$ iff $(\mathcal{I}(t_1), \mathcal{I}(t_2)) \in (\leq_P)$, which in our case can be rewritten as $\mathcal{I}(t_1) \leq_P \mathcal{I}(t_2)$. Similarly, $\mathcal{I}^{\partial} \models t_1 \leq t_2$ iff $\mathcal{I}^{\partial}(t_1) \leq_{P^{\partial}} \mathcal{I}^{\partial}(t_2)$. Since $\leq_P$ and $\leq_{P^{\partial}}$ specify the same relationship, we have $\mathcal{I} \models t_1 \leq t_2$ iff $\mathcal{I}^{\partial} \models t_1 \leq t_2$.
Because of how $\models$ is defined inductively on formulas, I am hoping this means that $\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\partial} \models \varphi$ for any formula $\varphi$ in our first-order language.
Reflecting on this process:
At the end you claimed, or at least hoped to show, that $\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\partial} \models \varphi$
for any formula $\varphi$. This would be automatic if we had $\mathcal{I} = \mathcal{I}^{\partial}$. But as far as I can tell you're not claiming $\mathcal{I} = \mathcal{I}^{\partial}$; instead you say "$\mathcal{A}$ and $\mathcal{A}^{\partial}$ are extremely similar - they just use a different symbol for the same relationship on $A$ in their structures". If this is what's going on, a formula that contains the first symbol ($\leq_P$) will not contain the second symbol ($\leq_{P^{\partial}}$), so I don't think
"$\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\partial} \models \varphi$" literally makes sense.
Personally I would go about this a bit differently. For example, I might define a way of transforming the formula $\varphi$ into a formula $\varphi^{\ast}$ by replacing every instance of the symbol $\leq_P$ with the symbol $\leq_{P^{\partial}}$, and then show $\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\partial} \models \varphi^{\ast}$.
Or, I might try something else: I might define a "language for posets" with a symbol $\leq$ that doesn't mention the name of the poset, and define a way of transforming any formula $\varphi$ in this language into a formula $\varphi^{\ast}$ by replacing every expression of the form $x \leq y$ with an expression of the form $y \leq x$, and then show $\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\ast} \models \varphi^{\ast}$
for some suitable interpretation $\mathcal{I}^{\ast}$. The slight advantage here is that $\varphi$ and $\varphi^{\ast}$ are in the same language, and this language applies to all posets.
It's possible I'm not understanding you or your setup exactly, so take everything I say with suitable grains of salt.
John Baez said:
At the end you claimed, or at least hoped to show, that $\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\partial} \models \varphi$
for any formula $\varphi$. This would be automatic if we had $\mathcal{I} = \mathcal{I}^{\partial}$. But as far as I can tell you're not claiming $\mathcal{I} = \mathcal{I}^{\partial}$; instead you say "$\mathcal{A}$ and $\mathcal{A}^{\partial}$ are extremely similar - they just use a different symbol for the same relationship on $A$ in their structures". If this is what's going on, a formula that contains the first symbol ($\leq_P$) will not contain the second symbol ($\leq_{P^{\partial}}$), so I don't think
"$\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\partial} \models \varphi$" literally makes sense.
A formula in our first-order language doesn't contain either $\leq_P$ or $\leq_{P^{\partial}}$. It might contain $\leq$, though. This symbol gets swapped out for the appropriate symbol once we pick a particular structure and interpret that formula using that structure.
For example, once we pick a structure and an assignment, the formula $v_0 \leq v_1$ gets turned into something like $\leq_P(a, b)$, for some $a, b \in A$. If we pick a different structure (but the same assignment), interpreting the same formula could yield $\leq_{P^{\partial}}(a, b)$. (I prefer to write these expressions more intuitively as $a \leq_P b$ and $a \leq_{P^{\partial}} b$.)
John Baez said:
for some suitable interpretation $\mathcal{I}^{\ast}$. The slight advantage here is that $\varphi$ and $\varphi^{\ast}$ are in the same language, and this language applies to all posets.
I like the idea of having only one language. I think there is only one language (for formulas) in the approach that I am describing, as well.
In case having another example would be useful, here is an example (from the book I'm referencing) of using a structure and an assignment to interpret a formula:
[image: example of interpreting a formula using a structure and an assignment]
In this example, the fancy $\mathfrak{A}$ is the structure and $\beta$ is the assignment. We start with a formula, and by making use of the specified structure and assignment we end up with a concrete statement. Somewhat confusingly, the $+$ in the original formula is just a function symbol, while the $+$ in the resulting statement is referring to the actual addition function.
Similarly, returning to the main example we care about, this "interpretation" process moves us from formulas with the relation symbol $\leq$ to expressions involving a relation on $A$ (either $\leq_P$ or $\leq_{P^{\partial}}$).
(I hope I explained this correctly! I just read about these concepts a couple days ago, and it's possible I'm misunderstanding something.)
John Baez said:
But as far as I can tell you're not claiming $\mathcal{I} = \mathcal{I}^{\partial}$; instead you say "$\mathcal{A}$ and $\mathcal{A}^{\partial}$ are extremely similar - they just use a different symbol for the same relationship on $A$ in their structures".
I was reflecting on this a bit more. I've realized that I'm not 100% sure what it means for two interpretations (say $\mathcal{I}$ and $\mathcal{I}^{\partial}$) to be equal. $\mathcal{I}$ and $\mathcal{I}^{\partial}$ are the same in the sense that they contain all the same sets, functions, and relations. But they are not the same in the sense that $\mathcal{I}$ and $\mathcal{I}^{\partial}$ yield syntactically different expressions when used to interpret a formula. So if we view $\mathcal{I}$ and $\mathcal{I}^{\partial}$ as specifying functions from formulas (like $v_0 \leq v_1$) to expressions (like $a \leq_P b$ with $a, b \in A$), then the resulting functions are not equal.
My current guess is that $\mathcal{I}$ and $\mathcal{I}^{\partial}$ are equal, but the "interpretation functions" they induce are not equal.
I don't know a formal definition of what it means for two things like $\mathcal{I}$ and $\mathcal{I}^{\partial}$ to be equal, though, so I'm not sure about this. (Is a tuple of things equal to another tuple if (1) the two tuples have the same number of elements and (2) the $i$-th element in one tuple is equal to the $i$-th element in the other tuple, for all $i$? If that is true, then I think $\mathcal{I} = \mathcal{I}^{\partial}$.)
David Egolf said:
A formula in our first-order language doesn't contain either $\leq_P$ or $\leq_{P^{\partial}}$. It might contain $\leq$, though. This symbol gets swapped out for the appropriate symbol once we pick a particular structure and interpret that formula using that structure.
Okay. I guess I don't completely understand your attitude toward syntax and semantics. The traditional approach to model theory in first-order logic doesn't think of an interpretation as "swapping out one symbol for another": instead, it maps each formula (which is a syntactic entity, a thing built from symbols) to a subset of $A^X$, where $A$ is some set (a semantic entity, not just the letter $A$) and $X$ is the set of free variables in the formula.
I suppose if you are taking an ultra-syntactic approach, basically denying that semantics has a separate life from syntax, you may think of this subset of $A^X$ as itself described by symbols! This is a coherent viewpoint, and I've actually advocated it at times, but it's not the traditional one I learned in school. I figure if you're going to be radical, you should at least know you're being radical. :upside_down:
Returning to your example, here's a more traditional way to say what's going on. We have a formula in the language of posets. To keep things simple, take the formula $x \leq y$.
A model of the theory of posets is a set - let me call it $P$ instead of $A$, just to be friendly - together with a recipe for assigning a subset of $P^X$ to each formula in the language of posets whose set of free variables is $X$. This recipe needs to obey some rules.
In the end, a model of the theory of posets amounts to a poset: a set $P$ together with a binary relation which I'll call $\leq_P$. Note: the symbol $\leq_P$ is part of my meta-mathematical chat here. I'm giving it a symbol just because I need symbols to talk to you: we say it's part of the metalanguage. It exists on a different plane than our friend $\leq$.
In our example formula, $X = \{x, y\}$.
So, a model of the formula $x \leq y$
is a subset of $P^X \cong P \times P$ consisting of pairs $(a, b)$ such that $a \leq_P b$.
So, you see, the model is sending a formula to a certain subset of a certain set, not another formula with different symbols in it.
The difference may seem a petty one, and indeed I'm starting to feel petty writing so much about this. But logicians argue endlessly about nuances like this - one reason I'm not a logician. :upside_down:
David Egolf said:
Is a tuple of things equal to another tuple if (1) the two tuples have the same number of elements and (2) the $i$-th element in one tuple is equal to the $i$-th element in the other tuple, for all $i$? If that is true, then I think $\mathcal{I} = \mathcal{I}^{\partial}$.)
Yes, that's the usual definition of equality for tuples.
So my (mild) complaint was that in your discussion of duality, you wound up giving a rather elaborate discussion leading up to a proof that $\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\partial} \models \varphi$,
followed ultimately by an admission that $\mathcal{I} = \mathcal{I}^{\partial}$, which makes the proof exceedingly easy: you are just proving that $\mathcal{I} \models \varphi$ iff $\mathcal{I} \models \varphi$.
I was offering a way to think about duality where duality changes a formula $\varphi$, say $x \leq y$,
into a different formula, say $\varphi^{\dagger}$, namely $y \leq x$.
In my approach duality also changes the interpretation $\mathcal{I}$ to a different interpretation, say $\mathcal{I}^{\dagger}$. And then we can show $\mathcal{I} \models \varphi$ iff $\mathcal{I}^{\dagger} \models \varphi^{\dagger}$.
This seems a bit more exciting to me than showing $\mathcal{I} \models \varphi$ iff $\mathcal{I} \models \varphi$.
By the way, I'm using daggers instead of stars just so we can talk about your approach and mine together without fighting over what notation means. Secretly I use stars too. :upside_down:
Awesome, thanks for explaining that @John Baez ! I was not appreciating that a structure (and apparently a "model" as well) is really about specifying certain functions and relations. And although we use new symbols in the resulting interpreted formulas, these symbols can be viewed as shorthand (I think) for the full ordered-pair information of the functions and relations. From this perspective, $a \leq_P b$ and $a \leq_{P^{\partial}} b$ specify the exact same thing: the claim that $(a, b)$ is in the subset of $A \times A$ specified by both $\leq_P$ and $\leq_{P^{\partial}}$.
John Baez said:
This seems a bit more exciting to me than showing $\mathcal{I} \models \varphi$ iff $\mathcal{I} \models \varphi$.
Now that I understand that my $\mathcal{I}$ and $\mathcal{I}^{\partial}$ are equal, I agree! I will plan to give your approach a try, hopefully later today.
I now begin again, with a different approach. (I'm trying to follow the approach John Baez sketched above, although I'm not sure I'll actually manage to do so right away!)
These things stay the same: the alphabet, the terms, the formulas, and the assignments.
But we now define our two structures in a different way:
- the structure $\mathcal{A}$ interprets the relation symbol $\leq$ as the relation $\leq_P$ on $A$
- the structure $\mathcal{A}^{\partial}$ interprets the relation symbol $\leq$ as the relation $\leq_{P^{\partial}}$ on $A$, which is the relation $\geq_P$
Notice that $\leq_P$ and $\leq_{P^{\partial}}$ are different relationships in general.
Then, for each assignment $\beta$, we get two different interpretations: $\mathcal{I} = (\mathcal{A}, \beta)$ and $\mathcal{I}^{\partial} = (\mathcal{A}^{\partial}, \beta)$.
Next, we wish to define a function $\sigma$ on formulas that intuitively "swaps" the variables provided to our 2-ary relation symbol $\leq$. If we start with the formula $v_0 \leq v_1$, then $\sigma(v_0 \leq v_1) = v_1 \leq v_0$.
We want to define what $\sigma$ does for any formula $\varphi$. Once we have done so, we will then denote $\sigma(\varphi)$ by $\varphi^{\ast}$.
I will now attempt to define $\sigma$ on all formulas:
- $\sigma(t_1 \equiv t_2) = t_1 \equiv t_2$
- $\sigma(t_1 \leq t_2) = t_2 \leq t_1$
- $\sigma(\neg \varphi) = \neg \sigma(\varphi)$
- $\sigma(\varphi \wedge \psi) = \sigma(\varphi) \wedge \sigma(\psi)$, and similarly for $\vee$ and $\to$
- $\sigma(\forall x\, \varphi) = \forall x\, \sigma(\varphi)$ and $\sigma(\exists x\, \varphi) = \exists x\, \sigma(\varphi)$
Denoting the interpretation $(\mathcal{A}^{\partial}, \beta)$ by $\mathcal{I}^{\partial}$, we now wish to show that $\mathcal{I} \models \varphi$ exactly when $\mathcal{I}^{\partial} \models \varphi^{\ast}$.
As a small check on this setup, I first try this out for the formula $\varphi$ equal to $v_0 \leq v_1$. Let $\beta$ be the assignment that sends $v_0$ to $a$ and $v_1$ to $b$. Then $\mathcal{I} \models \varphi$ exactly if $a \leq_P b$.
Since $\varphi^{\ast} = v_1 \leq v_0$, we have that $\mathcal{I}^{\partial} \models \varphi^{\ast}$ exactly if $b \leq_{P^{\partial}} a$. Since $b \leq_{P^{\partial}} a$ exactly when $a \leq_P b$, we see that $\mathcal{I} \models \varphi$ exactly when $\mathcal{I}^{\partial} \models \varphi^{\ast}$ for this particular $\varphi$.
I think the next step is probably to do a kind of induction over formulas to show that $\mathcal{I} \models \varphi$ exactly when $\mathcal{I}^{\partial} \models \varphi^{\ast}$, for any formula $\varphi$ in our first-order language.
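In case a sketch helps, here is my guess at how the remaining cases might go (the inductive hypothesis has to be taken over all assignments $\beta$ so that the quantifier case goes through; $\mathcal{I}\tfrac{a}{v}$ denotes the interpretation that agrees with $\mathcal{I}$ except that the variable $v$ is sent to $a$):

$$
\begin{aligned}
\mathcal{I} \models \neg\varphi &\iff \mathcal{I} \not\models \varphi \iff \mathcal{I}^{\partial} \not\models \varphi^{\ast} \iff \mathcal{I}^{\partial} \models (\neg\varphi)^{\ast} \\
\mathcal{I} \models \varphi \wedge \psi &\iff \mathcal{I} \models \varphi \text{ and } \mathcal{I} \models \psi \iff \mathcal{I}^{\partial} \models \varphi^{\ast} \text{ and } \mathcal{I}^{\partial} \models \psi^{\ast} \iff \mathcal{I}^{\partial} \models (\varphi \wedge \psi)^{\ast} \\
\mathcal{I} \models \forall v\, \varphi &\iff \mathcal{I}\tfrac{a}{v} \models \varphi \text{ for all } a \in A \iff \mathcal{I}^{\partial}\tfrac{a}{v} \models \varphi^{\ast} \text{ for all } a \in A \iff \mathcal{I}^{\partial} \models (\forall v\, \varphi)^{\ast}
\end{aligned}
$$

(and similarly for $\vee$, $\to$, and $\exists$).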
This looks great to me! You'll need to do the induction for formulae, not just sentences (= formulae without free variables), since sentences are made of formulae, not just sentences. Thus, in the induction you'll need to talk about something other than just "$\mathcal{I} \models \varphi$ exactly when $\mathcal{I}^{\partial} \models \varphi^{\ast}$",
since this makes sense only when $\varphi$ is a sentence, not just any formula. I think you'll need to talk, for each formula $\varphi$ with a set of free variables $X$, about the subset of $P^X$ that it defines.
For a sentence, $X = \emptyset$, so $P^X$ has one element, so a subset of it has either 0 or 1 elements, which correspond to the two options: $\varphi$ is either true or false.
That's great to hear!
I think that "$\mathcal{I} \models \varphi$" makes sense for any formula $\varphi$, not just when $\varphi$ is a sentence. That's because an interpretation includes an assignment $\beta$, which assigns particular values (in $A$) to all free variables.
At any rate, I hope to give some version of the induction a try in the next few days!
I'm going to be busy spending time with family for probably the next couple weeks. So, it may be a while before I get back to this! But I do look forward to finishing this up.
David Egolf said:
That's great to hear!
I think that "$\mathcal{I} \models \varphi$" makes sense for any formula $\varphi$, not just when $\varphi$ is a sentence. That's because an interpretation includes an assignment $\beta$, which assigns particular values (in $A$) to all free variables.
Okay, if that's how it works you can induct over formulae, not just sentences.