Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: mathematics

Topic: Free Boolean Algebras


view this post on Zulip Julius Hamilton (Jul 21 2024 at 18:31):

If anyone would like to give me a few hints on formulating the following mathematically, I would appreciate it. Thanks.

Here is the idea:

I will assume that it is not that important how we generate terms, for now. If a signature includes some constants, some variables, and some functions, we can generate a term algebra, which is related to the concept of a free algebra. However, I hope that I can ignore how the terms were generated, and just assume I have some set of terms, \{t_1, \ldots, t_n\}. (My only concern is whether we need to know which terms contain variable symbols once we start generating formulae from terms.)

Now, it seems to me that when we generate formulae from terms, we can look at those rules of formula-formation in a few different ways.

For example, we could distinguish between formulae of the form t_1 \equiv t_2 and t_1 R t_2; or we could say that \equiv is a relation symbol, and so combine those two "cases" into a single case: that of generating formulae by applying relation symbols to terms. This also seems similar to a free algebra, to me.

Similarly, we can consider what formulae can be generated by application of one of various logical connectives to formulae. However, I believe it is known that for a collection of formulae in propositional logic, the algebraic structure generated by application of logical connectives to those formulae will be a Boolean algebra. So to me, it seems like if we want to know what formulae can be generated for a given signature, we can create a layer of abstraction between terms and formulae. Once we know what terms we have, we know which atomic formulae we have. Once we know what atomic formulae we have, I think it follows that our logic would be a free Boolean algebra of n generators, where each atomic formula is one of the generators.

Or rather, it would be, if this were propositional logic. Because quantifiers are another way to create formulae from formulae, it would be nice to identify the algebraic structure they impart on a collection of formulae. I was thinking about how we could consider closed formulae as a special case of open formulae. For every open formula, when we map a signature to a structure, there is a set of elements in our domain which satisfy the formula, and a set of elements which do not. We cannot evaluate if an open formula is satisfied or not until we provide a variable assignment. This makes me think of closed formulae simply as "open formulae with zero free variables", analogous to other contexts in math where we have a "zero" version of a concept (like a function from an empty set).

I looked a little at the definition of Boolean algebras. They interested me, but I also want to explore different formulations for them. I found it interesting that the definition I saw included an element for tautology and for contradiction. I can imagine that this is an elegant algebraic formulation, as these elements can act as "distinguished elements", for example in a Boolean ring. On the other hand, I would prefer to think of the elements of a Boolean algebra as formulae from a signature, with a mapping from the elements into a set of truth-values, \{0, 1\}. It would be nice to not include \{0, 1\} as elements in the algebra.

It seems to me like it is the atomic formulae which are the most “interesting” part of any theory. If there is consistency in how term algebras are generated from a signature, and composite formulae from atomic formulae, it seems the thing that varies the most from theory to theory is that collection of atoms, which are the axioms.

Please let me know if you would like to suggest any ideas for me to think about regarding this. Thank you very much.

view this post on Zulip John Baez (Jul 21 2024 at 19:07):

Since your post is titled "free boolean algebra", I feel like posing a puzzle for you. How many elements does the free boolean algebra on an n element set have?

Even if it's orthogonal to your concerns, figuring this out is a good way to start understanding free boolean algebras.

view this post on Zulip Julius Hamilton (Jul 21 2024 at 19:51):

Thank you, I would love to undertake such an exercise. I'll work on it and get back to you. Thank you.

view this post on Zulip Julius Hamilton (Jul 23 2024 at 02:01):

I have been thinking about this and wanted to share where I am so far.

The axioms of a Boolean algebra are symmetrical. For conjunction and for disjunction, we have an axiom of commutativity, associativity, distributivity, identity, complement, and idempotence. For example, for conjunction, we have:
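The list that followed here appears to be missing from the archive. For conjunction, the six named axioms would standardly read (my reconstruction, not the original post's rendering):

```latex
\begin{aligned}
a \wedge b &= b \wedge a && \text{(commutativity)}\\
a \wedge (b \wedge c) &= (a \wedge b) \wedge c && \text{(associativity)}\\
a \wedge (b \vee c) &= (a \wedge b) \vee (a \wedge c) && \text{(distributivity)}\\
a \wedge 1 &= a && \text{(identity)}\\
a \wedge \neg a &= 0 && \text{(complement)}\\
a \wedge a &= a && \text{(idempotence)}
\end{aligned}
```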

Now, for a free algebra, you might think that if we begin with a set of n elements, representing some formulae, they would be generators of the algebra, but there would be infinitely many elements of the algebra, because we can keep applying the operations; for example, from a and b, we can form a \wedge b, a \wedge b \vee a, a \wedge \neg b \wedge a \vee \neg b, etc.

The key thing is that this does not happen, because the axioms alone can prove that any such expression can actually be reduced to a simpler form - a so-called “normal form” (for example, conjunctive or disjunctive normal form, with expressions something like (x \wedge y \wedge z) \vee (x \wedge \neg q)).

I am pretty sure if every formula in the deductive closure of a set of n formulas has a normal form, we can create equivalence classes of the formulas, and these become the elements of the Boolean algebra.

I need to keep thinking from here.

view this post on Zulip Ryan Wisnesky (Jul 23 2024 at 02:12):

a good exercise is to compare this reasoning with what happens in a heyting algebra

view this post on Zulip David Egolf (Jul 23 2024 at 02:30):

In case you are looking for references on this topic:

For myself, I also found it helpful to think about the free Boolean algebra on a one element set.

view this post on Zulip David Egolf (Jul 23 2024 at 02:48):

Looking at the Wikipedia page, I just noticed it discusses an interesting way to think about the elements of a free Boolean algebra.

Each element of a Boolean algebra can be expressed in terms of some generating elements and the symbols 0, 1, \land, \lor, \neg. For example, if a and b are elements of a Boolean algebra, then so is a \land b.

If we imagine we can assign some "truth value" (either 0 or 1) to each of the generating elements in our Boolean algebra, then expressions like a \land b can be understood as functions. In this case, we would have a function that takes in the assigned truth value of a and the assigned truth value of b, and then returns the "and" of those two truth values. So, a \land b can be associated to a function f : \{0,1\}^2 \to \{0,1\}, which takes in two truth values and returns the "and" of them. Different expressions, like (\neg a) \lor b for example, will correspond to different functions.
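This tabulation idea is easy to make concrete (a Python sketch of my own, with 0/1 standing for the truth values):

```python
from itertools import product

# Tabulate an expression in generators a, b as a function {0,1}^2 -> {0,1}.
def truth_table(f):
    return tuple(f(a, b) for a, b in product([0, 1], repeat=2))

a_and_b = lambda a, b: a & b            # a ∧ b
not_a_or_b = lambda a, b: (1 - a) | b   # (¬a) ∨ b

print(truth_table(a_and_b))     # (0, 0, 0, 1)
print(truth_table(not_a_or_b))  # (1, 1, 0, 1)
```

The two expressions give different tuples, i.e. different functions on truth values.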

If I am understanding the Wikipedia article correctly, the idea is that two expressions in a free Boolean algebra that may look different will actually correspond to the same element if they induce the same functions on truth values.

If that is true, then it seems like these two things are closely related:

view this post on Zulip Zoltan A. Kocsis (Z.A.K.) (Jul 23 2024 at 07:50):

Ryan Wisnesky said:

a good exercise is to compare this reasoning with what happens in a heyting algebra

One should clarify that characterizing free Heyting algebras ab ovo is at best an open-ended, at worst a futile exercise, especially for free HAs on more than one generator, double especially for somebody who just encountered free Boolean algebras for the first time.

view this post on Zulip John Baez (Jul 23 2024 at 08:28):

@Julius Hamilton - Please don't read those references @David Egolf suggested - not just yet, anyway. This puzzle is not so hard, and if you 'cheat' and look up the answer you'll miss a lot of its benefits. I'll give you a hint though:

As a trained mathematician, my natural instinct would be to study:

and after that practice warmup,

Counting the elements of these gives me a sequence of 3 numbers:

b_0, b_1, b_2

and I would try to guess the next one. Then I would either try to prove the pattern I'm seeing - or if I couldn't do that, try to count the elements of the free Boolean algebra on 3 elements. If I liked programming, maybe I'd write a program to list the elements of that.
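Such a program might look like this (a sketch of my own, not from the thread): identify each element with its truth table on the n generators, then close under \neg, \wedge, \vee in "rounds" until nothing new appears.

```python
from itertools import product

def free_boolean_algebra_size(n):
    """Count elements of the free boolean algebra on n generators,
    identifying each element with its truth table."""
    assignments = list(product([0, 1], repeat=n))
    # constants 0 and 1, plus the generators themselves
    elements = {tuple(0 for _ in assignments), tuple(1 for _ in assignments)}
    elements |= {tuple(a[i] for a in assignments) for i in range(n)}
    # close under not, and, or until a fixed point is reached
    while True:
        current = list(elements)
        new = {tuple(1 - v for v in x) for x in current}
        for x in current:
            for y in current:
                new.add(tuple(p & q for p, q in zip(x, y)))
                new.add(tuple(p | q for p, q in zip(x, y)))
        if new <= elements:
            break
        elements |= new
    return len(elements)

print([free_boolean_algebra_size(n) for n in range(3)])
```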

view this post on Zulip John Baez (Jul 23 2024 at 08:33):

I'm trying to get you to think concretely and engage your pattern recognition - a key skill even in category theory. You can get fooled, around here, into thinking that all problems should be solved in a top-down deductive way. In fact that step often comes only after you know the answer.

view this post on Zulip Julius Hamilton (Jul 23 2024 at 20:27):

Thanks. I am not finished yet, but I decided to share some work in progress.

I think the idea behind these diagrams is that the Boolean set \{0,1\} is closed under the Boolean operations, and if we augment it with an element a, we also have to add \neg a, and then the set \{0, 1, a, \neg a\} is closed. I believe the point is that all possible sequences of applications of Boolean operations will always reduce to an element in the set \{0, 1, a, \neg a\}. I expect this to get a little more complicated when I add in another element b, but the principle will be of the same nature - that there will be a fixed set of elements which all the elements of the free Boolean algebra reduce to. I will explore that next.

Then, maybe, a proof of the size of a Boolean algebra can be related to a proof by induction that a Boolean algebra augmented with an element is a Boolean algebra.

B2.png

B2_diagram.png
B1.png

F1.png
F1_diagram.png

F1_detailed.png
F1_detailed2.png

view this post on Zulip John Baez (Jul 23 2024 at 20:31):

It's hard to read those pictures. Can you say how many elements there are in the free Boolean algebra on an n-element set in some examples like n = 0, 1, 2 and maybe even 3?

view this post on Zulip Julius Hamilton (Jul 23 2024 at 20:36):

Ok.

n = 0
S = \{\}
B(n) = \{\}
|B(n)| = 0

(assuming that this is valid because we can say that all the axioms are "for all elements in the set", so they're trivially met.)

n = 1
S = \{1\}
B(n) = \{1\}
|B(n)| = 1

(in this scenario, 0 = 1)

n = 2
S = \{0, 1\}
B(n) = \{0, 1\}
|B(n)| = 2

n = 3
S = \{0, 1, a\}
B(n) = \{0, 1, a, \neg a\}
|B(n)| = 4

Let me know if I made any mistakes. Thanks.

I think my approach is to assume that the elements of the set are generators, so for S = \{0, 1, a\}, a Boolean algebra will require the closure of this set, B(n) = \{0, 1, a, \neg a\}. But I now realize that instead of assuming there is an element 0 or 1 in the set, I think I can infer that one of the elements acts as 0 or 1, and start with a set of "anonymous" elements \{a, b, \ldots\}.

I'll now be walking through n = 4 and exploring what I said above about starting with a set of arbitrary elements. Thank you very much.

view this post on Zulip John Baez (Jul 23 2024 at 21:04):

You're making an easily correctable mistake. To get the free boolean algebra on some elements x_1, \dots, x_n we build up all possible expressions in these elements using 0, 1, \vee, \wedge and \neg, and use the boolean algebra laws to say when two of these expressions are equal.

Any boolean algebra is required to have 0 and 1, so when we talk about the free boolean algebra on some set S, we don't include 0 and 1 in that set S: they come 'for free', as it were.

So what you're calling B(3) is actually the free boolean algebra on one element, namely a.

With this correction, you are correctly showing that the free boolean algebra on 1 element a has 4 elements: 0, 1, a, \neg a.

Similarly, you showed that the free boolean algebra on no elements has 2 elements: 0, 1.

view this post on Zulip John Baez (Jul 23 2024 at 21:05):

So, you have begun computing the number of elements in the free boolean algebra on n elements, say b_n, and you've reported these findings:

b_0 = 2, b_1 = 4.

view this post on Zulip John Baez (Jul 23 2024 at 21:05):

This is not quite enough to guess a pattern, so b_3 should be helpful.

view this post on Zulip John Baez (Jul 24 2024 at 08:16):

Actually, now that I think of it, there's an obvious guess about b_3 based on the above data. So if I were in your situation I would check that guess.

view this post on Zulip Morgan Rogers (he/him) (Jul 24 2024 at 09:02):

Do you mean b_2?

view this post on Zulip John Baez (Jul 24 2024 at 12:11):

I'm sorry, I meant to say b_2. b_2 is much easier to compute by hand than b_3. In fact all the stuff we learn about "truth tables" for binary operations in a boolean algebra is really an investigation of the free boolean algebra on 2 generators!

view this post on Zulip Julius Hamilton (Jul 25 2024 at 04:12):

You're making an easily correctable mistake. To get the free boolean algebra on some elements x_1, \dots, x_n we build up all possible expressions in these elements using 0, 1, \vee, \wedge and \neg, and use the boolean algebra laws to say when two of these expressions are equal.

Any boolean algebra is required to have 0 and 1, so when we talk about the free boolean algebra on some set S, we don't include 0 and 1 in that set S: they come 'for free', as it were.

So what you're calling B(3) is actually the free boolean algebra on one element, namely a.

With this correction, you are correctly showing that the free boolean algebra on 1 element a has 4 elements: 0, 1, a, \neg a.

Similarly, you showed that the free boolean algebra on no elements has 2 elements: 0, 1.

Ok, got you.

Now assume we have two elements, a and b.

I prefer to think about what elements we can generate through one application of any of the operations, before considering a “second round” of application. I don’t know if people do it other ways, for example exploring many applications of one operation before moving to the next.

Something like this:

Current elements: \{0, 1, a, b\}

One application of \neg:

\neg a, \neg b

One application of \wedge:

a \wedge b

One application of \vee:

a \vee b

(I am skipping elements which reduce via the axioms. For example, a \wedge a = a by idempotence. a \vee 0 = a by identity. a \vee 1 = 1 by absorption. b \wedge a = a \wedge b via commutativity).

Current set of elements:
\{0, 1, a, \neg a, b, \neg b, a \wedge b, a \vee b\}

Second round of application of \neg:

\neg (a \wedge b)
\neg (a \vee b)

Second round of application of \wedge:

a \wedge \neg b
\neg a \wedge b
\neg a \wedge \neg b

I am pretty sure any expression with 3 variables, such as a \vee (b \wedge \neg a), can be reduced to an expression with at most 2 variables. For example, a \vee (b \wedge \neg a) = (a \vee b) \wedge (a \vee \neg a) = a \vee b.
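Reductions like this can be spot-checked mechanically (a small Python sketch of my own, with 0/1 for the truth values):

```python
from itertools import product

def equivalent(f, g, nvars):
    """Two Boolean expressions are equal in the free boolean algebra
    iff they agree on every 0/1 assignment to the generators."""
    return all(f(*v) == g(*v) for v in product([0, 1], repeat=nvars))

# a ∨ (b ∧ ¬a) reduces to a ∨ b
print(equivalent(lambda a, b: a | (b & (1 - a)),
                 lambda a, b: a | b, 2))  # True
```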

Second round of application of \vee:

a \vee \neg b
\neg a \vee b
\neg a \vee \neg b

I happen to have gotten the tip to reduce things to conjunctive or disjunctive normal form. So \neg (a \wedge b) = \neg a \vee \neg b.

So I think these are the unique terms for B_2:

\{0, 1, a, b, \neg a, \neg b, a \wedge b, a \wedge \neg b, \neg a \wedge b, \neg a \wedge \neg b, a \vee b, a \vee \neg b, \neg a \vee b, \neg a \vee \neg b\}.

My guess is |B_2| = 14.

But as of writing this, I realize there might be terms like (a \wedge \neg b) \vee (\neg a \wedge b), which I’ll check next.
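A truth-table check (my own Python sketch) confirms the suspicion: the 14 listed terms are pairwise inequivalent, and the two "exclusive" combinations are genuinely new.

```python
from itertools import product

assignments = list(product([0, 1], repeat=2))
table = lambda f: tuple(f(a, b) for a, b in assignments)
neg = lambda x: 1 - x

# the 14 terms listed above
listed = [lambda a, b: 0, lambda a, b: 1,
          lambda a, b: a, lambda a, b: b,
          lambda a, b: neg(a), lambda a, b: neg(b),
          lambda a, b: a & b, lambda a, b: a & neg(b),
          lambda a, b: neg(a) & b, lambda a, b: neg(a) & neg(b),
          lambda a, b: a | b, lambda a, b: a | neg(b),
          lambda a, b: neg(a) | b, lambda a, b: neg(a) | neg(b)]

tables = {table(f) for f in listed}
print(len(tables))  # 14 -- all pairwise inequivalent

xor = table(lambda a, b: (a & neg(b)) | (neg(a) & b))
xnor = table(lambda a, b: (a & b) | (neg(a) & neg(b)))
print(xor in tables, xnor in tables)  # False False -- two more elements
```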

view this post on Zulip Julius Hamilton (Jul 25 2024 at 04:31):

I think there’s probably a better way to see what terms can and can’t be generated than just trying to list them all, but for now I’ll just say: I know we can also form (a \wedge b) \vee (\neg a \wedge \neg b), the number of elements is 16, and 2, 4, 16 indicates some pattern related to exponentials.

I understand there’s an idea that the number is equivalent to the number of distinct Boolean functions of n arguments, I believe, but I haven’t yet seen perfectly clearly how to draw that connection, which I want to think about tomorrow.

view this post on Zulip John Baez (Jul 25 2024 at 06:49):

Great! Yes, it's nice to build up the free boolean algebra on a set in "rounds" as you've done, and this can be useful for any sort of "free" algebraic gadget.

view this post on Zulip John Baez (Jul 25 2024 at 06:49):

And yes, the free boolean algebra on 2 elements has 16 elements. So you have

b_0 = 2, b_1 = 4, b_2 = 16

view this post on Zulip John Baez (Jul 25 2024 at 06:52):

Now: can you guess the pattern? E.g. can you guess what b_3 is? You said there's "some pattern related to exponentials", and that's right, but you can go out on a limb and say more than that. The more specific you dare to be, the more useful mental associations you will form. It's even good to guess the specific pattern: guessing patterns is a key mathematical skill, and you should never let your fear of guessing wrong paralyze you. The only way to build your pattern-matching skills is to exercise them a lot.

view this post on Zulip John Baez (Jul 25 2024 at 06:55):

The ultimate point of guessing this pattern is to get some deep structural insights in free boolean algebras: these insights will explain the pattern. But when you're thrust into a mathematical situation you don't fully understand, the patterns you see are often the keys to the insights you need to get.

view this post on Zulip Julius Hamilton (Jul 25 2024 at 15:43):

Here is the next round of thinking on the matter. I think I could benefit from any tiny hints or comments pointing out something I missed as I was thinking through this. I will keep in mind your advice to "make my guesses as specific as possible", going forward. I guess I can still try to make more specific guesses than I have so far. Thanks a lot!
boolalg4.pdf

view this post on Zulip John Baez (Jul 25 2024 at 18:08):

Okay, good - a guess: b_3 = 256. Let's see if that's right. Scribble scribble. Yes, it's right!

view this post on Zulip John Baez (Jul 25 2024 at 18:11):

So, what's interesting is that you said "each term is the square of the previous term". Yes, that's correct! But I never thought of it that way before. I just thought in terms of a formula for b_n. I think this formula may be more illuminating in terms of understanding the structure of the free boolean algebra on n elements.

But it would be interesting to see if you can get there starting from where you are. So:

Puzzle. Given a sequence defined recursively by b_0 = 2 and b_{n+1} = b_n^2, what is a simple closed-form formula for b_n?
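Unwinding the recursion (a worked solution; the closed form is confirmed a few posts below):

```latex
b_n = 2^{e_n}:\qquad e_0 = 1,\quad b_{n+1} = b_n^2 \implies e_{n+1} = 2e_n,
\quad\text{so } e_n = 2^n \text{ and } b_n = 2^{2^n}.
```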

view this post on Zulip John Baez (Jul 25 2024 at 18:14):

I see you also tried various ways to organize the elements in one of these free boolean algebras. There's a very nice way to organize them which makes it obvious that b_n = 2^{2^n}. In fact there are two. I don't think you're close to stating those.

view this post on Zulip John Baez (Jul 25 2024 at 18:16):

Let's see...

Puzzle. Does one of the operations \vee, \wedge distribute over the other? Can you use this to take all elements in the free boolean algebra on n generators and put them in a standard form?

view this post on Zulip John Baez (Jul 25 2024 at 18:20):

What I mean is something like this: in ordinary algebra, multiplication distributes over addition, and we can use this to take an arbitrary expression like

(x + x^2)(x^3 + x) + x^2(x + 7)

and rewrite it in a standard form:

x^5 + x^4 + 2x^3 + 8x^2

I may have done it wrong, but you get the idea: the distributive law brings order to what was previously a vast wilderness of expressions. We all learn this in school.

view this post on Zulip John Baez (Jul 25 2024 at 18:20):

So if \vee distributed over \wedge, or vice versa, you could do something like that for a boolean algebra.

view this post on Zulip Julius Hamilton (Jul 25 2024 at 23:24):

I'll return to this tomorrow when I can give it more in-depth treatment. Thanks! :)

view this post on Zulip Julius Hamilton (Jul 26 2024 at 22:39):

I'll work on this now. I wanted to say one thing that I just thought about though.

The axioms of a Boolean algebra, or another formula-formation rule set, do not generate a consistent logical theory. We wouldn't expect them to. In a Boolean algebra, every proposition's negation exists. A Boolean algebra could be seen as a way to generate all meaningful sentences; but it does not know how to generate sentences that follow logically from other sentences.

For that, we have a deduction system or deductive calculus. A simple deductive calculus might be as little as modus ponens and universal instantiation. But I read that then you need to add more axioms to the theory. But natural deduction creates a clean layer of separation between formulae and the actions you can take on them to construct new formulae. Natural deduction is like a machine that reads in logical connectives and has rules about what to do with them. From that perspective, a deductive system should provide a rule for every logical symbol. Or, perhaps there are incomplete deductive systems that allow us to rewrite some formulae and not others.

Here is what piques my interest.

The axioms of a Boolean algebra, seen as operations on a set of elements, are "unrestricted". They can be applied to all combinations of elements in the set. The same appears to be true for term algebras.

Not so with deductive systems. Their application is "conditional", and often takes the form, "If you have formula x, you may construct formula y." I am very interested to explore, next, what kind of algebraic structure we need to model the enumeration of theorems from axioms. (My guess: partial algebraic operations, which can be related to categories.)

view this post on Zulip Julius Hamilton (Jul 27 2024 at 00:41):

I can't claim to understand this well but it's the work of another day.

I think you can prove that for formulae a, b, c, ..., if you combine them with conjunction and negation, all of them are logically unique. In a boolean algebra, we often deal with logically equivalent formulae, but this is not the case for formulae like these: a \wedge \neg b \wedge c, \neg a \wedge b \wedge \neg c, etc.

This means that for n variables, if we start by generating strings of the above form, we are just choosing between positive or negative form for each variable in one of these "and" strings.

So for n variables, it just means 2 \times 2 \times 2 \ldots = 2^n. 2 choices, n times.

The second useful piece of information is that if you combine these "and"-strings with disjunction, you will also create logically unique terms. For any of the above formulae, like (a and not b and not c), (not a and not b and not c), etc, if you create ((a and not b and not c) or (not a and not b and not c)), this formula will not be logically equivalent to any other such formulae you can make from disjunctions on the previous "and" strings.

For that reason, we again are making "choices" with 2 options, for each element in our and-strings: include or exclude, in an "or"-string. So that means 2 to the power of however many "and" strings we made (which was 2^n). So we get 2^{2^n}.
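This two-stage counting argument can be checked directly for small n (a Python sketch, mine): each "or-string" is a subset of the 2^n and-strings, and distinct subsets give distinct truth tables.

```python
from itertools import product, combinations

n = 2
assignments = list(product([0, 1], repeat=n))  # 2^n of them, one per and-string

def table_of(subset):
    # the disjunction over a set of and-strings is true exactly on
    # the assignments those and-strings pick out
    return tuple(1 if a in subset else 0 for a in assignments)

subsets = [frozenset(c) for r in range(len(assignments) + 1)
           for c in combinations(assignments, r)]
tables = {table_of(s) for s in subsets}
print(len(subsets), len(tables))  # both 2^(2^n) = 16: no two subsets coincide
```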

I don't feel finished here. I think there are more intuitive ways of looking at this. But it's a start. Thank you very much.

view this post on Zulip Todd Trimble (Jul 27 2024 at 04:43):

The axioms of a Boolean algebra, or another formula-formation rule set, do not generate a consistent logical theory.

How so? Do you think you can prove \top \leq \bot from the axioms of a Boolean algebra?

(My point is that inconsistencies as such are assertions, not expressions.)

Here is a simple deductive system for Boolean algebra, suitable for classroom presentation and discussion. A, B, C etc. denote well-formed propositional expressions, constructed as usual from variables, constants \top, \bot, and the operations \wedge, \vee, \neg.

Convention: we write A \equiv B if A \leq B and B \leq A are derivable in this deductive system.

It's interesting and moderately challenging (but I think fun!) to derive the usual equations A \equiv B of Boolean algebra as valid inferences based on these axioms and rules. (Hard challenge: derive the distributive laws.)

But I am confident that you will never be able to derive an inconsistency \top \leq \bot.

view this post on Zulip John Baez (Jul 27 2024 at 10:37):

Julius Hamilton said:

I can't claim to understand this well but it's the work of another day.

I think you can prove that for formulae a, b, c, ..., if you combine them with conjunction and negation, all of them are logically unique. In a boolean algebra, we often deal with logically equivalent formulae, but this is not the case for formulae like these: a \wedge \neg b \wedge c, \neg a \wedge b \wedge \neg c, etc.

This means that for n variables, if we start by generating strings of the above form, we are just choosing between positive or negative form for each variable in one of these "and" strings.

So for n variables, it just means 2 \times 2 \times 2 \ldots = 2^n. 2 choices, n times.

Yes, very good!

The second useful piece of information is that if you combine these "and"-strings with disjunction, you will also create logically unique terms. For any of the above formulae, like (a and not b and not c), (not a and not b and not c), etc, if you create ((a and not b and not c) or (not a and not b and not c)), this formula will not be logically equivalent to any other such formulae you can make from disjunctions on the previous "and" strings.

For that reason, we again are making "choices" with 2 options, for each element in our and-strings: include or exclude, in an "or"-string. So that means 2 to the power of however many "and" strings we made (which was 2^n). So we get 2^{2^n}.

Excellent! This is a famous thing. It's called 'disjunctive normal form' and there's a Wikipedia article about it:

Can you prove that every element of the free boolean algebra on n elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.

view this post on Zulip John Baez (Jul 27 2024 at 10:38):

I don't feel finished here. I think there are more intuitive ways of looking at this. But it's a start. Thank you very much.

Sure! There are beautiful conceptual ways of thinking about why the free boolean algebra on n elements has 2^{2^n} elements. But disjunctive normal form is one very nice way.

view this post on Zulip Julius Hamilton (Jul 28 2024 at 14:10):

Todd Trimble said:

The axioms of a Boolean algebra, or another formula-formation rule set, do not generate a consistent logical theory.

How so? Do you think you can prove \top \leq \bot from the axioms of a Boolean algebra?

(My point is that inconsistencies as such are assertions, not expressions.)

Here is a simple deductive system for Boolean algebra, suitable for classroom presentation and discussion. A, B, C etc. denote well-formed propositional expressions, constructed as usual from variables, constants \top, \bot, and the operations \wedge, \vee, \neg.

Convention: we write A \equiv B if A \leq B and B \leq A are derivable in this deductive system.

It's interesting and moderately challenging (but I think fun!) to derive the usual equations A \equiv B of Boolean algebra as valid inferences based on these axioms and rules. (Hard challenge: derive the distributive laws.)

But I am confident that you will never be able to derive an inconsistency \top \leq \bot.

This seems like a really fun and perfect challenge for me.

I have some questions to make sure I grasp the situation well, first.

You don’t necessarily have to answer these questions, as just stating them aloud makes me start to think about them with more focus.

My first question is about the elimination rules for \vee. At first glance, I can’t see why \frac{A \vee B \leq C}{A \leq C} would be a good representation of the laws of disjunction.

My guess: the exercise may show me an interesting kind of “ordering” of truths. So that this rule is not saying that if from A \vee B one can conclude C, then from A one can conclude C. The order relation \leq may be referring to, for example, binary truth values! So that we literally mean to say that if A \vee B, evaluated to a truth value of 0 or 1, is \leq C, then A is also \leq C.

More to come soon.

view this post on Zulip Todd Trimble (Jul 28 2024 at 14:24):

So that this rule is not saying that if from A \vee B one can conclude C, then from A one can conclude C.

It is saying that. Think of it this way: we have A \leq A \vee B (A is stronger than "A or B"). If also A \vee B \leq C, then by transitivity of \leq or the cut rule, we may deduce A \leq C.

view this post on Zulip Julius Hamilton (Jul 28 2024 at 14:36):

Fascinating. I’ll get back to you on that. I also wanted to respond to John Baez for a moment:

John Baez said:

Can you prove that every element of the free boolean algebra on n elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.

I’ll also be warming up my thoughts on this exercise as well, now.

My first thought was a general proof that for any set of n-ary operations, if one imposes an arbitrary ordering on them, it determines a unique form for every element of the free algebra under them (or could we say their “algebraic closure”?). I would want to exclude anything here to do with infinity in an appropriate way, and focus on finite cases (finite sets of elements, operations of finite arity, etc.). Maybe this is not true. I also wonder if it depends on which ‘equational’ axioms we add, or if that does not affect this.

My thought was that if I could show that for a collection of operations, there exists at least one consistent way to write all the elements of the free algebra under them (which I think could be equivalent to an ordering on the operations), it induces a partition on the elements of the free algebra, and that if you can do this as a preliminary step before pursuing any questions such as counting the number of logically unique elements in this algebra, it makes the question much easier.

But you suggest a proof by induction, which sounds simpler and like a very good exercise for me.

Not asking for the answers, just digesting. Thanks!

view this post on Zulip John Baez (Jul 28 2024 at 16:07):

Julius Hamilton said:

Fascinating. I’ll get back to you on that. I also wanted to respond to John Baez for a moment:

John Baez said:

Can you prove that every element of the free boolean algebra on n elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.

I’ll also be warming up my thoughts on this exercise as well, now.

My first thought was a general proof that for any set of nn-ary operations, if one imposes an arbitrary ordering on them, it determines a unique form for every element of the free algebra under them (or could we say their “algebraic closure”?)

The problem in getting a normal form is the equational laws between operations, e.g. a¬aa \wedge \neg a and 00 (false) and a0a \wedge 0 and a(b¬b)a \wedge (b \wedge \neg b) and so on are all equal. You could pick the first one in alphabetical order (given an ordering of the symbols) as your "normal form" - maybe that's what you meant. But this is not a practical system, because then to determine the normal form of some element you need to list all elements alphabetically, for each one determine whether it's equal to the one you're interested in - which is, in itself, not an easy thing to do - and then choose the first one that is.

A practical normal form takes advantage of the equational laws to come up with an algorithm to reduce an arbitrary element of the free algebra to its normal form. I briefly sketched how we use the distributive law relating ×\times and ++ to reduce any polynomial to its normal form. We can do something quite similar to reduce any element of the free boolean algebra on nn elements to its disjunctive normal form. That's why I asked you if \wedge distributes over \vee or vice versa. You also need to think about the laws relating these operations to ¬\neg.

view this post on Zulip John Baez (Jul 28 2024 at 16:18):

Maybe it'll help if I do an easy example of reducing something to disjunctive normal form. Let's start with

a¬(bc) a \wedge \neg(b \wedge c)

Then using the rules of a boolean algebra we get

a¬(bc)=a(¬b¬c)=(a¬b)(a¬c) a \wedge \neg(b \wedge c) = a \wedge (\neg b \vee \neg c) = (a \wedge \neg b) \vee (a \wedge \neg c)

and we're done! It's not always this easy.
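A quick way to sanity-check a rewriting step like the distribution above is to compare truth tables; here is a minimal Python sketch (my own, not part of the thread's deductive machinery), checking the step a(¬b¬c)=(a¬b)(a¬c)a \wedge (\neg b \vee \neg c) = (a \wedge \neg b) \vee (a \wedge \neg c):

```python
from itertools import product

# Two formulas are equal in the free Boolean algebra iff they agree on
# every truth assignment to their generators.
def equivalent(f, g, n):
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n))

# The distribution step: a & (~b | ~c)  ==  (a & ~b) | (a & ~c)
assert equivalent(lambda a, b, c: a and (not b or not c),
                  lambda a, b, c: (a and not b) or (a and not c), 3)
print("distribution step checked")
```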

view this post on Zulip Julius Hamilton (Jul 28 2024 at 18:37):

I can see an algorithm of this form.

It can check the type of an element; and it can also construct elements of a type (so maybe Haskell is somewhat better for this than Python). The types are atomic, negation, conjunction, and disjunction. Also, an element "contains" elements. For example, "¬(ab)\neg (a \vee b)" is of type negation, and contains element "aba \vee b". The element "aba \vee b" is of type "disjunction", and contains elements aa and bb.

The reduce-algorithm has to check for a condition between the type of an element and the type of its element(s). My hope is that a Boolean algebra can be normalized with only a conditional check of "depth 1" (one below the top level, call it depth 2 if you like for 2 levels).

reduce(a):

    if a is of type negation:
        b = elements(a)

        # double negation elimination
        if b is of type negation:
            c = elements(b)
            return reduce(c)

        # de morgan's law for disjunction
        if b is of type disjunction:
            c, d = elements(b)
            return conjunction(reduce(negation(c)), reduce(negation(d)))

        # de morgan's law for conjunction
        if b is of type conjunction:
            c, d = elements(b)
            return disjunction(reduce(negation(c)), reduce(negation(d)))

    if a is of type conjunction:
        p, q = elements(a)

        # idempotence
        if p == q:
            return reduce(p)

        # left identity
        if p == 1:
            return reduce(q)

        # right identity
        if q == 1:
            return reduce(p)

        # left complement
        if negation(p) == q:
            return 0

        # right complement
        if negation(q) == p:
            return 0

        # distribution of conjunction over disjunction,
        # applied to whichever argument is a disjunction
        if p is of type disjunction:
            c, d = elements(p)
            return disjunction(reduce(conjunction(c, q)), reduce(conjunction(d, q)))
        if q is of type disjunction:
            c, d = elements(q)
            return disjunction(reduce(conjunction(p, c)), reduce(conjunction(p, d)))

and so on. One can imagine nearly the same rules for disjunction, except we will not be using distributivity, since we do not want to distribute "or" over "and". Lastly, the program can check whether an element is atomic, and return that element:

if a is of type atomic:
    return a

This is just a starter example. The key appears to be that, fortunately, all of the rules concern relationships between at most 2 successive applications of operations: "not (a and b)" has an associated transformation, "not (not a)" has an associated transformation, "a and (b or c)" has an associated transformation; we never have to check for a certain pattern of 3 operations (I think).

I think one way we can prove that this works is as follows. For any of the above rules, we might worry: what if a is of type conjunction, and the algorithm calls reduce on both of its arguments, but what it returns requires further simplification? My hope is that the algorithm always returns a normal form, and the key property of a normal form is that any of the constructions in the algorithm, applied to normal forms, yields a normal form. For example, if a is the negation of a negation of a normal form, then removing the double negation returns a normal form. If a is a negation of a conjunction of two normal forms, returning the disjunction of the negations of both is a normal form. This is only true on the precondition that both of the arguments of the original conjunction are already normal. So we can probably "type check" the algorithm to prove that if it receives a construction that is two operations applied to normal forms, it is guaranteed to return a normal form. And maybe this can be reflected algebraically: try to convert the free Boolean algebra into the "algebra of normals", or something.

Some details will need to be worked out to make the algorithm actually work, but I'm quite excited to work on that.

I'll clean this up and make it more presentable. My intention was to explore this question in a much more mathematical way, but first, I did decide to explore presenting normalization as an actual algorithm. Thanks very much!

view this post on Zulip John Baez (Jul 29 2024 at 11:19):

John Baez said:

Maybe it'll help if I do an easy example of reducing something to disjunctive normal form. Let's start with

a¬(bc) a \wedge \neg(b \wedge c)

Then using the rules of a boolean algebra we get

a¬(bc)=a(¬b¬c)=(a¬b)(a¬c) a \wedge \neg(b \wedge c) = a \wedge (\neg b \vee \neg c) = (a \wedge \neg b) \vee (a \wedge \neg c)

and we're done!

Actually we're not done, since for the free boolean algebra on three elements, disjunctive normal forms are disjunctions of conjunctions of all three generators or their negations.

view this post on Zulip John Baez (Jul 29 2024 at 11:21):

So, I need to use

a¬b=(a¬bc)(a¬b¬c)a \wedge \neg b = (a \wedge \neg b \wedge c) \vee (a \wedge \neg b \wedge \neg c)

and

a¬c=(ab¬c)(a¬b¬c) a \wedge \neg c = (a \wedge b \wedge \neg c) \vee (a \wedge \neg b \wedge \neg c)

to expand

(a¬b)(a¬c) (a \wedge \neg b) \vee (a \wedge \neg c)

to its disjunctive normal form:

(a¬bc)(a¬b¬c)(ab¬c) (a \wedge \neg b \wedge c) \vee (a \wedge \neg b \wedge \neg c) \vee (a \wedge b \wedge \neg c)

(the conjunct a \wedge \neg b \wedge \neg c appears in both expansions, so by idempotence we list it once).
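This minterm expansion can be mechanized. Here is a small Python sketch (my own, not from the thread) that reads off one full conjunction per satisfying truth assignment, applied to the running example a¬(bc)a \wedge \neg(b \wedge c):

```python
from itertools import product

# Truth-table DNF: every satisfying assignment contributes one full
# conjunction ("minterm") over all the variables.
def dnf(f, names):
    rows = [v for v in product([False, True], repeat=len(names)) if f(*v)]
    return " v ".join(
        "(" + " & ".join(n if b else "~" + n for n, b in zip(names, v)) + ")"
        for v in rows
    )

print(dnf(lambda a, b, c: a and not (b and c), ["a", "b", "c"]))
# -> (a & ~b & ~c) v (a & ~b & c) v (a & b & ~c)
```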

view this post on Zulip John Baez (Jul 29 2024 at 11:36):

Julius Hamilton said:

Some details will need to be worked out to make the algorithm actually work, but I'm quite excited to work on that.

Ill clean this up and make it more presentable. My intention was to explore this question in a much more mathematical way, but first, I did decide to explore presenting normalization as an actual algorithm. Thanks very much!

Sure! It should be a fun project. There is a mathematical subject of "rewriting systems" for reducing expressions to normal forms, which one could in theory apply to this project. However, I'm not exactly sure how it applies to this particular example, and sometimes it's good to just dive in and do stuff and ignore the 3000-year weight of already explored mathematics. If you want to see a bit of the theory, there's this:

and the connected article "Normal form". It's definitely fun stuff.

view this post on Zulip Ryan Wisnesky (Jul 29 2024 at 14:55):

depending on what exactly is meant by a 'rewriting system' (does it bake in commutativity, etc), there may not exist one for boolean algebras: https://link.springer.com/chapter/10.1007/3-540-53904-2_102 . This manifests for us in CQL directly, where certain axiomatizations of boolean algebras are 'too symmetric' for our theorem prover to handle, so we use boolean rings instead, which are easier for re-writing techniques to handle.

view this post on Zulip John Baez (Jul 29 2024 at 15:03):

One thing about 'disjunctive normal form' is that due to the commutativity of \vee and \wedge it's not really unique, e.g. I might say (a¬b)(¬ab)(a \wedge \neg b) \vee (\neg a \wedge b) but you might say (b¬a)(a¬b)(b \wedge \neg a) \vee (a \wedge \neg b) . This feels like a petty issue, but what (little) I know about "terminating confluent rewriting systems" is that they lead to a unique normal form, not unique up to some rather boring symmetries. I suspect this just means I haven't read enough, since it seems like an issue that would show up in many other examples.
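One way to get an on-the-nose unique representative is to quotient these symmetries away in the data structure itself. A sketch (my own encoding, not from the thread): represent a disjunctive normal form as a set of sets of literals, so the commutativity of \wedge and \vee disappears into the unorderedness of sets.

```python
# A DNF as a frozenset of frozensets of literals; literals are strings
# like "a" or "~a". Since sets are unordered, two DNFs that differ only
# by commutativity compare equal automatically.
def clause(*literals):
    return frozenset(literals)

mine  = frozenset({clause("a", "~b"), clause("~a", "b")})
yours = frozenset({clause("b", "~a"), clause("a", "~b")})
assert mine == yours   # the two "different" normal forms are the same element
print("equal modulo commutativity")
```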

view this post on Zulip John Baez (Jul 29 2024 at 15:06):

Is that reference you pointed me to complaining about these symmetries? I can only see the first page, but sounds like that's what they're complaining about, since they blame "commutativity axioms" for the lack of a terminating set of rewrite rules.

It seems someone should have defined termination and confluence up to a collection of reversible rewrites, meaning that 1) if you run your rewriting long enough, all you can do is a certain specified collection of reversible rewrites, and 2) the result is then unique up to those reversible rewrites.

view this post on Zulip Todd Trimble (Jul 29 2024 at 15:08):

Quoting from back here:

This seems like a really fun and perfect challenge for me.

Good! Here are some exercises to get you started, roughly in order of increasing size/difficulty:

Once you have these, then you have their dual statements as well. This is because the dual of an axiom is also an axiom, and the dual of an application of a rule of inference is an application of another rule of inference. Therefore you can automatically dualize any derivation in this deductive system.

But perhaps I'm getting ahead of myself: first we should define the dual of a statement. Informally, the dual of an expression is obtained by swapping all instances of \wedge with \vee, swapping all instances of \top with \bot, and leaving all instances of ¬\neg alone. Letting AA' denote the dual of AA, the dual of an assertion ABA \leq B is defined to be BAB' \leq A'. The dual of an assertion ABA \equiv B is ABA' \equiv B'. Now I can leave it to you to check that the preceding paragraph is indeed correct, if you'll allow me please to make one small amendment to the rules I gave for ¬\neg,

ABCA¬BCABC¬CAB\frac{A \wedge B \leq C}{A \leq \neg B \vee C}\qquad \frac{A \leq B \vee C}{\neg C \wedge A \leq B},

and to clarify that for those rules with two premises (cut rule, and the first rules given for \wedge and \vee), the order of those premises doesn't matter. Read all the rules as "given premise(s) blah, one may conclude di-blah". Read axioms as "from nothing, conclude blah". All derivations in the system start from nothing and eventually conclude something.

Anyway, what I was trying to say before is that by the principle of duality, we have (automatically) that \vee is also idempotent, commutative, associative, and has \bot for its unit.

view this post on Zulip Ryan Wisnesky (Jul 29 2024 at 15:15):

re: @John Baez that is correct, non-uniqueness w.r.t commutativity or other rules (associativity) is a huge technical problem for re-writing systems, still studied today, and your suggested approach is sometimes taken. it isn't a panacea though; the linked paper shows it doesn't work for (a particular presentation of) boolean algebra (that's what "modulo AC" means). here's a paper formalizing the idea of 'rewriting modulo equations' https://core.ac.uk/download/pdf/82774966.pdf that describes some of the issues touched on here

view this post on Zulip John Baez (Jul 29 2024 at 15:23):

Thanks! I knew "AC" couldn't mean "axiom of choice" here, but I didn't realize it meant "associativity and commutativity". I hate acronyms.

(I think I see the problem with associativity. At the end of your rewriting you can use associativity to arbitrarily push all the parentheses to the left, and say that's your desired form. There's nothing like this for commutativity. But unfortunately, at earlier stages of your rewriting you typically need to push some parentheses to the right sometimes, so you can't only include the rule a(bc)(ab)ca(bc) \to (ab)c.)

view this post on Zulip Julius Hamilton (Jul 30 2024 at 14:36):

All of the above stuff is awesome. I am under a little bit of time constraints and stress right now regarding some other things, so I am going to come back to all of the above within the next few days. I really appreciate everyone's input. Thanks very much.

view this post on Zulip Julius Hamilton (Aug 01 2024 at 01:35):

I’m a bit tired and I’ll just try to summarize some points from the above that will help me, maybe tomorrow, to figure out what to work on next.

The axioms of a Boolean algebra, or another formula-formation rule set, do not generate a consistent logical theory.

How so? Do you think you can prove \top \leq \bot from the axioms of a Boolean algebra?

(My point is that inconsistencies as such are assertions, not expressions.)

So if I tried to prove \top \leq \bot from the axioms of a Boolean algebra, I guess I would interpret the axioms like deduction rules. And now it seems unlikely that rules like double negation elimination or de Morgan’s law could achieve this. Now I see the axioms as actually corresponding to logically valid inferences, like the idempotence of “and”. So I wonder, was the only reason I thought it created an inconsistent theory because I was thinking about how when we freely generate terms, we form the negation of every variable? I’m thinking maybe your point was that it doesn’t make sense to say a Boolean algebra “forms an inconsistent theory” because there are no deduction rules regarding what is “provable”?

Here is a simple deductive system for Boolean algebra, suitable for classroom presentation and discussion. A,B,CA, B, C etc. denote well-formed propositional expressions, constructed as usual from variables, constants ,\top, \bot, and the operations ,,¬\wedge, \vee, \neg.

Just curious if \leq is a common choice of symbol for “deduction”, “consequence” or “proves”?

Now the cut rule I think I should learn more about. I’m curious how deductive systems that do or do not have it differ. The cut rule seems to me to just be “associative composition of proofs”. However, I have lately been thinking about the difference between a “provability” relation vs. a “direct deduction” relation. If from AA we can deduce BB by a direct application of one of the deduction rules, and from BB we can deduce CC, it doesn’t follow that from AA we can deduce CC by a single application of a deduction rule. Presumably, we would need to apply a deduction rule twice. So, I have been thinking about this in wanting to create categories of propositions with Hom-sets for proofs. Perhaps each morphism can represent a sequence of deduction rules applied, for example.

I don’t think I’ve seen a rule like this in a deductive system before. Perhaps it always follows, in a consistent theory? \top and \bot are strange concepts to me; I feel like they don’t map well to any natural language concept.

What I still haven’t worked out in my thinking about Boolean algebras is that initially, we say the variables represent formulae, which can have a value of true or false. From that perspective, it seems like it doesn’t make sense to include top and bottom in the set of elements; they aren’t formulae, and top can never evaluate to false. On the other hand, I’ve been thinking about how the elements can actually be seen as Boolean operators, rather than formulae. If this is correct, top and bottom make sense; they are Boolean operations. But then I have to rethink how to interpret the variables. If we say they represent Boolean operations, then the question is how we got from interpreting them as formulae to interpreting them as Boolean functions.

Rules for \wedge: AB          ACABCABCABABCAC\frac{A \leq B\;\;\;\;\; A \leq C}{A \leq B \wedge C} \qquad \frac{A \leq B \wedge C}{A \leq B} \qquad \frac{A \leq B \wedge C}{A \leq C}

These make sense.

ABCACABCBC \frac{A \vee B \leq C}{A \leq C} \qquad \frac{A \vee B \leq C}{B \leq C}

Yeah, now it makes sense to me. Earlier I was confused. But maybe there is a valid ambiguity in natural language here, at least? If I say “A or B proves C”, it might sound like an exclusive or, I guess.

This is cool. It seems like a way to rewrite material implication.

ABCA¬BC\frac{A \leq B \vee C}{A \wedge \neg B \leq C}

Convention: we write ABA \equiv B if ABA \leq B and BAB \leq A are derivable in this deductive system.

It's interesting and moderately challenging (but I think fun!) to derive the usual equations ABA \equiv B of Boolean algebra as valid inferences based on these axioms and rules.

I’ll have to do this tomorrow. For now I’m summarizing: we have a deductive system with a rule for assumption, cut (or, “composition”), “true and false”, and then introduction and elimination rules for ‘and’, ‘or’, and ‘not’. ‘And’ is the most obvious: from a and b, you can derive a, and you can derive b. If “A or B” proves C, you can derive that A proves C and that B proves C. This makes me wonder why these rules have to be expressed in the form “something proves something”, rather than just, say, “something is true”. We could just say, “ABAA \wedge B \leq A”.

(Hard challenge: derive the distributive laws.)

Noted.

But I am confident that you will never be able to derive an inconsistency \top \leq \bot.

So all in all, I think you are trying to show me how, viewed as deduction rules, the axioms of a Boolean algebra are not actually a “free for all”. In which case, even if the theory appears “inconsistent” due to all formulas coexisting with their negation, there is still a certain kind of logical ordering on the formulae, which I think you refer to as their “strength”.

view this post on Zulip John Baez (Aug 01 2024 at 06:24):

\top and \bot are strange concepts to me; I feel like they don’t map well to any natural language concept.

They're called 'true' and 'false'.

view this post on Zulip John Baez (Aug 01 2024 at 06:25):

Mathematicians read this as "anything implies true" or, a bit less tersely, "any statement implies a true statement".

view this post on Zulip John Baez (Aug 01 2024 at 06:28):

There is a dual axiom, "false implies anything" or "a false statement implies anything". This axiom is rejected constructively since it's related to proof by contradiction. But it's valid in a boolean algebra.

view this post on Zulip Graham Manuell (Aug 01 2024 at 06:31):

John Baez said:

There is a dual axiom, "false implies anything" or "a false statement implies anything". This axiom is rejected constructively since it's related to proof by contradiction. But it's valid in a boolean algebra.

No. This axiom does hold constructively and has nothing to do with proof by contradiction.

view this post on Zulip John Baez (Aug 01 2024 at 07:08):

Whoops. I'm sorry, Julius, the problem with deducing "p and not p implies anything" in constructive logic is not the step "false implies anything" but rather "p and not p implies false". But never mind, we're talking about boolean algebras and I should not have digressed into a separate topic I don't really understand.

view this post on Zulip Nathan Corbyn (Aug 01 2024 at 07:41):

John Baez said:

Whoops. I'm sorry, Julius, the problem with deducing "p and not p implies anything" in constructive logic is not the step "false implies anything" but rather "p and not p implies false". But never mind, we're talking about boolean algebras and I should not have digressed into a separate topic I don't really understand.

This is also incorrect, P¬PP \wedge \neg P \Rightarrow \bot is also true constructively.

view this post on Zulip Nathan Corbyn (Aug 01 2024 at 07:55):

Obtaining a contradiction and using this to prove anything is not what mathematicians do when they perform ‘proof by contradiction’. ‘Proof by contradiction’ proceeds by assuming the negation of a statement, deriving falsity and concluding the validity of that statement. In other words, ‘proof by contradiction’ has the following type:

(¬P)P(\neg P \Rightarrow \bot) \Rightarrow P

Recalling that, in constructive logic, we define ¬P=P\neg P = P \Rightarrow \bot, this reduces to:

¬¬PP\neg\neg P \Rightarrow P

This is better known as double negation elimination.

view this post on Zulip John Baez (Aug 01 2024 at 09:31):

Okay, it's a wonder I can prove theorems. I really should have stuck to talking about boolean algebras.

view this post on Zulip John Baez (Aug 01 2024 at 10:07):

I'm not quite sure why I made two such stupid mistakes in a row. I've studied logic so I should know better. I could say it's because I'm under stress, but that doesn't feel like the right explanation. I definitely find myself completely forgetting things more and more often as I age, and it could be that I'll start making more and more completely stupid math mistakes and this is just a first sign of that decline. Or maybe it's a random blip. Oh well.

view this post on Zulip Graham Manuell (Aug 01 2024 at 10:46):

I think many people who haven't fully internalised constructive logic could have made such an error, so I wouldn't beat yourself up about it!

view this post on Zulip John Baez (Aug 01 2024 at 12:05):

Thanks.

view this post on Zulip Julius Hamilton (Aug 01 2024 at 15:00):

So, some baby steps.

My goal will be to deduce the axioms of Boolean algebra using Todd’s deductive system.

I will first explore any basic deductions that can be made, using these rules:

  1. AA\frac{}{A \leq A}
  2. ABBCAC\frac{A \leq B \quad B \leq C}{A \leq C}
  3. A\frac{}{A \leq \top}
  4. A\frac{}{\bot \leq A}
  5. ABACABC\frac{A \leq B \quad A \leq C}{A \leq B \wedge C}
  6. ACBCABC\frac{A \leq C \quad B \leq C}{A \vee B \leq C}
  7. ABCA¬BC\frac{A \wedge B \leq C}{A \leq \neg B \vee C}
  8. plus the elimination rules I will add in a bit.

My first thought is to identify what terms can be generated, at all. I’ll go through each rule one by one.

I find myself wondering if I can substitute one of the “sequents” into a variable AA? No, I don’t think so.

Let’s say we have one variable / formula AA. Then by (1), we have AAA \leq A. By (3) and (4), we have AA \leq \top and A\bot \leq A. Using (2), we have AA\frac{\bot \leq A \quad A \leq \top}{\bot \leq \top}.
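The composition step can even be mechanized. A toy Python encoding (entirely illustrative, my own names): sequents are pairs (A, B) meaning A ≤ B, axioms produce sequents, and the cut/transitivity rule combines them.

```python
# Sequents are pairs; "T" stands for top and "F" for bottom.
def axiom_refl(a):            # rule (1): A <= A
    return (a, a)

def axiom_top(a):             # rule (3): A <= T
    return (a, "T")

def axiom_bot(a):             # rule (4): F <= A
    return ("F", a)

def cut(s, t):                # rule (2): from A <= B and B <= C, conclude A <= C
    assert s[1] == t[0], "middle formulas must match"
    return (s[0], t[1])

# F <= A and A <= T compose to F <= T, i.e. bot <= top.
print(cut(axiom_bot("A"), axiom_top("A")))   # -> ('F', 'T')
```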

The way is paved for me - in my next message I will derive the next round of deductions.


Question 1: What would organically motivate one to set up a proof system where instead of saying “A proves BA \text{ proves }B”, we use a “double-proof” system where we say things of the form “If A proves BA \text{ proves }B, then you can deduce C proves DC \text{ proves } D”?

Question 2: Why should we include the axioms A\frac{}{A \leq \top} and A\frac{}{\bot \leq A}? Are they intuitively justifiable, or do we notice they are mathematically necessary? What deduction systems allow us to remove these rules?

Question 3: How can we model this deduction system as an algebraic structure?

view this post on Zulip Todd Trimble (Aug 01 2024 at 16:50):

Answer to Question 3: the free Boolean algebra (B(S),,,,,,¬)(\mathbf{B}(S), \leq, \bot, \top, \wedge, \vee, \neg) on a set of variables SS is the set of \equiv-equivalence classes of propositional formulas generated by SS, where those formulas are pre-ordered by the smallest relation \leq that validates the axioms and rules of inference.

view this post on Zulip Todd Trimble (Aug 01 2024 at 16:51):

Answer to Question 2: they just express the defining universal properties of the elements top \top ("truth") and bottom \bot ("falsity") in a poset. The usual notion of Boolean algebra includes a top and bottom, and so in that sense these are necessary. If you think of ABA \leq B as saying that BB is true under all circumstances where AA is true, then of course AA \leq \top tautologically, and A\bot \leq A vacuously (because there are no circumstances where \bot is true, this statement can never be falsified).

You can remove any rule you like, and you'll get a weaker or less expressive system. Some people might want to study Boolean algebras either without a top or without a bottom, but here in this thread we're not.

Likewise, all the rules of inference express the defining universal properties of the operations ,,¬\wedge, \vee, \neg for Boolean algebras.

view this post on Zulip Todd Trimble (Aug 01 2024 at 16:51):

Answer to Question 1: Deductive systems specify how to validly construct proofs. How can we say when a proof is valid without giving rules for that? A proof is a sequence of individual entailments (which I am writing as ABA \leq B) where each entailment in the sequence is a logical consequence of ones coming previously. All proofs are like that. Deductive systems specify the meaning of "is a logical consequence of".

view this post on Zulip Jean-Baptiste Vienney (Aug 01 2024 at 22:17):

In addition to Todd's answer to Question 1: there is a way of encoding proofs which is called natural deduction. In natural deduction you start with hypotheses at the top and then you deduce a concluding formula by using inference rules. There isn't a symbol \vdash, only the horizontal bars. You work with a tree of formulas, not of sequents. I don't know much about natural deduction but this is something well-known and apparently important.

view this post on Zulip Jean-Baptiste Vienney (Aug 01 2024 at 22:23):

I think the other style of writing proofs, which is more common today and is named sequent calculus, has been introduced in order to obtain a theorem named cut elimination. But I'm not sure about the history.

view this post on Zulip Jean-Baptiste Vienney (Aug 01 2024 at 22:26):

By the way, Todd's system is a kind of mix of these two things: he uses the inference rules of natural deduction, but encoded in a sequent calculus way. I think a lot of people do that also.

view this post on Zulip Todd Trimble (Aug 01 2024 at 23:59):

Right, the deductive system I gave is a little bit of a hybrid system. It's not designed with things like cut elimination in mind. What it does have in mind, pedagogically, is the characterization of the connectives in terms of their universal properties, as I already said, and as long emphasized by Lawvere.

view this post on Zulip Julius Hamilton (Aug 02 2024 at 13:59):

I just felt like mentioning:

"Enderton", "Mendelson": Both 3 syllables; first and last syllables rhyme, middle syllables differ by one letter.

"A Mathematical Introduction to Logic", "Introduction to Mathematical Logic" are nearly permutations of the same words.

:face_with_raised_eyebrow:

view this post on Zulip John Baez (Aug 02 2024 at 14:22):

And don't forget "A Logical Introduction to Mathematics" by Endelson.

view this post on Zulip Julius Hamilton (Aug 02 2024 at 14:36):

Haha, very nice!!

view this post on Zulip John Baez (Aug 06 2024 at 15:32):

Julius Hamilton said:

I think that the elements of the algebra have an ordering - maybe a partial order - and maybe a Boolean algebra is, or is close to, a lattice of formulae.

Yes, in fact the nLab defines a [[Boolean algebra]] to be a lattice with certain properties.

view this post on Zulip John Baez (Aug 06 2024 at 15:35):

I want to understand what this ordering “means”.

First of all, what is the ordering? There are various equivalent ways to define it. One is this:

We say pq p \le q if and only if pq=pp \wedge q = p.

Another is this:

We say pqp \le q if and only if pq=qp \vee q = q

view this post on Zulip John Baez (Aug 06 2024 at 15:36):

It's a good exercise to start with whatever axioms for a Boolean algebra you know, and show that these two definitions are equivalent, and then show that with either one of them a Boolean algebra becomes a poset (and in fact a [[lattice]]) with this definition.
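As a warm-up to that exercise, one can at least verify that the two definitions agree in a concrete Boolean algebra. A Python sketch over the subsets of a three-element set (a finite check in one model, not a proof from the axioms):

```python
from itertools import combinations

# The Boolean algebra of subsets of {0, 1, 2}: meet is intersection,
# join is union, and the order should come out as inclusion.
U = (0, 1, 2)
subsets = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

for p in subsets:
    for q in subsets:
        meet_def = (p & q == p)   # p <= q  iff  p AND q = p
        join_def = (p | q == q)   # p <= q  iff  p OR q = q
        assert meet_def == join_def == (p <= q)   # both coincide with inclusion

print("definitions agree on all", len(subsets) ** 2, "pairs")
```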

view this post on Zulip John Baez (Aug 06 2024 at 15:37):

But as for what pqp \le q means, it means that pp implies qq.

view this post on Zulip John Baez (Aug 06 2024 at 15:41):

So, it's fundamental to the uses of Boolean algebras in propositional logic! When we have a Boolean algebra whose elements we call "propositions", we not only know how to do the "and" (\wedge), "or" (\vee) and "not" (¬\neg) of propositions, we also can tell when one proposition implies another.

view this post on Zulip John Baez (Aug 06 2024 at 15:44):

John Baez showed me (I believe) that, to my surprise, a free Boolean algebra on nn generators is actually a finite set.

By the way, you might still want to prove that the free Boolean algebra on nn generators has 22n2^{2^n} elements. You made a lot of progress in that direction, but you didn't show us a proof. This is mainly worthwhile in case you want to practice proving things, since the result is well-known.

One way to prove it is to show that each element in the free Boolean algebra on nn generators is equal to a unique element in disjunctive normal form (where we don't distinguish disjunctive normal forms that differ only by certain permutation symmetries). We talked about that but didn't get to the bottom of it.
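The counting step can be sketched in a few lines of Python (my own sketch, assuming the unique-DNF result mentioned above): an element corresponds to a set of minterms, i.e. full conjunctions of all nn generators or their negations.

```python
from itertools import product

# One minterm per truth assignment to the n generators: 2**n minterms,
# hence 2**(2**n) sets of minterms, one per element of the free algebra.
n = 2
minterms = list(product([False, True], repeat=n))
assert len(minterms) == 2 ** n          # 4 minterms for n = 2
num_elements = 2 ** len(minterms)       # one element per subset of minterms
print(num_elements)                     # -> 16, i.e. 2^(2^2)
```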

view this post on Zulip John Baez (Aug 06 2024 at 15:46):

Another thing we never got to is conjunctive normal form.

I kept trying to get you to tell me that \vee distributes over \wedge and \wedge also distributes over \vee. You never came out and said it. So now I've said it. :stuck_out_tongue_wink: But the point is: using these distributive laws, not only do we get a disjunctive normal form for elements of a free Boolean algebra, we also get a conjunctive normal form, where the roles of \vee and \wedge are switched.

view this post on Zulip John Baez (Aug 06 2024 at 15:50):

I'm mentioning all these things because you're talking about what you want to do next with Boolean algebras, and these are the things I was trying to get you to do. I don't actually care much whether you do these things - but I figured I should mention them before I forget.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 14:37):

Sure. Today I’ll get back to the concrete side of things. I need to summarize some things that have been said in this thread. One moment.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 14:43):

I might just write out all the questions that come to mind reading through the above thread.

  1. I have constantly felt like I need to develop my competence in combinatorics better. Even with a simple observation like “if ni+1=ni2n_{i + 1} = n_{i}^2, then ni=n2in_i = n^{2^i}”, I often have the feeling that there’s a “simple” explanation but that I am missing the “big picture”. I will elaborate on this after this message.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 14:56):

  1. Can you prove that every element of the free boolean algebra on nn elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.

I think the key idea here is to show that any “element of a free Boolean algebra” (which could also be thought of syntactically as a word/sentence generated by a certain grammar) not only has a “normal form” (if we define a “normal form” as a certain property that expressions can have), but that the assignment of normal forms has to be, at minimum, functional, not relational (I think). That means that a mapping (well, technically a homomorphism, I think) from F(n)F(n) (the free Boolean algebra on nn variables) to an algebra of “normal forms” (regardless of which one - so far a “normal form” is just a function on the elements of F(n)F(n)) cannot assign 2 forms to the same element xF(n)x \in F(n), and also that the normal form transformation must be applicable to every xF(n)x \in F(n). I would like to think about counterexamples of this to drive the point home.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 14:58):

It makes me want to analyze “classes of mappings or relations” on F(n)F(n) in terms of properties.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:07):

For example, consider an ordering on properties of binary relations, if possible.

Let’s assume there are no other relation symbols but RR (even excluding equality for simplicity’s sake).

We can choose “enumeration axes”: the number of distinct variables, for example.

  1. \emptyset
  2. {xRx}\{xRx\} - if this is the sole atomic formula for this signature in “single-variable first-order logic”, then I believe I can apply what I know about Boolean algebras here. We can form composite formulae such as xRxxRxxRx \rightarrow xRx, xRx¬xRxxRx \vee \neg xRx, but they will all collect into 4 congruence classes modulo “truth” (so to speak): statements which are tautological (1=1 = \top), statements which are contradictory (0=0 = \bot), and statements whose truth value is the same as the truth value of ϕ=xRx\phi = xRx and ¬ϕ=¬xRx\neg \phi = \neg xRx (an example of the latter is the axiom of foundation in set theory, x[xx]\forall x [x \notin x].)
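This collapse into four classes is small enough to check by brute force. A minimal sketch in Python (the tuple encoding of formulas is my own, not from the thread): enumerate depth-bounded formulas over a single atom and group them by their truth function.

```python
# Enumerate formulas built from one atomic formula "phi" using not/and/or,
# up to a small depth, then count the distinct truth functions they induce.
def formulas(depth):
    if depth == 0:
        return {"phi"}
    smaller = formulas(depth - 1)
    out = set(smaller)
    for a in smaller:
        out.add(("not", a))
        for b in smaller:
            out.add(("and", a, b))
            out.add(("or", a, b))
    return out

def evaluate(f, val):
    if f == "phi":
        return val
    if f[0] == "not":
        return not evaluate(f[1], val)
    if f[0] == "and":
        return evaluate(f[1], val) and evaluate(f[2], val)
    return evaluate(f[1], val) or evaluate(f[2], val)

# Each formula's truth function is its value at phi=False and phi=True.
tables = {(evaluate(f, False), evaluate(f, True)) for f in formulas(2)}
print(len(tables))  # 4: bottom, phi, not phi, top
```

Only four functions of one Boolean variable exist, so every deeper formula lands in one of these four classes.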

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:11):

By knowing rules for generating logically equivalent formulae from a given formula, I can rewrite such axioms in a form I might prefer. I’d want to see rules for transformations for the axiom x[¬xx]\forall x[\neg x \in x], to see if any of them strike me as an interesting, alternative way of looking at what that condition is saying.

However, I realize that Boolean algebras are for propositional logic, not predicate logic, so I have to backtrack on the above.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:13):

I was trying to enumerate properties of binary relations, so that I could study what a relation on F(n)F(n) and some other set would be like, depending on the properties of that relation.

For example, if $xRx$ for all $x \in X$, we say the relation $R \subseteq X \times X$ is reflexive. And if $\neg xRx$ for all $x \in X$, it is called irreflexive (sometimes "antireflexive").

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:16):

If a relation is reflexive, it has to be defined on the product of the set with itself: RX×XR \subseteq X \times X.

I want to make use of this observation later.

If we have 2 variables, I now know that there should be precisely 16 different possible binary relations on $\{x, y\}$:

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:21):

The atomic formulae are generated by the number of choices for each argument of $R$ from the set $\{x, y\}$. That's $2 \times 2 = 2^2$. The atomic formulae are $\{xRx, xRy, yRx, yRy\}$. From these 4 atomic formulae, there are $2^{2^4} = 65536$ logically inequivalent propositional combinations we can create. For example:

For all x,yx, y, R(x,y)R(x,y). I wonder if this would be called a “total” relation or something.

For all xx, y[R(x,y)]\exists y [R(x, y)]. This is the familiar concept of left-total, for functions.
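These small counts can be verified by direct enumeration. A quick sketch in Python (variable names are mine): the atomic formulae are the $2^2$ ordered pairs, and a binary relation on $\{x, y\}$ is any subset of those pairs.

```python
from itertools import product, combinations

X = ["x", "y"]
# The 4 atomic formulae correspond to the ordered pairs: xRx, xRy, yRx, yRy.
pairs = list(product(X, X))
# A binary relation on X is any subset of X x X.
relations = [set(c) for n in range(len(pairs) + 1)
             for c in combinations(pairs, n)]
print(len(pairs), len(relations))  # 4 atomic formulae, 2^4 = 16 relations
```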

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:23):

For all x,yx, y, (R(x,y)¬R(x,x))(R(y,x))(R(x, y) \wedge \neg R(x, x)) \vee (R(y, x)) (which is in disjunctive normal form.)

Properties like these are fascinating - it isn’t clear if they have names, but it would be really cool to find a concrete example for every single possible such property.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:27):

I now realize that a normal form should be a relation defined on F(n)×F(n)F(n) \times F(n), so that makes things simpler for now.

It would take me much longer to explore the behavior of all possible types of binary relations on $F(n)^2$, so for now I can come back to what I originally said: I believe a "normal form" has to be functional, but it should also be a homomorphism. I am pretty sure that means it has to preserve the Boolean operations $\wedge, \vee, \neg$ and the constants $0$ and $1$. I'll see if I can write this out as a formula.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:34):

A normal form is a function f:F(n)F(n)f : F(n) \to F(n) such that f(0)=0,f(1)=1f(0) = 0, f(1) = 1, and for all x,yf(F(n)),xyx, y \in f(F(n)), x \neq y. (?)

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:36):

Just a guess.

I’m now thinking that actually, what we call the “free Boolean algebra” could be thought of as the free algebra from a signature {2,2,¬1}\{\wedge^2, \vee^2, \neg^1\}, without the axioms for a Boolean algebra like xx=xx \wedge x = x.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:39):

When we add in any axioms for this free algebra - statements of equivalence - this is always a “quotient” of F(n)F(n): it collapses some elements into the same element, even if as trivially as xx=xx \wedge x = x.

Therefore, one great way to understand the structure of Boolean algebras better could be to study the properties of adding just one axiom at a time, to F(n)F(n).

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:49):

Can you prove that every element of the free boolean algebra on nn elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.

Anyway, I’ll try an induction on formulae. So assume we have any number of formulae, nn.

I feel like to even define DNF mathematically, I need to order the symbols, to express the constraint that $\vee$ must be "above"/"before" $\wedge$, and $\wedge$ above/before $\neg$. It sounds like a partial order structure to me. I am pretty sure a lattice is a poset where every pair of elements has a supremum and an infimum, and accordingly we can have join- or meet-semilattices (where every pair has a supremum, respectively an infimum). (Side-thought: since posets are categories, does that make joins and meets… products and coproducts?)

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:50):

I’m trying to show “we can always move \vee outwards”.

view this post on Zulip Julius Hamilton (Aug 13 2024 at 15:55):

Maybe this is similar to the Python program I wrote, actually.

All I have to show is that it commutes/distributes with any symbol containing it.

That’s already one of the axioms, conventionally - a(bc)=(ab)(ac)a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c).

view this post on Zulip Julius Hamilton (Aug 13 2024 at 16:09):

I’m not sure if this is the proof you had in mind, @John Baez, but maybe it would look something like this:

Define “disjunctive normal form” via a partial ordering on the symbols $\wedge$, $\vee$, and $\neg$: $\vee > \wedge > \neg$. I haven’t figured out how to define this yet, though. Maybe with the lattice stuff I said above… using supremums and infimums in some way.

Maybe if we define these operations as functions (which they are), like :X×XX\wedge: X \times X \to X, (x,y)=xy\wedge(x, y) = x \wedge y. Then maybe just state the ordering in terms of the functions? DNF{,,¬}×{,,¬}\text{DNF} \subseteq \{\wedge, \vee, \neg\} \times \{\wedge, \vee, \neg\}, specifically: {,¬,¬,,,¬¬}\{\vee \geq \wedge, \vee \geq \neg, \wedge \geq \neg, \vee \geq \vee, \wedge \geq \wedge, \neg \geq \neg\}. (In this case, we don’t worry about expressions like ¬¬x\neg \neg x, since it respects the ordering.)

And then, I’m thinking about if it’s enough to show axioms like a(bc)=(ab)(ac)a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c) or not. Maybe it could be simple:

¬(xy)=¬x¬y\neg (x \vee y) = \neg x \wedge \neg y

x(yz)=(xy)(xz)x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)

x(yz)=xyzx \vee (y \vee z) = x \vee y \vee z

¬(xy)=¬x¬y\neg (x \wedge y) = \neg x \vee \neg y

I’m not sure if this comes close to a full proof, but I think I showed that no matter what symbol \vee is enclosed by (,,¬\vee, \wedge, \neg), there is a logically valid way to transform it so that {,,¬}\vee \geq \{\vee, \wedge, \neg\}; and if \wedge is enclosed by ¬\neg, there is a logically valid way to transform it so that >¬\wedge > \neg. And we aren’t worried about the other cases.
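The four rewrites above can be turned into a small program. A sketch in Python (the tuple encoding of formulas is my own): first push negations inward with De Morgan and double-negation elimination, then distribute $\wedge$ over $\vee$.

```python
# Negation normal form: push "not" inward using the rules
# ¬¬x ⊢ x,  ¬(x∧y) ⊢ ¬x∨¬y,  ¬(x∨y) ⊢ ¬x∧¬y.
def nnf(f):
    if f[0] == "var":
        return f
    if f[0] == "not":
        g = f[1]
        if g[0] == "var":
            return f
        if g[0] == "not":
            return nnf(g[1])
        if g[0] == "and":
            return ("or", nnf(("not", g[1])), nnf(("not", g[2])))
        return ("and", nnf(("not", g[1])), nnf(("not", g[2])))
    return (f[0], nnf(f[1]), nnf(f[2]))

# Distribute ∧ over ∨, i.e. x∧(y∨z) ⊢ (x∧y)∨(x∧z) (and its mirror image,
# which follows by commutativity), until every ∨ is outermost.
def distribute(f):
    if f[0] == "and":
        a, b = distribute(f[1]), distribute(f[2])
        if a[0] == "or":
            return ("or", distribute(("and", a[1], b)), distribute(("and", a[2], b)))
        if b[0] == "or":
            return ("or", distribute(("and", a, b[1])), distribute(("and", a, b[2])))
        return ("and", a, b)
    if f[0] == "or":
        return ("or", distribute(f[1]), distribute(f[2]))
    return f

def to_dnf(f):
    return distribute(nnf(f))

# Example: ¬(x ∨ ¬(y ∧ z)) rewrites to ¬x ∧ (y ∧ z).
f = ("not", ("or", ("var", "x"), ("not", ("and", ("var", "y"), ("var", "z")))))
print(to_dnf(f))
```

This doesn't collapse duplicate or contradictory conjuncts, so it witnesses only that the rewrites terminate in the $\vee > \wedge > \neg$ ordering, not uniqueness of the normal form.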

view this post on Zulip Julius Hamilton (Aug 13 2024 at 16:27):

Todd Trimble said:

If you think of ABA \leq B as saying that BB is true under all circumstances where AA is true, then of course AA \leq \top tautologically, and A\bot \leq A vacuously (because there are no circumstances where \bot is true, this statement can never be falsified).

I understand this better now. It does make sense to say "anything follows from a contradiction" if you think about it exactly in the way I often find "trivial cases" of various mathematical structures to fulfill the defining conditions, because the conditions "don't even apply": the empty relation is transitive, for example, because we cannot evaluate the defining condition on any elements, so we default to true, never having computed a "false". (The empty set is not a group, though, since a group must contain an identity element.) In this way, material implication can be seen as checking for the single falsifying case $(p, \neg q)$, for which $p \to q$ returns "false"; whenever $\neg p$ holds, we get true. And since $\bot$ is always false, it follows that saying "if birds are flies then I am a boy" actually evaluates as true: the succedent is totally irrelevant, since the antecedent never gets "activated".

view this post on Zulip Todd Trimble (Aug 14 2024 at 03:47):

Glad this makes sense. But right now there are too many comments in this thread to respond to individually. Many of the proposed exercises, far less ambitious in scope than "enumerate all theories" (from the exponential objects thread), haven't been taken up. Like understanding the number of elements in the free Boolean algebra on $n$ generators.

David Egolf has been setting a fine example by working through, in detail, lots of the nitty-gritties of some precisely formulated problems in sheaf theory. Working through such things is invaluable. So is there anything that is not precise that prevents tackling some of the exercises proposed by Baez? At some point, you have to get your hands dirty, rather than asking yet another question, or speculating about what you might want to do in the future.

view this post on Zulip Julius Hamilton (Aug 14 2024 at 12:09):

I genuinely want to prove that (and work through all the other exercises); I also just find it genuinely challenging. I did try to engage with Baez's exercise in the above.

I don't know if it's my "thinking style" or something, but I feel like when I'm at an impasse, I do best by asking a bunch of surrounding, associated questions. That helps me find new avenues of exploration, which allows me to come back to the original question with a new perspective. Honestly, I feel like I gained immensely by thinking through some of the things I talked about above. It brings new ideas to the forefront of my mind, which I can then bring further in my next stab at the problem.

This is where I currently am, regarding the proof of the number of elements in a free Boolean algebra:

  1. I am not completely certain about the use of the word "free". Question to self: what would a non-free Boolean algebra be like?

  2. I believe I am able to show that if I put all the elements in the free Boolean algebra in "a normal form" (conjunctive or disjunctive), there is a counting argument to show that there are 22n2^{2^n} equivalence classes of elements. But I would prefer to look at this counting argument in a different way, than I did previously.

  3. I think the part that is missing from my proof is proving that every element in the free Boolean algebra can be put in a normal form (disjunctive or conjunctive). I did actually try to do that in my last message, and I did ask if it was correct or not.

  4. I also prefer to try to write proofs in a formal way, say with natural deduction, because sometimes I feel like I am not even aware when my own argument counts as a "proof" or not.

I am genuinely trying to engage with you and John's input in the best way possible. But I feel like the best way to think through alternative ways of looking at the above is by formulating a bunch of questions, because it gives me new ideas, but I have to verbalize them first to clarify them to myself.

If you wish to help me complete this proof, please let me know if you have any advice, since asking sub-questions is my current way of trying to fill in missing information to help me make a proof or argument that satisfies me.


So here is some work on the above:

For #2, the argument I previously provided was that if all of the elements are in a normal form, we can count the number of unique elements like this:

For each of nn variables (= atomic formulae), there are 2 elements (regarding negation) in the Boolean algebra: for pp, we have ¬p\neg p, for example. And we know in disjunctive normal form, for example, the negation symbol is the "innermost" operation, in a kind of "order of operations". (I also asked some questions about this in the above: what is a good way to mathematically define the "order of operations" for a normal form?) So, it's like we first know we can make 2n2n elements in the free algebra: p,¬p,q,¬q,r,¬r,p, \neg p, q, \neg q, r, \neg r, \ldots.

The next operation is conjunction, which, as a binary operation, takes 2 arguments. One might think we can form $n^2$ terms by applying a binary operation to $n$ elements, but due to properties like commutativity, idempotence, identity, and complement, this will not be the case. I would like to show how we can draw the following conclusion; for now I just state it: if we remove equivalent expressions, we end up with expressions of the form $a \wedge \neg b \wedge \neg c \wedge d \wedge \ldots$, with one literal per variable. Those strings we can count: for each of $n$ variables, we have 2 choices (positive or negative). So we get $2^n$ possible formulae we can generate from $n$ elements with the $\neg$ and $\wedge$ operations.

For disjunction, we observe that all of the previously generated elements are logically unique. I feel like I should also prove this or explore this more. But this is what allows you to make the simple argument that we are just choosing to include or exclude such terms, connected by the \vee symbol, which gives us 22 choices for each of the 2n2^n elements, which is 22n2^{2^n}.
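The whole counting argument can be checked mechanically for small $n$. A sketch in Python (structure my own): build the $2^n$ "minterms" (conjunctions with one literal per variable), then count the distinct truth tables of disjunctions of subsets of them.

```python
from itertools import product, combinations

n = 2
rows = list(product([0, 1], repeat=n))  # the 2^n truth assignments

# Step 1: the 2^n minterms, e.g. a ∧ ¬b. The minterm with sign vector
# `signs` is true on exactly one row, namely the row equal to `signs`.
def minterm(signs):
    return lambda row: all(row[i] == signs[i] for i in range(n))

minterms = [minterm(s) for s in rows]

# Step 2: a DNF is a disjunction of a subset of minterms; distinct subsets
# give distinct truth tables, so we expect 2^(2^n) of them.
tables = set()
for r in range(len(minterms) + 1):
    for subset in combinations(minterms, r):
        tables.add(tuple(any(m(row) for m in subset) for row in rows))
print(len(tables))  # 2^(2^2) = 16
```

Since each minterm is true on exactly one row, a disjunction of a subset of minterms is true exactly on that subset of rows, which is why no two subsets collide.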

So, what do you think? What am I missing in order to complete this proof? Thank you very much.

I'm trying my best to get better at math.

view this post on Zulip Julius Hamilton (Aug 14 2024 at 12:27):

But I will also write the other things you suggested I prove here, to collect them:

  1. Prove that the axioms of Boolean algebra follow from the deductive system you defined.
  2. Prove that every expression has a dual which is logically valid.

view this post on Zulip John Baez (Aug 14 2024 at 12:37):

Julius Hamilton said:

I don't know if it's my "thinking style" or something, but I feel like when I'm at an impasse, I do best by asking a bunch of surrounding, associated questions.

Sometimes that's best - but sometimes it's best to drill straight in, assembling an argument that consists of many lemmas, then proving the lemmas one by one, perhaps using sub-lemmas, and changing the overall strategy minimally each time a lemma turns out to be false.

At the very least it's good to practice this strategy of direct attack, especially if you're naturally inclined to 'lateral thinking'. (If all you could do was the direct attack, I might suggest more lateral thinking.)

Honestly, I feel like I gained immensely by thinking through some of the things I talked about above. It brings new ideas to the forefront of my mind, which I can then bring further in my next stab at the problem.

That makes sense, but listening to someone ask a bunch of surrounding associated questions when the direct approach works just fine can be rather demotivating. That's why I wasn't planning to reply to your last long post.

This is where I currently am, regarding the proof of the number of elements in a free Boolean algebra:

  1. I am not completely certain about the use of the word "free". Question to self: what would a non-free Boolean algebra be like?

Oh, for some reason I thought you knew what a 'free' Boolean algebra was. I gave a rough-and-ready intuitive definition of them, and you seemed to be doing just fine with it.

But here's one rigorous definition:

We say a Boolean algebra $F$ is a free Boolean algebra on a set $X$ if it is equipped with a function $i : X \to F$ such that for each Boolean algebra $B$ and each function $f : X \to B$, there is a unique Boolean algebra homomorphism $\tilde{f} : F \to B$ with

f~i=f \tilde{f} \circ i = f

Seeing why the rigorous definition matches the rough-and-ready definition is a good thing to do sometime. This often takes people a week or two the first time, but the same idea works for Boolean algebras, groups, vector spaces and many other things, so it's worthwhile.

  2. I believe I am able to show that if I put all the elements in the free Boolean algebra in "a normal form" (conjunctive or disjunctive), there is a counting argument to show that there are $2^{2^n}$ equivalence classes of elements.

Good! Then you can find a non-free Boolean algebra simply by finding a Boolean algebra with 8 elements.
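For instance, the power set of a 3-element set works. A quick sketch (this example is mine, not spelled out in the thread):

```python
from itertools import combinations

# The power set of {1, 2, 3}, with union as join, intersection as meet,
# and complement relative to S, is a Boolean algebra with 2^3 = 8 elements.
S = {1, 2, 3}
subsets = [frozenset(c) for r in range(len(S) + 1)
           for c in combinations(S, r)]
print(len(subsets))  # 8

# 8 is not of the form 2^(2^n) (that sequence runs 2, 4, 16, 256, ...),
# so this Boolean algebra is not free on any finite set of generators.
assert all(2 ** (2 ** n) != len(subsets) for n in range(5))
```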

  3. I think the part that is missing from my proof is proving that every element in the free Boolean algebra can be put in a normal form (disjunctive or conjunctive). I did actually try to do that in my last message, and I did ask if it was correct or not.

I didn't follow your argument, perhaps because it was in the middle of a long post full of other stuff.

  4. I also prefer to try to write proofs in a formal way, say with natural deduction, because sometimes I feel like I am not even aware when my own argument counts as a "proof" or not.

If you write proofs in a formal way, I will never read them, nor will most mathematicians. It's important to learn how to communicate with mathematicians, and part of this is learning to write arguments that mathematicians accept as proofs. This is not easy! But it can be done, and every mathematician does it! The basic method is this: we write proofs, get other mathematicians to read them, see if they are convinced, and if not find out why not.

I am genuinely trying to engage with you and John's input in the best way possible. But I feel like the best way to think through alternative ways of looking at the above is by formulating a bunch of questions, because it gives me new ideas, but I have to verbalize them first to clarify them to myself.

Okay, but I don't have the energy to follow all your questions, especially those that roam wildly around the mathematical landscape.

view this post on Zulip John Baez (Aug 14 2024 at 12:57):

Okay, on to the more technical work:

Julius Hamilton said:

For #2, the argument I previously provided was that if all of the elements are in a normal form, we can count the number of unique elements like this:

For each of nn variables (= atomic formulae), there are 2 elements (regarding negation) in the Boolean algebra: for pp, we have ¬p\neg p, for example. And we know in disjunctive normal form, for example, the negation symbol is the "innermost" operation, in a kind of "order of operations".

Good.

(I also asked some questions about this in the above: what is a good way to mathematically define the "order of operations" for a normal form?)

There's a whole theory of that stuff, but I'd rather focus on this one example before broadening out.

So, it's like we first know we can make 2n2n elements in the free algebra: p,¬p,q,¬q,r,¬r,p, \neg p, q, \neg q, r, \neg r, \ldots.

Okay.

The next operation is conjunction, which has 2 arguments as a binary operation. I think we might think that we can form n2n^2 terms by applying a binary operation to nn elements, but due to properties like commutativity, idempotence, identity, and complement, this will not be the case. I would like to show how we can draw the following conclusion, for now I just state it: if we remove equivalent expressions, we end up with expressions of the form a¬b¬cda \wedge \neg b \wedge \neg c \wedge d \wedge \ldots. Those strings we can count: for each of nn variables, we have 2 choices (positive or negative). So we get 2n2^n possible formulae we can generate from nn elements with the ¬\neg and \wedge operations.

Yes, you get exactly 2n2^n distinct propositions if you start by taking nn propositions p1,,pnp_1, \dots , p_n and their negations, and then form new ones using \wedge. But it's not true if you also allow things like

¬(p1p2) \neg(p_1 \wedge p_2)

Your phrase "formulae we can generate from $n$ elements with the $\neg$ and $\wedge$ operations" is dangerous, since most of us would read that as including things like $\neg(p_1 \wedge p_2)$.

So, you can state your claim here precisely, call it a 'Lemma' and maybe number it, and then sometime come around back and start proving it.

Often a proof works this way: you first show that what you want would follow from a collection of lemmas, and make sure that all the lemmas seem correct. Then you try to prove those lemmas. These in turn may require lemmas of their own - or maybe not, maybe they're easy. Sometimes they turn out to be false. Etc.

For disjunction, we observe that all of the previously generated elements are logically unique. I feel like I should also prove this or explore this more. But this is what allows you to make the simple argument that we are just choosing to include or exclude such terms, connected by the \vee symbol, which gives us 22 choices for each of the 2n2^n elements, which is 22n2^{2^n}.

Again, this is correct - and again you are correctly noting that it deserves to be proved.

So, what do you think? What am I missing in order to complete this proof? Thank you very much.

For starters you're missing the things you said you were missing:

  1. A proof that given nn propositions, there are exactly 2n2^n distinct propositions that can be built from them and their negations using \wedge. That means showing that there are no more, but also importantly no fewer (due to unforeseen equations between the 2n2^n you found).

  2. A proof that, given the $2^n$ propositions mentioned in part 1, there are exactly $2^{2^n}$ distinct propositions that can be built from them using $\vee$. Again this means showing that there are no more and no fewer. These are called disjunctive normal forms.

But also there's this:

  3. A proof that any proposition built from your $n$ propositions $p_1, \dots, p_n$ using $\neg, \wedge$ and $\vee$ is equal to one of the $2^{2^n}$ disjunctive normal forms mentioned in part 2. E.g. take

¬(p1¬(p2¬(p3p4))) \neg(p_1 \vee \neg(p_2 \wedge \neg(p_3 \vee p_4)))

This is not in disjunctive normal form, so you need to show it's equal to a proposition in disjunctive normal form.

view this post on Zulip John Baez (Aug 14 2024 at 13:10):

For step 3, you could try to develop an algorithm to reduce any complicated thing like

¬(p1¬(p2¬(p3p4))) \neg(p_1 \vee \neg(p_2 \wedge \neg(p_3 \vee p_4)))

to disjunctive normal form. This is a syntactic approach: you're manipulating strings of symbols.

Or you could 1) show that every proposition built from $p_1, \dots, p_n$ using $\neg, \wedge, \vee$ corresponds to some kind of interesting and manageable thing, 2) show there are $2^{2^n}$ of these things, and 3) show that these $2^{2^n}$ things correspond in a one-to-one way with disjunctive normal forms. This is a semantic approach: you're thinking about what the symbols can 'mean', or correspond to.

view this post on Zulip John Baez (Aug 14 2024 at 13:15):

Both the syntactic and semantic approaches work here, and they both give (or maybe I should say require) interesting insights.

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:43):

People talk about disjunctive normal form and conjunctive normal form, and that's one way of calculating the number of elements in a finitely generated free Boolean algebra. There are others.

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:43):

Most students are first exposed to Boolean algebras or propositional logic through truth tables, which provides another point of view on free Boolean algebras and the 22n2^{2^n} result. For example, they learn how to determine the truth table of a propositional expression like (p¬q)r(p \wedge \neg q) \vee r, where the rows in the truth table correspond to the different ways of assigning truth values to each of the variables, here p,q,rp, q, r. If there are three variables, as there are here, then there will be eight truth value assignments: you find all ways of putting T/F values under the pp-, qq-, rr-columns:
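The eight rows and the output column can be sketched in a few lines of Python (the T/F row ordering is a choice of mine):

```python
from itertools import product

# Truth table of (p ∧ ¬q) ∨ r: one row per T/F assignment to p, q, r.
def target(p, q, r):
    return (p and not q) or r

for p, q, r in product([True, False], repeat=3):
    row = "".join("T" if v else "F" for v in (p, q, r))
    out = "T" if target(p, q, r) else "F"
    print(row, out)
```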

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:44):

If there are nn variables, then there will be 2n2^n rows or truth value assignments, since you have to choose between the two values T, F a total of nn times. Then you work through the table to determine, for each row, the corresponding truth value of the target expression. So for the TTT row, you would compute (T¬T)T=(TF)T=FT=T(T \wedge \neg T) \vee T = (T \wedge F) \vee T = F \vee T = T, and similarly for the other seven rows. Let me call the truth value of the target expression the "output value".

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:44):

Two propositional expressions in nn variables are said to be logically equivalent if they yield the same output truth values for each of the 2n2^n rows. So you can think of a propositional expression as a function, where the inputs are the 2n2^n truth-value assignments of variables, or rows, and where the output is the T or F that the expression evaluates to for that row. For example, if there are three variables, then a truth table is in essence a function

{TTT,TTF,,FFF}{T,F}\{TTT, TTF, \ldots, FFF\} \to \{T, F\}.

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:44):

Guess how many possible such functions there are? You have to choose between T and F a total of 2n2^n times, so there will be 22n2^{2^n} such choices in all. How about that?
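Concretely, for $n = 3$ (a sketch; the row encoding as strings of T/F is mine):

```python
from itertools import product

n = 3
# The 2^n rows: all T/F assignments to the n variables.
rows = ["".join(bits) for bits in product("TF", repeat=n)]
# A truth-table function independently picks T or F for each row.
functions = list(product("TF", repeat=len(rows)))
print(len(rows), len(functions))  # 8 rows, 2^(2^3) = 256 functions
```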

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:44):

But wait a minute, I didn't show that every function of the form {TTT,TTF,,FFF}{T,F}\{TTT, TTF, \ldots, FFF\} \to \{T, F\} is a truth table function of some propositional expression, did I? But there's a nice way of doing this, and it's connected with conjunctive normal form. Let's take an example. Suppose I wanted to cook up a propositional expression whose truth table function ϕ\phi is given by

ϕ(TTT)=Fϕ(TTF)=Tϕ(TFT)=Tϕ(TFF)=T\phi(TTT) = F \qquad \phi(TTF) = T \qquad \phi(TFT) = T \qquad \phi(TFF) = T

ϕ(FTT)=Fϕ(FTF)=Fϕ(FFT)=Tϕ(FFF)=F\phi(FTT) = F \qquad \phi(FTF) = F \qquad \phi(FFT) = T \qquad \phi(FFF) = F

(I'm just making up a function at random).

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:45):

The trick is: first learn how to solve such problems where there is just a one output that is T and all the rest are F. For example, say ψ(TTT)=T\psi(TTT) = T but all other ψ\psi-values are F. Then pqrp \wedge q \wedge r works. Because TTTT \wedge T \wedge T evaluates to TT, but in all other cases, where at least one of p,q,rp, q, r has been assigned to F, we have that pqrp \wedge q \wedge r evaluates to F.

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:45):

Let's try another. Say we want ψ(TTF)=T\psi(TTF) = T but all other ψ\psi-values are F. Well, we just make a slight adjustment to the idea that worked before: we use instead pq¬rp \wedge q \wedge \neg r. In other words, we still take a conjunction of variables, except that we might first need to "flip the switch" by applying ¬\neg to any variable that was assigned the value F in that row, in order to get a conjunction of T's as the output value, TT¬F=TTT=TT \wedge T \wedge \neg F = T \wedge T \wedge T = T for the TTF row, but the output will be F for any other row.

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:45):

Thus, to solve the problem of finding an expression whose truth table has just one output T, we take an appropriate conjunction of literals, where a literal is either a variable or a negation of a variable. I've seen such expressions called "realizers".

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:46):

Now let's return to the earlier problem where I chose a function ϕ\phi at random. To find a propositional expression that represents it, just take a disjunction of realizers, over all those rows whose output is T. For the example I gave, this would be

(pq¬r)(p¬qr)(p¬q¬r)(¬p¬qr).(p \wedge q \wedge \neg r) \vee (p \wedge \neg q \wedge r) \vee (p \wedge \neg q \wedge \neg r) \vee (\neg p \wedge \neg q \wedge r).

Note that this is a disjunctive normal form.
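This construction mechanizes nicely. A sketch in Python (variable names mine): for each row that $\phi$ sends to T, build the realizer by negating exactly the variables assigned F in that row, then take the disjunction.

```python
from itertools import product

# The randomly chosen truth-table function from above.
phi = {"TTT": "F", "TTF": "T", "TFT": "T", "TFF": "T",
       "FTT": "F", "FTF": "F", "FFT": "T", "FFF": "F"}
variables = ["p", "q", "r"]

def realizer(row):
    # "Flip the switch": negate any variable assigned F in this row.
    return " ∧ ".join(v if c == "T" else "¬" + v
                      for v, c in zip(variables, row))

true_rows = [row for row in phi if phi[row] == "T"]
dnf = " ∨ ".join("(" + realizer(row) + ")" for row in true_rows)
print(dnf)

# Check: the realizer for row r is true on exactly the assignment r, so the
# disjunction's truth table agrees with phi on every row.
for bits in product("TF", repeat=3):
    row = "".join(bits)
    value = any(r == row for r in true_rows)
    assert ("T" if value else "F") == phi[row]
```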

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:47):

So here is the story so far:

  - The elements of the free Boolean algebra on $n$ generators can be identified with truth-table functions, i.e. functions from the $2^n$ truth-value assignments to $\{T, F\}$.
  - There are $2^{2^n}$ such functions, which gives the counting result.
  - Every such function is realized by a disjunction of realizers, i.e. by an expression in disjunctive normal form.

view this post on Zulip Todd Trimble (Aug 14 2024 at 16:47):

(I guess I didn't show that the description of "free Boolean algebra" in the first bullet point matches the description that John gave in terms of a universal property. But it does, and I'm just trying to give some intuition.)

There's still another way of viewing the 22n2^{2^n} result, by exploiting the equivalence between the concepts of Boolean algebra and Boolean ring (by definition, a Boolean ring is a commutative ring in which every element is idempotent), and constructing a free Boolean ring on nn variables by analogy with free commutative rings, whose elements are polynomials in nn variables. This is actually my favorite proof, but I'm going to leave it as a teaser for now.

view this post on Zulip Jean-Baptiste Vienney (Aug 14 2024 at 17:00):

Just to help with the LaTeX, since Zulip gives us no feedback on where the typo is when a formula doesn't render, and it can be quite annoying when you can't find it. I think I found it: you wrote \{T,F} instead of \{T,F\} in all the formulas which don't appear properly.

view this post on Zulip Todd Trimble (Aug 14 2024 at 17:13):

Thanks, Jean-Baptiste. I think I got it.

view this post on Zulip John Baez (Aug 14 2024 at 18:18):

Todd Trimble said:

Most students are first exposed to Boolean algebras or propositional logic through truth tables, which provides another point of view on free Boolean algebras and the 22n2^{2^n} result.

By the way, Julius, this is the 'semantic approach' I was hinting at when mentioning semantic and syntactic approaches to showing the free Boolean algebra has 22n2^{2^n} entries. (There could be other semantic approaches but this is the most famous.) There's a lot of fun involved in working out how truth tables connect to disjunctive and conjunctive normal forms, and Todd has pointed out some of it.

view this post on Zulip Julius Hamilton (Aug 17 2024 at 23:00):

I will read through the above and respond more thoroughly. Thank you.

After a lot of false leads and thinking, I think the following is a very simple and valid argument:

If we want to count the number of terms in a free Boolean algebra, we should carefully consider a formal/precise way to say what we mean by this. In the free algebra over the signature {,,¬,0,1}\{\wedge, \vee, \neg, 0, 1\}, both the elements ¬(ab)\neg (a \vee b) and ¬a¬b\neg a \wedge \neg b are present and are not equal; but in the associated free Boolean algebra, they are equal. And so we need to distinguish between the term algebra for the signature {,,¬,0,1}\{\wedge, \vee, \neg, 0, 1\}, and the free Boolean algebra, whose elements are actually equivalence classes of those terms. So to be more accurate, the elements of the free Boolean algebra are actually more like: {{0,a¬a,b¬b,a¬aa,},{b,bb,bb,b(bb),},}\{\{0, a \wedge \neg a, b \wedge \neg b, a \wedge \neg a \wedge a, \ldots\}, \{b, b \wedge b, b \vee b, b \wedge (b \vee b), \ldots\}, \ldots \}. The way that we determine that two expressions are equivalent is by their “truth tables”: two Boolean expressions are equivalent if they have identical truth tables. Intuitively, they “have the same truth content / truth conditions / meaning”.

Now, noticing this may help us frame the counting question more accurately. If we have nn variables, we want to know how many equivalence classes of formulae built from those variables there are. And by focusing on equivalence classes, we don’t have to worry at all about the different ways the variables can be combined with Boolean operations, because we know that any such combination determines a function BnBB^n \to B. The domain is BnB^n because, no matter how the variables are combined by Boolean operations, an input to the expression (seen as a whole) is a selection of a value from {0,1}\{0,1\} for each of the distinct variables featured in the expression. The codomain is BB because the Boolean operations are closed on Boolean values, so any Boolean expression can only attain values in {0,1}\{0, 1\}.

And so now the question “How many equivalence classes of Boolean expressions for nn variables are there?” is just the question of “How many different Boolean functions for nn variables can there be?”, which is the question “How many distinct functions BnBB^n \to B are there?” And this is simpler: B=2|B| = 2, and the number of functions XYX \to Y is YX|Y|^{|X|}. It directly follows that the number is 22n2^{2^n}.
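For small nn the count can be verified by brute force; here is a quick sketch of my own (not from the thread):

```python
# Count the functions B^n -> B directly: a function is a choice of an
# output in {0, 1} for each of the 2^n rows of a truth table.
from itertools import product

def num_boolean_functions(n):
    rows = list(product((0, 1), repeat=n))  # the 2^n input rows
    return len(list(product((0, 1), repeat=len(rows))))

for n in range(4):
    print(n, num_boolean_functions(n), 2 ** (2 ** n))
```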

I need to keep thinking but this was one small thought.

view this post on Zulip Julius Hamilton (Aug 17 2024 at 23:23):

In order to show that any element in the equivalence classes in the free Boolean algebra has a normal form…

I have been thinking of defining a “normal form” by way of a set of deduction rules, of the form pqp \vdash q. This would let us state a set of rules and say that, for any element of the equivalence classes of the free Boolean algebra, repeatedly applying whichever of those rules can be applied will eventually terminate in a normal form. The rules would include:

x,¬¬xx\forall x, \neg \neg x \vdash x
x,y,¬(xy)(¬x)(¬y)\forall x, y, \neg (x \wedge y) \vdash (\neg x) \vee (\neg y)
x,y,¬(xy)(¬x)(¬y)\forall x, y, \neg (x \vee y) \vdash (\neg x) \wedge (\neg y)
x,y,z,x(yz)(xy)(xz)\forall x, y, z, x \wedge (y \vee z) \vdash (x \wedge y) \vee (x \wedge z)

The way this helps me is due to the “directionality” of the \vdash relation, I can express the “direction” expressions should be reduced. Whereas the Boolean algebra axioms are expressed as equalities, so I wasn’t sure how to prevent the possibility of them being applied “in the wrong direction”.
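These directed rules can be tried out mechanically. Here is a minimal sketch of my own (not from the thread), representing formulas as nested tuples and applying only the four rules listed above, innermost first, until no rule applies:

```python
# Formulas: a variable is a string; compound formulas are
# ('not', x), ('and', x, y), ('or', x, y).

def step(t):
    """Apply one of the four directed rules at the root, if possible."""
    if isinstance(t, str):
        return None
    if t[0] == 'not' and not isinstance(t[1], str):
        op, *args = t[1]
        if op == 'not':   # ¬¬x ⊢ x
            return args[0]
        if op == 'and':   # ¬(x ∧ y) ⊢ ¬x ∨ ¬y
            return ('or', ('not', args[0]), ('not', args[1]))
        if op == 'or':    # ¬(x ∨ y) ⊢ ¬x ∧ ¬y
            return ('and', ('not', args[0]), ('not', args[1]))
    if t[0] == 'and' and not isinstance(t[2], str) and t[2][0] == 'or':
        x, (_, y, z) = t[1], t[2]   # x ∧ (y ∨ z) ⊢ (x ∧ y) ∨ (x ∧ z)
        return ('or', ('and', x, y), ('and', x, z))
    return None

def normalize(t):
    """Rewrite to a fixpoint, normalizing subterms first."""
    if isinstance(t, str):
        return t
    t = (t[0],) + tuple(normalize(a) for a in t[1:])
    r = step(t)
    return normalize(r) if r is not None else t

print(normalize(('not', ('not', 'a'))))      # a
print(normalize(('not', ('or', 'a', 'b'))))  # ('and', ('not', 'a'), ('not', 'b'))
```

Note that this sketch implements only the rules as stated (for instance, it distributes ∧ over ∨ only on the right), so it is an illustration of directionality, not a full normal-form procedure.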

This reminds me of how Todd’s natural deduction system is expressed in terms of a \leq relation, and you can use those deduction rules to derive the axioms of a Boolean algebra.

Assuming the entailment relation composes associatively, I think I might actually be able to model this as a category, and we can prove that the way these reduction rules chain together always terminates at something which can be reduced no further. It might seem like I’m getting “off-track”, but right now it’s the most elegant way I can think of to try to explicitly prove that these rules will reduce every expression deterministically to a single form that cannot be reduced further.

view this post on Zulip Chris Grossack (they/them) (Aug 18 2024 at 00:02):

These are great ideas! And you're INCREDIBLY close to a complete proof. Let me know if you want a hint that lets you sidestep normal forms, or alternatively a hint for how to show normal forms are "canonical" (exactly one normal form per class)

view this post on Zulip Todd Trimble (Aug 18 2024 at 03:16):

Assuming the entailment relation composes associatively

Yes, that's the "cut rule" (from my July 27 post).

view this post on Zulip John Baez (Aug 18 2024 at 10:52):

Julius Hamilton said:

The way that we determine that two expressions are equivalent is by their “truth tables”: two Boolean expressions are equivalent if they have identical truth tables. Intuitively, they “have the same truth content / truth conditions / meaning”.

That's one way, the semantic way. So let's call that semantic equivalence.

But there's also what I'd consider the basic way: the syntactic way. There are a bunch of equations in the definition of "boolean algebra", like

¬(ab)=¬a¬b \neg(a \vee b) = \neg a \wedge \neg b
a(bc)=(ab)(ac) a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c)

etcetera. By definition, two expressions xx and yy give the same element of the free boolean algebra on some elements p1,,pnp_1, \dots, p_n if and only if you can repeatedly use these equations to get from xx to yy. So let's call this relationship syntactic equivalence.

Luckily, since you've read Chapter 1 of Mendelson's book on logic, you've seen the Deduction Theorem, Proposition 1.9, which says that syntactic equivalence implies semantic equivalence. (Well, he doesn't say it this way, but this quickly follows from what he says.)

And you've seen the Completeness Theorem, Proposition 1.14, which says that semantic equivalence implies syntactic equivalence. (Again he doesn't say it this way.)

So, while your original quest was definitely to count the number of syntactic equivalence classes of expressions - since this is the number of elements in the free boolean algebra on p1,,pnp_1, \dots, p_n - you are entitled, knowing these theorems, to count the number of semantic equivalence classes.

view this post on Zulip John Baez (Aug 18 2024 at 10:54):

A while back I outlined these two approaches to showing there are 22n2^{2^n} elements in the free boolean algebra on nn elements:

John Baez said:

You could try to develop an algorithm to reduce any complicated thing like

¬(p1¬(p2¬(p3p4))) \neg(p_1 \vee \neg(p_2 \wedge \neg(p_3 \vee p_4)))

to disjunctive normal form. This is a syntactic approach: you're manipulating strings of symbols.

Or you could show that 1) every proposition built from p1,,pnp_1, \dots , p_n using ¬,,\neg, \wedge, \vee corresponds to some kind of interesting and manageable thing, 2) show there are 22n2^{2^n} of these things, and 3) show that these 22n2^{2^n} things correspond in a one-to-one way with disjunctive normal forms. This is a semantic approach: you're thinking about what the symbols can 'mean', or correspond to.

view this post on Zulip John Baez (Aug 18 2024 at 10:56):

You now seem to be taking the semantic approach, where the "interesting and manageable thing" turns out to be a function BnBB^n \to B.

view this post on Zulip John Baez (Aug 18 2024 at 15:32):

That's great - but the fact that it's equivalent to the syntactic approach is quite nontrivial. That's what Mendelson's "Deduction Theorem" and "Completeness Theorem" are all about. These are probably the two most fundamental theorems in propositional logic.

view this post on Zulip Patrick Nicodemus (Aug 18 2024 at 23:59):

John Baez said:

You now seem to be taking the semantic approach, where the "interesting and manageable thing" turns out to be a function BnBB^n \to B.

The distinction between syntactic and semantic is tricky to motivate because even though it's "semantic" it's also obviously effective in a way that semantic definitions of truth in arithmetic are not.

view this post on Zulip John Baez (Aug 19 2024 at 09:47):

I don't find that distinction hard to motivate myself: in the syntax of boolean algebras we just fiddle around rewriting expressions like p1(p2p1)p_1 \vee (p_2 \wedge p_1) according to the boolean algebra rules, while in the semantics we step outside and say "hey! What might these expressions mean? How can we interpret them?" - and we discover that each boolean expression maps to an nn-ary function from {F,T}\{F,T\} to {F,T}\{F,T\} in a useful way. This is a shockingly different approach.

view this post on Zulip John Baez (Aug 19 2024 at 09:51):

So then we need to check that this new semantic approach matches nicely with the original syntactic approach - so we need to prove the soundness and completeness theorems, which Mendelson calls the Deduction Theorem (actually something a wee bit more powerful) and Completeness Theorem.

(I know you know all this; I'm just trying to pound it in a bit more forcefully for @Julius Hamilton's sake.)

view this post on Zulip Julius Hamilton (Aug 22 2024 at 02:33):

I actually hadn’t read Mendelson yet, but now I am, and it’s really good so far.

I recently did (half of) a proof exercise with David Egolf and it made me think that proofs are easier when you focus on the definitions very precisely and notice what exactly you can get to follow from them.

For this reason I want to start back at the beginning.

This is my current definition of a Boolean algebra:

A Boolean algebra is a set SS, a unary operation ¬\neg, two binary operations ,\wedge, \vee , and two distinguished elements 0,10,1 such that:

x,y,z[x(yz)=(xy)z]\forall x, y, z [x \wedge (y \wedge z) = (x \wedge y) \wedge z]

x,y,z[x(yz)=(xy)z]\forall x, y, z [x \vee (y \vee z) = (x \vee y) \vee z]

x,y[xy=yx]\forall x, y [x \wedge y = y \wedge x]

x,y[xy=yx]\forall x, y [x \vee y = y \vee x]

x[x1=x]\forall x [x \wedge 1 = x]

x[x0=x]\forall x [x \vee 0 = x]

x[xx=x]\forall x [x \wedge x = x]

x[xx=x]\forall x [x \vee x = x]

x[x¬x=0]\forall x [x \wedge \neg x = 0]

x[x¬x=1]\forall x [x \vee \neg x = 1]

x,y,z[x(yz)=(xy)(xz)]\forall x, y, z [x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)]

x,y,z[x(yz)=(xy)(xz)]\forall x, y, z [x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)]

x,y[¬(xy)=¬x¬y]\forall x, y [\neg (x \wedge y) = \neg x \vee \neg y]

x,y[¬(xy)=¬x¬y]\forall x, y [\neg (x \vee y) = \neg x \wedge \neg y]

Before going further, I want to establish the following:

Lemma 1. The empty set is not a Boolean algebra.

My intuition for this is that, since the signature mandates the presence of elements 00 and 11, there is no valid mapping of those constant symbols to a domain, if the domain is the empty set. There’s nothing to map them to, so the empty set is not a model that satisfies the above theory. Could someone let me know if this small assertion is true, or is there a formal way to evade the need to map those constant symbols to anything?

MϕM \models \phi means ϕ\phi is true for all variable assignments, for a map from the signature σ\sigma to the structure MM. Is it a valid proof to say “no mapping from the signature to the empty set exists, therefore, the empty set cannot be a structure for this signature”, precluding the possibility of it being a model? (Or, “no map from the constant symbols 00 and 11 to elements in the empty set exists, therefore, the empty set is not a structure the theory of Boolean algebras can be interpreted in.”)

view this post on Zulip Julius Hamilton (Aug 22 2024 at 03:06):

I’m pretty confident the above is correct.

My desire is to thoroughly establish a few models of Boolean algebras, then explore a proof by induction on their size.

Let M={x}\mathcal{M} = \{x\}, a set with one element.

Let J:σMJ: \sigma \to \mathcal{M} be the map such that J(0)=x,J(1)=xJ(0) = x, J(1) = x.

I’m not sure how we express what we are mapping the function symbols to. We map them to functions. But how do we define the functions? As sets of ordered pairs, like this? J(¬)={(x,¬x),(¬x,x)},J()={(x,x,x)}J(\neg) = \{(x, \neg x), (\neg x, x)\}, J(\wedge) = \{(x, x, x)\}? It doesn’t seem right.

Nevermind, I think I understand. Since in this case there is only one element, we can define the functions pointwise:

¬(x)=x\neg(x) = x

(x,x)=x\wedge(x, x) = x

(x,x)=x\vee(x, x) = x

We have now established that {x}\{x\} is a valid structure to interpret ϕ\phi in. Since there is only one element, the only interpretation we have to check is when all variables equal x. We run through the axioms. We use the above 3 assertions about the values of our three functions to check if the axioms hold:

x,y,z[x(yz)=(xy)z]\forall x, y, z [x \wedge (y \wedge z) = (x \wedge y) \wedge z]

x(xx)=?(xx)xx \wedge (x \wedge x) \stackrel{?}{=} (x \wedge x) \wedge x

x(xx)=xx=xx \wedge (x \wedge x) = x \wedge x = x

(xx)x=xx=x(x \wedge x) \wedge x = x \wedge x = x :checkbox:

x,y,z[x(yz)=(xy)z]\forall x, y, z [x \vee (y \vee z) = (x \vee y) \vee z]

(Being a bit less explicit:)

x(xx)=xx=xx \vee (x \vee x) = x \vee x = x :checkbox:

x,y[xy=yx]\forall x, y [x \wedge y = y \wedge x]

xx=xx \wedge x = x :checkbox:

x,y[xy=yx]\forall x, y [x \vee y = y \vee x]

xx=xx \vee x = x :checkbox:

x[x1=x]\forall x [x \wedge 1 = x]

xx=xx \wedge x = x :checkbox:

x[x0=x]\forall x [x \vee 0 = x]

xx=xx \vee x = x :checkbox:

x[xx=x]\forall x [x \wedge x = x]

xx=xx \wedge x = x :checkbox:

x[xx=x]\forall x [x \vee x = x]

xx=xx \vee x = x :checkbox:

x[x¬x=0]\forall x [x \wedge \neg x = 0]

xx=xx \wedge x = x :checkbox:

x[x¬x=1]\forall x [x \vee \neg x = 1]

xx=xx \vee x = x :checkbox:

x,y,z[x(yz)=(xy)(xz)]\forall x, y, z [x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)]

x(xx)=xx \wedge (x \vee x) = x

(xx)(xx)=xx=x(x \wedge x) \vee (x \wedge x) = x \vee x = x :checkbox:

x,y,z[x(yz)=(xy)(xz)]\forall x, y, z [x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)]

(and so on)

x,y[¬(xy)=¬x¬y]\forall x, y [\neg (x \wedge y) = \neg x \vee \neg y]

x,y[¬(xy)=¬x¬y]\forall x, y [\neg (x \vee y) = \neg x \wedge \neg y]

So this arguably is a complete proof that any singleton set models the theory of Boolean algebras.
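The hand check above can also be delegated to a brute-force loop. Here is a sketch of my own (not from the thread), using min and max as the candidate interpretations of ∧ and ∨ and testing every assignment of elements to the variables:

```python
# Verify the equational Boolean-algebra axioms in a candidate structure
# by testing every assignment of elements of S to x, y, z.
from itertools import product

def check_axioms(S, meet, join, neg, zero, one):
    for x, y, z in product(S, repeat=3):
        assert meet(x, meet(y, z)) == meet(meet(x, y), z)
        assert join(x, join(y, z)) == join(join(x, y), z)
        assert meet(x, y) == meet(y, x) and join(x, y) == join(y, x)
        assert meet(x, one) == x and join(x, zero) == x
        assert meet(x, x) == x and join(x, x) == x
        assert meet(x, neg(x)) == zero and join(x, neg(x)) == one
        assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
        assert join(x, meet(y, z)) == meet(join(x, y), join(x, z))
        assert neg(meet(x, y)) == join(neg(x), neg(y))
        assert neg(join(x, y)) == meet(neg(x), neg(y))
    return True

# the singleton model: every operation returns the unique element
print(check_axioms({'x'}, min, max, lambda a: a, 'x', 'x'))   # True
# the two-element model {0, 1} with ¬a = 1 - a
print(check_axioms({0, 1}, min, max, lambda a: 1 - a, 0, 1))  # True
```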

view this post on Zulip Julius Hamilton (Aug 22 2024 at 03:28):

This is helping me think of things to prove I hadn’t even thought of before, like proving that there is only one unique Boolean algebra on a set with nn elements up to isomorphism.

view this post on Zulip Julius Hamilton (Aug 22 2024 at 03:29):

I’ll continue with this tomorrow.

view this post on Zulip Todd Trimble (Aug 22 2024 at 05:27):

Could someone let me know if this small assertion is true, or is there a formal way to evade the need to map those constant symbols to anything?

It's true. You can't get around the requirement to interpret constants as elements.

Julius Hamilton: This is helping me think of things to prove I hadn’t even thought of before, like proving that there is only one unique Boolean algebra on a set with n elements up to isomorphism.

To save you some time: that's not true. Every finite Boolean algebra has cardinality equal to a power of 2.

view this post on Zulip John Baez (Aug 22 2024 at 08:57):

This looks good, @Julius Hamilton. I guess I forgot which logic book you were using! But many introductions to logic start with propositional logic (\approx boolean algebras) and then move on to the predicate calculus (\approx first-order logic). And I think that's a good order of things.

view this post on Zulip Julius Hamilton (Aug 22 2024 at 20:47):

Pretty sure I just realized something beautiful and obvious.

DNF and CNF explicitly encode the values of a truth table.

mendelson.png

Call the expressions between the parentheses the “disjuncts”. Stack the disjuncts vertically. Assign each variable a 1 if its symbol is unnegated and a 0 if it is negated. Each such assignment makes the whole expression return a 1. The variables not appearing in a disjunct don’t matter.

CNF: do the same thing but stack the conjuncts horizontally.

This might hint why there are only 2 normal forms.
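The observation also runs in the other direction: from any truth table one can read off a DNF, one disjunct per row that outputs 1. A sketch of my own (the function name is hypothetical, not from the thread):

```python
# Each row of a truth table that outputs 1 becomes one disjunct of the
# DNF, with a variable negated exactly when its value in that row is 0.
from itertools import product

def dnf_from_table(table, names):
    """table maps each input row (tuple of 0/1) to an output in {0, 1}."""
    disjuncts = []
    for row, out in table.items():
        if out == 1:
            lits = [v if bit else f"¬{v}" for v, bit in zip(names, row)]
            disjuncts.append("(" + " ∧ ".join(lits) + ")")
    return " ∨ ".join(disjuncts)

# truth table of XOR on two variables
xor = {row: row[0] ^ row[1] for row in product((0, 1), repeat=2)}
print(dnf_from_table(xor, ["p", "q"]))  # (¬p ∧ q) ∨ (p ∧ ¬q)
```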

view this post on Zulip Julius Hamilton (Aug 24 2024 at 17:46):

Here’s another cool thing.

A “theory” (collection of formulae) can be specified by choosing elements in a Boolean lattice.

Because the lattice is ordered by logical strength, an element of a Boolean lattice implies all of the elements directly weaker than it. For example, pqpp \wedge q \leq p.

This provides a visual way to see which axioms of a theory are not independent, and it should make it easier to calculate, for a collection of independent formulae, how many distinct logical theories can be created from them.

view this post on Zulip Todd Trimble (Aug 24 2024 at 18:54):

A “theory” (collection of formulae) can be specified by choosing elements in a Boolean lattice.

Because the lattice is ordered by logical strength, an element of a Boolean lattice implies all of the elements directly weaker than it.

Yes, for a propositional theory. What you are describing in the second sentence is called a filter of the Boolean algebra, generated by the chosen elements. A filter of a Boolean algebra BB is a subset FF of BB that is upward closed (if bFb \in F and bcb \leq c, then also cFc \in F) and closed under meets (if b,cFb, c \in F, then also bcFb \wedge c \in F). The meaning of the second condition is that if the chosen elements are considered as axioms that are assumed true in the theory, then finite meets of axioms will be true as a consequence.
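As a toy illustration of my own (not part of the post above), one can compute the filter generated by a couple of elements in the Boolean algebra of subsets of a three-element set, closing under meets and then taking the upward closure:

```python
# The Boolean algebra of subsets of {0, 1, 2}, ordered by inclusion,
# with meet = intersection. The filter generated by some elements is
# the upward closure of all finite meets of those elements.
from itertools import chain, combinations

universe = frozenset({0, 1, 2})
elements = [frozenset(s) for s in chain.from_iterable(
    combinations(universe, r) for r in range(4))]

def generated_filter(gens):
    meets = {universe}
    for g in gens:
        meets |= {m & g for m in meets}     # close under meets
    return {b for b in elements if any(m <= b for m in meets)}  # upward

F = generated_filter([frozenset({0, 1}), frozenset({1, 2})])
print(sorted(sorted(s) for s in F))  # [[0, 1], [0, 1, 2], [1], [1, 2]]
```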

Alternatively, you can consider ideals in a Boolean algebra: these are precisely complements of filters. For an algebraist, these might feel more familiar, because one speaks of a quotient of a ring modulo an ideal. But either way, one can talk about the quotient of a Boolean algebra modulo an ideal or a filter, and the resulting quotient, considered as a Boolean algebra, is an effective algebraic way of thinking about the theory at hand. The elements of the Boolean algebra are equivalence classes of formulas where pqp \equiv q if pqp \Leftrightarrow q is a provable consequence of the theory, i.e., belongs to the filter that the axioms generate. (All this is related to "localization", currently being discussed in another thread.)

Things will generally become more complicated when you have to deal with first-order (or predicate) theories and have to juggle with different domains, quantification, and so on: to each domain there is a corresponding Boolean algebra consisting of equivalence classes of predicate formulas, and all these Boolean algebras interact in complicated ways.

view this post on Zulip Todd Trimble (Aug 24 2024 at 18:55):

This provides a visual way... should make it easier to calculate... how many distinct logical theories can be created from them.

Even though a free Boolean algebra has a very large number of elements, too many to honestly visualize as a lattice, it's indeed intuitively helpful to develop geometric ideas of related things, akin to picturing varieties in algebraic geometry. As was mentioned earlier in this thread, a Boolean algebra can be equivalently thought of as a Boolean ring, which is a special type of commutative ring, namely one for which every element xx is idempotent: satisfies xx=xx \cdot x = x. In algebraic geometry, each commutative ring has a corresponding spectrum, a kind of "geometric dual" of that ring; very roughly speaking, spectra are like geometric loci (curves, surfaces, and so on), and one can import geometric ideas from algebraic geometry to think about Boolean algebras.
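As a quick concrete check of my own (not from the thread), the two-element case already exhibits the Boolean-ring translation, with + as XOR and · as AND:

```python
# With + = XOR and . = AND on {0, 1}, every element is idempotent, and
# the Boolean operations are recovered as a∧b = a.b, a∨b = a+b+a.b,
# ¬a = 1+a.
for a in (0, 1):
    assert (a & a) == a                    # x.x = x (idempotence)
    for b in (0, 1):
        assert (a | b) == a ^ b ^ (a & b)  # a∨b = a+b+a.b
    assert (1 - a) == 1 ^ a                # ¬a = 1+a
print("Boolean ring identities hold on {0, 1}")
```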

The points of the spectrum of a free Boolean algebra on nn elements correspond to rows in truth tables, consisting of truth-value assignments for the variables.

Enumerating finite propositional theories, or more precisely enumerating the number of filters of a free Boolean algebra on nn generators, can indeed be done. These filters (or ideals) correspond to subsets of the "space" of truth assignments, and these can be counted.

Perhaps the biggest takeaway from this array of ideas is that the spectrum of a Boolean algebra BB, thought of geometrically, is the Stone space of BB. The Stone space is a certain compact Hausdorff space. Stone duality is a wonderful thing to think about, and the compactness theorem for propositional calculus is directly and powerfully related to the compactness of this spectrum. So you're getting into some deep-ish matters here!

view this post on Zulip Julius Hamilton (Aug 24 2024 at 20:09):

Wow. Trying to get back into university so I can just study this stuff all day.

view this post on Zulip Julius Hamilton (Aug 24 2024 at 20:32):

Let Σ={{0,1},{(1,¬),(2,),(2,)},{}}\Sigma = \{\{0, 1\}, \{(1, \neg), (2, \wedge), (2, \vee)\},\{\}\}. Let domain A={a,b}A = \{a, b\}. Let a\mathfrak{a} consist of the following maps:

a(¬)={(a,b),(b,a)}\mathfrak{a}(\neg) = \{(a, b), (b, a)\}
a()={(a,a,a),(a,b,a),(b,a,a),(b,b,b)}\mathfrak{a}(\wedge) = \{(a, a, a), (a, b, a), (b, a, a), (b, b, b)\}
a()={(a,a,a),(a,b,b),(b,a,b),(b,b,b)}\mathfrak{a}(\vee) = \{(a, a, a), (a, b, b), (b, a, b), (b, b, b)\}
a(0)=a\mathfrak{a}(0) = a
a(1)=b\mathfrak{a}(1) = b

Let U\mathfrak{U} be the structure (A,a)(A, \mathfrak{a}).

Lemma: U\mathfrak{U} is the only model of ϕ\phi (the theory of Boolean algebras) for which A=2|A| = 2, up to isomorphism.

Proof:

I can think of these proof strategies:

view this post on Zulip Julius Hamilton (Aug 24 2024 at 20:47):

Attempting the first:

Let Aϕ\mathfrak{A} \models \phi and Bϕ\mathfrak{B} \models \phi, where A=(A,a),\mathfrak{A} = (A, \mathfrak{a}), and B=(B,b)\mathfrak{B} = (B, \mathfrak{b}), and A=B=2|A| = |B| = 2.

view this post on Zulip Todd Trimble (Aug 24 2024 at 21:01):

I think you're making this much harder than it is. A poset can carry at most one Boolean algebra structure, is the general theorem.

view this post on Zulip Julius Hamilton (Aug 24 2024 at 21:04):

Define an isomorphism between models π:AB\pi : \mathfrak{A} \cong \mathfrak{B} as a map f:ABf: A \to B such that:

view this post on Zulip Julius Hamilton (Aug 24 2024 at 21:05):

Todd Trimble said:

I think you're making this much harder than it is. A poset can carry at most one Boolean algebra structure, is the general theorem.

Oh, well, it seems like a good opportunity to apply what I am learning from my logic textbook.

But I can also try a different approach to prove this, sure.

view this post on Zulip Julius Hamilton (Aug 24 2024 at 21:31):

Anyway.

If Aϕ\mathfrak{A} \models \phi, this means that Aψ\mathfrak{A} \models \psi for every ψϕ\psi \in \phi.

For each ψ\psi, we check them in accordance with natural language, basically.

One formula of ϕ\phi, a Σ\Sigma-formula, is:

However, we already assumed A\mathfrak{A} and B\mathfrak{B} are models, so we don’t have to check if they meet the conditions. We assume they already do.

Given that, we have to show that:

a) AA and BB are in bijection (they are, since A=B=2|A| = |B| = 2)
b) That RAa1,,anR^{\mathfrak{A}}a_1,\ldots,a_n iff RBπ(a1),,π(an)R^{\mathfrak{B}}\pi(a_1),\ldots,\pi(a_n).
c) (…)

One thing I am wondering is, in order to show that AB\mathfrak{A} \cong \mathfrak{B}, we have to show there exists an isomorphism π:AB\pi : \mathfrak{A} \cong \mathfrak{B}. Is the only way to prove this by explicitly constructing such an isomorphism? I don’t think we can do that if we are attempting to prove that any arbitrary A,B\mathfrak{A}, \mathfrak{B} are isomorphic without specifying them specifically.

view this post on Zulip Julius Hamilton (Aug 24 2024 at 21:33):

I’ll have to think this over.

view this post on Zulip Julius Hamilton (Aug 24 2024 at 21:52):

But another way to do this is to show what structure must necessarily arise from the axioms, and this would be a direct proof of the structure of a Boolean algebra with 2 elements. For example, we can prove that 101 \neq 0; now we have determined that for a two element set, one element must be 11, the other 00.

So maybe all we have to do is run through the axioms like we did when the only element was xx, but just say, “this time, the only elements are xx and yy, and xyx \neq y”.

view this post on Zulip Julius Hamilton (Aug 24 2024 at 22:00):

Then it would be:

Assume a set {a,b}\{a, b\} is a model of a Boolean algebra. We want to generate all the values for the three function symbols.

Then from the axioms, we have:

aa=a,ab=ba,bb=b,aa=a,bb=b,ab=baa \wedge a = a, a \wedge b = b \wedge a, b \wedge b = b, a \vee a = a, b \vee b = b, a \vee b = b \vee a. Assume a=0a = 0 without loss of generality. Then ab=0,ab=ba \wedge b = 0, a \vee b = b. ¬(ab)=¬b\neg(a \vee b) = \neg b. I’ll just need to continue these derivations until I can prove that b=1b = 1. There’s probably a derivable law that 0=¬10 = \neg 1 that I haven’t used before.

view this post on Zulip Julius Hamilton (Aug 24 2024 at 22:31):

And I think instead of deriving it, I can instead just prove that it’s true.

Assume 0=¬10 = \neg 1. Goal: derive a tautology. You can use the identity and annihilation laws for this: 0a=a,0a=0,1a=1,1a=a0 \vee a = a, 0 \wedge a = 0, 1 \vee a = 1, 1 \wedge a = a.

Maybe we can substitute 0, 1, and ¬1\neg 1 into all these equations to see what we produce.

01=1,01=00 \vee 1 = 1, 0 \wedge 1 = 0. Since we assumed 0=¬10 = \neg 1, we can replace 00 with ¬1\neg 1:

¬11=1\neg 1 \vee 1 = 1. This is a tautology. 0=¬10 = \neg 1 \vdash \top.

Then the shortest proof that the only Boolean algebra with 2 elements is the one where a=¬ba = \neg b might be:

  1. Assume without loss of generality that a=0a = 0.
  2. By assumption, aba \neq b, otherwise A2|A| \neq 2.

Actually we don’t even need the result 0=¬10 = \neg 1; the above is enough to deduce that b=1b = 1, since 11 needs to be represented in the structure {a,b}\{a, b\}. But we could also say that since a=0,b=1a = 0, b = 1 because 1=¬0 1 = \neg 0.

This might not be perfect but has been a good exercise for me.

view this post on Zulip Todd Trimble (Aug 24 2024 at 23:56):

Oh, well, it seems like a good opportunity to apply what I am learning from my logic textbook.

Yes, you're right to say that, and I apologize.

I don't have Mendelson before me. From what you write, I am supposing that he defines a Boolean algebra to have signature (,,¬,0,1)(\vee, \wedge, \neg, 0, 1), satisfying a number of axioms. Different authors may use different signatures (and even if two use the same signature, they might not always list the same equational axioms). But if I were the author, I would also include a binary relation which earlier I denoted by \leq, as part of the signature.

A comment on what you wrote above. To say that a statement AA is a tautology, write A\top \vdash A. Not AA \vdash \top. The latter is true for any statement AA, tautology or not.

For example, we can prove that 101 \neq 0; now we have determined that for a two element set, one element must be 11, the other 00.

I don't think you have quite proved that 101 \neq 0 in a Boolean algebra with more than one element. Depending on which equational axioms you are working with, this might be obvious, but nevertheless I'll pose the question: how would 0=10 = 1 lead to a contradiction? As you say, there's an element 00, and by assumption there exists a non-00 element aa, and that's what we have to work with here. Hint: if 0=10 = 1, see if you can somehow conclude 0=a0 = a.

Once that gets squared away, then in a two-element Boolean algebra, there'd be no getting around the fact that the elements are 00 and 11, and then the \vee and \wedge tables (like addition and multiplication tables) are very quickly established from the axioms.

By far the most interesting thing is the fact that once ,\vee, \wedge are secured, then complements are uniquely determined. I think you know how to show that in the 2-element case, the complement of 00 can't possibly be 00, so then it could only be 11, and in fact 11 works. But here's a little exercise: is it possible to have a Boolean algebra BB with an element bb such that b=¬bb = \neg b? You may assume that BB has more than one element.

view this post on Zulip Julius Hamilton (Aug 31 2024 at 15:23):

I may have gotten overwhelmed by how many new ideas there are in this discussion, and I'd like to return to John Baez's suggestion to count the number of elements in the free Boolean algebra on nn generators. I would like to do this in a clean-cut fashion, with some tips and hints.

Here's what I know so far:

I tend to work with the definition of a Boolean algebra as a set of elements, three operations, and two distinguished elements, which meet the (dual) axioms of associativity, commutativity, identity, complement, idempotence, distributivity, and de Morgan's laws.

I want to focus on the syntactic proof.

Starting with a fresh point of view, I think a syntactic proof should be of this form: "If the free Boolean algebra of nn generators has kk elements, then the free Boolean algebra of n+1n + 1 generators has f(k)f(k) elements."

Can anyone confirm that this is the general proof strategy I should aim for?

view this post on Zulip John Baez (Aug 31 2024 at 16:46):

I probably wouldn't have thought of that proof strategy, but I bet it can work nicely.

I tend to work with the definition of a Boolean algebra as a set of elements, three operations, and two distinguished elements, which meet the (dual) axioms of associativity, commutativity, identity, complement, idempotence, distributivity, and de Morgan's laws.

That seems fine for a syntactic proof, especially if you want to use disjunctive or conjunctive normal form, which are defined in terms of the 3 operations ,,¬\vee, \wedge, \neg.

view this post on Zulip John Baez (Aug 31 2024 at 16:48):

There is probably a slick semantic proof using truth tables, but you seem like a syntax guy, so while the syntax of 'normal forms' involves a bit of bushwhacking (as far as I can tell), it may play to your strengths.

view this post on Zulip Morgan Rogers (he/him) (Sep 02 2024 at 08:40):

John Baez said:

There is probably a slick semantic proof using truth tables

I think that approach pushes all of the hard work into showing that if two logical expressions have identical truth tables, they are equivalent in the original Boolean algebra; you don't get a free lunch!

view this post on Zulip John Baez (Sep 02 2024 at 14:20):

Sounds right - in our earlier conversation about this I started talking to Julius about that theorem, which I'd call "soundness and completeness". Of course it's a good theorem to know, and if you know it, maybe you should use it! But you're right, if you add in the difficulty of proving that, the semantic approach is probably no quicker.

view this post on Zulip Julius Hamilton (Sep 03 2024 at 00:22):

After much thinking, I have had a realization that helps me a lot.

I originally wanted to show a correspondence between ordered pairs from {0,1,a,¬a}2\{0, 1, a, \neg a\}^2 and B(a,b)={0,1,a,b,¬a,¬b,ab,}B(a,b) = \{0, 1, a, b, \neg a, \neg b, a \wedge b, \ldots\}.

That means a binary Boolean operation, b(x,y)b(x,y), but it has to include bb in its output, to ensure it generates the expressions aba \wedge b, etc.

It also must generate a unique expression for all inputs. (It must be injective).

That rules out many Boolean operations. It can’t be xyx \wedge y, for instance: \wedge is commutative, so we would have b(x,y)=b(y,x)b(x,y) = b(y,x) for distinct ordered pairs.

A function that does this is (xb)(y¬b)(x \wedge b) \vee (y \wedge \neg b). For ordered pairs (0,0),(0,1),(1,0),(1,1),(a,a),(¬a,¬a),(a,¬a),(0,0), (0,1), (1,0), (1, 1), (a, a), (\neg a, \neg a), (a, \neg a), \ldots, it gives a unique expression: 0,¬b,b,1,a,¬a,(ab)(¬a¬b),0, \neg b, b, 1, a, \neg a, (a \wedge b) \vee (\neg a \wedge \neg b), \ldots.
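Identifying each equivalence class with its truth table, this pairing can be checked by brute force for the step from one variable to two. A sketch of my own (not from the thread; the row-ordering convention is my choice):

```python
# Check that (x, y) -> (x ∧ b) ∨ (y ∧ ¬b) pairs the Boolean functions of
# the old variables injectively with those of one more variable.
# Functions are represented by their truth tables (tuples of 0/1).
from itertools import product

def functions(n):
    """All Boolean functions of n variables, as truth-table tuples."""
    return set(product((0, 1), repeat=2 ** n))

def combine(x, y):
    """x is the b=1 half of the new table, y the b=0 half
    (rows ordered so the new variable's 0-rows come first)."""
    return y + x

f1 = functions(1)                    # 4 functions of one variable
f2 = {combine(x, y) for x, y in product(f1, repeat=2)}
print(len(f2), f2 == functions(2))   # 16 True
```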

view this post on Zulip Julius Hamilton (Sep 03 2024 at 04:39):

It’s as if you’re taking all the DNFs of the previous round and re-generating them with a bb and ¬b\neg b term.