If anyone would like to give me a few hints in attempting to formulate the following mathematically, it could benefit me. Thanks.
Here is the idea:
I will assume that it is not that important how we generate terms, for now. If a signature includes some constants, some variables, and some functions, we can generate a term algebra, which is related to the concept of a free algebra. However, I hope that I can ignore how the terms were generated, and just assume I have some set of terms, $T$. (My only concern is whether we need to know which terms have variable symbols in them once we start generating formulae from terms.)
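(To make this concrete for myself, here is a quick Python sketch with a made-up signature - a constant c, variables x and y, a unary symbol f, and a binary symbol g - just to watch the terms pile up:)

from itertools import product

constants = ["c"]
variables = ["x", "y"]
unary = ["f"]
binary = ["g"]

def terms(depth):
    # all terms of nesting depth at most `depth`, as nested tuples
    ts = set(constants) | set(variables)
    for _ in range(depth):
        new = {(f, t) for f in unary for t in ts}
        new |= {(g, s, t) for g in binary for s, t in product(ts, repeat=2)}
        ts |= new
    return ts

print(len(terms(0)), len(terms(1)))  # 3 15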
Now, it seems to me that when we generate formulae from terms, we can look at those rules of formula-formation in a few different ways.
For example, we could distinguish between formulae of the form $t_1 = t_2$ and $R(t_1, \ldots, t_n)$; or we could say that $=$ is a relation symbol, and so combine those two "cases" into a single case: that of generating formulae by applying relation symbols to terms. This also seems similar to a free algebra, to me.
Similarly, we can consider what formulae can be generated by application of one of various logical connectives to formulae. However, I believe it is known that for a collection of formulae in propositional logic, the algebraic structure generated by application of logical connectives to those formulae will be a Boolean algebra. So to me, it seems like if we want to know what formulae can be generated for a given signature, we can create a layer of abstraction between terms and formulae. Once we know what terms we have, we know which atomic formulae we have. Once we know what atomic formulae we have, I think it follows that our logic would be a free Boolean algebra of generators, where each atomic formula is one of the generators.
Or rather, it would be, if this were propositional logic. Because quantifiers are another way to create formulae from formulae, it would be nice to identify the algebraic structure they impart on a collection of formulae. I was thinking about how we could consider closed formulae as a special case of open formulae. For every open formula, when we map a signature to a structure, there is a set of elements in our domain which satisfy the formula, and a set of elements which do not. We cannot evaluate if an open formula is satisfied or not until we provide a variable assignment. This makes me think of closed formulae simply as "open formulae with zero free variables", analogous to other contexts in math where we have a "zero" version of a concept (like a function from an empty set).
I looked a little at the definition of Boolean algebras. They interested me, but I also want to explore different formulations for them. I found it interesting that the definition I saw included an element $\top$ for tautology and an element $\bot$ for contradiction. I can imagine that this is an elegant algebraic formulation, as these elements can act as "distinguished elements", for example in a Boolean ring. On the other hand, I would prefer to think of the elements of a Boolean algebra as formulae from a signature, with a mapping from the elements into a set of truth-values, $\{0, 1\}$. It would be nice to not include $\top$ and $\bot$ as elements in the algebra.
It seems to me like it is the atomic formulae which are the most “interesting” part of any theory. If there is consistency in how term algebras are generated from a signature, and composite formulae from atomic formulae, it seems the thing that varies the most from theory to theory is that collection of atoms, which are the axioms.
Please let me know if you would like to suggest any ideas for me to think about regarding this. Thank you very much.
Since your post is titled "free boolean algebra", I feel like posing a puzzle for you. How many elements does the free boolean algebra on an n element set have?
Even if it's orthogonal to your concerns, figuring this out is a good way to start understanding free boolean algebras.
Thank you, I would love to undertake such an exercise. I'll work on it and get back to you. Thank you.
I have been thinking about this and wanted to share where I am so far.
The axioms of a Boolean algebra are symmetrical. For conjunction and for disjunction, we have an axiom of commutativity, associativity, distributivity, identity, complement, and idempotence. For example, for conjunction, we have: $a \wedge b = b \wedge a$, $a \wedge (b \wedge c) = (a \wedge b) \wedge c$, $a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c)$, $a \wedge 1 = a$, $a \wedge \neg a = 0$, and $a \wedge a = a$.
Now, for a free algebra, you might think that if we begin with a set of elements, representing some formulae, they would be generators of the algebra, but there would be infinitely many elements of the algebra, because we can keep applying the operations; for example, from $a$ and $b$, we can form $a \wedge b$, $a \vee b$, $\neg a$, etc.
The key thing is that this does not happen, because the axioms alone can prove that any such expression can actually be reduced to a simpler form - a so-called "normal form" (for example, conjunctive or disjunctive normal form, with expressions something like $(a \wedge \neg b) \vee (\neg a \wedge b)$).
I am pretty sure if every formula in the deductive closure of a set of formulas has a normal form, we can create equivalence classes of the formulas, and these become the elements of the Boolean algebra.
I need to keep thinking from here.
a good exercise is to compare this reasoning with what happens in a heyting algebra
In case you are looking for references on this topic:
For myself, I also found it helpful to think about the free Boolean algebra on a one element set.
Looking at the Wikipedia page, I just noticed it discusses an interesting way to think about the elements of a free Boolean algebra.
Each element of a Boolean algebra can be expressed in terms of some generating elements and the symbols $\wedge, \vee, \neg$. For example, if $a$ and $b$ are elements of a Boolean algebra, then so is $a \wedge b$.
If we imagine we can assign some "truth value" (either 0 or 1) to each of the generating elements in our Boolean algebra, then expressions like $a \wedge b$ can be understood as functions. In this case, we would have a function that takes in the assigned truth value of $a$ and the assigned truth value of $b$, and then returns the "and" of those two truth values. So, $a \wedge b$ can be associated to a function $f : \{0,1\}^2 \to \{0,1\}$, which takes in two truth values and returns the "and" of them. Different expressions, like $a \vee b$ for example, will correspond to different functions.
If I am understanding the Wikipedia article correctly, the idea is that two expressions in a free Boolean algebra that may look different will actually correspond to the same element if they induce the same functions on truth values.
If that is true, then it seems like these two things are closely related: two expressions being equal as elements of the free Boolean algebra, and two expressions inducing the same function on truth values.
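(Here is a quick Python check of that idea, with two generators a and b - the two expressions below look different but induce the same function on truth values:)

from itertools import product

def truth_table(expr, n):
    # the function on truth values induced by `expr`,
    # recorded as a tuple of outputs over all 2^n assignments
    return tuple(expr(*vals) for vals in product([0, 1], repeat=n))

e1 = lambda a, b: a & b                      # a and b
e2 = lambda a, b: 1 - ((1 - a) | (1 - b))    # not(not a or not b)

print(truth_table(e1, 2) == truth_table(e2, 2))  # True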
Ryan Wisnesky said:
a good exercise is to compare this reasoning with what happens in a heyting algebra
One should clarify that characterizing free Heyting algebras ab ovo is at best an open-ended, at worst a futile exercise, especially for free HAs on more than one generator, double especially for somebody who just encountered free Boolean algebras for the first time.
@Julius Hamilton - Please don't read those references @David Egolf suggested - not just yet, anyway. This puzzle is not so hard, and if you 'cheat' and look up the answer you'll miss a lot of its benefits. I'll give you a hint though:
As a trained mathematician, my natural instinct would be to study the free boolean algebra on a 0-element set and the free boolean algebra on a 1-element set,
and after that practice warmup, the free boolean algebra on a 2-element set.
Counting the elements of these gives me a sequence of 3 numbers,
and I would try to guess the next one. Then I would either try to prove the pattern I'm seeing - or if I couldn't do that, try to count the elements of the free Boolean algebra on 3 elements. If I liked programming, maybe I'd write a program to list the elements of that.
I'm trying to get you to think concretely and engage your pattern recognition - a key skill even in category theory. You can get fooled, around here, into thinking that all problems should be solved in a top-down deductive way. In fact that step often comes only after you know the answer.
Thanks. I am not finished yet, but I decided to share some work in progress.
I think the idea behind these diagrams is that the Boolean set $\{0, 1\}$ is closed under the Boolean operations, and if we augment it with an element $a$, we also have to add $\neg a$, and then the set is closed. I believe the point is that all possible sequences of applications of Boolean operations will always reduce to an element in the set $\{0, 1, a, \neg a\}$. I expect this to get a little more complicated when I add in another element $b$, but the principle will be of the same nature - that there will be a fixed set of elements which all the elements of the free Boolean algebra reduce to. I will explore that next.
Then, maybe, a proof of the size of a Boolean algebra can be related to a proof by induction that a Boolean algebra augmented with an element is again a Boolean algebra.
F1_detailed.png
F1_detailed2.png
It's hard to read those pictures. Can you say how many elements there are in the free Boolean algebra on an n-element set in some examples like n = 0, 1, 2 and maybe even 3?
Ok.
(assuming that this is valid because we can say that all the axioms are "for all elements in the set", so they're trivially met.)
(in this scenario, $B = \emptyset$.)
Let me know if I made any mistakes. Thanks.
I think my approach is to assume that the elements of the set are generators, so for $\{0, 1, a\}$, a Boolean algebra will require the closure of this set under $\wedge$, $\vee$ and $\neg$. But I now realize that instead of assuming there is an element $0$ or $1$ in the set, I think I can infer that one of the elements acts as $0$ or $1$, and start with a set of "anonymous" elements $\{a_1, \ldots, a_n\}$.
I'll now be walking through and exploring what I said above about starting with a set of arbitrary elements. Thank you very much.
You're making an easily correctable mistake. To get the free boolean algebra on some elements we build up all possible expressions in these elements using $\wedge$, $\vee$ and $\neg$, and use the boolean algebra laws to say when two of these expressions are equal.
Any boolean algebra is required to have 0 and 1, so when we talk about the free boolean algebra on some set $S$, we don't include 0 and 1 in that set $S$: they come 'for free', as it were.
So what you're calling the free boolean algebra on $\{0, 1, x\}$ is actually the free boolean algebra on one element, namely $x$.
With this correction, you are correctly showing that the free boolean algebra on 1 element has 4 elements: $0, x, \neg x, 1$.
Similarly, you showed that the free boolean algebra on no elements has 2 elements: $0, 1$.
So, you have begun computing the number of elements in the free boolean algebra on $n$ elements, say $F(n)$, and you've reported these findings:
$F(0) = 2$, $F(1) = 4$.
This is not quite enough to guess a pattern, so $F(2)$ should be helpful.
Actually, now that I think of it, there's an obvious guess about $F(3)$ based on the above data. So if I were in your situation I would check that guess.
Do you mean $F(2)$?
I'm sorry, I meant to say $F(2)$. $F(2)$ is much easier to compute by hand than $F(3)$. In fact all the stuff we learn about "truth tables" for binary operations in a boolean algebra is really an investigation of the free boolean algebra on 2 generators!
John Baez said:
You're making an easily correctable mistake. To get the free boolean algebra on some elements we build up all possible expressions in these elements using $\wedge$, $\vee$ and $\neg$, and use the boolean algebra laws to say when two of these expressions are equal.
Any boolean algebra is required to have 0 and 1, so when we talk about the free boolean algebra on some set $S$, we don't include 0 and 1 in that set $S$: they come 'for free', as it were.
So what you're calling the free boolean algebra on $\{0, 1, x\}$ is actually the free boolean algebra on one element, namely $x$.
With this correction, you are correctly showing that the free boolean algebra on 1 element has 4 elements: $0, x, \neg x, 1$.
Similarly, you showed that the free boolean algebra on no elements has 2 elements: $0, 1$.
Ok, got you.
Now assume we have two elements, $a$ and $b$.
I prefer to think about what elements we can generate through one application of any of the operations, before considering a “second round” of application. I don’t know if people do it other ways, for example exploring many applications of one operation before moving to the next.
Something like this:
Current elements: $0, 1, a, b$.
One application of $\neg$: $\neg a$, $\neg b$.
One application of $\wedge$: $a \wedge b$.
One application of $\vee$: $a \vee b$.
(I am skipping elements which reduce via the axioms. For example, $a \wedge a = a$ by idempotence. $a \wedge 1 = a$ by identity. $a \wedge (a \vee b) = a$ by absorption. $b \wedge a = a \wedge b$ via commutativity.)
Current set of elements: $0, 1, a, b, \neg a, \neg b, a \wedge b, a \vee b$.
Second round of application of $\neg$: $\neg(a \wedge b) = \neg a \vee \neg b$, $\neg(a \vee b) = \neg a \wedge \neg b$.
Second round of application of $\wedge$: $a \wedge \neg b$, $\neg a \wedge b$, $\neg a \wedge \neg b$.
I am pretty sure any expression with 3 variables, such as $a \wedge (b \vee \neg a)$, can be reduced to an expression with at most 2 variables. For example, $a \wedge (b \vee \neg a) = a \wedge b$.
Second round of application of $\vee$: $a \vee \neg b$, $\neg a \vee b$, $\neg a \vee \neg b$.
I happen to have gotten the tip to reduce things to conjunctive or disjunctive normal form.
So I think these are the unique terms for $n = 2$:
$0, 1, a, b, \neg a, \neg b, a \wedge b, a \vee b, a \wedge \neg b, \neg a \wedge b, \neg a \wedge \neg b, a \vee \neg b, \neg a \vee b, \neg a \vee \neg b$.
My guess is $F(2) = 14$.
But as of writing this, I realize there might be terms like $(a \wedge \neg b) \vee (\neg a \wedge b)$, which I'll check next.
I think there's probably a better way to see what terms can and can't be generated than just trying to list them all, but for now I'll just say, I know we can also form $(a \wedge b) \vee (\neg a \wedge \neg b)$, and the number of elements is 16, and 2, 4, 16 indicates some pattern related to exponentials.
I understand there's an idea that the number is equivalent to the number of distinct Boolean functions of $n$ arguments, I believe, but I haven't yet seen perfectly clearly how to draw that connection, which I want to think about tomorrow.
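(In the meantime, here is a brute-force Python check, taking for granted the idea above that elements correspond to distinct truth-value functions: start with the truth tables of the generators plus the constants, and close under the operations.)

from itertools import product

def free_boolean_algebra_size(n):
    assignments = list(product([0, 1], repeat=n))
    # truth tables of the n generators, plus the constant tables 0 and 1
    tables = {tuple(a[i] for a in assignments) for i in range(n)}
    tables |= {tuple(0 for _ in assignments), tuple(1 for _ in assignments)}
    # keep applying not, and, or until nothing new appears
    while True:
        new = {tuple(1 - x for x in t) for t in tables}
        new |= {tuple(x & y for x, y in zip(s, t)) for s in tables for t in tables}
        new |= {tuple(x | y for x, y in zip(s, t)) for s in tables for t in tables}
        if new <= tables:
            return len(tables)
        tables |= new

print([free_boolean_algebra_size(n) for n in range(3)])  # [2, 4, 16]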
Great! Yes, it's nice to build up the free boolean algebra on a set in "rounds" as you've done, and this can be useful for any sort of "free" algebraic gadget.
And yes, the free boolean algebra on 2 elements has 16 elements. So you have $F(0) = 2$, $F(1) = 4$, $F(2) = 16$.
Now: can you guess the pattern? E.g. can you guess what $F(3)$ is? You said there's "some pattern related to exponentials", and that's right, but you can go out on a limb and say more than that. The more specific you dare to be, the more useful mental associations you will form. It's even good to guess the specific pattern: guessing patterns is a key mathematical skill, and one should never let your fear of guessing wrong paralyze you. The only way to build your pattern-matching skills is to exercise them a lot.
The ultimate point of guessing this pattern is to get some deep structural insights in free boolean algebras: these insights will explain the pattern. But when you're thrust into a mathematical situation you don't fully understand, the patterns you see are often the keys to the insights you need to get.
Here is the next round of thinking on the matter. I think I could benefit from any tiny hints or comments pointing out something I missed as I was thinking through this. I will keep in mind your advice to "make my guesses as specific as possible", going forward. I guess I can still try to make more specific guesses than I have so far. Thanks a lot!
boolalg4.pdf
Okay, good - a guess: $F(3) = 256$. Let's see if that's right. Scribble scribble. Yes, it's right!
So, what's interesting is that you said "each term is the square of the previous term". Yes, that's correct! But I never thought of it that way before. I just thought in terms of a formula for $F(n)$, namely $F(n) = 2^{2^n}$. I think this formula may be more illuminating in terms of understanding the structure of the free boolean algebra on $n$ elements.
But it would be interesting to see if you can get there starting from where you are. So:
Puzzle. Given a sequence defined recursively by $a_0 = 2$ and $a_{n+1} = a_n^2$, what is a simple closed-form formula for $a_n$?
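If it helps, here is the recursion unrolled numerically (a quick Python sketch):

a, values = 2, []
for _ in range(5):
    values.append(a)
    a = a * a          # the recursion a_{n+1} = a_n^2
print(values)          # [2, 4, 16, 256, 65536]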
I see you also tried various ways to organize the elements in one of these free boolean algebras. There's a very nice way to organize them which makes it obvious that $F(n) = 2^{2^n}$. In fact there are two. I don't think you're close to stating those.
Let's see...
Puzzle. Do one of the operations $\wedge$, $\vee$ distribute over the other? Can you use this to take all elements in the free boolean algebra on $n$ generators and put them in a standard form?
What I mean is something like this: in ordinary algebra, multiplication distributes over addition, and we can use this to take an arbitrary expression like
$$(x + y)(2x + z)$$
and rewrite it in a standard form:
$$2x^2 + 2xy + xz + yz$$
I may have done it wrong, but you get the idea: the distributive law brings order to what was previously a vast wilderness of expressions. We all learn this in school.
So if $\wedge$ distributed over $\vee$, or vice versa, you could do something like that for a boolean algebra.
I'll return to this tomorrow when I can give it more in-depth treatment. Thanks! :)
I'll work on this now. I wanted to say one thing that I just thought about though.
The axioms of a Boolean algebra, or another formula-formation rule set, do not generate a consistent logical theory. We wouldn't expect them to. In a Boolean algebra, every proposition's negation exists. A Boolean algebra could be seen as a way to generate all meaningful sentences; but it does not know how to generate sentences that follow logically from other sentences.
For that, we have a deduction system or deductive calculus. A simple deductive calculus might be as little as modus ponens and universal instantiation, but I read that you then need to add more axioms to the theory. Natural deduction, by contrast, creates a clean layer of separation between formulae and the actions you can take on them to construct new formulae. Natural deduction is like a machine that reads in logical connectives and has rules about what to do with them. From that perspective, a deductive system should provide a rule for every logical symbol. Or, perhaps there are incomplete deductive systems that allow us to rewrite some formulae and not others.
Here is what piques my interest.
The axioms of a Boolean algebra, seen as operations on a set of elements, are "unrestricted". They can be applied to all combinations of elements in the set. The same appears to be true for term algebras.
Not so with deductive systems. Their application is "conditional", and often takes the form, "If you have formula $\varphi$, you may construct formula $\psi$." I am very interested to explore, next, what kind of algebraic structure we need to model the enumeration of theorems from axioms. (My guess: partial algebraic operations, which can be related to categories.)
I can't claim to understand this well but it's the work of another day.
I think you can prove that for formulae $a, b, c, \ldots$, if you combine them with conjunction and negation, all of them are logically unique. In a boolean algebra, we often deal with logically equivalent formulae, but this is not the case for formulae like these: $a \wedge \neg b \wedge \neg c$, $\neg a \wedge \neg b \wedge \neg c$, etc.
This means that for $n$ variables, if we start by generating strings of the above form, we are just choosing between positive or negative form for each variable in one of these "and" strings.
So for $n$ variables, it just means $2^n$. 2 choices, $n$ times.
The second useful piece of information is that if you combine these "and"-strings with disjunction, you will also create logically unique terms. For any of the above formulae, like (a and not b and not c), (not a and not b and not c), etc, if you create ((a and not b and not c) or (not a and not b and not c)), this formula will not be logically equivalent to any other such formulae you can make from disjunctions on the previous "and" strings.
For that reason, we again are making "choices" with 2 options, for each element in our and-strings: include or exclude, in an "or"-string. So that means 2 to the power of however many "and" strings we made (which was $2^n$). So we get $2^{2^n}$.
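In symbols: there are $2^n$ "and"-strings, and an element is determined by which of them we include in the "or"-string, i.e. by a subset of the set of "and"-strings, so the count is
$$F(n) = 2^{\#\{\text{and-strings}\}} = 2^{2^n}.$$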
I don't feel finished here. I think there are more intuitive ways of looking at this. But it's a start. Thank you very much.
The axioms of a Boolean algebra, or another formula-formation rule set, do not generate a consistent logical theory.
How so? Do you think you can prove $\top = \bot$ from the axioms of a Boolean algebra?
(My point is that inconsistencies as such are assertions, not expressions.)
Here is a simple deductive system for Boolean algebra, suitable for classroom presentation and discussion. $P, Q, R$, etc. denote well-formed propositional expressions, constructed as usual from variables, constants $\top, \bot$, and the operations $\wedge, \vee, \neg$.
Identity axioms: $P \vdash P$.
Cut rule: from $P \vdash Q$ and $Q \vdash R$, conclude $P \vdash R$.
Axioms for $\top$: $P \vdash \top$.
Axioms for $\bot$: $\bot \vdash P$.
Rules for $\wedge$: from $P \vdash Q$ and $P \vdash R$, conclude $P \vdash Q \wedge R$; conversely, from $P \vdash Q \wedge R$, conclude $P \vdash Q$ and $P \vdash R$.
Rules for $\vee$: from $P \vdash R$ and $Q \vdash R$, conclude $P \vee Q \vdash R$; conversely, from $P \vee Q \vdash R$, conclude $P \vdash R$ and $Q \vdash R$.
Rules for $\neg$: from $P \wedge Q \vdash \bot$, conclude $P \vdash \neg Q$; conversely, from $P \vdash \neg Q$, conclude $P \wedge Q \vdash \bot$.
Convention: we write $P \equiv Q$ if $P \vdash Q$ and $Q \vdash P$ are derivable in this deductive system.
It's interesting and moderately challenging (but I think fun!) to derive the usual equations of Boolean algebra as valid inferences based on these axioms and rules. (Hard challenge: derive the distributive laws.)
But I am confident that you will never be able to derive an inconsistency $\top \vdash \bot$.
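To give the flavor of how derivations go, here is a tiny sample derivation of $\bot \vdash \neg P$, using only the rules above:
1. $\bot \wedge P \vdash \bot \wedge P$ (identity axiom)
2. $\bot \wedge P \vdash \bot$ (from 1, by the second rule for $\wedge$)
3. $\bot \vdash \neg P$ (from 2, by the first rule for $\neg$)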
Julius Hamilton said:
I can't claim to understand this well but it's the work of another day.
I think you can prove that for formulae $a, b, c, \ldots$, if you combine them with conjunction and negation, all of them are logically unique. In a boolean algebra, we often deal with logically equivalent formulae, but this is not the case for formulae like these: $a \wedge \neg b \wedge \neg c$, $\neg a \wedge \neg b \wedge \neg c$, etc.
This means that for $n$ variables, if we start by generating strings of the above form, we are just choosing between positive or negative form for each variable in one of these "and" strings.
So for $n$ variables, it just means $2^n$. 2 choices, $n$ times.
Yes, very good!
The second useful piece of information is that if you combine these "and"-strings with disjunction, you will also create logically unique terms. For any of the above formulae, like (a and not b and not c), (not a and not b and not c), etc, if you create ((a and not b and not c) or (not a and not b and not c)), this formula will not be logically equivalent to any other such formulae you can make from disjunctions on the previous "and" strings.
For that reason, we again are making "choices" with 2 options, for each element in our and-strings: include or exclude, in an "or"-string. So that means 2 to the power of however many "and" strings we made (which was $2^n$). So we get $2^{2^n}$.
Excellent! This is a famous thing. It's called 'disjunctive normal form' and there's a Wikipedia article about it:
Can you prove that every element of the free boolean algebra on $n$ elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.
I don't feel finished here. I think there are more intuitive ways of looking at this. But it's a start. Thank you very much.
Sure! There are beautiful conceptual ways of thinking about why the free boolean algebra on $n$ elements has $2^{2^n}$ elements. But disjunctive normal form is one very nice way.
Todd Trimble said:
The axioms of a Boolean algebra, or another formula-formation rule set, do not generate a consistent logical theory.
How so? Do you think you can prove $\top = \bot$ from the axioms of a Boolean algebra?
(My point is that inconsistencies as such are assertions, not expressions.)
Here is a simple deductive system for Boolean algebra, suitable for classroom presentation and discussion. $P, Q, R$, etc. denote well-formed propositional expressions, constructed as usual from variables, constants $\top, \bot$, and the operations $\wedge, \vee, \neg$.
Identity axioms: $P \vdash P$.
Cut rule: from $P \vdash Q$ and $Q \vdash R$, conclude $P \vdash R$.
Axioms for $\top$: $P \vdash \top$.
Axioms for $\bot$: $\bot \vdash P$.
Rules for $\wedge$: from $P \vdash Q$ and $P \vdash R$, conclude $P \vdash Q \wedge R$; conversely, from $P \vdash Q \wedge R$, conclude $P \vdash Q$ and $P \vdash R$.
Rules for $\vee$: from $P \vdash R$ and $Q \vdash R$, conclude $P \vee Q \vdash R$; conversely, from $P \vee Q \vdash R$, conclude $P \vdash R$ and $Q \vdash R$.
Rules for $\neg$: from $P \wedge Q \vdash \bot$, conclude $P \vdash \neg Q$; conversely, from $P \vdash \neg Q$, conclude $P \wedge Q \vdash \bot$.
Convention: we write $P \equiv Q$ if $P \vdash Q$ and $Q \vdash P$ are derivable in this deductive system.
It's interesting and moderately challenging (but I think fun!) to derive the usual equations of Boolean algebra as valid inferences based on these axioms and rules. (Hard challenge: derive the distributive laws.)
But I am confident that you will never be able to derive an inconsistency $\top \vdash \bot$.
This seems like a really fun and perfect challenge for me.
I have some questions to make sure I grasp the situation well, first.
You don’t necessarily have to answer these questions, as just stating them aloud makes me start to think about them with more focus.
My first question is about the elimination rules for $\vee$. At first glance, I can't see why "from $P \vee Q \vdash R$, conclude $P \vdash R$" would be a good representation of the laws of disjunction.
My guess: the exercise may show me an interesting kind of "ordering" of truths. So that this rule is not saying that if from $P \vee Q$ one can conclude $R$, then from $P$ one can conclude $R$. The order relation may be referring to, for example, binary truth values! So that we literally mean to say that if $P \vee Q$, evaluated to a truth value of $0$ or $1$, is $\leq R$, then $P$ is also $\leq R$.
More to come soon.
So that this rule is not saying that if from $P \vee Q$ one can conclude $R$, then from $P$ one can conclude $R$.
It is saying that. Think of it this way: we have $P \vdash P \vee Q$ ($P$ is stronger than "$P$ or $Q$"). If also $P \vee Q \vdash R$, then by transitivity of $\vdash$ or the cut rule, we may deduce $P \vdash R$.
Fascinating. I’ll get back to you on that. I also wanted to respond to John Baez for a moment:
John Baez said:
Can you prove that every element of the free boolean algebra on $n$ elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.
I’ll also be warming up my thoughts on this exercise as well, now.
My first thought was a general proof that for any set of $n$-ary operations, if one imposes an arbitrary ordering on them, it determines a unique form for every element of the free algebra under them (or could we say their "algebraic closure"?) I would want to exclude anything here to do with infinity in an appropriate way, and focus on finite cases (finite sets of elements, operations of finite arity, etc.) Maybe this is not true. I also wonder if it depends which 'equational' axioms we add, or if that does not affect this.
My thought was that if I could show that for a collection of operations, there exists at least one consistent way to write all the elements of the free algebra under them (which I think could be equivalent to an ordering on the operations), then it induces a partition on the elements of the free algebra. And if you can do this as a preliminary step before pursuing any questions such as counting the number of logically unique elements in this algebra, it makes the question much easier.
But you suggest a proof by induction, which sounds simpler and like a very good exercise for me.
Not asking for the answers, just digesting. Thanks!
Julius Hamilton said:
Fascinating. I’ll get back to you on that. I also wanted to respond to John Baez for a moment:
John Baez said:
Can you prove that every element of the free boolean algebra on $n$ elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.
I’ll also be warming up my thoughts on this exercise as well, now.
My first thought was a general proof that for any set of $n$-ary operations, if one imposes an arbitrary ordering on them, it determines a unique form for every element of the free algebra under them (or could we say their "algebraic closure"?)
The problem in getting a normal form is the equational laws between operations, e.g. $x \wedge \neg x$ and $0$ (false) and $y \wedge \neg y$ and so on are all equal. You could pick the first one in alphabetical order (given an ordering of the symbols) as your "normal form" - maybe that's what you meant. But this is not a practical system, because then to determine the normal form of some element you need to list all elements alphabetically, for each one determine whether it's equal to the one you're interested in - which is, in itself, not an easy thing to do - and then choose the first one that does.
A practical normal form takes advantage of the equational laws to come up with an algorithm to reduce an arbitrary element of the free algebra to its normal form. I briefly sketched how we use the distributive law relating $\times$ and $+$ to reduce any polynomial to its normal form. We can do something quite similar to reduce any element of the free boolean algebra on $n$ elements to its disjunctive normal form. That's why I asked you if $\wedge$ distributes over $\vee$ or vice versa. You also need to think about the laws relating these operations to $\neg$.
Maybe it'll help if I do an easy example of reducing something to disjunctive normal form. Let's start with
$$\neg(\neg a \wedge \neg(b \wedge c)).$$
Then using the rules of a boolean algebra we get
$$\neg(\neg a \wedge \neg(b \wedge c)) = \neg\neg a \vee \neg\neg(b \wedge c) = a \vee (b \wedge c)$$
and we're done! It's not always this easy.
I can see an algorithm of this form.
It can check the type of an element; and it can also construct elements of a type (so maybe Haskell is somewhat better for this than Python). The types are atomic, negation, conjunction, and disjunction. Also, an element "contains" elements. For example, "$\neg a$" is of type negation, and contains element "$a$". The element "$a \vee b$" is of type "disjunction", and contains elements $a$ and $b$.
The reduce-algorithm has to check for a condition between the type of an element and the type of its element(s). My hope is that a Boolean algebra can be normalized with only a conditional check of "depth 1" (one below the top level, call it depth 2 if you like for 2 levels).
# formulas: atoms are strings, constants are 0 and 1, and compound
# formulas are tagged tuples ("not", x), ("and", x, y), ("or", x, y)
def kind(a):
    return a[0] if isinstance(a, tuple) else "atomic"

def reduce_formula(a):
    if kind(a) == "not":
        b = a[1]
        # double negation elimination
        if kind(b) == "not":
            return reduce_formula(b[1])
        # de morgan's law for disjunction
        if kind(b) == "or":
            return reduce_formula(("and", ("not", b[1]), ("not", b[2])))
        # de morgan's law for conjunction
        if kind(b) == "and":
            return reduce_formula(("or", ("not", b[1]), ("not", b[2])))
        return a  # negation of an atom is already normal
    if kind(a) == "and":
        # deal with the arguments of the conjunction first
        p, q = reduce_formula(a[1]), reduce_formula(a[2])
        # idempotence
        if p == q:
            return p
        # left and right identity
        if p == 1:
            return q
        if q == 1:
            return p
        # left and right complement
        if p == ("not", q) or q == ("not", p):
            return 0
        # distribution of conjunction over disjunction
        if kind(q) == "or":
            return ("or", reduce_formula(("and", p, q[1])),
                          reduce_formula(("and", p, q[2])))
        if kind(p) == "or":
            return ("or", reduce_formula(("and", p[1], q)),
                          reduce_formula(("and", p[2], q)))
        return ("and", p, q)
and so on. One can imagine nearly the same rules for disjunction, except we will not be using distributivity, since we do not want to distribute "or" over "and". lastly, the program can check if an element is atomic, and return that element:
    if kind(a) == "atomic":
        return a
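for example, with the tagged-tuple representation above, reducing not(not a or not b) would go like:

formula = ("not", ("or", ("not", "a"), ("not", "b")))
print(reduce_formula(formula))   # ('and', 'a', 'b')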
this is just a starter example. the key appears to be that I think all of the rules are fortunately about relationships between at most 2 successive applications of operations. so, not (a and b) has an associated transformation, not (not (a)) has an associated transformation, a and (b or c) has an associated transformation - we never have to check for a certain pattern of 3 operations (I think).
I think one way we can prove that this works is as follows. For any of the above rules, we might worry: what if a is of type conjunction, and the algorithm calls reduce on both of its arguments, but what it returns requires further simplification? My hope is that the answer is actually that the algorithm always returns a normal form, and the key thing about a normal form is that any of the constructions in the algorithm applied to normal forms yield a normal form. For example, if "a" is the negation of a negation of a normal form, then removing the double negation returns a normal form. If a is a negation of a conjunction of two normal forms, returning the disjunction of the negations of both is a normal form. This is only true on the precondition that both of the arguments of the original conjunction are already normal. So we can probably "type check" the algorithm to prove that if it receives a construction that is two operations applied to normal forms, it is guaranteed to return a normal form. And maybe this can be reflected algebraically - try to convert the free boolean algebra into the "algebra of normals", or something.
Some details will need to be worked out to make the algorithm actually work, but I'm quite excited to work on that.
I'll clean this up and make it more presentable. My intention was to explore this question in a much more mathematical way, but first, I did decide to explore presenting normalization as an actual algorithm. Thanks very much!
John Baez said:
Maybe it'll help if I do an easy example of reducing something to disjunctive normal form. Let's start with
$$\neg(\neg a \wedge \neg(b \wedge c)).$$
Then using the rules of a boolean algebra we get
$$\neg(\neg a \wedge \neg(b \wedge c)) = \neg\neg a \vee \neg\neg(b \wedge c) = a \vee (b \wedge c)$$
and we're done!
Actually we're not done, since for the free boolean algebra on three elements, disjunctive normal forms are disjunctions of conjunctions of the three elements or their negations.
So, I need to use
$$P = P \wedge (Q \vee \neg Q)$$
and
$$P \wedge (Q \vee R) = (P \wedge Q) \vee (P \wedge R)$$
to expand $a \vee (b \wedge c)$
to its disjunctive normal form:
$$(a \wedge b \wedge c) \vee (a \wedge b \wedge \neg c) \vee (a \wedge \neg b \wedge c) \vee (a \wedge \neg b \wedge \neg c) \vee (\neg a \wedge b \wedge c).$$
Julius Hamilton said:
Some details will need to be worked out to make the algorithm actually work, but I'm quite excited to work on that.
Ill clean this up and make it more presentable. My intention was to explore this question in a much more mathematical way, but first, I did decide to explore presenting normalization as an actual algorithm. Thanks very much!
Sure! It should be a fun project. There is a mathematical subject of "rewriting systems" for reducing expressions to normal forms, which one could in theory apply to this project. However, I'm not exactly sure how it applies to this particular example, and sometimes it's good to just dive in and do stuff and ignore the 3000-year weight of already explored mathematics. If you want to see a bit of the theory, there's this:
and the connected article "Normal form". It's definitely fun stuff.
depending on what exactly is meant by a 'rewriting system' (does it bake in commutativity, etc), there may not exist one for boolean algebras: https://link.springer.com/chapter/10.1007/3-540-53904-2_102 . This manifests for us in CQL directly, where certain axiomatizations of boolean algebras are 'too symmetric' for our theorem prover to handle, so we use boolean rings instead, which are easier for re-writing techniques to handle.
One thing about 'disjunctive normal form' is that due to the commutativity of $\wedge$ and $\vee$ it's not really unique, e.g. I might say $a \wedge b$ but you might say $b \wedge a$. This feels like a petty issue, but what (little) I know about "terminating confluent rewriting systems" is that they lead to a unique normal form, not unique up to some rather boring symmetries. I suspect this just means I haven't read enough, since it seems like an issue that would show up in many other examples.
Is that reference you pointed me to complaining about these symmetries? I can only see the first page, but sounds like that's what they're complaining about, since they blame "commutativity axioms" for the lack of a terminating set of rewrite rules.
It seems someone should have defined termination and confluence up to a collection of reversible rewrites, meaning that 1) if you run your rewriting long enough, all you can do is a certain specified collection of reversible rewrites, and 2) the result is then unique up to those reversible rewrites.
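A cheap sketch of "unique up to those boring symmetries", at least for commutativity and idempotence: represent a DNF as a set of clauses, each clause a set of signed literals, so the data structure itself quotients out the order:

def canonical_dnf(clauses):
    # a clause is a list of (variable, polarity) pairs
    return frozenset(frozenset(clause) for clause in clauses)

mine  = [[("x", True), ("y", False)], [("x", False), ("y", True)]]
yours = [[("y", False), ("x", True)], [("y", True), ("x", False)]]
print(canonical_dnf(mine) == canonical_dnf(yours))  # True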
Quoting from back here:
This seems like a really fun and perfect challenge for me.
Good! Here are some exercises to get you started, roughly in order of increasing size/difficulty: derive $P \wedge P \equiv P$ (idempotence of $\wedge$), $P \wedge Q \equiv Q \wedge P$ (commutativity), $(P \wedge Q) \wedge R \equiv P \wedge (Q \wedge R)$ (associativity), and $P \wedge \top \equiv P$ ($\top$ is a unit).
Once you have these, then you have their dual statements as well. This is because the dual of an axiom is also an axiom, and the dual of an application of a rule of inference is an application of another rule of inference. Therefore you can automatically dualize any derivation in this deductive system.
But perhaps I'm getting ahead of myself: first we should define the dual of a statement. Informally, the dual of an expression is obtained by swapping all instances of $\wedge$ with $\vee$, swapping all instances of $\top$ with $\bot$, and leaving all instances of $\neg$ alone. Letting $P^{op}$ denote the dual of $P$, the dual of an assertion $P \vdash Q$ is defined to be $Q^{op} \vdash P^{op}$. The dual of an assertion $P \equiv Q$ is $P^{op} \equiv Q^{op}$. Now I can leave it to you to check that the preceding paragraph is indeed correct, if you'll allow me please to make one small amendment to the rules I gave for $\neg$,
from $\top \vdash P \vee Q$, conclude $\neg P \vdash Q$, and conversely,
and to clarify that for those rules with two premises (cut rule, and the first rules given for $\wedge$ and $\vee$), the order of those premises doesn't matter. Read all the rules as "given premise(s) blah, one may conclude di-blah". Read axioms as "from nothing, conclude blah". All derivations in the system start from nothing and eventually conclude something.
Anyway, what I was trying to say before is that by the principle of duality, we have (automatically) that $\vee$ is also idempotent, commutative, associative, and has $\bot$ for its unit.
re: @John Baez that is correct, non-uniqueness w.r.t commutativity or other rules (associativity) is a huge technical problem for re-writing systems, still studied today, and your suggested approach is sometimes taken. it isn't a panacea though; the linked paper shows it doesn't work for (a particular presentation of) boolean algebra (that's what "modulo AC" means). here's a paper formalizing the idea of 'rewriting modulo equations' https://core.ac.uk/download/pdf/82774966.pdf that describes some of the issues touched on here
Thanks! I knew "AC" couldn't mean "axiom of choice" here, but I didn't realize it meant "associativity and commutativity". I hate acronyms.
(I think I see the problem with associativity. At the end of your rewriting you can use associativity to arbitrarily push all the parentheses to the left, and say that's your desired form. There's nothing like this for commutativity. But unfortunately, at earlier stages of your rewriting you typically need to push some parentheses to the right sometimes, so you can't only include the rule $x \wedge (y \wedge z) \to (x \wedge y) \wedge z$.)
All of the above stuff is awesome. I am under a little bit of time constraints and stress right now regarding some other things, so I am going to come back to all of the above within the next few days. I really appreciate everyone's input. Thanks very much.
I’m a bit tired and I’ll just try to summarize some points from the above that will help me, maybe tomorrow, to figure out what to work on next.
The axioms of a Boolean algebra, or another formula-formation rule set, do not generate a consistent logical theory.
How so? Do you think you can prove $\top = \bot$ from the axioms of a Boolean algebra?
(My point is that inconsistencies as such are assertions, not expressions.)
So if I tried to prove $\top = \bot$ from the axioms of a Boolean algebra, I guess I would interpret the axioms like deduction rules. And now it seems unlikely that rules like double negation elimination or de Morgan's law could achieve this. Now I see the axioms as actually corresponding to logically valid inferences, like the idempotence of "and". So I wonder, was the only reason I thought it created an inconsistent theory because I was thinking about how when we freely generate terms, we form the negation of every variable? I'm thinking maybe your point was that it doesn't make sense to say a Boolean algebra "forms an inconsistent theory" because there are no deduction rules regarding what is "provable"?
Here is a simple deductive system for Boolean algebra, suitable for classroom presentation and discussion. $P, Q, R$, etc. denote well-formed propositional expressions, constructed as usual from variables, constants $\top, \bot$, and the operations $\wedge, \vee, \neg$.
- Identity axioms: $P \vdash P$.
Just curious if $\vdash$ is a common choice of symbol for "deduction", "consequence" or "proves"?
- Cut rule: from $P \vdash Q$ and $Q \vdash R$, conclude $P \vdash R$.
Now the cut rule I think I should learn more about. I'm curious how deductive systems that do or do not have it differ. The cut rule seems to me to just be "associative composition of proofs". However, I have lately been thinking about the difference between a "provability" relation vs. a "direct deduction" relation. If from $A$ we can deduce $B$ by a direct application of one of the deduction rules, and from $B$, $C$, it doesn't follow that from $A$ we can deduce $C$ by a single application of a deduction rule. Presumably, we would need to apply a deduction rule twice. So, I have been thinking about this in wanting to create categories of propositions with Hom-sets for proofs. Perhaps each morphism can represent a sequence of deduction rules applied, for example.
I don't think I've seen a rule like this in a deductive system before. Perhaps it always follows, in a consistent theory? $\top$ and $\bot$ are strange concepts to me; I feel like they don't map well to any natural language concept.
What I still haven’t worked out in my thinking about Boolean algebras is that initially, we say the variables represent formulae, which can have a value of true or false. From that perspective, it seems like it doesn’t make sense to include top and bottom in the set of elements; they aren’t formulae, and top can never evaluate to false. On the other hand, I’ve been thinking about how the elements can actually be seen as Boolean operators, rather than formulae. If this is correct, top and bottom make sense; they are Boolean operations. But then I have to rethink how to interpret the variables. If we say they represent Boolean operations, then the question is how we got from interpreting them as formulae to interpreting them as Boolean functions.
- Axioms for $\top$ and $\bot$: $P \vdash \top$; $\bot \vdash P$.
- Rules for $\wedge$: from $P \vdash Q$ and $P \vdash R$, conclude $P \vdash Q \wedge R$, and conversely.
These make sense.
- Rules for $\vee$: from $P \vdash R$ and $Q \vdash R$, conclude $P \vee Q \vdash R$, and conversely.
Yeah, now it makes sense to me. Earlier I was confused. But maybe there is a valid ambiguity in natural language here, at least? If I say “A or B proves C”, it might sound like an exclusive or, I guess.
- Rules for $\neg$: from $P \wedge Q \vdash \bot$, conclude $P \vdash \neg Q$, and conversely.
This is cool. It seems like a way to rewrite material implication.
Convention: we write $P \equiv Q$ if $P \vdash Q$ and $Q \vdash P$ are derivable in this deductive system.
It's interesting and moderately challenging (but I think fun!) to derive the usual equations of Boolean algebra as valid inferences based on these axioms and rules.
I'll have to do this tomorrow. For now I'm summarizing: we have a deductive system with a rule for assumption, cut (or, "composition"), "true and false", and then introduction and elimination rules for 'and', 'or', and 'not'. 'And' is the most obvious: from $a \wedge b$, you can derive $a$, and you can derive $b$. If "$A$ or $B$" proves $C$, you can derive that $A$ proves $C$ and that $B$ proves $C$. This makes me wonder why these rules have to be expressed in the form "something proves something", rather than just, say, "something is true". We could just say, "$(P \wedge Q) \to P$".
(Hard challenge: derive the distributive laws.)
Noted.
But I am confident that you will never be able to derive an inconsistency $\top \vdash \bot$.
So all in all, I think you are trying to show me how, viewed as deduction rules, the axioms of a Boolean algebra are not actually a “free for all”. In which case, even if the theory appears “inconsistent” due to all formulas coexisting with their negation, there is still a certain kind of logical ordering on the formulae, which I think you refer to as their “strength”.
$\top$ and $\bot$ are strange concepts to me; I feel like they don't map well to any natural language concept.
They're called 'true' and 'false'.
- Axioms for $\top$: $P \vdash \top$.
Mathematicians read this as "anything implies true" or, a bit less tersely, "any statement implies a true statement".
There is a dual axiom, "false implies anything" or "a false statement implies anything". This axiom is rejected constructively since it's related to proof by contradiction. But it's valid in a boolean algebra.
John Baez said:
There is a dual axiom, "false implies anything" or "a false statement implies anything". This axiom is rejected constructively since it's related to proof by contradiction. But it's valid in a boolean algebra.
No. This axiom does hold constructively and has nothing to do with proof by contradiction.
Whoops. I'm sorry, Julius, the problem with deducing "p and not p implies anything" in constructive logic is not the step "false implies anything" but rather "p and not p implies false". But never mind, we're talking about boolean algebras and I should not have digressed into a separate topic I don't really understand.
John Baez said:
Whoops. I'm sorry, Julius, the problem with deducing "p and not p implies anything" in constructive logic is not the step "false implies anything" but rather "p and not p implies false". But never mind, we're talking about boolean algebras and I should not have digressed into a separate topic I don't really understand.
This is also incorrect, $p \wedge \neg p \vdash \bot$ is also true constructively.
Obtaining a contradiction and using this to prove anything is not what mathematicians do when they perform ‘proof by contradiction’. ‘Proof by contradiction’ proceeds by assuming the negation of a statement, deriving falsity and concluding the validity of that statement. In other words, ‘proof by contradiction’ has the following type: $(\neg A \to \bot) \to A$.
Recalling that, in constructive logic, we define $\neg A := A \to \bot$, this reduces to: $\neg\neg A \to A$.
This is better known as double negation elimination.
Okay, it's a wonder I can prove theorems. I really should have stuck to talking about boolean algebras.
I'm not quite sure why I made two such stupid mistakes in a row. I've studied logic so I should know better. I could say it's because I'm under stress, but that doesn't feel like the right explanation. I definitely find myself completely forgetting things more and more often as I age, and it could be that I'll start making more and more completely stupid math mistakes and this is just a first sign of that decline. Or maybe it's a random blip. Oh well.
I think many people who haven't fully internalised constructive logic could have made such an error, so I wouldn't beat yourself up about it!
Thanks.
So, some baby steps.
My goal will be to deduce the axioms of Boolean algebra using Todd’s deductive system.
I will first explore any basic deductions that can be made, using these rules: (1) Identity: $P \vdash P$. (2) Cut: from $P \vdash Q$ and $Q \vdash R$, conclude $P \vdash R$. (3) $P \vdash \top$. (4) $\bot \vdash P$.
My first thought is to identify what terms can be generated, at all. I’ll go through each rule one by one.
I find myself wondering if I can substitute one of the "sequents" into a variable $P$? No, I don't think so.
Let's say we have one variable / formula $P$. Then by (1), we have $P \vdash P$. By (3) and (4), we have $P \vdash \top$ and $\bot \vdash P$. Using (1) again, we have $\top \vdash \top$ and $\bot \vdash \bot$.
The way is paved for me - in my next message I will derive the next round of deductions.
Question 1: What would organically motivate one to set up a proof system where instead of saying "$P$ is true", we use a "double-proof" system where we say things of the form "If $P$, then you can deduce $Q$"?
Question 2: Why should we include the axioms $P \vdash \top$ and $\bot \vdash P$? Are they intuitively justifiable, or do we notice they are mathematically necessary? What deduction systems allow us to remove these rules?
Question 3: How can we model this deduction system as an algebraic structure?
Answer to Question 3: the free Boolean algebra on a set of variables $V$ is the set of $\equiv$-equivalence classes of propositional formulas generated by $V$, where those formulas are pre-ordered by the smallest relation $\vdash$ that validates the axioms and rules of inference.
Answer to Question 2: they just express the defining universal properties of the elements top ("truth") and bottom ("falsity") in a poset. The usual notion of Boolean algebra includes a top and bottom, and so in that sense these are necessary. If you think of $P \vdash Q$ as saying that $Q$ is true under all circumstances where $P$ is true, then of course $P \vdash \top$ tautologically, and $\bot \vdash P$ vacuously (because there are no circumstances where $\bot$ is true, this statement can never be falsified).
You can remove any rule you like, and you'll get a weaker or less expressive system. Some people might want to study Boolean algebras either without a top or without a bottom, but here in this thread we're not.
Likewise, all the rules of inference express the defining universal properties of the operations $\wedge, \vee, \neg$ for Boolean algebras.
Answer to Question 1: Deductive systems specify how to validly construct proofs. How can we say when a proof is valid without giving rules for that? A proof is a sequence of individual entailments (which I am writing as $P \vdash Q$) where each entailment in the sequence is a logical consequence of ones coming previously. All proofs are like that. Deductive systems specify the meaning of "is a logical consequence of".
In addition to Todd's answer to Question 1: there is a way of encoding proofs which is called natural deduction. In natural deduction you start with hypotheses at the top and then you deduce a concluding formula by using inference rules. There isn't a symbol $\vdash$, only the horizontal bars. You work with a tree of formulas, not of sequents. I don't know much about natural deduction but this is something well-known and apparently important.
I think the other style of writing proofs, which is more common today and is named sequent calculus, has been introduced in order to obtain a theorem named cut elimination. But I'm not sure about the history.
By the way, Todd's system is a kind of mix of these two things: he uses the inference rules of natural deduction, but encoded in a sequent calculus way. I think a lot of people do that also.
Right, the deductive system I gave is a little bit of a hybrid system. It's not designed with things like cut elimination in mind. What it does have in mind, pedagogically, is the characterization of the connectives in terms of their universal properties, as I already said, and as long emphasized by Lawvere.
I just felt like mentioning:
"Enderton", "Mendelson": Both 3 syllables; first and last syllables rhyme, middle syllables differ by one letter.
"A Mathematical Introduction to Logic", "Introduction to Mathematical Logic" are nearly permutations of the same words.
:face_with_raised_eyebrow:
And don't forget "A Logical Introduction to Mathematics" by Endelson.
Haha, very nice!!
Julius Hamilton said:
I think that the elements of the algebra have an ordering - maybe a partial order - and maybe a Boolean algebra is, or is close to, a lattice of formulae.
Yes, in fact the nLab defines a [[Boolean algebra]] to be a lattice with certain properties.
I want to understand what this ordering “means”.
First of all, what is the ordering? There are various equivalent ways to define it. One is this:
We say $x \leq y$ if and only if $x \wedge y = x$.
Another is this:
We say $x \leq y$ if and only if $x \vee y = y$.
It's a good exercise to start with whatever axioms for a Boolean algebra you know, and show that these two definitions are equivalent, and then show that with either one of them a Boolean algebra becomes a poset (and in fact a [[lattice]]) with this definition.
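For instance, one direction of that equivalence is a one-liner using the absorption laws: if $x \wedge y = x$, then $x \vee y = (x \wedge y) \vee y = y$; and conversely, if $x \vee y = y$, then $x \wedge y = x \wedge (x \vee y) = x$.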
But as for what $x \leq y$ means, it means that $x$ implies $y$.
So, it's fundamental to the uses of Boolean algebras in propositional logic! When we have a Boolean algebra whose elements we call "propositions", we not only know how to do the "and" ($\wedge$), "or" ($\vee$) and "not" ($\neg$) of propositions, we also can tell when one proposition implies another.
John Baez showed me (I believe) that to my surprise, the free Boolean algebra on $n$ generators is actually a finite set.
By the way, you might still want to prove that the free Boolean algebra on $n$ generators has $2^{2^n}$ elements. You made a lot of progress in that direction, but you didn't show us a proof. This is mainly worthwhile in case you want to practice proving things, since the result is well-known.
One way to prove it is to show that each element in the free Boolean algebra on $n$ generators is equal to a unique element in disjunctive normal form (where we don't distinguish disjunctive normal forms that differ only by certain permutation symmetries). We talked about that but didn't get to the bottom of it.
Another thing we never got to is conjunctive normal form.
I kept trying to get you to tell me that $\wedge$ distributes over $\vee$ and also $\vee$ distributes over $\wedge$. You never came out and said it. So now I've said it. :stuck_out_tongue_wink: But the point is: using these distributive laws, not only do we get a disjunctive normal form for elements of a free Boolean algebra, we also get a conjunctive normal form, where the roles of $\wedge$ and $\vee$ are switched.
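For a tiny example of the switched roles:
$$(a \wedge b) \vee c = (a \vee c) \wedge (b \vee c),$$
a disjunction of conjunctions on the left, a conjunction of disjunctions on the right.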
I'm mentioning all these things because you're talking about what you want to do next with Boolean algebras, and these are the things I was trying to get you to do. I don't actually care much whether you do these things - but I figured I should mention them before I forget.
Sure. Today I’ll get back to the concrete side of things. I need to summarize some things that have been said in this thread. One moment.
I might just write out all the questions that come to mind reading through the above thread.
Can you prove that every element of the free boolean algebra on $n$ elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.
I think the key idea here is to show that any "element of a free Boolean algebra" (which could also be thought of syntactically as words/sentences generated by a certain grammar) not only has a "normal form" (if we defined a "normal form" as a certain property that expressions can have), but that to be a "normal form" has to be at minimum functional, not relational (I think). That means that a mapping (well, technically a homomorphism, I think) from $B_n$ (the free Boolean algebra on $n$ variables) to an algebra of "normal forms" (regardless of which one - so far a "normal form" is just defined as basically a functional transformation / function on the elements of $B_n$) cannot assign 2 forms to the same element $x$, and also that the normal form transformation function is applicable to every $x$. I would like to think about counter examples of this to drive the point home.
It makes me want to analyze "classes of mappings or relations" on $B_n$ in terms of properties.
For example, consider an ordering on properties of binary relations, if possible.
Let's assume there are no other relation symbols but $R$ (even excluding equality for simplicity's sake).
We can choose “enumeration axes”: the number of distinct variables, for example.
By knowing rules for generating logically equivalent formulae from a given formula, I can rewrite such axioms in a form I might prefer. I'd want to see rules for transformations for the axiom $\forall x\, R(x, x)$, to see if any of them strike me as an interesting, alternative way of looking at what that condition is saying.
However, I realize that Boolean algebras are for propositional logic, not predicate logic, so I have to backtrack on the above.
I was trying to enumerate properties of binary relations, so that I could study what a relation on $B_n$ and some other set would be like, depending on the properties of that relation.
For example, if for all $x$, $R(x, x)$, we say the relation is reflexive. And if for all $x$, $\neg R(x, x)$, I think we can say it is "antireflexive".
If a relation is reflexive, it has to be defined on the product of the set with itself: $X \times X$.
I want to make use of this observation later.
If we have 2 variables, I now know that there should be precisely 16 different possible properties of binary relations:
The atomic formulae are generated by the number of choices for each argument of $R$ from the set $\{x, y\}$. That's $2 \times 2 = 4$. The atomic formulae are $R(x,x)$, $R(x,y)$, $R(y,x)$, $R(y,y)$. From these atomic formulae, there are 256 different properties we can create. For example:
For all $x, y$: $R(x, y)$. I wonder if this would be called a "total" relation or something.
For all $x$, there is a $y$ such that $R(x, y)$. This is the familiar concept of left-total, for functions.
For all $x, y$: $(R(x,y) \wedge \neg R(y,x)) \vee (\neg R(x,y) \wedge R(y,x))$ (which is in disjunctive normal form.)
Properties like these are fascinating - it isn’t clear if they have names, but it would be really cool to find a concrete example for every single possible such property.
I now realize that a normal form should be a relation defined on $B_n$, so that makes things simpler for now.
It would take me much longer to explore the behavior of all possible types of binary relations on $B_n$, so for now I can come back to what I originally said: I believe a "normal form" has to be functional, but it should also be a homomorphism. I am pretty sure that just means it has to preserve identities and it has to compose associatively. I'll see if I can write this out as a formula.
A normal form is a function $N : B_n \to B_n$ such that $N(1) = 1$, $N(0) = 0$, and $N(x \wedge y) = N(x) \wedge N(y)$ for all $x, y$. (?)
Just a guess.
I'm now thinking that actually, what we call the "free Boolean algebra" could be thought of as the free algebra from a signature $(0, 1, \neg, \wedge, \vee)$, without the axioms for a Boolean algebra like $x \wedge x = x$.
When we add in any axioms for this free algebra - statements of equivalence - this is always a "quotient" of it: it collapses some elements into the same element, even if as trivially as identifying $x \wedge x$ with $x$.
Therefore, one great way to understand the structure of Boolean algebras better could be to study the properties of adding just one axiom at a time, to the free algebra.
Can you prove that every element of the free boolean algebra on $n$ elements can be put into disjunctive normal form? One proof is an induction on formulas, where you start with your basic elements and then build more complicated formulas using "and", "or" and "not", and show that you can always move the "ors" to the outside, etc.
Anyway, I'll try an induction on formulae. So assume we have any number of formulae, $\varphi_1, \ldots, \varphi_n$.
I feel like to even define DNF mathematically, I need to order the symbols, to express constraints that $\vee$ must be "above"/"before" $\wedge$, and $\wedge$ above/before $\neg$. It sounds like a partial order structure to me. I am pretty sure a lattice is a poset where every pair of elements has a supremum and an infimum, and thereby we can have join- or meet-semilattices (has all supremums or has all infimums). (Side-thought: since posets are categories, does that make joins and meets… products and coproducts?)
I'm trying to show "we can always move the 'or's outwards".
Maybe this is similar to the Python program I wrote, actually.
All I have to show is that $\vee$ commutes/distributes with any symbol containing it.
That’s already one of the axioms, conventionally - $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$.
I’m not sure if this is the proof you had in mind, @John Baez, but maybe it would look something like this:
Define “disjunctive normal form” as a partial ordering on symbols $\neg$, $\wedge$, and $\vee$: $\neg \leq \wedge \leq \vee$. I haven’t figured out how to define this yet, though. Maybe with the lattice stuff I said above… using suprema and infima in some way.
Maybe if we define these operations as functions (which they are), like $\wedge : B \times B \to B$, $\neg : B \to B$. Then maybe just state the ordering in terms of the functions, specifically: which functions are allowed to take the outputs of which other functions as arguments. (In this case, we don’t worry about expressions like $\neg p \wedge q$, since it respects the ordering.)
And then, I’m thinking about whether it’s enough to show axioms like distributivity, or not. Maybe it could be simple:
I’m not sure if this comes close to a full proof, but I think I showed that no matter what symbol is enclosed by $\vee(\dots)$, there is a logically valid way to transform the expression so that $\vee$ ends up outermost; and if $\wedge$ or $\vee$ is enclosed by $\neg(\dots)$, there is a logically valid way (De Morgan) to transform it so that $\neg$ moves innermost. And we aren’t worried about the other cases.
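Since a Python program came up earlier in the thread, here is a minimal sketch (my own encoding, not that program) of exactly this rewriting: push $\neg$ inward with De Morgan, then distribute $\wedge$ over $\vee$ until every $\vee$ is outermost.

```python
# Formulas are variables (strings) or tuples ('not', f), ('and', f, g),
# ('or', f, g). nnf pushes negations inward via De Morgan; dnf distributes
# 'and' over 'or' until every 'or' is outermost.

def nnf(f, neg=False):
    if isinstance(f, str):
        return ("not", f) if neg else f
    if f[0] == "not":
        return nnf(f[1], not neg)
    op = ("or" if f[0] == "and" else "and") if neg else f[0]   # De Morgan swap
    return (op, nnf(f[1], neg), nnf(f[2], neg))

def is_or(f):
    return not isinstance(f, str) and f[0] == "or"

def dnf(f):
    if isinstance(f, str) or f[0] == "not":
        return f                       # literals are already in normal form
    a, b = dnf(f[1]), dnf(f[2])
    if f[0] == "or":
        return ("or", a, b)
    if is_or(a):                       # (x or y) and b ~> (x and b) or (y and b)
        return ("or", dnf(("and", a[1], b)), dnf(("and", a[2], b)))
    if is_or(b):                       # a and (x or y) ~> (a and x) or (a and y)
        return ("or", dnf(("and", a, b[1])), dnf(("and", a, b[2])))
    return ("and", a, b)

print(dnf(nnf(("not", ("or", "p", ("and", "q", ("not", "r")))))))
# ('or', ('and', ('not', 'p'), ('not', 'q')), ('and', ('not', 'p'), 'r'))
```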
Todd Trimble said:
If you think of $p \vdash q$ as saying that $q$ is true under all circumstances where $p$ is true, then of course $p \vdash \top$ tautologically, and $\bot \vdash q$ vacuously (because there are no circumstances where $\bot$ is true, this statement can never be falsified).
I understand this better now. It does make sense to say “anything follows from a contradiction” if you think about it exactly in the way I often find “trivial cases” of various mathematical structures to fulfill defining conditions, because the conditions “don’t even apply”: the empty set satisfies the universally quantified group axioms vacuously, because we cannot evaluate them on any elements, so we default to true since we never computed a “false”. In this way, since material implication $p \to q$ can mainly be seen as checking for the one falsifying condition “$p$ true and $q$ false”, then when $p$ is false, we get true. And since $\bot$ is always false, it follows that “if birds are flies then I am a boy” actually evaluates as true… in one way of looking at it, the succedent is totally irrelevant since the antecedent never gets “activated”.
Glad this makes sense. But right now there are too many comments in this thread to respond to individually. Many of the proposed exercises, far less ambitious in scope than "enumerate all theories" (from the exponential objects thread), haven't been taken up. Like understanding the number of elements in the free Boolean algebra on $n$ elements.
David Egolf has been setting a fine example by working through, in detail, lots of the nitty-gritties of some precisely formulated problems in sheaf theory. Working through such things is invaluable. So: is there anything insufficiently precise that prevents tackling some of the exercises proposed by Baez? At some point, you have to get your hands dirty, rather than asking yet another question, or speculating about what you might want to do in the future.
I genuinely want to prove that (and do all the other exercises); I also just find it genuinely challenging. I did try to engage with Baez's exercise in the above.
I don't know if it's my "thinking style" or something, but I feel like when I'm at an impasse, I do best by asking a bunch of surrounding, associated questions. That helps me find new avenues of exploration, which allows me to come back to the original question with a new perspective. Honestly, I feel like I gained immensely by thinking through some of the things I talked about above. It brings new ideas to the forefront of my mind, which I can then bring further in my next stab at the problem.
This is where I currently am, regarding the proof of the number of elements in a free Boolean algebra:
I am not completely certain about the use of the word "free". Question to self: what would a non-free Boolean algebra be like?
I believe I am able to show that if I put all the elements in the free Boolean algebra in "a normal form" (conjunctive or disjunctive), there is a counting argument to show that there are $2^{2^n}$ equivalence classes of elements. But I would prefer to look at this counting argument in a different way, than I did previously.
I think the part that is missing from my proof is proving that every element in the free Boolean algebra can be put in a normal form (disjunctive or conjunctive). I did actually try to do that in my last message, and I did ask if it was correct or not.
I also prefer to try to write proofs in a formal way, say with natural deduction, because sometimes I feel like I am not even aware when my own argument counts as a "proof" or not.
I am genuinely trying to engage with you and John's input in the best way possible. But I feel like the best way to think through alternative ways of looking at the above is by formulating a bunch of questions, because it gives me new ideas, but I have to verbalize them first to clarify them to myself.
If you wish to help me complete this proof, please let me know if you have any advice, since asking sub-questions is my current way of trying to fill in missing information to help me make a proof or argument that satisfies me.
So here is some work on the above:
For #2, the argument I previously provided was that if all of the elements are in a normal form, we can count the number of unique elements like this:
For each of $n$ variables (= atomic formulae), there are 2 elements (regarding negation) in the Boolean algebra: for $p$, we have $p$ and $\neg p$, for example. And we know in disjunctive normal form, for example, the negation symbol is the "innermost" operation, in a kind of "order of operations". (I also asked some questions about this in the above: what is a good way to mathematically define the "order of operations" for a normal form?) So, it's like we first know we can make $2n$ elements in the free algebra: $p_1, \neg p_1, \dots, p_n, \neg p_n$.
The next operation is conjunction, which has 2 arguments as a binary operation. I think we might think that we can form $(2n)^2$ terms by applying a binary operation to $2n$ elements, but due to properties like commutativity, idempotence, identity, and complement, this will not be the case. I would like to show how we can draw the following conclusion, for now I just state it: if we remove equivalent expressions, we end up with expressions of the form $\ell_1 \wedge \ell_2 \wedge \dots \wedge \ell_n$, where each $\ell_i$ is $p_i$ or $\neg p_i$. Those strings we can count: for each of $n$ variables, we have 2 choices (positive or negative). So we get $2^n$ possible formulae we can generate from $2n$ elements with the $\neg$ and $\wedge$ operations.
For disjunction, we observe that all of the previously generated elements are logically unique. I feel like I should also prove this or explore this more. But this is what allows you to make the simple argument that we are just choosing to include or exclude such terms, connected by the $\vee$ symbol, which gives us 2 choices for each of the $2^n$ elements, which is $2^{2^n}$.
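Here is a quick brute-force check of this counting argument for small $n$ (a sketch of my own; "minterms" is my name for the conjunctions $\ell_1 \wedge \dots \wedge \ell_n$):

```python
from itertools import product

def check(n):
    rows = list(product([False, True], repeat=n))
    # truth table of the conjunction that is true exactly on the row `sign`
    minterms = [tuple(r == sign for r in rows)
                for sign in product([False, True], repeat=n)]
    assert len(set(minterms)) == 2 ** n          # pairwise inequivalent
    tables = set()
    for bits in product([False, True], repeat=len(minterms)):
        chosen = [m for m, keep in zip(minterms, bits) if keep]
        if chosen:
            tables.add(tuple(any(col) for col in zip(*chosen)))  # pointwise "or"
        else:
            tables.add((False,) * len(rows))     # the empty disjunction
    assert len(tables) == 2 ** (2 ** n)          # all disjunctions inequivalent

for n in (1, 2, 3):
    check(n)
print("verified for n = 1, 2, 3")
```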
So, what do you think? What am I missing in order to complete this proof? Thank you very much.
I'm trying my best to get better at math.
But I will also write the other things you suggested I prove here, to collect them:
Julius Hamilton said:
I don't know if it's my "thinking style" or something, but I feel like when I'm at an impasse, I do best by asking a bunch of surrounding, associated questions.
Sometimes that's best - but sometimes it's best to drill straight in, assembling an argument that consists of many lemmas, then proving the lemmas one by one, perhaps using sub-lemmas, and changing the overall strategy minimally each time a lemma turns out to be false.
At the very least it's good to practice this strategy of direct attack, especially if you're naturally inclined to 'lateral thinking'. (If all you could do was the direct attack, I might suggest more lateral thinking.)
Honestly, I feel like I gained immensely by thinking through some of the things I talked about above. It brings new ideas to the forefront of my mind, which I can then bring further in my next stab at the problem.
That makes sense, but listening to someone ask a bunch of surrounding associated questions when the direct approach works just fine can be rather demotivating. That's why I wasn't planning to reply to your last long post.
This is where I currently am, regarding the proof of the number of elements in a free Boolean algebra:
- I am not completely certain about the use of the word "free". Question to self: what would a non-free Boolean algebra be like?
Oh, for some reason I thought you knew what a 'free' Boolean algebra was. I gave a rough-and-ready intuitive definition of them, and you seemed to be doing just fine with it.
But here's one rigorous definition:
We say a Boolean algebra $F$ is a free boolean algebra on a set $X$ if it is equipped with a function $i : X \to F$ such that for each function $f : X \to B$ for any boolean algebra $B$ there is a unique boolean algebra homomorphism $\bar{f} : F \to B$ with $\bar{f} \circ i = f$.
Seeing why the rigorous definition matches the rough-and-ready definition is a good thing to do sometime. This often takes people a week or two the first time, but the same idea works for Boolean algebras, groups, vector spaces and many other things, so it's worthwhile.
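As a small illustration of this (a sketch of my own, not from the thread): take $B$ to be the two-element Boolean algebra of Python booleans. Then a function $f$ from the generators into $B$ extends to a homomorphism on formulas, namely "evaluate the formula at the assignment $f$" - and there is no other choice.

```python
def evaluate(formula, f):
    """The homomorphic extension of the generator-assignment f."""
    if isinstance(formula, str):      # a generator
        return f[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], f)
    if op == "and":
        return evaluate(args[0], f) and evaluate(args[1], f)
    return evaluate(args[0], f) or evaluate(args[1], f)   # op == "or"

f = {"p": True, "q": False}           # a function from the generators to B
print(evaluate(("or", ("not", "p"), "q"), f))             # False
```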
- I believe I am able to show that if I put all the elements in the free Boolean algebra in "a normal form" (conjunctive or disjunctive), there is a counting argument to show that there are $2^{2^n}$ equivalence classes of elements.
Good! Then you can find a non-free Boolean algebra simply by finding a Boolean algebra with 8 elements.
- I think the part that is missing from my proof is proving that every element in the free Boolean algebra can be put in a normal form (disjunctive or conjunctive). I did actually try to do that in my last message, and I did ask if it was correct or not.
I didn't follow your argument, perhaps because it was in the middle of a long post full of other stuff.
- I also prefer to try to write proofs in a formal way, say with natural deduction, because sometimes I feel like I am not even aware when my own argument counts as a "proof" or not.
If you write proofs in a formal way, I will never read them, nor will most mathematicians. It's important to learn how to communicate with mathematicians, and part of this is learning to write arguments that mathematicians accept as proofs. This is not easy! But it can be done, and every mathematician does it! The basic method is this: we write proofs, get other mathematicians to read them, see if they are convinced, and if not find out why not.
I am genuinely trying to engage with you and John's input in the best way possible. But I feel like the best way to think through alternative ways of looking at the above is by formulating a bunch of questions, because it gives me new ideas, but I have to verbalize them first to clarify them to myself.
Okay, but I don't have the energy to follow all your questions, especially those that roam wildly around the mathematical landscape.
Okay, on to the more technical work:
Julius Hamilton said:
For #2, the argument I previously provided was that if all of the elements are in a normal form, we can count the number of unique elements like this:
For each of $n$ variables (= atomic formulae), there are 2 elements (regarding negation) in the Boolean algebra: for $p$, we have $p$ and $\neg p$, for example. And we know in disjunctive normal form, for example, the negation symbol is the "innermost" operation, in a kind of "order of operations".
Good.
(I also asked some questions about this in the above: what is a good way to mathematically define the "order of operations" for a normal form?)
There's a whole theory of that stuff, but I'd rather focus on this one example before broadening out.
So, it's like we first know we can make elements in the free algebra: .
Okay.
The next operation is conjunction, which has 2 arguments as a binary operation. I think we might think that we can form $(2n)^2$ terms by applying a binary operation to $2n$ elements, but due to properties like commutativity, idempotence, identity, and complement, this will not be the case. I would like to show how we can draw the following conclusion, for now I just state it: if we remove equivalent expressions, we end up with expressions of the form $\ell_1 \wedge \ell_2 \wedge \dots \wedge \ell_n$, where each $\ell_i$ is $p_i$ or $\neg p_i$. Those strings we can count: for each of $n$ variables, we have 2 choices (positive or negative). So we get $2^n$ possible formulae we can generate from $2n$ elements with the $\neg$ and $\wedge$ operations.
Yes, you get exactly $2^n$ distinct propositions if you start by taking $n$ propositions and their negations, and then form new ones using $\wedge$ in the way you describe. But it's not true if you also allow things like $p_1 \wedge \neg p_1$.
Your phrase "formulae we can generate from $2n$ elements with the $\neg$ and $\wedge$ operations" is dangerous, since most of us would read that as including things like $p_1 \wedge p_1 \wedge p_2$.
So, you can state your claim here precisely, call it a 'Lemma' and maybe number it, and then sometime come back and start proving it.
Often a proof works this way: you first show that what you want would follow from a collection of lemmas, and make sure that all the lemmas seem correct. Then you try to prove those lemmas. These in turn may require lemmas of their own - or maybe not, maybe they're easy. Sometimes they turn out to be false. Etc.
For disjunction, we observe that all of the previously generated elements are logically unique. I feel like I should also prove this or explore this more. But this is what allows you to make the simple argument that we are just choosing to include or exclude such terms, connected by the $\vee$ symbol, which gives us 2 choices for each of the $2^n$ elements, which is $2^{2^n}$.
Again, this is correct - and again you are correctly noting that it deserves to be proved.
So, what do you think? What am I missing in order to complete this proof? Thank you very much.
For starters you're missing the things you said you were missing:
A proof that given $n$ propositions, there are exactly $2^n$ distinct propositions that can be built from them and their negations using $\wedge$. That means showing that there are no more, but also importantly no fewer (due to unforeseen equations between the $2^n$ you found).
A proof that given the $2^n$ propositions mentioned in part 1, there are exactly $2^{2^n}$ distinct propositions that can be built from them using $\vee$. Again this means showing that there are no more and no fewer. These are called disjunctive normal forms.
But also there's this: an arbitrary proposition built using $\neg, \wedge, \vee$ need not be in disjunctive normal form, so you need to show it's equal to a proposition in disjunctive normal form.
For step 3, you could try to develop an algorithm to reduce any complicated expression to disjunctive normal form. This is a syntactic approach: you're manipulating strings of symbols.
Or you could show that 1) every proposition built from $p_1, \dots, p_n$ using $\neg, \wedge, \vee$ corresponds to some kind of interesting and manageable thing, 2) show there are $2^{2^n}$ of these things, and 3) show that these things correspond in a one-to-one way with disjunctive normal forms. This is a semantic approach: you're thinking about what the symbols can 'mean', or correspond to.
Both the syntactic and semantic approaches work here, and they both give (or maybe I should say require) interesting insights.
People talk about disjunctive normal form and conjunctive normal form, and that's one way of calculating the number of elements in a finitely generated free Boolean algebra. There are others.
Most students are first exposed to Boolean algebras or propositional logic through truth tables, which provides another point of view on free Boolean algebras and the result. For example, they learn how to determine the truth table of a propositional expression like $(p \wedge q) \vee \neg r$, where the rows in the truth table correspond to the different ways of assigning truth values to each of the variables, here $p, q, r$. If there are three variables, as there are here, then there will be eight truth value assignments: you find all ways of putting T/F values under the $p$-, $q$-, $r$-columns: TTT, TTF, TFT, TFF, FTT, FTF, FFT, FFF.
If there are $n$ variables, then there will be $2^n$ rows or truth value assignments, since you have to choose between the two values T, F a total of $n$ times. Then you work through the table to determine, for each row, the corresponding truth value of the target expression. So for the TTT row, you would compute $(T \wedge T) \vee \neg T = T$, and similarly for the other seven rows. Let me call the truth value of the target expression the "output value".
Two propositional expressions in $n$ variables are said to be logically equivalent if they yield the same output truth values for each of the rows. So you can think of a propositional expression as a function, where the inputs are the truth-value assignments of variables, or rows, and where the output is the T or F that the expression evaluates to for that row. For example, if there are three variables, then a truth table is in essence a function
$\{T,F\}^{\{p,q,r\}} \to \{T,F\}$.
Guess how many possible such functions there are? You have to choose between T and F a total of $2^3 = 8$ times, so there will be $2^8 = 256$ such choices in all. How about that?
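A quick numeric check of these counts (a sketch of my own, using the example expression above):

```python
from itertools import product

rows = list(product([True, False], repeat=3))            # 2^3 = 8 assignments
# truth table of (p and q) or (not r), one output value per row
table = {(p, q, r): (p and q) or (not r) for p, q, r in rows}
print(len(table), 2 ** len(rows))                        # 8 rows, 2^8 = 256 tables
```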
But wait a minute, I didn't show that every function of the form $\{T,F\}^{\{p,q,r\}} \to \{T,F\}$ is a truth table function of some propositional expression, did I? But there's a nice way of doing this, and it's connected with disjunctive normal form. Let's take an example. Suppose I wanted to cook up a propositional expression whose truth table function $f$ is given by: $f = T$ on the rows TTT, TFF, FFT, and $f = F$ on the other five rows
(I'm just making up a function at random).
The trick is: first learn how to solve such problems where there is just one output that is T and all the rest are F. For example, say $f(TTT) = T$ but all other $f$-values are F. Then $p \wedge q \wedge r$ works. Because $T \wedge T \wedge T$ evaluates to $T$, but in all other cases, where at least one of $p, q, r$ has been assigned to F, we have that $p \wedge q \wedge r$ evaluates to F.
Let's try another. Say we want $f(TTF) = T$ but all other $f$-values are F. Well, we just make a slight adjustment to the idea that worked before: we use instead $p \wedge q \wedge \neg r$. In other words, we still take a conjunction of variables, except that we might first need to "flip the switch" by applying $\neg$ to any variable that was assigned the value F in that row, in order to get a conjunction of T's as the output value for the TTF row, but the output will be F for any other row.
Thus, to solve the problem of finding an expression whose truth table has just one output T, we take an appropriate conjunction of literals, where a literal is either a variable or a negation of a variable. I've seen such expressions called "realizers".
Now let's return to the earlier problem where I chose a function at random. To find a propositional expression that represents it, just take a disjunction of realizers, over all those rows whose output is T. For the example I gave, this would be $(p \wedge q \wedge r) \vee (p \wedge \neg q \wedge \neg r) \vee (\neg p \wedge \neg q \wedge r)$.
Note that this is a disjunctive normal form.
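Here is a small sketch of this recipe in Python (my own function names; the sample `f` is the made-up function from the example above):

```python
from itertools import product

def to_dnf(f, names=("p", "q", "r")):
    realizers = []
    for row in product([True, False], repeat=len(names)):
        if f(*row):   # keep one realizer for each row whose output is T
            lits = [v if bit else "~" + v for v, bit in zip(names, row)]
            realizers.append("(" + " & ".join(lits) + ")")
    return " | ".join(realizers) if realizers else "false"

# the made-up function: T on rows TTT, TFF, FFT, else F
f = lambda p, q, r: (p, q, r) in {(True, True, True),
                                  (True, False, False),
                                  (False, False, True)}
print(to_dnf(f))   # (p & q & r) | (p & ~q & ~r) | (~p & ~q & r)
```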
So here is the story so far:
The free Boolean algebra on $n$ variables $p_1, \dots, p_n$, denoted $B(n)$, consists of logical equivalence classes of propositional expressions in those variables, where two expressions are logically equivalent exactly when they have the same truth table function.
Thus we have an injective function $B(n) \to \{T,F\}^{\{T,F\}^{\{p_1,\dots,p_n\}}}$, which sends a logical equivalence class of a propositional expression to its truth value function. Here on the right, I'm using the notation $Y^X$ to denote the set of functions from a set $X$ to a set $Y$, so that $\{T,F\}^{\{p_1,\dots,p_n\}}$ is the set of assignments of truth values to the variables $p_1, \dots, p_n$, corresponding to $2^n$ rows. These rows are possible inputs or domain elements for functions of type $\{T,F\}^{\{p_1,\dots,p_n\}} \to \{T,F\}$.
But the function is also surjective; in other words, every function $\{T,F\}^{\{p_1,\dots,p_n\}} \to \{T,F\}$ is the truth table function of some propositional expression, as we realized by taking a disjunction of conjunctions of literals, in other words a disjunctive normal form.
So the function $B(n) \to \{T,F\}^{\{T,F\}^{\{p_1,\dots,p_n\}}}$ is a bijection, in fact an isomorphism of Boolean algebras if we define the Boolean algebra structure on the codomain in "pointwise fashion". All this comes out of the wash very cleanly if you set up the definitions (of truth table, for example) correctly.
(I guess I didn't show that the description of "free Boolean algebra" in the first bullet point matches the description that John gave in terms of a universal property. But it does, and I'm just trying to give some intuition.)
There's still another way of viewing the result, by exploiting the equivalence between the concepts of Boolean algebra and Boolean ring (by definition, a Boolean ring is a commutative ring in which every element is idempotent), and constructing a free Boolean ring on $n$ variables by analogy with free commutative rings, whose elements are polynomials in $n$ variables. This is actually my favorite proof, but I'm going to leave it as a teaser for now.
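As a small companion to that teaser (a sketch of my own, using only the standard dictionary between the two structures): over GF(2), "and" is multiplication, "not" is adding 1, and "or" is $x + y + xy$.

```python
# Boolean algebra operations expressed in the Boolean ring GF(2) = {0, 1}
def NOT(x):    return (1 + x) % 2
def AND(x, y): return (x * y) % 2
def OR(x, y):  return (x + y + x * y) % 2

for x in (0, 1):
    assert AND(x, x) == x          # every element is idempotent
    assert OR(x, NOT(x)) == 1      # complement law
    assert AND(x, NOT(x)) == 0
print("Boolean ring identities hold on {0, 1}")
```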
Just to help with the LaTeX: we don't get any information from the system about where typos are when we write a formula on Zulip, and it can be quite annoying when you can't find them. I think I found it: you wrote \{T,F} instead of \{T,F\} in all the formulas which don't appear properly.
Thanks, Jean-Baptiste. I think I got it.
Todd Trimble said:
Most students are first exposed to Boolean algebras or propositional logic through truth tables, which provides another point of view on free Boolean algebras and the result.
By the way, Julius, this is the 'semantic approach' I was hinting at when mentioning semantic and syntactic approaches to showing the free Boolean algebra has $2^{2^n}$ elements. (There could be other semantic approaches but this is the most famous.) There's a lot of fun involved in working out how truth tables connect to disjunctive and conjunctive normal forms, and Todd has pointed out some of it.
I will read through the above and respond more thoroughly. Thank you.
After a lot of false leads and thinking, I think the following is a very simple and valid argument:
If we want to count the number of terms in a free Boolean algebra, we should carefully consider a formal/precise way to say what we mean by this. In the free algebra over the signature $\{\neg, \wedge, \vee, 0, 1\}$, both the elements $p \wedge q$ and $q \wedge p$ are present and are not equal; but in the associated free Boolean algebra, they are equal. And so we need to distinguish between the term algebra for the signature, and the free Boolean algebra, whose elements are actually equivalence classes of those terms. So to be more accurate, the elements of the free Boolean algebra are actually more like: $[p \wedge q] = \{p \wedge q,\ q \wedge p,\ \dots\}$. The way that we determine that two expressions are equivalent is by their “truth tables”: two Boolean expressions are equivalent if they have identical truth tables. Intuitively, they “have the same truth content / truth conditions / meaning”.
Now, noticing this may help us frame the counting question more accurately. If we have $n$ variables, we want to know how many equivalence classes of formulae built from those variables there are. And by focusing on equivalence classes, we don’t have to worry at all about all the different ways the variables can be combined with Boolean operations, because we know that any such combination will be a function $\{T,F\}^n \to \{T,F\}$ - because no matter how the variables are combined by Boolean operations, the inputs to this expression (seen as a whole) will be a selection of a value from $\{T,F\}$ for each of the $n$ distinct variables featured in the expression. And $\{T,F\}$ as codomain, because we know Boolean operations and Boolean values are a closure system, so any arbitrary Boolean expression can only attain values in $\{T,F\}$.
And so now the question “How many equivalence classes of Boolean expressions in $n$ variables are there?” is just the question of “How many different Boolean functions of $n$ variables can there be?”, which is the question “How many distinct functions $\{T,F\}^n \to \{T,F\}$ are there?” And this is simpler: $|\{T,F\}^n| = 2^n$, and the number of functions from a set $A$ to a set $B$ is $|B|^{|A|}$. It directly follows the number is $2^{2^n}$.
I need to keep thinking but this was one small thought.
In order to show that every element (every equivalence class) of the free Boolean algebra has a normal form…
I have been thinking of defining a “normal form” via a set of deduction rules, of the form $\varphi \rightsquigarrow \psi$. This would allow us to state a set of rules and to say that for any element in the equivalence classes of the free Boolean algebra, applying any of those rules which can be applied will eventually terminate in a normal form. Including rules like these: $\neg(\varphi \vee \psi) \rightsquigarrow \neg\varphi \wedge \neg\psi$, $\neg(\varphi \wedge \psi) \rightsquigarrow \neg\varphi \vee \neg\psi$, and $\varphi \wedge (\psi \vee \chi) \rightsquigarrow (\varphi \wedge \psi) \vee (\varphi \wedge \chi)$.
The way this helps me is that due to the “directionality” of the $\rightsquigarrow$ relation, I can express the “direction” in which expressions should be reduced. Whereas the Boolean algebra axioms are expressed as equalities, so I wasn’t sure how to prevent the possibility of them being applied “in the wrong direction”.
This reminds me of how Todd’s natural deduction system is expressed in terms of an entailment relation $\vdash$, and you can use those deduction rules to derive the axioms of a Boolean algebra.
Assuming the entailment relation composes associatively, I think I might actually be able to model this as a category, and we can prove that the way these reduction rules chain together always terminates at something which can be reduced no further. It might seem like I’m getting “off-track”, but right now it’s the most elegant way I can think of to try to explicitly prove that these rules reduce every expression deterministically to a single form that cannot be reduced further.
These are great ideas! And you're INCREDIBLY close to a complete proof. Let me know if you want a hint that lets you sidestep normal forms, or alternatively a hint for how to show normal forms are "canonical" (exactly one normal form per class)
Assuming the entailment relation composes associatively
Yes, that's the "cut rule" (from my July 27 post).
Julius Hamilton said:
The way that we determine that two expressions are equivalent is by their “truth tables”: two Boolean expressions are equivalent if they have identical truth tables. Intuitively, they “have the same truth content / truth conditions / meaning”.
That's one way, the semantic way. So let's call that semantic equivalence.
But there's also what I'd consider the basic way: the syntactic way. There are a bunch of equations in the definition of "boolean algebra", like $p \wedge q = q \wedge p$,
etcetera. By definition, two expressions $X$ and $Y$ give the same element of the free boolean algebra on some elements $p_1, \dots, p_n$ if and only if you can repeatedly use these equations to get from $X$ to $Y$. So let's call this relationship syntactic equivalence.
Luckily, since you've read Chapter 1 of Mendelson's book on logic, you've seen the Deduction Theorem, Proposition 1.9, which says that syntactic equivalence implies semantic equivalence. (Well, he doesn't say it this way, but this quickly follows from what he says.)
And you've seen the Completeness Theorem, Proposition 1.14, which says that semantic equivalence implies syntactic equivalence. (Again he doesn't say it this way.)
So, while your original quest was definitely to count the number of syntactic equivalence classes of expressions - since this is the number of elements in the free boolean algebra on $p_1, \dots, p_n$ - you are entitled, knowing these theorems, to count the number of semantic equivalence classes.
A while back I outlined these two approaches to showing there are $2^{2^n}$ elements in the free boolean algebra on $n$ elements:
John Baez said:
You could try to develop an algorithm to reduce any complicated expression to disjunctive normal form. This is a syntactic approach: you're manipulating strings of symbols.
Or you could show that 1) every proposition built from $p_1, \dots, p_n$ using $\neg, \wedge, \vee$ corresponds to some kind of interesting and manageable thing, 2) show there are $2^{2^n}$ of these things, and 3) show that these things correspond in a one-to-one way with disjunctive normal forms. This is a semantic approach: you're thinking about what the symbols can 'mean', or correspond to.
You now seem to be taking the semantic approach, where the "interesting and manageable thing" turns out to be a function $\{T,F\}^n \to \{T,F\}$.
That's great - but the fact that it's equivalent to the syntactic approach is quite nontrivial. That's what Mendelson's "Deduction Theorem" and "Completeness Theorem" are all about. These are probably the two most fundamental theorems in propositional logic.
John Baez said:
You now seem to be taking the semantic approach, where the "interesting and manageable thing" turns out to be a function $\{T,F\}^n \to \{T,F\}$.
The distinction between syntactic and semantic is tricky to motivate because even though it's "semantic" it's also obviously effective in a way that semantic definitions of truth in arithmetic are not.
I don't find that distinction hard to motivate myself: in the syntax of boolean algebras we just fiddle around rewriting expressions according to the boolean algebra rules, while in the semantics we step outside and say "hey! What might these expressions mean? How can we interpret them?" - and we discover that each boolean expression maps to an $n$-ary function from $\{T,F\}$ to $\{T,F\}$ in a useful way. This is a shockingly different approach.
So then we need to check that this new semantic approach matches nicely with the original syntactic approach - so we need to prove the soundness and completeness theorems, which Mendelson calls the Deduction Theorem (actually something a wee bit more powerful) and Completeness Theorem.
(I know you know all this; I'm just trying to pound it in a bit more forcefully for @Julius Hamilton's sake.)
I actually hadn’t read Mendelson yet, but now I am, and it’s really good so far.
I recently did (half of) a proof exercise with David Egolf and it made me think that proofs are easier when you focus on the definitions very precisely and notice what exactly you can get to follow from them.
For this reason I want to start back at the beginning.
This is my current definition of a Boolean algebra:
A Boolean algebra is a set $B$, a unary operation $\neg$, two binary operations $\wedge$ and $\vee$, and two distinguished elements $0, 1$, such that the usual axioms (commutativity, associativity, identity, complement, distributivity) hold.
Before going further, I want to establish the following:
Lemma 1. The empty set is not a Boolean algebra.
My intuition for this is that since the signature mandates the presence of elements $0$ and $1$, there is no valid mapping of those constant symbols to a domain, if the domain is the empty set. There’s nothing to map them to, so the empty set is not a model that satisfies the above theory. Could someone let me know if this small assertion is true, or is there a formal way to evade the need to map those constant symbols to anything?
$\mathcal{M} \models \varphi$ means $\varphi$ is true for all variable assignments, for a map from the signature to the structure $\mathcal{M}$. Is it a valid proof to say “no mapping from the signature to the empty set exists, therefore, the empty set cannot be a structure for this signature”, precluding the possibility of it being a model? (Or, “no map from the constant symbols $0$ and $1$ to elements in the empty set exists, therefore, the empty set is not a structure the theory of Boolean algebras can be interpreted in.”)
I’m pretty confident the above is correct.
My desire is to thoroughly establish a few models of Boolean algebras, then explore a proof by induction on their size.
Let $B = \{x\}$, a set with one element.
Let $\neg$ be the map such that $\neg x = x$.
I’m not sure how we express what we are mapping the function symbols to. We map them to functions. But how do we define the functions? As sets of ordered pairs, like this: $\wedge = \{((x, x), x)\}$? It doesn’t seem right.
Nevermind, I think I understand. Since in this case there is only one element, we can define the functions pointwise: $\neg x = x$, $x \wedge x = x$, $x \vee x = x$, and $0 = x$, $1 = x$.
We have now established that $B = \{x\}$ is a valid structure to interpret the signature in. Since there is only one element, the only interpretation we have to check is when all variables equal $x$. We run through the axioms. We use the above 3 assertions about the values of our three functions to check if the axioms hold:
$x \vee x = x$ :checkbox:
(Being a bit less explicit:)
$x \wedge x = x$ :checkbox:
$x \vee \neg x = x = 1$ :checkbox:
$x \wedge \neg x = x = 0$ :checkbox:
$x \wedge 1 = x$ :checkbox:
$x \vee 0 = x$ :checkbox:
$x \wedge (x \vee x) = x$ :checkbox:
$\neg(x \wedge x) = \neg x \vee \neg x = x$ :checkbox:
(and so on)
So this arguably is a complete proof that any singleton set models the theory of Boolean algebras.
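Here is a brute-force version of this kind of axiom-checking (a sketch of my own; it checks only the identity, complement, and distributivity axioms, not the full list):

```python
from itertools import product

def looks_like_boolean_algebra(B, neg, meet, join, zero, one):
    """Check identity, complement, and distributivity by exhaustive search."""
    for x, y, z in product(B, repeat=3):
        if meet(x, join(y, z)) != join(meet(x, y), meet(x, z)):
            return False
        if join(x, meet(y, z)) != meet(join(x, y), join(x, z)):
            return False
    return all(meet(x, one) == x and join(x, zero) == x and
               join(x, neg(x)) == one and meet(x, neg(x)) == zero
               for x in B)

# the singleton model: every operation returns the single element "x"
print(looks_like_boolean_algebra({"x"}, lambda a: "x",
                                 lambda a, b: "x", lambda a, b: "x", "x", "x"))
# the two-element model on {0, 1}
print(looks_like_boolean_algebra({0, 1}, lambda a: 1 - a, min, max, 0, 1))
```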
This is helping me think of things to prove I hadn’t even thought of before, like proving that there is only one unique Boolean algebra on a set with $n$ elements up to isomorphism.
I’ll continue with this tomorrow.
Could someone let me know if this small assertion is true, or is there a formal way to evade the need to map those constant symbols to anything?
It's true. You can't get around the requirement to interpret constants as elements.
Julius Hamilton: This is helping me think of things to prove I hadn’t even thought of before, like proving that there is only one unique Boolean algebra on a set with n elements up to isomorphism.
To save you some time: that's not true. Every finite Boolean algebra has cardinality equal to a power of 2.
This looks good, @Julius Hamilton. I guess I forgot which logic book you were using! But many introductions to logic start with propositional logic ($\approx$ Boolean algebras) and then move on to the predicate calculus ($\approx$ first-order logic). And I think that's a good order of things.
Pretty sure I just realized something beautiful and obvious.
DNF and CNF explicitly encode the values of a truth table.
Call the expressions between the parentheses the “disjuncts”. Stack the disjuncts vertically. Supply the variables with a 1 for a nonnegated symbol and a 0 otherwise. These assignments return a 1. The unspecified variables don’t matter.
CNF: do the same thing but stack the conjuncts horizontally.
This might hint why there are only 2 normal forms.
Here’s another cool thing.
A “theory” (collection of formulae) can be specified by choosing elements in a Boolean lattice.
Because the lattice is ordered by logical strength, an element of a Boolean lattice implies all of the elements directly weaker than it. For example, $p \wedge q$ implies $p$.
This provides a visual way to see which axioms of a theory are not independent, and it should make it easier to calculate, for a collection of independent formulae, how many distinct logical theories can be created from them.
A “theory” (collection of formulae) can be specified by choosing elements in a Boolean lattice.
Because the lattice is ordered by logical strength, an element of a Boolean lattice implies all of the elements directly weaker than it.
Yes, for a propositional theory. What you are describing in the second sentence is called a filter of the Boolean algebra $B$, generated by the chosen elements. A filter of a Boolean algebra $B$ is a subset $F \subseteq B$ that is upward closed (if $x \in F$ and $x \leq y$, then also $y \in F$) and closed under meets (if $x, y \in F$, then also $x \wedge y \in F$). The meaning of the second condition is that if the chosen elements are considered as axioms that are assumed true in the theory, then finite meets of axioms will be true as a consequence.
Alternatively, you can consider ideals in a Boolean algebra: these are precisely complements of filters. For an algebraist, these might feel more familiar, because one speaks of a quotient of a ring modulo an ideal. But either way, one can talk about the quotient of a Boolean algebra modulo an ideal or a filter, and the resulting quotient, considered as a Boolean algebra, is an effective algebraic way of thinking about the theory at hand. The elements of the quotient Boolean algebra are equivalence classes of formulas, where $\varphi \sim \psi$ if the biconditional $\varphi \leftrightarrow \psi$ is a provable consequence of the theory, i.e., belongs to the filter that the axioms generate. (All this is related to "localization", currently being discussed in another thread.)
Things will generally become more complicated when you have to deal with first-order (or predicate) theories and have to juggle with different domains, quantification, and so on: to each domain there is a corresponding Boolean algebra consisting of equivalence classes of predicate formulas, and all these Boolean algebras interact in complicated ways.
This provides a visual way... should make it easier to calculate... how many distinct logical theories can be created from them.
Even though a free Boolean algebra has a very large number of elements, too many to honestly visualize as a lattice, it's indeed intuitively helpful to develop geometric ideas of related things, akin to picturing varieties in algebraic geometry. As was mentioned earlier in this thread, a Boolean algebra can be equivalently thought of as a Boolean ring, which is a special type of commutative ring, namely one for which every element is idempotent: every $x$ satisfies $x^2 = x$. In algebraic geometry, each commutative ring has a corresponding spectrum, which is a kind of "geometric dual" of that ring; spectra, really roughly speaking, are like geometric loci (curves, surfaces, and so on), and one can import geometric ideas from algebraic geometry to think about Boolean algebras.
The points of the spectrum of a free Boolean algebra on $n$ elements correspond to rows in truth tables, consisting of truth-value assignments for the $n$ variables.
Enumerating finite propositional theories, or more precisely enumerating the number of filters of a free Boolean algebra on $n$ generators, can indeed be done. These filters (or ideals) correspond to subsets of the "space" of truth assignments, and these can indeed be counted.
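To make the correspondence concrete for $n = 2$ (a brute-force sketch of my own; it may take a few seconds to run): the free Boolean algebra on 2 generators has 16 elements, viewed as truth tables of length 4, and the search finds exactly $2^4 = 16$ filters, one per subset of the 4 truth assignments.

```python
from itertools import product

elements = list(product([False, True], repeat=4))   # 16 truth tables on 2 variables

def leq(a, b):
    """a <= b pointwise, i.e. a implies b."""
    return all((not x) or y for x, y in zip(a, b))

def meet(a, b):
    return tuple(x and y for x, y in zip(a, b))

count = 0
for bits in product([False, True], repeat=len(elements)):
    F = {e for e, keep in zip(elements, bits) if keep}
    if not F:
        continue   # filters are conventionally nonempty (they contain 1)
    upward = all(b in F for a in F for b in elements if leq(a, b))
    closed = all(meet(a, b) in F for a in F for b in F)
    if upward and closed:
        count += 1
print(count)   # 16, one filter per subset of the 4 truth assignments
```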
Perhaps the biggest takeaway from this array of ideas is that the spectrum of a Boolean algebra $B$, thought of geometrically, is the Stone space of $B$. The Stone space is a certain compact Hausdorff space. Stone duality is a wonderful thing to think about, and the compactness theorem for propositional calculus is directly and powerfully related to the compactness of this spectrum. So you're getting into some deep-ish matters here!
Wow. Trying to get back into university so I can just study this stuff all day.
Let $B = \{0, 1\}$. Let the domain be $B$. Let the interpretation consist of the following maps: $\neg 0 = 1$ and $\neg 1 = 0$; $x \wedge y = \min(x, y)$; $x \vee y = \max(x, y)$.
Let $\mathcal{M}$ be the structure $(B, \neg, \wedge, \vee, 0, 1)$.
Lemma: $\mathcal{M}$ is the only model of $T$ (the theory of Boolean algebras) whose domain has exactly 2 elements, up to isomorphism.
Proof:
I can think of a few proof strategies.
Attempting the first:
Let $\mathcal{M}_1$ and $\mathcal{M}_2$ be structures with domains $B_1$ and $B_2$, where $|B_1| = 2$ and $|B_2| = 2$, and $\mathcal{M}_1, \mathcal{M}_2 \models T$.
I think you're making this much harder than it is. A poset can carry at most one Boolean algebra structure, is the general theorem.
Define an isomorphism between models as a map $h : B_1 \to B_2$ such that:
$h$ is a bijection.
For all relation symbols $R$ in the signature and all elements $a_1, \dots, a_n$ in $B_1$, $R^{\mathcal{M}_1}(a_1, \dots, a_n)$ iff $R^{\mathcal{M}_2}(h(a_1), \dots, h(a_n))$. This says that for whatever values are in $R$ in model $\mathcal{M}_1$, the map $h$ must be such that it maps those elements to elements which are in $R$ in model $\mathcal{M}_2$.
Todd Trimble said:
I think you're making this much harder than it is. A poset can carry at most one Boolean algebra structure, is the general theorem.
Oh, well, it seems like a good opportunity to apply what I am learning from my logic textbook.
But I can also try a different approach to prove this, sure.
Anyway.
If $\mathcal{M} \models T$, this means that $\mathcal{M} \models \varphi$ for every $\varphi \in T$.
For each $\varphi$, we check them in accordance with natural language, basically.
One formula of $T$, a $\forall$-formula, is: $\forall x \, (x \wedge 1 = x)$.
However, we already assumed $\mathcal{M}_1$ and $\mathcal{M}_2$ are models, so we don’t have to check if they meet the conditions. We assume they already do.
Given that, we have to show that:
a) $h$ and $h^{-1}$ are bijective (they are, by the definition that $h$ is a bijection)
b) That $R^{\mathcal{M}_1}(a_1, \dots, a_n)$ iff $R^{\mathcal{M}_2}(h(a_1), \dots, h(a_n))$.
c) (…)
One thing I am wondering is, in order to show that $\mathcal{M}_1 \cong \mathcal{M}_2$, we have to show there exists an isomorphism $h : \mathcal{M}_1 \to \mathcal{M}_2$. Is the only way to prove this by explicitly constructing such an isomorphism? I don’t think we can do that if we are attempting to prove that any two arbitrary such models are isomorphic without specifying them specifically.
I’ll have to think this over.
But another way to do this is to show what structure must necessarily arise from the axioms, and this would be a direct proof of the structure of a Boolean algebra with 2 elements. For example, we can prove that $0 \neq 1$; now we have determined that for a two element set, one element must be $0$, the other $1$.
So maybe all we have to do is run through the axioms like we did when the only element was $x$, but just say, “this time, the only elements are $0$ and $1$, and $0 \neq 1$”.
Then it would be:
Assume a two-element set is a model of the theory of Boolean algebras. We want to generate all the values for the three function symbols.
Then from the axioms, we have:
$x \vee \neg x = 1$. Assume $x = 1$ without loss of generality. Then $1 \vee \neg 1 = 1$. I’ll just need to continue these derivations until I can prove that $\neg 1 = 0$. There’s probably a derivable law that I haven’t used before.
And I think instead of deriving it, I can instead just prove that it’s true.
Assume $\neg 1 = 0$. Goal: derive a tautology. You can use complement and identity laws for this: $x \wedge \neg x = 0$ and $x \wedge 1 = x$.
Maybe we can substitute $0$, $1$, and $\neg 1$ into all these equations to see what we produce.
$x \wedge \neg x = 0$. Since we assumed $\neg 1 = 0$, we can replace $\neg 1$ with $0$:
$1 \wedge 0 = 0$. This is a tautology. So $\neg 1 = 0$ is consistent with the axioms.
Then the shortest proof that the only Boolean algebra with 2 elements is the one where $\neg 1 = 0$ and $\neg 0 = 1$ might be:
Actually we don’t even need the result $\neg 0 = 1$: the above is enough to deduce the structure, since $\neg 1$ needs to be represented in the structure $B = \{0, 1\}$. But we could also say that $\neg 0 = 1$, since $\neg 0 = \neg \neg 1 = 1$, since $\neg 1 = 0$.
This might not be perfect but has been a good exercise for me.
Oh, well, it seems like a good opportunity to apply what I am learning from my logic textbook.
Yes, you're right to say that, and I apologize.
I don't have Mendelson before me. From what you write, I am supposing that he defines a Boolean algebra to have signature $(\neg, \wedge, \vee, 0, 1)$, satisfying a number of axioms. Different authors may use different signatures (and even if two use the same signature, they might not always list the same equational axioms). But if I were the author, I would also include a binary relation which earlier I denoted by $\vdash$, as part of the signature.
A comment on what you wrote above. To say that a statement $\varphi$ is a tautology, write $\top \vdash \varphi$. Not $\varphi \vdash \top$. The latter is true for any statement $\varphi$, tautology or not.
For example, we can prove that $0 \neq 1$; now we have determined that for a two element set, one element must be $0$, the other $1$.
I don't think you have quite proved that $0 \neq 1$ in a Boolean algebra with more than one element. Depending on which equational axioms you are working with, this might be obvious, but nevertheless I'll pose the question: how would $0 = 1$ lead to a contradiction? As you say, there's an element $0$, and by assumption there exists a non-$0$ element $y$, and that's what we have to work with here. Hint: if $0 = 1$, see if you can somehow conclude $y = 0$.
Once that gets squared away, then in a two-element Boolean algebra, there'd be no getting around the fact that the elements are $0$ and $1$, and then the $\wedge$ and $\vee$ tables (like addition and multiplication tables) are very quickly established from the axioms.
By far the most interesting thing is the fact that once the $\wedge$ and $\vee$ tables are secured, then complements are uniquely determined. I think you know how to show that in the 2-element case, the complement of $0$ can't possibly be $0$, so then it could only be $1$, and in fact $1$ works. But here's a little exercise: is it possible to have a Boolean algebra with an element $x$ such that $\neg x = x$? You may assume that the Boolean algebra has more than one element.
I may have gotten overwhelmed by how many new ideas there are in this discussion, and I'd like to return to John Baez's suggestion to count the number of elements in the free Boolean algebra on $n$ generators. I would like to do this in a clean-cut fashion, with some tips and hints.
Here's what I know so far:
I tend to work with the definition of a Boolean algebra as a set of elements, three operations, and two distinguished elements, which meet the (dual) axioms of associativity, commutativity, identity, complement, idempotence, distributivity, and de Morgan's laws.
I want to focus on the syntactic proof.
Starting with a fresh point of view, I think a syntactic proof should be of this form: "If the free Boolean algebra on $n$ generators has $2^{2^n}$ elements, then the free Boolean algebra on $n + 1$ generators has $2^{2^{n+1}}$ elements."
Can anyone confirm that this is the general proof strategy I should aim for?
I probably wouldn't have thought of that proof strategy, but I bet it can work nicely.
I tend to work with the definition of a Boolean algebra as a set of elements, three operations, and two distinguished elements, which meet the (dual) axioms of associativity, commutativity, identity, complement, idempotence, distributivity, and de Morgan's laws.
That seems fine for a syntactic proof, especially if you want to use disjunctive or conjunctive normal form, which are defined in terms of the 3 operations $\neg, \wedge, \vee$.
There is probably a slick semantic proof using truth tables, but you seem like a syntax guy, so while the syntax of 'normal forms' involves a bit of bushwhacking (as far as I can tell), it may play to your strengths.
John Baez said:
There is probably a slick semantic proof using truth tables
I think that approach pushes all of the hard work into showing that if two logical expressions have identical truth tables, they are equivalent in the original Boolean algebra; you don't get a free lunch!
Sounds right - in our earlier conversation about this I started talking to Julius about that theorem, which I'd call "soundness and completeness". Of course it's a good theorem to know, and if you know it, maybe you should use it! But you're right, if you add in the difficulty of proving that, the semantic approach is probably no quicker.
After much thinking, I have had a realization that helps me a lot.
I originally wanted to show a correspondence between ordered pairs of elements of the free Boolean algebra on $n$ generators and elements of the free Boolean algebra on $n + 1$ generators.
That means a binary Boolean operation, $f(a, b)$, but it has to include the new generator $x$ in its output, to ensure it generates the expressions involving $x$, etc.
It also must generate a unique expression for all inputs. (It must be injective).
That rules out many Boolean operations. It can’t be $\vee$, which is commutative; therefore, we would have a repeated codomain value $f(a, b) = f(b, a)$.
A function that does this is $f(a, b) = (a \wedge x) \vee (b \wedge \neg x)$. For ordered pairs $(a, b)$, it gives a unique expression, $(a \wedge x) \vee (b \wedge \neg x)$.
It’s as if you’re taking all the DNFs of the previous round and re-generating them with an $x$ and a $\neg x$ term.
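Here is a small check of that observation (my own encoding; it assumes truth tables are ordered so that the rows where the new variable $x$ is true come first, in which case $(a \wedge x) \vee (b \wedge \neg x)$ is just the concatenation of the tables of $a$ and $b$):

```python
from itertools import product

def tables(n):
    """All 2^(2^n) truth tables on n variables, as tuples of booleans."""
    return list(product([False, True], repeat=2 ** n))

def glue(a, b):
    """Truth table of (a & x) | (b & ~x): rows with x = T come from a, x = F from b."""
    return tuple(a) + tuple(b)

n = 2
images = {glue(a, b) for a in tables(n) for b in tables(n)}
# concatenation determines the pair (a, b), so the map is injective, and
# (2^(2^n))^2 = 2^(2^(n+1)) pairs hit all tables on n + 1 variables
print(len(images) == (2 ** (2 ** n)) ** 2 == 2 ** (2 ** (n + 1)))   # True
```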