You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I've been coming across a lot of papers concerned with the existence (or more commonly non-existence) of finite axiom schemes for this or that. I'm pretty puzzled by the whole thing. Does anyone know where this idea comes from? Do finitely axiomatizable theories enjoy some extremely nice property?
In particular (and in contrast), it feels like "decidably axiomatizable" is the natural thing to want to be.
There are certainly some special things you can do with finitely axiomatizable theories - e.g. if you have axioms $A_1, \dots, A_n$ for the theory of groups then you can axiomatize "not a group" as $\neg(A_1 \wedge \dots \wedge A_n)$, which doesn't work if you only have an infinite axiomatization.
Though I'm not a logician, so I don't know if that sort of thing is actually the reason people care
Is there a significant difference between finitely axiomatisable and, say, recursively enumerably axiomatisable, in practice? Both describe representability properties by "finite descriptions" in some sense. I understand that when logicians talk about axiomatisability, they usually mean in (some fragment of) first-order logic, but this seems quite arbitrary. If you're willing to use a different metalogic, you could reasonably axiomatise a Turing machine first, then its program, to axiomatise any RE scheme finitely. (I suppose this is saying that I agree that I don't see that "finite axiomatisation in FOL should be considered particularly interesting as a global property of a formal system", though finite presentability in some suitable metalogic does seem reasonable for practical purposes, e.g. implementation.)
Finitely axiomatizable theories in classical FO logic enjoy some nice properties due to compactness, if that helps. The property Thomas pointed out is a nice example.
Ah so compactness says something like "if every finite subset of a theory has a model, then the whole theory has a model"?
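For reference, the usual formal statement of compactness (my paraphrase, in symbols):

```latex
\big(\forall\, \text{finite } T_0 \subseteq T,\ T_0 \text{ has a model}\big)
\;\implies\; T \text{ has a model}.
```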
I guess that's pretty good motivation
Is there any way to reformulate Peano arithmetic using finitely many axioms? This system, with its usual axiom schema for induction, is a great example of a decidably axiomatizable but not finitely axiomatizable system.
Is that the technical distinction, though? Sometimes it only matters if you have finitely many schemata.
Like, a uniform schema might be different than an enumerable sequence of non-uniform axioms.
If $T$ is any finite subsystem of Peano Arithmetic (i.e. a finite set of formulas provable in PA), then PA can prove the consistency of $T$. By Gödel's second incompleteness theorem, PA cannot prove its own consistency, so it is not finitely axiomatizable (at least not in usual first-order logic).
The above is a consequence of Kreisel's reflection schema: for $T$ a finite subsystem of PA and $\varphi$ a formula, there is a proof in PA of "if $\varphi$ is provable from $T$ then $\varphi$ is true". I learned this from Girard's The Blind Spot (Section 3.B.4); the reference (that I never looked at) seems to be a 1968 paper by Kreisel and Lévy.
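In symbols, with $\mathrm{Prov}_T$ a standard provability predicate (notation mine, not from the thread), the reflection schema reads:

```latex
\mathrm{PA} \vdash \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \varphi
\qquad \text{for each finite } T \subseteq \mathrm{PA} \text{ and each formula } \varphi .
```

Taking $\varphi$ to be $0 = 1$ yields $\mathrm{PA} \vdash \neg\mathrm{Prov}_T(\ulcorner 0 = 1 \urcorner)$, i.e. PA proves the consistency of $T$, which is the fact used in the argument above.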
Nice, thanks - that's great! I didn't know this stuff.
It reminds me of how there's an end run around Tarski's undefinability of truth: there's a truth predicate for statements with at most $n$ quantifiers! I still don't understand how that works.
It's making me think some of these classic negative results are less severe than I'd thought.
John Baez said:
It reminds me of how there's an end run around Tarski's undefinability of truth: there's a truth predicate for statements with at most $n$ quantifiers! I still don't understand how that works.
I'd treat this as analogous to the fact that (at least classically) there is a program that decides the halting problem for programs of length at most $n$. The negative result still has teeth: there is no way to describe these programs uniformly as a function of $n$, lest you solve the halting problem itself. I'd expect that something similar holds for these restricted truth predicates.
Interesting! How do you decide the halting problem for programs of length at most n? I don't see how to do it in general except by running the program and waiting for it to halt, but that's only a "semi-decision" procedure: I don't know how long to wait before I say it won't halt.
The argument I had in mind is not constructive: any finite set is decidable by a program that has that particular set hard-coded into it. The set of programs of length at most $n$ that halt is finite. Concretely, for any subset of the programs of length at most $n$ there is a program that accepts exactly that subset. One of these will be the correct one, but I don't know which.
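A minimal Python sketch of this argument (toy names, not from the thread): for any fixed finite set of halting programs, the decider just hard-codes the set; the non-constructivity is that we don't know which set is the right one.

```python
def make_decider(halting_set):
    """Decider for a *fixed* finite set: a membership test, nothing more."""
    return lambda program: program in halting_set

# One of the finitely many candidate sets for programs of length <= n
# is the correct one, but the argument doesn't tell us which.
# Suppose (hypothetically) these two short programs are the halting ones:
decider = make_decider({"ret0", "inc;ret"})
print(decider("ret0"))  # → True
print(decider("loop"))  # → False
```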
I guess the same idea gives a recipe for the restricted truth predicates: given a subset of the formulas of length at most $n$, there is a formula that describes exactly (the codes of) these. One such formula will be a truth predicate for formulas of length at most $n$.
Martti Karvonen said:
The argument I had in mind is not constructive: any finite set is decidable by a program that has that particular set hard-coded into it. The set of programs of length at most $n$ that halt is finite. Concretely, for any subset of the programs of length at most $n$ there is a program that accepts exactly that subset. One of these will be the correct one, but I don't know which.
So it is the nonconstructiveness that forces the general case to not be uniform?
I don't think you can make the argument work this way; the thing with the result John mentioned is that you can actually have a truth predicate for an arbitrarily big formula once you fix the number of quantifier alternations (typically, the truth predicate for $\Sigma_1$ formulas gives you a formula that can tell you if Turing machine number $e$ halts or not).
Oops you're right, my reply gives a predicate just for formulas of length at most $n$, whereas John was talking about formulas with at most $n$ quantifiers. I don't know what the trick for that is.
Cole Comfort said:
Martti Karvonen said:
The argument I had in mind is not constructive: any finite set is decidable by a program that has that particular set hard-coded into it. The set of programs of length at most $n$ that halt is finite. Concretely, for any subset of the programs of length at most $n$ there is a program that accepts exactly that subset. One of these will be the correct one, but I don't know which.
So it is the nonconstructiveness that forces the general case to not be uniform?
The general case is not uniform because the halting problem is undecidable: if I had an algorithm that, given $n$, outputs the code of a program that decides the halting problem for programs of length at most $n$, I could solve the halting problem as well. I didn't have a particular precise definition of "uniform" in mind, but I'm tempted to say that a uniform description of these programs would let you extract the correct program for any $n$.
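To make the point concrete, here is a hedged Python sketch (all names hypothetical): if a uniform `decider_for(n)` existed, composing it with the length function would decide halting outright.

```python
def halts(program, decider_for):
    """If decider_for(n) returned a correct decider for programs of
    length at most n, this would decide the full halting problem,
    contradicting its undecidability."""
    return decider_for(len(program))(program)

# Demo in a toy universe where halting behaviour is stipulated by a table:
TABLE = {"loop": False, "ret0": True}
fake_decider_for = lambda n: (lambda p: TABLE[p])
print(halts("ret0", fake_decider_for))  # → True
```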
The gist of the argument, as I would imagine it, is that any formula with $n$ quantifier alternations can be rewritten in prenex form $Q_1 x_1 \dots Q_n x_n\, \varphi$ where $\varphi$ is simple ($\Delta_0$). This is a primitive recursive procedure (let's call it $\mathrm{prenex}$) that you can formalize in arithmetic. Then, if you have a truth predicate for the $\Delta_0$ fragment, a general truth predicate is obtained by peeling off the prenex quantifiers one at a time and applying the $\Delta_0$ predicate to the matrix.
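A hedged sketch of the recursion step (notation mine, not from the thread): writing $\mathrm{Tr}_n$ for the truth predicate for formulas with $n$ quantifier alternations,

```latex
\mathrm{Tr}_{n+1}(\ulcorner \exists x\, \varphi \urcorner)
\;:\equiv\; \exists k\; \mathrm{Tr}_{n}(\ulcorner \varphi(\bar{k}) \urcorner),
\qquad
\mathrm{Tr}_{0} \;=\; \text{the } \Delta_0 \text{ truth predicate},
```

where $\bar{k}$ is the numeral for $k$ and the universal case is dual. Note that each step adds exactly one quantifier to the defining formula.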
Sorry if this is a bit jargony; I think the basic idea is that, once you do the base case ($\Delta_0$), which already involves a bit of annoying tinkering with Gödel numberings/formalization of computable functions in arithmetic, generalizing to the case with many quantifier alternations is not too big of a leap (in particular, the number of additional quantifiers that one needs to go from case $n$ to case $n+1$ should be 1)
Here's a description of the argument as Michael Weiss gave it:
Start where he says "As Tarski taught us, truth is undefinable..."
The good explanation comes after I say "First of all, what’s this about sneaking around Tarski’s theorem on the undefinability of truth by restricting to sentences with at most d quantifiers? How come nobody ever told me about this? It seems like such cheap dodge."
(also now that I think about it, I don't know how this plays out in intuitionistic arithmetic)
I need to reread what Michael wrote, but I think he gives a decent sketch of what's going on without sinking into too much detail.
It's all for classical logic, though!
Yeah that's interesting, and it's not all that analogous to what I said above: the fact that the halting problem is decidable for programs of a bounded size does not really help with the negative result, as the proof doesn't tell us what the correct programs are, whereas here one can write down the restricted truth predicates explicitly.
Yes, thanks for explaining why the halting problem is decidable for programs of a bounded size. This is a dirty trick, sort of like those cons where you fool people into thinking you can predict the stock market: you start with 1024 people and email them 10 times, predicting whether some stock will go up or down the next day. By sending "up" to half of them and "down" to the other half each time, you can make sure one of them gets all correct predictions.
A result in a similar spirit is the non-constructive proof that $f \colon \mathbb{N} \to \{0,1\}$ defined by
$f(n) = 1$ iff the decimal expansion of $\pi$ contains at least $n$ consecutive $7$s
is computable. Either $\pi$ has arbitrarily long runs of $7$s, in which case we have a constant function (certainly computable), or we have a function that is $1$ until some fixed threshold and $0$ afterwards, and these are all computable as well. I vaguely remember someone quipping that in the "real world" saying that "I have a proof that the correct program is one of these but I don't know which one" won't quite cut it.
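The family of candidate programs can be written down explicitly; here is a Python sketch (which candidate equals $f$ is exactly what the classical proof leaves open):

```python
def constant_one(n):
    """Correct if pi has arbitrarily long runs of 7s."""
    return 1

def make_threshold(t):
    """Correct if the longest run of 7s has length exactly t."""
    return lambda n: 1 if n <= t else 0

# Each candidate is trivially computable; f is one of them,
# but the proof doesn't say which.
f5 = make_threshold(5)
print(constant_one(100), f5(5), f5(6))  # → 1 1 0
```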
Yes. That is a good example of why excluded middle is wrong. :)
I knew about these dirty tricks but I didn't notice (or forgot) that in theory you can use them to decide the halting problem for the first googol Turing machines.
"I can do it, just don't ask me to do it."
Or: "there exists someone who can do it, just don't ask me who."
So this result seems useless (though good to know), while the end run around Tarski's undefinability of truth seems potentially useful to me.
Useful in theoretical ways, that is.
There's a possibly related paper by Sergei Artemov about Gödel's incompleteness theorems actually not meaning what most people say they mean (which is also related to some type theory results not meaning what people say they mean, I think).
The idea is that the notion of "consistency" used in the incompleteness theorems is not actually adequately modeling what people were concerned about with respect to consistency. It requires that consistency would hold in a uniform way in all models, because it quantifies over the internal natural numbers in its definition. But, that is too strong a notion for caring about whether or not people could theoretically deduce a contradiction on paper.
I don't get how consistency quantifies over internal natural numbers.
Because it's like $\forall n.\ \neg\mathrm{Prf}(n, \ulcorner 0 = 1 \urcorner)$.
Okay, sure.
Whereas the on-paper version is more like wanting, for each specific numeral, the statement "this numeral does not code a proof of absurdity" to be provable, and you can give a meta way of constructing a PA proof of that proposition for each particular numeral, I think.
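In symbols (my notation, not from the thread), the contrast being drawn is that the uniform statement is unprovable while every numeral instance is provable:

```latex
\mathrm{PA} \nvdash \forall n\,\neg\mathrm{Prf}(n, \ulcorner 0 = 1 \urcorner),
\qquad\text{yet}\qquad
\mathrm{PA} \vdash \neg\mathrm{Prf}(\bar{k}, \ulcorner 0 = 1 \urcorner)
\ \text{for each numeral } \bar{k}.
```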
Right.
I took a long break from my discussion with Michael Weiss on nonstandard natural numbers, but I ought to get back to it because I think there's something really important about this circle of ideas. We all too easily think the natural numbers are something understood.
The problem is that "each specific numeral" doesn't really mean much. It means 0, S0, SS0, SSS0, .... - but the "..." conceals an abyss.
Anyway, I'm not arguing against Sergei Artemov's point that the usual version of "consistency" is sort of weird. I'm just saying it's dangerous to act like we know what "all specific numerals" are.
There's a nice series of papers by V. Sorge, A. Sexton and S. Watt about the uses (and some formalizations) of "..." in mathematics.
The type theory version is that people would give 'proofs' that a normalizing language cannot contain a type describing its own terms, and an interpreter from those terms into their actual values in the theory. But then someone did it, and the way that you do it is that you do not try to code the terms as a single type, like $\mathrm{Exp}$, but have a type $\mathrm{Exp}\ \tau$ of terms of type $\tau$.
So there were assumptions about how you were allowed to assemble "a self-interpreter" that are integral to whether or not it's possible.
And I'm extremely proud that "finally tagless" played a role in Brown and Palsberg's work. The shift in encoding was a key idea that let them bootstrap things. It helped shift the thinking of how to encode languages in other languages. They then added a bunch of clever, subtle ingredients, and voila! Contradiction to the conventional wisdom. Unlike what most people use tagless for these days, it was designed for partial evaluation.
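For readers unfamiliar with the phrase, here is a tiny Python sketch of the "tagless" idea (illustrative only, and far simpler than Brown and Palsberg's typed setting): a term is not a tagged syntax tree but a function over an interpreter, so evaluation needs no tag dispatch.

```python
class Eval:
    """One possible interpreter: evaluate terms to Python values."""
    def lit(self, n): return n
    def add(self, x, y): return x + y
    def lam(self, f): return f
    def app(self, f, x): return f(x)

def term(i):
    """Represents the object-language term ((lambda x. x + 1) 41)."""
    return i.app(i.lam(lambda x: i.add(x, i.lit(1))), i.lit(41))

print(term(Eval()))  # → 42
```

Swapping in another interpreter (a pretty-printer, a size-counter) reuses the same `term` unchanged, which is the flexibility that "finally tagless" encodings exploit.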
Anyhow, the point on the math end is, I think, even though "..." is a little dicey, don't forget that the problem is what a human being could write in a book, or a computer could do in limited memory. At least, assuming you're attempting to address that problem.
John Baez said:
Is there any way to reformulate Peano arithmetic using finitely many axioms? This system, with its usual axiom schema for induction, is a great example of a decidably but not finitely axiomatized system.
I understand people gave you negative answers for this, but there are trivial ways to take any theory, add a finite number of symbols, and make it a finitely axiomatizable theory. Basically you add a one-parameter predicate symbol for each axiom schema, and some axioms to encode its denotation at each instance, e.g. $\forall x\, (P_0(x) \leftrightarrow \varphi_0(x))$ where $P_0$ is the "zeroth" predicate, etc.
This doesn't contradict the theorem that says PA has no finite axiomatization, since it extends the language.
I don't think finite axiomatization is all it's cracked up to be. I don't think logicians care much about it anymore.
@Cody Roux How would you finitely axiomatize the meaning of the predicates $P_i$ without using an unrestricted truth predicate?
I don't quite get the precise scheme Cody has in mind here, but it is possible to finitely axiomatize $\mathrm{ACA}_0$, a conservative extension of PA capturing a subset of second-order arithmetic. The list of axioms (which relies heavily on formalizing basic recursion theory in arithmetic) is mentioned here: https://math.stackexchange.com/questions/955523/finite-list-of-axioms-of-mathsfaca-0-reference (there is no cheating by hiding infinitely many instances of the comprehension scheme under the carpet; you can think of $\mathrm{ACA}_0$ as a theory in two-sorted first-order logic if you like)
I don't know of a procedure of the sort Cody alludes to, but I would not be surprised if there was one if we allow for extending the language. IIUC, the purpose of bothering with $\mathrm{ACA}_0$ and formalizing recursion theory within arithmetic is to study subsystems of second-order arithmetic rather than addressing finite axiomatizability of PA
Martti Karvonen said:
The argument I had in mind is not constructive: any finite set is decidable by a program that has that particular set hard-coded into it. The set of programs of length at most $n$ that halt is finite. Concretely, for any subset of the programs of length at most $n$ there is a program that accepts exactly that subset. One of these will be the correct one, but I don't know which.
A more constructive approach allows us to make some concrete progress on this problem. The idea being that if a program fails to halt, it must do so for some reason (like the program falling into a cycle of states that it can never leave), and for small program lengths, there are few enough such reasons that they can be manually identified, essentially identifying the bugs in the code. Put this way, the stumbling block becomes Rice's theorem, which says that identifying bugs is an undecidable problem. We can make some headway, but there is no way to systematically check for all possible bugs.
Nguyễn Lê Thành Dũng said:
Cody Roux How would you finitely axiomatize the meaning of the predicates $P_i$ without using an unrestricted truth predicate?
It's not unrestricted, because the definition of each $P_i$ never refers to itself. Basically you're just encoding some subset of $\mathrm{ACA}_0$ in first-order logic.
Cody Roux said:
I understand people gave you negative answers for this, but there are trivial ways to take any theory, add a finite number of symbols, and make it a finitely axiomatizable theory. Basically you add a one-parameter predicate symbol for each axiom schema, and some axioms to encode its denotation at each instance, e.g. $\forall x\, (P_0(x) \leftrightarrow \varphi_0(x))$ where $P_0$ is the "zeroth" predicate, etc.
This doesn't contradict the theorem that says PA has no finite axiomatization, since it extends the language.
Thanks, this matches my vague memories. When I learned Gödel's incompleteness theorems they were for recursively axiomatizable theories; lots of people seem to state them for finitely axiomatizable theories, but I got the feeling that the difference was only technical because if something is recursively axiomatizable you can pack the information in those axioms into a finite-sized "program" that "prints out" those axioms.
Relating to both axiom schemes and Gödel, there's this paper by a well-known logician
https://arxiv.org/pdf/1902.07404.pdf
arguing against the conclusion usually drawn from Gödel's theorem on consistency. At the risk of being vague or misunderstanding it, this is saying that the particular notion of consistency used in its formalization shouldn't be seen as the definitive one. What can't be proven in PA is that
"for all numbers n, the proof encoded by n is not a proof of absurdity"
But in the paper he expresses consistency with a provability-logic modality and argues that what can be proven is
"for all numbers n, there exists a proof that n is not a proof of absurdity"
Yeah, that's the paper I was talking about. :)
Oh right, you actually even named him on Jul 3rd.
This is the paper for the programming language analogue I mentioned.
The relations between the different varieties of axiomatizability, and their inter-convertibility, were discussed here. https://www.jstor.org/stable/2964289?seq=1