Sorry, I don't understand where you are going with this. A Turing Machine can be "implemented" with pen and paper. It does not have to be mechanical to be a Turing Machine.
Turing's thesis, from his paper "On Computable Numbers...", says that a Universal Turing Machine is as capable of carrying out computation as a person with pen and paper, and he gives some arguments supporting this:
"The behaviour of the computer at any moment is determined by the symbols which he is observing, and his " state of mind " at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite. The reasons for this are of the same character as those which restrict the number of symbols. If we admitted an infinity of states of mind, some of them will be '' arbitrarily close " and will be confused. Again, the restriction is not one which seriously affects computation, since the use of more complicated states of mind can be avoided by writing more symbols on the tape. Let us imagine the operations performed by the computer to be split up
into "simple operations" which are so elementary that it is not easy to imagine them further divided. Every such operation consists of some change of the physical system consisting of the computer and his tape. We know the state of the system if we know the sequence of symbols on the tape, which of these are observed by the computer (possibly with a special order), and the state of mind of the computer. We may suppose that in a simple operation not more than one symbol is altered. Any other changes can be split up into simple changes of this kind. The situation in regard to the squares whose symbols may be altered in this way is the same as in regard to the observed squares. We may, therefore, without loss of generality, assume that the squares whose symbols are changed are always "observed" squares."
https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf (page 250)
Sorry, I guess there might have been some confusion, because I've been reacting to multiple things at once.
First, I was continuing the sentence "Take the word 'computer' in its broadest sense". By the Turing machine model, I didn't mean a mechanical realization, but the formal definition (a set of states, a tape made of cells, a reading/writing head, and transition rules). The Church-Turing thesis roughly states that this formal definition is indeed the "broadest sense" in which to understand what a computer is. Another way to put it is as follows: if I believe in the Church-Turing thesis, and someone asks me "what does 'computer' mean in the most general sense?", I would reply by pointing to the formal definition of a Turing machine.
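To make the formal definition concrete, here is a minimal sketch in Python. It is only a toy illustration under my own conventions: the `run` helper, the transition-table format, and the `add_one` example machine are all made up for this message, not taken from anywhere else in the discussion.

```python
from collections import defaultdict

def run(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, symbol_to_write, move in {-1, +1})."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # tape as a sparse dict of cells
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        # Read the observed cell, apply the transition rule, write, and move the head.
        state, cells[head], move = transitions[(state, cells[head])]
        head += move
    raise RuntimeError("step budget exceeded")

# Toy machine: walk right over a block of 1s and append one more 1.
add_one = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", +1),
}
print(run(add_one, "111"))  # -> "1111"
```

The point is only that the whole definition fits in a handful of lines: finitely many states, a tape of cells, a head that reads and writes one cell at a time, and a finite table of transition rules.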
Second, in an attempt to relate computation and physical processes, I wanted to share the fact that there are variants of the Church-Turing thesis that try to do exactly that. I find this point of view fascinating. E.g., as illustrated by Scott Aaronson's paper, it could be that Complexity Theory (in computer science) has something to say about laws of physics.
@Peva Blanchard No, the thesis says that a Turing Machine is one way to define what computing is, and (let's not forget the Church part) Lambda calculus is another.
But, more importantly, it postulates that there are many methods of computing that are all equivalent and equally powerful.
Jencel Panic said:
Peva Blanchard No, the thesis says that a Turing Machine is one way to define what computing is, and (let's not forget the Church part) Lambda calculus is another.
But, more importantly, it postulates that there are many methods of computing that are all equivalent and equally powerful.
I disagree, this is not what the Church-Turing thesis is about.
The fact that Lambda calculus, the class of μ-recursive partial functions, the Post machine model, etc. are equivalent is a matter of proving theorems. The proofs usually involve encoding or simulating things back and forth. You are right in saying that there are many equally powerful models of computation. But this is not a postulate, this is a series of theorems.
The Church-Turing thesis cannot be formally proven, it is not a formal statement. It is a philosophical thesis, hence can only be discussed. In its most well-known formulation, it states that the formal Turing machine model entirely captures the informal notion of "effective computation". Because we have theorems relating the Turing machine model to, e.g., Post machine model, we can rephrase the previous statement in an equivalent form: the formal Post machine entirely captures the informal notion of "effective computation". It is an equivalent formulation, but not as famous.
The point of the Church-Turing thesis is really about capturing, through a formal definition, the informal notion of effective computation.
See Wikipedia's article for more details. There is a section about variations of this thesis, e.g. the physical Church-Turing thesis, which seems relevant to the current topic of discussion. I learned all of that from reading the thesis of an acquaintance, Sur les limites empiriques du calcul: calculabilité, complexité et physique ("On the empirical limits of computation: computability, complexity and physics"). Unfortunately, it is in French.
One thing which I think is worth adding:
You'd agree that, while a philosophical thesis indeed cannot be proven, it still needs some arguments to back it up in order to be credible. And the main argument supporting the Church-Turing thesis is precisely this --- that every computational model is reducible to every other one, i.e. that although many people invented many formulations of what computability means, all of those turned out to be one and the same thing.
Jencel Panic said:
One thing which I think is worth adding:
You'd agree that, while a philosophical thesis indeed cannot be proven, it still needs some arguments to back it up in order to be credible. And the main argument supporting the Church-Turing thesis is precisely this --- that every computational model is reducible to every other one, i.e. that although many people invented many formulations of what computability means, all of those turned out to be one and the same thing.
I agreed at first, but then I remembered an interesting paragraph from M. Pégny's thesis. He argues that this argument (the fact that the different formulations are equivalent to one another) appeals to mathematicians, but may not be the strongest one. There is something special about Turing's machine specifically, as illustrated by Church's review of Turing's paper:
Alonzo Church. Review: A. M. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem. Journal of Symbolic Logic, 2(1):42-43, 1937.
As a matter of fact, there is involved here the equivalence of three different notions: computability by a Turing machine, general recursiveness in the sense of Herbrand-Gödel-Kleene, and λ-definability in the sense of Kleene and the present reviewer. Of these, the first has the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately - i.e. without the necessity of proving preliminary theorems.
M. Pégny does a good amount of work to articulate the underlying idea: in short, the identification with effectiveness is "intuitive" because it is "intuitive" that the working of a Turing machine formalizes the activity of a human "computing" with pen and paper, as you mentioned earlier in a different manner.
So I think I was wrong when I said that the Church-Turing thesis can be equivalently formulated w.r.t. any Turing-complete model such as Post machine. There seems to be something special about Turing machines specifically.
Yes, I take this into account. As a matter of fact, Turing's original philosophical argument that Turing machines are intuitively equivalent to human computers, which Church is probably referencing, is precisely the paragraph I quoted several replies above. But there are other arguments for other models: Lambda calculus, for instance, is simpler, and it is also used in a lot of other places; e.g. I doubt that you can have a proof that certain categories are equivalent to Turing Machines (though I may be wrong, of course). But we have to make another topic to discuss this (perhaps we should actually move this entire discussion).
A message was moved here from #theory: philosophy > physical implementation of abstract computations by Peva Blanchard.
Ah, thanks for moving this, I just wanted to mention something interesting that I thought about:
I am almost certain that you can define a "Lambda machine" that behaves in the same way as a Turing machine, except that it executes Lambda expressions.
You just need a tape containing a Lambda expression, written using de Bruijn indices, and a head that converts the expression to its normal form.
All theorems for Turing machines are valid also for Lambda machines.
https://www.sciencedirect.com/science/article/pii/1385725872900340?via%3Dihub
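In case it helps make the idea concrete, here is a rough Python sketch of the core of such a "Lambda machine": terms written with de Bruijn indices, and repeated leftmost (normal-order) beta-reduction until a normal form is reached. The term encoding and the helper names are my own choices for illustration, not taken from de Bruijn's article.

```python
# Terms: ("var", k) with k a 0-based de Bruijn index,
#        ("lam", body), and ("app", function, argument).

def shift(t, d, cutoff=0):
    """Shift by d the indices of variables that are free above `cutoff`."""
    if t[0] == "var":
        return ("var", t[1] + d) if t[1] >= cutoff else t
    if t[0] == "lam":
        return ("lam", shift(t[1], d, cutoff + 1))
    return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute term s for variable j inside t."""
    if t[0] == "var":
        return s if t[1] == j else t
    if t[0] == "lam":
        return ("lam", subst(t[1], j + 1, shift(s, 1)))
    return ("app", subst(t[1], j, s), subst(t[2], j, s))

def step(t):
    """Perform one leftmost-outermost beta-reduction step, or return None."""
    if t[0] == "app" and t[1][0] == "lam":  # a beta-redex
        return shift(subst(t[1][1], 0, shift(t[2], 1)), -1)
    if t[0] == "app":
        left = step(t[1])
        if left is not None:
            return ("app", left, t[2])
        right = step(t[2])
        return ("app", t[1], right) if right is not None else None
    if t[0] == "lam":
        body = step(t[1])
        return ("lam", body) if body is not None else None
    return None  # a variable is already in normal form

def normalize(t, max_steps=1000):
    """Iterate `step` until no redex is left (may not terminate in general)."""
    for _ in range(max_steps):
        nxt = step(t)
        if nxt is None:
            return t
        t = nxt
    raise RuntimeError("no normal form within the step budget")

# (\x. x) (\x. \y. x)  should normalize to  \x. \y. x
ident = ("lam", ("var", 0))
k_comb = ("lam", ("lam", ("var", 1)))
print(normalize(("app", ident, k_comb)))  # ('lam', ('lam', ('var', 1)))
```

This is only the "control" part, of course; writing the term out on a tape and having the head rewrite it in place is the extra step your Lambda machine would add.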
Yes, I don't know how, but it looks like I have been granted the right to move messages around (or maybe I always had the right to do so but didn't know).
I experimented some time ago with "compiling functional languages". I learned about de Bruijn indices as a convenient way to avoid dealing with substitutions, but I never read de Bruijn's original article. Thanks for the reference.
In the same vein, there has been work on categorical abstract machines.
One thing that is special about Turing machines is that they have enabled talking about computational complexity, because they make "obvious" the resources (time steps and tape length) you need to compute.
A lot of complexity classes (P, NP, PSPACE, etc.) ultimately refer to the resources of a Turing machine (or variants, to accommodate classical randomness, or quantum stuff, or access to an oracle, etc.).
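As a side note, here is a self-contained toy illustration (again my own Python sketch, with an invented `run_measured` helper and the same kind of made-up example machine as above) of how the machine model makes those two resources easy to count: just tally the steps taken and the cells read or written while the machine runs.

```python
from collections import defaultdict

def run_measured(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine and report (time steps used, tape cells read or written)."""
    cells = defaultdict(lambda: blank, enumerate(tape))
    head, steps = 0, 0
    while state != "halt" and steps < max_steps:
        state, cells[head], move = transitions[(state, cells[head])]
        head += move
        steps += 1
    return steps, len(cells)

# Toy machine: append a 1 to a block of 1s.
add_one = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", +1),
}
print(run_measured(add_one, "111"))  # (4, 4): 4 time steps, 4 tape cells
```

Defining, say, P then amounts to bounding the first number by a polynomial in the input length, while PSPACE bounds the second.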
There is a field of computer science which tries to characterize some of these classes purely in a "logical manner". I guess they aim at results stating that, e.g., a problem belongs to some complexity class if and only if it is expressible in some formal language. I really don't know much, but I'd be happy to learn more about it.
P and NP are defined with Turing machines, but there definitely are other ways of defining them; the computational model is just notation.
I think (although I am not sure) that they can be defined using Lambda calculus; that's why I gave the example with the Lambda machine, an extension of Lambda calculus where you can also measure the tape length.