As discussed previously, some additional restrictions we could place on Turing machines, such as determinism or coordinated operation of the two stacks, turn out not to matter, so long as we’re concerned only with what such a machine can or can’t do and not with the precise mechanics of how it does it. What about removing restrictions rather than adding them? Could we, for example, do things with a three-stack machine which we can’t do with a two-stack machine? The answer, for three stacks and for any higher number, turns out to be no. It’s possible to simulate a machine with more stacks on one with just two, just as we could simulate a non-deterministic machine on a deterministic one.
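The key idea underlying these simulations is the standard observation that two stacks together can emulate a Turing machine’s unbounded tape: one stack holds everything to the left of the head, the other holds the head cell and everything to its right, and moving the head just transfers a symbol between them. The sketch below illustrates this construction in Python; the class and method names are illustrative, not drawn from the text.

```python
class TwoStackTape:
    """Simulate an unbounded tape using two stacks (Python lists).

    `left` holds the cells to the left of the head (top of stack is the
    cell immediately left of the head); `right` holds the cell under the
    head and everything to its right (top of stack is the head cell).
    Cells never written read as the blank symbol.
    """
    BLANK = "_"

    def __init__(self, contents=""):
        self.left = []                                   # cells left of the head
        self.right = list(reversed(contents)) or [self.BLANK]

    def read(self):
        # The head cell sits on top of the right stack.
        return self.right[-1] if self.right else self.BLANK

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_right(self):
        # The old head cell becomes the rightmost cell left of the head.
        self.left.append(self.right.pop() if self.right else self.BLANK)

    def move_left(self):
        # The cell just left of the head becomes the new head cell.
        self.right.append(self.left.pop() if self.left else self.BLANK)
```

Reducing a three-stack (or k-stack) machine to a two-stack one then amounts to laying the extra stacks out as marked segments on such a simulated tape, with the finite-state control doing the bookkeeping.

```python
tape = TwoStackTape("ab")
tape.read()        # 'a'
tape.move_right()
tape.read()        # 'b'
```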
The example above turns out to be typical. Every less restrictive idealised machine people have been able to imagine, so long as we allow only operations which could be mechanically realised in finite time, turns out to be no more powerful than a Turing machine in terms of what it can compute. This observation has led to what’s called the Church-Turing thesis, which is essentially the statement that computable means computable by a Turing machine, i.e. that there is no more powerful notion of computability waiting to be discovered. This thesis isn’t susceptible to rigorous proof, or even to a fully rigorous formulation, but it is almost universally believed by people who study the theory of computation.