Paul Cockshott and Greg Michaelson

Preface

The topic of hypercomputation is arcane, to say the least. Why then have we spent considerable intellectual effort[6] on it over the last decade?

We argue against it because it is an unscientific reversion to idealist models of mathematics. By itself that would be an issue only for computational theorists, but it becomes a philosophical and political matter when authors like Marciszewski and Bringsjord start to use ideas of hypercomputation as a basis for the defence of free market capitalism. We have already argued specifically against the former author. At this point we are not going to go into Bringsjord’s economic arguments; we are going to concentrate on his attempt to establish the concept of hypercomputation – a sort of mystical calculation that human souls and the divine free market are alleged to be able to perform.

1  Introduction

“Inside the museums, Infinity goes up on trial

Voices echo this is what salvation must be like after a while…”

Bob Dylan, Visions of Johanna, from Blonde on Blonde, CBS, 1966.

Since Turing and Church’s pioneering work in the 1930s, the dominant paradigm in computability theory has been characterised by the Church-Turing Thesis (CTT) that all models that capture the notion of algorithm are equivalent. An important corollary of CTT is that limitations established for any one model hold for all other models. In particular, all CTT models share some equivalent of the undecidability of the halting problem for Turing machines, placing fundamental restrictions on what can be computed both theoretically and practically.

CTT has certainly not gone unchallenged. Enthusiasts of what are variously termed hypercomputation, super-Turing computing and super-recursion claim that such limitations are due to the inadequacies of the classic models of algorithms, which may be overcome by new models, in particular those that draw on transfinite mathematics or exotic properties of physics.

Martin Davis has been a leading proponent of the CTT for over 50 years and a vocal opponent of the idea of hypercomputing[9,10]. In a paper playfully entitled The Myth of Hypercomputation[8], written for the 2004 Turing Festschrift edited by Teuscher[24], Davis reprises his critiques of hypercomputation, stating that such claims:

…amount to little more than the obvious comment that if non-computable inputs are permitted, then non-computable outputs are attainable.(p1)

Davis further observes, in response to claims that physical devices might overcome theoretical limits:

A crucial and often ignored aspect … is that while the abstract theory is involved essentially with the mathematical infinite, physical computers are necessarily finite objects. (p2)

Thus, Davis notes that attempting to lever some uncomputable physics has no practical value:

What is usually taken to be the ultimate test of a physical theory, agreements with measurements to the extent that instruments permit, would be of no use, because no finite amount of information can verify the value of an infinite precision real number. (p13)

One of the targets of Davis’s criticism, the roboticist Selmer Bringsjord, has, with Naveen Sundar, written a trenchant reply to Davis[23]. Sundar and Bringsjord say that hypercomputing as a research field would be a success if any of the following possibilities is true:

• POSSIBILITY 1 or η1: The CTT is false; and it follows that there exist effective computations for functions that aren’t Turing-computable.

• POSSIBILITY 2 or η2: There are hypercomputational physical phenomena that may or may not be harnessable. In this case, the functions representing the dynamics of such phenomena are of course Turing-uncomputable.

• POSSIBILITY 3 or η3: There are hypercomputational cognitive phenomena that may or may not be harnessable. In this case, the functions representing the dynamics of such phenomena are of course Turing-uncomputable. (p1)

Sundar and Bringsjord do not claim that their paper proves that any of these possibilities is true; instead they want to defend them against Davis’s scepticism. This distinction between proving that hypercomputation is a successful research field and defending it against scepticism means that they do not have to actually demonstrate that there is any substance to hypercomputation. Instead they set themselves the somewhat easier task of trying to show that Davis has not fully proven the falsity of all three propositions.

But this is to shift the responsibility somewhat. If we say that unicorns are a myth, we do not have to prove that unicorns could not possibly ever have existed. It is enough to point out that there is nothing in the way of evidence for real life unicorns1. On the other hand, if you dispute a creature’s mythical status, whether it be unicorns, or the Loch Ness Monster, you had better produce some convincing evidence if you want to be taken seriously. We will see whether Sundar and Bringsjord have even a grainy photo to support their claims.

2  Sundar and Bringsjord’s Possibility 1

Let us take the first possibility. Sundar and Bringsjord accuse Davis of simply assuming the truth of the Church-Turing thesis. To back this up they quote Davis as saying:

During the 1930s, as the result of the work of a number of logicians, it became possible to explain with full precision what it means to say for some given problem that an algorithm exists providing a solution to that problem.

This, they allege, means that Davis is assuming the negation of what he is trying to disprove (Possibility 1).

Bear in mind that these three possibilities are ones advanced by Bringsjord and Sundar after they read Davis[8]. They are not propositions that Davis himself states or sets out to disprove; the article advancing them had yet to be written, so Davis could not have been addressing Possibility 1 in this passage. Instead he is simply recounting the history of the development of the formal theory of algorithms. Immediately before the passage Bringsjord and Sundar cite, Davis wrote:

mathematicians had often found solutions to problems in the form of procedures that could be carried out in a step by step fashion, where each step was entirely mechanical and capable of being carried out without any creative thought, such procedures are called algorithms.[8]

There follows a footnote in which he discusses the origins of the term algorithm and its use prior to the 20th century. So Davis is saying that the concept of algorithm was well known prior to the 1930s, but that it was only then that it was given a precise meaning, and only then that the limitations of algorithms were, for the first time, discovered. There is nothing exceptional about this, unless Bringsjord and Sundar are arguing that the logicians of the 1930s had in some way misunderstood or misrepresented the concept of an algorithm as known to mathematics prior to that decade. But recall the specific context of Turing’s seminal paper[26], revealed in its title: “With an Application to the Entscheidungsproblem”. His paper addressed the outstanding problem in Hilbert’s formalist programme: the decision problem (Entscheidungsproblem).

The Entscheidungsproblem is solved when one knows a procedure by which one can decide in a finite number of operations whether a given logical expression is generally valid or is satisfiable. The solution of the Entscheidungsproblem is of fundamental importance for the theory of all fields, the theorems of which are at all capable of logical development from finitely many axioms.[11]

Hilbert was explicitly asking for an algorithm that can be performed in a finite number of operations. Turing’s paper was a proof that in the general case no such finite algorithm exists, the implication being that Hilbert’s formalist programme for the foundations of mathematics must fail. It is unclear whether, as an advocate of hypercomputing, Bringsjord is disputing this fundamental result of Turing. Does he think that a general decision procedure can exist which will determine whether an arbitrary theorem T can be derived from the axioms A of a formal system?

So was the formalist programme of Hilbert justified after all?

In one sense, advocates of hypercomputing presumably believe this to be the case, since they claim that in principle it is possible to construct some form of physical apparatus that could solve the halting problem. Turing showed that if we assume that a programme can exist that will determine whether an arbitrary TM will halt, then we can derive a paradox which implies that no such programme can exist. The arbitrary TM could be Hilbert’s putative decision procedure D, the general algorithm for deriving the validity of any T from any A. Since we cannot in general show that such an algorithm, D(T,A), will halt, there can be no such decision procedure. If the advocates of hypercomputing were right, though, and there could exist some sort of general purpose hypercomputing device H that will determine whether an arbitrary TM will halt, then the application H(D(T,A)) would let you determine whether any theorem is deducible from any set of axioms. So the hypercomputationists would seem to have answered Hilbert in the affirmative.

But not quite.

Hilbert was asking for an algorithmic decision procedure that operates in a finite number of steps. Whilst D is an algorithm, the hypercomputer applied to it, H(D), is not, since H is assumed to be some kind of non-algorithmic process. H cannot be algorithmic, since if it were, one could, by negated self-application, derive the same sort of paradox as Turing derived.
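The negated self-application at the heart of Turing’s argument can be sketched in a few lines of Python. The decider halts below is of course hypothetical, and the sketch simply makes explicit why no such function can be an ordinary program:

    # A minimal sketch of Turing's diagonalisation, assuming (for contradiction)
    # that a halting decider could be written as an ordinary program.

    def halts(program, data):
        """Hypothetical: return True iff program(data) eventually halts."""
        raise NotImplementedError("no algorithm can do this in general")

    def paradox(program):
        # Negated self-application: loop forever exactly when the decider
        # says the program halts when fed itself as input.
        if halts(program, program):
            while True:
                pass
        else:
            return

    # paradox(paradox) halts iff halts(paradox, paradox) returns False,
    # i.e. iff paradox(paradox) does not halt: a contradiction. So halts()
    # cannot be algorithmic; any H that decided halting would have to be
    # something other than a program.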

So, paradoxically, the existence of H would neither refute Turing nor rescue Hilbert: Turing’s proof that there exists no finite algorithmic procedure that can act as a general decision process would be left untouched. Unless, that is, the Church-Turing thesis in the narrow sense turns out to be invalid. In the narrow sense this thesis asserts that:

CTT-Narrow: Either the Lambda calculus or Universal Computing Machines as proposed by Turing can stand in for the finite algorithmic procedure that Hilbert was talking about.

The question then turns on what Sundar and Bringsjord mean by the Church-Turing Thesis. The paper is not clear on this, but the slides that they used for their conference presentation do give a definition, as follows:

TT: A numerical (total) function is effectively computable by some algorithmic routine if and only if (= iff) it is computable by a Turing machine.

CT: A numerical (total) function is effectively computable by some algorithmic routine if and only if (= iff) it is μ-recursive.

CTT: The effectively computable total numerical functions are the μ-recursive/Turing computable functions.

For the Turing Thesis (TT) and the Church Thesis (CT) they are careful to specify that they are talking about functions effectively computable by algorithmic routines. But for the Church-Turing Thesis (CTT) they drop the specification that the computation is to be algorithmic. They have thereby strengthened the thesis from one about algorithmic computation to one about all forms of computation, including, for example, analogue non-algorithmic calculations. It is widely believed by computer scientists that the Church-Turing Thesis in this wider sense does hold, but the wider sense is not what is relevant either to Turing’s writing on the decision problem or to the passage they quoted from Martin Davis. For Turing’s argument against the possibility of a Hilbertian decision procedure, we only need CTT-Narrow. The same applies to the passage from Davis about the limits of algorithms that his critics quote. To challenge this, the critics would have to exhibit a way of expressing and evaluating algorithms that is capable of implementing the Hilbertian decision procedure, i.e., one that avoids the Turing critique or something equivalent to it. Sundar and Bringsjord give no such algorithmic system.

3  Sundar and Bringsjord’s Possibility 2

The literature on hypercomputing focuses instead on proposing various physical devices which it is hoped would be able to compute things that no algorithm could. This is the subject of Sundar and Bringsjord’s second possibility:

There are hypercomputational physical phenomena that may or may not be harnessable.

This is a proposition that tries to have it both ways. They want to be able to claim success even if it turns out that there are no physical phenomena that can be harnessed to perform computations beyond the reach of algorithms. It would be reasonable if the phenomena in question were merely unharnessable for temporary technical reasons, in the way that Babbage could not have built a computer using switching transistors given 19th century technology. Of course we cannot predict how physical theory will develop in the future, but if existing physics tells us that the proposed processes may not be harnessed for computation, then one can hardly hold up such processes in support of hypercomputation.

Sundar and Bringsjord claim that some 20 models of hypercomputation have so far been proposed, and that a refutation of hypercomputing would require a demonstration that these are physically impossible. They claim that no such demonstration can be found in the literature. On the contrary, there is a body of literature that examines individual proposals for hypercomputational processes and shows flaws in them.

In [8] for example, Davis critiques models such as those of Siegelmann[21] which require uncomputable inputs. Davis observed that proposals for hypercomputing rely on infinite resources of one sort or another.

In this article it will be shown that such claims fly in the face of the inability of all currently accepted physical theories to deal with infinite precision real numbers. When the claims are viewed critically, it is seen that they amount to little more than the obvious comment that if non-computable inputs are permitted, then non-computable outputs are attainable. [8, abstract]

Sundar and Bringsjord object that not all hypercomputation models demand uncomputable inputs. True, but the others rely on infinities in other ways.

In [6, chapter 6] the critique of Davis is extended to all proposals that require exact real numbers, whether as inputs, outputs or intermediate values. Zeno machines[7], black hole machines[17], Newtonian machines[2] and analogue optical machines[4,3], all of which use infinities, are critiqued in [6, chapter 8]. The proposals of Wegner and Eberbach[29,30] to use new calculi or interaction machines are critiqued in [5]. Adiabatic quantum relaxation approaches[13], which involve infinitely accurate measurements of energies, have been critiqued in [22,25,10,12].

4  Sundar and Bringsjord’s Possibility 3

The third possibility is that there may be hypercomputations based on cognitive phenomena. To explore this, Sundar and Bringsjord consider a model of a scientist presented with symbol sequences which are hypothesised to belong to the same language. By observing the sequences, the scientist hopes to learn a language which can generate them.

As we argue above, in their defence of Possibility 1, Sundar and Bringsjord wrongly castigate Davis for allegedly assuming CTT, that is, for implicitly assuming the negation of Possibility 1. Ironically, here Sundar and Bringsjord explicitly assume that their scientist has hypercomputational properties:

In most analyses in CLT [Computational Learning Theory], no computational limits are placed on the learner F. Only information theoretic properties of F are considered. So the scientist F can well be hypercomputational. (p7)
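To make the setting concrete, here is a toy Python sketch of the kind of learner CLT envisages: a scientist F that, after each observation, conjectures the first hypothesis consistent with everything seen so far. The hypothesis list and names are ours and purely illustrative, not Sundar and Bringsjord’s; the point is that nothing in the learning setting itself requires F to be anything more than an ordinary program, so any hypercomputational power of F has to be assumed rather than derived.

    # A toy sketch of the CLT setting: a "scientist" F sees a growing sequence
    # of strings and, after each observation, conjectures a language for them.
    # The hypothesis space is a small illustrative list of predicates; nothing
    # here is hypercomputational, F is an ordinary program.

    HYPOTHESES = [
        ("only a's",            lambda s: set(s) <= {"a"}),
        ("a's followed by b's", lambda s: s == "a" * s.count("a") + "b" * s.count("b")),
        ("equal a's and b's",   lambda s: s.count("a") == s.count("b")),
        ("any string",          lambda s: True),
    ]

    def scientist(observations):
        """Conjecture the first hypothesis consistent with all observations so far."""
        for name, accepts in HYPOTHESES:
            if all(accepts(s) for s in observations):
                return name
        return "no hypothesis fits"

    seen = []
    for s in ["ab", "aabb", "aaabbb"]:
        seen.append(s)
        print(s, "->", scientist(seen))

Whether F may instead be credited with more-than-Turing powers is exactly the question at issue; CLT, as Sundar and Bringsjord note, simply leaves it open.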

Sundar and Bringsjord are in a longstanding tradition of special pleading for human abilities. Thus, Nagel and Newman[16] deploy Gödel’s Theorems to claim that human cognition transcends formal systems, and Penrose[18] claims that human consciousness cannot be CTT computable.

At heart, Possibility 3 depends on an explicit distinction between cognition and physics. We dispute that cognitive processes can have anything other than physical actuality: humans are constructed of finite matter, and exist for finite time in bounded space. Certainly, Sundar and Bringsjord present neither evidence nor argument for this distinction.

5  Infinite and unbounded tapes

Finally, Sundar and Bringsjord claim that different standards are applied to CTT models and hypercomputation. In particular, they say that it is unreasonable to criticise hypercomputations for requiring infinite resources when Turing machines require infinite tapes, citing a hypercomputing folk argument:

If one believes that an idealised Turing machine can be implemented, then one cannot dismiss hypercomputation on the grounds of implausibility of infinite resources. In other words, if approximate implementations of Turing machines are enough for Turing machines to be considered “real”, then why apply a radically more demanding set of desiderata to hypercomputers? (p9)

This is a perfectly reasonable objection to those who portray Turing machines as having infinite tapes, as indeed Davis does. However, Turing’s original Entscheidungsproblem paper[26] makes no such assumption. Rather, at each stage in a computation the tape is of fixed length, and additional tape cells are added as required. Thus, the tape is finite but unbounded.
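The distinction is easy to make concrete. The following Python sketch, whose class and method names are ours and purely illustrative, implements a tape in the spirit of Turing’s original formulation: at any moment only finitely many cells exist, and blank cells are appended only when the head moves onto them.

    class Tape:
        """A finite but unbounded tape: it grows piecemeal, only on demand."""
        BLANK = ' '

        def __init__(self, contents=''):
            self.cells = list(contents) or [self.BLANK]

        def read(self, pos):
            return self.cells[pos]

        def write(self, pos, symbol):
            self.cells[pos] = symbol

        def move(self, pos, direction):
            pos += 1 if direction == 'R' else -1
            if pos == len(self.cells):         # stepped off the right-hand end:
                self.cells.append(self.BLANK)  # add one more cell, never infinitely many
            elif pos < 0:                      # stepped off the left-hand end
                self.cells.insert(0, self.BLANK)
                pos = 0
            return pos

At every step such a tape is a finite object; it is unbounded only in the sense that nothing prevents it from growing further when required.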

We think that there are two reasons for the widespread presentation of Turing machines as having infinite tapes. First of all, Post machines[19], which are contemporaneous with and very like Turing machines, do require infinite tapes, and Post machines are a CTT model. Thus, it might appear that whether a tape is finite but unbounded or infinite has no effect on the computational power of the machine. More significantly, Turing himself, in a later paper[27], refers to his machines as having infinite tapes.

Our objection to the use of infinities in hypercomputation applies equally to the popular characterisation of infinite Turing machine tapes. In Aristotelian terms[1], these require the presence of an actualised infinity of resource as a prerequisite for their realisation. In contrast, Turing’s original formulation only requires what Aristotle termed a potential infinity, that is the ability to add individual units of resource piecemeal as required.

Sundar and Bringsjord are protagonists of exploiting actualised infinite resource in hypercomputing and appeal to infinitary logics in justification:

Standard formal proofs in first-order logic, when checked, are finite in length; but formal proofs in even the smallest infinitary logics (e.g., Lω₁ω) are allowed to be infinitely long (specifically, countably infinite). In decades of mathematical work since the advent of infinitary logics, no one to our knowledge has even entertained the notion that such work on infinitary logic is anything but just good, solid formal science. (p9)

However, infinitary logics, like all mathematics of infinite constructs, deal in rules governing the manipulation of finite representations. That is, an infinity, even a countable infinity, never needs to be actualised. In contrast, as noted above, many hypercomputational schemes require an actualised infinity upfront.
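The point can be illustrated in a couple of lines of Python. Consider an ω-indexed family of conjuncts, say the conjuncts of an infinitary formula asserting that n + 0 = n for every natural number n (an example of our own choosing, purely for illustration). What the mathematics actually works with is a finite representation, here a generator, from which any particular conjunct can be produced on demand; the infinity remains potential throughout.

    from itertools import islice

    def conjuncts():
        """Finite representation of a countably infinite family of conjuncts."""
        n = 0
        while True:
            yield f"{n} + 0 = {n}"   # the n-th conjunct, produced only when asked for
            n += 1

    # We can inspect any finite prefix, but the completed infinite conjunction
    # is never actualised as an object.
    print(list(islice(conjuncts(), 5)))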

6  Milner and hypercomputation

Amongst their references for research on hypercomputation (p1), Bringsjord and Sundar list Milner’s 1993 Turing Award lecture[14] and his 1999 book on the π-calculus[15]. However, neither reference actually mentions hypercomputation.

Milner’s book does note that both the λ-calculus and Turing machines can be encoded in the π-calculus but says nothing about modelling the π-calculus in either formalism. Thus, without further consideration, it is conceivable that the π-calculus is more powerful than either of the other formalisms.

However, Sangiorgi and Walker[20] discuss establishing the undecidability of π-calculus equivalence by reducing the problem to that of λ-calculus equivalence, which is undecidable. Indeed, equivalence has long been known to be undecidable for CTT systems.

Furthermore, Vasconcelos and Ravara[28] establish that the presence of communication errors in the π-calculus is undecidable. Again, they reduce the problem to one in the λ-calculus, showing it equivalent to that of establishing whether or not an arbitrary λ-expression has a normal form, again long known to be undecidable.

Sundar and Bringsjord also cite Wegner[29], who, with Eberbach[30], has claimed that the π-calculus is hypercomputational. We have refuted this claim[5].

7  Conclusion

Healthy science is characterised by strong debate. On that measure, as the papers by Davis, and Sundar and Bringsjord attest, computability is in robust health. Nonetheless, ultimately, as the old saw has it, the proof of the pudding is in the eating.

Sundar and Bringsjord acknowledge that there is, as yet, no evidence that hypercomputers can actually be constructed. We remain unconvinced that there is any evidence that hypercomputation is even possible.

Despite the limitations inherent in CTT computing, digital computers have truly revolutionised how the inhabitants of this peculiar planet of ours model and manage reality, in the 75 years or so since Turing’s seminal paper. Thus, with Davis, we think that there is some onus on those who claim to be able to better this astonishing track record, to demonstrate an actual, not a potential, solution to some concrete instance of a CTT undecidable problem.

References

[1]
Aristotle. Metaphysics. University of Michigan Press, Ann Arbor, Mich., 1960.

[2]
E. J. Beggs and J. V. Tucker. Embedding infinitely parallel computation in Newtonian kinematics. Applied Mathematics and Computation, 178(1):25-43, 2006.

[3]
E. Blakey. A Model-Independent Theory of Computational Complexity. Price: From Patience to Precision (and Beyond), 2008.

[4]
Olivier Bournez and Michel Cosnard. On the computational power and super-Turing capabilities of dynamical systems. Technical Report 95-30, École Normale Supérieure de Lyon, 1995.

[5]
P. Cockshott and G. Michaelson. Are There New Models of Computation? Reply to Wegner and Eberbach. The Computer Journal, 50(2):232, 2007.

[6]
Paul Cockshott, Lewis M Mackenzie, and Gregory Michaelson. Computation and its Limits. Oxford University Press, 2012.

[7]
J. Copeland. Accelerated Turing Machines. Minds and Machines, 12:281-301, 2002.

[8]
Martin Davis. The myth of hypercomputation. In C. Teuscher, editor, Alan Turing: life and legacy of a great thinker, pages 195-211. Springer, 2004.

[9]
Martin Davis. The Church-Turing Thesis: Consensus and Opposition. In Logical Approaches to Computational Barriers, LNCS, pages 125-132. Springer, 2006.

[10]
Martin Davis. Why there is no such discipline as hypercomputation. Applied Mathematics and Computation, 178(1):4-7, 2006.

[11]
David Hilbert and Wilhelm Ackermann. Principles of mathematical logic, volume 69. American Mathematical Soc., 1950.

[12]
Andrew Hodges. Can quantum computing solve classically unsolvable problems? arXiv preprint quant-ph/0512248, 2005.

[13]
Tien D. Kieu. Quantum algorithm for Hilbert’s tenth problem. International Journal of Theoretical Physics, 42:1461-1478, 2003.

[14]
R. Milner. Elements of Interaction: Turing Award Lecture. Communications of the ACM, 36(1), 1993.

[15]
R. Milner. Communicating and mobile systems: the π-calculus. CUP, 1999.

[16]
E. Nagel and J. Newman. Gödel’s Proof. New York University Press, 1958.

[17]
I. Németi and G. David. Relativistic computers and the Turing barrier. Applied Mathematics and Computation, 178(1):118-142, 2006.

[18]
Roger Penrose. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press, March 1999.

[19]
E. L. Post. Finite Combinatory Processes. Formulation 1. Journal of Symbolic Logic, 1:103-105, 1936.

[20]
D. Sangiorgi and D. Walker. The π-calculus: A Theory of Mobile Processes. CUP, 2001.

[21]
H. T. Siegelmann. Computation Beyond the Turing Limit. Science, 268(5210):545-548, 1995.

[22]
Warren D. Smith. Three counterexamples refuting Kieu’s plan for “quantum adiabatic hypercomputation” and some uncomputable quantum mechanical tasks. Applied Mathematics and Computation, 187(1):184-193, 2006.

[23]
N. G. Sundar and S. Bringsjord. The Myth of ‘The Myth of Hypercomputation’. Parallel Processing Letters, 22(3), 2012.

[24]
C. Teuscher, editor. Alan Turing: life and legacy of a great thinker. Springer, 2004.

[25]
Boris Tsirelson. The quantum algorithm of Kieu does not solve Hilbert’s tenth problem. Technical Report quant-ph/0111009, arXiv, 2001.

[26]
A. Turing. On Computable Numbers, With an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42:230-65, 1937.

[27]
A. Turing. Lecture on the Automatic Computing Engine, 1947. In B. J. Copeland, editor, The Essential Turing. OUP, 2004.

[28]
V. T. Vasconcelos and A. Ravara. Communication Errors in the pi-Calculus are Undecidable. Information Processing Letters, 71(5-6):229-233, 1999.

[29]
P. Wegner. Why interaction is more powerful than algorithms. Communications of the ACM, 40(5):80-91, 1997.

[30]
P. Wegner and E. Eberbach. New Models of Computation. Computer Journal, 47:4-9, 2004.

Footnotes:

1 Yes, there are so-called unicorn horns, but these are all narwhal tusks.
