After posting a video of the Turing Centenary talk that Greg Michaelson and I did, I got sent a question: “I didn’t exactly get it why didn’t you consider both UTC and LC to be equivalent.”

The question is understandable. After all, are we not taught that the Lambda Calculus and the Universal Digital Computer are equivalent?

It depends on what one means by equivalent. In an ideal, or idealist, sense they are equivalent. If a function can be computed by the lambda calculus, the function can be translated into a form that a universal digital computer can also compute and vice versa.

But from a materialist standpoint they are very different.

The lambda calculus by itself can compute nothing. It is only useful when you have a lambda calculus interpreter running on an actual computer, in which case it is not the lambda calculus that actually does the computation. The LC is a virtual machine running on a real machine, and it is the real machine which does the actual work.
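The point can be made concrete. Here is a minimal sketch (my own illustration, with a deliberately simplified term representation) of such an interpreter: normal-order beta reduction written in Python. Note that every "reduction" is in fact a sequence of machine operations on the host computer.

```python
# Minimal lambda-calculus interpreter: the "virtual machine" that must
# itself run on a real computer before the LC computes anything.
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(term, name, value):
    """Substitute `value` for free occurrences of `name`
    (assumes bound names are distinct, so no capture handling)."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:        # name is shadowed here; stop
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def reduce_normal(term):
    """Repeatedly contract the leftmost-outermost redex."""
    while True:
        if term[0] == 'app' and term[1][0] == 'lam':
            term = subst(term[1][2], term[1][1], term[2])   # beta step
        elif term[0] == 'app':
            f = reduce_normal(term[1])
            if f == term[1]:
                return ('app', f, reduce_normal(term[2]))
            term = ('app', f, term[2])
        elif term[0] == 'lam':
            return ('lam', term[1], reduce_normal(term[2]))
        else:
            return term

# (\x. x) applied to (\y. y) reduces to \y. y
identity = ('lam', 'y', ('var', 'y'))
result = reduce_normal(('app', ('lam', 'x', ('var', 'x')), identity))
```

Without a physical processor executing `subst` and `reduce_normal`, the calculus reduces nothing at all.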

In principle the actual computer that the LC is running on could be a person: a mathematician who has learned the rules of the LC, or who has a crib sheet giving them those rules. Indeed, when Turing was writing in the 1930s, the word computer normally referred to a person who did computation.

The machine that Turing proposes is one directly modelled on what a person does when they compute. So Turing was, from the start, going in at a lower level than Church. Church relied on there being a human computer to do the actual work with his calculus. Turing asks: what does a mathematician do when they apply any calculus?

He then proposes a mechanism that mimics the essential steps that the human does. He thus comes up with something more general than Church, since once you build the machine and supply it with energy it can do any calculations – including acting as an interpreter for the LC.
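That generality is easy to exhibit: the whole machine is a finite rule table plus an unbounded tape. A toy simulator (my own sketch, not Turing's exact formulation) fits in a few lines:

```python
def run_tm(rules, tape, state='start', head=0, max_steps=1000):
    """Rules map (state, symbol) -> (symbol_to_write, move, new_state).
    The tape is held as a dict, so it is unbounded in both directions."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == 'halt':
            break
        sym = tape.get(head, '_')                    # '_' is the blank symbol
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape))

# Example rule table: unary increment — scan right over 1s, append a 1, halt.
rules = {
    ('start', '1'): ('1', 'R', 'start'),
    ('start', '_'): ('1', 'R', 'halt'),
}
result = run_tm(rules, '111')    # '1111'
```

Any calculation, including running the LC interpreter above, is just a larger rule table fed to the same fixed mechanism.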

This forces us to consider computation as a material rather than an ideal process, and actually computer designers end up being preoccupied with very material considerations: minimising power use, disposing of waste heat, ventilation etc.

From the Universal Digital Computer you can move on to see that there are material limits to computation over and above the logical limits to computability originally outlined by Turing in his first paper: limits to speed set by the speed of light, and thermodynamic limits (the Landauer limit) on the energy cost of erasing information. If you approach the issue from the level of the LC, none of this is apparent.
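The thermodynamic limit can be put in numbers. Landauer's bound says erasing one bit must dissipate at least kT ln 2 of heat; a quick back-of-envelope calculation at room temperature:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0                 # room temperature, kelvin

# Minimum heat dissipated per bit erased (Landauer's bound)
e_bit = k_B * T * math.log(2)    # about 2.87e-21 joules

# Even a machine erasing 10^18 bits per second at this ideal
# bound would still dissipate measurable power:
power = e_bit * 1e18             # watts, roughly 2.9 mW
```

Real gates today dissipate many orders of magnitude more than this bound, but the point stands: the limit exists only once computation is treated as a physical process.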

Indeed the idealist abstractions that the LC encourages have in the past led to serious diversions of effort by platonist computer scientists. For example, during the Alvey Programme in the UK in the 1980s, the influence of the platonists was so strong that they were able to promote multi-million-pound programmes to build LC machines that would, they claimed, achieve a high level of parallelism. The promise rested on the fact that it is in principle possible to evaluate the arguments of a pure functional language in parallel.

A machine called Alice^{1} was built using over 100 32-bit processors (Transputers). Greg and I were able to get access to it and benchmark it, and discovered that it actually performed worse when running a functional language than a single-processor machine made by a competing firm (an HLH Orion).

Actual parallel performance is constrained by the ability to transmit data, the bandwidth of channels, and similar considerations. The ideal functional independence of the parameters of LC-like languages plays a very small part, since LC-like languages have to run as virtual machines on top of a physical machine, and it was by concentrating on the constraints of the parallel physical machine that advances were made. The whole project of functional languages as a route to parallelism died a death on this account.
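The constraint can be shown with a toy cost model (the numbers below are purely illustrative, not measurements from Alice): even if every argument of a pure function can be reduced independently, speedup saturates once the shared communication channel is counted.

```python
def speedup(p, work, comm, tasks):
    """Toy model: `work` units of computation divide perfectly across
    p processors, but each of `tasks` fine-grained results must cross
    a shared channel costing `comm` units, and that channel traffic
    does not shrink as p grows."""
    return work / (work / p + comm * tasks)

# 10^6 units of work split into 10^4 fine-grained tasks,
# each costing 10 units of shared-channel communication:
s100 = speedup(100, 1e6, 10, 1e4)       # about 9.1
s_huge = speedup(10**9, 1e6, 10, 1e4)   # still below 10
```

However many processors are added, the speedup is capped near work / (comm × tasks) = 10: the channel, not the ideal independence of the redexes, sets the limit.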

It is the geometry of 3D space within which we must transmit information that imposes the fundamental limits to parallel computation.

1. Harrison, P.G., Reeve, M.J., 1987. The parallel graph reduction machine, ALICE. *Lecture Notes in Computer Science*, Vol. 279, pp. 181–202. ISSN 0302-9743.

A few reflections on this, Paul.

1. This is a clear argument that while LC and UTC are “ideal[ly …] equivalent” they differ from a materialist standpoint, and that the materialist view imposes additional limits on computation. Perhaps be careful that naive readers may contrast “ideal equivalence” with “material equivalence”, as this is not what you mean.

2. I’ve worked for many years on parallel (LC-based) functional languages, and like many others in the field have battled with “the idealist abstractions that the LC encourages” — for example, countering arguments that “of course all reducible expressions in a lambda term can be reduced in parallel”.

3. Interestingly, one of the key aims in the Alvey programme was far from the LC’s idealist abstractions: it aimed to explore alternative LC-based computer architectures to address a physical limitation, the von Neumann bottleneck identified by John Backus in his 1978 Turing Award lecture.

Many believe that LC architecture efforts like Alvey failed for a combination of practical reasons, primarily: (1) the performance of von Neumann machines improved rapidly during the development of the new LC architectures, which were often based on last-generation chips; and (2) the new machines lacked system software: operating systems, compilers, and other tools.

Interesting points, Phil.

I was on a competing Alvey strand to build persistence machines.

On the von Neumann bottleneck, this was of course very real. But to solve it you had to go to a much lower level, I think. You had to distribute processing elements in the memory the way that the Distributed Array Processor did, or Hillis’s first design did.

That path led to array processors and to array dialects of Lisp and Fortran. In addition, it is the path that eventually gave us GPUs and CUDA, once it became possible to put large numbers of caches on the die.

The ideas of combinators and data flow had in a sense already been standard in IBM mainframes since the 360 series, with their common data bus and their reservation stations. But there it was being done by hardware engineers at a sub-instruction-set level. The Alvey effort attempted to solve the problem at the level of the high-level language framework, which was too far removed from the bare metal to pay off. Another comparable failure was the Intel 432 project, which again tried to pose the problem at the high-level-language plane (for Ada in this case). Again the performance was absolutely terrible for the number of transistors used.