Wednesday, December 10, 2003

Turing Gets the Last Laugh?

In 1936, twenty-two-year-old Alan Turing published On Computable Numbers, with an Application to the Entscheidungsproblem (how are you doing?), which both set out the architecture of (what would later be called) the Turing Machine and answered Hilbert's Entscheidungsproblem, the "decision problem": is there a mechanical method to determine whether a proof exists for a given statement? The answer was "no," and thus Turing ushered in the idea of noncomputable functions. Well, not exactly. It turned out that Princetonian Alonzo Church had beaten Turing to the punch, publishing a similar result only a few months earlier. Church, however, used the lambda calculus. It was later shown that the two results were equivalent, and that Turing Machines and the lambda calculus compute exactly the same functions. The wider idea, that every reasonable formalization of mechanical computation has this same power, is thus known as the Church-Turing Thesis. (It is, contrary to many references, not proven; since it makes a claim about the informal notion of a "mechanical method," it is not the sort of thing that admits proof.)

Flash forward fifty or sixty years. Or even to today. Two camps, one admittedly much smaller than the other, believe in different kinds of programming. One believes that programs should address memory, read and write variables, execute loops, and generally act as if they are specifying a set of rules to be followed by a machine. The other believes, well, that programs should be written in a dressed-up version of the lambda calculus. The first camp is, of course, the programmers of C and its ilk; the second is the Lisp world. And of course the two are equivalent in expressive power, because Turing and Church showed that their formulations are equivalent. But these camps are essentially followers of Alan or Alonzo, each sticking to its favorite metaphor.
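To make the contrast concrete (this example is mine, not the post's), here is the same trivial computation written in both idioms. Python is used only because it happens to accommodate both styles in one file; the function names are of course made up for illustration.

```python
from functools import reduce

# The Turing camp: mutate state step by step, like a machine
# executing a loop over its memory.
def sum_imperative(xs):
    total = 0
    for x in xs:
        total += x
    return total

# The Church camp: no mutation, just function application.
# A fold is the lambda-calculus way of consuming a list.
sum_functional = lambda xs: reduce(lambda acc, x: acc + x, xs, 0)

print(sum_imperative([1, 2, 3, 4]))   # 10
print(sum_functional([1, 2, 3, 4]))   # 10
```

Both produce the same answer, which is the Church-Turing point in miniature: the two styles differ in metaphor, not in what they can compute.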