Thursday, November 20, 2003

Why Analog?

Why is it called "analog" computing? Why, when digital clocks came out (meaning, of course, both that they displayed digits and that they offered a digital abstraction of time, of absolute discrete units), did we start calling clockwork clocks "analog"?

At least according to Andrew Hodges, it's for an incredibly obvious and good reason. Maybe this was obvious to everyone but me? The earliest analog computers calculated things like the tides. Calculating tides, like lots of good computing applications, is a conceptually easy but tedious operation. You've got the Earth turning, the moon slowly sliding around the Earth, and the Earth turning around the Sun -- all factors (plus, of course, local geography) in determining when the water will come up over your toes. So the "computers" that calculated the tides used physical representations: interlocking gears sized in the right relative proportions so that the output was calibrated to the right degree of input from each gravitational source.
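A digital sketch may make the gears concrete: what a mechanical tide predictor computes is just a sum of sinusoids, one per astronomical cycle, with each gear's size setting a period. The angular speeds below are standard harmonic constituents; the amplitudes and phases are made-up illustrative numbers, not data for any real port.

```python
import math

# Each "constituent" stands in for one gear in the mechanical predictor:
# a periodic contribution from one astronomical cycle.
# Speeds (degrees per hour) are the standard harmonic values; amplitudes
# (meters) and phases (degrees) are hypothetical, for illustration only.
CONSTITUENTS = [
    # (name, speed deg/hr, amplitude m, phase deg)
    ("M2", 28.984, 1.20, 0.0),   # principal lunar semidiurnal
    ("S2", 30.000, 0.40, 30.0),  # principal solar semidiurnal
    ("K1", 15.041, 0.15, 60.0),  # lunisolar diurnal
]

def tide_height(hours, mean_level=0.0):
    """Predicted water level (meters) at `hours` past the reference epoch."""
    return mean_level + sum(
        amp * math.cos(math.radians(speed * hours - phase))
        for _, speed, amp, phase in CONSTITUENTS
    )

# One day of hourly predictions.
heights = [tide_height(h) for h in range(25)]
```

The tide machine did this same summation continuously and in parallel, simply by turning; the digital version has to grind through the arithmetic one time-step at a time.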

The system was -- get ready for it -- an analog of the physical system. Analog, in the sense of analogous or analogy. The physicality of the computer was directly analogous to the physicality of the system being modeled. Analog computing, then, isn't just the opposite of digital, in the sense of using real numbers instead of the digital abstraction; it's computing that, rather than relying on discrete math, is constructed by physical analogy.

This reveals why analog computing is potentially so much more efficient at some simulations: it has the universe on its side. In some sense it isn't even simulating so much as it is similar ("similarating?"). It also suggests why it is ultimately limited: it may exploit that universal similarity, but it is not universal; it can only be an analogy to one kind of system. If you want an analog to another system, you'll need another computer.