Wednesday, December 24, 2003

Trickster Redux

I'll have some more content-ful reflection on Hyde's book after the holidays (and after I finish it), but it's so full of great quotes that I'll just pass on one more.
"We may well hope our actions carry no moral ambiguity, but pretending that is the case when it isn't does not lead to greater clarity about right and wrong; it more likely leads to unconscious cruelty masked by inflated righteousness."
I'll be taking some time off for travel and family. See everyone soon!

Monday, December 22, 2003

Words from the Trickster

I'm travelling today, so no time for a long post. I'll leave you with a quote from Lewis Hyde's amazing book, Trickster Makes This World:
"There is no way to suppress change... not even in heaven; there is only a choice between a way of living that allows constant, if gradual, alterations and a way of living that combines great control and cataclysmic upheavals."

Thursday, December 18, 2003

You'll Believe a Robot Can Fly!

On this only-a-day-late anniversary of the Wright brothers' feat, I thought I'd talk about flying robots.

Paul Hoffman writes in the New York Times about the historical significance of the brothers' flight. Not the first airborne flight (that was the Montgolfiers), not the first heavier-than-air craft, not even the first powered flight. It was the first controlled flight; the genius was the use of wing-warping to control each wing independently. Like many technological advances, it was a new way for a human to extend his will through the limbs of a device: just another step toward human-machine symbiosis.

Control is still the hallmark of progress in aviation. Airframes, for most of the history of flight, were stable. That is, given a sudden loss of power -- or of control -- the plane would glide ahead in a straight line (and, of course, a little downward). With the advent of fly-by-wire (that is, electronic rather than hydraulic control of control surfaces), of computers small enough to place on board, and (most importantly) of computers powerful enough to model and design airframes, a new generation was born. These new planes were dynamically unstable; a loss of control would quickly send the plane hurtling away from its original bearing. Replacing the control born of aerodynamic stability was constant adjustment by computer, too fast and too small for a human to perform, or even notice.

In a real sense, modern aircraft are robots; they receive directions from humans indicating bearing, they sense the minute changes in control surface status and environmental conditions, and they actuate controls to return to the original state. No, they are not autonomous robots in the sense of UAVs (although many of those are just remote control) or the hey-the-whole-ship-is-a-Cylon in Battlestar Galactica (but seriously, how cool was that?). But in a more interesting and probably more common way, they represent the mode of robotic technology that is otherwise invisible, but capable of handling tasks impossible for humans. And doing so in concert with humans so that the sum total is more capable than either is alone.
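To make that invisible loop concrete, here's a toy sketch (mine, and nothing like real avionics): a deliberately unstable one-dimensional "airframe" held on its bearing by a simple proportional-derivative controller cycling every ten milliseconds. Every constant is invented for illustration.

```python
# A toy of the sense-and-correct loop. The "airframe" is one-dimensional and
# deliberately unstable: left alone, any deviation from the commanded bearing
# feeds on itself. All constants here are made up.

DT = 0.01          # control-loop period in seconds, far faster than a pilot
INSTABILITY = 3.0  # strength of the runaway positive feedback

def simulate(seconds, controlled=True):
    angle, rate = 0.02, 0.0    # a small initial disturbance, in radians
    kp, kd = 40.0, 8.0         # proportional and derivative gains
    for _ in range(int(seconds / DT)):
        # Sense the deviation, actuate a control surface against it.
        correction = -(kp * angle + kd * rate) if controlled else 0.0
        accel = INSTABILITY * angle + correction
        rate += accel * DT
        angle += rate * DT
    return angle

print(simulate(5.0))                     # with the computer: stays near zero
print(simulate(5.0, controlled=False))   # without: hurtles off its bearing
```

Run it with the controller off and the same small disturbance snowballs, which is the whole point: the stability lives in the loop, not the airframe.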

Wednesday, December 17, 2003

Scripting the Player

One of the best ideas in Janet Murray's Hamlet on the Holodeck is the idea of "scripting the player," that is, subtly (or, I suppose, not so subtly) influencing the player (or any active participant of any interactive experience) to stay within a set of reasonable actions. It's an intriguing and provocative idea, and it's too bad she doesn't do more with it.

The idea (as she does point out) is intimately tied to the concepts of narrative and genre: the expectations we bring with us when we encounter new situations. A player/reader encountering a mystery novel would intuitively know, given our culture, that there are certain reasonable actions -- questioning witnesses, sending clues to the lab, shading the top sheet of a notebook to look for impressions, etc. -- that the player can take. And, conversely, certain actions that make no sense. This isn't a science fiction story, so no beaming up, no Vulcan nerve pinch, no super-duper gadgets. Even better, if handled correctly the player doesn't need to be informed directly that she's in a mystery story; she'll pick that up from the cues in the environment -- the trench coats, the rain, the baffled police, and, of course, the mystery.
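Murray doesn't reduce this to anything so crude, but for the software-minded, here's a throwaway sketch of how a parser might script the player (my framing, not hers, and every name in it is hypothetical): infer the genre from props in the scene, never announce it, and simply refuse to entertain actions from the wrong genre.

```python
# A toy "scripting the player" parser: the props vote on the genre, and the
# genre silently bounds the action vocabulary. All names are hypothetical.

GENRE_ACTIONS = {
    "mystery": {"question witness", "examine clue", "send clue to lab"},
    "space opera": {"beam up", "nerve pinch", "fire phasers"},
}

CUES = {"trench coat": "mystery", "rain": "mystery",
        "baffled police": "mystery", "transporter pad": "space opera"}

def infer_genre(scene):
    """Let the props vote; the player deduces the genre the same way."""
    votes = [CUES[prop] for prop in scene if prop in CUES]
    return max(set(votes), key=votes.count) if votes else None

def handle(action, scene):
    genre = infer_genre(scene)
    if genre and action not in GENRE_ACTIONS[genre]:
        return "That doesn't seem like something you'd do here."
    return "You " + action + "."

scene = {"trench coat", "rain", "baffled police"}
print(handle("question witness", scene))  # allowed
print(handle("beam up", scene))           # gently refused
```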

Literature breaks these unwritten rules of genre only at great peril. When it's successful -- Jonathan Lethem's Gun, With Occasional Music, for example -- it's sublime, and the dissonance between genre and non-genre creates useful narrative tension. When it's not -- the film adaptation of Smilla's Sense of Snow -- it's a disaster and leaves the (in this case) viewer confused and dissatisfied.

How broad is this idea? How powerful? Can we use it as a design element in non-literary situations, such as collaborative software? Can we do it without being obvious? How much is culturally encoded, and what happens when people from other cultures (or just unaware of the appropriate subculture) encounter the same situation?

Tuesday, December 16, 2003

Some Geometry Problems

1. A heptiamond is a shape constructed from seven equilateral triangles joined edge-to-edge. How many unique heptiamonds are there, counting rotations and reflections of a shape as the same shape?

2. Of those shapes, how many can monohedrally tile the plane?
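If you'd rather let a computer spoil problem 1, here's a brute-force sketch (mine, with my own choice of coordinates, so treat it as unrefereed): grow polyiamonds one triangle at a time on a skewed integer lattice, then dedupe under the twelve symmetries of the triangular grid.

```python
# Lattice points (a, b) stand for a*u + b*v with u = (1, 0), v = (1/2, sqrt(3)/2).
# An "up" triangle anchored at (a, b) has vertices {(a,b), (a+1,b), (a,b+1)};
# a "down" triangle has vertices {(a+1,b), (a,b+1), (a+1,b+1)}.
# A triangle is written (a, b, o) with o = 0 for up, 1 for down.

def neighbors(tri):
    """The three triangles sharing an edge with tri."""
    a, b, o = tri
    if o == 0:
        return [(a, b, 1), (a, b - 1, 1), (a - 1, b, 1)]
    return [(a, b, 0), (a + 1, b, 0), (a, b + 1, 0)]

def vertices(tri):
    a, b, o = tri
    if o == 0:
        return [(a, b), (a + 1, b), (a, b + 1)]
    return [(a + 1, b), (a, b + 1), (a + 1, b + 1)]

def tri_from_vertices(pts):
    """Identify a triangle from its three corner points."""
    a = min(p[0] for p in pts)
    b = min(p[1] for p in pts)
    return (a, b, 0) if (a, b) in pts else (a, b, 1)

def rot60(p):    # rotate a lattice point 60 degrees about the origin
    a, b = p
    return (-b, a + b)

def mirror(p):   # reflect across the line through u
    a, b = p
    return (a + b, -b)

def transform(shape, f):
    return frozenset(tri_from_vertices({f(p) for p in vertices(t)}) for t in shape)

def normalize(shape):
    """Translate so the smallest anchor coordinates are zero."""
    a0 = min(t[0] for t in shape)
    b0 = min(t[1] for t in shape)
    return frozenset((a - a0, b - b0, o) for a, b, o in shape)

def canonical(shape):
    """Least representative under the 12 symmetries of the triangular grid."""
    forms = []
    for s in (shape, transform(shape, mirror)):
        for _ in range(6):
            forms.append(tuple(sorted(normalize(s))))
            s = transform(s, rot60)
    return min(forms)

# Grow shapes one triangle at a time, deduping translations as we go.
shapes = {frozenset({(0, 0, 0)})}
for _ in range(6):
    shapes = {normalize(s | {n})
              for s in shapes for t in s for n in neighbors(t) if n not in s}

print(len({canonical(s) for s in shapes}))   # the answer to problem 1 (I get 24)
```

Problem 2 resists this kind of brute force: whether a single shape tiles the plane isn't a finitely checkable property in any obvious way.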

Monday, December 15, 2003

Dueling Science Books

Is it fair to review a book that you didn't finish? The not-finishing isn't because of the book -- well, not entirely -- but because, with an infant son, I tend not to finish books from the library's "New Releases" shelf before they're due back.

Two recent (well, somewhat recent) science books cover very similar territory and use a very similar device. It's pretty common in science writing to start off by saying something like, "To understand this issue, we'll have to travel to the moons of Jupiter, the Tomb of the Unknown Soldier, and the source of the Amazon..." or some such. It's a useful and sometimes effective gimmick. I took a class in college advertised as being about "Time Travel"; in fact, it was just an excuse to visit the usual topics of metaphysics. Every year the professor came up with a new theme gimmick and, in his words, used it like a coathook on which to hang the same syllabus. It was successful.

The two books that use this device are Rare Earth: Why Complex Life is Uncommon in the Universe, by Peter D. Ward and Donald Brownlee, and In the Blink of an Eye, by Andrew Parker. Rare Earth is a good book, made especially interesting because it stakes out ground -- that any life more complex than bacteria is unlikely to exist except on Earth -- that is at odds with mainstream thought. Blink (which I've mentioned here before) tackles a similarly interesting idea, a proposed explanation for the Cambrian explosion, but is less successful. Each makes an important argument about the nature of life and its evolution on Earth.

To explain their main theses, both books cover a wide variety of topics: geology, optics, astrophysics, paleontology, archaeology, and evolution, among others. However, they structure their arguments in opposite ways. Rare Earth sets out its thesis in clear terms in the introduction, and then, piece by piece, lays out the evidence supporting it. Each chapter nails the idea down a little more firmly, until the reader is solidly convinced that the authors must be right (of course, they might not be). Blink, however, is structured like a mystery; each chapter provides a clue, which Parker coyly suggests will lead to a shocking conclusion. This is irritating in two ways: one, the conclusion is revealed on the dust jacket, and two, it means that Parker can't directly connect his reasoning to his conclusions the way Ward and Brownlee do. It makes for a frustrating reading experience, because each chapter feels like an arbitrary topic, and the reader is uncertain how to understand it. It's a basic rule of pedagogy that a teacher must provide a framework into which the student can place new facts, and Parker doesn't do a good job of providing one.

Worse, Parker makes a couple of really, really basic science errors. Dr. Parker, bats do not use radar to find their prey; they use sonar. The Hawaiian islands are not at a plate boundary; they sit over a mid-plate hotspot. Errors like these make one doubt, however unfairly, the accuracy of the rest of the science. Because of these problems, I didn't finish Blink before it was due back at the library. Perhaps it improves in the last few chapters.

Thursday, December 11, 2003

Ontologies of Knowledge, A Quote Not By Will Rogers, and a Dash of Politics
or: John Kerry, Get Out!

Donald Rumsfeld has been widely quoted saying:
"...as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know." (original link)
He's right, of course, but he left out the fourth and perhaps most important set, the unknown knowns: "the stuff you know that just ain't so." (This quote is usually attributed to Will Rogers, but if you search for it, you'll see dozens of variants, implying strongly that there's no canonical original. It might have been said originally by Josh Billings.) Rumsfeld's failure to acknowledge that category is a typical blindness of the Bush administration; the entire Iraq operation has been informed by a sad lack of questioning of the "facts" they considered "known knowns."

To turn around and punch the other party, too, let's consider John Kerry. Kerry is particularly fond of criticizing the Bush Administration's handling of Iraq, and I suspect he'd strongly agree with the above paragraph. So it's time for him to show more evidence of leadership than Bush has. He needs to face up to an unpleasant truth he's been denying: he will not be president. He won't win a single state in the Democratic primary. There is no scenario in which he becomes a contender. He has two choices now. He can stay in, get trounced by Dean in New Hampshire, and become instantly irrelevant while Dean marches to victory. Or, in a statesmanlike move, he can get out of the race now, endorsing another candidate. If he endorses Clark, Clark is likely to place a strong second in New Hampshire, which in turn will give him a shot at first place in South Carolina. To sum up, Kerry's choices are to stay in and let Dean have the nomination, or get out and give Clark a shot.

And people say that ontologies of knowledge aren't useful.

Wednesday, December 10, 2003

Turing Gets the Last Laugh?

In 1936, the twenty-four-year-old Alan Turing published On Computable Numbers (how are you doing?), which both set out the architecture of (what would later be called) the Turing Machine and answered Hilbert's Entscheidungsproblem: is there a mechanical method for deciding whether a proof exists for a given statement? The answer was "no," and thus Turing ushered in the idea of noncomputable functions. Well, not exactly. It turned out that Princetonian Alonzo Church had beaten Turing to the punch, publishing a similar proof only a few months earlier. Church, however, used the lambda calculus. It was later shown that the two proofs were equivalent, and that Turing Machines and the lambda calculus have equivalent power. The wider idea, that all reasonable specifications of mechanical computation have equivalent power, is thus known as the Church-Turing Thesis. (It is, contrary to many references, not proven.)
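The flavor of the noncomputability argument translates surprisingly well into modern code. Here's a sketch (my rendering, obviously not Turing's notation): suppose someone claimed to have written a halting oracle; the diagonal program below turns it against itself.

```python
# The diagonal argument in Python dress. Suppose halts() were a genuine,
# always-terminating oracle:
def halts(program, argument):
    """Hypothetical: returns True iff program(argument) eventually halts."""
    raise NotImplementedError  # Turing: no such total function can exist

def diagonal(program):
    if halts(program, program):  # the oracle says program(program) halts...
        while True:              # ...so do the opposite and loop forever
            pass
    return "done"                # the oracle says it loops, so halt at once

# Consider diagonal(diagonal). If halts(diagonal, diagonal) returns True, then
# diagonal(diagonal) loops forever; if it returns False, it halts. Either way
# the oracle is wrong about at least one input, so no correct halts() exists.
```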

Flash forward fifty or sixty years. Or even to today. Two camps, one albeit much smaller than the other, believe in different kinds of programming. One believes that programs should address memory, should read and write variables, should execute loops, and generally act as if they are specifying a set of rules to be followed by a machine. The other believes, well, that programs should be written in a version of the lambda calculus. The first camp is, of course, programmers who use C and its ilk; the other is Lisp. And of course these are equivalent in expressive power, because Turing and Church showed that they are equivalent formulations. But these camps are essentially followers of Alonzo or Alan, sticking to their favorite metaphor.
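You can see both metaphors in miniature in a single language. A throwaway illustration (in Python, as neutral ground): the same factorial written as a follower of Alan would, mutating state in a loop, and as a follower of Alonzo would, with nothing but function definition and application.

```python
# factorial, Turing-style: name some memory and mutate it in a loop
def factorial_turing(n):
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

# factorial, Church-style: no loops, no mutation, only application.
# fix is the eager-language fixed-point combinator; the eta-expansion
# (lambda v: x(x)(v)) keeps Python from recursing forever at definition time.
fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
factorial_church = fix(lambda rec: lambda n: 1 if n <= 1 else n * rec(n - 1))

assert factorial_turing(10) == factorial_church(10) == 3628800
```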

Tuesday, December 09, 2003

The Turing Test

Andrew Hodges, in his biography Alan Turing: The Enigma, strongly implies, although I don't believe he actually states it directly, that the Turing Test -- the classic test of artificial intelligence in which a computer is considered intelligent if it can fool a human into thinking it's another human -- was inspired by the fact that Alan Turing was gay, and so lived in a society where "passing" was of prime concern. This theory is only marginally weakened by the fact that Turing was fairly openly out.

Thursday, December 04, 2003

Biology vs Computer Science

I used to have this recurring argument with my boss. He (although he had no formal training in biology that I know of) insisted on the coming supremacy of biology over all other sciences and human activities. The information age was over, he'd say, to be supplanted by the biological age. When I was feeling especially snarky (not that rarely), I would characterize this as "In the future, we'll get food, clothing, and building materials from plants!" Biology will no doubt become a more powerful generator of economic activity. But what I disagreed with was the idea that the principles of biology would reign supreme. On the contrary, I'd argue, it is the principles of information that will transform biology.

This is not a particularly popular belief among biologists these days, even as they query gigabyte databases to identify gene regulation networks. And the science of information is so inchoate these days that it's hard to imagine it transforming a field as broad, diverse, and established as biology. But to accomplish the very kinds of societal transformations that my boss dreamt of, biology must borrow the intellectual apparatus of computation, of complexity, of hierarchies of grammar, of, well, I hate to say it, math.

The influence will, of course, flow the other way. Just as the science of thermodynamics did not arise until Carnot studied the steam engine, the science of information could not -- still cannot -- arise until we study the computer. But just as thermodynamics is central to all nature, not just steam engines, we'll find that information is central to life, and not just in silicon.

Wednesday, December 03, 2003

The Mystery of Harry Turtledove

Harry Turtledove is an award-winning "alternate history" author, best known for The Guns of the South, his novel of the Civil War in which the South receives AK-47s from time-travelling racists. There aren't a lot of science fiction or Civil War novels with blurb quotes from James McPherson (winner of the Pulitzer Prize for Battle Cry of Freedom), so you know it's good.

There are two bizarre mysteries about Turtledove. One is that he's an uncanny imaginer of alternate histories. What if the South won the Civil War? What if the Spanish Armada had successfully invaded Britain? What if Germany won WWII? What if WWII was fought with magic? And on and on. What's original is not the idea of these plot twists - countless authors have attempted similar stories. What's amazing is the intense and convincing sense of verisimilitude, of "oh yeah, this is how it would have been." How does he do it?

The second mystery is that he is shockingly, unbelievably, suspiciously prolific. In 2003, he published five novels (In the Presence of Mine Enemies, American Empire: Victorious Opposition, Gunpowder Empire, Jaws of Darkness, and Conan of Venarium, a total of 2110 hardcover pages). This is no isolated case; last year, he published four novels (Ruled Britannia, American Empire: The Center Cannot Hold, Rulers of the Darkness, and Advance and Retreat, for a total of 2026 pages). Calculate the pages per day any way you want; 2110 pages over 365 days comes to nearly six finished hardcover pages every single day. This isn't possible.

So, to recap: a person with eerie accuracy and precision in discussing alternate worlds is also more prolific than a single person could be. What could be the common explanation?



Tuesday, December 02, 2003

Eyetoy

I got to play with an Eyetoy (warning: irritatingly Flashy site) over Thanksgiving. For those without the patience to sit through Flash: it's a peripheral for the Playstation 2 that uses a camera to image you and place you on the screen, interacting with the graphical elements, such as baddies. It's a load of fun, or at least it was for an hour or so, playing the fairly ditzy games that come with it. It's a brilliant idea and reasonably well implemented, and should give pause to the school of thought that worries that kids sitting in front of their PS2 aren't getting enough exercise. Of course, it awaits a real game that uses it, or a mod that can run the Eyetoy interface at the same time as a standard game, with some set of motions translating into the inputs of the standard two-handed controller.

For larger reasons, it's an interesting interface. It's not really virtual reality - you're still watching the action on the same screen as before. But it's not quite just another input device; your actions are being directly transferred into the virtual world. I saw a video game once in an arcade. It was a kung-fu fighting game, and, like the Eyetoy, the input device was your body, watched by a camera. To punch, you moved your hand forward; to advance, you moved your foot forward, etc. But it wasn't exactly the same, because there was a translation. A small move forward of your hand translated into a dramatic punch by your virtual avatar. With Eyetoy, your actions are identical on the screen, and in a more immediate and real sense, that's "you" up there.
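Sony hasn't published how the Eyetoy works, but my guess is that the cheapest way to get the effect is plain frame differencing: pixels that changed since the last frame count as "you," and enough changed pixels inside an on-screen object counts as a touch. A rough sketch of that guess, in Python with OpenCV and a webcam standing in for the camera (every constant and the "baddie" rectangle are invented):

```python
import cv2

cap = cv2.VideoCapture(0)               # webcam standing in for the Eyetoy
ok, prev = cap.read()
prev = cv2.cvtColor(cv2.flip(prev, 1), cv2.COLOR_BGR2GRAY)

x0, y0, x1, y1 = 100, 100, 200, 200     # a hypothetical on-screen "baddie"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)          # mirror, so the screen behaves like "you"
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)      # pixels that changed since last frame
    prev = gray
    motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    if motion[y0:y1, x0:x1].mean() > 20:    # enough movement inside the baddie?
        print("hit!")
    cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)
    cv2.imshow("eyetoy-ish", frame)
    if cv2.waitKey(1) == 27:            # Esc to quit
        break
```

Note that there's no model of a body anywhere in there; the "player" is just whatever moves, which may be exactly why it feels so direct.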

Virtual reality has been a hit in the movies for decades, but still struggles as an actual technology. One reason may be that it holds so closely to the central architecture of a surrounding world that encompasses your actions. But the potential vocabulary of physical actions, animated responses, and interfaces can be much richer than that, and, as the Eyetoy proves, there are other points in the design space worth exploring.

Monday, December 01, 2003

World Creation, World Consumption

One of the highest praises that can be sung of speculative fiction authors is that they are accomplished "world builders": that they can paint a portrait of a fictional world (Earth or otherwise) so convincing yet so innovative that readers are caught up in it and find it difficult to believe that it isn't real. Tolkien's Middle Earth, of course, but also Anne McCaffrey's Pern, Robert Silverberg's Majipoor, Walter Jon Williams' Metropolitan, etc.

There's another version of this: people who can imagine parts of our world that seem like they must be real -- fictional companies, countries, products, etc., that we expect to see on supermarket shelves or wherever. (In a weird inversion of this, I was convinced that James Blaylock had invented a cereal called "Weetabix" for one of his novels. One day I did see it on a supermarket shelf and received a rather large shock.) But of course these things don't exist, and part of the wonder of the book is that they don't exist, and we marvel at the author's skill. At the same time, these products (or whatever) serve as important signifiers in the book that we are reading about something somewhat magical.

What, then, of the commercial forces that bring these fictional items into existence? Three examples: Willy Wonka candies, Buzz Lightyear cartoons, and sundry products from Harry Potter (Bertie Bott's Every-Flavour Beans, Butterbeer, etc.). Now that children encounter these products as part of the everyday world, will the inventiveness of the authors seem as great? Will the novels seem as fantastic? Every time I see another cherished fictional product brought into the world in the name of profit, I feel like a little bit of magic has not entered the world, but left it.