Tuesday, November 25, 2003

Happy Thanksgiving

I'll be taking a few days off for the holiday. Back on Friday.

Travel safely, Americans.

Puzzles and Games

I'm working myself up to a large-scale assault on Greg Costikyan's definition of a game, but that'll take some time. He makes a distinction between a puzzle and a game, and while I agree with his conclusion, I'm left a little frustrated by the lack of a rigorous definition of "puzzle."

So batter up, here's a swing at it. I'll define a "choice activity" as an activity in which a person can choose a series of actions or symbols, arranged in a temporal or spatial order, in such a way that those actions or symbols satisfy a set of constraints. A puzzle is a choice activity in which there is only a single person participating, or a group of people acting with a shared goal. In simple puzzles, the constraints are fixed; in most puzzles, the selection of various actions changes the constraints. For example, writing down a word in a crossword puzzle adds the constraint that those letters must be used by the crossing words.

A game, then, is just a choice activity in which there are two or more participants, and there is some utility function that defines whether, at any time, a given player is winning.
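(A minimal sketch in Python, if you like your definitions executable; the names -- ChoiceActivity, Puzzle, Game, the utility signature -- are mine, invented for illustration, not a formal proposal.)

```python
# A minimal sketch of the definitions above. The class and method names
# (ChoiceActivity, Puzzle, Game, choose) are invented for illustration.

from typing import Callable, List, Sequence

Constraint = Callable[[List[str]], bool]


class ChoiceActivity:
    """A series of chosen actions/symbols that must satisfy a set of constraints.

    Choosing an action may add new constraints, as writing a word into a
    crossword constrains the crossing words.
    """

    def __init__(self, constraints: Sequence[Constraint]):
        self.constraints: List[Constraint] = list(constraints)
        self.choices: List[str] = []

    def choose(self, action: str, new_constraints: Sequence[Constraint] = ()) -> bool:
        candidate = self.choices + [action]
        if all(ok(candidate) for ok in self.constraints):
            self.choices = candidate
            self.constraints.extend(new_constraints)  # choices can tighten the rules
            return True
        return False


class Puzzle(ChoiceActivity):
    """One participant, or a group acting toward a shared goal."""


class Game(ChoiceActivity):
    """Two or more participants, plus a utility function saying who is winning."""

    def __init__(self, constraints: Sequence[Constraint], players: List[str],
                 utility: Callable[[str, List[str]], float]):
        super().__init__(constraints)
        self.players = players
        self.utility = utility  # utility(player, choices) -> how well that player is doing


if __name__ == "__main__":
    # A trivial "puzzle": pick symbols, never repeating one (a fixed constraint).
    p = Puzzle([lambda seq: len(seq) == len(set(seq))])
    print(p.choose("A"), p.choose("B"), p.choose("A"))  # True True False
```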

What's interesting about these definitions is their parallelism to Turing Machines and hypercomputation. Puzzles, with their "arrangement of symbols according to constraints," come close to the classic Turing Machine definition. And games, with their interactivity between competing forces, reflect the proposals of Robin Milner and others for augmenting Turing Machines.

Monday, November 24, 2003

More on Generation

Generative functions in music are not nearly as rare as I implied. Given some thought, all sorts of generative music come to mind. Fugues, canons (or is it "canon"?), rounds: all are in some sense generative. That's interesting because it elucidates an important aspect of generative functions: the split between the content (the single line of music) and the rule for combining it (in a round, the rule that says the second voice plays the line one measure, say, later, as in the beautiful Telemann duet Concerto in D).
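If you like the idea executable, here's a toy Python sketch of that split: a single line as the content, and the round's one-measure-later rule as the generative function. The notes and measure length are invented, not taken from the Telemann.

```python
# A toy illustration of the content/rule split in a round: one line of music
# (the content) and a rule that says each later voice enters one measure later
# (the generative function). Notes and measure length are invented.

LINE = ["C", "D", "E", "C", "E", "F", "G", "-"]  # one voice's line, one note per beat
BEATS_PER_MEASURE = 4


def round_voices(line, n_voices, offset_measures=1):
    """Generate n_voices copies of `line`, each delayed by offset_measures."""
    offset = offset_measures * BEATS_PER_MEASURE
    return [["."] * (v * offset) + list(line) for v in range(n_voices)]


if __name__ == "__main__":
    for voice in round_voices(LINE, 3):
        print(" ".join(voice))
```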

With a much broader view, you could even think of a given tonic scale and a set of "accepted" rules of melody and harmony as a powerful if very imprecise generative system. Certainly, there was some set of rules that produced Baroque music, or Indian music, or Scottish bagpipe music. They are recognizable as types precisely because the ear and brain can detect, half-hidden but inescapable, the lurking presence of a set of generative rules.

Enough about music! Tomorrow, what's the difference between a puzzle and a game?

Friday, November 21, 2003

Art is a Generative Function

I wanted to expand on some ideas I touched on earlier. Any work of art is what I'd call a generative function; it can produce many, possibly infinitely many, reactions in the mind of the reader/listener/viewer. But most art is a particularly static kind of generative function; it produces a single kind of experience, which is then perceived in manifold ways. In other words, the generative nature of the work is in the brain of the perceiver rather than in some intermediary.

In the example of the symphony, the sheet music is the generative function. If the composer and all of the people who had ever heard a symphony died, and all recordings were destroyed, the symphony could be reconstructed from the sheet music. Without the sheet music, even with people who had heard it, it would be difficult to reconstruct. Of course, the essence of the symphony as a work of art is the sound; reading the sheet music gives almost no sense of the art. And the conductor, the individual performers, and the listener all have roles in further interpreting the intention of the composer and turning it into an irreproducible, singular work of art.

But still, the interpretation happens within narrowly confined limits. I don't know of any works of music or drama, for example, in which the order of movements or scenes is ambiguous or up to the conductor/director to decide. (Such things undoubtedly exist, but not among major works.)

Software-mediated art is simply a different class of generative functions, and it is, in the computational complexity sense, more powerful. A single "work" of software-mediated art can be expressed in a countably infinite number of ways, even before the interpretation-in-the-brain stage. And that's why the potential of such work is so exciting.

Thursday, November 20, 2003

Why Analog?

Why is it called "analog" computing? Why, when digital clocks came out (meaning, of course, both that they displayed digits and that they offered a digital abstraction of time, of absolute discrete units), did we start calling clockwork clocks "analog"?

At least according to Andrew Hodges, it's for an incredibly obvious and good reason. Maybe this was obvious to everyone but me? The earliest analog computers were devices that calculated things like the tides. Calculating tides, like lots of good computing applications, is a conceptually easy but tedious operation. You've got the Earth turning, the Moon slowly sliding around the Earth, and the Earth turning around the Sun. All factors (plus, of course, local geography) in determining when the water will come up over your toes. So the "computers" that calculated the tides used physical representations: interlocking gears whose relative sizes were chosen so that the output was calibrated to the right degree of input from each gravitational source.
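What those gears add up is, loosely, a handful of periodic terms, one per astronomical cycle, and the mechanical sum is the predicted water level. A quick Python sketch of the arithmetic (with made-up amplitudes, periods, and phases, not real tidal constants):

```python
# Roughly what a mechanical tide predictor computes: a sum of periodic
# constituents, one per astronomical cycle, each contributed by one gear train.
# The amplitudes, periods, and phases below are invented for illustration,
# not real tidal constants.

import math

CONSTITUENTS = [
    # (amplitude in meters, period in hours, phase in radians)
    (1.0, 12.4, 0.0),   # a lunar term
    (0.5, 12.0, 1.0),   # a solar term
    (0.2, 25.8, 2.0),   # a longer lunar term
]


def tide_height(t_hours):
    """Sum the constituents at time t; each gear contributes one cosine."""
    return sum(a * math.cos(2 * math.pi * t_hours / period + phase)
               for a, period, phase in CONSTITUENTS)


if __name__ == "__main__":
    for t in range(0, 24, 3):
        print(f"hour {t:2d}: {tide_height(t):+.2f} m")
```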

The system was -- get ready for it -- an analog of the physical system. Analog, in the sense of analogous or analogy. The physicality of the computer was directly analogous to the physicality of the system being modeled. Analog computing, then, isn't just the opposite of digital in the sense of using real numbers instead of the digital abstraction; it's computing that is constructed by physical analogy rather than by discrete math.

This reveals why analog computing is potentially so much more efficient at some simulations; it has the universe on its side. In some sense it isn't even simulating as much as it is similar ("similarating?"). It also suggests why it is ultimately limited: it may be using the universal similarity, but it is not universal; it can only be an analogy to one kind of system. If you want to have an analog to another system, you'll need another computer.

Tuesday, November 18, 2003

An Open Letter to Rodney Brooks and Victor Zue

With the possible exception of "political science," I think that perhaps we could all agree that "computer science" is one of the most misleading, inaccurate, and misinforming names for a field of academic endeavor. Besides not being particularly a science, it doesn't even necessarily deal with computers. Computer scientists must deal on an almost daily basis with being confused with programmers, and when teaching undergraduates, must patiently explain the difference between using a computer, or even creating one, and the science (if it is that) of understanding the role of algorithms and information processing. And if there's any runner-up to this misbegotten moniker, it must surely be "artificial intelligence," that mess of a term with no agreed-upon definition, which bends to the breaking point the already-contentious term "intelligence."

I offer no specific suggestion for a replacement, but one is clearly needed. While the European usage of "informatics" is certainly not far off, it is not quite right either, as it misses the fundamental idea of computation and algorithms. Others have suggested "algorithmics," more accurate but perhaps less melodious.

With the combination of the Laboratory for Computer Science and the Artificial Intelligence Laboratory, MIT has a historic opportunity to frame these fields as they ought to be framed, with a new name. Perhaps only MIT, of all the universities in the world, has the leadership to rechristen this field (for certainly they are truly one field), and thus it has a singular responsibility to do so.

Friday, November 14, 2003

Convergent and Divergent Discussion

A thought inspired by a talk I went to today by Judith Donath of the Media Lab:

Although Donath described Usenet as having an information landscape, I'd disagree. The information in a Usenet group is a derivative function of the social landscape - the "content" (such as it is) exists only as part of social interactions. That content can be differentiated as one-to-many (first posts) or a hybrid one-to-one/one-to-many (a reply), but there's no free-floating content (except, perhaps, FAQs, which probably show up in Marc Smith's work as uninteresting one-post-no-replies spam).

In contrast, the information landscape on a wiki is the primary function, and social interactions are at best secondary/implicit. It's my hypothesis that the kinds of social interactions and information evolution found on a wiki are fundamentally different from those found on Usenet. Anecdotal evidence would suggest that the difference is at least in part that Usenet dialog has a strong tendency toward divergence, while wikis tend toward convergence. Testing this hypothesis would require some way to computationally determine con/divergence, an interesting challenge itself.
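For what it's worth, here's one naive way such a measure might start, sketched in Python: check whether each new contribution looks more or less like the accumulated discussion so far, using plain word overlap. The similarity function and the toy threads are placeholders I've invented, not a validated metric.

```python
# A naive sketch of one way to operationalize con/divergence: does each new
# contribution look more or less like the accumulated discussion so far?
# Plain word overlap (Jaccard) is a placeholder, not a validated metric,
# and the toy threads below are invented.

def words(text):
    return set(text.lower().split())


def jaccard(a, b):
    return len(a & b) / len(a | b) if a and b else 0.0


def convergence_trend(posts):
    """Per-post similarity to the discussion so far; rising suggests convergence."""
    seen = set()
    trend = []
    for post in posts:
        w = words(post)
        if seen:
            trend.append(round(jaccard(w, seen), 2))
        seen |= w
    return trend


if __name__ == "__main__":
    usenet_like = ["emacs is better", "no vi is better", "you are both wrong about tabs"]
    wiki_like = ["emacs is an editor",
                 "emacs is an extensible editor",
                 "emacs is an extensible customizable editor"]
    print("usenet-like:", convergence_trend(usenet_like))  # tends to fall
    print("wiki-like:  ", convergence_trend(wiki_like))    # tends to rise
```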

Wednesday, November 12, 2003

I Must Never Be the Main Character

One of the reasons that talking about electronic stories is confusing is that the form overlaps and coexists with many other, similar applications. It's a little like adventure games, and it's a little like social software, and it's a little like persistent world games, and it's a little like avant-garde fiction that bends the limits of the medium. But -- and this is another place where I think Murray let us down -- it's not those things. The borders may be fuzzy; indeed, there may be no border per se between electronic stories and adventure games. But there is a difference. And we can complete our mining of E. M. Forster by introducing the difference.

Character. Literature -- stories, novels, text in any form -- must be about characters. Ideally, those characters should have an arc along which they change. In adventure games, in any first-person interactive story where the "I" is really the reader, transposed into the fictional world, there is no character. "I" cannot be the main character, because "I" won't undergo change, at least not in any way dictated by the author. It's a wonderful dream to believe it possible, and I do hold out some dim hope that some future master of the medium will produce it, but for now, Zork, for all that it's one of the greatest games ever created, is not a story. It might borrow elements of a story, and this is one reason telling the difference is confusing, but with no character, it cannot be literature.

Tuesday, November 11, 2003

In the Round

One of the greatest challenges that stage magicians can attempt is to perform their magic "in the round," that is, to allow the audience to surround the magician on all sides. This vastly complicates the design of the magic trick, because there is no "safe" direction in which the trick's secret could be visible (with the exception, perhaps, of directly behind the magician for handheld tricks). (Since I'm writing about literature here, I'll point to Christopher Priest's The Prestige, the finest novel about stage magicians ever written.)

Full-on hyperstories have a similar problem. For the kind of story in which a reader can hop from viewpoint character to viewpoint character, the design of the story becomes much more difficult. As with stage magic in the round, it's not impossible, but it removes an important tool. In fact, it removes the same tool for both magicians and authors: obscurity. Imagine writing a murder mystery, for example, in which the reader could freely inspect what each character in the old hotel was doing at any time.

The problem goes deeper. Forster talks about "round" characters and "flat" characters, by which he really just means characters with convincing inner lives as opposed to simple characters that serve only as reflections or dialog. (Perhaps it's the difference between characters that could pass the Turing test and those that couldn't.) But, just as on a movie set of a Western town, where the storefronts are really just single sheets of plywood with nothing behind them, giving the reader the ability to walk around the set will reveal the flatness. Again, this doesn't make it impossible, but it represents a significant design constraint. How does the author create so many convincing characters without revealing too much information? How does the author indicate to the reader which characters have inner lives, and which don't, without spoiling the sense of openness?

Wednesday, November 05, 2003

Computational Complexity of Narrative

According to Forster, there's a distinction between story -- merely a list of events -- and plot, which includes the causal relationships between events. A story, in his mind, is the lowest common denominator of the novel. Many popular works, from 1001 Arabian Nights to Tom Clancy, work at this level. The attraction for the reader is the constant need to find out what happens next. And then... and then... With a plot, in contrast, the question isn't always what happens next, but why something happens. Forster's pithy example: "The king died and then the queen died" is a story. "The king died, and then the queen died of grief" is a plot.

What's interesting is that according to Forster, a plot requires that the reader have memory. To follow a story, a reader just worries about the very next event, and doesn't need to remember previous events, as there's no structure more sophisticated than a chain of events. To follow a plot, in contrast, the reader must hold many past events in her head. Again, Forster's example: "The queen died, but no one knew why, until it was discovered that she died of grief at the king's death."

What's fascinating is that this maps neatly onto the Chomsky hierarchy of languages: a story can be understood by a finite automaton, but the ability to understand a plot in general requires a more sophisticated computational model -- a pushdown automaton, perhaps, or even a Turing Machine. This distinction raises the exciting possibility that we may be able to construct even more precise and detailed computational models of different tales.
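Here's a toy Python sketch of that gap: a "story reader" that needs only the current event, next to a "plot reader" that must hold open questions in memory until later events resolve them. The event encoding is invented for the example; it's a sketch of the idea, not a parser.

```python
# A toy illustration of the story/plot gap. A "story reader" needs only the
# current event; a "plot reader" must hold open questions in memory until later
# events answer them. The event encoding is invented; this is a sketch of the
# idea, not a parser.

def read_story(events):
    """Finite-state style: consume events one at a time, remember nothing."""
    for event in events:
        print("and then...", event)


def read_plot(events):
    """Stack style: each event may pose a question or answer an earlier one."""
    open_questions = []
    for text, poses, answers in events:
        print(text)
        if answers and answers in open_questions:
            open_questions.remove(answers)
            print("  (which answers:", answers + ")")
        if poses:
            open_questions.append(poses)
    print("questions left open:", open_questions)


if __name__ == "__main__":
    read_story(["the king died", "the queen died"])
    print()
    read_plot([
        ("the king died", None, None),
        ("the queen died, and no one knew why", "why did the queen die?", None),
        ("it was discovered she died of grief", None, "why did the queen die?"),
    ])
```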

Tuesday, November 04, 2003

The Hypernovel Only Lives When Hosted by a Brain

Any form of literature must be interpreted. A symphony, for example, is only a set of symbols in a given notation until an orchestra plays it. A play is a sequence of words that is interpreted by actors, who may radically change the meaning (Hamlet as insane, Hamlet as cunning). Taken radically, this argument could apply to any form of communication, since the symbols in a text must be translated into meaning within the brain of the reader. (This is one of the subtle and lovely themes of John Crowley's sadly underappreciated novel Engine Summer.) But we speak here of not quite so radical a notion, merely of literature that must survive a transformation between the form in which it is transmitted (the sheet music of a symphony, the script of a play) and the form in which it is perceived (the sound waves, the ensemble acting).

The hypernovel, text intermediated by computer, goes one step further. For while (as we discussed yesterday) the hypernovel can break the bounds of linear time, the reader must nevertheless reserialize the experience somehow, for our consciousness must perceive things one at a time. No matter how often we intercut between parallel sequences, we are nevertheless single processors at the level of our conscious minds (ironic, as the brain is a massively parallel processor; curious). Indeed, the hypernovel must also be created in linear fashion (excepting for the moment a purely computer-created novel).

So we see an odd form of reconstitution. The author uses a serial, linear process to create a hypernovel, but the hypernovel's form, however it is embodied as information, reflects a non-linear structure. That form is perceived -- experienced -- by the reader linearly, but not in the same linear fashion as the author created it. Only when the work is completely experienced can we imagine that, within the brain of the reader, some form analogous to that in the mind of the author has been created. But since any two walks through the hypernovel are likely to be different (we might even hold that principle up as a metric for value), the resulting in-brain structures might be different, even radically so.

This is a strength, not a weakness, of the form, just as it is a strength that plays are filtered through fallible, weak, often misinformed and self-motivated actors and directors. As Forster points out, has any experience of reading a play been superior to that of seeing one? It is in the interpretation, the bending of meaning and the imposition of other minds, that plays gain much of their power. And so it will be with the hypernovel.

Monday, November 03, 2003

The Shackles of Time

One of the exciting potential features of electronic text (by which I mean any computer-intermediated textual story, static or dynamic) is the ability to break the linearity of traditional prose. This theme comes up again and again in Murray's Hamlet on the Holodeck. Hypertext, so that the arc of the story can branch and go in multiple directions ("If you fight the dragon, turn to page 51. If you run away, turn to page 36.") rather than following a single line as in most novels. Parallel stories, like Rosencrantz and Guildenstern are Dead to Hamlet; while novels could "show" simultaneous action by cutting rapidly between scenes, just as a film would, e-text could actually (for some value of "actually," and we'll get to that shortly) maintain simultaneous parallel lines, possibly showing the same event from multiple points of view (like Rashomon) or simultaneous events at different points in space.
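Structurally, that sort of branching is just a directed graph of passages instead of a single chain. A minimal Python sketch, with made-up passages, just to show the shape:

```python
# A branching story as a directed graph of passages, in contrast to the single
# chain of a traditional novel. The passages and choices are made up; only the
# structure matters here.

STORY = {
    "cave":   ("You stand before the dragon.", {"fight": "battle", "run": "forest"}),
    "battle": ("You fight the dragon.", {}),
    "forest": ("You flee into the forest.", {}),
}


def read(story, path, start="cave"):
    """Follow one walk through the graph; a traditional novel allows exactly one."""
    node = start
    print(story[node][0])
    for choice in path:
        node = story[node][1][choice]
        print(story[node][0])


if __name__ == "__main__":
    read(STORY, ["fight"])  # one reader's path through the same "work"
    read(STORY, ["run"])    # another reader's path
```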

In E. M. Forster's Aspects of the Novel, he talks about the use of time as the backbone of the novel. The basic nature of the novel is a series of events, or "And then... and then... and then..." He laments this, finding it the least interesting and lowest aspect of the novel (as compared to things like character or plot), but acknowledges it as indispensable. To completely discard time is to leave us with no way to understand the relationship of elements. At best we are left with poetry, at worst an unintelligible mess. So as in any good time-travel science-fiction story, we must be left with the moral that when we mess with time, we do so at our own peril.