Tuesday, September 09, 2003


I've railed on IBM's autonomic computing initiative before, but it's been a while, so here's another shot.

The idea behind calling it "autonomic" is that it's inspired by the autonomic nervous system, the part of the nervous system generally out of our control and our awareness. Our heart beats faster when we're being attacked; if you think about it, that's some pretty amazing processing going on without your knowledge. Some part of your brain is looking out your eyeballs, seeing a bear, figuring out that bears are likely to eat you, then deciding that running away is a good strategy, then signaling the heart to beat faster, the adrenal glands to release adrenaline, etc. All without you consciously making any of those decisions (you're too busy saying, "Holy crap! It's a bear!"). It's natural to see why this is appealing to computer operators; they want the computers to make the same sort of assessments: hey, a strange situation has arisen; I better swap out a flaky drive; I'll just checkpoint and make a backup; and then do the actions. Meanwhile, the story goes, the operator is blithely ignorant of the whole situation ("Holy crap! A bear!").
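To make the operators' fantasy concrete, here's a toy sketch of that reflex loop: notice a failing drive, checkpoint its data, swap in a spare, no human involved. Every name, threshold, and data structure here is invented for illustration; real systems would consult actual drive health telemetry.

```python
# Hypothetical "autonomic reflex" loop: assess, decide, act,
# all without operator involvement. All names are invented.

def drive_is_flaky(drive):
    # Stand-in for a real health check (e.g., SMART-style error counts).
    return drive["read_errors"] > 10

def checkpoint_and_backup(drive):
    print(f"checkpointing data off {drive['id']}")

def swap_out(drive, spares):
    replacement = spares.pop()
    print(f"swapping {drive['id']} for spare {replacement}")
    return {"id": replacement, "read_errors": 0}

def autonomic_tick(drives, spares):
    """One pass of the reflex loop over all drives."""
    for i, drive in enumerate(drives):
        if drive_is_flaky(drive) and spares:
            checkpoint_and_backup(drive)
            drives[i] = swap_out(drive, spares)
    return drives

drives = [{"id": "sda", "read_errors": 3},
          {"id": "sdb", "read_errors": 42}]
spares = ["sdc"]
autonomic_tick(drives, spares)  # sdb gets checkpointed and replaced by sdc
```

The point of the sketch is how little is actually "nervous system" about it: it's an ordinary monitoring loop with a policy bolted on.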

There are two irritating things about this. One is that they didn't even begin to look at the biology of the autonomic nervous system. The ANS has different subsystems, the sympathetic and parasympathetic nervous systems (there's a third, but pay no attention for the moment). One is for generating and restoring energy - digesting food, for example, and slowing down respiration; the other is for spending energy in times of crisis, like the visit from our friend the bear. This itself is an interesting idea for a software system: two competing subsystems, one responsible for reacting to crises, such as attacks or failures; one responsible for providing services and bringing the system back into more optimal configuration. And yet nowhere in any autonomic computing paper I've looked at (and I've looked at more than I care to) is there any acknowledgement that two subsystems are a good idea. If you're going to claim biological inspiration, you might as well look at the biology.
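If you took the two-subsystem idea seriously, the shape might look something like this: a "sympathetic" component with first claim on control, reacting to crises by spending resources, and a "parasympathetic" component that only runs when things are calm, reclaiming resources and drifting the system back toward baseline. This is purely a sketch of the idea floated above; the class, thresholds, and replica counts are all invented.

```python
# Toy sketch of two competing subsystems, loosely modeled on the
# sympathetic/parasympathetic split. Everything here is illustrative.

class System:
    def __init__(self):
        self.load = 0.5      # current load, 0..1
        self.replicas = 2    # running service replicas

def sympathetic(sys):
    """Crisis mode: spend resources to survive a load spike."""
    if sys.load > 0.8:
        sys.replicas += 2    # scale out aggressively
        return True
    return False

def parasympathetic(sys):
    """Rest mode: reclaim resources, drift back to baseline."""
    if sys.load < 0.3 and sys.replicas > 2:
        sys.replicas -= 1    # scale in gently
        return True
    return False

def tick(sys):
    # The crisis subsystem always gets first say; the restorative
    # one acts only when no crisis response was needed.
    if not sympathetic(sys):
        parasympathetic(sys)

s = System()
s.load = 0.9
tick(s)    # crisis: replicas 2 -> 4
s.load = 0.1
tick(s)    # calm: replicas 4 -> 3
```

The interesting design property is the asymmetry: the crisis subsystem is fast and greedy, the restorative one is slow and conservative, and they never act in the same tick.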

The second irritating thing is that in the writings of Paul Horn and Alan Ganek, the claim is made over and over that software must be made "less complex." "It's too complicated!" they exclaim (making little reference to where this complication came from). "We need to make it simple!" And then, in a wonderful bit of sleight-of-hand, they introduce the most complicated and difficult architecture of all time. This is in no way making software less complex. It adds at least an order of magnitude of complexity to the entire system. (There's a chance, I suppose, that it might make individual components less complicated. I doubt it. But in any case the total system complexity will be skyscraper high.) This is not in itself a bad thing; we software people get paid to write complex stuff; if it were simple we wouldn't get paid so much. Complexity can be the source of great power and capability, and there's nothing intrinsically wrong with it. But the IBM claim is beyond disingenuous, and in fact it takes energy away from a reasonable and potentially helpful approach: make the damn software simpler.

Now, I know what they meant: the human act of operating software should be simpler, and trading complexity at the interface for complexity inside the system is a no-brainer. It's too bad they had to say it in a way that cheapens both the idea of simple but powerful software, and the still underexplored idea of biological inspiration.