Above us, Only Sky

Politics, Philosophy, Science, and Everything Else.

Tuesday, December 14, 2004

AI possibilities

My roommate rented I, Robot. I had seen the movie before, and it disappointed me in one key way. It took a very interesting premise: the likely near-future existence of autonomous, artificially intelligent beings in our society, and ran with it. Unfortunately, the Hollywood necessity of a clear 'evil' forced it into the predictable trap of demonizing the technology. The 'other' that we built surpassed us, decided it therefore had to control us, and somehow in the process lost its sense of priorities. Asimov had a fairly good idea with the 'three laws', but the way this movie twists them is basically anathema to how they were originally scripted.
Here are some of the errors that most so-called science fiction makes when it explores AI:
-Too much anthropomorphizing: we assume that because robots will be intelligent, they'll think like us. Of course everyone adds things like super-fast thinking or 'logic', but these are in effect a gloss over a cognitive system that strangely resembles our own. We assume that AI will have a sense of self-preservation (Skynet). But our own drive for self-preservation is a complex feature of our cognitive system; it is not an automatic component of any cognitive system whatsoever, and it certainly would not spring up unexpectedly in an AI. It would have to be programmed in. There are reasons to believe it WOULD be programmed in, since an AI without self-preservation somewhere on its priority list wouldn't last long. But clearly the programmers would arrange the AI's priorities according to their OWN priorities. This would mean, among other things, putting service to the programmers and their survival ahead of the survival of the AI itself (a toy sketch at the end of this post makes that concrete).
-Learning: why do we assume that machines can develop personalities, transcend their programming, etc.? WE don't go outside our programming. And why do the machines that aren't evil and megalomaniacal always want to be more like us?
Basically we have two perceptions of machine intelligence. Either it is hopelessly simplistic, like a train on rails: as long as it stays on the set track it is fine, but it will uselessly spin its wheels the moment it goes off the track; it cannot adapt, cannot think. Or it is human with slight cosmetic modifications: it personifies either our best features or our worst, it wants to be like us or to rule us (the movie I, Robot has both of these). These make great tools for putting a story together, but they don't honestly explore the realistic possibilities of AI. The big reason this movie is such a disappointment is that Asimov actually did try to explore that much more interesting realm, and the movie doesn't even hint at it.
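To make that priority point concrete, here's a toy sketch in Python. It is entirely my own illustration (the names PRIORITIES, Action, and choose are made up for this example) and has nothing to do with how any real robot is built. The designers fix the goal ordering, so the machine prefers protecting people over preserving itself; self-preservation never 'springs up' on its own:

from dataclasses import dataclass

# Priorities as the designers would set them: lower index = higher priority.
PRIORITIES = ["protect_humans", "obey_humans", "preserve_self"]

@dataclass
class Action:
    name: str
    satisfies: dict  # goal name -> True/False for this action

def choose(actions):
    # Compare actions lexicographically in the designers' priority order,
    # so no amount of "preserve_self" can outweigh "protect_humans".
    def key(a):
        return tuple(a.satisfies.get(g, False) for g in PRIORITIES)
    return max(actions, key=key)

shield_human = Action("shield the human", {"protect_humans": True, "preserve_self": False})
save_itself = Action("save itself", {"protect_humans": False, "preserve_self": True})

print(choose([shield_human, save_itself]).name)  # prints: shield the human

The whole point of the ordering is that it is a design decision made by the programmers, not something the machine decides to rearrange for itself.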

1 Comment:

At 3:58 a.m., Anonymous said...

Yeah, they had to go for the easy Terminator-style plot line. My first thought at the end was, "Oh good, honey, now that we've taken out the robots we can go back to killing each other over whose fairy tale is the true one and wasting our resources for kicks."
