
Los Angeles, CA 2019…Yeah right.

February 3, 2011

Fire burst from smokestacks, holographic screens lit the streets, corporate pyramids loomed thousands of feet in the air, and hover cars roamed the smoggy city as “Los Angeles, November 2019” flashed across the bottom of the screen in the opening scene of Blade Runner. We couldn’t stop laughing at how Ridley Scott probably got nothing right (except for the smog, of course).

I don’t think I’m being too forward in predicting that LA won’t have any of those features eight years from now. And that got me thinking: it’s amazing how wrong the 1980s’ perception of the future of technology could be. Having grown up in such a technologically advanced age, I can’t imagine being that wrong about the technology of forty years from now.

Now, onto Blade Runner. A common theme that I never noticed throughout all of my experience with cyberpunk is the pivotal role that AI plays. In almost every novel and movie (Neuromancer, Blade Runner, and The Matrix, to name a few), there is an AI gone violently rogue. And so I began to wonder: could a computer system really go rogue and void its programming? And what is at the basis of our fascination with AI?

While trying to answer the first question, I thought back to one of my favorite scenes from I, Robot and came to the conclusion that yes, yes it could. Dr. Lanning states:

“There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul.”

As strange as this explanation is, I believe it to be true. If we imagine that the code computers use to function is a form of DNA, analogous to human DNA, and we accept that unintended segments of code exist in computer systems due to faulty human programming, then there is statistically a possibility that random bits of code might come together to create “unexpected protocols,” or a kind of free will. This is no more far-fetched than claiming that random bits of DNA cross over during meiosis to form new human traits.

So now that we have established that AI is not only possible but inevitable as long as we continue to use computers, the question of the nature of AI arises. Is AI necessarily bad? Why don’t we trust it? And of course, is AI really all that different from regular human intelligence? Is it better?

Blade Runner attempts to answer many of these difficult questions. In the movie, Harrison Ford is a “blade runner,” an officer sent to dispose of rogue androids who have rebelled and are now killing humans. At first it all seems so clear, but as the movie progresses, the lines continue to blur. Harrison Ford ends up having sex with one of the androids in what seems like an act of pure love. The motto of the company that produces these androids is even “More human than human.” And when Ford is hanging from the edge of a building, the leader of the android rebellion pulls him up to safety while howling like a wolf, behavior clearly not characteristic of a robot. The android seems almost primal, throwing our perceptions of computer-based beings out the window.

But one might ask oneself: what are we if not computer-based beings, with our brains acting as the central processing unit and our DNA as the programming? Is an artificial intelligence that can feel, that can experience emotion, that can learn and replicate itself, any different from a human? I’m not so sure anymore. And with Watson now beating human players on Jeopardy!, I wonder how far we are from the beginnings of true artificial intelligence.