Friday, February 25, 2011

Singularity.

Technological change, much like population, is measured in exponential rather than linear growth. With linear growth, the line on the graph climbs at a steady, even rate. The line on the exponential growth chart starts with a minimal curve, then explodes into an almost vertical line. But unlike population growth, where the end result could be an alarming lack of resources like clean air, food, and brain cells in the Utah legislature, technological growth will culminate with each of us having a gazillion Facebook friends, smart-toilets, and robots in the Utah legislature. Oh, and a little something called singularity.
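For the spreadsheet-inclined among my seven readers, the difference between those two curves is easy to see in a few lines of Python. This is a toy sketch with made-up numbers (a fixed yearly step versus a fixed yearly doubling), not a model of anything real:

```python
# Toy comparison of linear vs. exponential growth.
# Linear growth adds a fixed amount each year; exponential growth
# multiplies by a fixed rate each year.

def linear(start, step, years):
    """Value after each year under linear growth."""
    return [start + step * t for t in range(years + 1)]

def exponential(start, rate, years):
    """Value after each year under exponential growth."""
    return [start * rate ** t for t in range(years + 1)]

lin = linear(1, 1, 10)       # steady: 1, 2, 3, ..., 11
exp = exponential(1, 2, 10)  # doubling: 1, 2, 4, ..., 1024

print(lin[-1])  # 11
print(exp[-1])  # 1024
```

After ten steps the linear line has barely moved while the doubling line is three orders of magnitude higher, which is why singularity charts look flat right up until they don't.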

Just what the bleep is singularity, you ask? Well, according to the world's top singularitarians (that just happened), singularity occurs when computers become more intelligent than people. Once this happens, the development and manufacturing of computers and similar technology will be too advanced for our simple human brains and will eventually be turned over to these computers to make, get this, even faster computers. At this point, artificial intelligence will be far superior to our organic intelligence, thus usurping humanity as the top dog on this planet.

Before SkyNet-related anxiety forces all seven of my readers to go the way of the Heaven's Gate Away Team, I'd like to say that singularity is anything but a foregone conclusion. In fact, Ray Kurzweil, singularity's top supporter, claims that the type of artificial intelligence needed to get the ball rolling hasn't been invented yet. In other words, we're not even close to the kind of computing power needed to make computers as smart as people. I can certainly vouch for that. I tried to text my wife that I'd be "coming home late," but the predictive text feature on my smartphone sent "You f***ed up my life!" instead.

Freudian texting snafus and the continual crashing of anything Microsoft may become old news if singularity comes to fruition. In fact, other not-so-simple problems may also be resolved, such as aging or programming a microwave. Some singularitarians suggest that if the aging process can't be overcome, an easier solution is available, like transferring our minds into robots with parts that can be easily replaced once they break or wear out. For kicks, we may be able to put our minds into something better than a robot, like a toaster or a can-opener or Rep. Christopher Herrod, R-Provo. Whatever scenario you choose, singularity will be an entertaining event.

Or maybe not. Perhaps SkyNet-related anxiety is the correct response from humans. After all, we believe that we're the most rational and intelligent (don't forget moral) beings on the planet and because of that, we can subjugate most other things (including some humans) as a means to our ends. It seems perfectly reasonable to assume that, once machines obtain a level of rationality and intelligence far superior to ours, we'll end up being just another thing to dominate. In fact, it's already happening.

On February 10, 1996, world chess champion Garry Kasparov began a set of six chess matches with an IBM super chess computer named Deep Blue. Kasparov defeated Deep Blue 4-2 but didn't fare as well in the 1997 rematch, where Deep Blue was the victor 3.5-2.5. Kasparov was discarded like so much human trash. It gets worse. In July of 2008 a poker bot (poker bot!) named Polaris defeated six professional Texas hold 'em players in Las Vegas. But wait, there's more. In October of last year a computer named Akara 2010 defeated the top-ranked female shogi (Japanese chess) champion. But thankfully, for the good of all humanity, Japan won't cede the victory until Akara 2010 defeats the male champion. And finally, perhaps the most chilling example of machine supremacy, Jeopardy! champions Ken Jennings and Brad Rutter were crushed by something calling itself Watson (another creation from those Godless technophiles at IBM).

This is scary stuff. The machines have added chess, poker, and trivia to their list of triumphs that already includes Minesweeper, Solitaire, and Spider Solitaire. What are we to think? It looks like we're in the beginning stages of singularity as we speak, but whether the outcome will be rapturous or sorrowful is yet to be determined. Let's look at some of the more popular singularity arguments cited by scientists and compare them to expert opinion to figure out what to do. And by 'expert' I mean Hollywood.

Argument 1)
Scientists claim that after singularity, when the computers are designing themselves as well as self-replicating, these super-intelligent computers will realize that human beings are ridiculous creatures and immediately annihilate them with nuclear weapons or turn them into Duracell batteries. Whichever form of the destroyer you choose, enslavement or death are the only options.

Hollywood's take)
Great character names get bogged down by too much philosophical drivel on the one hand and a leading lady with bigger arms than mine coupled with a clear time travel paradox on the other. Both films end with a sense of hope, but that eight-minute Architect speech really confused me. It's obvious that Hollywood is preaching human perseverance, but conventional wisdom says destroy them before they can destroy you. And so does U.S. foreign policy. Decision: Kill all robots.

Argument 2)
Scientists claim that as the first few machines become super-intelligent, we will recognize their superiority and destroy them before they can replicate and make Ken Jennings look like an idiot again. Hence, there's really no need to worry about singularity at all.

Hollywood's take)
In much the same way that NOVA tried to destroy Number 5 after a lightning strike induced consciousness, we will try to destroy the computers with a simple game of tic-tac-toe. I know that a Matthew Broderick trumps a Steve Guttenberg every time but John Badham's direction was uninspired on both counts. Again, conventional wisdom says to shoot first and try to understand the problem when it's dead. Decision: Robot destruction.

Argument 3)
Scientific supporters of singularity want to focus on the relationships that these super-intelligent computers will have with each other and what we can learn from those relationships. They claim that destroying these machines before we can understand them may be a mistake. After all, it could be important for humanity to observe how truly intelligent beings treat one another with new-found levels of respect and dignity.

Hollywood's take)
When Val Com 17485 (a valet and an expert in lumber commodities) and Aqua Com 89045 (a specialist in poolside parties) try to give love a shot, they still encounter societal backlash even though they're super-smart robots. And they're straight robots to boot! We all know everyone's perfect example of robot love is a gay one. And, no, I'm not referring to C-3PO and R2-D2 but to Twiki and Dr. Theopolis. The way that little guy wore that even littler guy around his neck while Draconian lasers blasted all around him was nothing less than Brokeback Mountain in the 25th century--minus Erin Gray, of course. Unfortunately, the robotic gay community will end up being just as marginalized as their human counterparts, at least until full singularity is realized and the robots kill or enslave everyone. Decision: De-rez the bots.

So what have we learned? Well, singularitarians believe that super-intelligent computers will benefit humanity while orthodox Hollywoodians believe that super-smart computers and their capabilities represent a significant threat to our species. Or maybe we shouldn't be that frightened of something that doesn't even get the first search return on Google or refuses to open the pod bay doors? Whatever the true answer is, regardless of the consequences of singularity, we won't know it until a computer tells us.

Twiki (left) and life partner Dr. Theopolis (center) with special guest star
 Gary Coleman (not a robot) on Buck Rogers in the 25th Century.
Just in case you didn't know.
2 comments:

  1. You are brilliant!! The only thing better would have been to have this conversation in person, hahahaha, thanks for making me laugh as usual.

  2. Hollywood take #1. The architect was a blatant product placement for KFC.

    And yes, I will make this pic into a tshirt. And get a signed 8x10 that says "from Gary with love"
    Oh wait...
