Professor Stephen Hawking and Tesla chief executive Elon Musk have both warned about the potential dangers of Artificial Intelligence. Musk warned that A.I. could be “more dangerous than nukes” and “an existential threat.” Hawking feared that a machine with “full artificial intelligence” could “take off on its own” and guide the direction of its own development in a way that humans couldn’t keep up with. It would be like Skynet, the computer system in the Terminator movies that became self-aware and launched a preemptive strike on the human race.
It’s admittedly unsettling when some of this century’s greatest scientific minds make those kinds of predictions. Like the “doomsday” books and movies I wrote about yesterday, though, it’s also kind of exciting. We don’t really expect it to happen anytime soon, but the thought that it might does give us a little bit of a thrill. Evil computers have been a mainstay of the science fiction community for decades. Two of the most recognized, of course, are HAL 9000 from Arthur C. Clarke’s 2001: A Space Odyssey and Skynet from the Terminator franchise. There have been a host of others. In Star Trek: The Original Series, Captain Kirk and his crew dealt with all manner of out-of-control A.I.s, whether in the form of computers or androids. Kirk became quite adept at dealing with them. As the clip I’ve attached below indicates, he didn’t even have to shoot at them. He just found ways to talk them to death.
What is it that keeps pulling us back to the idea of evil computers? Is it the idea of intelligence without a conscience, or the idea of our own imperfections being replicated in our artificial creations? In other words, is it their inhumanity or their humanity that we fear? I guess we could ask what makes an evil computer evil. Skynet was supposed to have attained self-awareness before it decided to wipe humans out of existence. Self-awareness wouldn’t really be necessary for that. The bubonic plague bacterium destroyed a good many of us with no consciousness at all. Self-awareness would make the computer more frightening, though, because it would mean the machine tried to destroy humanity on purpose.
Whether a computer could actually become self-aware is a huge philosophical question. Humans not only think and feel, but we know we’re thinking and feeling, and sometimes we question our thoughts and feelings. Most of us link that conscious self-awareness to the idea of a soul. I’ve been reading a book called Is Data Human? The Metaphysics of Star Trek by Richard Hanley, in which he explores some of those questions. If I get any brilliant ideas from it, I’ll share them in a later post.
Why am I writing about this topic, you may wonder? Does this mean the Gogue entity in Intrepid Force: Invasion is a computer? Not exactly. Is he an A.I.? You’ll have to read the book to find out. I’ll see you tomorrow.
P.S. If all this talk of alien invasions, biblical prophecy, and crazy computers has whetted your appetite for science fiction drama, here’s the link to my book.