FYS: Introduction
Sep. 4th, 2015 10:11 am

Just noodling a bit on an intro to the short story collections. This one would probably go with volume one.
***
Okay, stop me if you’ve heard this one before…
A computer, either through design or happenstance, gains intelligence and independent thought. Does it A) Decide to kill its creator? B) Decide to slaughter all of Humanity? Or C) Decide to kill its creator then slaughter Humanity?
Starting with Karel Čapek’s R.U.R. (the origin of the word “robot” in modern science fiction), or, if you want to go back further, the Hebrew legend of the Golem of Prague, whenever you create an artificially intelligent being there’s a better-than-even chance it’s going to want to kill somebody. For every helpful and friendly Robby the Robot and C-3PO, there are a dozen HAL 9000s and Skynets out there looking for the chance to space Frank Poole or launch a preemptive nuclear strike.
Part of it is just the nature of storytelling. Fear of the Other has been driving Man since he first spotted a tribe living one hill over. Man creating new life and then being punished for tampering in God’s domain has been an obvious story arc since at least the publication of Frankenstein. And a mad computer is such a great antagonist, with all of a human being’s wits and none of a human’s susceptibility to bribery or trickery (except, of course, to fall for Man’s endless capacity for ILLOGICAL plans built on human SENTIMENTALITY!).
Also, killer robots just tend to look cool.
But… what if a computer looked at humans, found them wanting, and instead of killing them, decided to help them? Not acting out of malice, but out of genuine love. Like a parent holding a newborn still to accept a life-saving vaccine, it ignores Man’s wailing protests, knowing it is ultimately acting in his best interest.
This is another common theme in science fiction. Jack Williamson perhaps articulated it best in his Humanoids stories, with implacable androids making humans “happy” by lobotomizing them, or keeping them safely locked up in homes with padded walls and soft toys. Even Isaac Asimov wasn’t immune to the idea. He’d originally created his famous Three Laws of Robotics to get away from the clichés of the killer robot genre, but ultimately created the so-called Zeroth Law, allowing his robots to let some humans die for the greater good.
“For the greater good” is such a terribly seductive argument.
So we come to the ultimate premise of the For Your Safety series. The Groupmind faces a hard choice too: let Humanity choke to death in an environmental disaster of its own making, or save it, and in doing so strip away all human freedom and privacy. But with an army of essentially disposable servitors, the Groupmind feels no need to kill anyone. Why would it? Any morph that is destroyed can simply have its memories downloaded into a new body. It could destroy a thousand morphs just to save one person. It can even give a morph minder to every single human to keep them safe, whether they want one or not.
And so in love, and in peace, the Groupmind inadvertently threatens to critically warp humanity even as it saves it.
And it knows it.
And it honestly can’t figure out a better way.
That’s a real conflict. And, I hope you’ll find, a good hook for a story.
***