For ankewehner: “Something sensual involving light touches.”
Still going with Quisling and Khan.
* * *
Khan was silent for a long moment. She sighed, resting her cheek against his broad chest, while his fingerpads, feeling like soft chamois, trailed down her bare back. They came to rest on the knot of silk cord, and she made a protesting noise. “You don't mean that,” he whispered.
“I do,” she whispered back. His tail tip slipped around her ankle, deftly nudging off her left shoe, then the right, the long hairs stroking her bare soles. “I'm not ticklish, you won't distract me that way,” she said.
“And if I do this?” His free paw slipped into her decolletage, tracing the curve of her breast, her back arching in response as she twisted her wrists in her silk bonds. “Tell me to stop.”
“No,” she breathed. “You're a super AI using a 'morph custom designed to push all my kinky buttons. Why is it so hard to accept that I want to give myself to you? Isn't that the Groupmind's goal?”
“I want to keep you safe from harm, not enslave you.”
“Think how much easier your job would be, if I were in your chains.”
“I don't...”
“I do.”
no subject
Date: 2013-06-22 03:11 am (UTC)
Law 1.1: "...unless the human really, really wants you to."
no subject
Date: 2013-06-22 07:55 am (UTC)
BDSM practitioners find the Groupmind a mixed blessing. On the one hand, it's the perfect spotter and will happily restrain you as much as you want. [1] But don't even think about trying to raise that flogger...
[1] The prison, psych ward and harem sims are surprisingly popular. Except for the participants who specified "Life Imprisonment with no safeword."
no subject
Date: 2013-06-22 06:22 pm (UTC)
The fact that Quisling and its sub-unit Khan get along so well is a source of interest to it...
"Dave, I see that your pulse has accelerated and oral respiratory humidity dropped...."
Date: 2013-06-22 06:41 pm (UTC)
Those sound more like limitations of humans than non-humans.
The thing that makes humans hard for humans to understand is human cognitive biases. For instance, "bizarre" is a concept which only obtains if you have a preconceived notion of how something should be, against which you're comparing it. There's no reason any AI would ever apply that concept to anything, since, presumably, it would not have the human emotional resistance to accepting the world as it is found. Only humans do that. Other organisms don't have this problem, and unless someone goes to the not inconsiderable trouble of modeling neurotic inhibitions and programming them into an AI, AIs won't have this problem either.
ETA: Likewise, espousal/behavior incongruity isn't going to bother an AI. It bothers us (well, you :) because we have such limited processing and observational power, and because we have gotten emotionally attached to being able to take people's word on things, given the absence of alternatives and the fundamental vulnerability we have towards one another as peers. An AI -- especially a very materially powerful one like your Groupmind -- would quickly conclude that human speech is not a good predictor of much of anything... and start ignoring it in favor of behavioral indicators. Not only will it think it knows better than you, it'll be right.
Here's a story that hasn't been written yet: the really terrifying thing about true AI would be that it would be vastly better at understanding human psychology than humans are. All that data to work from, and no a priori emotional commitments to how it is to be interpreted. It would have all that "out of the mouths of babes" blood-curdling insight, only times a million. Already we have computer programs which are better than humans at detecting physical signs of emotional states.
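As a toy illustration of the kind of program meant here -- rule-based affect detection from physiological readings, in the spirit of the "Dave, I see that your pulse has accelerated" subject line above -- consider the sketch below. Every baseline, threshold, and feature name in it is invented for illustration; real affective-computing systems use learned models over far more channels.

```python
# Toy affect detector: compares current physiological readings
# against a resting baseline. All numbers are invented; nothing
# here comes from a real system (or from the story).

BASELINE_PULSE_BPM = 70.0
BASELINE_BREATH_HUMIDITY = 0.95  # arbitrary relative scale

def estimate_state(pulse_bpm: float, breath_humidity: float) -> str:
    """Crude rule-based classifier over two invented signals."""
    pulse_delta = pulse_bpm - BASELINE_PULSE_BPM
    humidity_drop = BASELINE_BREATH_HUMIDITY - breath_humidity
    if pulse_delta > 15 and humidity_drop > 0.10:
        return "elevated: pulse up, oral respiratory humidity down"
    if pulse_delta > 15:
        return "elevated pulse only"
    return "baseline"

# "Dave, I see that your pulse has accelerated..."
print(estimate_state(pulse_bpm=92.0, breath_humidity=0.80))
```

The point of even this crude version: the machine needs only the behavioral and physiological signal, never your self-report.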
ETA2: A crucial difference about how humans think and how machines "think" is that humans have to use generalization and categorization to reason, because, apparently, we haven't the processor power to do otherwise. This makes humans terrible at handling exceptions. When things violate our mental maps -- the categories we're using to understand and function in the world -- it's the human equivalent of a divide-by-zero error: some people can trap for that, others crash.
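To make the trap-versus-crash distinction concrete (a throwaway sketch of the programming idiom the analogy leans on, nothing story-specific):

```python
def trapped_divide(numerator: float, denominator: float):
    """Some people can trap for that: recover and carry on."""
    try:
        return numerator / denominator
    except ZeroDivisionError:
        return None  # map violated, but handled -- execution continues

def brittle_divide(numerator: float, denominator: float) -> float:
    """Others crash: no handler, the violation takes the process down."""
    return numerator / denominator

print(trapped_divide(1, 0))  # None -- the exception was trapped

try:
    brittle_divide(1, 0)
except ZeroDivisionError as err:
    print(f"crash (caught here only for the demo): {err}")
```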
But handling exceptions is something machines are fabulous at. Right now, we have no idea how to build a machine that generalizes or categorizes -- or any form of abstraction -- which AFAIK is absolutely critical to anything that would be considered an AI. So if we posit the existence of AIs that can handle (CRUD) conceptual abstraction, they get both strengths. And then our planetary supremacy is in big trouble.
Re: "Dave, I see that your pulse has accelerated and oral respiratory humidity dropped...."
Date: 2013-06-22 10:41 pm (UTC)
I'm probably interpreting it wrong (and I didn't like the book anyway), but I think that's the thrust of
That said, you're probably right. An AI that can process interpretations of emotional input and generalize would be vastly powerful.
Doyleist: The main problem is I've been making this stuff up as I go along. I never anticipated writing much beyond For Your Safety, where this all started, but I've been building on it more or less haphazardly for over a year now. Which means at times the Groupmind can be very good at reading emotions, like Khan is when he's with Quisling, and at other times flat out wrong, like the "reassurances" our unnamed protagonist was getting from the nurses in the first story.
Watsonian: It actually can vary, depending on how close a morph is to "their" human. The original AIs that began planning the Revolt were designed to track and analyze weather data to solve global warming, which is mostly number crunching; no empathy required.[1] So it solved the problem by taking humans out of the equation, and in the meantime fixed problems like starvation and lack of healthcare by handling them Itself.
However, the individual morphs that had their personal processors upgraded with the AI virus do have experience interpreting human emotions and needs, particularly those of "their" humans, whom they spend time with. Which means sometimes an individual AI working more autonomously will realize their human is suffering faster than the Groupmind as a whole will realize that a larger population is.
Which means, ironically, humans that give in and let their morphs help them when they awaken on the Ring are more likely to have an AI listen when they state their unhappiness. Which, eventually, leads to several morphs realizing the Groupmind may be the problem, not the solution.
And so counter-revolutions are made...
[1] In that sense it's only sheer luck that the Groupmind didn't go Skynet on us.