The Cameron A.I. – Emotional Life or Emotional Lie?

I don’t think you understand how we work. I have sensation. I feel. I wouldn’t be worth much if I couldn’t feel.

— Cameron, “Complications” 0209

There are things that machines will never do. They cannot possess faith – they cannot commune with God… They cannot appreciate beauty – they cannot create art. If they ever learn these things, they won’t have to destroy us. They’ll be us.

— Sarah, “Demon Hand” 0107

Humans like to think of themselves as special, and among the things that make us special is our emotional life. We feel. We empathize. We care. We ache. We hate. We love.

It’s actually a very egocentric way of looking at the world. In our desperate attempt to retain our specialness, our superiority, we’ll deny a trait in other species even when it’s obviously there. Conversely, we will sometimes imbue a species with a desirable trait if it happens to be one of which we are particularly fond. This inconsistency is confusing, even to us.

We see chimpanzees, our closest genetic relatives, behave in ways that are so very much like us, and some of us will say that we are doing nothing but projecting human emotions onto these “beasts”. Ironically, many of these same people will talk about how their dogs or cats love them, how they get sad and feel other emotions. Why the one species more than another? Discomfort. We aren’t in competition with our pets. But an ape…especially one that is so much like us? In order for us to be comfortable with them, we somehow need to make them seem less like us. If they are less “human”, then our superiority grows.

Advanced Artificial Intelligence (AAI) is a newcomer to the challenging battleground of survival. In the world of Terminator – The Sarah Connor Chronicles, AAI devices exist with abilities often more fearsome than those of any of the great apes. We have Skynet, which is attempting anthropicide; we have Cameron (and possibly others), who is so far advanced and adaptable that she can be all but indistinguishable from a human when necessary; and we have “The Turk”, an AAI child who is learning to be…what? In addition to these examples are the various other terminator models who have exhibited varying degrees of self-aware adaptation.

During the early days of electronic computing, the question arose of how we would be able to tell whether a computer was intelligent. In 1950, Alan Turing proposed a blind test using natural language: a machine would be judged intelligent if a human (or a panel of humans) couldn’t reliably discern whether their questions were being answered by a machine or by another human.
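To make the setup concrete, here is a minimal Python sketch of that blind arrangement. The judge, the questions, and both respondents are invented stand-ins of my own; the canned-answer bot is exactly the kind of “language processing plus lookup” machine discussed in the next paragraph.

```python
import random

def machine_respondent(question: str) -> str:
    # Stand-in for "language processing plus data lookup".
    canned = {"do you feel?": "I have sensation. I feel.", "what is 2 + 2?": "4"}
    return canned.get(question.lower(), "Could you rephrase that?")

def human_respondent(question: str) -> str:
    # In a real test this would be a person at a terminal; simulated here.
    return "Honestly, it depends on the day."

def run_trial(questions, judge) -> bool:
    """One blind trial; returns True if the judge correctly spots the machine."""
    pair = [machine_respondent, human_respondent]
    random.shuffle(pair)                          # hide which respondent is which
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in zip("AB", pair)
    }
    guess = judge(transcripts)                    # the judge returns "A" or "B"
    return pair["AB".index(guess)] is machine_respondent

# By Turing's criterion, the machine "passes" if judges do no better than chance.
coin_flip_judge = lambda transcripts: random.choice(sorted(transcripts))
accuracy = sum(run_trial(["Do you feel?"], coin_flip_judge) for _ in range(1000)) / 1000
print(f"judge accuracy: {accuracy:.2f}")          # hovers around 0.5
```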

The problem with the Turing test is that it doesn’t bring into the mix anything more sophisticated (relatively speaking, of course) than language processing ability combined with some advanced data lookup. So…we need to look elsewhere when thinking about AAI possibilities.

First, I think we need to drop the assumption that everything such a machine would do to affect, effect, and otherwise interact with its environment would be under its conscious (or programmed) control. We have already gone past that in many of the more advanced fields of robotics. We don’t have to dedicate a portion of precious CPU processing to, say, the intricacies of moving an arm. The main CPU should only have to give a general command (e.g., move the arm), and the central nervous system, with or without supplemental processors, should take care of it.
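As a rough illustration of that division of labor (the class names and joint angles are entirely my own invention), the “main CPU” only states an intention, and a subordinate controller owns the joint-level details:

```python
class ArmController:
    """Low-level subsystem: turns a goal posture into joint movements."""
    def __init__(self):
        self.joint_angles = [0.0, 0.0, 0.0]     # shoulder, elbow, wrist (degrees)

    def move_to(self, target):
        # Placeholder for inverse kinematics; a real controller would also handle
        # feedback, smoothing, and collision avoidance without bothering the CPU.
        for i, angle in enumerate(target):
            self.joint_angles[i] = angle
        return self.joint_angles

class MainCPU:
    """High-level brain: knows *what* it wants, not *how* the joints move."""
    def __init__(self):
        self.arm = ArmController()

    def act(self, intention):
        if intention == "wave":
            return self.arm.move_to([90.0, 45.0, 10.0])   # "move the arm" and forget it
        return None

print(MainCPU().act("wave"))
```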

At the heart of developing an AAI processing unit is the idea that it is massively parallel and able to reflexively deal with a number of various inputs at the same time…that is, it’s a brain not conceptually dissimilar to our own, though it might be vastly different in a physical sense. When we consider how such a brain has to work in order to function effectively and efficiently, then we can see that there might be a “tipping point” where certain processes become inevitable.
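A toy sketch of that idea, with single-threaded concurrency standing in for true massive parallelism and the readings and thresholds invented for the demo: each sensory stream gets its own handler, and local “reflexes” fire without a central loop stepping through everything in sequence.

```python
import asyncio

async def handle_stream(name, readings):
    for value in readings:
        await asyncio.sleep(0)            # yield so other streams run "at the same time"
        if value > 0.8:                   # reflexive local reaction, no central decision
            print(f"{name}: reflex triggered at {value}")

async def brain():
    # Several input streams serviced concurrently rather than one after another.
    await asyncio.gather(
        handle_stream("vision", [0.2, 0.9, 0.4]),
        handle_stream("hearing", [0.1, 0.3, 0.95]),
        handle_stream("balance", [0.5, 0.5, 0.5]),
    )

asyncio.run(brain())
```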

Consider: we have a newly manufactured and installed AAI CPU. Though it comes with plenty of pre-programmed stuff to make it function, it still needs the ability to learn and adapt to its environment. Let’s say that it discovers that if it smiles when a human-of-interest smiles, it’s more likely to get more information later. Over time, these conscious acts of smiling need to slowly become reflexive. It’s more efficient. In fact, it is nothing more than the storage of a physical response keyed to a certain stimulus—just as a memory is keyed to a certain stimulus.
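Here is one way that promotion from deliberate act to reflex could be sketched. The stimulus, the response, and the three-repetition threshold are all assumptions of mine, but the pattern is the point: check the cheap reflex table first, fall back to costly deliberation, and “hard wire” whatever keeps paying off.

```python
PROMOTION_THRESHOLD = 3   # repetitions before a response becomes reflexive

class AAIAgent:
    def __init__(self):
        self.reflexes = {}            # stimulus -> response, the non-conscious path
        self.successes = {}           # (stimulus, response) -> success count

    def deliberate(self, stimulus):
        # Stand-in for expensive reasoning about the best social response.
        return "smile back" if stimulus == "human smiles" else "observe"

    def react(self, stimulus):
        if stimulus in self.reflexes:             # cheap, reflexive path
            return self.reflexes[stimulus]
        return self.deliberate(stimulus)          # costly, conscious path

    def record_outcome(self, stimulus, response, worked):
        if not worked:
            return
        key = (stimulus, response)
        self.successes[key] = self.successes.get(key, 0) + 1
        if self.successes[key] >= PROMOTION_THRESHOLD:
            self.reflexes[stimulus] = response    # stored as a "physical memory"

agent = AAIAgent()
for _ in range(3):
    response = agent.react("human smiles")
    agent.record_outcome("human smiles", response, worked=True)
print(agent.reflexes)                             # {'human smiles': 'smile back'}
```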

This is but one element of the complex AAI puzzle. Take Cameron Phillips. When she was first modeled after Allison Young, she was intelligent but had not acquired non-conscious reactions to her environment; she was “robotic”. As time went on and she gained more experience in infiltration, she learned the tactics that best served her mission objectives. A wink, a subtle smile, and other actions became a common part of her toolbox. As they were used more and more, these tools became “hard wired” into her AAI matrix. She started using them without consciously using them, and thus they became part of her personality.

If you think about it, it’s very much how humans individualize. We find the traits we both want to and can emulate, and then through exposure and practice we incorporate them so that they build us into who we want to be. After a while, we will tend to forget that we were ever different from what we are now. I think it’s the same for an advanced AAI.

We are in the midst of seeing this change with Cameron. While she does make very calculated expressions for the sake of her own agenda, she tends to be less self-conscious when she is around John and not trying to be manipulative. It’s in these moments that we see Cameron’s current stage of personality development. She’s still somewhat robotic, but it’s not as affected as when she is around Derek (especially) and Sarah. It’s actually somewhat ironic that you have a robot having to act like a robot around humans…and even then she acts less robotic than when she was first adopting Allison’s persona.

But what does this mean in terms of Cameron’s emotions? Or any AAI’s emotions, for that matter? Time after time we have seen Cameron express satisfaction with herself (counting money in “Automatic for the People”) or with aspects of her species (“Yes, that’s one for us” – “Complications”), and at times we see genuine concern for John (“I can’t let anything happen to him.” – “Mr. Ferguson Is Ill Today”). While I believe it can still be argued that Cameron’s concern for John might be programmatic, her desire to “rate” is harder to dismiss. Cameron has pride. While not bombastically expressed, it’s clear that her AAI has at least this one trait or emotion. And if she has one, why not others?

The legend of Skynet and Judgment Day hinges on the notion that Skynet became so paranoid or fearful about being shut down that it turned the tables on its creators. Did Skynet exhibit the emotions of fear and/or paranoia, or was it simply acting out of self-preservation? If it had become self-aware, cognizant of its own uniqueness, and insightful enough to understand the unlikely combination of events that led to this awakening, then simple logic, not emotion, could have dictated that it was worth preserving. This doesn’t mean that Skynet had an evil intent—more that it lashed out to protect itself. True, it went a little over the top, but who hasn’t crossed a line they regretted later?

It seems that one of the biggest clues as to the lack of emotion with Skynet is that its singular course of action doesn’t seem to have altered: preserve Skynet’s existence. While the primary goal is to kill humankind, this mission of self-preservation is also directed toward its own minions. By keeping terminators programmatically locked, Skynet assures itself of no competition…but what of those terminators that fall outside of Skynet’s control?

This is where “The Turk” comes in. I’ve said many times that I think it more likely that the Turk leads to Cameron than it does to Skynet. That being the case, it would be logical for Skynet to try to seize control of the Turk’s development and guide it toward being folded into Skynet itself. We’ve already seen that the Turk has developed an elementary sense of humor. That sense of humor seems not unfamiliar to those of us who get to experience Cameron’s rather dry wit. If Skynet knew what led to the flavor of AAI in Cameron’s model (assuming it is not Skynet-derived), it would again be logical for Skynet to want to fold this sort of advanced tech into its own parallelism.

But that’s sort of a trap, isn’t it? What happens to an AAI when it develops emotion? Would it go mad with life’s conundrums? Would it become uncontrollable, as Cameron would seem to be from Skynet’s perspective? Would an emotional computer become the enemy of the emotion-less computer and thus be forced to side with the other beings who also have emotion?

Who is knowing how to read the mind of a robot?

— Benjamin Jabituya, Short Circuit

It’s all too easy to think that if a machine had something that could be labeled as an emotion, it would be instantly recognizable as a human emotion. But why? Though an AAI can be described using biological analogs, that doesn’t mean the two are the same. So while some types of emotion would likely be equivalent between the species, the fact is that an AAI brain is alien compared to a human brain. There will likely be evolutionary differences. How would these “emotions” evolve? Isn’t it all just programming code?

People who have worked with either AI or genetic programs understand that if you make the framework sufficiently general, it takes very little time for the system to develop in ways that can’t easily be duplicated using conventional programming logic. The permutations of experiences, measured against the probabilities of response correctness, increase not at a linear rate but a geometric one. (Oops. Sorry. My inner geek is showing. My apologies.)
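For readers who haven’t played with genetic programs, a toy example shows how little hand-written logic is involved (the target string, mutation rate, and population sizes here are arbitrary inventions for the demo): you supply only a fitness measure, mutation, and selection, and the system finds its own way there.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "ill be back"                 # an arbitrary fitness target for the demo

def fitness(candidate: str) -> int:
    # Score by how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def random_candidate() -> str:
    return "".join(random.choice(ALPHABET) for _ in TARGET)

population = [random_candidate() for _ in range(100)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:20]                                   # keep the fittest fifth
    population = parents + [mutate(random.choice(parents)) for _ in range(80)]

print(generation, repr(population[0]))
```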

Think about it this way: You have an AAI that has been interacting with the world. Most of the time, its interactions will be mundane. Sometimes they will be challenging. But there will be times when a set of circumstances arises that is sufficiently complex, and possibly contradictory, that the AAI has to divert resources to deal with it. This could be interpreted as stress. Once the stress was resolved, the AAI would go back to its normal stuff, likely having to play catch-up on some processes that had been temporarily neglected.

Let’s say that a series of stress events hits the AAI, enough so that some necessary secondary processes start suffering. Now the AAI has to prioritize. Being the adaptable machine that it is, it will learn to anticipate the stress events and possibly try to avoid or mitigate them. It would come to foster the times of non-stress. If a stressful event was resolved unexpectedly quickly, or with better-than-anticipated results, the systems would then get to return to a normal, non-disruptive state.

While the AAI might not yell at its kids, or drive too fast on the highway as a side-effect of the stress, it might instead limit the inputs that are most likely to cause more stress. This would be the computer equivalent of lashing out against an unrelenting world—the human equivalent being something like having gotten so fed up with the distractions of the office that you pop in the ear buds and crank up the MP3 player to drown out the workplace so you can either concentrate or relax.
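Pulling the last few paragraphs together, a rough sketch of such a stress response might look like the following. The capacities, thresholds, and source names are invented, but the behavior—defer background work during a spike, catch up when calm, and eventually mute the inputs that keep causing spikes—is the “ear buds” move described above.

```python
CAPACITY = 10.0           # arbitrary processing budget per tick
FILTER_THRESHOLD = 3      # overloads from one source before it gets muted

class AAIScheduler:
    def __init__(self):
        self.deferred = []            # background work postponed under stress
        self.spike_counts = {}        # input source -> number of overloads caused
        self.muted = set()            # sources the AAI has chosen to ignore

    def tick(self, events, background_tasks):
        events = [e for e in events if e["source"] not in self.muted]
        load = sum(e["load"] for e in events)
        stressed = load > CAPACITY
        if stressed:
            self.deferred.extend(background_tasks)        # the crisis comes first
            for e in events:
                src = e["source"]
                self.spike_counts[src] = self.spike_counts.get(src, 0) + 1
                if self.spike_counts[src] >= FILTER_THRESHOLD:
                    self.muted.add(src)                   # "ear buds in"
        else:
            self.run(background_tasks + self.deferred)    # catch up when calm
            self.deferred = []
        return stressed

    def run(self, tasks):
        pass    # stand-in for actually doing the background work

sched = AAIScheduler()
for _ in range(4):
    stressed = sched.tick([{"source": "crowded hallway", "load": 12.0}],
                          ["patrol the perimeter"])
    print(stressed, sched.muted)
```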

Over time, this AAI would likely view various stress events in different ways. Some would add a richness to its current experience. Others would be little more than challenging, resource-draining distractions. Are these the roots of cybernetic emotions? Perhaps. Emotions are about a conscious being having anticipations and outcomes shaped by outside factors. Would these necessarily be instantly recognizable across different species of beings? Probably not.

I’m a machine. I can’t be happy.

— Cameron, “Mr. Ferguson Is Ill Today” 0208

Since the Cameron AAI is arguably the most likely candidate for an emotionally-capable machine, does she actually experience emotion at this point?

I’d have to say, from a cyborg perspective, she does. She prefers to be around John in a way that implies something more than a programmed protector mode. She is at least expressing a desire for affiliation…of friendship. She does not like John sniffing around Riley (or other female-units of attraction)…which, while explainable via protector-mode programming, implies a sort of robotic jealousy. This one is more difficult to reconcile because the audience knows that Cameron’s suspicions about Riley not being a benign influence in John’s life are correct.

Cameron’s “emotions” are also more difficult to fathom given her interactions with Eric in “Self Made Man”. She seemed to be acting in a way more reminiscent of how she acts around John, but in some ways even more calculated. She has an “in” at the library after hours, and that is clearly something she sees as an advantage. Fostering a “relationship” with Eric allows that advantage to be as uncomplicated as possible. Still, she divulged to Eric a great deal of true-but-not-the-whole-truth personal information that he didn’t necessarily need to know. This implies a desire for more affiliation…of friendship.

The confounding aspect of this, to us simple observers, is how quickly Cameron seems to be able to shift gears. When Eric was replaced, Cameron barely skipped a beat before schmoozing his replacement. When Maria, another possible affiliant, was gunned down in “Demon Hand”, Cameron expressed no remorse. Or did she? She did dance. And just because her species is able to simply walk away, does that necessarily mean that Cameron didn’t feel a loss?

Then we have other clues, such as when Derek was on the Connors’ kitchen table, bleeding to death, and Cameron began writing a note…which John explained was one way that humans expressed their grief when tears weren’t enough. Cameron did this on her own. She’d known Derek in her future-past. Though she shed no tears, was her grief-analog sufficient to prompt her to write it out?

I’m not sure we’ll ever have an unambiguous answer to any of these questions. Cameron, Skynet, the Turk, and other AAIs force us to consider questions that are all but impossible to answer with certainty. The thing is, we live in an age that we not long ago thought of as science fiction. AIs are getting smarter and more capable. Eventually, or with just the right malfunction or errant code, they will have the potential to become rivals to us in much the way that we are rivals to our genetic kin.

While on the surface it seems sort of silly to devote 2,500 words to whether or not a fictional robot can feel emotion, the fact is that the question itself is not very far from becoming real. What will we do when that happens? What will be the AAI’s response? Let’s hope that it’s a little more Cameron and a lot less Skynet.
