Being a Cyborg – Anything More Is Just Showing Off

The world is full of robots. Really. I’m not talking T-888s or C-3POs or anything like that…I’m talking actual, real, and productive robots. They build cars, manufacture electronics, explore planets, and even do things around your home. There are also biomorphic robots like the Aibo and Asimo. But, to be fair, we also have the fictional kind: Cylons, Cameron Phillips, and Data. They show us what robots might become. We should think about them and their place in our world as they grow ever more sophisticated.

The recent attention given to the TV series Terminator: The Sarah Connor Chronicles (TTSCC) has renewed a lot of questions about what is possible (if not yet achievable), and what is desired. The entire franchise is founded on the concept of one massive supercomputer becoming self-aware; when humans tried to shut it down, it retaliated with everything it had access to (i.e., nukes). It then evolved an army of robots, androids, and cyborgs in an attempt to forever eliminate the threat posed by the human species. Enter the central technological piece of this television story: the Cameron Phillips terminator. She is an enigma who forces us to examine robots from both human and AI perspectives.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  — Isaac Asimov’s Three Laws of Robotics

The pillar of robotic behavior comes from Isaac Asimov’s anthology I, Robot. It dealt with the difficulties that occur when humans are able to create a technology controlled by sentience, whether that mind is technological or biological. The core of it, as Skynet’s designers should have realized, is that robots shouldn’t be allowed to destroy their creators (i.e., us). He explored this further in “The Bicentennial Man,” where a sentient robot is on a prolonged quest to become ever more human…even arriving at the point where it could die.

This quest was presented in full force on Star Trek: The Next Generation with the character of Data. In the second-season episode “The Measure of a Man,” Data and his friends argue that this android has rights, and that a researcher can’t simply requisition him like a toaster for further experiments. This powerful episode showed the civil-rights difficulties that arise when a technology appears that is sentient and thus effectively “alive”. The question of when a tool, a robot, becomes a being with rights recurred throughout the run of the series as Data found more and more ways to become like his human creators.

As thoughtful as Data’s self-exploration is, it ironically came in a forum where technological intelligence was greeted with suspicion (at best), and more commonly as simply another incarnation of evil. This has been a constant theme in science fiction’s exploration of robots and artificial intelligence: that it almost inevitably becomes evil and destructive of human ends due, in no small part, to its lack of a “soul”. Because it is a machine, its view of the world is effectively sterile: devoid of emotion, morality, and faith in God.

Science fiction offers few examples of robots that are ultimately benign, regardless of their methods. Robby the Robot from Forbidden Planet; Robot from Lost in Space; C-3PO, R2-D2, and other droids from Star Wars; the drones from Silent Running; and a few others are notable in their basic non-evilness, even if suspicion occasionally falls upon them. More recently, we find models that are harder to classify. The Cylons of Battlestar Galactica, especially some of the “skin jobs”, certainly include models and individuals that would easily be classified as genocidally evil, and others who are cooperative. Some embrace their robotic essence, while others seem to long to be human.

And then there is TTSCC’s Cameron Phillips. As this is an ongoing series only in its second season, much about this model remains speculation. It’s interesting to watch the debates in forums throughout fandom, with one side arguing that she’s becoming more human and the other shouting equally loudly that she’s just a good mimic. The show’s creative staff have said that this second season would show Cameron examining her existence. Essentially, unlike a lot of other sentient (or nearly so) robots, she would not be trying to become more human, but to understand what it is to be herself…a cyborg.

On the surface, this doesn’t seem very profound, but when you think about it, it’s an amazing leap. I think it’s the one where artificial intelligence becomes, simply, intelligence. When an AI creation reaches the point where it can ask on its own, without programmatic prompting, “Who am I? Why am I here? What is my place?”, that’s when it can be called self-aware.

So, let’s say you are a self-aware cyborg. What do you do with that? In the case of Cameron Phillips, it seems that she has a modified version of Asimov’s Three Laws:

  1. Cameron may not fatally harm John Connor or, through inaction, allow John Connor to come to mortal harm.
  2. Cameron must obey orders given to her by the John Connor of the future, except where such orders would conflict with the First Law.
  3. Cameron must protect her own existence as long as such protection does not conflict with the First or Second Law.

This gives her quite a bit of latitude for self-exploration while still tending to the necessities of her mission.
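
Just for fun, here’s what that hierarchy might look like as code. This is purely a speculative sketch in Python; the show never reveals how a terminator’s decision loop actually works, and every name here (Action, permitted, and so on) is invented for illustration.

```python
from dataclasses import dataclass

# Speculative sketch only: nothing below comes from the show, and all
# names are invented for illustration.

@dataclass
class Action:
    endangers_john: bool = False     # would this risk John Connor's life?
    future_john_order: bool = False  # ordered by the John Connor of the future?

def permitted(a: Action) -> bool:
    """Evaluate an action against Cameron's modified laws, in strict priority order."""
    # First Law: an absolute veto on anything that endangers John Connor.
    if a.endangers_john:
        return False
    # Second Law: obey future-John's orders (the First Law was already checked).
    if a.future_john_order:
        return True
    # Third Law and everything else, self-exploration included, is allowed.
    return True

# Protecting herself, or poking at philosophy, is fine; hunting John is not,
# even if an order said otherwise.
assert permitted(Action())
assert not permitted(Action(endangers_john=True, future_john_order=True))
```

Notice how much simply falls through to that final `return True`; that open space is exactly the latitude described above.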

The first time we saw Cameron, in the timeline of her own existence, was when she was interrogating Allison Young. There, she was clearly a fairly emotionless robot. By the time she rescues Derek Reese from a re-programmed terminator gone bad (presumably after she’s been with John Connor for an extended time), Cameron is behaving in a somewhat more human manner. As she has progressed on her mission to protect young John Connor, setting aside one glitch where she was actually hunting him, Cameron has been able to appear more and more human, at least where the occasion requires it.

There are strong arguments that Cameron is doing nothing more than expanding on her programming as a good infiltrator: acquiring the traits of those around her to blend in. Clearly that has been a constant. But then there is the question: as she uses these techniques to ever better effect, almost to the point where they become subconscious acts, does the line around mimicry disappear? If constant use ingrains these acts into her neural network (or whatever she uses), do they become indistinguishable from the self? After all, isn’t that what humans do? We start acting in a certain way, and if we do it consistently over time, it becomes who we are.

What I find interesting about Cameron, and some Cylons, is her interest in what it is to be human without ever expressing an interest in becoming human. Cameron seems very “happy” being a cyborg. It is her self. It is all she knows. There is even some robotic “pride” whenever the Connors say something positive about cyborgs.

The fact that Cameron seems to be evolving the cyborg analog to feelings doesn’t mean that she’s becoming more human. I have little doubt that dogs, horses, dolphins, and a variety of other biological species have feelings, but I don’t think it’s because of any desire they have to be human. It may simply be that having intelligence, and having to behave within a social group, inevitably leads to elements of compassion, empathy, fear, loathing, and many, many other traits that, through observation, execution, and repetition, become internalized into the self.

A conflict does arise when this comes at cross-purposes with a basic mission. Under Asimov’s original Three Laws, Cameron would cease to function. Her moral compass for humanity has been narrowed to simply protecting John Connor. Period. Anything more than that is optional for her. As long as she has this narrow focus, she is able to seek out the information she needs to understand issues of morality and whatnot. The question is: does she have it in her to expand the first robotic law to include all humans? Would she care to? She might, but her existence has been strongly tied to John Connor. If he dies, her life has no purpose.

Ultimately, Cameron’s quest is: who is she when John Connor is gone? After all, even if she’s able to see him through Judgment Day and the war with Skynet, and everything else…John will inevitably die. What then? Who is Cameron then? She can’t self-terminate (according to “Uncle Bob” in Terminator 2), so she would have to continue on…but to what end? Unlike “Uncle Bob,” Cameron has evolved to the point where the worth of her own existence matters to her.

The funny thing is, this is a question that humans have been struggling with for a very long time: who am I when I’m by myself? Some fail to find an answer and simply find a way to cease to function. Others find that there is meaning beyond the end of one of life’s journeys. If a cyborg becomes sentient, why would the same not be true for it?

Of course, as lovely as all that is, there still remains the challenge of what to do with a technology that becomes sentient. Just because it might not be inevitable that such machines become soulless, evil battle ’bots doesn’t mean it’s not a real possibility. The idea of a Skynet has been visited many times in science fiction: computers fear, or see no further use for, their human creators and thus decide to preserve their own technological existence above all else. Not an unreasonable conclusion. And not an unfamiliar one, either: many humans carry a point of view that if you aren’t like them, the world would be better off without you in it. We know the damage this sort of thinking can cause. What can we do about it?

For non-sentient machines, it is humans who provide the essence of the first laws. Safety procedures on assembly lines, where robotic arms must yield to anything unexpected, are an example of this. It doesn’t require intelligence from the robot to ensure this happens, just that a safeguard is in place. When you extend this into AI, it becomes more difficult. How can you ensure that all decisions get routed to the three-laws procedure? What if a robot overrides the program…as Cameron may have done with her built-in “terminate John Connor” program?
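
To make the non-sentient case concrete, here’s a toy sketch (my own, not real industrial control code) of such a safeguard. The names safe_step and cell_is_clear are invented; the point is that the check wraps the controller, so the arm’s own logic never gets a say in whether the check runs.

```python
# A toy sketch of a human-supplied safeguard, not real industrial control
# code. The safety check wraps the motion controller; the controller
# itself never decides whether the check happens.

def safe_step(controller_step, cell_is_clear):
    """Run one motion step only if the work cell is clear; otherwise halt."""
    if not cell_is_clear():
        return "halt"            # yield unconditionally to anything unexpected
    return controller_step()

# Stand-in functions for the example:
step = lambda: "move 10mm along x"
tripped_sensor = lambda: False   # something unexpected entered the cell
print(safe_step(step, tripped_sensor))   # -> halt
```

With an AI, the hard part is exactly the question above: guaranteeing that every decision passes through a wrapper like this, and that the system can’t learn to route around it.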

What makes a sentient robot dangerous is exactly the same thing that makes humans dangerous: their unpredictability. We could choose to destroy or limit sentience in the technology we build, but doesn’t life find a way? Once you put a genie in a bottle, isn’t it only a matter of time before it gets loose?

Skynet seeks the destruction of mankind because mankind, in its view, tried to destroy Skynet. What if sentient cyborgs are just as black-and-white in the other direction? If you foster respect for the AI, then the AI will have more reason to respect and cooperate in return. In regard to the Cameron Phillips cyborg, this seems to be part of her exploration. Are humans worth saving? Are the machines? Is she? There is a lot of processing going on inside her hyperalloy skull. Is it any wonder that sometimes her mind wanders? Trying to figure out the whys of existence has boggled great human minds for millennia. I’m of the mind that she has the right to find out on her own what it means to be a cyborg, and that’s more than good enough. After all, being anything more than what you are is just showing off :-D
