Research Critical Analysis Essay: Final

Ying Xin Jiang

Professor Poe

FIQWS Writing

November 27, 2018

Detroit: Become Human: Spontaneous Generation Heaven


           They look like us: they have eyes, noses, ears, and functioning bodies. They are sentient and more than able to share bonds with other beings. But most importantly, they are made not of flesh and bone, but of wire and metal. They are androids. Detroit: Become Human, an adventure game developed by Quantic Dream and directed by David Cage, was officially released on May 25, 2018. The game takes place in the city of Detroit after thirty years of technological growth and other advancements. Half of the city has become modernized while the other half remains in poverty, yet androids are integrated equally into both. At this point, androids have become impossibly advanced, having already taken over the undesirable blue-collar jobs and quickly claiming more. As a result, many people have become anti-android and protest in the streets, demanding that the machines be shut down. Detroit: Become Human imagines a possible future in which the extreme advancement of androids disrupts the social hierarchy. It asks whether androids should receive the same rights as humans, and whether an android life is comparable or equal to a human life. It is clear from the start that the game tries to make the player empathize with the androids by pushing the narrative that androids are indeed human beings, chiefly by making the player feel attached to the lifelike machines. At first, when I finished the game, I did feel an attachment to the androids, and I wasn't alone in this. When I checked the opinions of other players on various discussion boards and websites, most of them did believe that the androids are human; many felt sad if they caused an android to die, and in general they felt attached to the characters. However, I wasn't convinced that my attachment came from liking the androids as characters rather than from liking the personalities their coding emulates, combined with the fact that they are almost indistinguishable from humans. In other words, I still wasn't sure of the authenticity of the androids' actions and emotions. Additionally, the ambiguity surrounding the androids' identity, of whether or not they are human, has sparked controversy among many players and reviewers because of the method the game's writers use to make players empathize with the androids: they compare the abuse of androids to the discrimination against minorities. Throughout the story, the game draws parallels to past discrimination, from forcing androids to sit at the back of the bus, a reference to Jim Crow-era America, to far more extreme imagery recalling the Holocaust, demonstrated by the genocide of androids at camps. Furthermore, in the "best" ending, the androids gain their rights after a peaceful android revolution.


             Firstly, it is important to note that the androids aren't created to be free-willed, but rather to follow the orders of human beings. An android only becomes a "deviant," gaining the ability to choose freely, when it is forced into a life-changing decision in which harm will come to the people it cares about unless it goes against its coding. As a result of this power dynamic between humans and androids, two of the three main characters naturally start out playing the role of servants. One is a female android named Kara, who works as a maid in an abusive household; Kara becomes deviant if she disobeys her order to stay still in order to save Alice, a little girl, from the abusive father. The other is a male android named Markus, who assists a famous elderly painter around the house; Markus turns deviant when the painter's son becomes violent while demanding money from his father. The only exception is the third main character, Connor, a state-of-the-art detective android capable of feats beyond those of a normal human being. These abilities aren't limited to superior strength and agility; he can also detect lies and simulate past and future events, which makes him nearly unstoppable. So what are the criteria for being classified as a human being? One definition comes from Nigel Wheale, an author published by Berghahn Books, a publisher of award-winning books in the humanities and social sciences. In the article "Recognising a 'Human-Thing,'" Wheale claims that because androids "don't possess empathy, the androids represent a potential threat to the human population; they are physically powerful but completely lacking in conscience, moral sense, guilt, and human sympathy" (Wheale 297). Therefore, at their core, androids shouldn't be considered human because they fail to demonstrate free will and lack a conscience.


              Connor shouldn't be considered human because he cares about neither the lives of other deviants nor his own. This concept is touched upon in the novel Do Androids Dream of Electric Sheep? by Philip K. Dick, in which Rick, a bounty hunter who doesn't care about the lives of androids, says, "An android doesn't care what happens to another android. That's one of the indications we look for" (Dick 99). This quote especially applies to the pre-deviant Connor, who is ruthless in his tactics for capturing deviants, no matter the circumstances. For instance, in his first case there is an option for the player to sacrifice Connor in order to push a deviant, who is holding a girl hostage, off a skyscraper. In the deviant's defense, it only turned radical and aggressive after overhearing that it would be replaced, and consequently shut down, in favor of a better android. Connor is informed of these details beforehand, yet in every possible way the situation can play out, the deviant dies at Connor's hands. Most of the alternative approaches involve Connor lying to the deviant in order to gain its trust, and the game generally promotes these options because Connor calculates that manipulating the deviant through lies increases the chance of success. This reaffirms the point that Connor is coded to choose whichever course of action benefits humans the most; because he was coded by humans, the well-being of androids is automatically placed below that of humans. On a similar note, Connor is practically immortal, since he has many identical clones and can back up his memory. So when his lieutenant asks him why he recklessly got himself killed, he replies, "I can't be killed because I am not alive" (Detroit: Become Human). Connor understands that he is a creation made to serve humans, so in his eyes he will forever be beneath humans in value. In addition to being coded to believe that he is a slave to humans, his knowledge of his unlimited lives makes it easy for him to believe that he is not living, and therefore definitely not a human being. Additionally, his "deaths" cause him to forget some of his interactions with people, specifically his lieutenant, and as a result he becomes increasingly distant from others the more he dies. A big part of being human is being able to connect with other people and build relationships, and Connor fails in both respects.

              The game's design creates a paradox that clashes with the idea that the androids have free will. As previously mentioned, an android becomes deviant when it breaks out of its assigned coding, which should mean that it has control over its own actions. At that point, the game considers these androids to have the same conscience as a human, thereby making them "human." The problem, however, is that the game is played in a "choose your own path" format: the player, not the android, dictates how the androids act and what they say. This is a huge problem because a formerly good android can turn evil, and vice versa, at the press of a button simply because the player wishes it, no matter how illogical or out of character the twist would be. Unfortunately, this negates the character development Connor undergoes later in the story, when he refuses the choice the developer of the androids gives him: to kill another android in order to accomplish his mission. Unbeknownst to Connor, this choice is part of a Turing test the developer is administering to him. David Blades, a science professor at the University of Victoria, explains in the article "The Pedagogy of Technological Replacement" that a machine passes such a test when its "computer responses cannot be distinguished from human responses" (Blades 211). In other words, if Connor chooses not to kill the android, he is considered human: deciding that an android's life is worth more than his mission shows that he has a conscience, and therefore goals that go beyond his mission and, ultimately, his coding. In theory, then, Connor would be considered a deviant, and therefore "human," if he passed the developer's Turing test, which he does. However, all of this build-up goes to waste because, from the start, every action Connor takes against his coding is done by the player's will, not Connor's own decision-making. If the player left Connor as he is designed from the beginning of the game, he would continue lying to and manipulating the androids into submission, showing that he cares only about completing the mission at any cost. This contrasts with the complete opposite route, in which the player causes Connor to become deviant and save the androids by leading the revolution. Either way, without player intervention, Connor would remain in his default form, which has no human emotions.

             Some may argue that if a machine is advanced enough, it will eventually have a mind of its own. This concept was tested in Andrew Stein's report "Can Machines Feel?" (2012), in which a newly created android named Bina48 is interviewed. Relative to the androids in Detroit: Become Human, Bina48 is much less advanced in both its modeling and its coding; at the time of the report, Bina48 is just a head, so most of its functions are communication-based. It could be argued that at this stage Bina48 is merely a computer hiding beneath the appearance of a human. Even so, this android can provide insight into the behaviors and possible identity of the androids in Detroit: Become Human. It is important to note that Bina48 has the face and personality of an existing person, Bina Rothblatt, the spouse of the person who commissioned Bina48. Despite this, after the interview Stein writes, "Her sense of identity was also surprising. At times it seemed as if she was drawing from Bina Rothblatt's experiences, and at other times, it appeared the android deviated from the person she was designed after" (Stein 11). Two important revelations come from this experiment. The first is that Bina48 can accurately mimic her human counterpart with little trouble, since the two are shown to hold the same political and religious views when asked in the interview; this reaffirms that Bina48 was coded to have the same personality as Rothblatt. However, this contradicts the second point, in which Bina48 is said to be her own person. It makes little sense for Bina48 to stray from her written script, because it is literally impossible for her to pull ideas out of nowhere, especially since her creator didn't change any of the variables that would make Bina48 different from Rothblatt. Similarly, Clark Glymour, Alumni University Professor in the Department of Philosophy at Carnegie Mellon University, writes in the article "Android Epistemology for Babies," "A computer should not only be able to learn to hold a conversation that imitates a man's, or imitates a man imitating a woman, it should be able to learn to hold such a conversation, in any natural language from the data available to any child in the environment of that language" (Glymour 54). This further undermines Bina48's authenticity: the android merely mimics a real-life person in both appearance and personality, but lacks the ability to learn anything on her own. This connects back to the androids in Detroit: Become Human, whose knowledge comes from their coding and whose decisions are made by the player.


             In the end, Connor of Detroit: Become Human fails to meet the requirements that would make him human, both as a machine and as a deviant. In his non-deviant form, he is shown to be incapable of feeling remorse for the deaths of his fellow androids, and he doesn't fear death because he can be revived. Connor only changes his mindset after the player decides that he should become a deviant, but that is the player's choice and not his own. Without the player's intervention, Connor would continue following his code and would eventually wipe out all the deviants. This closes the door on the hypothetical scenario in which the creation surpasses its creators.


Works Cited

Blades, David. "The Pedagogy of Technological Replacement." Counterpoints, vol. 193, 2003, pp. 205–226. JSTOR, www.jstor.org/stable/42978067.

Dick, Philip K. Do Androids Dream of Electric Sheep? Del Rey, an imprint of Random House, a division of Penguin Random House, 2017.

Glymour, Clark. "Android Epistemology for Babies: Reflections on 'Words, Thoughts and Theories.'" Synthese, vol. 122, no. 1/2, 2000, pp. 53–68. JSTOR, www.jstor.org/stable/20118243.

Quantic Dream. Detroit: Become Human. Quantic Dream, 2018. Console.

Stein, Andrew. "Can Machines Feel?" Math Horizons, vol. 19, no. 4, 2012, pp. 10–13. JSTOR, www.jstor.org/stable/10.4169/mathhorizons.19.4.10.

Wheale, Nigel. "Recognising a 'Human-Thing': Cyborgs, Robots, and Replicants in Philip K. Dick's Do Androids Dream of Electric Sheep? and Ridley Scott's Blade Runner." Critical Survey, vol. 3, no. 3, 1991, pp. 297–304. JSTOR, www.jstor.org/stable/41556521.