There is a particular scene in I, Robot that raises an interesting issue. Detective Spooner is telling the story of how he lost his arm. He was in a car accident and found himself and a young girl underwater, about to drown. Spooner was saved by a robot that was passing by. The reason given for why the robot saved Spooner and not the girl was that, statistically speaking, Spooner's chances of survival were higher than the girl's. That makes sense, as a robot would make decisions based on statistical probability. But there is a problem: in the flashback, we clearly hear Spooner shout, "Save the girl!" to the robot. That was a direct order from a human to a robot.

At first I thought that this was not a violation of the 2nd law, because if the robot had obeyed the order, Spooner would have died, and the 2nd law of robotics cannot override the 1st law. But the problem with this is that, if a Sophie's-choice situation counts as breaking the 1st law of robotics, then an emergency robot would never be able to perform triage (as the robot that saved Spooner clearly did). If choosing to save one human counts as harming another human, then a robot would not be able to operate effectively in an emergency such as this.

All this leads to my question:
Did this robot break the 2nd law of robotics, or is there some nuanced interpretation of the laws that could explain its behaviour?

SQB
Magikarp Master
    Note that although the plot of the film I, Robot is not in any way based on the original Asimov stories collected under the same name, those stories do often address very similar issues to the one here, frequently centering around interpretation of the Three Laws and their interactions. One specific similarity is with the plot of "Runaround", the second story in that collection. – Daniel Roseman Jul 12 '17 at 12:51
    Well, humans do similar calculations too. They just put a lot more weight on the probability of "can't live with myself if the girl dies" :) In your interpretation of the 2nd (and 1st) law, robots couldn't ever do anything but melt on the spot - there's always someone that dies when they're doing something else, isn't there? As for Asimov's take, do take mind that the laws in English are just simplified translations - they don't cover even a tiny fraction of the actual laws coded in the robots themselves. Language lawyering on the English version of the law is irrelevant :) – Luaan Jul 12 '17 at 13:38
    To expand on @DanielRoseman’s comment, in the books this situation would not have been resolved the way it was in the movie. The movie portrays the robot’s thinking as cold and calculating, saving Will to maximize the chances of saving someone. Asimov’s robots were incapable of such calculation. Being presented with this dilemma, often even in a hypothetical, would be enough to fry a robot’s brain. For example, in “Escape!”, the robot needs to be coached carefully through just thinking about a problem where harm to humans is the “right” solution. – KRyan Jul 12 '17 at 13:42
    @Luaan In the books, humans perform such calculations. Robots cannot. They are not trusted to perform such calculations, and are literally designed such that these problems destroy them to even think about. – KRyan Jul 12 '17 at 13:42
    This problem is similar to the trolley problem. There's just no way to save everyone, no matter what. – Arturo Torres Sánchez Jul 12 '17 at 14:10
    @KRyan In the novels, robots have developed to think outside of hurt/no hurt duality, and actually choose the outcome that is slightly better. See Robots of Dawn. – Gallifreyan Jul 12 '17 at 16:44
    One of the most fundamental problems with the three laws is that the concept of 'harm' is subjective, and as such, nearly impossible to codify. – The Evil Greebo Jul 13 '17 at 13:21
  • Comments are not for extended discussion; this conversation has been moved to chat. – Null Jul 13 '17 at 13:24
  • Do note that the film (which this question is about) has the three laws in it. No need to bring the novels into the question, as the film is clear enough on its own. – Stijn de Witt Jul 14 '17 at 22:21
  • Links on weighted decisions which I imagine would be a small part of some more complex algorithms for AI robotics. http://www.cs.toronto.edu/~hojjat/384f06/Lectures/Lecture21.pdf http://www.aihorizon.com/essays/generalai/decision_tree_learning.htm A drop of this type of AI in today's context would be vehicle manufacturers and software companies (Tesla, Uber, Google, Apple, GE, Ford, Mercedes Benz, etc) all trying to overcome the hurdle of autonomous vehicle safety. The question being do I save the brilliant young PhD passenger, or the young child in the middle of the freeway? – Jacebot Jul 15 '17 at 04:14

10 Answers


The Second Law states

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

So you're saying that, by refusing to save the girl as ordered by Detective Spooner, the robot has broken that law? The only way it can't have broken the Second Law is if the exception clause comes into play and obeying the order would conflict with the First Law.

The First Law says

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Obviously, the robot has not directly injured anyone so that is out of the picture. The robot has not allowed the girl to come to harm by inaction as the robot is acting by helping Spooner (i.e. it isn't just standing there watching). However, if, as you say, it was statistically more likely that Spooner would survive with some help, then obeying Spooner and helping the girl could be construed as letting Spooner come to harm by inaction.

So the Second Law is not broken as it's over-ruled by the First Law. The First Law is not broken as the robot did its best to save a human.
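This reading amounts to a small decision procedure: compare First Law outcomes first, and let a Second Law order matter only when obeying it is First-Law-neutral. Here is a minimal sketch in Python; the 45% and 11% figures are the ones Spooner quotes in the film, while the `Person` structure and function names are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    survival_chance: float  # the robot's estimate, if it intervenes

def choose_rescue(victims, ordered_target=None):
    """First Law dominates: act to save whoever is most likely to
    survive. A Second Law order is honoured only when obeying it does
    not produce a worse First Law outcome."""
    best = max(victims, key=lambda v: v.survival_chance)
    if ordered_target is not None and \
            ordered_target.survival_chance >= best.survival_chance:
        return ordered_target  # obeying costs nothing under the First Law
    return best  # order discarded: the First Law overrides the Second

# The film's numbers: Spooner at 45%, Sarah at 11%.
spooner = Person("Spooner", 0.45)
sarah = Person("Sarah", 0.11)
print(choose_rescue([spooner, sarah], ordered_target=sarah).name)  # Spooner
```

Note that the order is not ignored wholesale: if Spooner had ordered the robot to save *him*, the sketch would obey, because that order coincides with the best First Law outcome.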

Lightness Races in Orbit
Darren
  • I get what you are trying to say, but I think the distinction is not sufficient for dealing with this problem. The fact that the robot is saving Will does not change the fact that it failed to save the girl. The question now becomes "does this count as inaction?" to which I would say "yes". If a robot that is busy with a task fails to save a human who is about to be harmed, the fact that the robot is not technically "inactive" is irrelevant. So, assuming a robot can act in these Sophie's choice situations without violating the 1st law, how can it ignore the 2nd law in that same situation? – Magikarp Master Jul 12 '17 at 12:26
  • @MagikarpMaster "The fact that the robot is saving Will does not change the fact that it failed to save the girl. The question now becomes 'does this count as inaction?' to which I would say 'yes'." By that definition, any robot who did not save everyone all the time would break the First Law. So you're saying that every robot broke the First Law because the girl died? That is ridiculous. A single robot can only do what a single robot can do. Just because you tell it to lift a cargo ship by itself, and it cannot, does not mean that it breaks the Second Law, as long as the robot tries to lift it. – Flater Jul 12 '17 at 12:29
    As explained in the film, either would have died in the time it took for the robot to rescue the other; therefore, the robot determined which had the better chance of successful rescue and determined that WS character was the "winner". It could not save both, but could not act in a way that would allow both to (most likely) die. – Zeiss Ikon Jul 12 '17 at 12:30
    @ZeissIkon: It's interesting to see that Will Smith inherently expected the First Law to have biases as to the worth of a human life (age, in this case). The robot that saved him considered Will's life equal to that of the girl; which is what most current humans (which Will tries to be, due to him being "oldschool" in the movie's context) would argue as a more noble ideal (although we are still biased about children, we do also argue the equality of human life) – Flater Jul 12 '17 at 12:33
  • You make an excellent point Flater, and I would agree with you if this were a physically impossible situation. A robot can only follow the laws inasmuch as it is physically capable of doing so. The laws only apply to the robot's capacity to act. Finding two humans and being faced with the choice of saving one and allowing the other to come to harm, I can accept that it would make its decision based on stats. But if it received a direct order from a human to save the other human, then obeying that human would not violate the 1st law. – Magikarp Master Jul 12 '17 at 12:36
    @MagikarpMaster: If a robot that is busy with a task fails to save a human who is about to be harmed, the fact that the robot is not technically "inactive" is irrelevant. If the robot was cooking for an old lady and ignored the drowning child, I completely agree that you can consider that willful inaction. However, you need to acknowledge that the robot was already saving a human. This means that both actions are equivalent, and one does not supercede the other. The robot assessed survival chances because it needed to choose between two equally viable actions. – Flater Jul 12 '17 at 12:37
    @MagikarpMaster: But if it received a direct order from a human to save the other human, then obeying that human would not violate the 1st law. Actually, it would. The robot listening to that command would condemn Will to die, which the robot cannot allow due to its programming. Listening to a command is second to saving a life. Therefore, the command to save the girl (by merit of it being a command) would be discarded, as a higher priority task is in progress. – Flater Jul 12 '17 at 12:39
    @MagikarpMaster, But if it received a direct order from a human to save the other human, then obeying that human would not violate the 1st law. But it would. If both myself and a friend were drowning, but by virtue of being a world champion free-diver a robot knew I could hold out much longer, yet still obeyed my friend's instruction to save me first even knowing that my friend would drown if that were the case, the robot would be violating the First Law. – Darren Jul 12 '17 at 12:40
    @Flater, we both posted the same point at the same time to the extent I thought for a moment I'd accidentally posted twice :) – Darren Jul 12 '17 at 12:41
    I like this answer, because it shows a "layman" interpretation, and at the end of the day, the book and movie were not written with an intended audience of computer programmers. – Gnemlock Jul 12 '17 at 12:49
    This answer would be improved by noting that in Asimov’s work, this dilemma would have destroyed the robot. Choosing between one human life and another is a frequent source of robot destruction in the books. It would not have been able to even think hypothetically about the relative merits of saving Will vs. saving the girl. This scene is a massive break between the way the robots in the movie functioned and the way the robots in the book functioned. – KRyan Jul 12 '17 at 13:35
    @Flater: well, if we assume every robot of being guilty of not saving the girl, we could also conclude that every robot should always be busy trying to save all humans of the world from danger, perhaps forming some kind of peace corps to end all wars and such alike…oh wait, that’s the actual plot of the movie… – Holger Jul 12 '17 at 13:35
    @KRyan Not necessarily; the more advanced robots in "I, Robot" and in the later robot books were much more sophisticated about handling conflicts in the Three Laws. – prosfilaes Jul 12 '17 at 13:42
    @prosfilaes I know, but the robots in the movie were billed as pretty new, one of the first generations of consumer robotics. And as I recall, it wasn’t until much later that a robot could handle being personally responsible for inability to save a human life. – KRyan Jul 12 '17 at 13:44
  • @Holger: The plot was not about saving all humans on the planet. As you can see in the movie, it is quite the aggressive takeover; plenty of people get harmed. The plot more specifically revolves around Vicky, who has interpreted "preventing harm to humans" (= saving every human no matter what) as "preventing harm to humanity" (= some humans may die if it ensures a better future for humanity). It makes the difference between safeguarding every human's innate free will, and humans' innate servitude to their race (= the ant's point of view, which favors the colony over the individual). – Flater Jul 12 '17 at 14:15
  • @Flater: well, I assumed that this became clear when I wrote “forming some kind of peace corps to end all wars”, as that never actually saved all humans nor ended all wars, which is exactly what Vicky’s takeover due to its interpretation of the 1st law was all about. It surely also did not save all humans, but was required to fulfill the 1st law according to Vicky’s interpretation, having all robots under Vicky’s control constantly patrolling to “save” humans. – Holger Jul 12 '17 at 14:24
    I think this is an implementation detail. The laws were implemented in this robot in such a way that he can not order it to let him die, and it is executing the first order by measuring chances of survival between the two people. Could it have been implemented in a different way? Sure, it could have been implemented to save children first. But it wasn't. In the same vein, the way Vicky was implemented, she saw that by the first law, "no inaction," she had to act in order to prevent the humans from hurting themselves. – jfa Jul 12 '17 at 19:55
  • @KRyan - the movie just compresses the timescale a bit. It also invokes the "zeroth law", which wasn't really considered until many centuries later than the stories in the collection (at least according to this timeline of Robot stories). – Jules Jul 12 '17 at 20:56
  • Your emphasis on inaction makes me wonder whether the robot just leaving isn't also an action that effectively means they don't violate #1... – Zommuter Jul 13 '17 at 14:32
  • @Zommuter The point is not that it was just doing any old action, but that it was already taking an action compelled by the first law. "[You must not] through inaction allow a human to come to harm" means any action you could take that would prevent harm, you must take. Not taking that action is "allowing a human to come to harm through inaction", regardless of whether you do anything else with the time instead or simply stand idle. – Ben Jul 14 '17 at 03:51
  • @Jules Yes, invokes the "zeroth law" but in a very perverted and wrong way. If Giskard were to see what they done with his law... poor Giskard! – frikinside Jul 14 '17 at 06:47
  • Guys, please stop discussing this... Laws can't program robots; it is humans who implement the laws in robots. So discussing here how the laws are to be interpreted is pointless, since the only important question would be "how did the implementers interpret the law" (as @JFA already pointed out), and that can only be answered by canon. The discussion you are having is meaningless. – Zaibis Jul 14 '17 at 08:05
  • I think in a lot of ways it highlights the inhumanity of the robots. A human could be flexible, like the detective's pal. However, it shows how a seemingly perfect and well-meaning absolute can have widespread repercussions and is in fact a bad assumption. Asimov captured the issue with engineering assumptions quite succinctly in this film. – jfa Jul 14 '17 at 15:29

The film appears to operate on anachronistic Asimov mechanics

What we would have here is likely a first-law vs first-law conflict. Since the robot can not save both humans, one would have to die.

I, Robot era:

There is definitely precedent for an I, Robot era robot knowingly allowing humans to come to harm, in the short story "Little Lost Robot", but this was under the circumstance that the human would come to harm regardless of the robot's action, so the robots deem that it is not through their inaction that the humans come to harm.

However, I would suspect that instead, an Asimov robot would interpret the situation in the film as a first-law vs first-law conflict, since either human could be saved depending on the robot's decision. In other words, the robot could have saved the child, but didn't, which would be a first law violation. Looking at both victims this same way, the robot would then find this to be a first-law vs first-law conflict.

The short story "Liar" explores what happens when a robot is faced with a first-law vs first-law scenario:

Through a fault in manufacturing, a robot, RB-34 (also known as Herbie), is created that possesses telepathic abilities. While the roboticists at U.S. Robots and Mechanical Men investigate how this occurred, the robot tells them what other people are thinking. But the First Law still applies to this robot, and so it deliberately lies when necessary to avoid hurting their feelings and to make people happy, especially in terms of romance.

However, by lying, it is hurting them anyway. When it is confronted with this fact by Susan Calvin (to whom it falsely claimed her coworker was infatuated with her - a particularly painful lie), the robot experiences an insoluble logical conflict and becomes catatonic.

In short, an I, Robot era robot in Asimov's writing would not have been able to continue functioning after this scenario and would have to be discarded completely. It's likely that it would not even be able to function after being initially faced with the scenario, thereby destroying itself before being able to rescue either human.

The second law is irrelevant, because first-law vs first-law results in an unsurvivable deadlock. The first law is the "trump card", so to speak: it is not given a finite priority, lest the second or third law compete with it, as we see in "Runaround":

In 2015, Powell, Donovan and Robot SPD-13 (also known as "Speedy") are sent to Mercury to restart operations at a mining station which was abandoned ten years before.

They discover that the photo-cell banks that provide life support to the base are short on selenium and will soon fail. The nearest selenium pool is seventeen miles away, and since Speedy can withstand Mercury’s high temperatures, Donovan sends him to get it. Powell and Donovan become worried when they realize that Speedy has not returned after five hours. They use a more primitive robot to find Speedy and try to analyze what happened to it.

When they eventually find Speedy, they discover he is running in a huge circle around a selenium pool. Further, they notice that "Speedy’s gait [includes] a peculiar rolling stagger, a noticeable side-to-side lurch". When Speedy is asked to return with the selenium, he begins talking oddly ("Hot dog, let’s play games. You catch me and I catch you; no love can cut our knife in two" and quoting Gilbert and Sullivan). Speedy continues to show symptoms that, if he were human, would be interpreted as drunkenness.

Powell eventually realizes that the selenium source contains unforeseen danger to the robot. Under normal circumstances, Speedy would observe the Second Law ("a robot must obey orders"), but, because Speedy was so expensive to manufacture and "not a thing to be lightly destroyed", the Third Law ("a robot must protect its own existence") had been strengthened "so that his allergy to danger is unusually high". As the order to retrieve the selenium was casually worded with no particular emphasis, Speedy cannot decide whether to obey it (Second Law) or protect himself from danger (the strengthened Third Law). He then oscillates between positions: farther from the selenium, in which the order "outweighs" the need for self-preservation, and nearer the selenium, in which the compulsion of the third law is bigger and pushes him back. The conflicting Laws cause what is basically a feedback loop which confuses him to the point that he starts acting inebriated.

Attempts to order Speedy to return (Second Law) fail, as the conflicted positronic brain cannot accept new orders. Attempts to force Speedy toward the base with oxalic acid, which can destroy it (Third Law), also fail; the acid merely causes Speedy to change routes until he finds a new avoid-danger/follow-order equilibrium.

Of course, the only thing that trumps both the Second and Third Laws is the First Law of Robotics ("a robot may not...allow a human being to come to harm"). Therefore, Powell decides to risk his life by going out in the heat, hoping that the First Law will force Speedy to overcome his cognitive dissonance and save his life. The plan eventually works, and the team is able to repair the photo-cell banks.

Robot novels era:

A few thousand years after the I, Robot era, the first-law vs first-law dilemma has essentially been solved.

In The Robots of Dawn, a humaniform robot experiences a deadlock and is destroyed, and Elijah Baley is tasked with discovering why. He suggests to Dr. Fastolfe, one of the greatest roboticists of the age as well as the robot's owner and creator, that a first-law vs first-law dilemma might be responsible, citing the story of Susan Calvin and the telepathic robot. However, Dr. Fastolfe explains that this is essentially impossible in the modern age, because even first law invocations are given a priority and equal priorities are selected between randomly; he himself is probably the only person alive who could orchestrate such a deadlock, and it would have to be on a good day.

We see direct instances of robots handling priority in first law conflicts throughout the novels, such as in The Naked Sun, when another humaniform robot forces Baley to sit so that it can close the top of a transporter to protect him from his agoraphobia.

The disadvantage is that it is possible, though it requires extreme circumstances, for multiple second- or third-law appeals to outweigh an appeal to the first law. We again see this in The Robots of Dawn, when Baley notices that a group of robots is willing to overlook his injuries after he insists they are not severe and casually instructs them to go about their business. He knows that this command alone cannot outweigh the appeal to the first law, so he reasons that the robots have also been given very strict prior instructions. The two commands, combined with his own downplaying of the severity of his situation, raise the priority of the second law above that of the first.

The robot in question in the film is said to have decided that one human had a greater chance of survival than the other, and used that information to determine which human to save. This would not be a factor in the I, Robot era, but is a fact of basic robotics in the robot novels era. However, it would seem Spooner's command to save the girl instead is not of sufficient priority to outweigh the difference in priorities between his own first law appeal and the child's.
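The robot-novels-era mechanic can be sketched in code. To be clear, this is a toy model: Asimov never gives numeric weights, so the constants below are invented, and the 45%/11% survival figures come from the film, not the novels. The only mechanic it illustrates is the one Fastolfe describes: every appeal carries a priority, and exactly equal priorities are selected between at random.

```python
import random

# Hypothetical weights; Asimov never quantifies the Laws.
FIRST_LAW_WEIGHT = 100.0
SECOND_LAW_WEIGHT = 10.0

def resolve(appeals, rng=random):
    """Return the action with the highest total priority; break exact
    ties randomly, as robot-novels-era brains are said to do."""
    top = max(priority for _, priority in appeals)
    tied = [action for action, priority in appeals if priority == top]
    return rng.choice(tied)

appeals = [
    # First Law appeals scaled by each victim's survival chance;
    # Spooner's order adds only a Second Law weight to the girl's side.
    ("save Spooner", FIRST_LAW_WEIGHT * 0.45),
    ("save the girl", FIRST_LAW_WEIGHT * 0.11 + SECOND_LAW_WEIGHT),
]
print(resolve(appeals))  # "save Spooner": the order can't close the gap
```

Under these assumed weights the order shifts the girl's side from 11 to 21, still well short of Spooner's 45, which matches the outcome on screen; a much stronger standing instruction (as in Baley's case) could tip the balance the other way.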

Devsman
  • Good answer, thank you for bringing the Robot novels here. – Gallifreyan Jul 12 '17 at 19:18
    It is worth remembering that the robot in Little Lost Robot had a modified, weakened version of the first law: he could not kill any human but could let a human die through its inaction. – SJuan76 Jul 12 '17 at 21:10
  • I agree. It was first law vs first law. IMO, Spooner's command to save the girl was treated as an attempt of suicide by the robot (which it was, albeit a heroic one) and you cannot order a robot to allow you to commit suicide. It conflicts with the first law. – jo1storm Jul 13 '17 at 07:23
  • I wish Asimov was less mechanical about these set of rules that generate these plots over and over again. In a real world it would become evident that these simple rules alone create adverse situations such as these and need to be amended. So such additions would naturally arise: If Rule 1 applies to multiple people in the same situation, parents have the right to sacrifice themselves by explicitly asking the robot. But a dead simple Children have priority rule would be helpful too. It's a simple rule, formulated in plain machine terms, yet would make the robots behave so much warmer. – stevie Jul 13 '17 at 13:52
  • @SJuan76 But this wasn't the only robot to behave this way. The other 63 all behaved in exactly the same manner. – Devsman Jul 13 '17 at 15:26
    @stevie that was possible. In The Robots of Dawn it is stated that Bailey's companions (Daneel and Giskard) have been ordered to give more value to Bailey's life than anyone else's; they would not kill anyone only because were to order them to do so but in a situation of "Bailey's vs someone else's life" they were expected to always decide in favor of Bailey, and also to act faster. – SJuan76 Jul 13 '17 at 16:12
  • Only 1 robot was allowed to let die a human through inaction; which was troublesome . The other 63 were convinced by that robot that they should not act in a situation where the robot would be destroyed yet the human would die anyway, by the reasoning that if the robot was destroyed then not only this human would die but some other could die in the future due to the absence of the robot. – SJuan76 Jul 13 '17 at 16:16
  • In fact the "lost" robot was found because he tried to imitate what he assumed would be the behavior of the normal robots, by going to defend the human when the scenario did not involve the destruction of the robot. – SJuan76 Jul 13 '17 at 16:25
  • "He knows this is not a sufficient command on its own, but reasons that the robots have been given very strict instructions that his own second-law appeal, combined with his vocal downplay of the priority of his first law situation, allowed to take priority." I can't understand this sentence. I think it's missing a word or two. – Wildcard Jul 14 '17 at 21:54
  • Actually, the question is about the film. – Stijn de Witt Jul 14 '17 at 22:24
  • @Wildcard I made an edit. I hope it's clearer now. – Devsman Jul 15 '17 at 17:22
  • Definitely pre-Zero Law Robot, dealing with a First Law conflict, and taking the action of less resistance by saving the human with most probabilites of success. Whatever the human says is irrelevant in the face of the First Law imperative. My assumption has always been poor Robot "died" off-screen after telling Spooner why he let the girld die, as the only acceptable solution was to save them both. – Seretba Apr 25 '19 at 09:13

I am going to look at this question from a logical real-world point of view.

The robot does not break the second law; technically, though, it does break the first. That said, the written rules would only be a condensed explanation of far more intricate logic and computer code.

To quote Isaac Asimov's laws of robotics, emphasis mine:

Rule 1:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Rule 2:
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Rule 3:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
(Rule 3 is irrelevant here, but provided for completeness' sake.)

In the given situation, the robot had to act, in order to save either Will or the child. The robot, being capable of calculating odds to a near-perfect prediction rate, is able to establish that it can not act in enough time to save both Will and the child; and the odds say that Will has a better chance to survive. In such a case, it is only logical to choose Will. This is a key plot point; robots run off pure logic - a fact that makes them differ greatly from human beings.

When the robot fails to also save the child, it is allowing harm through inaction. However, again, the robot knows that it is impossible to save both Will and the child. It picks the best option, in order to best adhere to its rules. This is where a greater understanding of computers, and of the rules themselves, comes into play.

What would actually happen, considering the explicit rule

The rules are not an absolute fact. They are not there to say "robots will never harm a human, and will always save the day when present". We know this by how the movie plays out. The rules are simply the rules used to govern the actions of the robots, in-verse. As a programmer, this is blatantly obvious to me, but I am confident it is not so obvious to others who are not familiar with how strictly literal any computer system is.

The point is, the rule does not state anything about it "not counting" because the robot is "already saving someone". As such, only considering this explicit rule (as any computer or robot would interpret, at least), there is no allowance for a situation where the robot can only save one of two people in danger. In actual computer science, only considering the explicit rule, such an event would likely cause an infinite loop. The robot would stop where it was, and continue to process the catch-22 forever; or at least, until its logic kicked it out of the thought process. At this point, the robot would dump its memory of the current event, and move on. In theory.
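The deadlock this describes can be made concrete. This is entirely hypothetical: the literal-rule check, the deliberation counter, and the "dump memory" escape are invented stand-ins for the behaviour speculated about here, not anything specified in the film or the books.

```python
def literal_first_law_satisfiable(victims, rescues_possible):
    """The bare rule, read literally: any victim left unsaved means the
    robot has 'through inaction allowed a human to come to harm'."""
    return rescues_possible >= len(victims)

def naive_robot(victims, rescues_possible, max_deliberations=1000):
    """A robot that refuses to act until it finds a plan satisfying the
    literal rule. With two victims and time for only one rescue, no such
    plan exists, so it spins until a watchdog ejects it from the loop --
    the 'dump its memory and move on' escape described above."""
    deliberations = 0
    while not literal_first_law_satisfiable(victims, rescues_possible):
        deliberations += 1
        if deliberations >= max_deliberations:
            return "deadlock: dump the event and move on"
    return "act: a plan satisfies the rule"

print(naive_robot(["Spooner", "the girl"], rescues_possible=1))
# deadlock: dump the event and move on
```

With only one victim the loop never runs and the robot acts immediately; the pathology appears exactly when the explicit rule is unsatisfiable.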

What would probably happen, in-verse

In-verse, the rules are a lot more complicated, at least internally to the robot. There would likely be a whole lot of special cases when processing the core rules, to determine how to act in such situations. As a result, the robot is still able to act, and takes the most logical outcome. It only saves Will, but it does save someone.

It is far more understandable that the rules would be simplified to three generic common-case statements; it would be far less believable that people would be so easily trusting of robots if the rule read "A robot may not injure a human or, through inaction, allow a human being to come to harm; unless in doing so, there is a greater chance of preventing another human from coming to harm". There are just way too many ways to interpret this.


So as far as the explicit rules go, the robot does not break the second rule; disobeying Will's order does not violate it, because the First Law exception applies: through disobeying Will, it saves Will. However, it does break the rule of "through inaction, allow a human being to come to harm".

In regards to how the robot would actually process these rules, it would not be breaking the rules at all. There would be a far more complex series of "if..." and "else..." logic, where the robot's logic would allow it to go against these base rules in situations where logic dictates that, no matter what option it takes, a human would still come to harm.
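That if/else structure can be sketched as a single special case bolted onto the base rule. Again, a hypothetical illustration only: the fallback-to-triage branch is this answer's speculation about the robot's internals, not canon.

```python
def decide(victims, rescues_possible):
    """Base rule plus one special case. If everyone can be saved, the
    First Law is satisfied directly; otherwise, rather than freezing,
    an else-branch falls back to minimising harm by saving the victims
    with the best survival odds."""
    if rescues_possible >= len(victims):
        return [v["name"] for v in victims]  # save everyone
    # Harm is unavoidable: triage by survival chance instead of halting.
    ranked = sorted(victims, key=lambda v: v["chance"], reverse=True)
    return [v["name"] for v in ranked[:rescues_possible]]

victims = [{"name": "Spooner", "chance": 0.45},
           {"name": "the girl", "chance": 0.11}]
print(decide(victims, rescues_possible=1))  # ['Spooner']
```

The base rule stays intact for the common case; only when it cannot be satisfied does the special case take over, which is exactly the "go against the base rules when no option avoids harm" behaviour described above.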

This is further established, towards the end of the movie;

The robots are able to effectively establish martial law, and in doing so they harm several humans; they have developed enough to establish that by harming a few humans, and effectively imprisoning the rest of the population, they prevent the far greater harm that comes from all of the various activities we like to get up to that both risk, and in some cases take, our lives.

DavidW
Gnemlock
  • Excellent answer Gnemlock. If I may pick your brain on this issue, as I feel it will help me understand this concept further, would your interpretation of the 3 laws allow a robot to kill a suicide bomber in a crowded street to prevent harm on a massive scale? – Magikarp Master Jul 12 '17 at 12:54
  • @MagikarpMaster, I would say yes, but there could be great debate about it. It would come down to when the robot could determine the bomber was actually going to act and couldn't otherwise be prevented. It would also depend on the robot's development, in terms of the development the robots of I, Robot undergo between how they are at the start of the movie and how they are at the end. One might also speculate that the "chance of detonation" required for such an action would increase on par with estimated impact (i.e. it would have to be more sure if it would only hit half the people). – Gnemlock Jul 12 '17 at 12:57
  • The spoiler comment at the end seems to reflect the Zeroth Law, which the robots developed over time. Not sure if the movie-makers were explicitly using this or if it was meant more as flexibility in response. – Wayne Jul 12 '17 at 13:01
  • I'm not sure that a robot as described by Asimov (with positronic brains and so on) would get stuck in a logic loop as you describe. A dumb computer might, but robots have a certain level of both self and situational awareness and could argue themselves around the points you've made, as happens in the movie where the robot does act in what seems to be the best way possible. I'm pretty sure this must be a situation explored by Asimov in one of the Robots books, although I can't think of a specific example right now. – Darren Jul 12 '17 at 13:05
  • Is calculating the odds speculation? Speculation implies guesswork and working with imperfect information, whereas calculating the odds is more finite. In other words, speculation is subjective, whereas calculating the odds is objective. – Magikarp Master Jul 12 '17 at 13:12
  • 2
    @MagikarpMaster: Unless the robot knew both Will and the girl's biological makeup; it must have been working off of statistics for average human's survival chances (even if age data exists, that's still not personal data about Will and the girl). You can argue that statistics are a good indication, but there's no firm evidence that proves both Will and the girl were statistically "normal" in this regard (e.g. maybe Will has very bad lungs for an adult his age?). When you consider the level of detail that a computer puts into its evaluations, using statistics is the same as speculating. – Flater Jul 12 '17 at 13:17
  • "That said, the rules would only be a condensed explanation of far more intricate logic and computer code." The rules are deliberately broad and simple, so, no, I don't think so. – Lightness Races in Orbit Jul 12 '17 at 17:36
  • 1
    @Wayne The spoilered bit is a reference to the zeroth law as understood by the movie. It is, unfortunately, a terrible interpretation of the zeroth law. However, going by movie logic, the zeroth law doesn't apply to the scene in question as it happened years before. – Draco18s no longer trusts SE Jul 12 '17 at 20:16
  • @LightnessRacesinOrbit, I assure you with utmost certainty, the three rules are far too little to go on to provide the explicit logic required for AI, unless you assume there's also a whole lot of space magic. – Gnemlock Jul 12 '17 at 22:37
  • @Gnemlock: Code to implement the rules would obviously consist of more bits of information but, again, the semantics are deliberately very simple. That's kind of the point. – Lightness Races in Orbit Jul 12 '17 at 23:53
  • Since conversation between Gnemlock and @Flater has already been moved to this chatroom, I've deleted most of their comments here, just to tidy up a bit. – Rand al'Thor Jul 13 '17 at 07:03
  • 2
    If the laws were applied to plans of actions, there would be no violation. The robot would make plans for saving both, it just wouldn't get around to saving the girl. The plan for saving the girl would have a lower priority, as it would have a lower chance of success. – jmoreno Jul 13 '17 at 09:29
  • @Flater Actually, because a computer puts so much detail into evaluations their decisions are much better than speculation. A human can only take a few things into account, a computer can take a thousand more into account. Compared to a computer, human evaluation might be the same as speculation! – Jacques Jul 13 '17 at 12:31
  • @JaccoAmersfoort: The results of a calculation can only be as accurate as the input of the calculation. No matter how complex the calculation is, if even one factor is an estimate (even a really good one), that means that the result is also an estimate. You are right that robots will consider more variables in a single thought, but that doesn't say anything about the accuracy of those variables. – Flater Jul 13 '17 at 12:56
  • @Flater But speculation is based only on estimates, while statistics are not. And the more variables are known (instead of estimated), the more accurate the resulting statistic will be. More known variables = fewer unknown variables, regardless of how many variables there are. Say there are 1000 variables involved in a calculation, would you rather know 5 for sure or 50? – Jacques Jul 13 '17 at 13:20
  • @JaccoAmersfoort: Statistics are based on accurate data, not estimates, I agree. But applying statistics to a singular case is a matter of estimation (note: the result itself is accurate, but the applicability of the results to a singular case is not). E.g. if I look up the average wage for someone in your job, living in your country, then that is only an estimate of what you personally earn. Although the statistical average is indicative of your personal wage, it is not an accurate representation of your exact wage. – Flater Jul 13 '17 at 13:23
  • @Flater But that estimate will be better than when you estimate it without knowing my job, age and country and isn't that what we're arguing? That calculated statistics are better than speculation? In this metaphor the only better way of learning my wage is me telling you the actual wage, but that could not happen in the car scene in the movie. So the estimation is the only thing available to Sonny to make his decision. – Jacques Jul 13 '17 at 13:27
  • @JaccoAmersfoort "Say there are 1000 variables involved in a calculation, would you rather know 5 for sure or 50?" That's not my point. My point is that if you can only guarantee that 999 out of 1000 variables are accurate, then you cannot be sure that the outcome is accurate. That one variable can make a huge swing in the result. Look up any formula, calculate an exact number (from random input), and then start playing with a single variable's value. The outcome will change based on your changes (and if it doesn't, the variable should not be part of the formula to begin with) – Flater Jul 13 '17 at 13:28
9

So, as a black-and-white rule, the first law doesn't really work without some finagling, because of triage (as mentioned by other answers).

My interpretation of how the movie implements the laws is that for the context of "saving lives", the robots do an EV (Expected Value) calculation; that is, for each choice they calculate the probability of success and multiply that by the number of lives they save.

In the Will Smith vs. Child case, saving Will Smith might be a 75% chance of success while the Child is only a 50% chance of success, meaning the EV of saving Will Smith is 0.75 lives, and the child's EV is 0.5. Wanting to maximise the lives saved (as defined by our probability-based first law), the robot will always choose Will, regardless of any directives given. By obeying Will's orders, the robot would be "killing" 0.25 humans.

This can be extended to probabilities applied to multiple humans in danger (eg. saving 5 humans with 20% chance is better than saving 1 human with 90% chance), and itself might lead to some interesting conclusions, but I think it's a reasonable explanation for the events of the movie.
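The EV rule described above can be sketched in a few lines (a hypothetical illustration using the answer's own odds; the film itself quotes 45% for Spooner and 11% for the girl):

```python
# Expected value of an action = probability of success × lives saved.
def expected_value(p_success, lives=1):
    return p_success * lives

# Will Smith vs. the child, using the illustrative 75% / 50% figures:
river = {"save Will": expected_value(0.75),
         "save the child": expected_value(0.50)}
print(max(river, key=river.get))  # → save Will

# The multi-human extension: 5 humans at 20% (EV 1.0)
# beats 1 human at 90% (EV 0.9).
crowd = {"save the five": expected_value(0.20, lives=5),
         "save the one": expected_value(0.90)}
print(max(crowd, key=crowd.get))  # → save the five
```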

monoRed
  • 482
  • 4
  • 7
  • An interesting explanation. +1 – Darren Jul 12 '17 at 13:34
  • 3
    The percentages are actually stated in the movie: Spooner had a 45% chance of survival, Sarah (the girl) had an 11% chance. – F1Krazy Jul 12 '17 at 13:37
  • 2
    It also seems that the calculations are rather local - we're not exactly seeing robots spontaneously running off to Africa to save starving kids (far more "lives saved per effort") until VIKI comes around. In fact, VIKI isn't really interpreting some new zeroth law, she just takes the first to its logical conclusion given the information she has available (unlike in the actual Asimov novels which are a lot more nuanced). Also note that Asimovian robots can often break the laws, they just don't survive it intact - so it's possible the robot saved Will and then utterly broke down. – Luaan Jul 12 '17 at 13:42
  • 1
    @Luaan: I don't think VIKI took the First Law to its conclusion, but rather swapped the original "no harm comes to humans" (as it was intended) with "no harm comes to humanity" (which considers humans as the ants that sacrifice themselves for the good of the colony). To VIKI's mind, humanity and humans are interchangable (hence why she did what she did), and she does not understand the sanctity of individual human life. By redefining both "human" and "harm" (she took it more figurative, while the laws focused on physical harm), she redefined the laws without technically breaking them. – Flater Jul 12 '17 at 14:49
  • 2
    @Flater Well, not if you take it consistently with the original robot incident - the robot clearly has shown to choose preferentially the one who has the highest chance of survival. VIKI did basically the same thing, only instead of considering "two cars sinking in the river right now", she considered "all humans in all of the US". If 100 people die with 10% probability, it's preferable to a different scenario where 10000 people die with 90% probability. I'm not saying this is compatible with Asimov's robots, of course - there the derivation of the zeroth law is truly original. – Luaan Jul 12 '17 at 15:03
  • 3
    @Luaan: But VIKI's methods show that she stopped caring about harming individuals (notice that the laws speak of harm, not death) if they do not meaningfully contribute to humanity. If she was still applying the same law, she would still have been an "overbearing mother" AI (e.g. no salt, no dangerous sports, ... nothing that could harm you slightly), but she would not have tried to kill or harm Will Smith, nor any of the other humans that stood up against the robots. And she did. Which proves that she stopped applying the original First Law, and supplanted it with her own version. – Flater Jul 12 '17 at 15:09
4

The core of almost every one of Asimov's robot stories is about the interaction of the laws of robotics with each other and through the stories you can glean a lot of how Asimov considered his robots to work.

In the stories the laws of robotics are not simple hardwired things. There isn't a simple "if then else" statement going on. In the robots' brains there is a weighting that is applied to every event. An example is that a robot will consider its owner's orders to be more important than anybody else's. So if I send my robot to the shops to buy some things and somebody orders it to run their errands while it is out, the robot is able to consider my order more important than theirs.

Similarly we see the robot choosing from two possible first law violations. Who does it save? It does a calculation and decides that Will Smith is the better one to save.

Once we think of it in terms of these weightings we can then factor in how giving the robot an order might change things.

If the robot's assessment was very close (e.g. so close that it came down to choosing the nearest person rather than weighing survival chances), then possibly the added weight of the order could change which course of action carries the most weight. However, the first law is the most important, so the weight of an order will usually be insignificant compared to the factors the robot weighed when assessing the situation before the order.

So in essence what is happening is that the robot is finding the best course of action to meet its goals. It will try to save both of them. If it can't it will just do the best it can and this is what we see. The fact that Will Smith told it to do something different had no effect because the first law still compelled it to do what it considered to be best.
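A toy numerical sketch of this weighting idea (the weights are invented for illustration, not from Asimov's text): each candidate action accumulates weight from each law, and an order contributes far less weight than a First Law consideration.

```python
# Assumed weights: the First Law dwarfs everything else.
FIRST_LAW_WEIGHT = 1000
SECOND_LAW_WEIGHT = 10

def action_weight(survival_chance, ordered=False):
    """Weight of a rescue action; an order (Second Law) adds a small bonus."""
    w = FIRST_LAW_WEIGHT * survival_chance
    if ordered:
        w += SECOND_LAW_WEIGHT
    return w

# Will's order adds weight to "save the girl", but nowhere near enough
# to overcome the gap in survival odds (45% vs 11% in the film):
print(action_weight(0.45) > action_weight(0.11, ordered=True))  # → True

# Only when the odds are nearly equal can the order tip the choice:
print(action_weight(0.449, ordered=True) > action_weight(0.45))  # → True
```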

So, having said all that, the actual question: "Did this robot break the 2nd law of robotics, or is there some nuanced interpretation of the laws that could explain its behaviour?"

The laws are nuanced. A robot's life is entirely viewed through the lens of the three laws; every single thing it does is weighed against them. As an example, consider that in a crowded street there is always a chance of a person walking into a robot and injuring themselves. For the most part this is likely to result in nothing close to an injury, but it might hurt a little and the person might be annoyed, so it is a potential violation of the first law. The robot could best avoid this by staying out of the crowded street, but I've ordered it to go and do the shopping. Its course of action is likely to be to do the shopping as ordered, and thus be in the busy street, while staying as aware as possible of everybody around it to make sure it doesn't inadvertently cause somebody harm. That is, it must take positive action to avoid any bumps, or it would fall foul of "through inaction...".

So yeah, it's all really complicated, and this is the beauty of the film and all of Asimov's stories. The film centres on a robot (VIKI) and its interpretation of the three laws. It does what some would consider harm because it considers that to be the lesser harm.

Chris
  • 614
  • 1
  • 5
  • 9
  • Sonny was specifically programmed by Lanning to be able to break the Three Laws, so that example doesn't really count. – F1Krazy Jul 12 '17 at 16:38
  • @F1Krazy: Ah, ok. I'll remove that but then. As I say its a while since I've seen the film. ;-) – Chris Jul 12 '17 at 16:40
  • 1
    Fair enough. I should probably watch it again sometime, come to think of it. – F1Krazy Jul 12 '17 at 16:44
  • I had just thought the same thing... Time to see if its on Netflix or similar... :) – Chris Jul 12 '17 at 16:45
2

I believe I have read all of Asimov's robot stories and novels, and my perception was that the Three Laws are just verbal summaries (like the three laws of thermodynamics) which generalise a large amount of observed behaviour. In that sense, the actual behaviour of the robots is determined by incredibly intricate and complicated code, and also makes use of more advanced sub-atomic and solid-state physics which we do not currently understand. The Three Laws are just very obvious ways of summarising how the robots appear to behave in very simple situations, in the same way that analysing the overall behaviour of the Sun and the Earth is fairly simple using Newton's law, but analysing the gravitational perturbations that Jupiter exerts on the asteroids of the asteroid belt is much more difficult or impossible.

There are situations where the laws appear to be broken, but this is just the result of the code driving the robot to analyse an extremely complicated situation quickly and arrive at a decision as to what it should do; the Three Laws are only considered unbreakable essentially as a dramatic or literary device.

Tom
  • 229
  • 1
  • 4
1

I think the robot didn't break the Second Law. Here's how I imagine the robot working: it continuously checks the three laws.

Law 1: The robot has to save either Will Smith or the child. Since the child has a lower chance of surviving, it chooses Will.

Law 2: The robot has to obey humans. Will tells it to save the girl. The order is ignored because Law 1 has higher priority.

Law 3: It doesn't harm itself, so who cares.

It seems like the First Law lets the robot ignore Laws 2 and 3, and the Second Law lets it ignore Law 3. Ignoring is not the same as breaking the rule in this case, because Law 2 specifically states that it yields to Law 1. Thus it's not broken.
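The priority cascade above can be sketched as a short-circuit check (hypothetical names, purely illustrative): when Law 1 fires, Laws 2 and 3 are never even evaluated that cycle, so the order is ignored rather than weighed and broken.

```python
def act(humans_in_danger, orders):
    """humans_in_danger: dict of name -> survival chance; orders: list of strings."""
    if humans_in_danger:
        # Law 1 fires and the function returns; Law 2 is never reached,
        # so the order is ignored, not broken.
        return "rescue " + max(humans_in_danger, key=humans_in_danger.get)
    if orders:
        return "obey " + orders[0]   # Law 2
    return "self-preserve"           # Law 3

print(act({"Will": 0.45, "girl": 0.11}, ["save the girl"]))  # → rescue Will
```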

Voronwé
  • 26,367
  • 9
  • 122
  • 180
Deez Nuts
  • 11
  • 1
-1

Since the robot cannot save both the girl and Spooner, the 'triage' interpretation of the First Law - 'minimize net harm' - kicks in. If the robot had obeyed Spooner's 'Save the girl!', it would no longer be minimizing net harm - and THAT is a violation of the First Law. So the First Law overrides the Second Law here.

[side note: we don't see much of this event, but I'd bet the robot would have been severely damaged by having to choose, though it wouldn't show until after it saved Spooner (otherwise THAT would violate First Law)]

amflare
  • 32,520
  • 17
  • 117
  • 162
PMar
  • 1
  • 4
    The crux of this question is that the first law says nothing about triage or minimizing net harm. There may be an argument for this being the best interpretation, but you haven't made it. – Lightness Races in Orbit Jul 12 '17 at 17:37
-1

No matter how the probability of survival of both was calculated...

...the robot would not stop to consider the Second Law until the block of actions relating to the First Law had concluded, which was to save lives in order of survival odds.

It would be a more interesting twist if Will had yelled: "I have terminal cancer!"

That would have altered the information, and the odds, for both of them.

Mithical
  • 38,898
  • 17
  • 178
  • 229
reto0110
  • 41
  • 5
-3

Is the second law 'broken'?

Maybe that's just where computational logic and the English language don't mix?

The below logic may be a bit buggy and/or inefficient but is an interpretation of how the first 'law' could work while explaining the behaviour of the robot in question.

When the Law1 function is called, the robot assesses the condition of every human in some list (perhaps every human it is aware of). It rates the severity of the danger each is in and compares those ratings. If multiple humans are in similarly severe (highest-found) danger, it compares those humans and determines which is most likely to be successfully helped. And as long as at least one human needs to be protected from harm, Law2 is never executed.

Private Sub UpdateBehavior (ByVal humans As List(Of Human), _
            ByVal humanOrders As List(Of Order), ByVal pDangers As List(Of pDanger))
  'The First Law pre-empts the others: only consider orders (Law2)
  'or self-preservation (Law3) if no human currently needs rescuing
  Dim bBusy As Boolean
  bBusy = Law1(humans)
  If Not bBusy Then bBusy = Law2(humanOrders)
  If Not bBusy Then Law3(pDangers)
End Sub

Private Function Law1 (ByVal humans As List(Of Human)) As Boolean
  Dim targetHuman As Human = Nothing
  Try
    'loop listed humans
    For Each human As Human In humans
      If human.IsInDanger() Then
        If targetHuman Is Nothing Then
          'Set the first human found to be in danger as the initial target
          targetHuman = human
        ElseIf human.DangerQuantification() > targetHuman.DangerQuantification() Then
          'Enumerate 'danger' into predetermined severities/urgencies
          '(eg. danger of being-stabbed > falling-over > going-hungry);
          'if the comparison human's amount of danger is discernibly greater,
          'make that human the new target
          targetHuman = human
        ElseIf human.DangerQuantification() = targetHuman.DangerQuantification() Then
          'Where both humans are in equal quantifiable amounts of danger,
          'target the human where the rate of successful harm prevention is higher
          'CompareValueOfHumanLife() 'Can-Of-Worms INTENTIONALLY REMOVED!
          If RescueSuccessRate(human) > RescueSuccessRate(targetHuman) Then
            targetHuman = human
          End If
        End If
      End If
    Next
    If targetHuman IsNot Nothing Then
      Rescue(targetHuman)
      Law1 = True
    Else
      Law1 = False
    End If
    AvoidHarmingHumans()
  Catch
    InitiateSelfDestruct()
  End Try
End Function

So did the robot break the second law? Some people might say "The robot acted contrary to the plain English definition of the law and therefore it was broken." while some other people might say "The laws are just functions. Law1 was executed. Law2 was not. The robot obeyed its programming and the second law simply did not apply because the first took precedence."

  • 3
    Do you want to elaborate on your code rather than just dump a piece of code down and expect a horde of SciFi and Fantasy enthusiasts to understand? – Edlothiad Jul 13 '17 at 13:04
  • @Edlothiad The code was mostly just for fun. The description of what it does is in the 4th paragraph. My main point is that the 'Laws' aren't necessarily what you'd think of as a law in most other contexts. They're more like driving factors in decision making. – Brent Hackers Jul 13 '17 at 13:05
  • 1
    My apologies, I clearly didn't skim very well. In that case can you provide any evidence for "* it compares each of those humans and determines which is most likely to be successfully helped*" Why is that the delimiter for which human gets helped? – Edlothiad Jul 13 '17 at 13:12
  • @Edlothiad The OP stated that "The reason given as to why the robot saved Will and not the girl was that, statistically speaking, Spooner's chances for survival were higher than the girl's." – Brent Hackers Jul 13 '17 at 13:17
  • 2
    There is no reason to even entertain the idea that a Positronic brain "executes code" in this manner. – Yorik Jul 14 '17 at 15:42
  • As a programmer, I see that this answer would indeed be very confusing to non-programmers. – Gnemlock Jul 18 '17 at 22:46