The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause harm to itself. How did Asimov come up with these?

  • "Asimov attributes the Three Laws to John W. Campbell, from a conversation that took place on 23 December 1940. Campbell claimed that Asimov had the Three Laws already in his mind and that they simply needed to be stated explicitly. Several years later Asimov's friend Randall Garrett attributed the Laws to a symbiotic partnership between the two men – a suggestion that Asimov adopted enthusiastically." - https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#History – Valorum Sep 13 '21 at 17:20
  • I seem to remember reading somewhere that Asimov wrote the three laws because he hated stories about robots turning on humans, of which there were so many and which, in his mind, gave robots a bad rap. – Nu'Daq Sep 14 '21 at 08:10
  • There are actually four laws. The zeroth law was added many years later. https://en.wikipedia.org/wiki/Three_Laws_of_Robotics – computercarguy Sep 14 '21 at 21:08
  • @Nu'Daq, that would be the "Frankenstein Complex" mentioned repeatedly in his Robotics stories. – Mark Sep 14 '21 at 22:07
  • @computercarguy there's a putative Fifth Law there too, and Asimov himself created the Zeroth Law. – Carl Witthoft Sep 15 '21 at 12:05

2 Answers

The Three Laws developed gradually. It started with “Robbie,” aka “Strange Playfellow,” Asimov's first robot story, which he wrote in 1939. Asimov thought of robots as “industrial products […] built with safety features” (The Complete Robot, 1982), and so he had a character say: “He just can’t help being faithful and loving and kind. He’s a machine—made so.” Asimov later called this “the first germ […] of the First Law of Robotics” (“Robots and Androids,” The Visual Encyclopedia of Science Fiction, 1977).

In his next robot story, “Reason” (written in 1940), a character says, “Those robots are guaranteed to be subordinate.” According to Asimov, “That was a hint of the Second Law” (“The Word I Invented,” The Magazine of F&SF, October 1980).

The third robot story, “Liar!” (written in 1941), mentions the First Law and hints (again) at the Second Law:

Lanning’s voice was hostile. “What is all this, Dr. Calvin?”
    She faced them and spoke wearily. “You know the fundamental law impressed upon the positronic brain of all robots, of course.”
    The other two nodded together. “Certainly,” said Bogert. “On no condition is a human being to be injured in any way, even when such injury is directly ordered by another human.”

As Asimov puts it, “What had been a simple ‘made so’ in ‘Strange Playfellow’ had now become a ‘fundamental law’ of robotics.” (Opus 100, 1969).

Asimov’s fourth robot story, “Robot AL-76 Goes Astray” (written in 1941), again mentions the First and the Second Law:

It also struck him that he had heard somewheres that robots were mentally incapable of harming human beings.

and

What he didn’t know was that a robot always obeyed a human order except where carrying it out involved danger to another human.

But it’s the fifth robot story, “Runaround” (written in 1941), in which the Three Laws were finally given in full (although Asimov slightly changed the wording later):

“Now, look, let’s start with the three fundamental Rules of Robotics—the three rules that are built most deeply into a robot’s positronic brain.” In the darkness, his gloved fingers ticked off each point.
    “We have: One, a robot may not injure a human being under any conditions—and, as a corollary, must not permit a human being to be injured because of inaction on his part.”
    “Right!"
    “Two,” continued Powell, “a robot must follow all orders given by qualified human beings as long as they do not conflict with Rule 1.”
    “Right!”
    “Three: a robot must protect his own existence, as long as that does not conflict with Rules 1 and 2.”

Regarding who came up with the Three Laws, Asimov or John Campbell, it seems that they credited each other:

It was he [John Campbell] who considered my third or fourth robot story, shook his head and said, “No, Isaac, you're neglecting the Three Laws of Robotics which are—” and that was the first I heard of them. […]
    He admitted he fed me ideas but he said he kept on doing so only because I brought them back changed and improved. He denied he had made up the Three Laws of Robotics and insisted he found them in my stories and merely put them into words.
“The Father of Science Fiction,” Luna Monthly #27S, August 1971

[O]n 16 December 1940, when I was preparing to write “Liar!” (which eventually appeared in the May 1941 issue of Astounding), I brought the plot to John Campbell, editor of the magazine. As I tried to explain what made my robot work the way he did, John lost patience with my attempts and said:
    “Look, Isaac, it boils down to this. There have to be the three laws of robot behavior: First …”
    And he went on to recite the three laws nearly as I finally worked them out. He always claimed thereafter that he had merely distilled the three laws out of what I was trying to tell him, but all I know is that I heard them from him first and I didn't have them clearly in my mind, in words, before then.
“Robots and Androids,” The Visual Encyclopedia of Science Fiction, 1977

Ubik
  • I read somewhere, somewhen that Asimov deliberately made the laws vague enough to allow stories where obeying the laws caused trouble in special situations. If true, it would be interesting to know what he would consider better laws - though such would probably have to be expressed in logical language rather than flawed human speech. – Klaus Æ. Mogensen Sep 14 '21 at 07:29
  • @Klaus Æ. Mogensen: Yes, Asimov addressed the ambiguities in his laws, and as a storyteller, he profited from them: “There was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the sixty-one words of the Three Laws.” (The Rest of the Robots, 1964). I'm not aware that he tried to formulate better laws that would be usable for real-world applications, and he probably wouldn't have considered himself qualified to do so. – Ubik Sep 14 '21 at 09:28
  • @KlausÆ.Mogensen: One recurring theme in I, Robot that I noticed is that quite often, the Three Laws are too general. They apply to situations where they don't necessarily make sense, or they conflict with the on-the-ground requirements of a particular situation. In the real world, you'd be more likely to see specific precautions taken for each individual industrial context where robots are deployed, rather than having a single, universal set of laws. – Kevin Sep 14 '21 at 16:56
  • @KlausÆ.Mogensen, There's also the fact that the laws, as written, are "only an approximation of the mathematics" inherent in the positronic matrix. Unfortunately, I don't recall at this time which character described them as such. It's likely to have been in "I, Robot" somewhere, but it could be anywhere robots feature. – enkorvaks Sep 14 '21 at 23:21
  • @KlausÆ.Mogensen - I think I remember that phrase from The Robots of Dawn, which I just recently reread. Doesn't mean it wasn't used before then, though. – Bobson Sep 15 '21 at 17:20
  • @Kevin Only if your robots are stupid. Once you have intelligent robots (able to generalize), of course you would give them vague laws the same way we use vague laws for humans. (Surely overly specific rules are much more likely to cause problems. If you try to, for example, train a self-driving car not to hit things by showing it specific examples of things not to hit, then you're bound to miss a few situations...oops: https://slate.com/technology/2021/08/teslas-allegedly-hitting-emergency-vehicles-why-it-could-be-happening.html ) – user3067860 Sep 15 '21 at 20:13
  • @user3067860: You should read I, Robot. Its robots are assuredly not stupid. The problem is not that the robots are unable to reason about or understand the laws. The problem is that the laws are categorical, and the robots are not allowed (or able, if you think of this as a "fundamental" part of the positronic brain) to apply "common sense" exceptions or limitations to the laws. So when a law interacts strangely with a given set of circumstances, the robots act all screwy and do weird things, and then somebody calls Susan Calvin to explain why the robots are acting up. – Kevin Sep 15 '21 at 20:47
  • @Kevin (I have read it, thanks.) But your proposal of having tons of specific little laws is even worse. Would you have, "If you're in a factory, don't drive a forklift over humans." "If you're on a golf course, don't drive a golf cart over humans." "If you are driving a car, don't drive the car over humans." ? (Well, what if the car is a very small remote controlled car and the human is holding up a track for it to drive over? On the other hand, you shouldn't drive your remote controlled car over a small infant.. Maybe don't drive vehicles over x% of someone's body weight over them...?) – user3067860 Sep 15 '21 at 21:01
  • @user3067860: The idea is that you don't try to deploy one set of laws for all circumstances in the first place. Instead, if a robot is going to drive forklifts, you then teach it all of the necessary safety rules (which probably includes, but is not limited to, "don't run people over" - what about overweight loads etc.?). You might have some general principles which resemble the laws, but they can't be categorical like the "real" industrial safeguards which the laws were originally intended to mimic (because, as demonstrated in I, Robot, such categorical laws don't work very well). – Kevin Sep 15 '21 at 21:03
  • @Kevin I don't know how you make "rules for forklifts" less categorical, though. If there's a terrorist on the factory floor threatening people, can the robot run it over then (disobeying the rule "don't run over people")? If you include a rule like "don't drive in front of other vehicles", can the robot drive in front of a runaway truck to stop it before it hits the bus full of school children? – user3067860 Sep 15 '21 at 21:06
  • Asimov's robots absolutely bend (to the breaking point) the laws in a sort of commonsense way... He describes them as comparing outcomes and choosing the one that does the best job in meeting the laws, e.g. a robot can be made to spank a child, if it is sufficiently justified as being for the child's own good. Making the laws more specific isn't going to help that at all. – user3067860 Sep 15 '21 at 21:09
  • @user3067860: The robots are also depicted suffering serious psychological drawbacks from such flexibility (whereas the spanking is merely alluded to, never actually shown). See for example "Runaround", "Escape!", "Little Lost Robot", and (arguably) "Liar!", all of which involve one or more robots malfunctioning or otherwise misbehaving as a result of conflicts between the laws and the scenarios in which they have been placed. The laws are not flexible enough to accommodate those relatively straightforward situations. – Kevin Sep 16 '21 at 05:34

In an issue of Compute! magazine, Asimov discusses the Three Laws as something that is "obvious from the start, and that everyone knows subliminally"; he credits himself not so much with inventing them as with writing them down in easily memorable fashion, and in particular he draws a direct parallel between the Laws of Robotics and the (implicit) laws governing the design of tools:

  • a tool should be safe to use
  • a tool should carry out its task accurately and efficiently, to the extent that this is possible without compromising safety
  • a tool should be durable and re-usable, unless the nature of its task, or the safety of the user, requires consuming it

Wikipedia attributes the specific formulation of the Laws to collaboration between John W. Campbell and Asimov, saying that "Asimov had the Three Laws already in his mind and that they simply needed to be stated explicitly. Several years later Asimov's friend Randall Garrett attributed the Laws to a symbiotic partnership between the two men"; the cite for this is Asimov's autobiography, In Memory Yet Green, and excerpts from it support this, although I don't have a copy of my own to check the full text.

ToxicFrog
  • Has anyone actually read the entirety of In Memory Yet Green? I remember Martin Amis talking about how he read it before interviewing Asimov and joking that he had to use a winch to get the book onto a lectern where he could read it. – Tom Sep 14 '21 at 14:50