Since then, I have never had occasion, over a period of more than forty years during which I wrote many stories and novels dealing with robots, to be forced to modify the Three Laws. However, as time passed, and as my robots advanced in complexity and versatility, I did feel that they would have to reach for something still higher. Thus, in Robots and Empire, a novel published by Doubleday in 1985, I talked about the possibility that a sufficiently advanced robot might feel it necessary to consider the prevention of harm to humanity generally as taking precedence over the prevention of harm to an individual. This I called the "Zeroth Law of Robotics," but I'm still working on that.
My invention of the Three Laws of Robotics is probably my most important contribution to science fiction. They are widely quoted outside the field, and no history of robotics could possibly be complete without mention of the Three Laws. In 1985, John Wiley and Sons published a huge tome, Handbook of Industrial Robotics, edited by Shimon Y. Nof, and, at the editor's request, I wrote an introduction concerning the Three Laws.
Now it is understood that science fiction writers generally have created a pool of ideas that form a common stock into which all writers can dip. For that reason, I have never objected to other writers who have used robots that obey the Three Laws. I have, rather, been flattered, and, honestly, modern science fictional robots can scarcely appear without those Laws.
However, I have firmly resisted the actual quotation of the Three Laws by any other writer. Take the Laws for granted, is my attitude in this matter, but don't recite them. The concepts are everyone's but the words are mine.
The Laws of Humanics
My first three robot novels were, essentially, murder mysteries, with Elijah Baley as the detective. Of these first three, the second novel, The Naked Sun, was a locked-room mystery, in the sense that the murdered person was found with no weapon on the site and yet no weapon could have been removed either.
I managed to produce a satisfactory solution but I did not do that sort of thing again.
The fourth robot novel, Robots and Empire, was not primarily a murder mystery. Elijah Baley had died a natural death at a good, old age, and the book veered toward the Foundation universe, so that it was clear that both my notable series, the Robot series and the Foundation series, were going to be fused into a broader whole. (No, I didn't do this for some arbitrary reason. The necessities arising out of writing sequels in the 1980s to tales originally written in the 1940s and 1950s forced my hand.)
In Robots and Empire, my robot character, Giskard, of whom I was very fond, began to concern himself with "the Laws of Humanics," which, I indicated, might eventually serve as the basis for the science of psychohistory, which plays such a large role in the Foundation series.
Strictly speaking, the Laws of Humanics should be a description, in concise form, of how human beings actually behave. No such description exists, of course. Even psychologists, who study the matter scientifically (at least, I hope they do) cannot present any "laws" but can only make lengthy and diffuse descriptions of what people seem to do. And none of them are prescriptive. When a psychologist says that people respond in this way to a stimulus of that sort, he merely means that some do at some times. Others may do it at other times, or may not do it at all.
If we have to wait for actual laws prescribing human behavior in order to establish psychohistory (and surely we must) then I suppose we will have to wait a long time.
Well, then, what are we going to do about the Laws of Humanics? I suppose what we can do is to start in a very small way, and then later slowly build it up, if we can.
Thus, in Robots and Empire, it is a robot, Giskard, who raises the question of the Laws of Humanics. Being a robot, he must view everything from the standpoint of the Three Laws of Robotics; these robotic laws are truly prescriptive, since robots are forced to obey them and cannot disobey them.
The Three Laws of Robotics are:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Well, then, it seems to me that a robot could not help but think that human beings ought to behave in such a way as to make it easier for robots to obey those laws.
In fact, it seems to me that ethical human beings should be as anxious to make life easier for robots as the robots themselves would be. I took up this matter in my story "The Bicentennial Man," which was published in 1976. In it, I had a human character say in part:
"If a man has the right to give a robot any order that does not involve harm to a human being, he should have the decency never to give a robot any order that involves harm to a robot, unless human safety absolutely requires it. With great power goes great responsibility, and if the robots have Three Laws to protect men, is it too much to ask that men have a law or two to protect robots?"
For instance, the First Law is in two parts. The first part, "A robot may not injure a human being," is absolute and nothing need be done about that. The second part, "or, through inaction, allow a human being to come to harm," leaves things open a bit. A human being might be about to come to harm because of some event involving an inanimate object. A heavy weight might be likely to fall upon him, or he may slip and be about to fall into a lake, or any one of uncountable other misadventures of the sort may be involved. Here the robot simply must try to rescue the human being; pull him from under the weight, steady him on his feet, and so on. Or a human being might be threatened by some form of life other than human (a lion, for instance) and the robot must come to his defense.