Eventually, if we learn how to make a computer sufficiently complex and sufficiently large, why should it not achieve a human intelligence?
Some people are sure to be disbelieving and say, "But how can a computer possibly produce a great symphony, a great work of art, a great new scientific theory?"
The retort I am usually tempted to make to this question is, "Can you?" But, of course, even if the questioner is ordinary, there are extraordinary people who are geniuses. They attain genius, however, only because atoms and molecules within their brains are arranged in some complex order. There's nothing in their brains but atoms and molecules. If we arrange atoms and molecules in some complex order in a computer, the products of genius should be possible for it; and if the individual parts are not as tiny and delicate as those of the brain, we can compensate by making the computer larger.
Some people may say, "But computers can only do what they're programmed to do."
The answer to that is, "True. But brains can do only what they're programmed to do by their genes. Part of the brain's programming is the ability to learn, and that will be part of a complex computer's programming."
In fact, if a computer can be built to be as intelligent as a human being, why can't it be made more intelligent as well?
Why not, indeed? Maybe that's what evolution is all about. Over the space of three billion years, hit-and-miss development of atoms and molecules has finally produced, through glacially slow improvement, a species intelligent enough to take the next step in a matter of centuries, or even decades. Then things will really move.
But if computers become more intelligent than human beings, might they not replace us? Well, shouldn't they? They may be as kind as they are intelligent and just let us dwindle by attrition. They might keep some of us as pets, or on reservations.
Then too, consider what we're doing to ourselves right now, to all living things and to the very planet we live on. Maybe it is time we were replaced. Maybe the real danger is that computers won't be developed to the point of replacing us fast enough.
Think about it!
I present this view only as something to think about. I consider a quite different view in "Intelligences Together" later in this collection.
The Laws of Robotics
It isn't easy to think about computers without wondering if they will ever "take over."
Will they replace us, make us obsolete, and get rid of us the way we got rid of spears and tinderboxes?
If we imagine computerlike brains inside the metal imitations of human beings that we call robots, the fear is even more direct. Robots look so much like human beings that their very appearance may give them rebellious ideas.
This problem faced the world of science fiction in the 1920s and 1930s, and many were the cautionary tales written of robots that were built and then turned on their creators and destroyed them.
When I was a young man I grew tired of that caution, for it seemed to me that a robot was a machine and that human beings were constantly building machines. Since all machines are dangerous, one way or another, human beings built safeguards into them.
In 1939, therefore, I began to write a series of stories in which robots were presented sympathetically, as machines that were carefully designed to perform given tasks, with ample safeguards built into them to make them benign.
In a story I wrote in October 1941, I finally presented the safeguards in the specific form of "The Three Laws of Robotics." (I invented the word robotics, which had never been used before.)
Here they are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where those orders would conflict with the First Law.
3. A robot must protect its own existence except where such protection would conflict with the First or Second Law.
These laws were programmed into the computerized brain of the robot, and the numerous stories I wrote about robots took them into account. Indeed, these laws proved so popular with the readers and made so much sense that other science fiction writers began to use them (without ever quoting them directly; only I may do that), and all the old stories of robots destroying their creators died out.
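The essence of the Three Laws is a strict priority ordering: a lower law never overrides a higher one. As a purely illustrative sketch (not anything from the stories themselves), one might encode that ordering as a sequence of checks on a proposed action, where the fields describing the action's consequences are hypothetical:

```python
# A minimal sketch of the Three Laws as prioritized constraints.
# The `action` dictionary and its boolean fields are invented here
# for illustration; they describe an action's predicted consequences.

def permitted(action):
    """Return True if the action passes the Three Laws, checked in order."""
    # First Law: never injure a human, by action or by inaction.
    if action["harms_human"] or action["allows_human_harm"]:
        return False
    # Second Law: obey human orders, unless obedience violates the First Law
    # (already screened out above by checking the First Law first).
    if action["disobeys_order"]:
        return False
    # Third Law: protect own existence, unless a higher law requires the loss.
    if action["destroys_self"] and not action["required_by_higher_law"]:
        return False
    return True

# Example: an ordered, harmless errand passes all three checks.
fetch = {"harms_human": False, "allows_human_harm": False,
         "disobeys_order": False, "destroys_self": False,
         "required_by_higher_law": False}
print(permitted(fetch))  # True
```

The design point the sketch makes is that the priority lives in the order of the checks: an action that harms a human is rejected before obedience or self-preservation is ever considered.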
Ah, but that's science fiction. What about the work really being done now on computers and on artificial intelligence? When machines are built that begin to have an intelligence of their own, will something like the Three Laws of Robotics be built into them?
Of course they will, assuming the computer designers have the least bit of intelligence. What's more, the safeguards will not merely be like the Three Laws of Robotics; they will be the Three Laws of Robotics.
I did not realize, at the time I constructed those laws, that humanity has been using them since the dawn of time. Just think of them as "The Three Laws of Tools," and this is the way they would read:
1. A tool must be safe to use.
(Obviously! Knives have handles and swords have hilts. Any tool that is sure to harm the user will never be used routinely, whatever its other qualifications, provided the user is aware of the danger.)
2. A tool must perform its function, provided it does so safely.
3. A tool must remain intact during use unless its destruction is required for safety or unless its destruction is part of its function.
No one ever cites these Three Laws of Tools because they are taken for granted by everyone. Each law, were it quoted, would be sure to be greeted by a chorus of "Well, of course!"
Compare the Three Laws of Tools, then, with the Three Laws of Robotics, law by law, and you will see that they correspond exactly. And why not, since the robot or, if you will, the computer, is a human tool?