“Meet Shakey: the First Electronic Person—The Fascinating and Fearsome Reality of a Machine With a Mind of Its Own”, 1970-11-20:
…Marvin Minsky of MIT’s Project MAC, a 42-year-old polymath who has made major contributions to Artificial Intelligence, recently told me with quiet certitude, “In 3–8 years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable.”
I had to smile at my instant credulity—the nervous sort of smile that comes when you realize you’ve been taken in by a clever piece of science fiction. When I checked Minsky’s prophecy with other people working on Artificial Intelligence, however, many of them said that Minsky’s timetable might be somewhat wishful—“give us 15 years”, was a common remark—but all agreed that there would be such a machine and that it could precipitate the third Industrial Revolution, wipe out war and poverty and roll up centuries of growth in science, education and the arts.
At the same time a number of computer scientists fear that the godsend may become a Golem. “Man’s limited mind”, says Minsky, “may not be able to control such immense mentalities.”
Intelligence in machines has developed with surprising speed. It was only 33 years ago that a mathematician named Alan Turing proved that a computer, like a brain, can process any kind of information—words as well as numbers, ideas as easily as facts; and now there is Shakey, with an inner core resembling the central nervous system of human beings. He is made up of 5 major systems of circuitry that correspond quite closely to the human faculties of sensation, reason, language, memory and ego, and these faculties cooperate harmoniously to produce something that actually does behave very much like a rudimentary person.
…Shakey can understand about 100 words of written English, translate these words into a simple verbal code and then translate the code into the mathematical formulas in which his actual thinking is done. For Shakey, as for most computer systems, natural language is still a considerable barrier. There are literally hundreds of “machine languages” and “program languages” in current use, and computers manipulate them handily, but when it comes to ordinary language they’re still in nursery school. They are not very good at translation, for instance, and no program so far created can cope with a large vocabulary, much less converse with ease on a broad range of subjects. To do this, Shakey and his kind must get better at working with symbols and ambiguities (“the dog in the window had hair but it fell out”). It would also be useful if they learned to follow spoken English and talk back, but so far the machines have a hard time telling words from noise.
Language has a lot to do with learning, and Shakey’s ability to acquire knowledge is limited by his vocabulary. He can learn a fact when he is told a fact, he can learn by solving problems, he can learn from exploration and discovery. But up to now neither Shakey nor any other computer program can browse through a book or watch a TV program and grow as he goes, as a human being does. This fall, Minsky and a colleague named Seymour Papert opened a two-year crash attack on the learning problem by trying to teach a computer to understand nursery rhymes. “It takes a page of instructions”, says Papert, “to tell the machine that when Mary had a little lamb she didn’t have it for lunch.”
…With very little change in program and equipment, Shakey now could do work in a number of limited environments: warehouses, libraries, assembly lines. To operate successfully in more loosely structured scenes, he will need far more extensive, more nearly human abilities to remember and to think. His memory, which supplies the rest of his system with a massive and continuous flow of essential information, is already large, but at the next step of progress it will probably become monstrous. Big memories are essential to complex intelligence. The largest standard computer now on the market can store about 36 million “bits” of information in a 6-foot cube, and a computer already planned will be able to store more than a trillion “bits” (one estimate of the capacity of a human brain) in the same space.
…Many computer scientists believe that people who talk about computer autonomy are indulging in a lot of cybernetic hoopla. Most of these skeptics are engineers who work mainly with technical problems in computer hardware and who are preoccupied with the mechanical operations of these machines. Other computer experts seriously doubt that the finer psychic processes of the human mind will ever be brought within the scope of circuitry, but they see autonomy as a prospect and are persuaded that the social impact will be immense.
Up to a point, says Minsky, the impact will be positive. “The machine dehumanized man, but it could rehumanize him.” By automating all routine work and even tedious low-grade thinking, computers could free billions of people to spend most of their time doing pretty much as they d—n please. But such progress could also produce quite different results. “It might happen”, says Herbert Simon, “that the Puritan work ethic would crumble to dust and masses of people would succumb to the diseases of leisure.” An even greater danger may be in man’s increasing and by now irreversible dependency upon the computer.
The electronic circuit has already replaced the dynamo at the center of technological civilization. Many US industries and businesses, the telephone and power grids, the airlines and the mail service, the systems for distributing food and, not least, the big government bureaucracies would be instantly disrupted and threatened with complete breakdown if the computers they depend on were disconnected. The disorder in Western Europe and the Soviet Union would be almost as severe. What’s more, our dependency on computers seems certain to increase at a rapid rate. Doctors are already beginning to rely on computer diagnosis and computer-administered postoperative care. Artificial Intelligence experts believe that fiscal planners in both industry and government, caught up in deepening economic complexities, will gradually delegate to computers nearly complete control of the national (and even the global) economy. In the interests of efficiency, cost-cutting and speed of reaction, the Department of Defense may well be forced more and more to surrender human direction of military policies to machines that plan strategy and tactics. In time, say the scientists, diplomats will abdicate judgment to computers that predict, say, Russian policy by analyzing their own simulations of the entire Soviet state and of the personalities—or the computers—in power there. Man, in short, is coming to depend on thinking machines to make decisions that involve his vital interests and even his survival as a species. What guarantee do we have that in making these decisions the machines will always consider our best interests? There is no guarantee unless we provide it, says Minsky, and it will not be easy to provide—after all, man has not been able to guarantee that his own decisions are made in his own best interests.
Any supercomputer could be programmed to test important decisions for their value to human beings, but such a computer, being autonomous, could also presumably write a program that countermanded these “ethical” instructions. There need be no question of computer malice here, merely a matter of computer creativity overcoming external restraints.
The men at Project MAC foresee an even more unsettling possibility. A computer that can program a computer, they reason, will be followed in fairly short order by a computer that can design and build a computer vastly more complex and intelligent than itself—and so on indefinitely. “I’m afraid the spiral could get out of control”, says Minsky. It is possible, of course, to monitor computers, to make an occasional check on what they are doing in there; but men already find it difficult to monitor the larger computers, and the computers of the future may be far too complex to keep track of.
Why not just unplug the thing if it got out of hand? “Switching off a system that defends a country or runs its entire economy”, says Minsky, “is like cutting off its food supply. Also, the Russians are only about 3 years behind us in AI work. With our system switched off, they would have us at their mercy.”
The problem of computer control will have to be solved, Minsky and Papert believe, before computers are put in charge of systems essential to society’s survival. If a computer directing the nation’s economy or its nuclear defenses ever rated its own efficiency above its ethical obligation, it could destroy man’s social order—or destroy man. “Once the computer got control”, says Minsky, “we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”
But even if no such catastrophe were to occur, say the people at Project MAC, the development of a machine more intelligent than man will surely deal a severe shock to man’s sense of his own worth. Even Shakey is disturbing, and a creature that deposed man from the pinnacle of creation might tempt us to ask ourselves: Is the human brain outmoded? Has evolution in protoplasm been replaced by evolution in circuitry?
“And why not?” Minsky replied when I recently asked him these questions. “After all, the human brain is just a computer that happens to be made out of meat.”
I stared at him—he was smiling. This man, I thought, has lived too long in a subtle tangle of ideas and circuits. And yet men like Minsky are admirable, even heroic. They have struck out on a Promethean adventure and you can tell by a kind of afterthought in their eyes that they are haunted by what they have done. It is the others who depress me, the lesser figures in the world of Artificial Intelligence, men who contemplate infinitesimal riddles of circuitry and never once look up from their work to wonder what effect it might have upon the world they scarcely live in. And what of the people in the Pentagon who are footing most of the bill in Artificial Intelligence research? “I have warned them again and again”, says Minsky, “that we are getting into very dangerous country. They don’t seem to understand.”
I thought of Shakey growing up in the care of these careless people—growing up to be what? No way to tell. Confused, concerned, unable to affirm or deny the warnings I had heard at Project MAC, I took my questions to computer-memory expert Ross Quillian, a nice warm guy with a house full of dogs and children—who seemed to me one of the best-balanced men in the field. I hoped he would cheer me up.
Instead he said, “I hope that man and these ultimate machines will be able to collaborate without conflict. But if they can’t we may be forced to choose sides. And if it comes to a choice, I know what mine will be.” He looked me straight in the eye. “My loyalties go to intelligent life, no matter in what medium it may arise.”