“Superhumanism: According to Hans Moravec § AI Scaling”, 1995-10-01:
…Moravec’s early work in robotics was plagued by setbacks. “I spent most of the 1970s”, he recalls, “trying to teach a robot to find its way across a room. After 10 years, in 1979, I finally had one that could get where it was going 3× out of 4—but it took 5 hours to travel 90 feet.” He chuckles like a fond father recalling the first incompetent steps of his baby boy.
Why was it so hard for a robot to accomplish a task that even a mouse can manage with ease? The answer, of course, is that animals have had hundreds of millions of years in which to evolve motor skills. The problem of moving through a 3-dimensional world is hideously complex, as Moravec indicates, counting off the tasks on his fingers: “Our robot used multiple images of the same scene, taken from different points of view, in order to infer distance and construct a sparse description of its surroundings. It used statistical methods to resolve mismatching errors. It planned obstacle-avoidance paths. And then it had to decide how to actually turn its motors and wheels.”
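[The “infer distance” step is what computer vision now calls stereo triangulation: under the standard pinhole-camera model, depth falls out of the disparity between the two views. A minimal sketch of that relation, with made-up camera parameters and no claim to reflect Moravec’s actual 1970s code:]

```python
# Depth from stereo disparity under the standard pinhole-camera model.
# A generic illustration of "multiple images ... to infer distance";
# the focal length and baseline below are invented example values.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d: distance grows as disparity shrinks."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity, or a mismatch")
    return focal_px * baseline_m / disparity_px

# Example: 500 px focal length, 0.5 m camera baseline, 25 px disparity -> 10 m.
print(depth_from_disparity(500.0, 0.5, 25.0))
```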
In 1980, he built new robots and attempted to boost their performance [the Stanford Cart project, see 1983]. “But the best we were able to do with our old approach”, he recounts, “was speed it up about 10× and improve its accuracy 10×. We did not manage to reduce its brittleness.”…In 1984, using $10 (≈$32.46 in current dollars) Polaroid ultrasonic range finders instead of expensive video cameras, he created a new commercial robot that analyzed maps of the surrounding space rather than just objects in it. The result, to his surprise, was a system that could navigate reliably and relatively swiftly.
Moravec’s current research robot, a project initiated in 1987, now sits in a small workshop just across the corridor outside his office. “Would you like to take a look?” he asks.
…“Today’s best robots can think at insect level”, he says as we return to his office. He explains that state-of-the-art mobile robots orient themselves by sensing special markers placed on floors, walls, or ceilings. Insects behave the same way: ants follow pheromone trails, lightning bugs look for each other’s flashes, and moths navigate with reference to the moon.
The trouble is, such systems are still brittle. Just as a moth can become fatally confused by fixing on candlelight instead of moonlight, a robot guided by markers can easily make a disastrous mistake. One robot, designed by a Connecticut company to distribute hospital linens, took a nosedive down a flight of stairs after failing to notice the marker that was supposed to stop it from proceeding past a certain point.
…Moravec estimates that these systems will need an onboard computer capable of 500 million instructions per second. The first IBM PCs managed 0.3 MIPS; a modern Pentium-based PC reaches 200 MIPS; and it’s reasonable to expect that 500-MIPS processors will be affordable by the turn of the century.
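[The “turn of the century” projection is a straightforward exponential extrapolation from the two figures in the text. A back-of-the-envelope check, assuming the standard dates of 1981 for the first IBM PC and 1995 for the Pentium figure:]

```python
import math

# Back-of-the-envelope check of the article's timeline, using only the two
# numbers it gives: ~0.3 MIPS for the first IBM PC and ~200 MIPS for a
# Pentium PC. The years (1981, 1995) are assumed, not stated in the text.
mips_1981, mips_1995 = 0.3, 200.0
growth_per_year = (mips_1995 / mips_1981) ** (1 / (1995 - 1981))
doubling_time = math.log(2) / math.log(growth_per_year)

years_to_500 = math.log(500.0 / mips_1995) / math.log(growth_per_year)
print(f"~{growth_per_year:.2f}x per year, doubling every {doubling_time:.1f} years")
print(f"500 MIPS reached around {1995 + years_to_500:.0f}")
```

[At the implied ~1.6× per year, 500 MIPS arrives around 1997, comfortably before 2000.]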
This power will enable the robot to convert 500-by-500-pixel stereoscopic pictures from its camera eyes into a 3-D model consisting of about 100-by-100-by-100 cells. Updating and processing all this visual information will take about one second—the longest interval that is reasonably safe and practical, since the robot will move blindly between glimpses of the world.
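[Those figures imply a concrete instruction budget. Dividing the 500 million instructions available per 1-second update across the stereo pixels and grid cells gives an illustrative breakdown, not one Moravec published:]

```python
# Rough instruction budget implied by the article's numbers: a 500 MIPS
# processor, one update per second, two 500x500 camera images feeding a
# 100x100x100 grid of cells.
mips = 500
budget = mips * 1_000_000            # instructions available per 1 s update
pixels = 2 * 500 * 500               # stereo pair of 500x500 images
cells = 100 ** 3                     # 3-D model cells

print(budget // pixels, "instructions per pixel")   # ~1000
print(budget // cells, "instructions per cell")     # 500
```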
Once robots find a niche doing dull, repetitive jobs, Moravec sees an ever-expanding market. “The next step will be adding an arm and improving the sensor resolution so that they can find and manipulate objects. The result will be a first generation of universal robots, around 2010, with enough general competence to do relatively intricate mechanical tasks such as automotive repair, bathroom cleaning, or factory assembly work.”
By “universal” Moravec means the robot will tackle many different jobs in the same way a Nintendo system plays many different games. Plug in one cartridge, and the robot will know how to change the oil in your car. Plug in another, and it will know how to patrol your property and challenge intruders.
Add more memory and computing power and enhance the software, and by 2020 we have a second generation that can learn from its own performance. “It will tackle tasks in various ways”, says Moravec, “keep statistics on how well each alternative has succeeded, and choose the approach that worked best. This means that it can learn and adapt. Success or failure will be defined by separate programs that will monitor the robot’s actions and generate internal punishment and reward signals, which will actually shape its character—what it likes to do and what it prefers not to do.”
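[In current terminology this is a multi-armed-bandit / reinforcement-learning loop: try each alternative, track success statistics, and exploit the best-performing one. A minimal ε-greedy sketch; the method names and reward signal are placeholders, not Moravec’s design:]

```python
import random

# Minimal sketch of the learning scheme described above: try a task several
# ways, keep success statistics per method, and prefer the method with the
# best record so far, while occasionally exploring the others.
methods = ["approach_A", "approach_B", "approach_C"]   # placeholder names
successes = {m: 0 for m in methods}
attempts = {m: 0 for m in methods}

def choose(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:        # occasionally explore an alternative
        return random.choice(methods)
    # otherwise exploit the best observed success rate
    return max(methods,
               key=lambda m: successes[m] / attempts[m] if attempts[m] else 0.0)

def record(method: str, reward: bool) -> None:
    # 'reward' stands in for the separate monitoring programs that generate
    # the internal punishment and reward signals Moravec mentions.
    attempts[method] += 1
    successes[method] += int(reward)
```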
Moravec pauses. The near future of robotics is something he’s spelled out a thousand times before, and he no longer finds it particularly exciting. But now we get to a subject that interests him more: the idea that robots can mimic human traits.
By 2030, according to Moravec, we should have a third-generation universal robot that emulates higher-level thought processes such as planning and foresight. “It will maintain an internal model not only of its own past actions, but of the outside world”, he explains. “This means it can run different simulations of how it plans to tackle a task, see how well each one works out, and compare them with what it’s done before.” An onlooker will have the eerie sense that it’s imagining different solutions to a problem, developing its own ideas.
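[What Moravec describes is, in modern terms, model-based planning: roll candidate plans forward through an internal world model and act on the one with the best predicted outcome. A schematic sketch in which the plans, the world model, and the scoring function are all placeholders:]

```python
from typing import Callable, Sequence

def pick_plan(plans: Sequence[str],
              simulate: Callable[[str], float]) -> str:
    """Run each candidate plan through the internal model; act on the best."""
    scored = {plan: simulate(plan) for plan in plans}   # imagined outcomes
    return max(scored, key=scored.get)

# Usage: simulate() would roll the internal world model forward and return a
# predicted success score, which could in turn be compared with the logged
# outcomes of the robot's own past actions.
```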
…On the plus side, each time a robot learns a fact or masters a skill, it will be able to pass its knowledge to other robots as quickly and easily as sending a program over the Net. This way, the task of understanding the world can be divided among thousands or millions of robot minds. As a result, the machines will soon develop a deeper knowledge base than any single person can hope to possess. Within a short space of time, robots that are linked in this way will no longer need our help to show them how to do anything. [followed not long after by human extinction, which Moravec believes is a good thing]
…But what about the time scale? Isn’t he compressing a huge amount of progress into a very few decades?
“Back in the 1970s I made some overoptimistic assumptions about the rate of progress of computers [see 1976]. I thought that using an array of cheap microcomputers, we might achieve human equivalence by the mid-1980s. Then I did a slightly more careful calculation around 1978 and decided it would take another 20 years, requiring a supercomputer. But then I started getting serious, writing articles and essays, and I thought I should do the calculations more rigorously. So I collected 100 data points of previous computer progress, I did the best calculation I could, I compared the human retina with computer vision applications, and I plotted it all out.” [see “When will computer hardware match the human brain?”]
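[The “best calculation” amounts to fitting a least-squares line through log(performance) versus year. An illustrative version follows; only the 1981 and 1995 figures come from the text, and the 1988 point is invented to make the example non-trivial, so this does not reproduce Moravec’s 100 data points:]

```python
import math

# Least-squares line through log(MIPS) vs. year, the standard way to fit an
# exponential trend. Three placeholder points stand in for Moravec's dataset.
points = [(1981, 0.3), (1988, 8.0), (1995, 200.0)]

xs = [year for year, _ in points]
ys = [math.log(mips) for _, mips in points]
n = len(points)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
den = sum((x - x_mean) ** 2 for x in xs)
slope = num / den

print(f"fitted growth: {math.exp(slope):.2f}x per year")
```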