No Loyalty to DNA
by Brad Leithauser
The New Yorker, January 9, 1989, pages 84-88
Queer happenings are afoot in the tree of
life. That's the message one gets, in any case, from Hans Moravec's
"Mind Children: The Future of Robot and Human Intelligence" (Harvard
$18.95). At a time when many books for the layman document the steady
pruning of life's tree -- the dwindling of plant and animal species
through environmental spoliation -- Moravec focusses on a prospective
branching out: the emergence of new life-forms that soon will, he
feels quite certain, "mature into entities as complex as ourselves."
He is asking us, in effect, to transform our taxonomy at the roots. No
longer would the complementary kingdoms of plants and animals
represent the primary division; he foresees a still more fundamental
bifurcation -- that between what might be called the empire of organic
life and the empire of inorganic life. We are about to enter a
"postbiological" world. Imminently -- perhaps within the lifetimes of
our children -- robots of such advanced capabilities will emerge that
"our DNA will find itself out of a job, having lost the evolutionary
race to a new kind of competition."
As a genre, the popular science book boasts a number of
distinctive traits, several of which are present in "Mind Children."
Books about rapidly developing technology can often seem undeservedly
cheerful, and at various points Moravec might be faulted for scanting
the darker psychological and moral implications of the vision he
conjures up. He brings to his subject both the natural optimism of the
technician whose field is mushrooming before his eyes and the
belief -- which seems to keep creeping in, despite his disavowals of
it -- that technology will necessarily save us from technology. "Mind
Children" also illustrates, as books of the genre commonly do, that
the modern proliferation of research effectively makes each of
us -- scientist and nonscientist alike -- a layman; as knowledge expands
breathtakingly, in every direction, even the scientist must greet most
discoveries with head-shaking incomprehension. Although one's primary
reason for picking up a book like "Mind Children" may be to get a grip
on a burgeoning new field, one is probably also hoping to throw off
some of the dazzled numbness that comes of living in a technologically
explosive age. Doubtless many readers who turn regularly to the best
science popularizers -- to authors like Loren Eiseley and Stephen Jay
Gould and Paul Colinvaux and Douglas Hofstadter -- do so to combat the
numbness; the reader seeks the haphazard saving moment when an
impossibly distant object or an unthinkably complex equation comes
alive. Such sightings are likely to be unpredictable and personally
idiosyncratic. I remember once discovering, in a mathematician's
biography, that a hypothesis whose implications I did not understand
had been verified for all integers less than 10^10^10^34, a figure
beside which the number of elementary particles in the
observable universe looks infinitesimally small. That so mind-boggling
a figure had been of use and application in somebody's line of
work seemed in itself an affirmation of the wonders of the
universe.
Another characteristic of the genre is that the guilt one feels
toward unread books diminishes on a steady, almost graphable
basis. The reader who buys but fails to open, say, "The Tale of Genji"
or "Njal's Saga" or Hobbes' "Leviathan" or Boswell's "Life of Johnson"
introduces into his home a durable source of guilt; such books are
classics, and they reproach us unremittingly as long as they remain
unread. But the reader who buys the latest volume on artificial
intelligence or quantum theory or paleontology or black holes knows
that with each passing month its urgency will fade; within a few
years, it will make no claim whatsoever, since by then it can safely
be deemed out of date. One would be making a mistake, though, to let
"Mind Children" recede unopened into a guiltless oblivion. It's a
tonic book, thought-provoking on every page. And it reminds us that,
in our accelerating, headlong era, the future presses so close upon us
that those who ignore it inhabit not the present but the past.
Hans Moravec, who is director of the
Mobile Robot Laboratory of Carnegie Mellon University, possesses a
lucid, reassuringly commonsensical style and a flair for analogical
simplification which together make the recondite seem approachable and
the revolutionary plausible. Ever since completing his graduate
training at the Stanford Artificial Intelligence Laboratory almost a
decade ago, he has concentrated on robot locomotion and vision, and he
devotes part of the initial chapter of "Mind Children" to a detailing
of the difficulties inherent in any attempt to endow a robot with
sight. The theoretically simple business of "hooking up" a computer to
television equipment proves fiendishly complex in practice. The field
of robotics is, in fact, full of unexpected reversals. Tasks that look
elementary often prove formidable. In general, scientists have had a
much easier time teaching a robot to perform the "higher" functions
that formerly belonged solely to human beings (reading, proving
theorems, diagnosing diseases) than the "lower" functions that animals
have mastered (hearing, seeing, grasping objects). Improbably enough,
a robot is more easily taught to play expert chess than to move the
pieces.
As Moravec points out, there are evolutionary reasons for the
higher being more accessible than the lower: "Encoded in the large,
highly evolved sensory and motor portions of the human brain is a
billion years of experience about the nature of the world and how to
survive in it. The deliberate process we call reasoning is, I believe,
the thinnest veneer of human thought, effective only because it is
supported by this much older and much more powerful, though usually
unconscious, sensorimotor knowledge." Given the irregularities and
uncertainties of terrain outside the laboratory, freewheeling movement
is not merely tricky but often hazardous for a robot. The
monumentality of the programmer's task becomes evident when one
considers that a truly flexible and autonomous robot would have to
have enough of what researchers call "world knowledge" to translate
all relevant physical conditions -- every shifting object, every stray
obstacle -- into the strings of binary numbers which are its language of
operation.
So thoroughgoing and convincing is Moravec on the subject of the
complexities of robot movement that one comes away from his early
pages feeling that the goal of autonomy is almost insurmountable -- and
also feeling that in the face of such a sober-minded assessment one
must treat even the most outlandish of his subsequent predictions with
respect. In its less than two hundred pages of text, the book
undertakes quite a journey. By its close, the reader has met robots
that can go on risky vacations for vicariously adventurous human
beings, "protein robots" so miniscule that they can assemble machinery
molecule by molecule, even robots that can construct other robots in
factories out in the asteroid belt.
Near the start, while discussing some of the ways in which a
mechanical object could be programmed to behave like a human being,
Moravec takes an intellectual sidestep. "The conditioning software I
have in mind would receive two kinds of messages from anywhere within
the robot, one telling of success, the other of trouble," he
begins. "I'm going to call the success messages 'pleasure' and the
danger messages 'pain.' Pain would tend to interrupt the activity in
progress, while pleasure would increase its probability of
continuing." But once he has dropped the quotation marks around
"pleasure" and "pain" he treats the terms as though each was genuinely
synonymous in its robotics and its human applications. He acknowledges
no potential confusion when describing machines in emotive language:
"Modules that recognize other conditions and sense pain or pleasure
messages of appropriate strength would endow a robot with a unique
character. A large, dangerous robot with a human-presence detector
sending a pain signal would become shy of human beings and thus be
less likely to cause injury." In short, he appears to accept as a
given that the hypothesis that mind is merely a kind of machine -- one
whose meditations and commands are ultimately duplicable by other,
inorganic machinery -- and therefore finesses a question that lies at
the core of current debate in the field of artificial intelligence: Is
there any area of human activity which is obdurately, permanently
inaccessible to machines? Moravec, who once remarked that he has "no
loyalty to DNA," may be sound in assuming an underlying identicalness
between the human mind and the machine (most experts in the field
would probably agree with him), but he was unwise in choosing to pass
over so fertile and significant a controversy. Readers who look
elsewhere for a discussion of the subject -- perhaps to the contentious
essays assembled in "The Artificial Intelligence Debate," a new
collection edited by Stephen R. Graubard -- will likely encounter a
tangle of human emotions, including skepticism, anger, foreboding, and
indignation, that are all but missing in Moravec's expansive,
self-assured projections.
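The conditioning scheme Moravec describes is simple enough to render as a toy program: success signals reinforce the activity in progress, pain signals suppress and interrupt it. The sketch below is purely illustrative -- its class, its activities, and its numbers are invented for the occasion, and nothing in it is drawn from Moravec's actual software.

import random

class ConditionedRobot:
    def __init__(self, activities):
        # No initial preference among the available activities.
        self.preferences = {a: 1.0 for a in activities}
        self.current = random.choice(activities)

    def receive(self, message, strength=1.0):
        # "Pleasure" reinforces the activity in progress; "pain" suppresses
        # it and interrupts, forcing a fresh choice of activity.
        if message == "pleasure":
            self.preferences[self.current] += strength
        elif message == "pain":
            self.preferences[self.current] = max(
                0.1, self.preferences[self.current] - strength)
            self.current = self.choose()

    def choose(self):
        # The next activity is drawn in proportion to accumulated preference.
        activities = list(self.preferences)
        weights = [self.preferences[a] for a in activities]
        return random.choices(activities, weights=weights, k=1)[0]

robot = ConditionedRobot(["explore", "recharge", "stand still"])
robot.receive("pain", strength=2.0)   # say, a human-presence detector firing
print(robot.current, robot.preferences)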
But readers who are willing to go along with him, at least
temporarily, on the issue of duplicability -- something which the patent
joy he derives from speculation invites one to do -- will find that his
arguments proceed with a sureness that verges on the
inexorable. Attempting to place the modern computer in historical
perspective, he ventures back a hundred years to examine Herman
Hollerith's punch-card tabulator (a device that eventually became a sort
of founding father of I.B.M.), and he concludes that since the
beginning of the century "there has been a trillionfold
increase in the amount of computation a dollar will buy." He estimates
that in terms of computational power the largest of the present-day
supercomputers "are a match for the 1-gram brain of a mouse," but that
in time we may be able to build machines that operate at a million
million million million million (10^30)
times the power of a human mind. What can one say in response to such
a number? If duplicability is possible, is it not inevitable? And even
if we assume that it is not possible, how can we deny that
machines of such unreckonable energies might be capable of a rich
and ranging inner life of their own?
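The arithmetic behind these figures can be checked on the back of an envelope, assuming only that the growth Moravec cites has been roughly steady since the turn of the century:

\[
(10^{6})^{5} = 10^{30}, \qquad
10^{12} = 2^{n} \;\Rightarrow\; n = \frac{12}{\log_{10} 2} \approx 40,
\]
\[
\text{so a trillionfold gain over roughly ninety years amounts to about forty
doublings, or one doubling every } \tfrac{90}{40} \approx 2\tfrac{1}{4} \text{ years.}
\]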
Isn't it only a matter of time, Moravec asks, before we can
transfer, or "download," our minds into computers? Copies could then
be made of copies and stored in separate, secure places, not all of
them on the earth -- a procedure that would virtually insure our
immortality. He foresees a number of ways in which downloading might
take place. A person could wear each day a miniaturized observational
device, whose data, compiled over years and years, would serve as the
memory bank of a new intellect. Or you might enter the hospital for
brain surgery to be performed by a robot whose hands are
microscopically precise and whose command of speech allows the two of
you to proceed collaboratively. Since the brain registers no pain when
it is subjected to incision, you could be fully conscious during the
entire operation. Equipped with an encyclopedic understanding of human
neural architecture, and proceeding millimetre by millimetre, the
robot surgeon would develop a program that would model the behavior of
a discrete layer of brain tissue. This program would produce signals
equivalent to those flashing among the neurons in the area under
scrutiny, and a series of cables would allow the robot to create
"simulations," in which the program is substituted for the layer of
brain tissue. The simulation process would be analogous to what's now
available in sophisticated audio shops, where a customer can test and
compare components at the push of a button and without breaking the
flow of the music:
To further assure you of
the simulation's correctness, you are given a pushbutton that allows
you to momentarily "test drive" the simulation, to compare it with the
functioning of the original tissue. When you press it, arrays of
electrodes in the surgeon's hand are activated. By precise injections
of current and electromagnetic pulses, the electrodes can override the
normal signaling activity of nearby neurons. . . . As long as you
press the button, a small part of your nervous system is being
replaced by a computer simulation of itself. You press the button,
release it, and press it again. You should experience no
difference. As soon as you are satisfied, the simulation connection is
established permanently. The brain tissue is now impotent -- it
receives inputs and reacts as before but its output is ignored.
Microscopic manipulators on the hand's surface excise the cells in
this superfluous tissue and pass them to an aspirator, where they are
drawn away. . . . Eventually your skull is empty, and the surgeon's
hand rests deep in your brainstem. Though you have not lost
consciousness, or even your train of thought, your mind has been
removed from the brain and transferred to a machine.
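Stripped of the operating theatre, what Moravec describes is an iterative replace-and-verify procedure: model a layer, let the patient toggle between tissue and model until the two are indistinguishable, then commit the substitution and move on. The schematic below is an invention of this review, a runnable toy in which plain numbers stand in for neural behavior; nothing in it comes from Moravec's book beyond the shape of the loop.

class Layer:
    def __init__(self, behaviour):
        self.behaviour = behaviour        # stands in for the tissue's signalling

class Simulation:
    def __init__(self, estimate):
        self.estimate = estimate
    def matches(self, layer, tolerance=1e-6):
        # "Press the button, release it, and press it again": compare the
        # model's output with the original tissue's.
        return abs(self.estimate - layer.behaviour) < tolerance

def model(layer, previous=None):
    # Each refinement halves the discrepancy of the previous estimate.
    if previous is None:
        return Simulation(layer.behaviour + 1.0)
    return Simulation((previous.estimate + layer.behaviour) / 2)

def download(brain):
    mind = []
    for layer in brain:                   # proceed layer by layer
        sim = model(layer)
        while not sim.matches(layer):     # refine until indistinguishable
            sim = model(layer, previous=sim)
        mind.append(sim)                  # the connection made permanent
    return mind                           # the tissue excised; the mind in the machine

brain = [Layer(b) for b in (0.3, 1.7, 2.4)]
print([round(s.estimate, 6) for s in download(brain)])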
A slower and seemingly less traumatic transfer might be achieved by
installing in the corpus callosum -- the main cable that unites the
brain hemispheres -- a microscopic monitor linked to a computer that
would "eavesdrop" in order to make a model of your mental
activities:
After a while it begins to
insert its own messages into the flow, gradually insinuating itself
into your thinking, endowing you with new knowledge and new skills.
In time, as your original brain faded away with age, the computer
would smoothly assume the lost functions. Ultimately your brain would
die, and your mind would find itself entirely in the
computer.
Any such event would compel a further modification in our
taxonomy. The distinction between organic and inorganic life -- and,
indeed, all the subdistinctions by which a species is fitted into a
unique biological niche -- would dissolve. Although Moravec has
disappointingly little to say about religion, his ultimate vision
incarnates widespread theological convictions about the "oneness" of
all life:
Mind transferral need not
be limited to human beings. Earth has other species with large brains,
from dolphins, whose nervous systems are as large and complex as our
own, to elephants . . . and perhaps giant squid, whose brains may
range up to twenty times as big as ours. Just what kind of minds and
cultures these animals possess is still a matter of controversy, but
their evolutionary history is as long as ours, and there is surely
much unique and hard-won information encoded genetically in their
brain structures and their memories. The brain-to-computer
transferral methods that work for humans should work as well for these
large-brained animals, allowing their thoughts, skills, and
motivations to be woven into our cultural tapestry. Slightly
different methods, that focus more on genetics and physical makeup
than on mental life, should allow the information contained in other
living things with small or no nervous systems to be popped into the
data banks. The simplest organisms might contribute little more than
the information in their DNA. In this way our future selves will be
able to benefit from and build on what the earth's biosphere has
learned during its multibillion-year history. And this knowledge may
be more secure if it is preserved in databanks spreading through the
universe. In the present scheme of things, on our small and fragile
earth, genes and ideas are often lost when the conditions that gave
rise to them change.
Our speculation ends in a supercivilization, the synthesis of all
solar-system life, constantly improving and extending itself, spreading
outward from the sun, converting nonlife into mind.
Actually, Moravec might plausibly contend that conventional
theological debate is hardly germane to his argument. If he is
mistaken about human duplicability, most of his projections at once
reveal themselves as pipe dreams that connect only remotely and
hypothetically with religious issues. And if, on the other hand, he is
correct in supposing that human minds will be transferred into or
otherwise fused with machines, it seems likely that traditional
religious questions -- and traditional religions themselves -- will
either melt away or suffer wholesale metamorphosis. Debates about
Heaven or Hell -- to take but one example -- would hold little
relevance for an immortal creature. One wishes, however, that he had
accorded greater space to psychological considerations. Many people
experience an instinctive unease at the incursions of the mechanical
-- a feeling concisely summed up by Emerson a century ago: "Machinery
is aggressive." And although such people might reconcile themselves in
time to the notion of a man/machine cohabitation -- most of us, in the
course of modern life, have already grown used to hearing computers
speak to us -- the conviction that there is something innately
"special" about human beings would surely die hard, and at great cost.
The modern scientist and his offering are often likened to Mary
Shelley's Victor Frankenstein and his monster, but the
nineteenth-century novel that Moravec most vividly evokes is
Stevenson's "Dr. Jekyll and Mr. Hyde." In a document found at his
death, Dr. Henry Jekyll explained how, troubled by the "polar twins"
that dwelt in his "agonised womb of consciousness," he conceived the
prospect of a sweet divorce" "If each, I told myself, could be housed
in separate identities, life would be relieved of all that was
unbearable." These words are echoed in Moravec's prologue: "In the
present condition we are uncomfortable halfbreeds, part biology, part
culture, with many of our biological traits out of step with the
invention of our minds. . . . It is easy to imagine human thought
freed from bondage to a mortal body." "Mind Children" makes light of
the possibility that a deathless human being is not a human being at
all -- that the condition of mortality so informs our lives as to
render them unrecognizable without it. It may be (to pose a paradox of
a sort that Moravec himself might relish) that on the day when man
makes himself immortal he makes himself extinct. The future that
Moravec sees is certainly one in which "timeless" truths -- the
eternal verities of the poet -- are set on their ear. Algernon
Swinburne observed that "all men born are mortal but not man." Moravec
everts this dictum: in his world, the individual would become
deathless, but man in the aggregate -- that species whose hopes and
expectations have been framed in the phrase "threescore years and ten"
-- would vanish.
One has to wonder how the art that we have safeguarded throughout
the centuries would survive the transformation. Whether one is
listening to Hamlet speculate on the bourn from which none return or
contemplating a ukiyo-e print of rice harvesters or reading
"Gilgamesh," the appreciation of any work of art generally requires us
to cross a gulf -- both geographical and temporal -- on the bridge of
our kindred uncertainty and helplessness in the face of death. Any art
that might be fabricated in Moravec's new world would be composed, in
effect, in a new language. Without question, it would be
extraordinary. But surely much of what we now revere would suffer in
translation.
Readers who are curious about Hans
Moravec, and long for greater personal detail than is provided in
"Mind Children," will find him in fine form, witty and engaging and
professorially eccentric (a favorite snack is Cheerios topped with
bananas and chocolate milk), in Grant Fjermedal's "The Tomorrow
Makers," whose subtitle is "A Brave New World of Living-Brain
Machines." In pursuit of his book, Fjermedal spent a number of clearly
quite exhilarating months in this country and Japan, drifting from one
artificial-intelligence center to another and meeting a wide range of
individuals who, for all their singularities, seem to share a penchant
for working all night and sleeping catch as catch can on the
morrow. And who share, as well, a daily, ingrained perception that the
intertwined evolution of man and machine -- of which downloading might
be regarded as the apotheosis -- is steadily speeding us toward an
alien world.
Even readers who view the prospect of downloading with confident
disbelief or squeamish distaste will appreciate the poignance that at
present suffuses the field. A number of researchers have come to
believe that they were born just a little too early -- that the
immortality toward which their collective efforts are reaching will
not be attained soon enough. For them, death is an ailment that will
not be cured in their lifetimes. Fjermedal quotes one researcher as
saying, "Everyone would like to be immortal. I don't think the time is
quite right. But it's close. It isn't very long from now. I'm afraid,
unfortunately, that I'm the last generation to die."
According to Moravec, however, such pronouncements are unduly
pessimistic and final. He speculates that by dint of a gathering
mastery of a range of disciplines, including history, genetics,
anthropology, and computer simulation, much that has disappeared may
prove retrievable. Having effaced so many familiar categories, the
future will eventually soften even the distinction between itself and
the past. Time will turn ductile:
Now, imagine an immense
simulator (I imagine it made out of a superdense neutron star) that
can model the whole surface of the earth on an atomic scale and can
run time forward and back and produce different plausible outcomes by
making different random choices at key points in its
calculation. Because of the great detail, this simulator models living
things, including humans, in their full complexity. According to the
pattern-identity position, such simulated people would be as real as
you or me, though imprisoned in the simulator.
We could join them through a magic-glasses interface, which
connects to a "puppet" deep inside the simulation and allows us to
experience the puppet's sensory environment and to naturally control
its actions. More radically, we could "download" our minds directly
into a body in the simulation and "upload" back into the real world
when our mission is accomplished. Alternatively, we could bring people
out of the simulation by reversing the process -- linking their minds
to an outside robot body, or uploading them directly into it. In all
cases we would have the opportunity to recreate the past and to
interact with it in a real and direct fashion.
When that day comes, we will have a choice about which pasts we
want to consign to the past and which we will summon to accompany us
into the future. In the meantime, though, the reader is left to wonder
what the human cost would be of never losing anything.
-- Brad Leithauser