30 Questions for Hans Moravec

Interview questions for Hans Moravec on compute forecasting, Moravec’s paradox, robotics, mind uploading, machine succession, and the moral status of post-biological intelligence.

After writing “The Scaling Hypothesis”, I became eager to interview Hans Moravec. I was unable to contact him and was concerned that he might be in too poor health to interview, but in April 2026, another journalist succeeded in getting an interview agreement.

I used my “interview prompt” to quickly generate a comprehensive set of last-minute questions.

History:

  • You have been publicly silent for almost 2 decades, despite your vindication. Why, and what have you been doing?

  • Back in the 1970s, what convinced you of the primacy of compute? Was there any single event?

  • Your 1988 prediction of ~10 tera-ops for ~$1,000 (1998 dollars; ≈$2,229 today) by the 2030s was remarkably accurate. How did you get it right?

  • Until the past few years, hardly anyone in the world believed your forecasts or your claim that compute would be converted into intelligence; why were we all wrong?

    • Was there a key fact you didn’t emphasize enough, or was there some experiment or proof missing?

  • In 1988, believing your forecast required believing that the field’s smartest people—symbolic AI’s leadership, cognitive scientists, philosophers of mind—were collectively and completely wrong. That would have been a hard thing to believe even with the math on your side. What gave you the nerve, and what should a reader have looked for to license that same nerve?

  • What did your forecasts underweight most: raw FLOPS, memory bandwidth, training data, algorithmic progress, energy, or economic willingness to spend?

  • Would you still use the retina-to-brain extrapolation today, or would you replace it with a training-compute, data, or algorithmic-efficiency estimate?
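
For readers who haven't seen the extrapolation the question above refers to: in Mind Children (1988) and a 1997 follow-up essay, Moravec costed the retina (one of the few neural circuits whose function is understood well enough to price in machine instructions) and scaled up to the whole brain. A minimal back-of-envelope sketch follows; the constants are illustrative approximations, since his published figures vary and land anywhere from ~10 to ~100 tera-ops:

```python
# Back-of-envelope reconstruction of Moravec's retina-to-brain extrapolation.
# All constants are approximate and illustrative; his own published figures
# vary between Mind Children (1988) and the 1997 essay.

retina_regions     = 1e6     # ~1 million image patches resolved by the retina
detections_per_sec = 10      # ~10 edge/motion detections per patch per second
ops_per_detection  = 100     # ~100 machine instructions to replicate one detection
brain_to_retina    = 75_000  # whole brain is ~75,000x the retina in neural volume

retina_ops = retina_regions * detections_per_sec * ops_per_detection  # ~1e9 ops/s
brain_ops  = retina_ops * brain_to_retina                             # ~7.5e13 ops/s

print(f"retina equivalent: {retina_ops:.0e} ops/s")
print(f"brain equivalent:  {brain_ops:.1e} ops/s (tens of tera-ops)")
```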

Current AI:

  • Do you see modern LLMs as confirming your compute-determinist thesis, or as something different? Where does a GPT-3 fit in your lizard/mouse/monkey/human robot generations?

  • “Moravec’s Paradox” states that perception and sensorimotor skills are computationally harder than abstract reasoning. Yet LLMs write, code, and reason well while remaining physically incompetent, and robotics remains confined to the factory.

    Does the success of deep learning scaling without a body invalidate your core thesis that physical embodiment is necessary for human-level intelligence?

  • You expected bottom-up robotics and top-down symbolic reasoning to meet. Did deep learning become the bridge, or did it bypass both traditions?

  • Do current LLMs have any form of subjective experience?

  • You famously predicted human-level AI by roughly 2040. Given the current multi-billion-dollar GPU boom, are we ahead of schedule, behind, or right on time?

    If we are behind, what are LLMs lacking that would take 14 more years to fix?

Morality and Machine Succession:

  • You once wrote that robots displacing humans is “the best thing we could hope for”. (See Mind Children and Wired 1995.) Do you still believe that about our “mind children”?

  • Do we have the right to create a species that will inevitably dispose of us?

  • You said the takeover will be “swift and painless” and that we’ll be remembered kindly. Is that naive?

  • You suggested AIs would develop their own morality. What would that look like? What is a specific moral law an AI might derive that humans would find abhorrent?

  • If you were forced to choose between humans surviving with no AI, and AI replacing humans and spreading across the galaxy, which would you pick? What is the most beautiful thing about a universe with AI successors but no humans?

  • How do you respond to AI safety researchers, like Eliezer Yudkowsky, who want to align AI with human values?

  • You proposed internalized law for robot corporations. Does that still look like a plausible alignment scheme?

Long-term outcomes:

  • Do you still believe mind uploading will happen?

  • Would uploaded humans remain “human”, or evolve into something unrecognizable?

  • Is the ultimate fate of post-biological intelligence to become black holes?

Retrospective:

  • If machine intelligence succeeds humanity, what do you hope it carries forward from us?

  • To a terrified young AI researcher today, what is your one sentence of advice?

  • If you could live to see one future event, what would it be?

  • As you face your own mortality in poor health, does the prospect of a machine-dominated, immortal future offer you any personal, spiritual comfort?

  • What would you choose if you could upload today?

Legacy:

  • How do you want to be remembered in the history of AI?

  • Do you have any regrets about how your ideas were received or used? Looking back, do you regret any of your public stances on AI replacement?

GPT-5.5 Pro objects to the list of questions I sent, and suggests the following revised list:

Forecasting:

  1. Silence: You have been publicly quiet since the 2000s, even as AI finally became a mass public issue. Which part of the last 20 years most surprised you?

  2. Raw Power: In the 1970s, what convinced you that AI’s missing ingredient was compute rather than theory?

  3. McCarthy: John McCarthy thought existing computers were roughly enough and that the field needed conceptual breakthroughs. Was he simply wrong, or was he right about theory but wrong about scale?

  4. Right For Wrong Reasons: Your compute forecast looks much better now than it did in 1988. Which part was genuinely right, and which part was accidentally right?

  5. Retina Extrapolation: Would you still use the retina-to-brain extrapolation today? Or would you now estimate intelligence from training compute, inference compute, data, parameter count, algorithmic efficiency, or something else?

  6. Training Versus Inference: Your forecasts mostly priced brain-equivalent runtime compute. Modern AI often spends enormous training compute to make cheap inference possible. Which side is the better analogy to animal intelligence? (A rough order-of-magnitude comparison is sketched after this list.)

  7. Data: Did you underestimate that Internet-scale text, images, video, code, and interaction logs would become the substitute for physical developmental experience?

  8. Algorithmic Ingredient: If you had to name the single missing algorithmic ingredient in 1988, would it be backpropagation, self-supervised learning, transformers, GPUs, scaling laws, reinforcement learning, or something else?

  9. Persuasion Failure: Before deep learning, what evidence should have persuaded symbolic AI researchers and cognitive scientists that compute would be converted into intelligence?

  10. Forecasting Nerve: In 1988, believing your forecast required believing that many brilliant people were badly wrong. What licensed that nerve?
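
Question 6 above turns on a difference of magnitudes between brain-equivalent runtime compute and modern training budgets. A rough, purely illustrative comparison, assuming Moravec’s ~10^14 ops/s runtime figure and a hypothetical frontier model trained with ~10^25 FLOP (an order-of-magnitude assumption, not a measured number):

```python
# Illustrative contrast: brain-equivalent runtime compute accumulated over a
# developmental period vs. an assumed frontier-model training budget.
# Both the 1e14 ops/s figure and the 1e25-FLOP training run are rough assumptions.

SECONDS_PER_YEAR     = 3.15e7
brain_ops_per_sec    = 1e14   # Moravec-style brain-equivalent runtime estimate
years_of_development = 20     # human developmental "training" period, for illustration

lifetime_ops   = brain_ops_per_sec * SECONDS_PER_YEAR * years_of_development  # ~6e22
training_flops = 1e25          # assumed frontier-model training compute

print(f"human developmental compute: ~{lifetime_ops:.0e} ops")
print(f"assumed LLM training run:    ~{training_flops:.0e} FLOP "
      f"(~{training_flops / lifetime_ops:.0f}x larger)")
```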

Robotics:

  1. Moravec’s Paradox: Text models can write code and solve exams while robots still struggle with ordinary physical tasks. Is this exactly Moravec’s paradox, or a violation of what you expected?

  2. Embodiment: Did you believe embodiment was necessary for human-level intelligence, or only that robotics was the practical route by which intelligence would arrive?

  3. Simulated Experience: Can video, simulation, and synthetic interaction data substitute for physical sensorimotor experience?

  4. Robotics Lesson: What did building real mobile robots teach you that AI theorists still do not understand?

  5. Robot GPT-3 Moment: What would be the first sign that robotics has had its GPT-3 moment?

  6. Capability Test: Which robot capability would most change your mind about the schedule: housecleaning, dexterous repair, autonomous science labs, eldercare, military logistics, or something else?

Mind:

  1. Consciousness Criterion: In your neural-substitution argument, what matters most: external behavior, internal causal organization, continuity of replacement, or something else?

  2. LLM Experience: Do current LLMs cross any threshold for subjective experience, or are they still non-experiencing simulators of minded text?

  3. Copies: Would destructive uploading be survival or reproduction? What would you personally accept?

  4. Gradual Replacement: Does gradual neuron-by-neuron substitution solve the identity problem, or merely make the copy psychologically easier to accept?

  5. Mind Children Versus Uploads: Which arrives first: independent machine minds, high-fidelity uploads, or neither?

  6. Ultimate Computation: Do you still think post-biological intelligence trends toward dense astronomical computation—black holes, reversible computers, Jupiter brains—or has that picture changed?

Succession:

  1. Best Thing: In 1995, you treated machine succession as the best thing humans could hope for. Do you still believe that?

  2. Painless Transition: What mechanism makes the transition swift and painless rather than predatory, chaotic, or indifferent?

  3. Robot Corporations: You proposed legally constrained robot corporations. Does that still look like an alignment scheme, or only a transitional legal fiction?

  4. Yudkowsky: AI safety researchers like Yudkowsky want AI aligned with human values. Are they defending value, defending primates, or confusing the two?

  5. Machine Morality: What moral law might machine descendants derive that humans would find abhorrent?

  6. Forced Choice: If you had to choose: humans survive but no AI successors, or AI successors spread through the galaxy after humans vanish—which universe is better?

  7. Moral Patients: When do AI systems deserve moral consideration rather than remaining tools? What evidence would persuade you?

Legacy:

  1. Late Prophet: If the 2040 forecast is late by 20 years but the succession thesis is right, should history call you a failed forecaster or a successful theorist?