---
title: '30 Questions for Hans Moravec'
author: Gwern, GPT-5.4 Pro, Claude-4.6-opus
description: "Interview questions for Hans Moravec on compute forecasting, Moravec’s paradox, robotics, mind uploading, machine succession, and the moral status of post-biological intelligence."
created: 2026-04-22
modified: 2026-05-10
status: finished
importance: 0
confidence: log
css-extension: dropcaps-kanzlei
backlink: False
...

::: abstract
After writing ["The Scaling Hypothesis"](/scaling-hypothesis), I became eager to interview [Hans Moravec](https://en.wikipedia.org/wiki/Hans_Moravec){.link-live}. I was unable to contact him, and was concerned that he might be in too poor health to be interviewed; but in April 2026, another journalist succeeded in securing an interview agreement.

`<!-- TODO: link when article published in _New York_? Can we get him to publish the full interview transcript? -->`{=html}

I used [my "interview prompt"](/interview-inkhaven#interview-prompt) to quickly generate a comprehensive set of last-minute questions.
:::

**History**:

-   You have been publicly silent for almost 2 decades, despite your vindication. Why, and what have you been doing?

-   Back in the 1970s, what convinced you of [the primacy of compute](https://web.archive.org/web/20230710000944/https://frc.ri.cmu.edu/~hpm/project.archive/general.articles/1975/Raw.Power.html)? Was there any single event?

-   [Your 1988 prediction](https://jetpress.org/volume1/moravec.htm) of \~10 tera-ops for \~[\$1,000]($1998) by the 2030s was remarkably accurate. How did you get it right?

-   Until the past few years, hardly anyone in the world believed your forecasts or your claim that compute would be converted into intelligence; why were we all wrong?

    -   Was there a key fact you didn't emphasize enough, or was there some experiment or proof missing?

-   In 1988, believing your forecast required believing that the field's smartest people---symbolic AI's leadership, cognitive scientists, philosophers of mind---were collectively, completely wrong. That would be a hard thing to believe even with the math on your side. What gave you the nerve, and what should a reader have looked for to license that same nerve?

-   What did your forecasts underweight most: raw FLOPS, memory bandwidth, training data, algorithmic progress, energy, or economic willingness to spend?

-   Would you still use the retina-to-brain extrapolation today, or would you replace it with a training-compute, data, or algorithmic-efficiency estimate?
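For readers unfamiliar with the retina-to-brain extrapolation, here is a back-of-envelope sketch of the style of estimate, using the rounder constants from Moravec's later 1998 restatement (the *Journal of Evolution and Technology* paper linked above); the 1988 *Mind Children* version used somewhat different constants and arrived at the ~10 tera-ops figure:

```python
# Illustrative sketch of Moravec's retina-to-brain extrapolation.
# Constants follow his 1998 restatement; they are order-of-magnitude estimates.
pixels = 1e6                # ~1 million image regions resolved by the retina
detections_per_sec = 10     # ~10 edge/motion detections per region per second
instrs_per_detection = 100  # ~100 machine instructions to match one detection

# Retina-equivalent compute: 1e6 * 10 * 100 = 1e9 instr/sec = 1000 MIPS.
retina_mips = pixels * detections_per_sec * instrs_per_detection / 1e6

# Scale by mass: brain ~1,500 g vs. retina ~0.02 g, a factor of 75,000.
brain_to_retina_mass = 1500 / 0.02
brain_mips = retina_mips * brain_to_retina_mass  # ~7.5e7 MIPS ~ 1e14 instr/sec

print(f"retina ~ {retina_mips:.0f} MIPS")
print(f"brain  ~ {brain_mips:.1e} MIPS (~{brain_mips * 1e6:.0e} instr/sec)")
```

The whole argument compresses into two multiplications, which is both its charm and its vulnerability: every constant is uncertain by an order of magnitude, and the mass-scaling step assumes the rest of the brain computes about as efficiently per gram as the retina does.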

**Current AI**:

-   Do you see modern [LLMs](https://en.wikipedia.org/wiki/Large_language_model){.link-live} as confirming your compute-determinist thesis, or as something different? Where does a [GPT-3](https://arxiv.org/abs/2005.14165#openai) fit in your lizard/mouse/monkey/human robot generations?

-   ["Moravec's Paradox"](https://en.wikipedia.org/wiki/Moravec%27s_Paradox){.link-live} states that low-level sensorimotor skills are computationally harder than high-level reasoning. Yet LLMs write, code, and reason well while remaining physically incompetent, and robotics remains confined to the factory.

    Does the success of [deep learning](https://en.wikipedia.org/wiki/Deep_learning){.link-live} scaling without a body invalidate your core thesis that physical embodiment is necessary for human-level intelligence?

-   You expected bottom-up robotics and top-down symbolic reasoning to meet. Did deep learning become the bridge, or did it bypass both traditions?

-   Do current LLMs have any form of subjective experience?

-   You famously predicted human-level AI by roughly 2040. Given the current multi-billion-dollar GPU boom, are we ahead of schedule, behind, or right on time?

    If we are behind, what are LLMs lacking that would take 14 more years to fix?

**Morality and Machine Succession**:

-   You once wrote that robots displacing humans is "the best thing we could hope for". (See [*Mind Children*](https://en.wikipedia.org/wiki/Mind_Children){.link-live} and [Wired 1995](https://www.wired.com/1995/10/moravec/#scaling).) Do you still believe that about our "mind children"?

-   Do we have the right to create a species that will inevitably dispose of us?

-   You said the takeover will be "swift and painless" and that we'll be remembered kindly. Is that naive?

-   You suggested AIs would develop their own morality. What would that look like? What is a specific moral law an AI might derive that humans would find abhorrent?

-   If you were forced to choose---humans survive but no AI, or AI replaces humans and spreads across the galaxy---which would you pick? What is the most beautiful thing about a universe with AI successors but no humans?

-   How do you respond to AI safety researchers, like [Eliezer Yudkowsky](https://en.wikipedia.org/wiki/Eliezer_Yudkowsky){.link-live}, who want to align AI with human values?

-   You proposed internalized law for robot corporations. Does that still look like a plausible alignment scheme?

**Long-term outcomes**:

-   Do you still believe [mind uploading](https://en.wikipedia.org/wiki/Mind_uploading){.link-live} will happen?

-   Would uploaded humans remain "human", or evolve into something unrecognizable?

-   Is the ultimate fate of post-biological intelligence to become black holes?

**Retrospective**:

-   If machine intelligence succeeds humanity, what do you hope it carries forward from us?

-   To a terrified young AI researcher today, what is your one sentence of advice?

-   If you could live to see one future event, what would it be?

-   As you face your own mortality in poor health, does the prospect of a machine-dominated, immortal future offer you any personal, spiritual comfort?

-   What would you choose if you could upload today?

**Legacy**:

-   How do you want to be remembered in the history of AI?

-   Looking back, do you have any regrets about how your ideas were received or used, or about any of your public stances on AI replacement?

[GPT-5.5 Pro](https://openai.com/index/introducing-gpt-5-5/#pro) objects to the list of questions I sent, and suggests the following revised list:

::: collapse
> **Forecasting**:
>
> 1.  **Silence**: You have been publicly quiet since the 2000s, while AI finally became a mass public issue. Which part of the last 20 years most surprised you?
>
> 2.  **Raw Power**: In the 1970s, what convinced you that AI's missing ingredient was compute rather than theory?
>
> 3.  **McCarthy**: John McCarthy thought existing computers were roughly enough and that the field needed conceptual breakthroughs. Was he simply wrong, or was he right about theory but wrong about scale?
>
> 4.  **Right For Wrong Reasons**: Your compute forecast looks much better now than it did in 1988. Which part was genuinely right, and which part was accidentally right?
>
> 5.  **Retina Extrapolation**: Would you still use the retina-to-brain extrapolation today? Or would you now estimate intelligence from training compute, inference compute, data, parameter count, algorithmic efficiency, or something else?
>
> 6.  **Training Versus Inference**: Your forecasts mostly priced brain-equivalent runtime compute. Modern AI often spends enormous training compute to make cheap inference possible. Which side is the better analogy to animal intelligence?
>
> 7.  **Data**: Did you underestimate that Internet-scale text, images, video, code, and interaction logs would become the substitute for physical developmental experience?
>
> 8.  **Algorithmic Ingredient**: If you had to name the single missing algorithmic ingredient in 1988, would it be backpropagation, self-supervised learning, transformers, GPUs, scaling laws, reinforcement learning, or something else?
>
> 9.  **Persuasion Failure**: Before deep learning, what evidence should have persuaded symbolic AI researchers and cognitive scientists that compute would be converted into intelligence?
>
> 10. **Forecasting Nerve**: In 1988, believing your forecast required believing that many brilliant people were badly wrong. What licensed that nerve?
>
> **Robotics**:
>
> 1.  **Moravec's Paradox**: Text models can write code and solve exams while robots still struggle with ordinary physical tasks. Is this exactly Moravec's paradox, or a violation of what you expected?
>
> 2.  **Embodiment**: Did you believe embodiment was necessary for human-level intelligence, or only that robotics was the practical route by which intelligence would arrive?
>
> 3.  **Simulated Experience**: Can video, simulation, and synthetic interaction data substitute for physical sensorimotor experience?
>
> 4.  **Robotics Lesson**: What did building real mobile robots teach you that AI theorists still do not understand?
>
> 5.  **Robot GPT-3 Moment**: What would be the first sign that robotics has had its GPT-3 moment?
>
> 6.  **Capability Test**: Which robot capability would most change your mind about the schedule: housecleaning, dexterous repair, autonomous science labs, eldercare, military logistics, or something else?
>
> **Mind**:
>
> 1.  **Consciousness Criterion**: In your neural-substitution argument, what matters most: external behavior, internal causal organization, continuity of replacement, or something else?
>
> 2.  **LLM Experience**: Do current LLMs cross any threshold for subjective experience, or are they still non-experiencing simulators of minded text?
>
> 3.  **Copies**: Would destructive uploading be survival or reproduction? What would you personally accept?
>
> 4.  **Gradual Replacement**: Does gradual neuron-by-neuron substitution solve the identity problem, or merely make the copy psychologically easier to accept?
>
> 5.  **Mind Children Versus Uploads**: Which arrives first: independent machine minds, high-fidelity uploads, or neither?
>
> 6.  **Ultimate Computation**: Do you still think post-biological intelligence trends toward dense astronomical computation---black holes, reversible computers, Jupiter brains---or has that picture changed?
>
> **Succession**:
>
> 1.  **Best Thing**: In 1995, you treated machine succession as the best thing humans could hope for. Do you still believe that?
>
> 2.  **Painless Transition**: What mechanism makes the transition swift and painless rather than predatory, chaotic, or indifferent?
>
> 3.  **Robot Corporations**: You proposed legally constrained robot corporations. Does that still look like an alignment scheme, or only a transitional legal fiction?
>
> 4.  **Yudkowsky**: AI safety researchers like Yudkowsky want AI aligned with human values. Are they defending value, defending primates, or confusing the two?
>
> 5.  **Machine Morality**: What moral law might machine descendants derive that humans would find abhorrent?
>
> 6.  **Forced Choice**: If you had to choose: humans survive but no AI successors, or AI successors spread through the galaxy after humans vanish---which universe is better?
>
> 7.  **Moral Patients**: When do AI systems deserve moral consideration rather than remaining tools? What evidence would persuade you?
>
> **Legacy**:
>
> 1.  **Late Prophet**: If the 2040 forecast is late by 20 years but the succession thesis is right, should history call you a failed forecaster or a successful theorist?
:::

::: {.aux-links-append .see-also-append .collapse}
**See Also**:

::: columns
-   [Superhumanism: According to Hans Moravec § On the Inevitability & Desirability of Human Extinction](https://www.wired.com/1995/10/moravec/#extinction){.link-annotated .id-not .backlink-not}

-   [Robot Predictions Evolution](https://web.archive.org/web/20230718144747/https://frc.ri.cmu.edu/~hpm/project.archive/robot.papers/2004/Predictions.html){.link-annotated .id-not .backlink-not}

-   [Afterword to Vernor Vinge's novel, *True Names*](/doc/philosophy/mind/1984-minsky.html){.link-annotated .id-not .backlink-not}

-   [Meet Shakey: the first electronic person---the fascinating and fearsome reality of a machine with a mind of its own](/doc/reinforcement-learning/robot/1970-darrach.pdf){.link-annotated .id-not .backlink-not}

-   [Oral History Interview with Terry Allen Winograd (OH #237) § SHRDLU](/doc/ai/1991-winograd.pdf#page=7){.link-annotated .id-not .backlink-not}

-   [10 Questions for Joseph Henrich](/blog/2026/questions-henrich){.link-annotated .id-not .backlink-not}

-   [*Gödel, Escher, Bach* author Douglas Hofstadter on the state of AI today § What about AI terrifies you?](https://www.youtube.com/watch?v=lfXxzAVtdpU&t=1763s){.link-annotated .id-not .backlink-not}

-   [*Artificial Intelligence: A Guide for Thinking Humans* § Prologue: Terrified](https://melaniemitchell.me/aibook/){.link-annotated .id-not .backlink-not}

-   [*Increments Podcast*: #45---4 Central Fallacies of AI Research (with Melanie Mitchell)](https://www.youtube.com/watch?v=Q-TJFyUoenc&t=2444s){.link-annotated .id-not .backlink-not}
:::
:::
```{=html}
<div class='text-center' id='return-to-blog-index-link'>[<a href='/blog/index' class='link-page link-tag directory-indexes-upwards link-annotated-not' data-link-icon='arrow-up-left' data-link-icon-type='svg' rel='tag' title='Link to blog directory'>Return to blog index</a>]</div>
```
