“Even the Politicians Thought the Open Letter Made No Sense: Today’s Senate Hearing on AI Covered AI Regulation and Challenges, and the Infamous Open Letter, Which Nearly Everyone in the Room Thought Was Unwise”, Brandon Gorrell, 2023-04-19:

In a Senate Committee on Armed Services hearing today on how the Department of Defense can both leverage AI and mitigate its risks, senators and industry leaders discussed regulatory approaches to AI for commercial and defense applications, the specific obstacles the DoD faces in becoming the global leader in leveraging AI, and solutions for overcoming them.

Shyam Sankar, Palantir CTO; Josh Lospinoso, Shift5 CEO; and Jason Matheny, RAND Corporation CEO and Commissioner of the National Security Commission on AI, provided expert testimony. The hearing was chaired by Sen. Joe Manchin (D-WV) and Sen. Mike Rounds (R-SD).

The open letter: After describing the open letter to pause AI development that the Future of Life Institute published in March, Senator Mike Rounds (R-SD) said “I think the greater risk, and I’m looking at this from a US security standpoint, is taking a pause while our competitors leap ahead of us in this field… I don’t believe that now is the time for the US to take a break.”

A pause would be “close to impossible… It’s also unclear how we would use that pause”, Matheny responded.

And “other than ceding the advantage to the adversary”, Sankar added, the pause would have no effect. “The bigger consequence is the nature of the AIs. China has already said that AI should have socialist characteristics… To the extent that that becomes the standard AI for the world, is highly problematic. I would double down on the idea that a democratic AI is crucial.”

A pause would be “impractical”, Lospinoso agreed. “We [would] abdicate leadership on ethics and norms, not to mention practical implications of us falling behind on cyber security, military applications.”

Leveraging AI to our advantage: Palantir CTO Sankar called for the US to adopt a more hands-on, accelerationist approach to AI. In his view, this is practically a requirement for securing global geopolitical dominance.

We need to “spend at least 5% of our budget on capabilities that will terrify our adversaries”, Sankar said.

“We must completely rethink what we are building and how we are building it. AI will completely change everything. Even toasters, but most certainly tanks.”

“This will be disruptive and emotional. Many incumbents in government will be affected, and they will feel threatened and dislocated”, he said. And later: “What keeps me up at night is: do we have the will? The issue of AI adoption is one of willpower. Are we adopting AI like our survival depends on it? Because I believe it does. And I think you see that in our adversaries, they [realize it’s a matter of survival].”

…Lospinoso warned that “if [this] trend [continues], China will surpass us in a decade.”

[Absurd. But as always, “funding comes from the threat”; we must not allow a meta-learning gap!]