A personal take on how Sam's done as CEO with respect to AI safety. Far from perfect. But way better than most alternatives and a huge margin better than the next-most-likely outcomes.

Nov 24, 2023 · 1:37 AM UTC

Conflict of interest disclosure: I work at OpenAI. I have financial upside from working at OpenAI. You are welcome to take my thoughts with a grain of salt. But I'll largely focus on stuff that's externally observable.
For context: in 2019, I was a strong "No" in any discussions about future commercialization of AI. Extremely concerned about what the incentives would do to the company. How it would influence the level of safety in deployment of AGI.
At the time, like many other thinkers and "experts" about AGI safety, my view was that AGI should be derisked extensively before being used in basically any circumstances. The development path I imagined and advocated for was pure scientific research.
I also wanted substantial public input into how it would be used and governed. I did not have a real answer for how to reconcile the gap between "only a few people really know how it works" and "everyone needs to be able to give input on this."
Like others, I was suspicious and uncomfortable with anyone who seemed to support any commercialization at all, including Sam and Greg. I had many discussions with them about safety, governance, commercialization, incentives, etc.
I also critiqued them quite harshly. I remember telling Sam in 2020 - in front of other people, at a company office hours - that I thought releasing something was a terrible idea. A normal CEO would (should) have fired me. Instead, Sam took me seriously and asked for my advice.
(I don't think Sam even registered the impertinence. I spent a lot of time after that worried that I had gotten myself in trouble with my big mouth. (I have been *very* annoying to a lot of powerful people over the years.) But no. Sam was characteristically magnanimous about it.)
It's pretty hard to overstate the safety information gain created by the public releases of the API and ChatGPT. I don't think these release processes were perfect but I do think they have sort-of obviously created more good than harm.
Sam created the circumstances under which it is possible for the entire world to be awake to - and grapple with, and negotiate about - the changes that AGI will bring.
When people ask me what safety is, my answer is always something along the lines of: "safety is a negotiation between stakeholders about acceptable risk." All parts of this are important. Safety is not a state; it's a conversation, a negotiation.
It depends on the stakeholders: everyone impacted is a stakeholder, and they can only participate in the negotiation if they're as aware and engaged as they need to be.
It's about acceptable risk. There is not a singular "level" of safety at which everything is safe, and beyond which everything is not. There are degrees of risk. Likelihoods. Multiple outcomes variously considered acceptable or unacceptable.
Sam as CEO has fully activated and engaged the public in this discussion. Sam has been basically responsible and candid in all of his public communications about this, neither over- nor under-playing the risks, including existential risks.
OpenAI's models aren't perfectly safe by all standards set by all people. But the investments into improving them are incredibly tangible and I have faith they will pay off. The safety teams and projects are nontrivial, costly commitments to safety. Sam doesn't get enough credit.
It is also the case that so far, the broader public seems to have approximate consensus that these products are safe enough to use for regular daily tasks in many domains. Not absolutely safe, but relatively so.
I genuinely feel more relaxed about the state of the future because of the level of public involvement and scrutiny, and that was pretty much *only* possible because of Sam's decision-making.
Regarding claims about Sam's integrity and honesty: I am not privy to all of Sam's conversations. But I can tell you what I've seen from our many personal conversations over the years. I feel confident that Sam has been fully candid with me.
I can also see, from Sam's communication style, how misunderstandings and disputes can form. He really hears people - he understands their positions and can talk about them fluidly. People sometimes interpret this understanding as commitment to specific actions.
Sam uses words fairly precisely in my experience. If he tells you what he plans to do, that's what he plans to do. If he tells you he agrees with you in principle, he does. These are different things from each other. Not everyone groks this.
In my experience I've found "words from Sam" to have pretty strong predictive power about the future. I try to be epistemically careful, so I will not give a blanket endorsement to everything Sam says. But the record I've observed up close is pretty damn good.
Critique is healthy and no one powerful should be immune to critique. A cult of blind loyalty is foolish and I don't think any of us should trust or follow Sam uncritically. But sane critique *requires* credit where credit is due. Sam is due real credit on safety.
Last Friday when the board shared the news of Sam's firing, I took seriously the possibility that they had a real issue that I simply wasn't aware of. I didn't jump ship immediately to support Sam; I am constitutionally incapable of rushing to that kind of judgement.
Since then the situation has become clearer. It looks like this was about longstanding philosophical, personal, and political issues. There doesn't seem to have been a proximate cause. The board did not make a case, at all, for their decision.
This is why I felt confident signing the letter and supporting Sam's return.
Replying to @jachiam0
Awesome thread, thank you for sharing this. Really thoughtful.
Replying to @jachiam0
logical fallacy - false middle ground
Replying to @jachiam0
Impressive progress by Sam as CEO in AI safety. Not without flaws, but a commendable improvement compared to alternatives. Excited about the future possibilities! #AI #CEOThoughts
Replying to @jachiam0
Thanks for making things more clear
Replying to @jachiam0
I honestly think that our biggest hindrance right now is fear of AI. The pioneers who moved this field forward did not have a goal of destroying humanity, and I think they have considered human dignity. We may hit a few bumps, but I think we will be able to overcome them at this point, because the fathers and mothers of AI didn't want to destroy humanity.
Replying to @jachiam0
Sam himself has said that AGI may lead to “lights out for everyone.” Once conceded, can you blame everyone else for being concerned? Have you considered that maybe we should just not build AGI?
Replying to @jachiam0
Thank you for this, I really appreciate the nuanced take Joshua.
Replying to @jachiam0
Curious what you think the next-most-likely outcomes are.
Replying to @jachiam0
Thanks for sharing. It’s important that views of insiders close to the technology are shared with society.
Interesting thread, thank you for your thoughts. If I may: what does an AGI present as, or look like? What are your thoughts on this? How will we know an AGI is present? Doesn't that deeply inform what sort of risk it may be? If so, might the risk models be in error?