“Greg Brockman and OpenAI Safety”, 2024-05-19 (; backlinks):
[commentary on Brockman AI safety tweet] Greg was a member of OpenAI’s founding team who, in the early days, seemed cynical about and embarrassed by the org’s stated mission (basically, the focus on AGI and x-risk).
I remember ICLR 2016 in Puerto Rico, the summer after OpenAI was founded: a bunch of researchers sat out on the rocks near the ocean drinking wine, ribbing Greg about OpenAI’s public comms on safety.
His reply was basically: “oh yeah, there are a few weirdos on the team who actually take that stuff seriously, but…”
He wasn’t the only member of the founding team who wanted to distance himself from OpenAI’s public stance on safety…
At ICML 2016 in New York, I ran into an old classmate who was also a friend of Wojciech; Wojciech crashed our conversation to tell them that I was one of those crazy people who worried about AI killing everyone.