“Ilya Sutskever Has a New Plan for Safe Superintelligence: OpenAI’s Co-Founder Discloses His Plans to Continue His Work at a New Research Lab Focused on Artificial General Intelligence”, Ashlee Vance, 2024-06-19:

…From that point on, Sutskever went quiet and left his future at OpenAI shrouded in uncertainty. Then, in mid-May, Sutskever announced his departure, saying only that he’d disclose his next project “in due time.”

Now Sutskever is introducing that project, a venture called Safe Superintelligence Inc. aiming to create a safe, powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services. In other words, he’s attempting to continue his work without many of the distractions that rivals such as OpenAI, Google and Anthropic face. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then”, Sutskever says in an exclusive interview about his plans. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

Sutskever declines to name Safe Superintelligence’s financial backers or disclose how much he’s raised…That said, Safe Superintelligence will likely have little trouble raising money, given the pedigree of its founding team and the intense interest in the field. “Out of all the problems we face, raising capital is not going to be one of them”, says Daniel Gross.

…Sutskever has two co-founders. One is investor and former Apple Inc. AI lead Daniel Gross, who’s gained attention by backing a number of high-profile AI startups, including Keen Technologies. (Started by John Carmack, the famed coder, video game pioneer and recent virtual-reality guru at Meta Platforms Inc., Keen is trying to develop an artificial general intelligence based on unconventional programming techniques.) The other co-founder is Daniel Levy, who built a strong reputation training large AI models while working alongside Sutskever at OpenAI. “I think the time is right to have such a project”, says Levy. “My vision is exactly the same as Ilya’s: a small, lean cracked team with everyone focused on the single objective of a safe superintelligence.” Safe Superintelligence will have offices in Palo Alto, California, and Tel Aviv. Both Sutskever and Gross grew up in Israel.

…This fascination with Sutskever’s plans only grew after the drama at OpenAI late last year. He still declines to say much about it. Asked about his relationship with Altman, Sutskever says only that “it’s good”, and he says Altman knows about the new venture “in broad strokes.” Of his experience over the last several months he adds, “It’s very strange. It’s very strange. I don’t know if I can give a much better answer than that.”

…Sutskever says that he’s spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn’t yet discussing specifics. “At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale”, Sutskever says. “After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom.”

Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence but that it’s aiming for something far more powerful. With current systems, he says, “you talk to it, you have a conversation, and you’re done.” The system he wants to pursue would be more general-purpose and expansive in its abilities. “You’re talking about a giant super data center that’s autonomously developing technology. That’s crazy, right? It’s the safety of that, that we want to contribute to.”