OpenAI said it had begun training its next-generation artificial intelligence software, even as the start-up backtracked on earlier claims that it wanted to build “superintelligent” systems that were smarter than humans.
The San Francisco-based company said on Tuesday that it had started producing a new AI system “to bring us to the next level of capabilities” and that its development would be overseen by a new safety and security committee.
But while OpenAI is racing ahead with AI development, a senior OpenAI executive appeared to walk back previous comments by its chief executive, Sam Altman, that the company was ultimately aiming to build a “superintelligence” far more advanced than humans.
Anna Makanju, OpenAI’s vice-president of global affairs, told the Financial Times in an interview that its “mission” was to build artificial general intelligence capable of “cognitive tasks that are what a human could do today”.
“Our mission is to build AGI; I would not say our mission is to build superintelligence”, Makanju said. “Superintelligence is a technology that is going to be orders of magnitude more intelligent than human beings on Earth.”
[This seems like an odd definition: usually, ‘superintelligence’ is just ‘more than human intelligence’; but according to Makanju, even an intelligence an order of magnitude more intelligent than humans would still not count as superintelligence…? When would OpenAI ever create a ‘superintelligence’ by this definition, and how could anything OA do matter at the point where such an intelligence was created?]
Altman told the FT in November 2023 that he spent half of his time researching “how to build superintelligence”.
Liz Bourgeois, an OpenAI spokesperson, said superintelligence was not the company’s “mission”.
“Our mission is AGI that is beneficial for humanity”, she said, following the initial publication of Tuesday’s FT story. “To achieve it, we also study superintelligence, which we generally consider to be systems even more intelligent than AGI.” She disputed any suggestion that the two were in conflict.
…OpenAI is attempting to reassure policymakers that it is prioritising responsible AI development after several senior safety researchers quit this month.
Its new committee will be led by Altman and board directors Bret Taylor, Adam D’Angelo and Nicole Seligman, and will report back to the remaining three members of the board.
…Makanju emphasised that work on the “long-term possibilities” of AI was still being done “even if they are theoretical”. “AGI does not yet exist”, Makanju added, saying that such a technology would not be released until it was safe. [Sounds tautological: then if it’s released, it must be safe?]