“DeepMind’s Chief on AI’s Dangers—And the UK’s £900 Million Supercomputer: Demis Hassabis Says We Shouldn’t Let AI Fall into the Wrong Hands and the Government’s Plan to Build a Supercomputer for AI Is Likely to Be out of Date Before It Has Even Started”, Mark Sellman, 2023-07:

“I think because it’s so powerful, this technology, for both good and evil, we need to be very precautionary and thoughtful about what we’re doing. Maybe it’s ten-plus years away. There will be systems that will be extremely powerful, maybe human level or beyond in some ways, general intelligence. That’s never happened before in human history.”…Hassabis says he can “see both arguments” on risk. “Might turn out to be a nothing-burger. Fantastic, right? If it’s all just upside. Brilliant.” But he adds: “I don’t understand how one can have that view today with the uncertainty.” He is also keen to add: “I’m not on the ‘losing my mind, doom-mongers’ sort of side of things either.”

He believes there are 3 types of threat: existential, near-term (deepfakes, disinformation), and bad actors or rogue states using AI. On deepfakes, his unit is building watermarking technology that he hopes will one day be a mandated feature of image and video generators, so that AI-generated content can be identified.

Is he worried about China? “Sure. I mean, they have a very different type of society and value systems. Who’s to say what’s better or worse . . . and so they’ll probably use AI for different purposes.” He does think they should be invited to the global summit on AI safety that the UK is hosting in the autumn. This, he believes, is an opportunity for the AI world to set tests for the technology’s “emergent properties”, the mysterious abilities that AI can develop that its creators did not devise. Google said that its PaLM model developed the ability to translate between English and Bengali without being trained to.

Hassabis has also joined calls for two international bodies to research and regulate AI, akin to CERN (for particle physics) and the IAEA (the nuclear watchdog).

He is a fan of Rishi Sunak & No 10’s approach (“they’re really on the ball”), but when asked about the government’s plan to build a supercomputer for AI, he laughs. The Treasury has announced £900 million [$1.15b] for the project, due for completion by 2026, but some believe it is too small compared with the machines used by big tech companies and other states. Hassabis agrees.

“It’s not going to scratch the surface, to be honest. I think that money may be better put towards downstream things . . . developing protocols, analyses of the systems and evaluations. That would be by far, in my view, the better use of that pot of money. Otherwise, you’re just going to do a fast-follow, pretty mediocre thing. It will be out of date before you’ve even started it, given the pace of things.”

Meta is also a driving force behind the movement to “open-source” AI: release it to the global developer community to work on, improve and make safer, all in a transparent way. Hassabis is not a fan. “When you put things out there open-source, you’re no longer in control of what they get used for. And I do worry about bad actors.” When I put to him the argument from the open-source community that bad actors will always acquire the tech and it’s best to make it as safe and transparent as possible, he replies: “I think you bear some responsibility for the things that you put out and how you put them out there.” Zuckerberg, take note.

…One of the fears concerning a merged Google/DeepMind unit is that its talented engineers will get diverted away from big projects for humanity towards more mundane Google products designed to help the company’s bottom line. Another is that DeepMind could leave the UK. The Tony Blair Institute wrote in a report last month that “the UK’s enterprise is overly dependent on a single US-owned and funded entity, Google DeepMind”. Can Hassabis guarantee the company won’t move to the West Coast?

“You can never guarantee anything in life. But every step of the way, I was asked to go to Silicon Valley. Our first ever investor, back in 2010 [the PayPal co-founder Peter Thiel], thought that nothing of this scale could be built outside of Silicon Valley. There are reasons to stay here. There’s an incredible talent base in the UK, in Europe. I think we’ve helped to put London on the map versus the other European centres for AI. It’s a real hotbed for talent now. Google’s an international corporation. It’s quite useful, I think, for them to have a serious European presence here. I think there’s no plans for it to be any different at the moment.”

Like bees to the honeypot, the big American AI labs are now coming here. Both OpenAI, the developer of ChatGPT, and Anthropic, another leading company, are opening offices in London, perhaps to try to poach Hassabis’s staff.

…One of Hassabis’s former employees in Paris just co-founded an AI company called Mistral that wants to compete with OpenAI. It raised $113 million in seed funding despite being only 4 weeks old and having no product. “I think there’s a lot of hype going on in the VC [venture capital] world in this area, probably too much. I think people sort of lost their minds over that”, says Hassabis. He does believe, however, that there are “many billion-dollar company start-ups to be built” in fintech, biotech, and the medical, creative and gaming industries.