“Inflection AI, Startup From Ex-DeepMind Leaders, Launches Pi—A Chattier Chatbot”, 2023-05-02:
Mustafa Suleyman, CEO of the year-old startup that’s already raised $225m and claims to run one of the world’s largest language models, sees his dialog-based chatbot as a key step toward a true AI-based personal assistant…
Named Pi for “personal intelligence”, Inflection’s first widely released product—made available today for global users, but only in English at first—is supposed to play the active listener, helping users talk through questions or problems in back-and-forth dialog it then remembers, seemingly getting to know its user over time. While it can give fact-based answers, it’s more personal than OpenAI’s GPT-4, Microsoft’s Bing chatbot built on top of it, or Google’s Bard, without the virtual companionship veering into the unhealthy parasocial relationships reported by some users of Replika bots.
“It’s really a new class of AI—it’s distinct in the sense that a personal AI is one that really works for you as the individual”, Suleyman said. Eventually, Inflection’s CEO added, Pi will help you organize your schedule, prep for meetings and learn new skills.
…The company declined to comment on a March Financial Times report that claimed Inflection has already been back in the market looking for up to $675m in additional funds, but confirmed the previous funding as a “seed round.” (That stretches the meaning of the term past recognition, but such outsized rounds are increasingly table stakes among AI unicorns looking to train and operate large language models at global scale; Anthropic and OpenAI have each raised more than $1b to date.)
…Inflection will offer Pi for free for now, with no token restrictions. (Asked how it will charge users, and when, the company declined to comment.) Built on one of Inflection’s in-house large language models, Pi doesn’t use the company’s most advanced ones, which remain unreleased, according to Suleyman; Inflection already runs one of the world’s largest and best-performing models, he added, without providing specifics. Like OpenAI, Inflection uses Microsoft Azure for its cloud infrastructure.
Test users have been putting Pi through its paces for the past several months. Whereas other chatbots might provide a handful of options to answer a query, Pi follows a dialog-focused approach; ask Pi a question, and it will likely respond with one of its own. Through 10 or 20 such exchanges, Pi can tease out what a user really wants to know, or is hoping to talk through, more like a sounding board than a repackaged Wikipedia answer, Suleyman said. And unlike other chatbots, Pi remembers 100 turns of conversation with logged-in users across platforms: a web browser (heypi.com), a phone app (iOS only to start), WhatsApp and SMS messages, Facebook messages and Instagram DMs. Ask Pi for help planning a dinner party in one, and it will check in on how the party went when you talk later on another.
…Some things Pi won’t do at all. With training data up to November 2022, Pi had no clue why the Boston Bruins lost in the first round of the Stanley Cup playoffs, despite a record-setting regular season. It won’t generate code, or answer specific math questions, an area where ChatGPT, for example, has notably struggled. Asked to explain basic quantum mechanics, such as the Schrödinger equation that governs a wave function, Pi answered with what appeared to be a condensed Wikipedia-style answer.
By getting to know a user, Pi can better detect when they appear to be growing agitated or frustrated, and tweak the tone of its responses to soothe, Suleyman said. That’s important when users are turning to Pi as an active listener to talk through personal problems, role-play difficult conversations or discuss their mental health. Asked how Inflection knows a user is upset, the company did not elaborate. But it says that in the event of an apparent mental health crisis, users detected to be at risk are directed to a qualified mental health professional. Limited to the walkthrough demo before launch, Forbes wasn’t able to fully stress-test Pi for this story; Suleyman said the chatbot has gone through thorough testing of antagonistic and harmful prompts, and has been trained to avoid direct influence on a user’s life or choices. “The higher the stakes, the more cautious it is”, he said. “We don’t want to intervene in somebody’s life. We want to provide fairly balanced and even-handed responses all the way through.”
…Pi can also come off as relentlessly bland. The bot isn’t going to tell you how to think, or what to do; you’ll need to decide to quit that job yourself. And that’s probably for the best.