“GPT-4 Rumors From Silicon Valley: People Are Saying Things…”, 2022-11-11:
People have been talking these months. What I’ve heard from several sources: GPT-4 is almost ready and will be released (hopefully) sometime December-February [2023].
“OpenAI started to train GPT-4. Release is planned for Dec–Feb [2023].” —Igor Baikov (2022-09-02).
…An open secret: GPT-4 is almost ready: I first heard OpenAI was giving GPT-4 beta access to a small group close to the company on August 20, when Robert Scoble tweeted these:
This is, of course, anecdotal evidence. It could very well be biased by excitement, cherry-picking, or the lack of a reliable testing method.
But if GPT-4 is to GPT-3 what GPT-3 was to GPT-2—which isn’t at all unreasonable given that OpenAI took its time with this one—this is big news. Think about it: as models improve, we need larger leaps in performance to feel a similar degree of excitement. Thus, if we assume Scoble’s source relies mainly on perception (in contrast to rigorous scientific assessment), those claims may imply a substantially larger leap than GPT-2 → GPT-3.
On November 8, Scoble did it again: “Disruption is coming. GPT-4 is better than anyone expects. And it is one of several such AIs that will ship next year.”
More hype. Two days ago, Altman tweeted this not-so-cryptic captioned image: “don’t be too proud of this technological terror you’ve constructed. The ability to pass the Turing test is insignificant next to the power of the Force.” [“I meant it as a joke about how I think people are (correctly) realizing the Turing test is a bad test. I regret the tweet, but I’ll leave it up in the spirit of leaving my Ls.”]
Even with NDAs hiding the good stuff, I had access to a more detailed description of what GPT-4 will (or may) be like. I can’t personally assess the reliability of the source (it was shared in a private subreddit), and neither can Gwern, who shared it (he said: “No idea how reliable”). It may not be (completely) true, but it’s the best we’ve got. Take it with a (big) grain of salt [translated from a Russian Telegram channel]:
OpenAI started training GPT-4. The training will be completed in a couple of months. I can’t say more so as not to create problems… But what you should know:
A colossal number of parameters
Sparse paradigm [like Scaling Transformers?]
Training cost ~ $.e6 [that seems to be off by at least 1 OOM given Morgan Stanley]
Text, audio-VQ-VAE, image-VQ-VAE (possibly video) tokens in one thread
SOTA in a huge number of tasks! Especially substantial results in the multimodal domain.
Release window: December-February
PS: Where is the info from? …from there. Do I trust it myself? Well, in some ways yes, in some ways no. My job is to tell; yours is to refuse.