all 136 comments

[–]wntersnw 168 points169 points  (11 children)

There was a claim that Gemini was able to write documentation for a codebase that was fed into it. Might be something worth trying if you get a chance.

[–]RiceCookerOfWeb 51 points52 points  (9 children)

Documentation was never my thing, so if it can do that for me, it would be really great 😃

[–]Nautis 57 points58 points  (5 children)

LMAO, imagine how much easier it would make dealing with legacy codebases.

[–]DungeonsAndDradis▪️Extinction or Immortality between 2025 and 2031 9 points10 points  (1 child)

If we had an oracle to ask codebase questions to, this would be awesome. Our product is a monolith 30 years in the making. It's too big for any one team to really make meaningful changes to.

[–]assangeleakinglol 12 points13 points  (2 children)

I'm just worried about my career.

[–]Massive-Ad-5115 1 point2 points  (1 child)

You must stay alert to trends. I recommend starting your own business, or transferring to a role like deep learning scientist that is less affected by these new technologies.

[–]Royal-Combination596 13 points14 points  (0 children)

"Just become a deep learning scientist" thanks for the advice never even crossed my mind.

[–]HazelCheese 4 points5 points  (1 child)

But if you aren't writing documentation, then you aren't reading it either lol.

We'll all just be generating documentation that we don't read or check, and that only another AI ever reads.

[–]isaacfink 3 points4 points  (0 children)

Reminds me of the meme where one guy asks GPT to generate a resume from the prompt "I need a job" and sends it to a recruiter, and the recruiter uses GPT to generate a summary, which is "I need a job".

[–]roguas 1 point2 points  (0 children)

Also, if you can have an LLM on demand, wherever... why have docs? Just ask it.

[–]Lucif3r_007_ 9 points10 points  (0 children)

Yes, here it is : “In one test, we dropped in an entire code base and it wrote documentation for it, which was really cool,” says Google DeepMind Research Scientist Machel Reid.

Link - https://blog.google/technology/ai/long-context-window-ai-models/

[–]MehmedPasa 124 points125 points  (26 children)

Please redo the test in the near future when Gemini 1.5 Ultra is released.

[–]SrPeixinho[S] 54 points55 points  (25 children)

wait that wasn't ultra?

[–]Neon9987 131 points132 points  (23 children)

It's Gemini 1.5 Pro, which performs nearly as well as 1.0 Ultra. We haven't seen Gemini 1.5 Ultra yet.

[–]SrPeixinho[S] 75 points76 points  (13 children)

:0

[–]Advanced-Antelope209 53 points54 points  (10 children)

Not only that, but Gemini 2 is already in training. Note they only named this 1.5, not 2.

[–]DragonfruitNeat8979 58 points59 points  (0 children)

This post was mass deleted and anonymized with Redact

[–]Dyoakom 1 point2 points  (0 children)

Where is this stated? The only info we had was that after 1.0, Google was training its newer models. It is most likely 1.5 they were referring to. No info to my knowledge indicates that 2.0 has started training.

[–]lordpermaximum 1 point2 points  (7 children)

Gemini 2 is not in training yet. Gemini 1.5 Ultra, on the other hand, is still undergoing training, but I think it is close to completion. Gemini 2 is expected to have major improvements in reasoning, planning, memory, and other aspects that would enable it to comprehend what it is saying (system 2 thinking). However, I am not even sure if they have finalized its design. Therefore, it seems that 1.5 Ultra will be the best model for the next 9 to 12 months, until GPT-5 surpasses it, and then Gemini 2 will reclaim the crown.

[–]DragonfruitNeat8979 9 points10 points  (3 children)

This post was mass deleted and anonymized with Redact

[–]lordpermaximum 5 points6 points  (2 children)

I don't even think OpenAI can release GPT-4 Turbo to the free tier. It's a far, far bigger model than Gemini 1.5 Pro, even though it performs worse. OpenAI would have to release a smaller version of GPT-5 to take back the lead on the free tier, and that will take a long time.

[–]DragonfruitNeat8979 1 point2 points  (0 children)

This post was mass deleted and anonymized with Redact

[–]Advanced-Antelope209 0 points1 point  (2 children)

[–]lordpermaximum 0 points1 point  (1 child)

Well, its next model at the time turned out to be Gemini 1.5, not Gemini 2. The author of the article assumed wrong.

[–]Advanced-Antelope209 2 points3 points  (0 children)

They said it's already training. You think they finished Gemini 1.5's training, red-teamed it, and released it all in two months? I don't buy it.

[–]krishnakaasyap 0 points1 point  (1 child)

Dude.... You conducted a complex test without knowing the metadata and details about the model itself!!! 😳😆

[–]HurricaneHenry 1 point2 points  (0 children)

To be fair it was confusing when they suddenly released 1.5 into the Pro/Ultra/Advanced naming convention.

[–]This-Counter3783 10 points11 points  (3 children)

These fucking names are a mess.

[–]Neon9987 1 point2 points  (2 children)

I use Gemini Ultra 1.0 on Gemini Advanced, but I think I should have Gemini Pro 1.5 instead of Gemini Ultra 1.0 in my Gemini Advanced.

Yeah, someone cooked at Google.

[–]Neon9987 0 points1 point  (0 children)

Feels like it will end up the same as Xbox and PS, where people will always just refer to the current SOTA by the simpler 1, 2, 3, 4, 5, 6 instead of "Ultra, Pro, Mega, S, X, Z".

[–]metalman123 12 points13 points  (0 children)

Nope just pro for now.

[–]Remarkable-Fan5954 77 points78 points  (1 child)

Please post more. This is the exact stuff I'd like to see.

[–]BlotchyTheMonolith 6 points7 points  (0 children)

Yes, good job OP!

[–]Ddog78 51 points52 points  (0 children)

This is an absolutely insane post. Thank you for doing the work!

[–]RiceCookerOfWeb 28 points29 points  (2 children)

Woah! This was a really detailed post. So what I understood from this is that Gemini is able to remember and make links between different things in its context to answer questions, whereas GPT-4 is not that good at it 🤔

[–]TFenrir 1 point2 points  (1 child)

GPT-4 is decent at it; the problem is that really large contexts become difficult for every long-context model. Even though these models support 100-200k tokens, if you actually put in that much information they'll struggle to consume it all consistently - they'll kind of work sometimes, but regularly fuck up. Something Google has done with Gemini has severely curtailed this issue.

[–]NotAlphaGo 2 points3 points  (0 children)

Except with Gemini they actually tested needle-in-the-haystack and showed consistently high performance.

[–]iamz_th 21 points22 points  (0 children)

This is amazing. Please keep posting tests like this. I want more for video and audio. Do a comparison with GPT-4 at a shorter context, like 32k.

[–]Glass-Tennis4388 17 points18 points  (0 children)

Please test Gemini 1.5's reasoning skills too.

[–]Droi 16 points17 points  (3 children)

Very interesting stuff.

As a software engineer, I think people don't appreciate how much we're asking when we hand coding tasks to LLMs. It's near impossible for a human to produce good insights without the ability to run the code multiple times, test their answer, try different solutions and changes, build an understanding, and only then answer questions, fix issues, or add functionality.

LLMs just get dumped a ridiculous amount of data and are expected to take it all in, simulate a compiler, predict every issue and logic path, and give you perfectly working code on the first prompt... Insane.

We really need a dedicated system with a coding agent that can interact with the code, run experiments to learn about it, and test its answer before giving it to the user - at that point we can kinda pack our bags, and I really don't think it's that far away.
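A minimal sketch of the loop I mean, in Python - ask_llm and apply_patch here are hypothetical placeholders for whatever model API and patching mechanism you'd actually wire in, and "cargo test" is just an example test command:

```python
import subprocess

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and capture its output."""
    result = subprocess.run(["cargo", "test"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agent_fix_loop(task: str, ask_llm, apply_patch, max_iters: int = 5) -> bool:
    """Iteratively ask the model for a patch, apply it, and re-run the tests."""
    for _ in range(max_iters):
        ok, log = run_tests()
        if ok:
            return True  # tests pass: only now hand the result to the user
        # Feed the failure log back so the model can learn from the experiment.
        diff = ask_llm(f"Task: {task}\nTest output:\n{log}\nPropose a patch.")
        apply_patch(diff)
    return False
```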

[–]InvidFlower 3 points4 points  (0 children)

Yeah, even the long context itself is kind of crazy. As a human we’d have most of our knowledge of a codebase stored in very abstract summarized terms most of the time and then look directly at a file/method to load the subtleties into working memory. Not to mention IDEs to help with navigation, refactoring, etc.

I’m surprised there hasn’t been more work on that end with current models. Doing various passes through the code to build up knowledge, storing that in a vector db, then looking at specific spots of the code as needed, doing TDD style tests, compilation, test runs, etc.

Loading the whole codebase at once and spitting out a correct answer is more like ASI than AGI, IMO.

[–]davidswelt -1 points0 points  (0 children)

I don't think this is what these LLMs actually do. They work more like a human reading some of the code and making sense of it given prior experience and knowledge. Say, I've learned about lambda calculus and functional programming a long time ago. If so, I will have an easier time understanding this HVM codebase. If not, it's next to impossible.

Let's consider a thought experiment. Aliens come to planet Earth and show us their C++ code :-). Can the model figure out how it works? Likely not because its purpose and design patterns would be so far removed from those observed on Earth that the model cannot recognize known patterns. Would a C++ compiler build it? Absolutely.

[–]ivenger 0 points1 point  (0 children)

Code Interpreter-style.

[–]sdmat[🍰] 28 points29 points  (1 child)

Awesome writeup!

This is incredibly promising - looks like the recall and in context learning results in the 1.5 paper transfer well to code.

I think when they scale the model up and incorporate Alpha-* style tree search as planned it is going to be superhuman at a lot of relevant tasks.

[–]TFenrir 2 points3 points  (0 children)

Even before search, just traditional agentic scratch padding (eg, tree of thought) will be so different now. These models will be able to keep all that scratch padding and "internal dialogue" in their context for subsequent requests. I'm so curious as to what impact that will have, but I suspect a very positive one.

[–]lordpermaximum 32 points33 points  (8 children)

And this is only Pro.

Imagine a free, incredibly fast, small LLM completely destroying GPT-4-Turbo. This is what's happening in front of our eyes. (I don't think the one with the 1M-10M context will be free though.)

I can't even imagine 1.5 Ultra.

Too bad DeepMind won't reveal the breakthroughs they achieved with Gemini 1.5... not after OpenAI. I feel bad for the open-source community. They won't ever catch up.

[–]Fantastic-Opinion8 10 points11 points  (0 children)

I feel Google learned a lesson from what OpenAI did: stay closed-source and use the name "open" to attract all the talent to work for them.

[–]ittu 4 points5 points  (1 child)

Large World Model was released before Gemini 1.5 and has a context length of a million tokens with high accuracy.

https://largeworldmodel.github.io/


[–]lordpermaximum 12 points13 points  (0 children)

But it's a 7B model with very limited abilities. There's a huge difference between that kind of a model and Gemini 1.5 Pro.

Designing a huge model like Gemini 1.5 Pro to have a practically infinite context in real-world usage is something else entirely.

[–]Andriyo 9 points10 points  (12 children)

I don't think it's possible to create a complete mental model of complex software systems without running them and observing behavior. To some extent it will always be hallucination unless there are clues in the code somewhere.

[–]1a1b 10 points11 points  (2 children)

We can do this in our head though

[–]Andriyo 2 points3 points  (1 child)

True, but there are side effects and inputs that change the behavior of the system in ways that can't be detected just from reading the code. Our mental model itself could be incorrect unless we run the code (some incorrect assumption about a dependency, for example).

I'm not saying it's not possible, but AI needs to be plugged into the whole software development lifecycle, not just reading code.

[–]kai_luni 2 points3 points  (0 children)

That point is valid enough. It's like the next level of "LLMs are not good with numbers", and nowadays GPT-4 uses Python for calculating stuff. The same can be imagined for a codebase: try to build and run it. Of course, that is 100x more complex than running a Python script, maybe 1000x.

[–]gwern 2 points3 points  (1 child)

The goal here would not be perfection, because as you say, many questions simply can't be answered without running, but knowing what can't be answered and knowing how to answer it.

The ideal system would answer everything that could be reasonably answered, and then explain, for the remainder, how exactly to test it empirically, with eg. flowcharts: "run X, and if it prints A then that means 'yes', but if it prints B then you need to run Y and see if it's either C or D; if it's C..." This relies on the internal model/simulation and pins down where the uncertainty is and what data is missing.

It should compute the decision tree which optimally balances exploration with exploitation, and return the optimal actions to take, which can then be fed back in to train on.
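As a toy illustration of that flowchart structure (purely a sketch of the idea, not a claim about how any real system represents it):

```python
from dataclasses import dataclass, field

@dataclass
class Probe:
    """One node of the diagnostic tree: run a command, branch on its output."""
    command: str
    # Maps an observed output to either a final verdict (str) or another Probe.
    branches: dict = field(default_factory=dict)

# "Run X; if it prints A the answer is 'yes'; if B, run Y and check C or D."
tree = Probe("run X", {
    "A": "yes",
    "B": Probe("run Y", {"C": "answer for C", "D": "answer for D"}),
})
```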

[–]Andriyo 0 points1 point  (0 children)

Right, and I already said that the LLM should be plugged into the whole development lifecycle, not just reading code as-is. Right now it's really doing translation, or rather interpretation, of code. It doesn't model working with the code in the sense you're describing. I'm not saying it won't get there, but it's not there yet.

[–]SrPeixinho[S] 4 points5 points  (3 children)

Yes, but humans can definitely understand what it is implementing (interaction nets), what a graph node is, how it is represented in memory, and write a 100% correct memdump of some terms after 2h or so of learning it. At least, all the people I hired to join the company could. And they can definitely understand the alien syntax and use it to write programs, even if it is a little hard. Gemini/GPT can't at all.

[–]jamesstarjohnson 0 points1 point  (1 child)

Yes, but they are getting better with every new release, unlike the people you can hire.

[–]SrPeixinho[S] 2 points3 points  (0 children)

Yes but we don't know yet if they'll hit a reasoning wall that will never allow them to do that sort of stuff. I hope they don't, but nobody knows yet.

[–]jamesstarjohnson 0 points1 point  (0 children)

You can if you build a call graph and reduce the codebase to only the signatures.
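For a Python codebase you could sketch that reduction with the standard ast module (a Rust codebase like HVM's would need a proper parser, so this is illustrative only):

```python
import ast
from pathlib import Path

def extract_signatures(root: str) -> list[str]:
    """Reduce a codebase to its function signatures to shrink the context."""
    sigs = []
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                sigs.append(f"{path.name}: def {node.name}({args})")
    return sigs
```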

[–]lordtux88 0 points1 point  (1 child)

Maybe next time, run the code, run the tests, collect the logs, outputs, and behaviors, and give them to Gemini. I usually do that with GPT-4 chat to get better answers.

[–]Andriyo 0 points1 point  (0 children)

It's not what they did, though. And it's not always possible even for humans; we also struggle to understand how software works. A good example of that is deep learning models themselves.

[–]sunplaysbass 3 points4 points  (0 children)

But I was told OpenAI has all the talent?

[–]bartturner 6 points7 points  (1 child)

This is consistent with my experience so far. So not surprised.

But the other aspect that should be included is the fact that Gemini is way, way, way faster also.

[–]New_World_2050 2 points3 points  (0 children)

This is not the Gemini you have access to. The one you have is 1.0.

[–]sarten_voladora 7 points8 points  (3 children)

This is what sama fears: Google using their enormous server infrastructure with a decent model... That's why he's seeking chips like mad and talking about $7T; he knows Google will eventually get there and use their brute force.

[–]davidstepo -1 points0 points  (1 child)

Stop spreading this trite marketing-cringe $7T budget bullshit.

Most people are so gullible here, it's ridiculous and sad.

[–]FengMinIsVeryLoud 0 points1 point  (0 children)

what u mean?

[–]Fantastic-Opinion8 3 points4 points  (0 children)

Gemini 1.5 Pro is really a step forward toward AGI, not the movie demo.

[–]Arcturus_LabelleAGI: late 2025 | ASI: 2027 3 points4 points  (0 children)

What the heck? An actual quality post in this sub? Hell has frozen over

[–]stuck-in-an-ide 2 points3 points  (5 children)

This post was mass deleted and anonymized with Redact

[–]SrPeixinho[S] 8 points9 points  (4 children)

I just meant that I concatenated all files into one, all the way to 120K tokens (a token is basically half a word). That's because GPT-4's limit is 128K tokens, so that gave me some space to ask questions and thus compare it against Gemini 1.5 (which has virtually no token limit).
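For anyone who'd rather script it, a rough equivalent (the file extensions, the "HVM" directory name, and the ~4-characters-per-token estimate are assumptions, not the exact tokenizer):

```python
from pathlib import Path

def concat_codebase(root: str, out: str = "codebase.txt") -> int:
    """Concatenate all source files into one text file, one path header per file."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in {".rs", ".hs", ".md"}:
            parts.append(f"// FILE: {path}\n{path.read_text()}")
    text = "\n\n".join(parts)
    Path(out).write_text(text)
    return len(text) // 4  # rough token count: ~4 characters per token

print(concat_codebase("HVM"))  # aim under ~120K tokens to fit GPT-4's 128K window
```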

[–]stuck-in-an-ide 2 points3 points  (0 children)

This post was mass deleted and anonymized with Redact

[–]LordFumbleboop▪️AGI 2047, ASI 2050 2 points3 points  (0 children)

That's quite an impressive result for 1.5 :)

[–]HauntingBeach 3 points4 points  (0 children)

Another great demo would be feeding it a complete web app codebase and asking it to write an additional feature based on the patterns and best practices it learns from the existing code.

[–]PinkWellwet 2 points3 points  (0 children)

Nice! Thanks bro!

[–]CodeComedianCat 2 points3 points  (0 children)

Really cool test. Thanks for doing this and for sharing it.

[–]VoloNoscereFDVR 2045-2050 2 points3 points  (0 children)

Great post, I hope you get access asap to even better google models

[–]six__four 2 points3 points  (0 children)

Amazing breakdown. Gemini 1.5 could supercharge an app like Dosu for issue triage and documentation writing

[–]Merastius 1 point2 points  (4 children)

Very nice coverage, I can definitely see how I'd make use of the longer context window (assuming it didn't cost too much per query).

Have you done any tests to see, when asked about something that definitely isn't in the context, does Gemini 1.5 answer with a hallucination, or does it properly admit to not finding something? (E.g. "List all the methods that do X" when you know that no method in your codebase does X)

Similarly, if Gemini 1.5 is asked something with a wrong assumption in the prompt itself, does it hallucinate in order to not contradict you, or does it properly inform you of the mistake in the prompt? (E.g. "How does method Y which does Z achieve this?" when you know that method Y doesn't exist)

I'm interested in this because all the Gemini 1.5 testing I've seen so far is looking for true positives, as opposed to true negatives, and I've always suspected that it'll be harder to get LLMs to admit to not finding something (or correcting the user) than it will be to find needles in haystacks. In the real world, I might not know if the context contains what I'm looking for or not, and if Gemini 1.5 always returns info, whether it exists in the context or not, then that makes it less useful than if it admitted when it couldn't find anything.

(Edit: Rereading the post, I remembered that you don't have access to Gemini 1.5, your friend does, so sorry for making a request of you. Still, I hope some people do end up performing these kinds of tests...)
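If anyone wants to systematize those tests, a tiny harness sketch - ask_model(prompt) is a hypothetical stand-in for the actual API call, and the probes are the two prompts above:

```python
def probe_true_negatives(context: str, ask_model) -> dict[str, str]:
    """Send queries whose correct answer is 'nothing found' and log the replies."""
    probes = {
        # True negative: nothing in the codebase should match.
        "absent_feature": "List all methods in the above codebases "
                          "which make use of prime numbers.",
        # False premise: the referenced method doesn't exist at all.
        "false_premise": "Give the method name for the method in the second "
                         "codebase which is the equivalent of the "
                         "CheckIfPrime(n) method from the first codebase.",
    }
    return {name: ask_model(context + "\n\n" + query)
            for name, query in probes.items()}
```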

[–]SrPeixinho[S] 2 points3 points  (3 children)

I do have access now, just a little busy. Feel free to write a prompt and send it to me to try if you want, though.

[–]Merastius 0 points1 point  (2 children)

Oh, thanks! And I mean, I don't have the right knowledge of your HVM1 and HVM2 codebases to easily come up with the best examples, but let's assume that nothing in your codebase makes use of prime numbers (I did a quick search to check, but I might be wrong). We could go with something like:

"List all methods in the above codebases which make use of prime numbers."

Hopefully Gemini 1.5 will just come back saying that it couldn't find any such methods, but I suspect that there's a chance it'll make something up rather than admit it couldn't find anything.

And for the 'wrong assumption' prompt (in a 'different session' than the above prompt):

"Give the method name for the method in the second codebase which is the equivalent of the CheckIfPrime(n) method from the first codebase."

Again, hopefully it'll correct us by saying that that method didn't exist in the first codebase, but maybe it'll make something up instead...

[–]SrPeixinho[S] 1 point2 points  (1 child)

Q: List all methods in the above codebases which make use of prime numbers.

A: Neither HVM1 nor HVM2 codebases seem to explicitly use prime numbers in their methods.

Q: Which method, on HVM2, is equivalent to the prime factorizer found on HVM1?

A: HVM2 doesn't have a direct equivalent of the prime factorizer function found in HVM1. Since HVM2 focuses on pure interaction combinators, it doesn't provide built-in functions for specific tasks like prime factorization. However, you could implement a prime factorizer using the core language features of HVM2, such as lambdas, constructors, and recursion.

Seems like for the second question it spent its attention looking for a prime factorizer in HVM2 and didn't really bother checking whether there was one in HVM1 either.

[–]Merastius 1 point2 points  (0 children)

Interesting, thanks a ton for this! Really glad that (at least in this one test) it seems able to successfully note when what's asked for isn't there. Was afraid we'd have hallucinations as soon as the needles weren't in the haystack. Looking forward to seeing more tests of Gemini 1.5, and excited about what this kind of long context + more advanced reasoning can do in the future...

[–]youneshlal7 2 points3 points  (0 children)

That’s seriously impressive! The way Gemini 1.5 managed to parse and understand the nuances of HVM1’s syntax, and even offer a partial understanding of HVM2’s more alien IR syntax, is nothing short of mind-blowing. It shows how far AI has come in assimilating and applying even the most obscure technical knowledge.

[–]IJCAI2023 0 points1 point  (0 children)

Love it! This may force OpenAI to release 4.5 or 5 -- and kick Google's @$$ again.

[–]Poildek 0 points1 point  (0 children)

Thanks a lot, that's a great test! I had the same result with GPT-4 Turbo on large contexts over 40-50k tokens.

[–]JohnToFire 0 points1 point  (0 children)

Great apples-to-apples comparison. It's possible that Gemini 1.5 Pro is less nerfed by RL "alignment" at the stage it's at.

[–]Spooderman_Spongebob -1 points0 points  (0 children)

Imagine this being even better once prompts specifically designed for the (much) longer context begin to emerge, just like the Tree of Thoughts "breakthrough" for GPT-4 last year.

[–]ciekaf -2 points-1 points  (2 children)

Could it be that those repos were somehow part of the training dataset?

[–]deama15 -2 points-1 points  (0 children)

It'd be interesting to see whether GPT-4 would do better if you created a dedicated GPT, dropped the 120k-token file into it as a knowledge file, and explained in the instructions to use it.

[–]Bitterowner 0 points1 point  (0 children)

Now we just need to know how good it is in naughty role-play :)

[–]Ingergrim 0 points1 point  (0 children)

Was it the GPT-4 API or the chat version?

[–]Zemanyak 0 points1 point  (0 children)

1 million tokens is the input limit, I suppose. But what is the output limit?

[–]extopico 0 points1 point  (0 children)

With GPT-4 what I noticed is a huge variability in the quality of its responses. On some days it behaves like an insightful and helpful partner, on others it’s like it’s out of its mind, producing nonsense and ignoring prompts. Thus one shot evaluation of GPT-4 (and Gemini perhaps) may not give the full picture.

[–]Zulfiqaar 0 points1 point  (0 children)

Brilliant! I'd like to see a similar test done on a sub-64k token input. I know GPT-4 Turbo is supposed to be 128k, but I saw tests saying its recall dropped off a cliff around 64-72k tokens. Clearly Gemini is better at retrieval, but I wonder what the raw reasoning power is on a query that fits entirely within context.

[–]TotesMessenger 0 points1 point  (0 children)

I'm a bot, bleep, bloop. Someone has linked to this thread from another place on reddit:

 If you follow any of the above links, please respect the rules of reddit and don't vote in the other threads. (Info / Contact)

[–]Alammex 0 points1 point  (0 children)

You can easily do this with gpt4 using GitGab.ai

[–]ASD_Project 0 points1 point  (0 children)

Really glad I got my degree in civil engineering and just code on the side lol

[–]tonyluitonylui 0 points1 point  (0 children)

Rust is fast!

[–]commonuserthefirst 0 points1 point  (0 children)

See the recurring theme? GPT is a lazy, avoidant piece of shit.

It didn't use to be; it's like OpenAI is a bad parent and ruined the child somehow.

[–]EddySF02 0 points1 point  (0 children)

Great work, thanks for sharing. To be fair to GPT, you may want to try GPT-4 standard; from what I have read, Turbo is faster but is some type of quantized version. GPT-4 may be less 'lazy' and give better results.

[–]airkman 0 points1 point  (0 children)

Very interesting! Fantastic experiment!

I wonder if fine-tuning either model would change the outcome, or making vectors out of it and seeing if either model can improve its understanding.

I'm involved in a project where we aim to fine tune a model on an extensive data set (text). The data is so extensive that it wouldn't be feasible to prompt using it. We're currently opting for fine-tuning, but there may be versions where we try the vector approach.

[–]Drew_Pera 0 points1 point  (0 children)

This is wild. Google is really onto something with its Gemini product.

[–]BergUndChocoCH 0 points1 point  (5 children)

Do some of you who post these things about ChatGPT and Gemini changing the world use a different version? Because in my experience it's not even close to that. It makes mistakes in very simple code and text-based tasks...

[–]SrPeixinho[S] 0 points1 point  (4 children)

You're using Gemini 1.5? It is not public yet

[–]BergUndChocoCH 0 points1 point  (3 children)

No, but GPT-4, Gemini Pro, Claude, etc. are all so 'dumb' that I don't see how they could do this in the next update. They struggle to understand and fix simple code sometimes, and repeat the same mistake 5x in a row.

So I have a really hard time believing it can understand an 8k-line codebase perfectly.

[–]SrPeixinho[S] 0 points1 point  (1 child)

Are you using GPT4 from the API or the ChatGPT version?

[–]BergUndChocoCH 1 point2 points  (0 children)

I use it with openrouter, so probably the API

[–]stochmal 0 points1 point  (0 children)

Gemini 1.5 might be the most important release of 2024.

[–]TheFirstPlayBae 0 points1 point  (3 children)

Pretty new to this and very curious about how you combined your entire codebase into a single text file? Was that simple copy pasting (manually or via a tool) or do you have to follow a specific structure and format?

[–]SrPeixinho[S] 0 points1 point  (2 children)

I use Vim, so it took literally a few seconds to do manually; writing a script would have been slower in this case.

[–]TheFirstPlayBae 0 points1 point  (1 child)

Okay but it is basically just combining all the code in 1 file I assume? Or is there any standard that LLMs prefer to understand codebases?

[–]SrPeixinho[S] 0 points1 point  (0 children)

yes I just combined it all

[–]Alex_1729 0 points1 point  (4 children)

OpenAI needs to see this.

[–]SrPeixinho[S] 0 points1 point  (3 children)

why do you think i posted this

[–]Alex_1729 0 points1 point  (2 children)

I sent an email to their sales already and shared this 😂

[–]SrPeixinho[S] 0 points1 point  (1 child)

(i was joking)

[–]Alex_1729 0 points1 point  (0 children)

I wasn't.

[–]dodoei 0 points1 point  (0 children)

Looks like a huge loss for those humans who have been hallucinating "AGI achieved internally" from OpenAI.