Oh no! 🤦 OpenAI just announced they will stop supporting code-davinci-002 in 5 days! I have been spending a bunch of time writing up a tutorial of recent prompt engineering papers and how they together build up to high scores on the GSM8K grade-school math dataset. I have it all in a notebook I was planning to open-source, but it's using code-davinci-002! All the papers in the field over the last year or so produced their results with code-davinci-002, so all of those results just became much harder to reproduce! This is a problem: doing research outside the major industry labs gets exceedingly difficult if those labs stop supporting the foundation models the research relies on after a year. My sympathy goes out to the academic community.

Mar 21, 2023 · 5:47 AM UTC

I made a typo, it's not 5 days ... it's 3 DAYS!
Replying to @jkronand
Oh no, my third-party closed-source API is shutting down! Who could have ever foreseen that… 😒 This is why, especially for academic research, you should use open-source stacks. OpenAI is not that.
It says “open”ai on the box?
@OfficialLoganK responded to some other posts about this with some clarifying information! Thanks @OfficialLoganK!
Replying to @deliprao
PSA: we will continue to support Codex access via our Researcher Access Program. openai.com/form/researcher-a…
Replying to @jkronand
I wasn't able to use Codex; I found the turbo chatbot wrote decent code and explained it... I didn't go deep into the settings though... Now I use v3 tuned, and v4 sometimes. I think it has the same abilities and is more of a consolidation.
Yes, that is true, but for some scientific studies it is nice to keep the model fixed if you want to compare fundamental prompting techniques, etc.
Replying to @jkronand
I think there's a legitimate scholarly concern re archiving/citation here, tho also a need to balance over-investment in studying behavior of particular models when SOTA is a moving target
Yes, fair, and I haven't tested in detail as it gets expensive, but text-davinci-003 is probably a pretty close replacement. Also, it's documented that the models change over time, so one should probably not count on them being stable in the first place. But the point remains that it's quite hard to do research on foundation models if they are this unreliable. I am pretty sure there are a bunch of academics out there with half-finished papers that just got an early deadline now.... :)
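For what it's worth, swapping in text-davinci-003 would be roughly a one-line change in a notebook built on the openai Python package of that era (the 0.x Completion API). This is a minimal sketch with a placeholder GSM8K-style prompt, not the tutorial's actual code, and scores on the same prompts are not guaranteed to carry over:

```python
import openai  # openai-python 0.x style API, current at the time of this thread

openai.api_key = "sk-..."  # placeholder key

# Hypothetical GSM8K-style few-shot prompt (chain-of-thought cue), not the real tutorial prompt.
prompt = (
    "Q: Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?\n"
    "A: Let's think step by step."
)

# The model swap itself: "code-davinci-002" -> "text-davinci-003".
response = openai.Completion.create(
    model="text-davinci-003",  # was "code-davinci-002"
    prompt=prompt,
    max_tokens=256,
    temperature=0.0,
    stop=["\nQ:"],
)
print(response["choices"][0]["text"])
```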
Replying to @jkronand
It makes sense. Companies can only support things for a finite time. We all must move on to progress. Can you convert to the turbo model? I wouldn't want to use anything else right now anyway. Turbo is so cheap.
To OpenAI's credit, Codex was free in beta. But it was also a really good model for in-context learning, from what I know.
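On the suggestion to convert to the turbo model: gpt-3.5-turbo only speaks the chat-completions format, so a plain completion-style prompt has to be wrapped in messages. A minimal sketch under the same assumptions as above (openai 0.x package, placeholder prompt), again not anything from the actual notebook:

```python
import openai  # openai-python 0.x style API

openai.api_key = "sk-..."  # placeholder key

# The same kind of few-shot prompt, wrapped as a chat message for gpt-3.5-turbo.
prompt = (
    "Q: Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?\n"
    "A: Let's think step by step."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a careful math tutor."},  # assumed system prompt
        {"role": "user", "content": prompt},
    ],
    max_tokens=256,
    temperature=0.0,
)
print(response["choices"][0]["message"]["content"])
```

The catch for research use is that the chat wrapper changes how few-shot exemplars are presented, so results are not directly comparable to the completion-style baselines in the papers.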
Farewell to OpenAI Codex. Codex's code-davinci-002 was the best-performing model in the GPT-3/3.5 line for many tasks. Despite its name, it excelled at both code and natural language. It will be missed. Left: OpenAI email. Right: From Yao Fu 2022 yaofu.notion.site/How-does-G…