“Here’s What I Saw at an AI Hackathon: AI Gossip, Celebrity Sightings, Tech Trends—And Some Great Projects”, 2022-12-20:
I went to an AI hackathon and saw God. Well, an AI-generated version of God, anyway. It was a version of GPT-3 cosplaying as a deity and rigged to conduct a real-time verbal conversation. A projector displayed a visualization of it on a screen, and it could listen to your questions and respond with a god-like verbal inflection and tone. The program’s aim was to get you to commit to using AI for benevolent purposes, and it was about as impressive a demo for a weekend hack project as I’ve ever seen.
…So I did something I’ve never done before for an Every article: I traveled to San Francisco to see things first-hand, at AI Hack Week—an AI hackathon put on by my friend Dave Fontenot of HF0. (Dave is also an Every investor through his fund Backend Capital. Evan went too—his column on this is coming later this week.) They rented out a mansion in Alamo Square, and a bunch of programmers spent a week building projects to try to show what’s possible with this new technology wave.
…The art projects weren’t limited to visual art. One of the top hacks, Biological Artificial Intelligence, was a reverse Turing test: send it a prompt and it would return four completions. One was from GPT-3; the other three came from human “chatbots” hidden behind a curtain. The challenge was to identify which of the outputs was GPT-3’s. Its creators demoed this in front of the audience—and the audience got it wrong.
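The mechanics of the hack are simple to sketch. Here is a minimal version in Python, with canned strings standing in for the live GPT-3 call and the humans behind the curtain (the prompt, names, and responses are all hypothetical):

```python
import random

def reverse_turing_round(prompt, human_completions, model_completion, rng=random):
    """Mix one model completion in with the human ones and shuffle.

    Returns the shuffled list plus the index of the model's answer,
    so the audience's guess can be scored.
    """
    completions = human_completions + [model_completion]
    rng.shuffle(completions)
    return completions, completions.index(model_completion)

# Canned stand-ins for the live GPT-3 call and the hidden human "chatbots".
humans = [
    "I'd say the weather in SF is foggy but charming.",
    "Honestly, the fog here never really burns off.",
    "SF weather: bring a jacket, even in July.",
]
model = "San Francisco's weather is famously mild, with frequent coastal fog."

options, answer_index = reverse_turing_round("Describe SF weather.", humans, model)
# The audience guesses an index; they "win" if their guess == answer_index.
```

The demo’s whole effect comes from the shuffle: once the completions are interleaved, the audience has no positional cue about which one is the machine’s.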
…Overheard in AI: It’s always fun being in a house with lots of smart people, because you hear things you might not otherwise. Here’s a list. I don’t agree with all of them, but they’re an interesting portrait of what at least some people believe.
- Google’s brain drain: “All the best people I used to work with at Google Brain are now at OpenAI.”—AI Hack Week participant
- People are expecting asymptotic gains in completion quality: These technologies have quickly progressed from returning great results 30% of the time to 80% of the time, but getting from 80% to 90% is going to be much harder, and 90% to 99% harder still. The consensus is that it’s better to build for use cases where 80% is good enough; if you need 99%, you might have to wait a while.
- Startups building infrastructure on current models are in trouble: A fine-tuned product on the current generation of models will be instantly outperformed by non-fine-tuned next-generation models. (For more, see Evan’s piece.) It’s better to save your money and wait than to build a sand castle on top of the current tech.
- OpenAI can extract a 10% equity tax from any company that wants to win: The company is currently giving GPT-4 access to companies it invests in through its fund Converge ($1 million check for 10%). Any company with early access to GPT-4 has an advantage, and OpenAI gets to be both kingmaker and tax collector. Maybe this is how research-driven companies developing models are actually going to make the most money. (Evan wrote about this as well.)
- The future of programming is writing prompts: The future of programming might be abstracted away from writing code at all. Instead, you ask GPT-4 in natural language to perform any operation (e.g., convert this file, perform a function that does the following). In one version of the future, GPT-4 generates the code and you run it yourself (which is already possible with ChatGPT). In another version, you don’t even need the code—you just rely on GPT-4 to produce the right answer for you. The possibilities are wild.
- A battle between AI’s dark side and light side is currently raging: A contingent of people thinks OpenAI is moving too fast and supports Anthropic’s more closed-down, safety-first approach to building these technologies. On the other side, OpenAI proponents complain Anthropic can’t ship. (This attitude follows the dynamic I wrote about in “Artificial Unintelligence”.) Both organizations are filled with smart people trying to make tough decisions under a lot of pressure, and I’m curious which approach ends up working better.
- Machines are getting smarter faster than humans are: There’s a debate about whether prompt engineering is going to go away—a debate that’s occurring even inside OpenAI. Everyone agrees that with the new generation of models you won’t need to be as clever in your prompt design, but there’s a big question about whether the more complicated, dynamically generated prompts will become obsolete too. Even the people inside these companies don’t know the answer yet.
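One way to make the 80%-versus-99% note above concrete: if a product chains several completions together, per-step quality compounds. A quick back-of-the-envelope calculation (assuming each step succeeds independently, which is a simplification):

```python
# Probability that an n-step chain of completions succeeds end to end,
# assuming each step succeeds independently with probability p.
def chain_success(p, n):
    return p ** n

for p in (0.80, 0.90, 0.99):
    print(f"per-step {p:.0%} -> 5-step chain {chain_success(p, 5):.0%}")
# An 80%-per-step model finishes a five-step chain only about a third
# of the time; a 99% model almost always does.
```

That gap is why the "build for use cases where 80% is good enough" advice usually means single-shot use cases, not long automated pipelines.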
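The "GPT-4 generates the code and you run it yourself" workflow from the notes above can be sketched in a few lines of Python. Here a canned string stands in for the model's output—no API call is made, and the request and generated function are hypothetical:

```python
# Pretend this string came back from a model given the natural-language
# request: "write a function that converts a CSV row to JSON".
generated_code = '''
import json

def row_to_json(header, row):
    """Convert one CSV row (a list of strings) to a JSON object string."""
    return json.dumps(dict(zip(header, row)))
'''

# Run the generated code yourself, then call the function it defined.
namespace = {}
exec(generated_code, namespace)
result = namespace["row_to_json"](["name", "city"], ["Dave", "SF"])
print(result)  # {"name": "Dave", "city": "SF"}
```

The second version of the future described above skips even this step: you hand the model the CSV row directly and trust the answer it returns, with no code in between to inspect.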