IMO the coolest thing about ACT-1 is that we got it to look up stuff it doesn't know. You *could* try Googling this, but it doesn't work for me, and there are lots of other reasons to be excited about what this means for future models: ⬇️

Sep 15, 2022 · 7:08 PM UTC

First, it's just pretty sophisticated behavior. The model has to search for Star Wars, disambiguate to the movie, see that Mark Hamill plays Luke, and then look up his age before giving a useful response.
Second, it stops the model from having to make stuff up. If it can query an up-to-date tool, you don't have to worry that it won't know about new things that have happened since you trained it.
Third, we can get the model to do things like this as an inner loop to improve behavior on some higher-level task, e.g. by looking up documentation when it's confused about what to do! (Rough sketch of the loop below.)
Fourth and finally, this *already* seems like a thing that Google can't do reliably. I'm sure there are permutations of the query that will work, but models keep getting better...
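To make the second and third points concrete, here's a minimal sketch of that kind of search-as-a-tool inner loop. The `call_model` and `web_search` helpers and the SEARCH/ANSWER action format are hypothetical stand-ins for illustration, not ACT-1's actual interface.

```python
import re

def call_model(prompt: str) -> str:
    """Hypothetical LLM call: returns either SEARCH("...") or ANSWER("...")."""
    raise NotImplementedError  # plug in whatever model you're using

def web_search(query: str) -> str:
    """Hypothetical search helper: returns a short snippet of results."""
    raise NotImplementedError  # plug in whatever search tool you're using

def answer_with_lookups(question: str, max_steps: int = 5) -> str:
    prompt = (
        "Answer the question. If you don't know something, emit "
        'SEARCH("query") to look it up; when you are done, emit ANSWER("text").\n'
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        action = call_model(prompt)
        if match := re.match(r'SEARCH\("(.+)"\)', action):
            # Inner loop: run the lookup and feed the results back to the model,
            # so it can answer from up-to-date information instead of guessing.
            results = web_search(match.group(1))
            prompt += f"{action}\nRESULTS: {results}\n"
        elif match := re.match(r'ANSWER\("(.+)"\)', action):
            return match.group(1)
    return "No answer within the step budget."

# e.g. answer_with_lookups("How old is the actor who plays Luke Skywalker?")
# might SEARCH for the movie, then the actor, then his age, before ANSWERing.
```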
Replying to @gstsdn
Thinking similar thoughts.
Who's building the Google Search killer with LLMs as the search interface? Google Search returns overly SEO'd, two-page answers to simple questions; users hate it but tolerate it because the friction of improving your search is very, very high.
The friction of finding information has been steadily increasing; ACT could easily be better than a naive user at understanding intent and finding it.
Replying to @gstsdn
I read that as ACT-R at first and got *super* confused about how it got that much more capable in the last five or so years
Replying to @gstsdn
Good luck competing with plug-in GPT, bud