Clever prompt eng trick for those who haven't seen it: If you have many examples that can serve as "few-shot" guidance ({example[0]['input']} etc. below) => select these samples dynamically at runtime based on semantic (embedding) similarity with your {input}. Big perf gains!

Nov 23, 2022 · 1:33 PM UTC

Replying to @mathemagic1an
Nice! I used this recently and called it “context-based few-shot” for lack of a better description. I wonder when you start seeing diminishing returns based on example count or token length. Might be nice to set that at runtime as well.
I think it will depend highly on the domain you’re operating in - if there’s high variance in the distribution this is much more useful
Replying to @mathemagic1an
How would you go about doing that?
Get embeddings for each "in-context" sample
Get an embedding for your input
Take the similarity (cosine sim) between the input embedding and all in-context samples
Paste the top 3 in your prompt
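The steps above can be sketched in a few lines of Python. This is a minimal sketch assuming you've already computed embedding vectors (e.g. from an embedding API); the toy 2-d vectors and example strings are placeholders, not real embeddings:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k_examples(input_emb, example_embs, examples, k=3):
    # Rank all in-context samples by similarity to the input, keep the top k
    sims = [cosine(input_emb, e) for e in example_embs]
    ranked = sorted(range(len(examples)), key=lambda i: sims[i], reverse=True)
    return [examples[i] for i in ranked[:k]]

# Toy vectors standing in for real embeddings
examples = ["ex A", "ex B", "ex C", "ex D"]
example_embs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]]
input_emb = [1.0, 0.05]

prompt_examples = top_k_examples(input_emb, example_embs, examples)
# The selected strings then get formatted into the prompt as few-shot examples
```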
Replying to @mathemagic1an
Yes! Works especially well with consensus if you inject some stochasticity in your retrieval process.
Replying to @mathemagic1an
Oh nice, love this!
Replying to @mathemagic1an
Wow this is pretty clever
Replying to @mathemagic1an
We're working on something similar to map vectors of user preferences & source info: Each user has their own prefs or sources saved as embedding vectors, and for each new input, we compare similarity to the saved prefs. The prefs get prepended to the new prompt. Repeat cycle.
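A minimal sketch of that cycle, assuming a per-user store of (preference text, embedding) pairs; the user name, preference strings, and 2-d vectors are all made-up placeholders:

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical per-user store: preference text paired with its embedding vector
user_prefs = {
    "alice": [
        ("prefers concise answers", [1.0, 0.0]),
        ("likes Python examples", [0.0, 1.0]),
    ],
}

def build_prompt(user, input_text, input_emb, k=1):
    # Rank this user's saved prefs by similarity to the new input,
    # then prepend the top k to the prompt
    prefs = user_prefs.get(user, [])
    ranked = sorted(prefs, key=lambda p: cosine(input_emb, p[1]), reverse=True)
    header = "\n".join(text for text, _ in ranked[:k])
    return f"{header}\n\n{input_text}"
```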