detecting prompt injection is very easy. you put a safety-check-prompt before feeding user input into the main prompt. gpt3 is crazy good at few-shot input categorization the reason people don't use it in their startups is cost (additional ~2000 tokens just for safety)

Oct 15, 2022 · 8:57 PM UTC

Replying to @mayfer
i have better ways to do it than that...50/50 string processing and prompt design. pretty much mitigated these attacks on pitch.expert (you are welcome to prove me wrong haha)
yeah i know what you mean, though committed people find clever ways of escaping both of those (even though it makes it much harder than full rawdogging)