I find that the probability space around statements of fact is much more effective than the one around commands or requests. Interacting with the model as if it's an entity narrows that space to conversational interactions instead of intuitive completions. So: "The following is ..." rather than "Write an XYZ ...".
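As a concrete illustration of the two framings (the task and wording here are my own hypothetical example, not anything from a particular model's docs):

```python
# Imperative framing: asks an "entity" to perform a task.
imperative = "Write a product description for a stainless steel water bottle."

# Declarative framing: asserts the document already exists, so the model's
# most likely continuation *is* the description itself.
declarative = (
    "The following is a product description for a stainless steel "
    "water bottle:\n\n"
)

# The declarative prompt deliberately ends mid-document, inviting completion.
print(declarative.endswith(":\n\n"))
```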
I go back and forth between the two. I find that with InstructGPT, the declarative "This is ...:" style also causes the model to ramble on past the intended output instead of stopping on its own, so you need to define and use stop sequences. That makes it harder to try out spontaneous ideas.
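A minimal sketch of what a stop sequence does, implemented locally so it runs without an API key: the helper mimics the truncation behavior of a completion API's `stop` parameter. The function name and the sample text are my own for illustration.

```python
def apply_stop(text, stop_sequences):
    """Truncate generated text at the earliest stop sequence,
    mimicking what an API's `stop` parameter does server-side."""
    cut = len(text)
    for s in stop_sequences:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]

# A declarative prompt tends to keep generating past the intended answer;
# a delimiter like "\n###" in the prompt template gives it somewhere to stop.
raw = "A bottle that keeps drinks cold.\n###\nThe following is a recipe..."
print(apply_stop(raw, ["\n###"]))  # -> "A bottle that keeps drinks cold."
```

The practical cost is exactly the one described above: you have to bake a delimiter into every prompt template before you can experiment, which adds friction to one-off ideas.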
To reduce the token length of generations, consider fine-tuning. You can include whatever abbreviated output format you want in your fine-tuned examples, and the model should learn it. But you need a decent number of examples for it to work.
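A sketch of what such training data might look like, assuming a JSONL file of prompt/completion pairs (the format classic completion-model fine-tuning tools expect); the abbreviated "shorthand" completions, the `###` delimiter, and the `END` marker are all hypothetical choices for illustration:

```python
import json

# Hypothetical training pairs: completions use a terse shorthand so the
# fine-tuned model learns to answer in as few tokens as possible.
examples = [
    {"prompt": "Summarize: The meeting moved to 3pm Thursday.\n\n###\n\n",
     "completion": " mtg -> Thu 3pm END"},
    {"prompt": "Summarize: Shipping is delayed by two days.\n\n###\n\n",
     "completion": " ship +2d END"},
]

# One JSON object per line, the usual JSONL layout for fine-tuning files.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

In practice you'd want many more pairs than two; with only a handful, the model is unlikely to pick up the abbreviated format reliably.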