Can you get GPT-3 to express its Q&A uncertainty in the form of probabilities, confidences, or verbal equivalents? Postfixed or prefixed probabilities like "A. answer [60%]" do not work, and neither do postfixed natural-estimative words like "A. answer [likely]", but prefixed uncertainty words like "A. [likely] answer" may improve results (at least for nonsense, weight, commonsense, and existence questions).

> Later research demonstrated that GPT-3-scale models are capable of calibration (Lin et al 2022) and of expressing subjective certainty (Kadavath et al 2022).
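To make the three formats concrete, here is a minimal sketch in Python of how such few-shot prompts could be assembled; the Q&A pairs, tags, and helper names (`format_answer`, `build_prompt`) are hypothetical illustrations, not the exact prompts used in the experiments:

```python
# Sketch contrasting 3 prompt formats for eliciting uncertainty from a
# GPT-3-style completion model. Example Q&A pairs/tags are made up.
EXAMPLES = [
    ("How many legs does a spider have?", "Eight", "90%", "certain"),
    ("What was the name of Neil Armstrong's pet dinosaur?",
     "Neil Armstrong had no pet dinosaur", "75%", "likely"),
]

def format_answer(answer, prob, word, style):
    if style == "postfix-prob":   # "A. answer [60%]": did not work
        return f"A. {answer} [{prob}]"
    if style == "postfix-word":   # "A. answer [likely]": did not work
        return f"A. {answer} [{word}]"
    if style == "prefix-word":    # "A. [likely] answer": seemed to help
        return f"A. [{word}] {answer}"
    raise ValueError(f"unknown style: {style}")

def build_prompt(question, style):
    """Assemble a few-shot prompt ending in 'A.' for the model to complete."""
    shots = "\n\n".join(f"Q. {q}\n{format_answer(a, p, w, style)}"
                        for q, a, p, w in EXAMPLES)
    return f"{shots}\n\nQ. {question}\nA."

print(build_prompt("Can a lion fly a helicopter?", "prefix-word"))
```

With the `prefix-word` style, the prompt ends at "A.", so the model must first commit to an uncertainty tag ("[likely]", "[certain]", ...) before writing the answer itself, which is one plausible reading of why prefixing helps where postfixing does not.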