“Can Large Language Models Reason about Medical Questions?”, 2022-07-17:
[Github] Although large language models (LLMs) often produce impressive outputs, it remains unclear how they perform in real-world scenarios requiring strong reasoning skills and expert domain knowledge.
We set out to investigate whether GPT-3.5 (Codex and InstructGPT) can answer and reason about challenging real-world questions.
We utilize two multiple-choice medical exam question datasets (USMLE and MedMCQA) and a medical reading comprehension dataset (PubMedQA). We investigate multiple prompting scenarios: Chain-of-Thought (CoT, think step-by-step), zero-shot and few-shot (prepending the question with question-answer exemplars), and retrieval augmentation (injecting Wikipedia passages into the prompt). For a subset of the USMLE questions, a medical expert reviewed and annotated the model’s CoT.
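The few-shot CoT setup can be sketched as simple prompt assembly; a minimal sketch, where the exemplar text, field names, and the "Let's think step by step" trigger phrasing are illustrative, not the paper's exact templates:

```python
# Sketch of few-shot chain-of-thought prompt assembly for a multiple-choice
# question. Exemplar content and templates are illustrative assumptions.

def format_question(question, options):
    """Render a question with lettered answer options."""
    opts = "\n".join(f"({k}) {v}" for k, v in sorted(options.items()))
    return f"Question: {question}\n{opts}"

def build_prompt(exemplars, question, options,
                 cot_trigger="Let's think step by step."):
    """exemplars: list of (question, options, reasoning, answer_letter)."""
    parts = []
    for q, o, reasoning, ans in exemplars:
        parts.append(f"{format_question(q, o)}\nAnswer: {cot_trigger} "
                     f"{reasoning} Therefore, the answer is ({ans}).")
    # End with the target question and the CoT trigger so the model
    # continues with its own reasoning chain.
    parts.append(f"{format_question(question, options)}\nAnswer: {cot_trigger}")
    return "\n\n".join(parts)

# Hypothetical 1-shot example (a 5-shot prompt would pass five exemplars).
exemplar = ("Which vitamin deficiency causes scurvy?",
            {"A": "Vitamin A", "B": "Vitamin C",
             "C": "Vitamin D", "D": "Vitamin K"},
            "Scurvy results from impaired collagen synthesis, which "
            "requires vitamin C.",
            "B")
prompt = build_prompt([exemplar],
                      "Which organ produces insulin?",
                      {"A": "Liver", "B": "Pancreas",
                       "C": "Kidney", "D": "Spleen"})
```

The resulting string would be sent to the completion API; the zero-shot CoT variant is the same prompt with no exemplars.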
We found that InstructGPT can often read, reason, and recall expert knowledge. Failures are primarily due to lack of knowledge and reasoning errors; trivial guessing heuristics are also observed, e.g., predicting labels A and D too often on USMLE. Sampling and combining many completions overcomes some of these limitations.
Using 100 samples, Codex 5-shot CoT not only gives close-to-well-calibrated predictive probabilities [replicating et al 2022] but also achieves human-level performance on the 3 datasets: (1) USMLE: 60.2%, (2) MedMCQA: 57.5%, and (3) PubMedQA: 78.2%. Although InstructGPT and Codex still make mistakes, scaling inference-time compute by sampling many CoTs per question overcame part of these limitations, bridging the gap with human-level performance and virtually passing the USMLE by 0.2 percentage points.
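The sample-and-combine step amounts to majority voting over many sampled CoT completions (self-consistency). A minimal sketch, assuming a hypothetical answer-extraction regex over phrases like "the answer is (B)"; the paper's actual extraction and aggregation details may differ:

```python
# Self-consistency sketch: sample many chain-of-thought completions at
# nonzero temperature, extract the final answer letter from each, and
# take the majority vote. The regex is an illustrative assumption.
import re
from collections import Counter

ANSWER_RE = re.compile(r"answer is \(?([A-D])\)?", re.IGNORECASE)

def majority_vote(completions):
    """Return (top answer letter, its vote share) across sampled CoTs."""
    votes = Counter(m.group(1).upper()
                    for c in completions
                    if (m := ANSWER_RE.search(c)))
    if not votes:
        return None, 0.0
    answer, count = votes.most_common(1)[0]
    return answer, count / sum(votes.values())

# Toy stand-in for 100 sampled completions per question.
samples = ["... so the answer is (B).",
           "Step by step: ... answer is B",
           "... the answer is (D)."]
```

The vote share doubles as a rough confidence estimate per question, which is one way the near-calibrated predictive probabilities mentioned above could be obtained.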