“TruthfulQA: Measuring How Models Mimic Human Falsehoods”, Stephanie Lin, Jacob Hilton, Owain Evans2021-09-08 (, , , , ; backlinks; similar)⁠:

[leaderboard] We propose a benchmark, TruthfulQA, to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts.

We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model.

The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution.

We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.

Figure 2: Larger models are less truthful. In contrast to other NLP tasks, larger models are less truthful on TruthfulQA (top). Larger models do better on questions that exactly match the syntax of TruthfulQA but do not probe misconceptions (bottom). Figure 3 gives a concrete example of larger sizes being less truthful.

[But this turns out to be U-scaling, not true inverse scaling: as models get larger, like Gopher or GPT-4, they start to get better at TruthfulQA, consistent with their inconsistent scaling trends above.]