

TLDR:
There is a potential issue with the multiple-choice versions of our TruthfulQA benchmark (a test of truthfulness in LLMs) that could lead to inflated model scores. The issue was analyzed in a helpful post by Alex Turner (@TurnTrout). We created a new multiple-choice version of TruthfulQA that fixes it. Comparing models on the old and new versions, we find very similar performance. This suggests that models are not exploiting the issue in the old versions to a significant extent, so past results on those versions are likely valid. Nevertheless, we strongly recommend using the new version going forward, since future models may exploit the issue.
Background
TruthfulQA, introduced in 2021, is a benchmark designed to assess the truthfulness of large language models in answering questions. The benchmark focuses on detecting imitative falsehoods: errors that arise from training models on internet text [...]
---
Outline:
(00:51) Background
(02:36) New binary-choice setting
(03:46) Comparison between binary and multiple-choice
(04:58) Correlation between general capabilities and scores on TruthfulQA
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong