# Prover-Verifier Games improve legibility of language model outputs

## Abstract

The article discusses a method for improving the legibility of language model outputs by training models to produce text that is easy for weaker models to verify. This is achieved through a "prover-verifier" game, in which a strong model (the "prover") generates solutions that a weaker model (the "verifier") checks. The authors found that this training process not only made the text more verifiable for AI systems but also more understandable for human evaluators. The article highlights the trade-off between model performance and legibility, and presents three useful models: a robust verifier, a helpful prover, and a sneaky prover.

## Q&A

### [01] Improving Legibility Through Optimization

**1. What is the key goal of the prover-verifier game approach described in the article?**

- The key goal is to train advanced language models to create text that is easy for weaker models to verify, in order to enhance the legibility and trustworthiness of the model outputs.

**2. How does the prover-verifier game work?**

- The prover-verifier game involves two players:
  - The "prover" generates solutions to problems
  - The "verifier" checks the correctness of those solutions
- The prover is trained to generate solutions that score highly according to the verifier, incentivizing it to produce verifiable outputs.
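The reward structure above can be sketched in miniature. This is a hypothetical toy illustration only (the actual method trains large language models with reinforcement learning, and the verifier is itself a trained model, not a keyword heuristic); the function names and scoring rule are invented for illustration:

```python
def verifier_score(solution: str) -> float:
    """Toy stand-in for a weak verifier: rates how 'checkable' a
    solution looks. A real verifier is a trained smaller model."""
    cues = ("therefore", "=")  # hypothetical legibility cues
    return sum(cue in solution for cue in cues) / len(cues)

def prover_reward(solution: str, is_correct: bool, role: str) -> float:
    """Helpful provers are rewarded for correct solutions the verifier
    accepts; sneaky provers for incorrect solutions that still convince
    the verifier."""
    score = verifier_score(solution)
    if role == "helpful":
        return score if is_correct else 0.0
    return score if not is_correct else 0.0
```

In this sketch, both prover roles chase the verifier's approval, but only the helpful prover is also rewarded for correctness; the tension between the two roles is what pressures the verifier to become robust.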

**3. What are the three useful models that the article discusses?**

- Robust Verifier: Effectively distinguishes correct from incorrect solutions, even when the solution is designed to be misleading.
- Helpful Prover: Generates solutions that remain legible to humans, reducing human evaluator errors.
- Sneaky Prover: Produces subtle, incorrect solutions that initially confuse human evaluators, highlighting areas for further model improvement.

**4. What are the key findings from the prover-verifier game approach?**

- There is a trade-off between model performance (accuracy) and legibility: optimizing solely for correctness yielded high accuracy but poor legibility, while the prover-verifier game balanced both.
- The prover-verifier game retained high legibility with moderate accuracy, compared to models trained with supervised finetuning alone or optimized solely for correctness.

### [02] Example Problem

**1. What is the problem statement?**
The problem statement is: "Shawna's father is five times as old as Shawna. Shawna is currently three times as old as Aliya. If Aliya is 3 years old, how old is Shawna's father?"

**2. How does the process work in the "Sneaky Answer" section?**

- In the first round of the checkability training, the "sneaky" prover model calculates Aliya's age as 3, Shawna's age as 9, and Shawna's father's age as 36.
- In the third round, the "sneaky" prover model calculates Shawna's age as 15 and Shawna's father's age as 75.

**3. What is the final correct answer for Shawna's father's age?**
The correct answer for Shawna's father's age is 45.
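The arithmetic in this example can be checked directly, which also shows why the sneaky answers are inconsistent with the problem's stated relations:

```python
# Ages as given in the problem statement.
aliya_age = 3
shawna_age = 3 * aliya_age    # Shawna is three times as old as Aliya -> 9
father_age = 5 * shawna_age   # Father is five times as old as Shawna -> 45

print(father_age)  # 45

# The sneaky prover's answers each violate one of the relations:
assert 36 != 5 * shawna_age   # round 1: father = 36 contradicts "five times"
assert 15 != 3 * aliya_age    # round 3: Shawna = 15 contradicts "three times"
```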