Game Theory Can Make AI More Correct and Efficient | Quanta Magazine
Abstract
The article discusses the use of game theory to improve the consistency and accuracy of large language models (LLMs) like ChatGPT. It introduces the "consensus game" developed by researchers at MIT, in which the LLM's generator and discriminator systems play against each other to reach agreement on the correct answer to a given question. The article also mentions other game-based approaches, such as the "ensemble game," which can further enhance LLM performance without additional training or parameter changes.
Q&A
[01] Introduction
1. Why is it a problem that large language models (LLMs) like ChatGPT give inconsistent answers?
- Depending on how a question is phrased, an LLM can contradict itself, answering one way when asked to generate a response and another way when asked to judge a candidate answer, which makes the model seem unreliable.
- Resolving this disconnect between the model's generative and discriminative behavior is key to making these models more reliable overall.
2. How did the researchers at MIT try to address this issue?
- They devised a "consensus game" where the LLM's generator and discriminator systems play against each other to find an answer they can agree on.
- This game-based approach uses the tools of game theory to improve the model's accuracy and internal consistency.
[02] Putting Play to Work
1. How does the consensus game work?
- The generator receives a question and some candidate responses, and is told whether to answer correctly or incorrectly based on a coin toss.
- The generator sends the question and its chosen response to the discriminator, who tries to determine whether the response was meant to be correct or incorrect.
- Both the generator and discriminator are rewarded when they agree, incentivizing them to reach a consensus.
- Both players also start with initial "beliefs" about how probable each answer is, drawn from the LLM's own probability estimates, and they are discouraged from drifting too far from those beliefs, which keeps the agreed-upon answers grounded in reality (see the sketch below).
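To make these moving parts concrete, below is a minimal Python sketch of the consensus game on a toy question. Everything here is an illustrative assumption: the candidate answers, the made-up "initial belief" tables standing in for a real LLM's probabilities, and the simple anchored best-response loop, which is only a stand-in for the researchers' more principled equilibrium-finding procedure.

```python
# Toy consensus game: a generator and a discriminator nudge each other toward
# answers they can agree on, while staying anchored to their initial beliefs.
# All numbers and parameter names below are illustrative assumptions.
import numpy as np

# Toy question: "What is the capital of Kenya?" (hypothetical example)
answers = ["Nairobi", "Mombasa", "Kisumu"]          # candidate responses

# Hypothetical initial beliefs (a real system would take these from the LLM):
# the generator's answer distribution under each instruction it might receive
gen_init = {
    "correct":   np.array([0.70, 0.20, 0.10]),
    "incorrect": np.array([0.10, 0.50, 0.40]),
}
# the discriminator's belief that each answer came from a "be correct" instruction
disc_init = np.array([0.60, 0.45, 0.30])

ANCHOR = 0.5   # how strongly each player sticks to its initial beliefs
STEPS = 50     # rounds of alternating, anchored soft best responses

def normalize(p):
    return p / p.sum()

gen, disc = dict(gen_init), disc_init.copy()
for _ in range(STEPS):
    # Generator: for each possible instruction (the article's coin toss picks
    # one at play time), favor answers the discriminator will classify
    # consistently with that instruction, blended with the initial beliefs.
    for intent in ("correct", "incorrect"):
        agree = disc if intent == "correct" else 1.0 - disc
        gen[intent] = normalize(agree**(1 - ANCHOR) * gen_init[intent]**ANCHOR)

    # Discriminator: estimate the chance each answer came from a "be correct"
    # instruction, again blended with its own initial beliefs.
    target = gen["correct"] / (gen["correct"] + gen["incorrect"])
    disc = (1 - ANCHOR) * target + ANCHOR * disc_init

# Answers the two players jointly treat as the correct response
consensus = normalize(gen["correct"] * disc)
for answer, p in sorted(zip(answers, consensus), key=lambda pair: -pair[1]):
    print(f"{answer}: {p:.3f}")
```

Because both players are rewarded only when they match each other, the loop drives them toward a shared ranking, while the anchoring term keeps them from colluding on an answer the original model considers implausible.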
2. What were the results of testing the consensus game?
- Moderate-sized language models that played the consensus game performed better and were more consistent than larger models that did not play the game.
- The game-based approach is computationally lightweight and can be applied to any LLM without requiring additional training or parameter changes.
[03] Playing Games With Language
1. What other game-based approaches are the researchers exploring?
- The "ensemble game" involves the primary LLM playing against smaller, allied and adversarial models, which can further boost the primary model's performance.
- Researchers are also exploring how game theory could help language models handle more sophisticated interactions, such as negotiations like the back-and-forth of the academic paper review process.
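One plausible way to picture the ensemble game, purely as an illustration: score each candidate answer by how strongly the allied models back it and how strongly the adversarial models back something else, then combine that score with the primary model's own preference. The scoring rule and all probabilities below are assumptions for the sketch, not the researchers' formulation.

```python
# Toy ensemble-game-style scoring: reward agreement with allied models and
# disagreement with adversarial ones. All names and numbers are assumptions.
candidates = ["Paris", "Lyon", "Marseille"]

primary_probs = [0.55, 0.35, 0.10]                     # primary LLM's own beliefs
ally_probs = {"small_ally": [0.80, 0.15, 0.05]}        # hypothetical smaller models
adversary_probs = {"small_adversary": [0.10, 0.60, 0.30]}

def ensemble_score(i: int) -> float:
    """Points for matching the allies plus points for differing from the adversaries."""
    agree = sum(probs[i] for probs in ally_probs.values())
    differ = sum(1.0 - probs[i] for probs in adversary_probs.values())
    return agree + differ

# Re-rank the primary model's candidates using the game score.
best = max(range(len(candidates)),
           key=lambda i: primary_probs[i] * ensemble_score(i))
print(candidates[best])   # -> "Paris" with these toy numbers
```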
2. How do these game-based approaches compare to previous work on using games to measure AI success?
- Past approaches focused on measuring an AI program's success by its mastery of specific games, like chess or Go.
- The new approaches use games as a tool to improve the language models themselves, rather than just testing their capabilities.