Scaling Synthetic Data Creation with 1,000,000,000 Personas
Abstract
The article proposes a novel persona-driven data synthesis methodology that leverages the many perspectives encapsulated within a large language model (LLM) to create diverse synthetic data. It introduces Persona Hub, a collection of 1 billion diverse personas automatically curated from web data that can act as distributed carriers of world knowledge, enabling diverse synthetic data to be created at scale for a wide range of scenarios. The article showcases Persona Hub's use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (user prompts), knowledge-rich texts, game NPCs, and tools (functions) at scale, demonstrating that persona-driven data synthesis is versatile, scalable, flexible, and easy to use.
Q&A
[01] Persona-driven Data Synthesis Methodology
1. What is the key idea behind the proposed persona-driven data synthesis methodology?
- The key idea is to leverage various perspectives within a large language model (LLM) by integrating personas from Persona Hub into the data synthesis prompts. This allows the LLM to create diverse synthetic data from different viewpoints.
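As a rough illustration, a minimal sketch of persona-driven prompting is shown below, assuming an OpenAI-compatible chat API; the helper name `synthesize_with_persona`, the model name, and the template wording are illustrative assumptions rather than the paper's exact prompts.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def synthesize_with_persona(persona: str, task: str, model: str = "gpt-4o") -> str:
    """Attach a persona description to a data-synthesis task to steer the LLM's perspective."""
    prompt = (
        f"{task}\n\n"
        f"Write it from the perspective of the following persona:\n{persona}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # a higher temperature encourages more varied outputs
    )
    return response.choices[0].message.content

# Example: synthesize a realistic user instruction (prompt) for one persona.
print(synthesize_with_persona(
    persona="A freight logistics coordinator who optimizes delivery routes",
    task="Write a realistic request that this user might send to an AI assistant.",
))
```

Swapping in different personas from Persona Hub while keeping the task fixed is what produces the diversity of the resulting data.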
2. How does Persona Hub enable scaling up of diverse synthetic data creation?
- Persona Hub contains 1 billion diverse personas automatically curated from web data, which can act as distributed carriers of world knowledge. These personas can tap into almost every perspective encapsulated within the LLM, facilitating the creation of diverse synthetic data at billion scale.
3. What are the two scalable approaches used to derive diverse personas for Persona Hub?
- The two approaches are:
- Text-to-Persona: Inferring personas from web text data by prompting the LLM.
- Persona-to-Persona: Deriving additional personas from those obtained via Text-to-Persona through their interpersonal relationships (e.g., an existing persona's colleague, patient, or family member); a sketch of both steps follows this list.
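The sketch below outlines the two steps, assuming an OpenAI-compatible chat API; the prompt wording and the helper names (`complete`, `text_to_persona`, `persona_to_persona`) are illustrative assumptions, not the paper's exact templates.

```python
from openai import OpenAI

client = OpenAI()

def complete(prompt: str, model: str = "gpt-4o") -> str:
    """Send a single-turn prompt to the LLM and return its reply."""
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def text_to_persona(web_text: str) -> str:
    """Infer a persona who would likely read, write, or care about a piece of web text."""
    return complete(
        "Who is likely to read, write, like, or dislike the following text? "
        "Describe that person in one sentence.\n\n"
        f"Text: {web_text}"
    )

def persona_to_persona(seed_persona: str) -> str:
    """Derive a new persona via an interpersonal relationship with an existing one."""
    return complete(
        "Describe, in one sentence, a person who is in a close relationship "
        "(e.g., a colleague, patient, student, or family member) with the persona below.\n\n"
        f"Persona: {seed_persona}"
    )
```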
4. How does the article address the issue of persona deduplication in Persona Hub?
- The article uses two deduplication methods:
- MinHash-based deduplication using n-gram features of persona descriptions.
- Embedding-based deduplication that filters out personas whose text embeddings have high cosine similarity; a sketch of both filters follows this list.
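The two filters can be sketched roughly as follows using the `datasketch` and `sentence-transformers` libraries; the character n-gram size, signature size, embedding model, and 0.9 thresholds are illustrative assumptions, not necessarily the settings used to build Persona Hub.

```python
from datasketch import MinHash, MinHashLSH
from sentence_transformers import SentenceTransformer

def minhash_dedup(personas: list[str], threshold: float = 0.9) -> list[str]:
    """Greedy MinHash deduplication over character 3-gram features."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for i, text in enumerate(personas):
        m = MinHash(num_perm=128)
        for j in range(max(len(text) - 2, 1)):
            m.update(text[j:j + 3].encode("utf8"))  # character 3-grams as features
        if not lsh.query(m):  # keep only if no near-duplicate has been kept so far
            lsh.insert(str(i), m)
            kept.append(text)
    return kept

def embedding_dedup(personas: list[str], threshold: float = 0.9) -> list[str]:
    """Greedily drop personas whose embedding cosine similarity to a kept persona is too high."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(personas, normalize_embeddings=True)
    kept_idx: list[int] = []
    for i in range(len(personas)):
        # At billion scale this pairwise scan would use an approximate nearest-neighbor index.
        if all(float(emb[i] @ emb[k]) < threshold for k in kept_idx):
            kept_idx.append(i)
    return [personas[i] for i in kept_idx]

raw_personas = [
    "A pediatric nurse at a children's hospital",
    "A nurse working in a children's hospital pediatric ward",
    "A freight logistics coordinator who optimizes delivery routes",
]
print(embedding_dedup(minhash_dedup(raw_personas)))
```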
[02] Use Cases of Persona-driven Synthetic Data Creation
1. What are the key use cases of Persona Hub demonstrated in the article?
- The article showcases the use of Persona Hub in synthesizing:
- Mathematical and logical reasoning problems
- Instructions (user prompts)
- Knowledge-rich texts
- Game NPCs
- Tools (functions)
2. How does the persona-driven approach influence the creation of mathematical problems?
- Adding a persona to the prompt steers the LLM to create math problems grounded in that persona's perspective and knowledge, while personas of math professionals can be used to elicit more advanced and specialized math problems.
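For example, a persona-conditioned math prompt might look like the sketch below; the template wording and the optional `focus` hint are illustrative assumptions, not the paper's exact prompt.

```python
def build_math_prompt(persona: str, focus: str | None = None) -> str:
    """Build a persona-conditioned prompt for synthesizing one math problem."""
    prompt = (
        "Create a challenging math problem that requires multi-step reasoning. "
        "State the problem only, without the solution.\n\n"
        f"The problem should fit the perspective and knowledge of this persona:\n{persona}"
    )
    if focus:  # e.g., narrow the topic when using a math-professional persona
        prompt += f"\n\nFocus the problem on: {focus}"
    return prompt

print(build_math_prompt(
    "A number theory professor who studies prime distributions",
    focus="modular arithmetic",
))
```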
3. What are the key findings from the evaluation of the synthesized math problems?
- The 7B model fine-tuned on 1.07M synthesized math problems achieved 64.9% accuracy on the MATH benchmark, outperforming many open-source LLMs.
- The semantic similarity between synthesized math problems is correlated with but lower than the similarity between their corresponding personas, indicating the diversity of the synthesized problems.
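The similarity comparison can be reproduced in spirit with any sentence-embedding model, as in the rough sketch below; the embedding model and the toy persona/problem pairs are illustrative placeholders, not data from the paper.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts via normalized sentence embeddings."""
    emb = model.encode([a, b], normalize_embeddings=True)
    return float(np.dot(emb[0], emb[1]))

# Toy persona/problem pairs used only to illustrate the comparison.
persona_a = "A marine biologist tracking coral reef health"
persona_b = "A high-school track coach planning interval training"
problem_a = "A reef survey finds 42% coral cover across 3.5 square kilometers ..."
problem_b = "A sprinter runs 8 intervals of 200 m with 90-second rests ..."

print("persona similarity:", cosine(persona_a, persona_b))
print("problem similarity:", cosine(problem_a, problem_b))
```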
[03] Broad Impact and Ethical Concerns
1. How does Persona Hub potentially drive a paradigm shift in data creation?
- Traditionally, data creation has been the domain of humans, while LLMs have excelled at processing data. Persona Hub enables LLMs themselves to create diverse new data, potentially shifting the collaboration paradigm between humans and LLMs.
2. What are the key ethical concerns raised regarding Persona Hub?
- The ability to access the full memory of a target LLM by querying it with diverse personas poses a security risk, as it can lead to the extraction and replication of the LLM's knowledge and capabilities.
- The increased difficulty in detecting machine-generated content due to diverse personas may worsen issues related to data contamination and the spread of misinformation.