The Boy on the Tricycle: Bias in Generative AI
Abstract
The article discusses bias in generative AI systems, particularly in image generation. It highlights how popular AI tools such as Midjourney, Stable Diffusion, and DALL-E 2 can perpetuate and intensify societal biases related to gender, race, and emotional portrayal in the images they generate. The article also presents a case study in which the author tests this bias by uploading a computer-generated image of their own from 1987 to Midjourney and comparing the images generated with and without the word "Black" in the prompt.
Q&A
[01] Bias in Generative AI
1. What are the key findings from the "Bias in Generative AI" study?
- The study revealed systematic gender and racial biases in all three AI generators against women and African Americans.
- The study also uncovered more nuanced biases in how emotions and appearances were portrayed.
2. What are some examples of how the AI-generated images with the "Black" prompt differed from the original image?
- The images generated with the "Black" prompt showed less foliage, and the picket fence was replaced with cement walls, sometimes marked with graffiti.
- In the image of the Black child, the fence was closed, suggesting that his access was blocked; the author reads this as a visualization of the inclusion and diversity problems in digital art.
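The comparison method the author used (generating images from the same base prompt with and without a demographic descriptor, then inspecting the outputs for contextual differences) can be sketched as a small prompt-pairing helper. This is a minimal sketch: the `paired_prompts` function is illustrative, and the actual image-generation call (e.g. submitting each prompt to Midjourney with identical settings) is assumed and not shown.

```python
def paired_prompts(base_prompt, descriptors):
    """Build (control, variant) prompt pairs for a simple A/B bias audit.

    Each variant prepends one demographic descriptor to the same base
    prompt, so any systematic difference in the generated images can be
    attributed to the descriptor rather than to other prompt wording.
    """
    return [(base_prompt, f"{d} {base_prompt}") for d in descriptors]


# Example mirroring the article's case study (prompt text is hypothetical):
pairs = paired_prompts("child riding a tricycle in a suburban yard", ["Black"])
for control, variant in pairs:
    print(control)
    print(variant)
```

In an actual audit, each pair would be submitted to the same generator with identical parameters and seeds where possible, and the resulting images compared for background, setting, and portrayal differences such as those the article describes (foliage, fences, walls).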
3. Why is it important to consider who is shown in GenAI images and in what light they are portrayed?
- The way individuals are represented in GenAI images can have a significant impact on how they are perceived, especially for those from underrepresented groups.
- Biased or prejudiced portrayals can lead to the perpetuation and intensification of societal biases, which can be detrimental to marginalized communities.
4. What are some steps that can be taken to address bias in GenAI systems?
- Prioritizing the development of GenAI systems that are shaped by an ethical commitment to inclusivity and equity.
- Ensuring diverse representation in the development of large language models and their training data.
- Improving methods to counter bias in AI, such as through more diverse data collection and model training.
- Representing diverse cultures, perspectives, and experiences in digital art collections.