An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary
🌈 Abstract
The article discusses the hyperrealistic AI-generated avatars developed by the startup Synthesia and the implications of this technology for the future of media and truth. It explores the challenges of creating realistic avatars, the company's consent and content-moderation safeguards, and the broader societal concerns around the rise of synthetic media.
🙋 Q&A
[01] An AI startup made a hyperrealistic deepfake of me that's so good it's scary
1. What are the key points about Synthesia's new technology?
- Synthesia has developed a new generation of AI-generated avatars that are more realistic and expressive than anything seen before
- The avatars can better match facial expressions, reactions, and intonation to the script
- This technological progress raises concerns about the increasing difficulty in distinguishing real from synthetic content
2. What are the author's thoughts on the distinction between "synthetic media" and "deepfakes"?
- The author wonders whether the distinction between "synthetic media" and "deepfakes" is ultimately meaningless, since the end result can be the same
- Even with different intent and explicit consent, the author questions whether AI avatars can be created safely when the output is indistinguishable from reality
3. What are the author's concerns about the implications of this technology?
- The author is concerned about the threat to trust in everything we see, which could have dangerous consequences
- The author questions whether we really want to get out of the "uncanny valley" if it means we can no longer grasp the truth
[02] The process of creating the author's AI avatar
1. What was the process of creating the author's AI avatar?
- The author went through a data collection process where their facial features, mannerisms, and voice were captured
- This involved the author reading scripts, expressing different emotions, and having their hands filmed to test the technology's capabilities
2. How does Synthesia ensure consent and content moderation?
- Synthesia has a policy of not creating avatars without the subject's explicit consent
- The company has put in place rigorous verification and content moderation systems, including watermarking, AI-powered filters, and human moderators
3. What are the limitations of the author's AI avatar?
- The avatar's emotional range is limited, and it sometimes fails to match the author's natural speech patterns and accent
- The avatar is currently limited to a front-facing, portrait-style presentation, but Synthesia is working on developing full-body avatars that can move and converse
[03] The broader implications of AI-generated avatars
1. What are the concerns about the proliferation of AI-generated content?
- The author is concerned that the growing wave of AI-generated content will erode trust, as it becomes increasingly difficult to distinguish real content from synthetic
- There are fears that this could enable bad actors to plausibly deny real evidence and content, leading to a "liar's dividend"
2. What are the author's thoughts on the future of AI-generated avatars?
- The author sees a bleak future in which humans consume AI-generated content presented by AI avatars, in a self-perpetuating cycle of synthetic content creation
- The author believes the technology sector needs to urgently improve its content moderation practices and ensure robust content provenance techniques
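The "content provenance" the author calls for is not specified in the article, so the following is only a minimal illustrative sketch of the general idea: a publisher signs a hash of a media file, and a consumer later checks that the file still matches the signature. It is not Synthesia's system or the C2PA standard (which uses public-key certificates and manifests embedded in the media itself); the filenames and the shared secret key are hypothetical, chosen to keep the example self-contained.

```python
# Sketch of a sidecar provenance manifest: sign a media file's hash at
# publication time, verify it at consumption time. Illustrative only.
import hashlib
import hmac
import json
from pathlib import Path

SECRET_KEY = b"publisher-signing-key"  # hypothetical shared secret, for illustration


def sign_media(media_path: str, manifest_path: str) -> None:
    """Write a sidecar manifest containing the file's hash and an HMAC signature."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    manifest = {"file": media_path, "sha256": digest, "signature": signature}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))


def verify_media(media_path: str, manifest_path: str) -> bool:
    """Return True only if the file is unchanged and the signature checks out."""
    manifest = json.loads(Path(manifest_path).read_text())
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(
        expected, manifest["signature"]
    )


if __name__ == "__main__":
    # Stand-in bytes so the demo runs without a real video file.
    Path("avatar_clip.mp4").write_bytes(b"placeholder media bytes")
    sign_media("avatar_clip.mp4", "avatar_clip.manifest.json")
    print(verify_media("avatar_clip.mp4", "avatar_clip.manifest.json"))  # True
```

Any edit to the media bytes after signing makes verification fail, which is the basic guarantee provenance schemes aim for; production systems add certificate chains and tamper-evident embedding rather than a shared secret.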
3. What are the author's final reflections on the experience of having an AI avatar created?
- The author found the experience of seeing their avatar both fascinating and unsettling, as it highlighted the nuanced ways in which the avatar did not fully capture the author's mannerisms and speech patterns
- The author concludes that while Synthesia's technology is significantly better than anything seen before, the implications of this technology for the future of media and truth remain deeply concerning.