OpenAI training and inference costs could reach $7bn for 2024, AI startup set to lose $5bn - report
Abstract
The article discusses OpenAI's significant spending on training and inference for its AI models, particularly ChatGPT, and the potential need for the company to raise more funds to cover growing losses.
Q&A
[01] OpenAI's Spending and Costs
1. What are the key details about OpenAI's spending and costs?
- OpenAI is set to spend nearly $4 billion this year on Microsoft's servers to run inference workloads for ChatGPT
- The company has the equivalent of 350,000 servers containing Nvidia A100 chips for inference, with around 290,000 of those servers used for ChatGPT
- Training ChatGPT as well as new models could cost as much as $3 billion this year
- OpenAI gets heavily discounted rates from Microsoft Azure, paying about $1.30 per A100 server per hour (a rough check of the $4 billion figure appears after this list)
- The company now employs about 1,500 people, which could amount to roughly $1.5 billion in workforce costs this year as it continues to grow
- OpenAI had originally projected workforce costs of $500 million for 2023 while doubling headcount to around 800
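A quick back-of-the-envelope check of the inference figure. Assuming, as a simplification not stated in the article, that all 350,000 A100-equivalent servers run around the clock at the discounted $1.30/hour rate:

```python
# Rough sanity check of the ~$4bn inference spend, using the figures above.
# Assumes 24/7 utilization of every server at the discounted rate; the article
# does not state utilization, so this is only an order-of-magnitude check.

servers = 350_000              # A100-equivalent servers used for inference
rate_per_server_hour = 1.30    # discounted Azure rate, USD per server-hour
hours_per_year = 24 * 365

annual_inference_cost = servers * rate_per_server_hour * hours_per_year
print(f"~${annual_inference_cost / 1e9:.2f}bn per year")  # ~$3.99bn, close to the $4bn cited
```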
2. What are the potential revenue sources for OpenAI?
- OpenAI is bringing in about $2 billion annually from ChatGPT
- The company could be set to bring in nearly $1 billion from charging for access to its large language models (LLMs)
- OpenAI's total revenue recently reached about $283 million per month, which could translate into full-year sales of between $3.5 billion and $4.5 billion (see the run-rate sketch after this list)
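A minimal run-rate sketch of how the monthly figure maps to the quoted full-year range. The straight-line annualization is an assumption on my part; the upper end of the range presumably reflects continued growth:

```python
# Straight-line annualization of the most recent monthly revenue figure.
monthly_revenue = 283_000_000          # USD, total revenue per month
run_rate = monthly_revenue * 12        # no growth assumed
print(f"Run rate: ${run_rate / 1e9:.2f}bn")   # ~$3.40bn

# The article's $3.5bn-$4.5bn full-year range sits above this run rate,
# implying the estimate assumes revenue keeps growing through the year.
```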
3. What is the potential financial shortfall for OpenAI?
- Set against its projected revenue, OpenAI's estimated spending could leave a shortfall of about $5 billion (a rough reconciliation follows this list)
- The company will likely need fresh funds within the next 12 months to cover the shortfall
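A rough reconciliation of the shortfall using the article's own estimates. Treating each cost figure as an annual total and netting it against the projected revenue range is my assumption:

```python
# Rough reconciliation of the ~$5bn shortfall from the article's estimates.
inference_cost = 4.0e9    # ~$4bn on Azure inference
training_cost  = 3.0e9    # up to $3bn on training
workforce_cost = 1.5e9    # ~$1.5bn on staff
total_costs = inference_cost + training_cost + workforce_cost   # ~$8.5bn

revenue_low, revenue_high = 3.5e9, 4.5e9   # projected full-year sales range
shortfall_low = total_costs - revenue_high
shortfall_high = total_costs - revenue_low
print(f"Shortfall: ${shortfall_low/1e9:.1f}bn to ${shortfall_high/1e9:.1f}bn")  # ~$4bn to ~$5bn
```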