
The long long tail of AI applications

🌈 Abstract

The article discusses the long tail of AI applications, highlighting the distinction between foundational AI (creating models like GPT-4) and applied AI (using existing models to create smart applications). It argues that there are significantly more companies focused on applied AI than foundational AI, and these applied AI companies will be busy for decades to come.

🙋 Q&A

[01] The long tail of AI applications

1. What is the distinction between foundational AI and applied AI?

  • Foundational AI refers to creating models like GPT-4 (text) and Sora (video).
  • Applied AI refers to using existing models to create smart applications.

2. Why are there more companies focused on applied AI than foundational AI?

  • To get the most out of large language models (LLMs) like GPT-4, you need to ask the right questions and provide the right context.
  • LLMs have access to limited context by default, so agents need to be programmed to pass the right context to the LLM.
  • LLMs are not artificial general intelligence (AGI), so agents need to be programmed manually for specific use cases.
  • LLMs don't understand specialized problems, as they are trained on public internet data, which is mostly in English.
  • Integrating AI into every aspect of a product is a lot of work, as each "smart" feature requires careful consideration of prompts, context, and agent programming.

3. What is the long tail of AI applications?

  • Just as there was a long tail of devices that gained WiFi connectivity (laptops, smartphones, TVs, etc.), there is a similarly long tail of applications where AI can add value.
  • Incorporating AI into all these applications will be a long-term effort, as each application requires careful integration of prompts, context, and agent programming.

[02] Asking the right questions

1. What is the lesson from "The Hitchhiker's Guide to the Galaxy" about asking the right questions?

  • In the story, a computer is asked "The Ultimate Question of Life, the Universe, and Everything" and answers "42", highlighting that the quality of the answer depends on the quality of the question.
  • Similarly, with LLMs like ChatGPT, much of the intelligence in a conversation comes from the human asking the right questions.

2. Why is it difficult to get LLMs to give correct responses without providing a lot of context?

  • LLMs have access to very limited context by default, so when applying them to a specific product or use case, engineers need to supply substantial context before the LLM can give a correct response.
  • For example, when asking an HR-related question, the LLM would need context about the employee's contract, company policies, regulations, and the potential impact of the change.
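The HR example above can be sketched in a few lines. This is an illustrative sketch only: the field names and sample data are invented, and a real system would retrieve these documents from HR databases rather than hard-code them.

```python
# Invented sample data standing in for real HR documents.
employee_contract = "Full-time, 40 hours/week, 25 vacation days per year."
company_policy = "Unused vacation days expire at the end of March."
regulations = "National law guarantees a minimum of 20 vacation days."

def build_hr_prompt(question: str) -> str:
    """Bundle contract, policy, and regulatory context into one prompt."""
    return (
        "Answer the HR question using only the context below.\n\n"
        f"Employee contract: {employee_contract}\n"
        f"Company policy: {company_policy}\n"
        f"Regulations: {regulations}\n\n"
        f"Question: {question}"
    )

print(build_hr_prompt("How many vacation days do I have left?"))
```

Without this assembled context, the model would have to guess at contract terms and local law; with it, the question becomes answerable from the prompt alone.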

[03] Limitations of LLMs

1. Why are LLMs not artificial general intelligence (AGI)?

  • LLMs are limited in their abilities and "just" generate text based on the provided context, rather than having true general intelligence.
  • To get the most out of LLMs, an agent structure needs to be programmed manually to give instructions, check answers, and provide the right context.
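The agent structure described above can be sketched as a simple loop: give instructions, call the model, check the answer, and retry with feedback. Here `call_llm` and `looks_valid` are hypothetical placeholders, not a real API; a production agent would call an actual model and run far stricter checks.

```python
def call_llm(prompt: str) -> str:
    # Placeholder standing in for a real chat-completion call (e.g. to GPT-4).
    return "stub answer"

def looks_valid(answer: str) -> bool:
    # Trivial check; real agents validate format, run tools, or fact-check here.
    return bool(answer.strip())

def run_agent(task: str, context: str, max_attempts: int = 3) -> str:
    # Give instructions and provide the right context up front.
    prompt = (
        "Instructions: answer concisely using the context below.\n"
        f"Context: {context}\n"
        f"Task: {task}"
    )
    for _ in range(max_attempts):
        answer = call_llm(prompt)
        if looks_valid(answer):          # check the answer
            return answer
        # Feed failure back into the next attempt.
        prompt += "\nThe previous answer was invalid; try again."
    raise RuntimeError("no valid answer after retries")
```

All three responsibilities the summary lists, instructions, answer checking, and context, live in this hand-written loop rather than in the model itself.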

2. How do specialized problems pose a challenge for LLMs?

  • LLMs are trained on public internet data, which is mostly in English and lacks the specialized knowledge and terminology required for certain domains, such as tender management.
  • Applying LLMs to these specialized problems would require a lot of additional work to provide the necessary context and expertise.

[04] Integrating AI into products

1. Why is integrating AI into every aspect of a product a lot of work?

  • For each "smart" feature, the engineers need to carefully consider the right prompts, context, and agent programming to make it work effectively.
  • If a product has many input fields, each one would require this level of work to make it AI-powered, which is a significant undertaking.
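A minimal sketch of why each field is separate work: every AI-powered input needs its own prompt template and wiring. The field names, templates, and `call_llm` stub below are all hypothetical, chosen only to illustrate the pattern.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"suggestion for: {prompt[:40]}"

FIELD_PROMPTS = {
    # One carefully written entry per AI-powered field.
    "email_subject": "Suggest a short subject line for this email:\n{context}",
    "summary": "Summarize the following in one sentence:\n{context}",
}

def suggest(field: str, context: str) -> str:
    template = FIELD_PROMPTS[field]   # a new smart field means new prompt work
    return call_llm(template.format(context=context))
```

A product with dozens of input fields needs dozens of such entries, each with its own prompt design, context gathering, and testing, which is the "significant undertaking" the summary refers to.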

2. How does the author compare the long tail of AI applications to the adoption of WiFi in various devices?

  • Just as WiFi connectivity spread from laptops to smartphones, TVs, and other devices, the author suggests there is a similarly long tail of applications where AI can add value.
  • Incorporating AI into all these applications will be a long-term effort, as each one requires careful integration of prompts, context, and agent programming.
© 2024 NewMotor Inc.