magic starSummarize by Aili

The Prompt Report: A Systematic Survey of Prompting Techniques

๐ŸŒˆ Abstract

Generative Artificial Intelligence (GenAI) systems are being increasingly deployed across all parts of industry and research settings. Developers and end users interact with these systems through the use of prompting or prompt engineering. This paper establishes a structured understanding of prompts, by assembling a taxonomy of prompting techniques and analyzing their use. The authors present a comprehensive vocabulary of 33 vocabulary terms, a taxonomy of 58 text-only prompting techniques, and 40 techniques for other modalities. They further present a meta-analysis of the entire literature on natural language prefix-prompting.

๐Ÿ™‹ Q&A

[01] Introduction

1. What is the purpose of this paper? The purpose of this paper is to establish a structured understanding of prompts by assembling a taxonomy of prompting techniques and analyzing their use. The authors aim to create a robust resource of terminology and techniques in the field of prompting.

2. What is the scope of this study? The study focuses on discrete prefix prompts rather than cloze prompts, and on hard (discrete) prompts rather than soft (continuous) prompts. It also only studies task-agnostic techniques to keep the work approachable to less technical readers and maintain a manageable scope.

3. What is a prompt? A prompt is an input to a Generative AI model that is used to guide its output. Prompts may consist of text, image, sound, or other media.

4. What is a prompt template? A prompt template is a function that contains one or more variables which will be replaced by some media (usually text) to create a prompt. This prompt can then be considered to be an instance of the template.

[02] A Meta-Analysis of Prompting

1. What was the systematic review process used in this paper? The authors conducted a machine-assisted systematic review grounded in the PRISMA process to identify 58 different text-based prompting techniques, from which they create a taxonomy with a robust terminology of prompting terms.

2. What are the 6 major categories of text-based prompting techniques presented in the taxonomy? The 6 major categories are: In-Context Learning, Zero-Shot, Thought Generation, Decomposition, Ensembling, and Self-Criticism.

3. What is the difference between Few-Shot Prompting and Few-Shot Learning? Few-Shot Prompting is specific to prompts in the GenAI settings and does not involve updating model parameters, while Few-Shot Learning is a broader machine learning paradigm to adapt parameters with a few examples.

4. What are the 6 design decisions that critically influence the output quality of Few-Shot Prompting? The 6 design decisions are: Exemplar Quantity, Exemplar Ordering, Exemplar Label Distribution, Exemplar Label Quality, Exemplar Format, and Exemplar Similarity.

[03] Beyond English Text Prompting

1. What are some key multilingual prompting techniques discussed? Some key multilingual prompting techniques include Translate First Prompting, XLT (Cross-Lingual Thought) Prompting, Cross-Lingual Self Consistent Prompting (CLSP), X-InSTA Prompting, and In-CLT (Cross-lingual Transfer) Prompting.

2. What are some key multimodal prompting techniques discussed? Some key multimodal prompting techniques include Prompt Modifiers, Paired-Image Prompting, Image-as-Text Prompting, Duty Distinct Chain-of-Thought (DDCoT), and Chain-of-Images (CoI).

[04] Extensions of Prompting

1. How are agents defined in the context of GenAI? Agents are GenAI systems that serve a user's goals via actions that engage with systems outside the GenAI itself, such as making API calls to use external tools like a calculator.

2. What are some examples of tool use agents discussed? Examples include the Modular Reasoning, Knowledge, and Language (MRKL) System, Self-Correcting with Tool-Interactive Critiquing (CRITIC), and Program-aided Language Model (PAL).

3. What are the 3 main components of an answer engineering framework? The 3 main components are: Answer Shape (the physical format of the output), Answer Space (the domain of values the output can contain), and Answer Extractor (a rule or function to extract the final answer from the model output).

[05] Prompting Issues

1. What is prompt hacking and what are the main risks associated with it? Prompt hacking refers to attacks that manipulate the prompt to exploit GenAI models. Risks include privacy concerns from leaking training data or prompt templates, and security vulnerabilities from generated code.

2. What are some techniques discussed to mitigate prompt hacking? Techniques include prompt-based defenses, guardrails, and detectors that classify malicious inputs.

3. How can prompts be designed to improve model alignment and reduce biases, overconfidence, and ambiguity? Techniques include using balanced demonstrations, injecting cultural awareness, and prompting for clarification of ambiguous questions.

[06] Benchmarking

1. What benchmark was used to evaluate a subset of prompting techniques? The authors evaluated a subset of prompting techniques on the Multimodal-ManyLang-UnifiedBenchmark (MMLU).

2. What were the key findings from the benchmark evaluation? The benchmark evaluation found that different prompting techniques can have a significant impact on model performance, highlighting the importance of prompt engineering.

</output_format>

Shared by Daniel Chen ยท
ยฉ 2024 NewMotor Inc.