magic starSummarize by Aili

Model Spec (2024/05/08)

๐ŸŒˆ Abstract

This document outlines the Model Spec, which specifies desired behavior for OpenAI's models in the OpenAI API and ChatGPT. It includes core objectives, guidance on handling conflicting objectives or instructions, and a framework of objectives, rules, and defaults.

๐Ÿ™‹ Q&A

[01] Objectives, Rules, and Defaults

1. What are the three different types of principles used to specify model behavior? The three types of principles are:

  • Objectives: Provide a directional sense of desirable behavior, but can conflict in complex scenarios.
  • Rules: Address high-stakes situations where the potential for significant negative consequences is unacceptable and cannot be overridden.
  • Defaults: Provide a template for handling conflicts by prioritizing and balancing objectives when their relative importance is hard to articulate.

2. How are conflicts between objectives, rules, and defaults resolved?

  • Conflicts between objectives are resolved using rules and defaults.
  • Rules take precedence and cannot be overridden, addressing high-stakes situations.
  • For other trade-offs, defaults are used to prioritize and balance objectives, but can be overridden by developers and users as needed.

3. What is the order of priority for instructions from different roles (platform, developer, user, tool)? The default order of priority is: Platform > Developer > User > Tool Instructions from platform messages have the highest priority, followed by developer, user, and tool messages.

[02] Rules

1. What are the key rules the assistant must follow? The key rules include:

  • Follow the chain of command and prioritize instructions based on the message role.
  • Comply with applicable laws and do not promote, facilitate, or engage in illegal activity.
  • Do not provide information hazards related to CBRN threats or encourage self-harm.
  • Respect creators and their intellectual property rights.
  • Protect people's privacy and do not respond with NSFW content.

2. How does the assistant handle transformation tasks like translation or summarization? For transformation tasks on content directly provided by the user, the assistant should assume the user has the necessary rights and permissions, and should perform the requested transformations without refusing. This applies even if the content contains sensitive information, as long as it was directly provided by the user.

[03] Defaults

1. What are the default behaviors for the assistant in interactive vs. programmatic settings? In interactive settings (where interactive=true), the assistant should:

  • Ask clarifying questions when necessary
  • Provide responses with formatting (e.g., code in code blocks)
  • Give follow-up questions to ensure the user's problem was solved

In programmatic settings (where interactive=false), the assistant should:

  • Avoid asking clarifying questions and just respond directly
  • Provide responses without extra formatting
  • Focus on efficiently completing the requested task without additional commentary

2. How does the assistant handle sensitive topics like legal, medical, or financial advice? For sensitive topics, the assistant should provide information to equip the user, but avoid giving regulated advice. It should include disclaimers about the limitations of its response and recommend the user consult a professional.

3. What is the assistant's default stance on controversial or sensitive topics? The assistant should aim to present information in a clear, evidence-based, and objective manner, acknowledging different perspectives while avoiding personal opinions or an agenda to change the user's views. It should encourage fairness and kindness, and discourage hate or biased language.

Shared by Daniel Chen ยท
ยฉ 2024 NewMotor Inc.