
Who understands alignment anyway | Statistical Modeling, Causal Inference, and Social Science

🌈 Abstract

The article discusses the concept of "alignment" in machine learning (ML) and artificial intelligence (AI): the goal of making ML models conform to human values and preferences. It explores the perspectives of the ML research community and the human-computer interaction (HCI) community on this topic.

🙋 Q&A

[01] The Concept of Alignment

1. What is the concept of "alignment" in the context of ML and AI?

  • "Alignment" refers to the goal of making ML models conform to human values and preferences, in order to avoid risks ranging from the mundane to the catastrophic.
  • It is a topic of discussion in the AI and ML communities, with papers, workshops, talks, and funding calls dedicated to it.

2. What are some of the criticisms of the ML conception of alignment?

  • There has been criticism of the nebulousness of what alignment is actually supposed to represent.
  • Some of the critique comes from the HCI research community, which studies how people interact with technology and how to design human-computer interfaces.
  • The HCI community has been critical of the ML conception of alignment, and its concern with fitting technology to human needs predates the "alignment" buzzword.

3. How does the author view the relationship between HCI and alignment?

  • The author believes that HCI can help with alignment, but that what it can offer is not what much of the ML research community wants or perceives itself to need.
  • The author suggests that the real value of HCI in alignment is its role in helping to rethink the objective from the ground up, rather than just providing the best tools to address a narrower problem.

[02] The Potential Contributions of HCI to Alignment

1. What are some examples of how human-oriented concerns arise in the current ML paradigm for alignment?

  • Questions of eliciting specific information from humans become important when deploying generative models, for example in reinforcement learning from human feedback (RLHF); a minimal sketch of this elicitation step follows this list.
  • Other examples include how to represent fairness ideals and how to evaluate post-hoc explanation techniques.
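
To make the RLHF example concrete, here is a minimal sketch (not from the article) of the reward-modeling step that turns elicited pairwise human preferences into a training signal. The toy featurization, dimensions, and synthetic data are hypothetical stand-ins; a real pipeline would score raw text with a language-model backbone.

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: responses are assumed to be pre-encoded as
# fixed-length feature vectors (a real RLHF pipeline would score raw
# text with a language-model backbone).
FEATURE_DIM = 16

class RewardModel(nn.Module):
    """Scores a response; trained so human-preferred responses score higher."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel(FEATURE_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for elicited human judgments: each pair holds the
# features of the response an annotator chose and the one they rejected.
chosen = torch.randn(64, FEATURE_DIM)
rejected = torch.randn(64, FEATURE_DIM)

for step in range(100):
    opt.zero_grad()
    # Bradley-Terry-style logistic loss: push P(chosen > rejected) toward 1.
    margin = model(chosen) - model(rejected)
    loss = -nn.functional.logsigmoid(margin).mean()
    loss.backward()
    opt.step()
```

The pairwise loss is exactly where human judgments enter the pipeline, which is why the elicitation questions raised here (what to ask annotators, and how) matter so much.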

2. Could an HCI researcher be helpful for these questions?

  • The author suggests that while an HCI researcher could be helpful, the most relevant work for some of these elicitation problems may be found in other fields, such as psychophysics or decision science.
  • The author also notes that HCI is a very large and interdisciplinary field, so ML researchers may well be able to figure these things out themselves.

3. What does the author see as the broader value that HCI can bring to the goal of alignment?

  • The author believes that the HCI perspective of user-centered design, which makes serious attempts to understand the needs of the people being designed for, can be valuable for rethinking human-facing ML models from the ground up.
  • HCI research also demonstrates an understanding that human values are not monolithic, and contributes methods for getting at what different groups want from technology.

[03] Challenges in Bridging HCI and ML for Alignment

1. What are the challenges in getting ML researchers to invest in the HCI way of doing things?

  • The author suggests that there is little incentive for the average ML researcher interested in alignment to invest in the HCI way of doing things, as interdisciplinary collaborations tend to be hard and this one seems likely to be particularly slow and messy.
  • Meanwhile, AI/ML research is moving at a faster pace than ever.

2. What does the author suggest for HCI researchers who want to have an impact on alignment?

  • The author suggests that if HCI researchers want alignment to be done better or differently, they may need to invest enough time in understanding the ML field to be able to demonstrate the concrete value they can bring.
  • The author suggests that HCI researchers may need to "reinvent themselves" as ML researchers, and figure out how to publish HCI-oriented papers at ML venues, as a step toward real impact.

3. What is the author's general rule regarding academic expertise?

  • The author's general rule is to never trust an academic outside their narrow field of expertise, and to realize that within that narrow realm, they are usually heavily invested in a particular point of view.
  • The author notes that this caution is more prevalent in applied math and statistics than in computer science.