Will ARC's Heuristic Arguments research substantially advance AI alignment before 2027?

ARC's Heuristic Arguments research agenda aims to formalize the notion of intuitively (but not deductively) valid heuristic arguments, primarily in order to solve the problem of eliciting latent knowledge (ELK) in AI alignment.
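
For a sense of what a heuristic argument is, a classic example (not specific to ARC's formalism) is the informal case for the twin prime conjecture: presuming that "n is prime" and "n + 2 is prime" behave independently yields a quantitative estimate that is persuasive but not a proof. The sketch below is my own plain-Python illustration of that naive independence estimate compared against an actual count, not anything from ARC's research:

```python
# Illustration only: a classic informal heuristic argument, of the kind ARC's
# agenda aims to formalize. Presume the events "n is prime" and "n + 2 is prime"
# are independent; each has density roughly 1/ln(n), so the number of twin prime
# pairs below N should be roughly N / ln(N)^2. Persuasive, but not a proof.
import math

def is_prime(n: int) -> bool:
    """Trial-division primality check (fine for small N)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

N = 100_000
actual = sum(1 for n in range(2, N) if is_prime(n) and is_prime(n + 2))
naive_estimate = N / math.log(N) ** 2

print(f"twin prime pairs below {N}: {actual}")
print(f"naive independence estimate: {naive_estimate:.0f}")
```

The naive estimate undercounts: primality of n and n + 2 is not quite independent, and a more careful heuristic argument multiplies the estimate by the twin prime constant (about 1.32). Refining estimates in this way is the kind of reasoning a formal theory of heuristic arguments would need to capture.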

A theory of impact for the research agenda is given in this paper (https://arxiv.org/pdf/2211.06738.pdf):

'Heuristic arguments may let us see “why” a model makes its predictions. We could potentially use them to distinguish cases where similar behaviors are produced by very different mechanisms—for example distinguishing cases where a model predicts that a smiling human face will show up on camera because it predicts there will actually be a smiling human in the room, from cases where it makes the same prediction because it predicts that the camera will be tampered with.'
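To make the mechanistic-anomaly-detection framing concrete, here is a minimal, hypothetical sketch. It is not ARC's method (which would use heuristic arguments rather than raw activation statistics); it only illustrates the kind of tool the theory of impact points at: summarize a model's internal activations on trusted episodes, then flag episodes whose outputs are produced by a noticeably different internal mechanism. The function names and the diagonal-Gaussian scoring are my own simplifying assumptions.

```python
# Hypothetical sketch only: ARC's heuristic arguments are not yet an implementable
# algorithm, so this stands in for the *kind* of tool mechanistic anomaly detection
# aims for. We summarize a model's hidden activations on trusted inputs and flag
# inputs whose activations look like they were produced by a different internal
# mechanism. Function names and the diagonal-Gaussian score are assumptions.
import numpy as np

def fit_reference(trusted_activations: np.ndarray):
    """Summarize activations (shape: n_inputs x n_features) from trusted episodes."""
    mean = trusted_activations.mean(axis=0)
    var = trusted_activations.var(axis=0) + 1e-6  # diagonal covariance keeps it simple
    return mean, var

def anomaly_score(activations: np.ndarray, mean: np.ndarray, var: np.ndarray) -> float:
    """Mahalanobis-style distance of one input's activations from the reference."""
    return float(np.sum((activations - mean) ** 2 / var))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for activations from episodes where the model predicts a smiling
    # face because a smiling human really is in the room ...
    trusted = rng.normal(0.0, 1.0, size=(1000, 64))
    mean, var = fit_reference(trusted)

    normal_input = rng.normal(0.0, 1.0, size=64)
    # ... versus an episode where the same prediction arises from camera tampering,
    # modeled here as a shifted activation distribution.
    tampered_input = rng.normal(3.0, 1.0, size=64)

    print("normal episode score:  ", anomaly_score(normal_input, mean, var))
    print("tampered episode score:", anomaly_score(tampered_input, mean, var))
```

In the camera example from the quote, the hope is that a detector of this general shape would score the tampering episode as anomalous even though the model's output looks identical; heuristic arguments are meant to supply a principled, scalable version of that comparison rather than an ad hoc statistical one.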

This question resolves YES if, on Jan 1, 2027, it is clear that the research agenda has "substantially advanced" AI alignment. Even if the research agenda eventually proves responsible for such an advance in 2027 or beyond, the question turns only on whether the advance is publicly clear by the close date.

A "substantial advance" is somewhat vague. In order to qualify what I mean, here are a list of things that would resolve this question YES:

  • Heuristic arguments lead to a broadly applicable and scalable technique for mechanistic anomaly detection and/or ELK

  • Heuristic arguments lead to a technique for formal or empirical verification of ML models that is SOTA in some important respect

  • A successful formalization of heuristic arguments is produced, and a large AI lab publicly mentions that the theory is central to some component of its research

  • The research agenda fails prima facie, but work produced in the process or directly inspired by the approach leads to such an advance

  • Paul Christiano publicly claims something to the effect of "ARC's Heuristic Arguments research substantially advanced AI alignment"

Here are some things that would resolve this question NO:

  • The research agenda succeeds mathematically, but is not considered to be directly applicable or ontologically central to ELK or alignment

  • Heuristic arguments produce a limited technique for mechanistic anomaly detection which works in toy settings but can't detect backdoors inserted in LLaMa2

Since the resolution of this question is somewhat subjective, I will not bet on it.
