Which of these AI Safety Research Futarchy projects will get Conf Accepted, if chosen?
- Goal Crystallisation: 24%
- Post-training order and CoT Monitorability: 37%
- Online Learning for Research Sabotage Mitigation: 24%
- Salient features of self-models: 41%
- Exploring more metacognitive capabilities of LLMs: 39%
- Model organisms resisting generalisation: 24%
- Detection game: 34%
- Research sabotage dataset: 26%
- Model Emulation: 24%

This is a derivative market of the markets linked to from this post.

For projects that are not chosen by the futarchy, the corresponding market here will resolve N/A. Otherwise, each market resolves YES if both of that project's underlying markets ("uploaded to arXiv" and "accepted to a top ML conference") resolve YES, and NO if either of them resolves NO or N/A.
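The resolution rule above can be sketched as a small function. This is an illustrative sketch only, not an official Manifold mechanism; the function name and string encodings ("YES", "NO", "N/A") are assumptions made for clarity:

```python
def resolve(chosen: bool, arxiv: str, accepted: str) -> str:
    """Illustrative sketch of this market's resolution rule.

    chosen:   whether the futarchy selected the project
    arxiv:    resolution of the "uploaded to arXiv" market ("YES", "NO", or "N/A")
    accepted: resolution of the "accepted to a top ML conference" market
    """
    if not chosen:
        # Projects not chosen by the futarchy resolve N/A here.
        return "N/A"
    # Otherwise: YES only if both underlying markets resolve YES;
    # any NO or N/A in either underlying market means NO.
    if arxiv == "YES" and accepted == "YES":
        return "YES"
    return "NO"
```

For example, a chosen project whose arXiv market resolves YES but whose conference market resolves N/A would resolve NO under this rule.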

Be aware that resolving markets N/A isn't always easy; I will do my best to ask for mod assistance if there is trouble.
