Which of these AI Safety Research Futarchy projects will get Conf Accepted, if chosen?
Goal Crystallisation: 24%
Post-training order and CoT Monitorability: 37%
Online Learning for Research Sabotage Mitigation: 24%
Salient features of self-models: 41%
Exploring more metacognitive capabilities of LLMs: 39%
Model organisms resisting generalisation: 24%
Detection game: 34%
Research sabotage dataset: 26%
Model Emulation: 24%
This is a derivative market of the markets linked to from this post.
For projects that are not chosen by the futarchy, the corresponding market here will resolve N/A. Otherwise, each market resolves YES if both the project's "uploaded to arXiv" and "accepted to a top ML conference" markets resolve YES, and NO if either of them resolves NO or N/A.
Be aware that resolving markets N/A isn't always easy; I will do my best to ask for mod assistance if there is trouble.
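To make the compound resolution rule above concrete, here is a minimal Python sketch. The function name and parameters (`chosen`, `arxiv`, `conference`) are illustrative placeholders, not Manifold API identifiers; each underlying market's outcome is represented as the string "YES", "NO", or "N/A".

```python
def resolve_derivative(chosen: bool, arxiv: str, conference: str) -> str:
    """Resolution of one project's market in this derivative group.

    chosen:     whether the futarchy selected this project
    arxiv:      outcome of the "uploaded to arXiv" market ("YES"/"NO"/"N/A")
    conference: outcome of the "accepted to a top ML conference" market
    """
    if not chosen:
        return "N/A"  # project not picked by the futarchy: resolves N/A
    if arxiv == "YES" and conference == "YES":
        return "YES"  # both underlying markets resolved YES
    return "NO"       # either underlying market resolved NO or N/A

# Example: a chosen project that reached arXiv but was rejected
# from the conference resolves NO.
assert resolve_derivative(True, "YES", "NO") == "NO"
assert resolve_derivative(False, "YES", "YES") == "N/A"
assert resolve_derivative(True, "YES", "YES") == "YES"
```

Note the asymmetry: non-selection gives N/A (trades are reversed), while an N/A on either underlying market after selection still gives NO here.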
This question is managed and resolved by Manifold.
Related questions
Will I be accepted to these AI safety fellowship programs for Winter 2026?
Will Anthropic be the best on AI safety among major AI labs at the end of 2025? (93% chance)
AI Safety Research Futarchy: Detection game
Which AI future will we get?
Is RLHF good for AI safety? [resolves to poll] (40% chance)
AI Safety Research Futarchy: Research sabotage dataset
In 2025 Jan, the UK AI summit will have been effective at AI safety? [Resolves to manifold poll] (25% chance)
AI Safety Research Futarchy: Online Learning for Research Sabotage Mitigation
AI Safety Research Futarchy: Goal Crystallisation
AI Safety Research Futarchy: Model Emulation