What alignment proposals and research directions will I be excited about by the end of 2023?
Resolved Jan 1

Final answer probabilities:
Infra-bayesianism: 72%
Other: 13%
Outsourcing alignment of AI to other AI: 0.7%
Reinforcement Learning from Human Feedback (RLHF): 0.3%
Transparency tools: 0.3%
Imitative amplification: 0.3%
Intermittent oversight: 0.3%
Relaxed adversarial training: 0.3%
Approval-based amplification: 0.3%
Microscope AI: 0.3%
STEM AI: 1.1%
Narrow reward modeling: 0.3%
Recursive reward modeling: 0.4%
AI safety via debate with transparency tools: 0.4%
Amplification with auxiliary RL objective: 0.4%
Shard theory mechanistic interpretability: 0.4%
Hodgepodge alignment: 6%
Cyborgism: 0.4%

