
In 1 year's time, what credence will John assign to the field of alignment converging toward primarily working on decoding the internal language of neural nets?
28% chance
https://www.lesswrong.com/posts/BzYmJYECAc3xyCTt6/the-plan-2022-update
Resolution will be determined by the earliest public statement John makes about his credences after 1st January 2024.
If John makes no such statement by 1st June 2024, this question will resolve N/A.
This question is managed and resolved by Manifold.
Related questions
Will we solve AI alignment by 2026?
8% chance
Will >= 1 alignment researcher/paper cite "maximum diffusion reinforcement learning" as alignment-relevant in 2025?
19% chance
Will I think that alignment is no longer "preparadigmatic" by the start of 2026?
20% chance
In 5 years will I think the org Conjecture was net good for alignment?
57% chance
Will tailcalled think that the Brain-Like AGI alignment research program has achieved something important by October 20th, 2026?
20% chance
Will we find polysemanticity via superposition in neurons in the brain before 2040?
64% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Will OpenAI announce a major breakthrough in AI alignment in 2024?
10% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Will Inner or Outer AI alignment be considered "mostly solved" first?