
How valuable is it to work on AI alignment TODAY, compared to other problems in AI?
Never closes
Much less valuable
Less valuable
As valuable as other problems
More valuable
Much more valuable
This question is managed and resolved by Manifold.
@robm I'm not familiar enough with either's use of the word, but in general you can say:
alignment research aims to make artificial general intelligence (AGI) aligned with human values and human intent.
My understanding is that AI alignment doesn't only deal with safety but also with ensuring the model is aligned to the goals of the user. Right now GPT-4 feels less aligned to my goals than it did a couple of months ago.
@Soli How is it that all these models are getting worse over time while the promises keep getting bigger?
You see, GPT-4 gets worse with each generation, whatever the reason, and each of the Claude models scores worse than its predecessor on benchmarks.
Related questions
How difficult do you think AI alignment is compared to other problems in AI?
POLL
Will we solve AI alignment by 2026?
8% chance
Is AI alignment computable?
34% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Will there be a well-accepted formal definition of value alignment for AI by 2030?
25% chance
Will I focus on the AI alignment problem for the rest of my life?
60% chance
How difficult will Anthropic say the AI alignment problem is?
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
51% chance