Much less valuable
Less valuable
As valuable as other problems
More valuable
Much more valuable
@robm I'm not familiar enough with either's use of the word, but in general you can say that alignment research aims to make artificial general intelligence (AGI) aligned with human values and human intent.
My understanding is that AI alignment doesn't only deal with safety but also with ensuring the model is aligned to the goals of the user. Right now, GPT-4 feels less aligned to my goals than it did a couple of months ago.
@Soli How is it that all these models are getting worse over time while the promises keep getting bigger?
You see GPT-4 getting worse with each generation, whatever the reason, and each of these Claude models scores worse than its predecessor on benchmarks.
Related questions
When will an AI model be better than me at competitive programming?
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved? (37% chance)
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community? (48% chance)
Will I focus on the AI alignment problem for the rest of my life? (62% chance)
When will AI be better than humans at AI research? (Basically AGI)
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved? (51% chance)
When will AIs be good at solving complex problems? (read description)
Is AI alignment computable? (33% chance)
In a year from today, will I have a satisfactory framework for describing the epistemology of AI alignment? (38% chance)