How valuable is it to work on AI alignment TODAY, compared to other problems in AI?
Never closes
Much less valuable
Less valuable
As valuable as other problems
More valuable
Much more valuable


Is this Alignment, like Yudkowsky often uses the word, in the X-risk sense?

Or alignment, like Altman tends to use it, as in non-offensive and helpful?

@robm I'm not familiar enough with either's use of the word, but in general you can say:

alignment research aims to make artificial general intelligence (AGI) aligned with human values and human intent.

my understanding is that AI alignment doesn't only deal with safety but also with ensuring the model is aligned to the goals of the user - right now GPT-4 feels less aligned to my goals than it was a couple of months ago

@Soli You might like:

@Soli How is it that all these models are getting worse with time, but the promises are getting bigger?

You see, GPT-4 gets worse each generation, whatever the reason. Each of these Claude models scores worse than its predecessor on benchmarks.
