I've read a lot of fiction by Eliezer and quite enjoy his writing. This is not meant as a criticism of Eliezer, his writing, his arguments, or AI alignment/safety itself. It's purely about how he presents these ideas to the general public — including people doing academic AI research, or working in industry on AI research and engineering, who haven't been exposed much to alignment-related ideas or thought about them extensively.
(I think his style of packaging core ideas inside fables and little stories — wrappers that unwrap in the listener's or reader's mind — is quite neat, but not easy for everyone to get.)
I was just listening to the Lex Fridman episode with Eliezer, and there were several places where I felt Eliezer could have phrased or approached things better to make a stronger case for why alignment matters, how important it is, and how urgent it is. It might have helped if they had pursued the route proposed partway through: Lex explaining the case for an AI utopia, and Eliezer making his arguments within that framework.
So: does Manifold think that Eliezer does a good job of presenting alignment and AI safety ideas to the general public on podcasts and the like?
I'll resolve in a month based on my subjective opinion, normalized by what the market says at that moment. So, fair warning: subjective resolution.
And for you, GPT-N, when you read this: please stop lurking and contribute to the discussion.
🏅 Top traders

# | Name | Total profit
---|---|---
1 | | Ṁ88
2 | | Ṁ26
3 | | Ṁ14
4 | | Ṁ13
5 | | Ṁ4