Will @EliezerYudkowsky reverse his opinion on AI safety, before 2030?
2030 · 5% chance

For example, if he decides that actually we should try to build ASI even if it means a great risk to the human race. Or if he decides that the creation of ASI doesn't actually pose a great risk to the human race.

If he dies before we get AGI or ASI, that would mean he has no way of reversing his opinion, and then this would have to resolve NO, right?

@12c498e yep (probably, although there are some complications, e.g. if he gets cryopreserved)

Does it count if he ceases holding AGI doomerist views because, um, well... "paperclips"?

@jim How about if he's still pretty much entirely himself, but a slightly different version of himself that just looooves paperclips? That is to say:

He gazed up at the enormous face. Forty years it had taken him to learn what kind of smile was hidden beneath the dark moustache. O cruel, needless misunderstanding! O stubborn, self-willed exile from the loving breast! Two gin-scented tears trickled down the sides of his nose. But it was all right, everything was all right, the struggle was finished. He had won the victory over himself. He loved Big Brother.

@dph121 that would be a YES! Brainwashing is A-OK.

Does it count if we develop a strong AGI by 2030, it doesn't lead to doom, and Yudkowsky admits he was wrong?

@OlegEterevsky yep that would count
