[POLL] Is Eliezer Yudkowsky right about AI risk, AI timelines, both, or neither?
Right about both
Right about AI risk, wrong about AI timelines
Right about AI timelines, wrong about AI risk
Wrong about both
n/a - see results

Adding some granularity to a previous poll of mine.


What are supposedly my timelines?

@EliezerYudkowsky I think people perceive you to have very fast timelines but I'm not sure you've ever outlined them in detail.

@SemioticRivalry I remember him saying that if you have children now, there's a good chance you'd still see them make it to kindergarten, so he probably has a low P(doom) for the next five years at least.

@EliezerYudkowsky You are at 44% on "artificial superintelligence exist by 2030?", which I think is a quite fast timeline.
(I am at ~2% for it by 2030, and ~25% by 2100.)
Otherwise, I think the majority of your arguments are sound.

@EliezerYudkowsky "Your children may get to start kindergarten" strongly suggests 5-10 years.

@EliezerYudkowsky As of December 2021:

I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain's native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them. What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one's mental health, and I worry that other people seem to have weaker immune systems than even my own. But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050.

https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works

As of April 2023:

When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.

https://www.lesswrong.com/posts/oM9pEezyCb4dCsuKq/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all-1

And as quoted earlier:

@EliezerYudkowsky You bought YES on "will AI wipe out humanity by 2030" up to 40%, which implies at least a 40% probability that AI capable of wiping out humanity exists by 2030, i.e. within 6.5 years.

That timeline is wrong, along with your opinions about AI risk.

If my p(doom) is 30% and his is 99.99%, would I say he's "right"? Well, I'm closer to him than probably 99.99% of people are, but I'm also quite far away in a numerical or log-odds sense. So it's hard to answer.
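The gap "in a log sense" can be made concrete by comparing the two probabilities in log-odds space, where Bayesian updates are additive. A minimal sketch (the 30% and 99.99% figures are from the comment above; the log-odds framing is one standard way to measure the distance):

```python
import math

def log_odds(p):
    """Log-odds (logit) of a probability p, in natural-log units (nats)."""
    return math.log(p / (1 - p))

# Distance between a 30% p(doom) and a 99.99% p(doom) in log-odds space.
gap = log_odds(0.9999) - log_odds(0.30)
print(round(gap, 2))  # ≈ 10.06 nats
```

On this scale, moving from 30% to 99.99% takes about as much evidence as moving from 30% to 99.99% of the way from 50/50 ignorance, so the two views are far apart even though both are "high" relative to most people.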

Also, my understanding is that he doesn't have a very strong position on timelines.

@SemioticRivalry Hm, fair point. I think in that case whether you think he's "right" depends on the vibes. (Is his p(doom) seriously 99.99%????)

He's posted "if we're still alive" on tweets talking about 2-4 year time-frames, so he definitely expects very rapid AI development.

@evergreenemily I don't think he's ever given out an actual numerical p(doom), but I know he's said "probability approaching 1" before.


I think on timelines he has very wide error bars, and that tweet was just a one-off line. When talking with Hotz for example he basically refused to give a timeframe.

He probably won't respond, but might as well @EliezerYudkowsky and ask for specifics

@SemioticRivalry He bought "will AI wipe out humanity by 2030" up to 40%, which is a pretty strong opinion on timelines.

@DavidBolin Looking at

/MartinRandall/will-ai-wipe-out-humanity-before-th-d8733b2114a8

I see bets up to 25% and also later bets down to 16%.

I know he also bet on

/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r but I don't think play money bets on non-predictive markets are evidence of strong opinions.
