Will Andrew Ng start to take AI extinction risk seriously before 2024?
Resolved NO (Jan 2)

He seems to be open to changing his mind, will he?

https://twitter.com/AndrewYNg/status/1665759430552567810?s=20


πŸ… Top traders

#   Total profit
1   αΉ€810
2   αΉ€164
3   αΉ€152
4   αΉ€140
5   αΉ€136
predicted NO

Resolved NO: I have not seen any change of heart, and his latest post on the topic, on Dec 19th, was still very sceptical.
https://twitter.com/AndrewYNg/status/1737183906800283980

predicted YES

@Mag what the hell this is actually unhinged lmao

How exactly does this resolve?

predicted YES

@YoavTzfati If he publicly says something (e.g. a tweet) that indicates he has changed his mind and now takes the X-risk from AI seriously.

What does "seriously" mean? Will he stop doing online courses on ChatGPT and instead become a full-time activist?

predicted YES

@parhizj It means that he acknowledges that there is a risk and that it should be taken seriously (which he has previously not done)

@Mag OK, so a public statement acknowledging that there is considerable existential risk. In that case, I will keep watching his statements for a few days before placing initial bets.

@Mag I very much appreciate that people are having discussions to try to explore this topic, but I worry this will end up being only an exercise in academic research that is not productive enough over the short term (<2 years) for him to take it "seriously" -- that is, it would serve only as a defense for terror management. A comparison I could make: you know climate change is going to be bad for your neighborhood and living conditions in 15 years, so you spend the majority of your effort researching resiliency and adaptation, which means you never spend most of your time ACTING towards adaptation or migration. That is ultimately maladaptive long-term but FEELS good short-term.

predicted YES

@parhizj Not sure if you are arguing against AI risk as a research area entirely or against this specific market?

@Mag If you believe AI presents an existential risk on timelines shorter than it would take to reach a consensus, which might be a necessary precondition for its global governance, then academic research might be a poor choice compared with alternatives like direct action. If your timeline is longer, then it's a great idea, provided you are optimistic about the promise of AI positively transforming our civilization, especially in the face of the polycrisis. I think a harder, yet easier to instrument, question (at conferences) might be when the field will reach consensus, or, even harder, when there will be global governance over AI.

I don't think we will be afforded as many decades for AI as we had for climate change.