
He seems open to changing his mind — but will he?
https://twitter.com/AndrewYNg/status/1665759430552567810?s=20
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ810 |
| 2 | | Ṁ164 |
| 3 | | Ṁ152 |
| 4 | | Ṁ140 |
| 5 | | Ṁ136 |
Resolved NO: I have not seen any change of heart, and his latest post on the topic, on Dec 19th, was still very sceptical.
https://twitter.com/AndrewYNg/status/1737183906800283980
Looking a lot less likely nowadays https://x.com/AndrewYNg/status/1719378661475017211?s=20
@YoavTzfati If he publicly says something (e.g. a tweet) that seems consistent with having changed his mind and now taking the X-risk from AI seriously.
@parhizj It means that he acknowledges that there is a risk and that it should be taken seriously (which he has previously not done)
@Mag Ok so a public statement acknowledging there is considerable existential risk. In that case, I will keep watching his statements for a few days before initial bets.
@Mag I very much appreciate that people are having discussions to explore this topic, but I worry it will end up being only an exercise in academic research that is not productive enough over the short term (<2 years) for him to take it "seriously", serving instead as a defence for terror management. A comparison I could make: you know climate change is going to be bad for your neighborhood and living conditions in 15 years, so you spend the majority of your effort researching resiliency and adaptation, which means you never spend most of your time ACTING towards adaptation or migration. That is ultimately maladaptive long-term, but it feels good short-term.
@parhizj Not sure if you are arguing against AI risk as a research area entirely or against this specific market?
@Mag If you believe AI poses an existential risk on timelines shorter than it would take to reach a consensus, which might be a necessary precondition for its global governance, then academic research might be a poor choice compared with alternatives like direct action. If your timeline is longer, then it is a great idea, provided you are optimistic about the promise of AI positively transforming our civilization, especially in the face of the polycrisis. I think a harder but easier-to-instrument question (at conferences) might be when the field will reach consensus, or, harder still, when there will be global governance over AI.
I don't think we will be afforded as many decades for AI as we had for climate change.