Will Tyler Cowen agree that an 'actual mathematical model' for AI X-Risk has been developed by October 15, 2023?
closes Oct 16
14% chance

On the Russ Roberts EconTalk podcast, episode #893, guest Tyler Cowen challenges Eliezer Yudkowsky and the LessWrong/EA alignment communities to develop a mathematical model of AI X-Risk.

https://www.econtalk.org/tyler-cowen-on-the-risks-and-impact-of-artificial-intelligence/

This market resolves to "YES" if Tyler Cowen publicly acknowledges, by October 15, 2023, that an actual mathematical model of AI X-Risk has been developed.

Two clips from the conversation:

https://youtube.com/clip/Ugkxtf8ZD3FSvs8TAM2lhqlWvRh7xo7bISkp

...But, I mean, here would be my initial response to Eliezer. I've been inviting people who share his view simply to join the discourse. So, they have the sense, 'Oh, we've been writing up these concerns for 20 years and no one listens to us.' My view is quite different. I put out a call and asked a lot of people I know, well-informed people, 'Is there any actual mathematical model of this process of how the world is supposed to end?'

So, if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data...

https://youtube.com/clip/Ugkx4msoNRn5ryBWhrIZS-oQml8NpStT_FEU

...So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.

So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.'...

Gigacasting is predicting NO at 15%

AI risk =

max(BMI, hair length) * (sci-fi novels read) ** 0.5 / (coding ability) * (regulatory capture probability)

---

NB

Global warming risk =

(Female = 3, Male = 1) / age ** 0.5

* 1 / (0.5 + number of offspring)

* population density of neighborhood

/ (1 + growth rate of country) ** 2

Gigacasting is predicting NO at 15% (edited)

Veganism =

IQ / (upper body strength)

* (neuroticism * conscientiousness) ** 0.5

Related markets

Will AI wipe out humanity before the year 2030?23%
Will Yann LeCun change his mind about AI risk before 2025?21%
Will AI x-risk seem to be handled seriously by the end of 2026?39%
Will there have been a noticeable sector-wide economic effect from a new AI technology by the end of 2023?53%
Will an AI solve any important mathematical conjecture before January 1st, 2030?69%
Will the "Will AI wipe out humanity before the year 2030?" market reach 20% in 2023?67%
In a year, will Peter Wildeford believe that AI is the largest single source of existential risk?92%
Will the "Will AI wipe out humanity before the year 2030?" market reach 10% in 2023?11%
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?79%
Will Robin Hanson debate Eliezer Yudkowsky on AI risk in 2023?24%
At the beginning of 2035, will Eliezer Yudkowsky still believe that AI doom is coming soon with high probability?68%
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?31%
Will AI create philosophy before 2030?85%
Will Donald Trump claim to have a solution for the existential threat of AI (but not say what it is) by the end of 2023?26%
By the end of 2023, will Richard Hanania begin spending a significant amount of time working on AI risk?25%
Will Donald Trump propose a solution for the existential threat of AI by the end of 2023?20%
Will the U.S. have passed legislation that requires cybersecurity around AI models before 2030?87%