
On the Lex Fridman Podcast #368, Lex and Eliezer Yudkowsky debate whether there will be a large funding commitment to a program for mitigating AI risk.
https://youtube.com/clip/UgkxXHBY4k3r6n7Ab6I4qUWbxmJExTdsTQgD
Lex hopes it will happen; Eliezer doesn't believe it will. The discussion specifies a $1 billion threshold but no specific timeframe.
This market resolves to YES if there is a prize offer or other funding commitment made in 2023, equivalent to at least US$1 billion, toward understanding and mitigating AI risk.
To qualify, the program must be a $1 billion budget commitment to a prize or to a coordinated effort under a single umbrella organization, but it can be supported and funded by any combination of government and private sources. The funding only has to be committed by the end of 2023, not provided or spent by then.
For reference, the Manhattan Project cost nearly $2 billion (equivalent to $24 billion in 2021). https://en.wikipedia.org/wiki/Manhattan_Project