
Will the US government launch an effort in 2023 to augment human intelligence biologically in response to AI risk?
Resolved NO (Jan 1)
On the Lex Fridman Podcast #368, Eliezer Yudkowsky mentions that one approach to combating AI risk could be to augment human intelligence biologically.
https://youtube.com/clip/UgkxuTA2mywY03QjltaJTR95jEjUisEwfR43
(note that he frames this approach as contingent on massive public outcry, which he does not expect)
This market resolves to YES if there is a new program following this approach announced or launched by any branch of the US federal government by the end of 2023.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ37 |
| 2 | | Ṁ34 |
| 3 | | Ṁ20 |
| 4 | | Ṁ10 |
| 5 | | Ṁ7 |
Related questions
[ACX 2026] Will the U.S. enact an AI safety federal statute or executive order in 2026?
23% chance
Is the US government currently (2023) hiding AI capabilities beyond the current state of the art? (Resolves 2040)
6% chance
Will the US implement AI incident reporting requirements by 2028?
83% chance
Will the United States ban AI research by the end of 2037?
24% chance
Will the US Federal Government spend more than 1/1000th of its budget on AI Safety by 2028?
12% chance
Will there be a military operation to slow down AI development by the end of 2035?
32% chance
Will there be a military operation to slow down AI development by the end of 2030?
16% chance
Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance
Will the US require a license to develop frontier AI models by 2028?
49% chance
Will the US implement information security requirements for frontier AI models by 2028?
88% chance