Which proposition will be denied?
Market probabilities (the numbered propositions are listed in full below):

1: 0.3%    2: 0.1%    3: 0.1%    4: 0%      5: 0%
6: 0.2%    7: 0.1%    8: 6%      9: 0.1%    10: 0.5%
11: 10%    12: 10%    13: 0.3%   14: 0.1%   15: 0.2%
16: 0.1%   17: 0.4%   18: 0.2%   19: 0.1%

None (this argument is sound and Eliezer will be compelled to look at Krantz's work): 71%

The following is an argument for why AI safety organizations should consider my work. If Eliezer is not compelled by this argument, which proposition will he deny?

Resolves if @EliezerYudkowsky denies any proposition by number in the comment section of this market, or agrees to review my work.

1. If AI develops the capability to control the environment better than humans, then humanity is doomed.

2. If we continue to scale AI capabilities, then it will eventually be able to control the environment better than humans.

3. 1 and 2 imply that if we continue to scale AI capabilities, then humanity is doomed.

4. We should not be doomed.

5. 3 and 4 imply that we should stop scaling AI.

6. If every person on the planet understood the alignment problem as well as Eliezer Yudkowsky, then we would not scale AI to the point where it can control the environment better than humans.

7. People only understand the things they have learned.

8. People learn the things that they have obvious incentives to learn.

9. 6, 7, and 8 imply that if people have sufficient and obvious incentives to understand the alignment problem, then we would not scale AI to the point where it can control the environment better than humans.

10. It is possible to build a machine that pays individuals for demonstrating they’ve understood something.

11. If individuals can see that they will earn a substantial cash reward for demonstrating they understand something, they will be incentivized to demonstrate they understand it.

12. 10 and 11 imply that it is possible to incentivize people to understand the alignment problem.

13. If a majority of people understood the actual risks posed by scaling AI, then they would vote for representatives who support legislation that prevents the scaling of AI.

14. 9 and 13 imply that if we sufficiently incentivize understanding of the alignment problem, then people would take action to prevent dangerous AI scaling.

15. If your goal is to prevent the scaling of dangerous AI, then you should be working on building mechanisms that incentivize awareness of the issue. (from 14)

16. Krantz's work is aimed at building a mechanism that incentivizes the demonstration of knowledge.

17. 5, 12, 14, 15 and 16 imply that if your goal is to prevent the scaling of dangerous AI, then you should review the work of Krantz.

18. If AI safety orgs understood there was an effective function that converts capital into public awareness of existential risk from AI, then they would supply that function with capital.

19. 17 and 18 imply that Eliezer Yudkowsky and other safety organizations should review the Krantz system to help prevent doom.
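The inferential steps above (3, 5, 9, 12, 14, 17, and 19) are ordinary propositional moves, and the first two are simple enough to machine-check. Here is a minimal Python sketch, not part of the market, that brute-forces truth tables to confirm that step 3 (hypothetical syllogism) and step 5 (modus tollens) are valid. The atom names s, c, and d are my own labels, and reading the normative "we should not be doomed" as a plain negation flattens its deontic nuance.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: a -> b."""
    return (not a) or b

def valid(premises, conclusion) -> bool:
    """True iff the conclusion holds in every truth assignment
    over the atoms (s, c, d) that satisfies all premises."""
    return all(
        conclusion(*vals)
        for vals in product([False, True], repeat=3)
        if all(p(*vals) for p in premises)
    )

# Atoms: s = we continue to scale AI capabilities,
#        c = AI can control the environment better than humans,
#        d = humanity is doomed.

# Step 3: propositions 1 (c -> d) and 2 (s -> c) entail (s -> d).
step3 = valid(
    [lambda s, c, d: implies(c, d),   # proposition 1
     lambda s, c, d: implies(s, c)],  # proposition 2
    lambda s, c, d: implies(s, d),
)

# Step 5: (s -> d) and proposition 4, read as (not d), entail (not s).
step5 = valid(
    [lambda s, c, d: implies(s, d),
     lambda s, c, d: not d],
    lambda s, c, d: not s,
)

print(step3, step5)  # True True
```

The philosophical weight of the argument sits in the premises, not in these inferences, which is exactly why the market asks which premise Eliezer would deny.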

This argument is one of many that should exist on a decentralized ledger like this:

https://manifold.markets/Krantz/krantz-mechanism-demonstration

If it did, we could be scrolling through the most important arguments in the world (on platforms like X) and earning a living by doing the analytic philosophy required to align AI/society.
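Propositions 10 and 11 name the core mechanism: something that escrows rewards on propositions and pays them out when understanding is demonstrated. As a purely illustrative sketch that assumes nothing about Krantz's actual design (the class names, the pid field, and the passed_check flag are all my inventions), the incentive routing could be as simple as this:

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    """A single claim on the ledger, e.g. proposition 1 above."""
    pid: int
    text: str
    bounty: float  # escrowed reward for demonstrated understanding

@dataclass
class Ledger:
    propositions: dict = field(default_factory=dict)  # pid -> Proposition
    balances: dict = field(default_factory=dict)      # user -> total earnings

    def fund(self, pid: int, text: str, bounty: float) -> None:
        """Escrow a bounty on a claim (a safety org supplying capital, per 18)."""
        self.propositions[pid] = Proposition(pid, text, bounty)

    def demonstrate(self, user: str, pid: int, passed_check: bool) -> float:
        """Pay the user if a verification step confirms understanding (per 10-11).
        How passed_check gets produced (quiz, cross-examination, attestations)
        is the open design problem; this sketch only routes the incentive."""
        prop = self.propositions[pid]
        if passed_check and prop.bounty > 0:
            payout, prop.bounty = prop.bounty, 0.0
            self.balances[user] = self.balances.get(user, 0.0) + payout
            return payout
        return 0.0

ledger = Ledger()
ledger.fund(1, "If AI can control the environment better than humans, "
               "humanity is doomed.", bounty=100.0)
print(ledger.demonstrate("alice", 1, passed_check=True))  # 100.0
print(ledger.demonstrate("bob", 1, passed_check=True))    # 0.0, bounty already paid
```

Everything hard, of course, lives in producing passed_check honestly at scale; the sketch only shows that the payment side of proposition 10 is mechanically trivial.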

This is how we build collective intelligence.
