
Conditional on friendly ASI, will reasonable, selfless positions be commensurately rewarded?
65% chance
A problem with long-term markets like https://manifold.markets/ScottAlexander/in-2028-will-an-ai-be-able-to-gener?r=SGVkU2hvY2s is that many traders bet to maximize their expected utility conditional on the market's outcome (e.g. in a world where that market resolves YES, Mana will probably be worth much less), rather than betting their true beliefs.
If we get a friendly ASI, will it commensurately compensate people for honest, accurate, selfless, and prosocial betting behavior?
This question is managed and resolved by Manifold.
Related questions
Will ASI (if it exists and doesn’t wipe out humanity) care about people’s social relationships?
55% chance
If a huge alignment effort is part of the reason for AI having an okay outcome, will it involve a new AI paradigm?
58% chance
If a friendly AI takes control of humanity, which of the propositions ought it find true?
If the solution to AI alignment involves enhancing human minds and/or society, how will this be done?