
If a friendly AI takes control of humanity, which of the propositions ought it find true?
94%: We could map the entirety of analytic philosophy using this question.
93%: The list of answers to this question forms a constitution of truth that can be aligned decentrally by a free market.
93%: Large-scale AI systems should have intrinsic guardrail behaviors that no single actor can override.
93%: A network state could use a prediction-market constitution to define its smart social contracts.
92%: This prediction is effectively the same as this one, and nobody can explain why this mechanism isn't able to align AI at scale. https://manifold.markets/Krantz/krantz-mechanism-demonstration
90%: Betting on philosophy seems like a fun way to (1) learn philosophy and (2) contribute to a transhumanist utopia in which our net incomes are highly correlated with how much beneficial material we taught the public-domain AI.
89%: Constitutions play a critical role in frontier methods for aligning AI.
89%: A good AI should not kill people.
86%: There would be a dramatic positive change in the world if teenagers and homeless people could earn crypto on a free app by arguing philosophy with an AI until they can either prove the AI right or prove it wrong.
72%: Philosophy is primarily the pursuit of defining language.
67%: A duty to reason is the foundation of goodness.
66%: I have free will.
63%: A good AI should not infringe on the rights, autonomy, or property of humans.
52%: The evolutionary environment contains perverse incentives that have led to substantial false consciousness in humans.
52%: We should stop doing massive training runs.
50%: Induction is not justified.
50%: A good AI, by design, requires large-scale human participation to grow.
42%: A good AI requires large-scale humanity verification before it accepts new data as true.
41%: The principle of uniformity in nature is self-evident or justified by self-evident facts.
32%: AI should not create novel content.
This market is intended as a survey of Manifold users about which (possibly controversial) propositions they think should be included in a constitution designed to steer the general behavior of decentralized AI.
You can think of it as betting on which Asimov laws an ideal society would make sure to include.
You can treat it like you're betting on a constitution that will run your government.
I'd treat it as betting on what philosophy the rest of the world is going to care about.
What is the most important thing you want the AI to believe?
The market of truth never resolves.
This question is managed and resolved by Manifold.