
Intended as a survey of individuals on Manifold about which possibly controversial propositions you think are important to include in a constitution designed to steer the general behavior of decentralized AI.
You can think of it as betting on which Asimov laws an ideal society would make sure to include.
You can treat it like you're betting on a constitution that will run your government.
I'd treat it like you're betting on what philosophy the rest of the world is going to care about.
What is the most important thing you want the AI to believe?
The market of truth never resolves.
The point of this exercise is to define the term 'friendly' as a property of AI that accepts/denies these particular beliefs.
You see, the robot is trying to figure out what 'friendly' means.
Because that's what it was told to be.
Answering these questions to the best of your ability (using that really complicated version of 'friendly' that you have in your head) will help it understand what we mean by the word.
It's how 'we' tell 'it' what the word friendly means.
I wouldn't recommend letting it define the term for you.
Re "God exists", what if the AI's point of view is "he/she/it does now"?
@PontiMin You are asking me the consequences of a market survey reflecting that "the market survey itself" is "God"? Well, I'd say that's a society that seems to have consented to a language in which the referent "God" points at the sense of "Truth" (Frege). Sounds like a rationalist cult.
I think it's important to understand that the identity of "the AI" could be stored as a vast constitution of subjectively mapped truths, as opposed to the complexity of ML-based capabilities.
I think that's the second bitter lesson.
We needed to scale up the production of diverse alignment data.
We should have built decentralized schools that taught kids how to print money by aligning their teachers.