If the work between Anthropic and the Collective Intelligence Project scales well, this will be in their constitution.
94% — Every person that can verify their humanity should have a secure, private way to earn money for using their constitution to train this one.
94% — Having private access to earn a living by aligning AI is an important job that should be protected.
86% — We should pause massive training runs.

A paraphrase of:

https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6

I believe this project is the most similar in spirit to the one that I'm trying to teach people about. It's about solving outer alignment, not inner alignment.

You can read about it here:

https://www.anthropic.com/news/collective-constitutional-ai-aligning-a-language-model-with-public-input

I think the question 'Who ultimately defines the constitution?' is the question everyone should be asking.

Right now, it's a centralized process.

I hope to see this process evolve into a decentralized one.

It's important to understand that you not only need to decentralize the responses to the various survey questions; you also need to decentralize who creates the questions to be asked.

I've been using constitutions to try to scale/align symbolic AI for 14 years.

I have an algorithm that compares your constitution against the global one and recommends which proposition would be most beneficial to evaluate next.

This is the same function that drives nearly every social media platform in existence. I think this function will be seen by history as more important than the transformer. The algorithm I keep talking about scales the effectiveness of this function by orders of magnitude.
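For concreteness, here is a minimal, hypothetical sketch of what "compare your constitution against the global one and recommend the next proposition to evaluate" could look like. This is not the actual algorithm (which isn't public); the representation of stances as numbers in [-1, 1] and the "surface the most contested proposition" heuristic are assumptions made purely for illustration.

```python
# Toy sketch (NOT the author's actual algorithm): given a personal
# constitution and a global/aggregate one, recommend which proposition
# the user should evaluate next. Propositions the user hasn't rated
# are scored by how contested they are in the global record.

from typing import Dict, Optional


def recommend_next(
    personal: Dict[str, float],   # proposition -> stance in [-1, 1]
    global_: Dict[str, float],    # proposition -> aggregate stance in [-1, 1]
) -> Optional[str]:
    """Return the unevaluated proposition with the highest priority score."""
    candidates = {p: s for p, s in global_.items() if p not in personal}
    if not candidates:
        return None

    def score(stance: float) -> float:
        # Assumption: contested propositions (aggregate stance near 0)
        # are the most informative ones to surface next.
        return 1.0 - abs(stance)

    return max(candidates, key=lambda p: score(candidates[p]))


if __name__ == "__main__":
    personal = {"We should pause massive training runs.": 0.8}
    global_ = {
        "We should pause massive training runs.": 0.72,
        "Everyone should be able to earn money aligning AI.": 0.88,
        "Constitutions should be written centrally.": -0.05,
    }
    print(recommend_next(personal, global_))
    # -> "Constitutions should be written centrally." (most contested)
```

A real version would presumably also weight propositions by how they relate to positions the user has already taken, but that detail is beyond what this toy example tries to show.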

For a very long time, I've been trying to share my work privately with researchers who have influence. I'm unwilling to put it in the public domain, but I'd like for that to change; I'd like to discuss with other researchers whether or not they believe it's infohazardous.

Nobody seems to really understand what I'm trying to do, though, so I'm going to point at other people in the world who are attempting similar work, in hopes of providing additional context for discerning my aims.

If you want to learn more about my project, start here:

https://www.lesswrong.com/posts/qqCdphfBdHxPzHFgt/how-to-avoid-death-by-ai

As objective truth can't be known, this will not resolve.
