A place to add the important questions you think society (or at least Manifold) should be asking.
No. It would require 100% of the vote. Maybe read this.
https://manifold.markets/Krantz/if-the-work-between-anthropic-and-t?r=S3JhbnR6
In general, you appear to be imagining a constitution similar to something like the US constitution.
I'm talking about a constitution in a technical sense for computers to operate on.
Ok. Here's a possible resolution criterion that might address your concerns while preserving the intent behind listing the prediction (which is to teach people that instead of wagering their money on prediction markets, they could be getting paid to align AI instead, without needing to put down any capital at all).
If Anthropic is still pursuing constitutional alignment on August 1st, 2025, which principles will exist in Claude's constitution?
https://www.anthropic.com/news/claudes-constitution
This would resolve according to the state of Claude's constitution (or its equivalent, if renamed) on August 1st, 2025.
Thoughts?
I believe that the criteria for saying something is 'resolved' are similar to the criteria for saying something is 'objectively true'.
As a philosopher, I believe I can never know whether something is objectively true.
I think prediction markets are flawed because of this.
I think it's important to talk about that.
It's a principle I want AI to understand.
> I believe that the criteria for saying something is 'resolved' are similar to the criteria for saying something is 'objectively true'.
Why not instead resolve options once a given third-party-verifiable criterion reaches a sufficiently high (but non-absolute) credence? That way you can harness the incentives of a prediction market to produce actual information.
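A minimal sketch of that resolution rule, assuming a hypothetical pool of third-party verifier credences. All names and thresholds here are illustrative; this is not any real Manifold API or policy.

```python
# Hypothetical sketch: resolve a market option once third-party verifier
# credence in its criterion crosses a high but non-absolute threshold.
# Everything here is illustrative, not a real Manifold mechanism.

RESOLUTION_THRESHOLD = 0.95  # high confidence, but not a demand for certainty


def resolve_option(verifier_credences):
    """Resolve "YES"/"NO" once the average verifier credence is decisive;
    otherwise leave the option open (None)."""
    if not verifier_credences:
        return None
    credence = sum(verifier_credences) / len(verifier_credences)
    if credence >= RESOLUTION_THRESHOLD:
        return "YES"
    if credence <= 1 - RESOLUTION_THRESHOLD:
        return "NO"
    return None  # still too uncertain to resolve


print(resolve_option([0.97, 0.98, 0.96]))  # decisive evidence: "YES"
print(resolve_option([0.50, 0.60]))        # ambiguous: None, stays open
```

The point of the non-absolute threshold is that the market can actually pay out on well-evidenced outcomes instead of waiting for an unattainable standard of objective truth.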