Guessing Game: What propositions does Krantz believe?
55%: It would be helpful if every important thought leader had a prediction similar to this to formalize their positions on complicated issues.
52%: Aligning AI is a job everyone should have the right to earn a living by doing.
51%: I'm trying to get people to help me write the constitution of truth for AI.
51%: I am trying to pay you to help me map my beliefs.
50%: This is Krantz data.
50%: Krantz data is money.
50%: I believe everybody doing this is the solution to alignment.
50%: We could turn X into a machine that allows us to print our own crypto by doing philosophy and arguing about literally everything.
50%: The main thing we need to do to solve alignment and game-theoretic peace is to get all the important philosophers in the world to start talking in analytic English instead of continental English, so we can map our language properly.
50%: Wittgenstein was right the first time, and everyone's personal truth is compatible.
50%: Financial and physical oppression and war are side effects of not being able to communicate effectively.
50%: If we simply allowed every real person to securely evaluate every interpretable fact and treated that data as money, all other problems could be solved instrumentally using that process.
50%: Humanity should pivot from (converting oil into artificial intelligence capabilities) to (converting mechanistically interpretable human cognitive labor into decentralized constitutional parameters for guiding collective intelligence).
50%: If the X platform were used primarily as a decentralized mechanism for minting Krantz data, kids could earn a retirement before the age of 23 by getting a sovereign education independently.
50%: If Americans had the right to express their support (or opposition) for each proposition of the constitution (both securely and publicly, in a way that can be operated on), we wouldn't need politicians.
50%: If all humans had the right to vote on the constitution that controls AI, AI would be decentrally controlled by a market of opinions.
50%: If humans could print their own money by voting on propositions that control AI, education would be economically accelerated by several orders of magnitude.
50%: The solution to aligning ASI is really simple.
50%: Krantz has the most important message to give to society. https://manifold.markets/Krantz/which-person-had-the-most-important?r=S3JhbnR6
49%: I am challenging all the analytic philosophers on Manifold to an argument about whatever topic they choose.

Same concept as this market, where you guess a fact about a person:

https://manifold.markets/IsaacKing/isaac-kings-guessing-game

But here you instead try to guess which philosophical propositions Krantz believes, as in this market:

https://manifold.markets/Krantz/krantz-mechanism-demonstration

Will resolve at my discretion after wagering levels off.

All of them sound vaguely like talking points I've heard you make a lot, but the original piece is so long and hard to understand that I'm not sure whether the exact wordings of the options have been slightly changed so that they're narrowly no longer true 🤔

@TheAllMemeingEye Happy to clarify anything in the comments.

@Krantz could you write a 1-3 paragraph layman's-language explanation of the Krantz mechanism?

i.e., don't use any phrases you wouldn't expect an average English-speaking person to already understand, certainly don't use any phrases you've invented or redefined, and do a Toki-Pona-ing / Yudkowskian tabooing ( https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words ) if need be. The explanations you've shared so far seem to be either extremely long or full of opaque invented/redefined language, e.g. krantz-x, constitution, ledger, collective intelligence, etc.

@TheAllMemeingEye

It is a mechanism that assigns points to people for checking facts. It's not complicated. I'm trying to explain that "getting points for checking facts" (a sort of decentralized Metaculus for propositions) is the same thing as paying people to demonstrate publicly that they understand something.
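
A minimal sketch of that point-assignment idea, in Python (the names, data structures, and settlement rule below are illustrative assumptions, not the actual mechanism):

```python
# Hypothetical sketch: a ledger that assigns points for checking facts.
# Every name and rule here is an assumption for illustration only.
from dataclasses import dataclass, field

@dataclass
class Ledger:
    # proposition -> {user: their public True/False evaluation}
    evaluations: dict = field(default_factory=dict)
    # user -> points earned so far
    points: dict = field(default_factory=dict)

    def evaluate(self, user: str, proposition: str, belief: bool) -> None:
        """Record a user's public evaluation of a proposition."""
        self.evaluations.setdefault(proposition, {})[user] = belief

    def settle(self, proposition: str, verified: bool, reward: int = 1) -> None:
        """Award points to everyone whose evaluation matched verification."""
        for user, belief in self.evaluations.get(proposition, {}).items():
            if belief == verified:
                self.points[user] = self.points.get(user, 0) + reward

ledger = Ledger()
ledger.evaluate("alice", "The Tractatus was Wittgenstein's first book", True)
ledger.evaluate("bob", "The Tractatus was Wittgenstein's first book", False)
ledger.settle("The Tractatus was Wittgenstein's first book", verified=True)
print(ledger.points)  # {'alice': 1}
```

This only makes "getting points for checking facts" concrete; how propositions get verified and priced is left abstract.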

This is how analytic philosophers argue.

It also produces mechanistically interpretable alignment data. We will need a lot of alignment data. It will be a valuable resource, and I'd like future generations to be fairly compensated for it in a way they can understand.

What's confusing?

@Krantz thanks for explaining :)

A couple of points of clarification:

  • I know elsewhere you mentioned offering free bets on prediction markets that depend on a given piece of info as a concrete way to pay people to understand that info, and that you've suggested using your mechanism to teach the world about AI x-risk so they'll be motivated to prevent it. But do you have calculations showing that this is a more cost-efficient and robust way to prevent AI x-risk than the research, governance, and outreach work that Effective Altruism organisations are currently funding? Intuitively, I would expect it to be many orders of magnitude more expensive than is practical, and that a significant portion of the population is unknowledgeable, inexperienced, or uncaring enough that they would still reach false conclusions even if financially incentivised not to.

  • You mention this mechanism producing mechanistically interpretable alignment data. Am I right in thinking this applies to the specific case where you're paying someone to understand their own values, and where presumably there's also a rule that they have to explicitly and truthfully write out their reasoning before they can get paid? If so, how do you incentivise people to be truthful, how do you create a coherent set of objectives from the heavily conflicting and often diametrically opposed values of different people, and, perhaps most crucially, how do you make a mechanistically uninterpretable, machine-learning-based ASI remain truthfully aligned to the objectives you give it?
