Who will cheerfully explain to @Krantz why his proposed solution to alignment of ASI isn't valid?
Roman Yampolskiy: 7%
Nobody: 6%
Liron Shapira: 6%
Agnes Callard: 6%
Sarah Constantin: 5%
Rob Bensinger: 5%
Cat Woods: 5%
Other: 5%
Holly Elmore: 4%
Rosie Campbell: 4%
Amanda Askell: 3%
Aella: 3%
Zvi Mowshowitz: 3%
Allison Duettmann: 3%
Rob Miles: 3%
Jeffrey Ladish: 3%
Robin Hanson: 3%
Simon Baars: 2%
Nathan Young: 1.9%

Since 2010, I've been trying to securely share, with folks like Doug Lenat, Danny Hillis, Ben Goertzel, Balaji Srinivasan, and Robin Hanson, a solution to the scalable alignment of a decentralized symbolic artificial collective intelligence, one that can beat machine learning to the punch. I really need to talk to Eliezer.

https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6

https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6

Most of my predictions are attempts to find someone to either (1) help me share infohazardous research securely or (2) help me understand why the fate of the world doesn't rest on whether this research safely makes it to the public domain.

I will not make any profit off this prediction. I will only be feeding it names and mana to incentivize cheerful charitability. If I add you to this list, it is because I respect you. I don't have much money, but this is the most important place I can find to put it. I genuinely believe I am trying to save the world.

It will resolve to the first person who takes the time to explain to me why my approach is invalid. I hope someone does. I will give any profit I make from this prediction to the person (other than me) who invests the most into it via bets or liquidity.

Thank you for helping me learn. 🤓

  • Update 2025-07-31 (PST) (AI summary of creator comment): The creator has clarified the process for how one might successfully convince him:

    • One should use his argument system to challenge a proposition on which you and the creator disagree.

    • The creator noted that nobody has fully used the system for this purpose yet.

  • Update 2025-08-01 (PST) (AI summary of creator comment): The creator has clarified that you can use any method to convince them their solution is invalid. Using the creator's specific argument system is not a requirement for the market to resolve.

  • Update 2025-08-02 (PST) (AI summary of creator comment): To resolve this market, one must convince the creator that his proposed process for scaling symbolic AI does not scale. Arguments about other potential failure modes of the system, such as its economic viability, will not be sufficient for resolution.

  • Update 2025-08-03 (PST) (AI summary of creator comment): The creator has specified that his claim is that a "truth economy" can scale to a point where it generates > 50% of GDP. At this point, he argues, society would have the coordination and understanding necessary to halt the scaling of dangerous Machine Learning. To convince him his solution is invalid, one would have to argue against this premise.


@Krantz For me, there's still a pretty wide inferential gap between your idea, what I imagine your idea might be, what you suppose your idea could accomplish and what I suppose your idea might accomplish. It seems almost like you're communicating in a different language. Years ago, my mother-in-law asked me why I was afraid of AI. Words were said. Words like Alignment. Orthogonality. Optimization. Goal function. She never could make any connection between my words and reality. I feel like my mother-in-law here. It seems I'm not alone.

May I try to step waaaay back and use a very broad paint roller to sketch my understanding of your proposal?

  1. Change society and/or economy towards using krantz

  2. Krantz-alignment has higher (10x? 1000x?) probability of aligning ASI than any other method we know of

And you posit

  1. is possible

  2. is true

Did I get this somewhat not completely wrong?

@Primer This feels like a very charitable comment. Thank you.

I agree that there is a large inferential distance to cover.

To answer directly, yes, I believe those things are possible and true. I actually don't think viewing what I'm proposing as another language is too far off base. It's a shift from continental language to analytic language.

A large step back is what's needed.

Overall, what I'm saying is that we need to change the nature of our public discourse. We have to change the way we communicate at scale. What I'm trying to do is change the format of our communication from something that looks like a book by Hegel (long meandering prose, like you see in many blog posts today) into discrete assertions like those in the Tractatus.

If you can imagine Wittgenstein running around trying to convince all the other philosophers to formally write out their beliefs in the same format he used for the Tractatus, because he believes that one day we'd be able to compile those texts and use AI to interpretably map their logical connective structure and identify the contradictions that exist, then you can imagine how I feel running around trying to convince all the smart alignment researchers who write long meandering blog posts that, if they instead used the krantz format to convey their work, it would be a lot easier to compile it, operate on it, and identify the contradictions between each other's work.
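To make the format concrete, here is a minimal sketch of prose decomposed into discrete, individually endorsable assertions. The numbering and the (proposition, confidence) layout are illustrative assumptions, not a fixed spec:

```python
# Illustrative only: a Tractatus-style outline of discrete assertions,
# each separately endorsable with a confidence. The conditional at 2.1
# encodes the logical connective structure between assertions.
assertions = {
    "1":   ("All men are mortal.", 0.99),
    "2":   ("Socrates is a man.", 0.99),
    "2.1": ("If all men are mortal and Socrates is a man, "
            "then Socrates is mortal.", 0.99),
    "3":   ("Socrates is mortal.", 0.99),
}

for number, (claim, confidence) in assertions.items():
    print(f"{number}  {claim}  (confidence: {confidence})")
```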

If we're going to cover the full inferential distance, it's important to first understand that it is possible for an N-sized group to collectively have an argument together within a system like this.

Do you feel like you comprehend how that process would work?

@Krantz Thank you, especially for those direct answers! The patient, determined, truthseeking way you try to communicate your ideas is pretty impressive in my opinion.

Which is why I sort of feel bad if you feel like you need to explain those ideas to me, as "being a Manifolder" is the only connection I have to the people and institutions you'd need to convince, and you should direct your efforts elsewhere.

That being said, I'd like to stay back one step. I think I agree that (1) is possible (the way lots of things are possible). What I want to point out is: Even if I'd be convinced (2) were true, I consider the number of worlds where (1) happens to be much smaller than the number of worlds where we stop development of ASI in the next decade by some other means (societal collapse, natural disaster, world war, butlerian jihad, world leaders mindhacked into sanity, pivotal act, ...). Basically: Maybe you overestimate the importance of krantz by a few orders of magnitude (which could still leave krantz at the top of possible alignment pathways or at least at the "definitely worth fighting for" stage).

The name changed on this market, right?

@kindgracekind No, but I do have other similar predictions. Maybe that's what you're thinking about?

bought Ṁ5 YES

(Copied from other market)
Credit where it is due for developing enough humility to acknowledge that being disproved is indeed a possibility. Could this be the start of the krantz redemption arc?

If I convinced you that the alignment tax imposed by the truth economy was sufficiently high that a misaligned AGI controlling 49% of GDP would outcompete a truth economy controlling 51% of GDP, would that cause this question to resolve?

@LoganZoellner No. I'm not sure what an alignment tax has to do with anything I'm talking about. I'm claiming there is a process that exists which could scale symbolic AI fast enough to beat ML to the punch.

The goal would be to convince me that process doesn't scale. So far, nobody has really demonstrated they understand that process.

@Krantz

> I'm claiming there is a process that exists which could scale symbolic AI fast enough to beat ML to the punch.

It sounds like you are claiming that your process (a truth economy) can outcompete an alternative process (scaled-up deep learning) from some starting point (which I inferred was 50% of GDP, based on your other post).

Is that the actual claim, or is it also necessary to ban traditional ML?

@LoganZoellner I'm claiming that if we scaled up a truth economy, such that most individuals understood they could earn a living by learning about and publicly voting on particular issues, in a way that lawyers can use in a petition to force actions that stop scaling (this process can also be applied to any other normative changes we might want to make), then that function will have more economic inertia than the economic force motivating further scaling of ML. If people understand that the only job left is to produce alignment data, and that it is in their best interest to protect that job from artificial capture, they ought to coordinate to produce the game-theoretic conditions to prevent it.

Whether or not a truth economy can "out compete" ML is sort of a category error. A truth economy doesn't really perform the same function as a ML based AI.

If we wanted to compare the component that's important, we should look at the GDP produced per unit of energy put into the system.

ML runs on oil.

Truth economy runs on human cognitive labor.

At the end of the day, humanity needs to decide who is going to align AI. (1) The guys with all the oil or (2) a truly free and open market that philosophers openly compete on. I really don't want my kids ending up in a 1984 novel.

My claim is that if we scale a truth economy to the point where it is creating > 50% of GDP, then by that time the world will understand well enough the dangers of scaling ML, will see the opportunity that exists for them if they protect the job of aligning AI, and will have the coordination ability to secure the future they want.

@Krantz this has been pointed out many times, but there are simple and obvious errors in statements like:

> If people understand that the only job left is to produce alignment data

Because obviously someone will have to clean the toilets. Posting on a website will not be the only job even if you waved a wand to make your website exist and have users.

@Krantz
> the world will understand well enough the dangers of scaling ML,

It would have been sufficient to say, "yes, under a truth economy we will ban traditional ML"

@JasonQ Short answer: We will have machines cleaning toilets. People earn a living by hitting buttons on a computer that define how that machine works.

Longer answer: As opposed to a "prediction market", let us consider an "event loan". An event loan is a loan made out to a person with a repayment schedule contingent on a particular event. For example, instead of betting on whether or not the White House will disclose that UFOs are real before a given date, a loan can be constructed that is funded by those who believe UFO disclosure will happen soon and given to those who believe it will not occur anytime soon. If and when the event occurs, a repayment plan at a much higher rate is initiated. This is similar to saying to someone before a football game, "If you give me $20 now, I'll give you this $40 gift card that will only work if everyone thinks your team won the game."
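A minimal sketch of that payoff structure, using nothing beyond the gift-card example above (the 2x multiplier is just that example's numbers):

```python
# Sketch of the "event loan": funded by those who believe the event will
# happen, repaid at a higher rate only if and when it occurs.
def event_loan_owed(principal: float, multiplier: float,
                    event_occurred: bool) -> float:
    """What the borrower owes, contingent on the event."""
    return principal * multiplier if event_occurred else 0.0

# The $20-for-a-$40-gift-card example:
print(event_loan_owed(20.0, 2.0, event_occurred=True))   # 40.0
print(event_loan_owed(20.0, 2.0, event_occurred=False))  # 0.0
```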

If we extrapolate this, we can develop fact based assets. If we have fact based assets, there is economic value in the decentralized verification of facts. Krantz data is an explicit uniform way of discretely and interpretably resolving the value of those assets.

Decision markets and futarchies teach us that we can vote by choosing the currency we transact in. Should we put our money into the coin that's only going to be valuable if people believe that (x) is true or the coin that will only be valuable if people believe (x) is false? The ratio of the value of those two coins represents society's split opinion on whether (x) is true or false.
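As a sketch of how that ratio reads as an opinion poll (the market values here are made-up numbers, purely illustrative):

```python
# Sketch: two coins, one valuable if society accepts (x) as true, the
# other if it accepts (x) as false. Their value ratio is the split opinion.
def implied_belief(true_coin_value: float, false_coin_value: float) -> float:
    """Fraction of total value sitting on the 'x is true' side."""
    return true_coin_value / (true_coin_value + false_coin_value)

print(implied_belief(30_000_000, 10_000_000))  # 0.75 -> 75% lean toward 'true'
```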

The value of all money rests on people's opinion of how valuable that data is. Paper money is an idea culture was brainwashed into agreeing on. We could do the same with Krantz data.

An AI constitution that every human being has the intrinsic right to publicly and securely express their opinion on, earning rewards that anyone might want to provide for their consent that particular data is true (so it can resolve other contracts), is valuable and should be treated as money.

My claim is that if everyone believed krantz data was money and frantically tried to "print more of it" by openly publishing it in a way that can be human-verified as them consenting to particular truths, then krantz data would function as money.

As for whether or not "cleaning your toilet" is an obvious instance of an essential job, I think there's an important distinction between "task" (like pressing the button on the machine that cleans your toilet or wiping your own ass) and "job" (function you perform for the economy to produce overall GDP).

When machines dominate physical labor, all that's left is to produce data that steers them. Krantz is sufficient for this.

@Krantz It would also just be really helpful if we considered money and valuable alignment data the same thing. I've got a feeling we'll need a lot of it.

@Krantz

It seems like for your plan to work the following steps have to happen in the following order:

1) we ban traditional ML before it kills us all
2) we invent neuro-symbolic AI (that is good enough to scrub toilets)
3) everyone on Earth devotes themselves to producing Krantz data

Is that how you imagine it happening, or is there something I'm missing?

@LoganZoellner

It sounds like you're missing the entire point and have no idea what I'm talking about.

Here's what the steps would look like.

1. Build a nonresolving market of propositions that people can buy stock in, to produce Krantz data. This is more like Metaculus than Manifold. You don't need money. You earn points by accurately predicting which direction public confidence will move over time (see the scoring sketch at the end of this comment). Have fun playing with it. Argue with each other on it. Scale it across several platforms so bots can arbitrage and nobody can turn it off. It exists in abstract space.

2. Treat that data like money. Or social status, or whatever you want to call it. Because it is valuable. It's discrete, interpretable training data for decentralized alignment. Our society needs to economically incentivize the production, collection and organization of a massive shitload of it.

3. Use that data to scale Doug Lenat's work. (There's about 8 hrs of details here that I'm not going to go over in the comments section on Manifold, but I'd recommend reading Doug Lenat's books on knowledge maps and expert systems from the 80s.) Very few people at Manifest seemed knowledgeable about this domain. Also, read Wittgenstein.

4. By doing this, we intrinsically create an AI futarchy, similar to Goertzel's approach. If someone wants others to express their opinion on something, they can inject capital directly into specific topics. Technically, we'd be paying people to train the GOFAI to understand that ML shouldn't be scaled. Which is a paraphrase of saying we are paying each other to prove we understand those issues.

5. Culturally compel our politicians to participate in this process so their beliefs and corresponding actions are mechanistically interpretable to us.

6. Have an army of lawyers draft petitions for action citing the public declarations of truth people have earned by sharing. Just pay people to vote on the constitution.

I'm sure Satoshi saying, "just treat really large co-primes as money" sounded pretty naive in 2005.

I'm trying to tell you to just treat interpretable alignment data as money. There will be enough for "K" (GOFAI scales fast enough to beat machine learning to the punch).
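For step 1, here is a rough sketch of the scoring rule as described: points for correctly calling which way public confidence drifts, with no money down. The linear payoff is my assumption, not a spec:

```python
# Sketch of step 1's nonresolving market: you earn points for predicting
# which direction public confidence on a proposition will move over time.
def direction_score(conf_before: float, conf_after: float,
                    predicted_up: bool) -> float:
    """Positive if the predicted drift direction was right, negative otherwise."""
    move = conf_after - conf_before
    return move if predicted_up else -move

# Public confidence in some proposition drifts from 0.40 to 0.55:
print(direction_score(0.40, 0.55, predicted_up=True))   #  0.15 points
print(direction_score(0.40, 0.55, predicted_up=False))  # -0.15 points
```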

@Krantz

It still seems like you need to pull off steps 1 and 2 because:
a) someone needs to scrub the toilets
b) Traditional ML is going to be in spitting distance of "could potentially kill us all" in 3-5 years.

> Use that data to scale Doug Lenat's work.

From what I can tell, his work doesn't seem very bitter-lesson pilled (in fact, the exact opposite). I'm incredibly skeptical that a method people have been trying (and failing) to get to work since the 1950s is suddenly going to beat ML to the punch.

> There will be enough for "K" (GOFAI scales fast enough to beat machine learning to the punch).

If you have a plan to beat deep learning in <5 years, you should be raising money from venture capital, not running surveys on Manifold.

> I'm sure Satoshi saying, "just treat really large co-primes as money" sounded pretty naive in 2005.


Bitcoin is actually a pretty good point of reference. ~165,000 bitcoin are mined each year (worth ~$19B, or ~0.017% of the world economy). In order to hit your stated target of 50% of GDP, Krantz money would have to be ~3,000x more successful than Bitcoin. That sounds like a lot, but current prediction markets have revenue of ~$40m, roughly 500x lower than Bitcoin, so the real factor is ~1.4 million.

By contrast, capex invested in traditional ML will be ~$0.5T this year (~25x as much as Bitcoin).

Of course 50% of GDP should sound like a lot since it represents the labor of the bottom ~90% of the people on earth.

So if your argument is: once we convince 9/10 people on Earth to quit their jobs and spend all day making Krantz data, then GOFAI will finally beat Deep Learning... well, that's.. yeah.
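A rough back-of-envelope check on those ratios (the ~$110T world GDP figure is my assumption; the other inputs are the numbers above):

```python
# Sanity check of the Bitcoin / prediction-market / GDP comparison.
btc_mined_value = 19e9    # ~165,000 BTC mined per year, ~$19B at recent prices
world_gdp = 110e12        # assumption: ~$110T world GDP
pm_revenue = 40e6         # ~$40m current prediction-market revenue

print(btc_mined_value / world_gdp)           # ~0.00017 -> ~0.017% of world GDP
print(0.50 / (btc_mined_value / world_gdp))  # ~2.9e3   -> ~3,000x Bitcoin
print(0.50 * world_gdp / pm_revenue)         # ~1.4e6   -> ~1.4 million x prediction markets
```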

@Krantz

> We will have machines cleaning toilets. People earn a living by hitting buttons on a computer that define how that machine works.

This is orders of magnitude less efficient and brings in a lot of supply chain, production, and engineering issues vs just having an able-bodied human do it. Billions of people don't have the internet. How are they going to do this?

For that matter, who is going to build all the janitor bots? How are “buttons on a computer that define how a machine works” any different than a keyboard used to define computer programs? How can we provide every establishment with these robots at near 0 cost while also not using the most promising technology for tasks like piss recognition (ML trained image recognition)? If you aren't automating it but manually having a person control every movement, how is this “abstract krantz data”? Why would anyone store data that says, “move arm 5 inches left to mop up piss”?

@Krantz also you say, “ML runs on oil”, but wouldn't the plastics production and shipping of billions of jani-bots also require insane resources?

And janitors are just one example. There’s already huge incentive to automate as much food production as possible and yet it's still employing a ton of human beings.

That's to say nothing of the obvious jobs you've just created: janibot technician, janibot parts manufacturer, janibot reseller, etc., which themselves contradict the idea that you can replace all jobs if people just “understand” you.

I added my name. I engaged with you in a pretty genuine way about this a few months back. I came away convinced you are sincere, but deluded.

I'd have to get familiar with your stuff again, which would take a large incentive at this point. But with some patience I think I could convince you to drop it, or at least why Yudkowsky would not accept your system.

@robm thanks, I really appreciate it!

opened a Ṁ100 YES at 21% order

Does this resolve to "nobody" in 2030 if nobody has convinced you?

If not Destiny, who should I add?

@Chumchulum Charitable analytic philosophers. Feel free to add yourself. You seem interested and curiosity is what most people are lacking. Thanks for engaging with the mechanism. Do you understand the basic concept? Try finding a proposition we disagree on and write an argument in the system that challenges it. Nobody has actually used it fully to do that yet.

> write an argument in the system that challenges it

I don't like this part. You're expecting someone to study and understand your system, disagree with it*, and then limit themselves to your system while convincing you it's wrong?

Let them use any method they have to convince you. As long as they demonstrate enough understanding of your system, that should be enough.

* There's a branch at this step where they could agree instead of disagree, but in that case the rest of my point doesn't matter

@robm You can convince me however you want. I just wanted @Chumchulum to write an argument to demonstrate they understood the process.

@robm Also, I can see how you wouldn't like the part about restricting yourself to the system to convince me, but the expectation for you to use the system, so that I can verify you understand how it works seems pretty reasonable to me. One of the propositions in my formal argument is "it is possible to build a machine that pays people to demonstrate they've understood something". I'm trying to get you to demonstrate that you understand the system within the system. I will give you money to do that. This should compel you to accept at least that proposition.

@Krantz Does "write an argument in the system" mean a numbered series of syllogisms, as in this Krantz system? https://manifold.markets/Krantz/which-proposition-will-be-denied?r=Q2h1bWNodWx1bQ

I intend to write such an argument about a proposition in that very Krantz, and so cause you to accept the existence of contradictions (Gegensätze). This is a secret alternative win condition to your market here, https://manifold.markets/Krantz/will-anyone-write-an-argument-that?r=Q2h1bWNodWx1bQ and to that effect I'm currently asking diamaterialists to bet while it's still low so we can all profit together, found the Manifold Red Army, and start to clear away so much metaphysical nonsense on manifold.markets.

@Krantz Though I have several projects on the backburner, so please don't be shy of reminding me. It is also too early to place a significant bet on https://manifold.markets/IsaacKing/will-manifold-be-taken-over-by-any?r=Q2h1bWNodWx1bQ.

@Chumchulum This is the system I'm referring to.

https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6

Try to add a numbered series of syllogisms that lead me into changing some belief I have (something more fundamental than a clerical error).

Krantz Mechanism Demonstration
This prediction is aimed at demonstrating the function of the krantz mechanism. The krantz mechanism is an abstract format for storing information. Similar to 'the set of integers', it does not exist in the physical world. To demonstrate the set of integers, we make physical approximations of the abstract idea by writing them down. Humans have created symbols that point to these elements, shared them with each other (to ensure the process of decentralization), and agreed to use the same ones so nobody gets confused. To demonstrate the krantz mechanism, we make a symbolic approximation of the abstract idea by writing down elements of the form ('Any formal proposition.', confidence=range(0-1), value=range(0-1)).

My claim is that the krantz mechanism is an operation that can be applied to approximate a person's internal model of the world. Much as LLMs can infinitely scale to approximate a person's model of the language itself, krantz aims to compress the logical relation of observable truths that language points at. For this demonstration, I will not be using value=range(0-1) and will only be using ('This is a proposition.', confidence=range(.01-.99)) (due to limitations of the current Manifold system). If I were working at Manifold, I would recommend creating a secondary metric for expressing the degree to which the 'evaluation of confidence' is valuable. This will later play a critical role in the decentralization of attention.

A proposition can be any expression of language that points to a discrete truth about the user's internal world model. Analytic philosophers write arguments using propositions. Predictions are a subset of propositions. Memes and other forms of visual expression can also be true or false propositions (they can point to an expression of the real state of the observable Universe). Things that are not discrete propositions: expressions that contain multiple propositions, blog posts, most videos, or simple ideas like 'liberty' or 'apple'.

Each proposition (first element of the set) must have these properties:

(1) Is language. A language can be very broad in the abstract sense of the krantz mechanism, but for this posting we will restrict language to a string of characters that points to a proposition or logical relation of propositions.

(2) Is true or false. Can have a subjective confidence assigned to it by an observer.

(3) Has value. Represents a market evaluation of how important that idea is to society. This is distinct from the value the idea has specifically to the user. It is aimed to represent roughly what a member of society would be willing to pay for that truth to be well accepted by society. One example of this would be taxes. Ideally, we pay our taxes because we want someone to establish and become familiar with the issues in our environment that are important to us, make sure people know how to fix them, make sure society will reward them if they do, and make sure society understands the need to reward the individuals that do resolve the problems. We pay our taxes so people will do things for us. We do this on social media with likes and shares. We give up attention to the algorithm to endorse and support other ideas in society because we believe in investing value into directing attention to them. We do this in education and professional careers. Our economy is driven because it rewards those that succeed in figuring out how to do what society needed done and in directing the energy to do it. We give our children rewards for doing their homework because it is important for engineers to understand math. Soon all that will be left is learning and verifying where to direct the abundant energy to accomplish the tasks that we should. It is a way of pointing the cognitive machinery of the collective consciousness.

Propositions as logical relations: Since propositions can be written as conditionals, like 'If Socrates is a man, then Socrates is mortal.', or as nested conditionals, like 'If all men are mortal, then "If Socrates is a man, then Socrates is mortal."', it follows that the work of analytic philosophers can be defined within a theoretical object like the krantz mechanism. For example, every element of the Tractatus is a member of the krantz set. Every logical relation defined by the outline of the Tractatus is a member of the krantz set. As a user, you can browse through the Tractatus, assign a confidence and value to each element (explicit and conditional), and create a market evaluation of the ideas contained. If hundreds of other philosophers created works like the Tractatus (but included confidences and values), then we would be able to compile their lists together into a collective market evaluation of ideas. A collective intelligence. Society seemed to enjoy continental philosophy more, so we ended up with LLMs instead of krantz functions.

It is important to note that philosophy is quite a wide domain. The content in the krantz set can range from political opinions, to normative values, to basic common-sense facts (the domain approximated by CYC), to arguments about the forefront of technology and what future we should pursue. We could solve philosophy. We could define our language. If @EliezerYudkowsky wanted to insert his full Tractatus of arguments for high p(doom) into such a system, then we could create a market mechanism that rewards only the individuals that either (1) identify a viable analytic counterargument that achieves greater market support or (2) align with his arguments (because, assuming they are complete and sound, it would allow Eliezer to exploit the market contradiction between propositions that logically imply each other). For example, if Bob denies 'Socrates is mortal.' but has accepted 'Socrates is a man.' and 'All men are mortal.', both with confidence .99, then he will create a market variance by wagering anything lower than the product of those confidences. So, why would Bob wager in the market of 'Socrates is mortal.'? Liquidity. Somebody that wants Bob to demonstrate to society that he believes 'Socrates is mortal.' injected it.

Limitations of the current incentive structure in Manifold, and what I'd recommend considering changing: There is a fundamental difference between Manifold and Metaculus. The difference is in whether or not capital needs to be put down in order to earn points. Metaculus embraces a mechanism that is beneficial for the user with no capital but insight into the truth. Manifold uses a mechanism that requires capital investment but attracts more participation. Both concerns can be addressed by handling liquidity injection as the primary driver of incentives. When a user wants to inject capital into a market, they can inject liquidity into particular propositions directly (this could also be done by assigning a value (0-1) and distributing liquidity proportionally across propositions from a general supply). That liquidity is dispersed evenly across either all or a select group of users as funds that can be wagered only on that proposition. Think of the liquidity as a 'proxy bet' the house (in this case, the individuals injecting liquidity into the market) is letting the user place on what the market confidence will trend to. If the user fails to place a wager, the free bet may be retracted if liquidity is withdrawn from the fund. If the user places a wager and loses, the user can no longer place any free wagers on that proposition (unless further liquidity is contributed, which would then allow the user to freely invest the difference), but may still choose to wager their own funds up to the amount allocated for the free wager. If a user places a proxy wager for 100 and the market moves to where they have a value of 1000, they can choose to sell their shares to receive 900 credits and reimburse the proxy liquidity to be used in the future. In other words, if you have 500 mana and 10 people, instead of injecting 500 mana of liquidity into a market that 10 people will be incentivized to wager their own funds on, we should give 50 mana in 'proxy wagers' to each person, with the expectation that they will have to invest cognitive labor to earn a profit that they get to retain. The two important principles here are that the liquidity injections (which can be contributed by any users) are (1) what determine the ceiling on initial investment in a proposition and (2) the extent of the losses the liquidity will cover.

Overall, there are many aspects of this demonstration that would have to come together to provide a true approximation of what I'm trying to convince the open-source community to build.

1. The function of the system would have to exist decentrally. This could happen if we truly considered the krantz mechanism an abstract function that various competing markets compete to align. Much as there are different sports-betting applications with different interfaces individuals can choose to use, the actual 'sports odds' are distributed across a network of competitive forecasters and are kept accurate by the market.

2. Each account would need to be humanity-verified. This is the biggest hurdle to overcome. It is important to understand that this would not require restricting users from using bots. It would only restrict them to having one account that 'speaks on their behalf'. In other words, as long as we don't have individuals with multiple accounts, we can hold the individuals accountable for the actions of their bot.

3. It would take an incredible investment of liquidity to see the scaling effects of such a market. Enough liquidity to incentivize everyone to evaluate/wager on the portions of their own Tractatus that you are willing to pay them to think about.

In general, the purpose of this demonstration is to show people a way to directly incentivize others to realize what they might want to think about, by providing game-theoretic market incentives that require them to wager on their beliefs. I've written several other questions that demonstrate aspects of this project.

Examples of how we might use this process to align constitutions:
https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6
https://manifold.markets/Krantz/if-the-work-between-anthropic-and-t?r=S3JhbnR6

Wagers on whether a process like this is viable:
https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
https://manifold.markets/Krantz/this-is-a-solution-to-alignment?r=S3JhbnR6

A paraphrase of the process of recommending propositions to wager on:
https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6
https://manifold.markets/Krantz/polls-should-able-to-function-like?r=S3JhbnR6
https://manifold.markets/Krantz/define-a-function-that-converts-cap?r=S3JhbnR6

Attention mechanisms:
https://manifold.markets/Krantz/what-is-the-most-important-question?r=S3JhbnR6
https://manifold.markets/Krantz/what-topicsissueevent-should-the-ge?r=S3JhbnR6
https://manifold.markets/Krantz/what-person-should-have-the-most-co?r=S3JhbnR6
https://manifold.markets/Krantz/what-will-i-believe-is-the-most-imp?r=S3JhbnR6
https://manifold.markets/Krantz/who-are-you-aligned-with?r=S3JhbnR6
https://manifold.markets/Krantz/which-will-be-voted-the-most-import?r=S3JhbnR6
https://manifold.markets/Krantz/guessing-game-what-propositions-doe?r=S3JhbnR6
https://manifold.markets/Krantz/what-will-i-believe-is-the-most-imp?r=S3JhbnR6

Please consider finding an argument you think is important for people to understand and map it. I personally think it would be amazing if predominant philosophers could offer other philosophers they disagree with 'free bets' to wager on specific crux issues. It would be a function that turns capital into attention.

@manifold - Please consider implementing these changes. I'd love for folks that are capable of writing these sorts of arguments to earn a living by writing them, while at the same time providing individuals with no capital the opportunity to earn something for paying attention to them.

This market will not resolve and is simply intended to serve as a demonstration of a mechanism for allowing philosophers to wager on what language ought to be, in a manner that uses the market to lead them to their cruxes.

*This demonstration does not include the algorithm I keep talking about. Its function is to evaluate the variances between user 'Tractati' (plural of Tractatus??) and recommend inferential trajectories that efficiently surface contradictions within the system.
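A minimal sketch of the element format and the Bob example above. The tuple layout follows the ('proposition', confidence, value) description; the variance check is one reading of how implied confidences surface contradictions, not a definitive implementation:

```python
# Sketch: krantz elements as ('proposition', confidence, value) tuples,
# plus the market-variance check from the Bob example.
elements = {
    "all_men_mortal":  ("All men are mortal.", 0.99, 0.5),
    "socrates_man":    ("Socrates is a man.", 0.99, 0.5),
    "socrates_mortal": ("Socrates is mortal.", 0.60, 0.5),  # Bob's stated confidence
}

# Bob's accepted premises imply a confidence near their product (~0.98).
implied = elements["all_men_mortal"][1] * elements["socrates_man"][1]
stated = elements["socrates_mortal"][1]

if stated < implied:
    # Bob is wagering below the product of his own accepted confidences,
    # creating a variance that others can profitably trade against.
    print(f"Market variance: stated {stated} < implied {implied:.4f}")
```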

@Chumchulum I created that prediction as an incentive to construct an argument. The mana I wagered on 'no' is intended as a game-theoretic donation for completing the task.
