Who will cheerfully explain to @Krantz why his proposed solution to alignment of ASI isn't valid?
• 2030: 11%
• Holly Elmore: 10%
• Cat Woods: 10%
• Robin Hanson: 9%
• Liron Shapira: 8%
• Rosie Campbell: 7%
• Nathan Young: 5%
• Roman Yampolskiy: 4%
• Rob Miles: 4%
• Sarah Constantin: 4%
• Ronny Fernandez: 4%
• Allison Duettmann: 4%
• Agnes Callard: 2%
• Amanda Askell: 2%
• Zvi Mowshowitz: 1.9%
• Aella: 1.6%
• Nate Soares: 1.6%
• Other

Since 2010, I've been trying to securely share, with folks like Doug Lenat, Danny Hillis, Ben Goertzel, Balaji Srinivasan, and Robin Hanson, a solution to the scalable alignment of a decentralized symbolic artificial collective intelligence that can beat machine learning to the punch. I really need to talk to Eliezer.

https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6

https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6

Most of my predictions are attempts to find someone to either (1) help me share infohazardous research securely or (2) help me understand why the fate of the world doesn't rest on whether this research safely makes it to the public domain.

I will not make any profit off this prediction. I will only be feeding it names and mana to incentivize cheerful charitability. If I add you to this list, it is because I respect you. I don't have much money, but this is the most important place I can find to put it. I genuinely believe I am trying to save the world.

It will resolve to the first person who takes the time to explain to me why my approach is invalid. I hope someone does. I will give any profit I make from this prediction to the person (other than me) who invests the most into it via bets or liquidity.

Thank you for helping me learn. 🤓

  • Update 2025-07-31 (PST) (AI summary of creator comment): The creator has clarified the process for how one might successfully convince him:

    • One should use his argument system to challenge a proposition on which you and the creator disagree.

    • The creator noted that nobody has fully used the system for this purpose yet.

  • Update 2025-08-01 (PST) (AI summary of creator comment): The creator has clarified that you can use any method to convince them their solution is invalid. Using the creator's specific argument system is not a requirement for the market to resolve.

  • Update 2025-08-02 (PST) (AI summary of creator comment): To resolve this market, one must convince the creator that his proposed process for scaling symbolic AI does not scale.

Arguments about other potential failure modes of the system, such as its economic viability, will not be sufficient for resolution.

  • Update 2025-08-03 (PST) (AI summary of creator comment): The creator has specified that his claim is that a "truth economy" can scale to a point where it generates > 50% of GDP.

At this point, he argues, society would have the coordination and understanding necessary to halt the scaling of dangerous Machine Learning. To convince him his solution is invalid, one would have to argue against this premise.


(Copied from other market)
Credit where it is due for developing enough humility to acknowledge that being disproved is indeed a possibility. Could this be the start of the krantz redemption arc?

If I convinced you that the alignment tax imposed by the truth economy was sufficiently high that a misaligned AGI controlling 49% of GDP would outcompete a truth economy controlling 51% of GDP, would that cause this question to resolve?

@LoganZoellner No. I'm not sure what an alignment tax has to do with anything I'm talking about. I'm claiming there is a process that exists which could scale symbolic AI fast enough to beat ML to the punch.

The goal would be to convince me that process doesn't scale. So far, nobody has really demonstrated they understand that process.

@Krantz

> I'm claiming there is a process that exists which could scale symbolic AI fast enough to beat ML to the punch.

It sounds like you are claiming that your process (the truth economy) can outcompete an alternative process (scaled-up deep learning) from some starting point (which I inferred was 50% of GDP, based on your other post).

Is that the actual claim, or is it also necessary to ban traditional ML?

@LoganZoellner I'm claiming that if we scaled up a truth economy, such that most individuals understood they could earn a living by learning about and publicly voting on particular issues, in a way that lawyers could use in a petition to force actions that stop scaling (this process can also be applied to any other normative changes we might want to make), then that function would have more economic inertia than the economic force motivating further scaling of ML. If people understand that the only job left is to produce alignment data, and that it is in their best interest to protect that job from artificial capture, they ought to coordinate to produce the game-theoretic conditions to prevent it.

Whether or not a truth economy can "outcompete" ML is sort of a category error. A truth economy doesn't really perform the same function as an ML-based AI.

If we wanted to compare the component that's important, we should look at the GDP produced per unit of energy put into the system.

ML runs on oil.

Truth economy runs on human cognitive labor.

At the end of the day, humanity needs to decide who is going to align AI. (1) The guys with all the oil or (2) a truly free and open market that philosophers openly compete on. I really don't want my kids ending up in a 1984 novel.

My claim is that if we scale a truth economy to the point where it is creating > 50% of GDP, then by that time the world will understand the dangers of scaling ML well enough, will see the opportunity that exists for people if they protect the job of aligning AI, and will have the coordination ability to secure the future they want.

I added my name. I engaged with you in a pretty genuine way about this a few months back. I came away convinced you are sincere, but deluded.

I'd have to get familiar with your stuff again, which would take a large incentive at this point. But with some patience, I think I could convince you to drop it, or at least show you why Yudkowsky would not accept your system.

@robm thanks, I really appreciate it!


Does this resolve to "nobody" in 2030 if nobody has convinced you?

If not Destiny, who should I add?

@Chumchulum Charitable analytic philosophers. Feel free to add yourself. You seem interested, and curiosity is what most people are lacking. Thanks for engaging with the mechanism. Do you understand the basic concept? Try finding a proposition we disagree on and writing an argument in the system that challenges it. Nobody has actually used it fully to do that yet.

> write an argument in the system that challenges it

I don't like this part. You're expecting someone to study and understand your system, disagree with it*, and then limit themselves to your system while convincing you it's wrong?

Let them use any method they have to convince you. As long as they demonstrate enough understanding of your system, that should be enough.

* There's a branch at this step where they could agree instead of disagree, but in that case the rest of my point doesn't matter

@robm You can convince me however you want. I just wanted @Chumchulum to write an argument to demonstrate they understood the process.

@robm Also, I can see how you wouldn't like the part about restricting yourself to the system to convince me, but the expectation that you use the system, so that I can verify you understand how it works, seems pretty reasonable to me. One of the propositions in my formal argument is "it is possible to build a machine that pays people to demonstrate they've understood something". I'm trying to get you to demonstrate that you understand the system within the system. I will give you money to do that. This should compel you to accept at least that proposition.

@Krantz Does "write an argument in the system" mean a numbered series of syllogisms, as in this Krantz system? https://manifold.markets/Krantz/which-proposition-will-be-denied?r=Q2h1bWNodWx1bQ

I intend to write such an argument about a proposition in that very Krantz market, and so cause you to accept the existence of contradictions (Gegensätze). This is a secret alternative win condition to your market here, https://manifold.markets/Krantz/will-anyone-write-an-argument-that?r=Q2h1bWNodWx1bQ and to that effect I'm currently asking diamaterialists to bet while it's still low so we can all profit together, found the Manifold Red Army, and start to clear away so much metaphysical nonsense on manifold.markets.

@Krantz I have several projects on the back burner, though, so please don't be shy about reminding me. It is also too early to place a significant bet on https://manifold.markets/IsaacKing/will-manifold-be-taken-over-by-any?r=Q2h1bWNodWx1bQ.

@Chumchulum This is the system I'm referring to.

https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6

Try to add a numbered series of syllogisms that lead me into changing some belief I have (something more fundamental than a clerical error).

Krantz Mechanism Demonstration
This prediction is aimed at demonstrating the function of the krantz mechanism. The krantz mechanism is an abstract format for storing information. Similar to 'the set of integers', it does not exist in the physical world. To demonstrate the set of integers, we make physical approximations of the abstract idea by writing them down. Humans have created symbols that point to these elements, have shared them with each other (to ensure the process of decentralization), and have agreed to use the same ones so nobody gets confused.

To demonstrate the krantz mechanism, we make a symbolic approximation of the abstract idea by writing down elements of the form ('Any formal proposition.', confidence=range(0-1), value=range(0-1)) (a code sketch of this format follows below). My claim is that the krantz mechanism is an operation that can be applied to approximate a person's internal model of the world. Much as LLMs can scale indefinitely to approximate a person's model of the language itself, krantz aims to compress the logical relations of the observable truths that language points at.

For this demonstration, I will not be using value=range(0-1) and will only be using ('This is a proposition.', confidence=range(.01-.99)) (due to limitations of the current Manifold system). If I were working at Manifold, I would recommend creating a secondary metric for expressing the degree to which the 'evaluation of confidence' is valuable. This will later play a critical role in the decentralization of attention.

A proposition can be any expression of language that points to a discrete truth about the user's internal world model. Analytic philosophers write arguments using propositions. Predictions are a subset of propositions. Memes and other forms of visual expression can also be true or false propositions (they can point to an expression of the real state of the observable Universe). Things that are not discrete propositions: expressions that contain multiple propositions, blog posts, most videos, or simple ideas like 'liberty' or 'apple'.

Each proposition (first element of the set) must have these properties:

(1) Is language. A language can be very broad in the abstract sense of the krantz mechanism, but for this posting, we will restrict language to a string of characters that points to a proposition or logical relation of propositions.

(2) Is true or false. Can have a subjective confidence assigned to it by an observer.

(3) Has value. Represents a market evaluation of how important that idea is to society. This is distinct from the value the idea has specifically to the user. It is aimed to represent roughly what a member of society would be willing to pay for that truth to be well accepted by society. One example of this would be taxes. Ideally, we pay our taxes because we want someone to establish and become familiar with the issues in our environment that are important to us, make sure people know how to fix them, make sure society will reward them if they do, and make sure society understands the need to reward the individuals that do resolve the problems. We pay our taxes so people will do things for us. We do this on social media with likes and shares. We give up attention to the algorithm to endorse and support other ideas in society because we believe in investing value into directing attention to them. We do this in education and professional careers. Our economy is driven because it rewards those that succeed in figuring out how to do what society needed done and direct the energy to do it.
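To make the element format above concrete, here is a minimal sketch of one possible representation. This is a hypothetical illustration only; the class name, field names, and validation are my assumptions, not part of the proposal:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KrantzElement:
    """One element of a krantz set: a proposition paired with a
    subjective confidence and a market-assigned value, both in [0, 1]."""
    proposition: str   # any discrete, truth-apt statement
    confidence: float  # the user's subjective probability that it is true
    value: float       # market evaluation of its importance to society

    def __post_init__(self):
        # Both scores live on the unit interval, per the definitions above.
        for name in ("confidence", "value"):
            if not 0.0 <= getattr(self, name) <= 1.0:
                raise ValueError(f"{name} must be in [0, 1]")

# A user's approximated world model is then just a set of such elements:
world_model = {
    KrantzElement("Socrates is a man.", confidence=0.99, value=0.2),
    KrantzElement("All men are mortal.", confidence=0.99, value=0.2),
}
```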
We give our children rewards for doing their homework because it is important for engineers to understand math. Soon all that will be left is learning and verifying where to direct the abundant energy to accomplish the tasks that we should. It is a way of pointing the cognitive machinery of the collective consciousness.

Propositions as logical relations: Since propositions can be written as conditionals, like 'If Socrates is a man, then Socrates is mortal.', or as nested conditionals, like 'If all men are mortal, then "If Socrates is a man, then Socrates is mortal."', it follows that the work of analytic philosophers can be defined within a theoretical object like the krantz mechanism. For example, every element of the Tractatus is a member of the krantz set. Every logical relation defined by the outline of the Tractatus is a member of the krantz set. As a user, you can browse through the Tractatus, assign a confidence and value to each element (explicit and conditional), and create a market evaluation of the ideas contained. If hundreds of other philosophers created works like the Tractatus (but included confidences and values), then we would be able to compile their lists together into a collective market evaluation of ideas. A collective intelligence. Society seemed to enjoy continental philosophy more, so we ended up with LLMs instead of krantz functions.

It is important to note that philosophy is quite a wide domain. The content in the krantz set can range from political opinions, to normative values, to basic common-sense facts (the domain approximated by CYC), to arguments about the forefront of technology and what future we should pursue. We could solve philosophy. We could define our language.

If @EliezerYudkowsky wanted to insert his full Tractatus of arguments for high p(doom) into such a system, then we could create a market mechanism that rewards only the individuals that either (1) identify a viable analytic counterargument that achieves greater market support or (2) align with his arguments (because, assuming they are complete and sound, it would allow Eliezer to exploit the market contradiction between propositions that logically imply each other). For example, if Bob denies 'Socrates is mortal.' but has accepted 'Socrates is a man.' and 'All men are mortal.', both with confidence(.99), then he will create a market variance by wagering on anything lower than the product of those confidences (a worked sketch follows below). So, why would Bob wager in the market of 'Socrates is mortal.'? Liquidity. Somebody that wants Bob to demonstrate to society that he believes 'Socrates is mortal.' injected it.

Limitations of the current incentive structure in Manifold, and what I'd recommend considering to change: There is a fundamental difference between Manifold and Metaculus. The difference is in whether or not capital needs to be put down in order to earn points. Metaculus embraces a mechanism that is beneficial for the user with no capital but insight into the truth. Manifold uses a mechanism that requires capital investment but attracts more participation. Both concerns can be addressed by handling liquidity injection as the primary driver of incentives. When a user wants to inject capital into a market, they can inject liquidity into particular propositions directly (this could also be done by assigning a value(0-1) and distributing liquidity proportionally across propositions from a general supply).
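To make the Bob example concrete, here is a small worked sketch, assuming (as the example above implicitly does) that the premise confidences are multiplied as if independent; the function name is a hypothetical illustration:

```python
import math

def conclusion_floor(premise_confidences):
    """Lowest confidence a coherent bettor can assign to a conclusion
    entailed by the conjunction of the premises: the product of the
    premise confidences."""
    return math.prod(premise_confidences)

# Bob accepts both premises at confidence 0.99:
floor = conclusion_floor([0.99, 0.99])  # 0.9801

# Wagering on 'Socrates is mortal.' at any confidence below that floor
# creates the exploitable market variance described above.
bob_wager = 0.90
print(bob_wager < floor)  # True: Bob's positions contain a contradiction
```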
That liquidity is dispersed evenly across either all or a select group of users as funds that can be wagered only on that proposition. Think of the liquidity as a 'proxy bet' the house (in this case, the individuals injecting liquidity into the market) is letting the user place on what the market confidence will trend to. If the user fails to place a wager, the free bet may be retracted if liquidity is withdrawn from the fund. If the user places a wager and loses, the user can no longer place any free wagers on that proposition (unless further liquidity is contributed, which would then allow the user to freely invest the difference), but may still choose to wager their own funds up to the amount allocated for the free wager. If a user places a proxy wager of 100 and the market moves to where they have a value of 1000, they can choose to sell their shares to receive 900 credits and reimburse the proxy liquidity to be used in the future (an accounting sketch follows below). In other words, if you have 500 mana and 10 people, instead of injecting 500 mana of liquidity into a market that 10 people will be incentivized to wager their own funds on, we should give 50 mana in 'proxy wagers' to each person, with the expectation that they will have to invest cognitive labor to earn a profit that they get to retain.

The two important principles here are that the liquidity injections (which can be contributed by any users) are (1) what determine the ceiling initial investment in a proposition and (2) the extent of the losses the liquidity will cover.

Overall, there are many aspects of this demonstration that would have to come together to provide a true approximation of what I'm trying to convince the open source community to build.

1. The function of the system would have to exist decentrally. This could happen if we truly considered the krantz mechanism an abstract function that various competing markets compete to align. Much as there are different sports betting applications with different interfaces individuals can choose to use, the actual 'sports odds' are distributed across a network of competitive forecasters and are kept accurate by the market.

2. Each account would need to be humanity-verified. This is the biggest hurdle to overcome. It is important to understand that this would not require restricting users from using bots. It would only restrict them to having one account that 'speaks on their behalf'. In other words, as long as we don't have individuals with multiple accounts, we can hold the individuals accountable for the actions of their bots.

3. It would take an incredible investment of liquidity to see the scaling effects of such a market. Enough liquidity to incentivize everyone to evaluate/wager on the portions of their own Tractatus that you are willing to pay them to think about.

In general, the purpose of this demonstration is to show people a way to directly incentivize others to realize what they might want to think about, by providing game-theoretic market incentives that require them to wager on their beliefs.
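As a concreteness check on the proxy-wager accounting described above (the 100-grows-to-1000 example), here is a minimal sketch; the function name and return convention are my assumptions:

```python
def settle_proxy_position(stake, sale_value):
    """Split the proceeds when a proxy position is sold: the original
    stake is reimbursed to the liquidity pool, and the user keeps any
    profit. A position worth less than the stake returns everything to
    the pool, since the user risked none of their own funds."""
    to_pool = min(sale_value, stake)
    to_user = sale_value - to_pool
    return to_user, to_pool

# The description's example: a 100-mana proxy wager grows to 1000.
to_user, to_pool = settle_proxy_position(stake=100, sale_value=1000)
print(to_user, to_pool)  # 900 100
```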
I've written several other questions that demonstrate aspects of this project.

Examples of how we might use this process to align constitutions:
https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6
https://manifold.markets/Krantz/if-the-work-between-anthropic-and-t?r=S3JhbnR6

Wagers on whether a process like this is viable:
https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
https://manifold.markets/Krantz/this-is-a-solution-to-alignment?r=S3JhbnR6

A paraphrase of the process of recommending propositions to wager on:
https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6
https://manifold.markets/Krantz/polls-should-able-to-function-like?r=S3JhbnR6
https://manifold.markets/Krantz/define-a-function-that-converts-cap?r=S3JhbnR6

Attention mechanisms:
https://manifold.markets/Krantz/what-is-the-most-important-question?r=S3JhbnR6
https://manifold.markets/Krantz/what-topicsissueevent-should-the-ge?r=S3JhbnR6
https://manifold.markets/Krantz/what-person-should-have-the-most-co?r=S3JhbnR6
https://manifold.markets/Krantz/what-will-i-believe-is-the-most-imp?r=S3JhbnR6
https://manifold.markets/Krantz/who-are-you-aligned-with?r=S3JhbnR6
https://manifold.markets/Krantz/which-will-be-voted-the-most-import?r=S3JhbnR6
https://manifold.markets/Krantz/guessing-game-what-propositions-doe?r=S3JhbnR6
https://manifold.markets/Krantz/what-will-i-believe-is-the-most-imp?r=S3JhbnR6

Please consider finding an argument you think is important for people to understand and mapping it. I personally think it would be amazing if predominant philosophers could offer other philosophers they disagree with 'free bets' to wager on specific crux issues. It would be a function that turns capital into attention.

@manifold - Please consider implementing these changes. I'd love for folks that are capable of writing these sorts of arguments to earn a living by writing them, while at the same time providing individuals with no capital the opportunity to earn something for paying attention to them.

This market will not resolve; it is simply intended to serve as a demonstration of a mechanism for allowing philosophers to wager on what language ought to be, in a manner that uses the market to lead them to their cruxes.

*This demonstration does not include the algorithm I keep talking about. Its function is to evaluate the variances between user 'Tractati' (plural of Tractatus??) and recommend inferential trajectories that efficiently surface contradictions within the system.

@Chumchulum I created that prediction as an incentive to construct an argument. The mana I wagered on 'no' is intended to be a game-theoretic donation for completing the task.
