Since 2010, I've been trying to securely share, with folks like Doug Lenat, Danny Hillis, Ben Goertzel, Balaji Srinivasan, and Robin Hanson, a solution to the scalable alignment of a decentralized symbolic artificial collective intelligence that can beat machine learning to the punch. I really need to talk to Eliezer.
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6
Most of my predictions are attempts to find someone to either (1) help me share infohazardous research securely or (2) help me understand why the fate of the world doesn't rest on whether this research safely makes it to the public domain.
I will not make any profit off this prediction. I will only be feeding it names and mana to incentivize cheerful charitability. If I add you to this list, it is because I respect you. I don't have much money, but this is the most important place I can find to put it. I genuinely believe I am trying to save the world.
It will resolve to the first person who takes the time to explain to me why my approach is invalid. I hope someone does. I will give any profit I make from this prediction to the person (other than me) who invests the most into it via bets or liquidity.
Thank you for helping me learn. 🤓
Update 2025-07-31 (PST) (AI summary of creator comment): The creator has clarified the process for how one might successfully convince him:
One should use his argument system to challenge a proposition on which you and the creator disagree.
The creator noted that nobody has fully used the system for this purpose yet.
Update 2025-08-01 (PST) (AI summary of creator comment): The creator has clarified that you can use any method to convince them their solution is invalid. Using the creator's specific argument system is not a requirement for the market to resolve.
Update 2025-08-02 (PST) (AI summary of creator comment): To resolve this market, one must convince the creator that his proposed process for scaling symbolic AI does not scale.
Arguments about other potential failure modes of the system, such as its economic viability, will not be sufficient for resolution.
Update 2025-08-03 (PST) (AI summary of creator comment): The creator has specified that his claim is that a "truth economy" can scale to a point where it generates > 50% of GDP.
At this point, he argues, society would have the coordination and understanding necessary to halt the scaling of dangerous Machine Learning. To convince him his solution is invalid, one would have to argue against this premise.
@LoganZoellner No. I'm not sure what an alignment tax has to do with anything I'm talking about. I'm claiming there is a process that exists which could scale symbolic AI fast enough to beat ML to the punch.
The goal would be to convince me that process doesn't scale. So far, nobody has really demonstrated they understand that process.
@Krantz
> I'm claiming there is a process that exists which could scale symbolic AI fast enough to beat ML to the punch.
It sounds like you are claiming that your process (the truth economy) can outcompete an alternative process (scaled-up deep learning) from some starting point (which I inferred was 50% of GDP, based on your other post).
Is that the actual claim, or is it also necessary to ban traditional ML?
@LoganZoellner I'm claiming that if we scaled up a truth economy, such that most individuals understood they could earn a living by learning about and publicly voting on particular issues in a way that lawyers can use in a petition to force actions that stop scaling (this process can also be applied to any other normative change we might want to make), then that function would have more economic inertia than the economic force motivating further scaling of ML. If people understand that the only job left is to produce alignment data, and that it is in their best interest to protect that job from artificial capture, they ought to coordinate to produce the game-theoretic conditions to prevent it.
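As a rough sketch of that voting-to-petition pipeline, under the assumption that coordination here means public positions accumulating toward an actionable threshold (the class, field names, and threshold below are illustrative placeholders, not the actual mechanism):

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    """A public claim people can study and register a position on."""
    text: str
    votes_for: int = 0
    votes_against: int = 0

    def vote(self, supports: bool) -> None:
        if supports:
            self.votes_for += 1
        else:
            self.votes_against += 1

    def petition_ready(self, threshold: float = 0.5) -> bool:
        """A petition becomes actionable once public support crosses the threshold."""
        total = self.votes_for + self.votes_against
        return total > 0 and self.votes_for / total > threshold

# Usage: citizens register positions; lawyers act once support is sufficient.
p = Proposition("Halt further scaling of frontier ML systems.")
for supports in (True, True, True, False):
    p.vote(supports)
print(p.petition_ready())  # True (3/4 > 0.5)
```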
Whether or not a truth economy can "outcompete" ML is sort of a category error. A truth economy doesn't really perform the same function as an ML-based AI.
If we wanted to compare the component that's important, we should look at the GDP produced per unit of energy put into the system.
ML runs on oil.
Truth economy runs on human cognitive labor.
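A minimal worked form of that comparison metric, with entirely placeholder inputs since no real figures are given here:

```python
def gdp_per_unit_energy(gdp_produced: float, energy_consumed: float) -> float:
    """The comparison metric named above: economic output per unit of input energy."""
    return gdp_produced / energy_consumed

# Entirely made-up numbers; the argument turns on which ratio ends up
# larger, not on these particular values.
ml_efficiency = gdp_per_unit_energy(gdp_produced=1.0e12, energy_consumed=4.0e11)
human_efficiency = gdp_per_unit_energy(gdp_produced=1.0e12, energy_consumed=2.0e11)
print(ml_efficiency < human_efficiency)  # True under these assumed inputs
```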
At the end of the day, humanity needs to decide who is going to align AI: (1) the guys with all the oil, or (2) a truly free and open market that philosophers openly compete in. I really don't want my kids ending up in a 1984 novel.
My claim is that if we scale a truth economy to the point where it is creating > 50% of GDP, then by that time the world will understand the dangers of scaling ML well enough, will see the opportunity that exists for them if they protect the job of aligning AI, and will have the coordination ability to secure the future they want.
I added my name. I engaged with you in a pretty genuine way about this a few months back. I came away convinced you are sincere, but deluded.
I'd have to get familiar with your stuff again, which would take a large incentive at this point. But with some patience, I think I could convince you to drop it, or at least show you why Yudkowsky would not accept your system.
@Chumchulum Charitable analytic philosophers. Feel free to add yourself. You seem interested and curiosity is what most people are lacking. Thanks for engaging with the mechanism. Do you understand the basic concept? Try finding a proposition we disagree on and write an argument in the system that challenges it. Nobody has actually used it fully to do that yet.
> write an argument in the system that challenges it
I don't like this part. You're expecting someone to study and understand your system, disagree with it*, and then limit themselves to your system while convincing you it's wrong?
Let them use any method they have to convince you. As long as they demonstrate enough understanding of your system, that should be enough.
* There's a branch at this step where they could agree instead of disagree, but in that case the rest of my point doesn't matter
@robm You can convince me however you want. I just wanted @Chumchulum to write an argument to demonstrate they understood the process.
@robm Also, I can see how you wouldn't like the part about restricting yourself to the system to convince me, but the expectation that you use the system, so that I can verify you understand how it works, seems pretty reasonable to me. One of the propositions in my formal argument is "it is possible to build a machine that pays people to demonstrate they've understood something". I'm trying to get you to demonstrate that you understand the system within the system. I will give you money to do that. This should compel you to accept at least that proposition.
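A toy sketch of that quoted proposition, assuming (purely for illustration) that demonstrating understanding can be reduced to matching positions against a verification key; the function and proposition names below are hypothetical:

```python
def comprehension_payout(positions: dict[str, bool],
                         verification_key: dict[str, bool],
                         reward_per_item: float = 1.0) -> float:
    """Toy stand-in for 'a machine that pays people to demonstrate they've
    understood something': pay for each proposition where the stated position
    matches the key. Real verification would need far more than key-matching."""
    return sum(reward_per_item
               for prop, stance in positions.items()
               if verification_key.get(prop) == stance)

key = {"P1: comprehension can be machine-verified": True,
       "P2: verified comprehension can be paid for": True}
user = {"P1: comprehension can be machine-verified": True,
        "P2: verified comprehension can be paid for": True}
print(comprehension_payout(user, key))  # 2.0
```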
@Krantz Does "write an argument in the system" mean a numbered series of syllogisms, as in this Krantz system? https://manifold.markets/Krantz/which-proposition-will-be-denied?r=Q2h1bWNodWx1bQ
I intend to write such an argument about a proposition in that very Krantz system, and so cause you to accept the existence of contradictions (Gegensätze). This is a secret alternative win condition to your market here, https://manifold.markets/Krantz/will-anyone-write-an-argument-that?r=Q2h1bWNodWx1bQ and to that effect I'm currently asking diamaterialists to bet while it's still low so we can all profit together, found the Manifold Red Army, and start to clear away so much metaphysical nonsense on manifold.markets.
@Krantz Though I have several projects on the backburner, so please don't be shy about reminding me. It is also too early to place a significant bet on https://manifold.markets/IsaacKing/will-manifold-be-taken-over-by-any?r=Q2h1bWNodWx1bQ.
@Chumchulum This is the system I'm referring to.
https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6
Try to add a numbered series of syllogisms that leads me to change some belief I have (something more fundamental than a clerical error).
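A minimal sketch of what such a numbered series of syllogisms could look like as data, with a shape-only check; the types and example claims are illustrative assumptions, not the actual system:

```python
from dataclasses import dataclass

@dataclass
class Step:
    number: int
    claim: str
    derived_from: list[int]  # empty list marks a premise

def well_formed(argument: list[Step]) -> bool:
    """Check only the shape of a 'numbered series of syllogisms': every
    step may cite nothing but earlier-numbered steps. Not a logic checker."""
    seen: set[int] = set()
    for step in argument:
        if any(ref not in seen for ref in step.derived_from):
            return False
        seen.add(step.number)
    return True

argument = [
    Step(1, "If comprehension can be verified, it can be paid for.", []),
    Step(2, "Comprehension can be verified.", []),
    Step(3, "Comprehension can be paid for.", [1, 2]),
]
print(well_formed(argument))  # True
```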
@Chumchulum I created that prediction as an incentive to construct an argument. The mana I wagered on 'no' is intended to be a game-theoretic donation for completing the task.