
This prediction is aimed at demonstrating the function of the krantz mechanism.
The krantz mechanism is an abstract format for storing information. Similar to 'the set of integers', it does not exist in the physical world. To demonstrate the set of integers, we make physical approximations of the abstract idea by writing them down. Humans have created symbols that point to these elements, shared them with each other (keeping the process decentralized), and agreed to use the same ones so nobody gets confused.
To demonstrate the krantz mechanism, we make a symbolic approximation of the abstract idea by writing down elements of the form ('Any formal proposition.', confidence=range(0-1), value=range(0-1)).
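As a sketch only (the type name `KrantzElement` and its field names are my own illustration, not part of the mechanism), an element of this form could be written down as:

```python
# Hypothetical sketch of one krantz element as described above:
# a proposition paired with a confidence and a value, both in [0, 1].
from dataclasses import dataclass


@dataclass(frozen=True)
class KrantzElement:
    proposition: str   # any formal proposition, expressed as language
    confidence: float  # subjective credence that it is true, in [0, 1]
    value: float       # market importance to society, in [0, 1]

    def __post_init__(self):
        for name in ("confidence", "value"):
            x = getattr(self, name)
            if not 0.0 <= x <= 1.0:
                raise ValueError(f"{name} must lie in [0, 1]")


e = KrantzElement("Socrates is mortal.", confidence=0.99, value=0.4)
```

The frozen dataclass just enforces the stated ranges; nothing about the mechanism itself depends on this representation.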
My claim is that the krantz mechanism is an operation that can be applied to approximate a person's internal model of the world. Much as LLMs can scale to approximate a person's model of the language itself, krantz aims to compress the logical relations among the observable truths that language points at.
For this demonstration, I will not be using value=range(0-1) and will only be using ('This is a proposition.', confidence=range(.01-.99)) (due to limitations of the current Manifold system). If I were working at Manifold, I would recommend creating a secondary metric for expressing the degree to which the 'evaluation of confidence' is valuable. This will later play a critical role in the decentralization of attention.
A proposition can be any expression of language that points to a discrete truth about the user's internal world model. Analytic philosophers write arguments using propositions. Predictions are a subset of propositions. Memes and other forms of visual expression can also be true or false propositions (they can point to an expression of the real state of the observable Universe).
Things that are not discrete propositions: Expressions that contain multiple propositions, blog posts, most videos, or simple ideas like 'liberty' or 'apple'.
Each proposition (first element of the set) must have these properties.
(1) Is language.
A language can be very broad in the abstract sense of the krantz mechanism, but for this posting, we will restrict language to a string of characters that points to a proposition or logical relation of propositions.
(2) Is true or false.
Can have a subjective confidence assigned to it by an observer.
(3) Has value.
Represents a market evaluation of how important that idea is to society. This is distinct from the value the idea has specifically to the user; it aims to represent roughly what a member of society would be willing to pay for that truth to be well accepted by society. Taxes are one example. Ideally, we pay our taxes because we want someone to become familiar with the issues in our environment that are important to us, make sure people know how to fix them, and make sure society rewards the individuals who do resolve them. We pay our taxes so people will do things for us. We do the same on social media with likes and shares: we give up attention to the algorithm to endorse other ideas in society because we believe in investing value into directing attention to them. We do it in education and professional careers: our economy is driven because it rewards those who succeed in figuring out how to do what society needed done and direct the energy to do it. We give our children rewards for doing their homework because it is important for engineers to understand math. Soon all that will be left is learning and verifying where to direct the abundant energy to accomplish the tasks that we should. Value is a way of pointing the cognitive machinery of the collective consciousness.
Propositions as logical relations:
Since propositions can be written as conditionals, like 'If Socrates is a man, then Socrates is mortal.' or as nested conditionals, like 'If all men are mortal, then 'If Socrates is a man, then Socrates is mortal.'.', it follows that the work of analytic philosophers can be defined within a theoretical object like the krantz mechanism. For example, every element of the Tractatus is a member of the krantz set. Every logical relation defined by the outline of the Tractatus is a member of the krantz set. As a user, you can browse through the Tractatus and assign a confidence and value to each element (explicit and conditional) and create a market evaluation of the ideas contained. If hundreds of other philosophers created works like the Tractatus (but included confidences and values), then we would be able to compile their lists together into a collective market evaluation of ideas. A collective intelligence.
Society seemed to enjoy continental philosophy more, so we ended up with LLMs instead of krantz functions.
It is important to note that philosophy is quite a wide domain. The content in the krantz set can range from political opinions, to normative values, to basic common sense facts (the domain approximated by CYC), to arguments about the forefront of technology and what future we should pursue. We could solve philosophy. We could define our language. If @EliezerYudkowsky wanted to insert his full Tractatus of arguments for high p(doom) into such a system, then we could create a market mechanism that rewards only the individuals that either (1) identify a viable analytic counterargument that achieves greater market support or (2) align with his arguments (because, assuming they are complete and sound, it would allow Eliezer to exploit the market contradiction between propositions that logically imply each other). For example, if Bob denies 'Socrates is mortal.' but has accepted 'Socrates is a man.' and 'All men are mortal.', both with confidence(.99), then he will create a market variance by wagering on anything lower than the product of those confidences. So, why would Bob wager in the market of 'Socrates is mortal.'? Liquidity.
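The arithmetic in Bob's example can be sketched as follows; `implied_floor` and `creates_variance` are hypothetical names for the product rule described above:

```python
# Sketch of the "market variance" check described above. Following the
# post's product rule: if Bob accepts each premise with high confidence,
# his confidence in the implied conclusion should not fall below the
# product of those premise confidences.
def implied_floor(premise_confidences):
    floor = 1.0
    for c in premise_confidences:
        floor *= c
    return floor


def creates_variance(conclusion_confidence, premise_confidences):
    # True when a stated confidence is exploitably below what the
    # accepted premises jointly imply.
    return conclusion_confidence < implied_floor(premise_confidences)


floor = implied_floor([0.99, 0.99])         # 0.9801
gap = creates_variance(0.95, [0.99, 0.99])  # True: Bob is exploitable
```

Here the floor for 'Socrates is mortal.' is 0.99 × 0.99 = 0.9801, so any stated confidence below that opens the exploitable gap the paragraph describes.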
Somebody that wants Bob to demonstrate to society that he believes 'Socrates is mortal' injected it.
Limitations of the current incentive structure in Manifold and what I'd recommend considering to change:
There is a fundamental difference between Manifold and Metaculus: whether capital must be put down in order to earn points. Metaculus embraces a mechanism that is beneficial for the user with no capital but insight into the truth. Manifold uses a mechanism that requires capital investment but attracts more participation. Both concerns can be addressed by making liquidity injection the primary driver of incentives.
When a user wants to inject capital into a market, they can inject liquidity into particular propositions directly (this could also be done by assigning a value(0-1) and distributing liquidity proportionally across propositions from a general supply). That liquidity is dispersed evenly across either all or a select group of users as funds that can be wagered only on that proposition. Think of the liquidity as a 'proxy bet' the house (in this case the individuals injecting liquidity into the market) is letting the user place on what the market confidence will trend to. If the user fails to place a wager, the free bet may be retracted if liquidity is withdrawn from the fund. If the user places a wager and loses, the user can no longer place any free wagers on that proposition (unless further liquidity is contributed which then would allow the user to freely invest the difference), but may still choose to wager their own funds up to the amount allocated for the free wager. If a user places a proxy wager for 100 and the market moves to where they have a value of 1000, they can choose to sell their shares to receive 900 credits and reimburse the proxy liquidity to be used in the future.
In other words, if you have 500 mana and 10 people, instead of injecting 500 mana of liquidity into a market that 10 people will be incentivized to wager their own funds on, we should give 50 mana in 'proxy wagers' to each person with the expectation that they will have to invest cognitive labor to earn a profit that they get to retain.
The two important principles here are that the liquidity injections (which any user can contribute) determine (1) the ceiling on the initial investment in a proposition and (2) the extent of the losses the liquidity will cover.
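A minimal accounting sketch of this proxy-wager scheme, using the figures from the surrounding text (`ProxyPool`, `place`, and `settle` are hypothetical names, not an existing Manifold API):

```python
# Sketch of the proxy-wager accounting described above: injected
# liquidity is dispersed evenly as free-bet budgets; a winning user
# keeps only the profit, and the proxied stake is reimbursed.
class ProxyPool:
    def __init__(self, amount, users):
        # Disperse the injected liquidity evenly across users.
        self.budget = {u: amount / len(users) for u in users}

    def place(self, user, stake):
        # A proxy wager can only draw on the user's remaining budget.
        if stake > self.budget[user]:
            raise ValueError("stake exceeds proxy allowance")
        self.budget[user] -= stake
        return stake

    def settle(self, user, stake, sale_value):
        # Winning: the proxied stake returns to the user's budget for
        # future use; the user retains only the profit above it.
        # Losing (sale_value <= stake): the budget stays spent.
        if sale_value > stake:
            self.budget[user] += stake
            return sale_value - stake
        return 0.0


# 500 mana across 10 people -> 50 mana of proxy wagers each.
pool = ProxyPool(500, [f"user{i}" for i in range(10)])
pool.place("user0", 50)                  # free wager of the full allowance
profit = pool.settle("user0", 50, 200)   # shares sold for 200: keep 150
```

This mirrors the 100-becomes-1000 example above: the user pockets the gain, and the proxy liquidity is restored for reuse.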
Overall, there are many aspects of this demonstration that would have to come together to provide a true approximation of what I'm trying to convince the open source community to build.
1. The system would have to function decentrally. This could happen if we truly treated the krantz mechanism as an abstract function that various competing markets compete to align. Much as there are different sports betting applications with different interfaces individuals can choose to use, the actual 'sports odds' are distributed across a network of competitive forecasters and are kept accurate by the market.
2. Each account would need to be humanity verified. This is the biggest hurdle to overcome. It is important to understand that this would not require restricting users from using bots. It would only restrict them to having one account that 'speaks on their behalf'. In other words, as long as we don't have individuals with multiple accounts, we can hold individuals accountable for the actions of their bots.
3. It would take an incredible investment of liquidity to see the scaling effects of such a market. Enough liquidity to incentivize everyone to evaluate/wager on the portions of their own Tractatus that you are willing to pay them to think about.
In general, the purpose of this demonstration is to show people a way to directly incentivize others to realize what they might want to think about by providing game theoretic market incentives that require them to wager on their beliefs.
I've written several other questions that demonstrate aspects of this project.
Examples of how we might use this process to align constitutions:
https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6
https://manifold.markets/Krantz/if-the-work-between-anthropic-and-t?r=S3JhbnR6
Wagers on whether a process like this is viable:
https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
https://manifold.markets/Krantz/this-is-a-solution-to-alignment?r=S3JhbnR6
A paraphrase of the process of recommending propositions to wager on:
https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6
https://manifold.markets/Krantz/polls-should-able-to-function-like?r=S3JhbnR6
https://manifold.markets/Krantz/define-a-function-that-converts-cap?r=S3JhbnR6
Attention mechanisms:
https://manifold.markets/Krantz/what-is-the-most-important-question?r=S3JhbnR6
https://manifold.markets/Krantz/what-topicsissueevent-should-the-ge?r=S3JhbnR6
https://manifold.markets/Krantz/what-person-should-have-the-most-co?r=S3JhbnR6
https://manifold.markets/Krantz/which-predictions-on-manifold-are-t?r=S3JhbnR6
Please consider finding an argument you think is important for people to understand and map it. I personally think it would be amazing if predominant philosophers could offer other philosophers they disagree with 'free bets' to wager on specific crux issues. It would be a function that turns capital into attention.
@manifold - Please consider implementing these changes. I'd love for folks that are capable of writing these sorts of arguments to earn a living by writing them while at the same time providing individuals with no capital the opportunity to earn something for paying attention to them.
This market will not resolve and is simply intended to serve as a demonstration of a mechanism for allowing philosophers to wager on what language ought to be, in a manner that uses the market to lead them to their cruxes.
*This demonstration does not include the algorithm I keep talking about. Its function is to evaluate the variances between user 'Tractati' (plural of Tractatus??) and recommend inferential trajectories that efficiently surface contradictions within the system.
My apologies if I've already echoed this sentiment elsewhere, but I think the most fatal flaws to this plan are:
1. The amount of money most people are willing to pay for others to understand their beliefs is vastly lower than the amount of money most people would need to receive to be incentivised to genuinely attempt to understand said beliefs.
2. Many people hold so many biases, fallacies, and delusions that even if infinitely incentivised to understand something, they will still fail and/or reach conclusions that totally conflict with others'.
To: @Krantz
I would like to formulate a proof. Can the Krantz mechanism withstand what I'm about to present?
To demonstrate the krantz mechanism, we make a symbolic approximation of the abstract idea, by writing down elements of the form ('Any formal proposition.', confidence=range(0-1), value=range(0-1)).
Let's make a list of propositions. We will call our first proposition N. We will call the second proposition N+1, then after that, N+2, and so on. Our list of propositions can even go in the negative direction too, N-1, N-2, and so on. Out to infinity in both directions -- infinitely many propositions.
Let's say that each of these propositions is definitionally determined to be true or false by my decree, and is broadcast to all of human civilization in a universal language, so it is universally understood, accepted, and verified by all. By default, they are all false, but I can change my mind whenever.
@Krantz Would you agree that the default state of each proposition can be mapped to 0, and whenever I decree a proposition is changed to true, it can be mapped to 1? It can then be mapped back to 0 if I decree it false once more? Each of those states could have a confidence level of 100%?
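If it helps to make the construction concrete, the decree list described above behaves like an unbounded map from integers to booleans; a sketch using Python's `defaultdict` for the 'default false' rule:

```python
# The commenter's construction: propositions indexed ..., N-1, N, N+1, ...
# each false by default until decreed true, and reversible at will.
from collections import defaultdict

tape = defaultdict(bool)  # proposition index -> decreed truth value

tape[3] = True    # decree proposition N+3 true (confidence 100%)
tape[-2] = True   # decrees extend in the negative direction too
tape[3] = False   # a decree can be reversed back to false
```

Any index never decreed simply reads back as False, matching the stated default.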
Additionally, can the Krantz mechanism map all beliefs somebody could have?
Edit: And any belief?
@Quroe based on his other market about most important questions, I think the idea is that the truth-value decree must be made by an aligned ASI, not the market creator
@Quroe You did not formulate a proof. If you want me to evaluate a proof you formulate, then formulate it properly by listing your propositions (along with their logical connective structure) into the mechanism, and I will point out why I think it is invalid by formulating a counterproof that leads to the denial of one of your propositions.
I think that would help you understand how this works.
@Krantz What I'm ultimately trying to build to is this.
I have been trying to read into your reasoning for how the Krantz mechanism works. You seem to state that it can map beliefs and pay people for doing so. It can also pay people for disproving beliefs.
The issue I'm seeing is that this seems like wishful thinking. Imagine you had an omnipotent genie that could grant any wish if you gave it specific directions on how to perform the wish. If you wished for world peace, and it asks you how to do that, and you shrug and say you don't know, you just know what the result looks like, then it cannot grant the wish.
From what I'm seeing, you have stated the shape of the Krantz mechanism, but not how it works.
———
I believe I can formulate a proof for why the Krantz mechanism cannot exist. You may have wished for a black box that contains the program for the halting machine, but it logically cannot exist. We need not know how the Krantz mechanism works, we just need to imagine a world where it does, and then we see if such a world is logically consistent.
There are 3 lines of attack I was trying to lay the groundwork for with my first few comments.
Can the Krantz mechanism map itself?
How does the Krantz mechanism handle recursivity?
Can you trust the Krantz mechanism?
I'd like to focus on the first two bullet points for now. The third requires a whole different approach that I can unravel at a later time if necessary.
———
What happens if we put the Krantz mechanism inside itself, like the halting machine?
Can the Krantz mechanism map the premise, "the Krantz mechanism cannot map this premise."? Could the Krantz mechanism properly reward anybody for proving or disproving that premise if it can be mapped?
I was also building the basis that the Krantz mechanism is Turing complete earlier. Let's construct a premise about this world where we have turned the Krantz mechanism into a Turing machine. Since it appears Turing complete, we could recursively build a whole Krantz mechanism on the Krantz mechanism! Given that setup, could we have the Krantz mechanism map the premise of "There exists a program to map this exact premise within the Krantz mechanism Turing machine." This seems to recursively build Krantz mechanisms within itself, never reaching the end of the proof. How would this result in the base level mapping of this premise? Can it be mapped? Can it be proven and disproven, paying out a reward to the person who can?
———
If the Krantz mechanism cannot handle these, I argue that it cannot exist.
For further exploration into what I'm trying to argue, it seems like the Krantz mechanism is falling into the same pitfalls here.
@Quroe You are making a category error by assuming krantz is an artificial intelligence and asking whether it is Turing complete. It is a collective intelligence. It is powered by humans. Very technically, it is a public list of propositions where individuals can express their confidence. It is a market.
Would you ask whether "the economy" is Turing complete?
That question seems absurd.
Why don't you just try using it?
@Krantz Ah, got it!
Then we must move to the final bullet point. Can we trust it?
Something I'm getting hung up on is whether people should be or are allowed to bet real-world money in a Krantz mechanism. Or are you only allowed to wager with credits the system grants you and redeem it for real-world money afterwards?
———
While we cannot verify the humanity of every account trading on this whimsical market, would this be an approximation of a Krantz mechanism?
While the impending nature of that market closing (instead of it being eternally open) may operationalize it somewhat differently than a pure Krantz mechanism, doesn't the discourse in the comments there prove that we cannot trust people to present what they truly believe? People seem to flip-flop with the market tides. Doesn't the market value of their opinion cause them to reconsider how they present themself?
For example, somebody with a huge pile of mana has immense and outsized force on the market. (Much like company stock ownership on a board of directors, you don't need to control 100% of a company to control how it acts, you just need 51% for that guarantee.)
If someone (or some collective entity) controlled a critical mass of all the world's wealth, then they could effectively trick the Krantz mechanism into saying whatever they wanted it to say, to whatever ends they need. Even if they don't act on it, the threat that they could looms.
———
If a Krantz mechanism market is eternally open, what incentive is there for people to present their beliefs truly? Can you explain how it is different than the Manifold Stock Market format? It seems like that format of market doesn't seem to work well.
———
The reason I have not attempted to use the Krantz mechanism demonstration yet is because I don't trust that I can redeem the value of my input or mental effort for any worthwhile output. In fact, it may be swept out from beneath me if a hostile actor wants to declare that I'm wrong on a whim. Say I place a YES bet of some size, and somebody else places a NO bet in response. If I decide to correct them by counteracting their trades, my original YES bet has not gained any value unless I decide to set the market higher towards YES than I originally traded for.
Therefore, with that game theory in mind, if I'm deciding on whether or not to bet now I can see that I should have no faith that any wager I place now is worth anything unless I have faith that I can perform a pump-and-dump.
———
Let's say I have the proposition "1+1=2", and I want to spend an absurd amount of money to put that proposition in doubt. I want to purchase "truth".
If I have an overwhelming pile of mana, and I arbitrarily decide that proposition should be 50%, I can just buy shares to 50%, countering any market move anybody else makes. If I can do this indefinitely until everybody else runs out of mana (or faith that their bets mean anything), I have decided what truth is just by being rich, and that seems absurd. Therefore, I should have no confidence in the system.
@Quroe You seem to think that at some point some magical thing happens that transforms krantz from a table of normative opinions into an objective ledger of truth. That doesn't happen. Why do you think that happens? Why do you think that needs to happen?
You probably don't understand how that process works because it is not a process that happens.
It feels like you are viewing this as an adversarial process that requires people to bet against each other over who will be determined to be objectively correct, instead of a communication platform that allows individuals to collaborate on philosophy.
@Krantz So if I'm understanding correctly now, can I boil it down to being a survey with extra steps?
@Quroe it is a survey where people submit their confidence and value for every proposition that can be formed, for the purposes of aligning a decentralized constitution of truth.
Do you think having that information documented would be helpful?
I know that it was helpful for me (studying analytic philosophy) to know what propositions the other philosophers were confident in and thought were important to think about.
Now that I have a better understanding of what you are trying to articulate, then yes!
I think I've seen things close to this in the wild. It sounds like those lists of questions that candidates running for election answer that tell you where they stand on issues, except in your system, they would answer on a scale of 0 to 100 instead of a binary yes or no.
I think that's useful for the average person to quickly grasp the shape of what somebody believes. Even if it loses out on some of the resolution of context due to the potential brevity of each proposition, it gets the conversation started and would help people communicate more easily.
@Krantz how is this list of everyone's confidences and value appraisals of various statements in any way part of the same system as the whole paying people to understand stuff thing? They seem like separate ideas, one consisting of a (possibly infinitely) huge poll, and the other of prediction markets with free share vouchers
So basically, this seems to me like "prediction markets where everyone makes markets about every single belief they have" and "Market operators can give out free bets for their prediction market, e.g. giving you 50 mana which you can only use to bet on their market"
I'm not sure how valuable this is.
Value metrics collected here:
https://manifold.markets/Krantz/what-is-the-most-valuable-propositi?r=S3JhbnR6
From what I'm reading, it seems like you want a system that can bias users' attention towards proving that something is true or false, beyond what traditional market forces already do.
If I'm understanding how Manifold works correctly, I think this can be done, but I haven't actually done this yet, so take what I say lightly as I might be wrong.
---
Say, for some market, we want people to be more incentivised to prove it true.
1. Make a market at base minimum liquidity for "Is premise P true?" and leave it temporarily unlisted during the setup process.
2. Place an initial NO bet of some amount of shares S to inject a bias into the market.
3. Add some amount of liquidity L as a subsidy.
4. Purchase the same amount S of YES shares to reverse the initial biasing bet's volume.
5. Go back to the liquidity page for the market and withdraw the L liquidity back out.
6. Re-list the market so it's open to the public again.
---
If I understand the mechanics correctly, this allows you to inject a biased subsidy into a market and make people 'want' to prove one side more, because there's more reward on that side of the equation. In effect, all this should do is change the starting point of the market to whatever percent you want instead of the default 50%, without forcing yourself to trade at a loss.
And again, if I am wrong about all of this, somebody please point it out.
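If I have the mechanics right, the effect of those steps can be illustrated with a toy constant-product market maker. This is a sketch under my own assumptions, not Manifold's actual Maniswap formula; `CPMM`, `bet`, and `add_liquidity` are illustrative names. It shows the two primitives the steps combine: a directional bet moves the price, while a liquidity subsidy deepens the pool without moving it.

```python
# Toy constant-product AMM (invariant: yes * no = k). Not Manifold's
# exact pricing model; just enough to show how the steps interact.
class CPMM:
    def __init__(self, yes=100.0, no=100.0):
        self.yes, self.no = yes, no  # share pools held by the market

    def prob(self):
        # YES probability rises as the YES pool is bought down.
        return self.no / (self.yes + self.no)

    def bet(self, mana, outcome):
        # Deposit mana into both pools, then withdraw shares of the
        # chosen outcome to restore the invariant yes * no = k.
        k = self.yes * self.no
        self.yes += mana
        self.no += mana
        if outcome == "YES":
            shares = self.yes - k / self.no
            self.yes -= shares
        else:
            shares = self.no - k / self.yes
            self.no -= shares
        return shares  # shares the bettor receives

    def add_liquidity(self, mana):
        # Scale both pools proportionally: price unchanged, but later
        # trades move the market less (the subsidy effect).
        scale = 1 + mana / (self.yes + self.no)
        self.yes *= scale
        self.no *= scale


m = CPMM()
m.bet(50, "NO")         # biasing bet: probability falls below 50%
p_after_bet = m.prob()
m.add_liquidity(100)    # subsidy: probability unchanged, depth added
p_after_liq = m.prob()
```

The full sequence above (reverse YES bet, then liquidity withdrawal) would be layered out of these same two operations.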
If, instead, you're trying to make a system that shows a relative weight for how strongly people believe in something that cannot be proven either true or false, but is instead an ethical or moral code, I suppose Manifold has the eternal "stock" market system, but I don't think that the incentives there are well implemented at the moment.
I guess the closest thing we have to that are viewership/listenership ratings for media like podcasts as we can infer that people consume content they agree with more than content they disagree with, so therefore viewership ratings are a proxy for how much a moral/ethical worldview is accepted.
@Quroe arguably would Metaculus' system of giving credences rather than bets work for the thing you mention in your second comment? A substitute without giving the weighted average for us might be a Manifold poll asking for credences
@Quroe I believe you are underestimating the scope of my aims. I'm not trying to build an app for a small group of nerds to prove things. I'm trying to teach people like Elon Musk and @RobinHanson a completely different way for 8 billion people to communicate at scale. It's about creating a new mode of futarchy. I appreciate the attention though.
We need innovators in the world; I'm not going to shoot the idea out of the sky. That being said, it takes a starting set of people to adopt a technology to make it widespread and get it off of the ground. Evolution does not leap. Start small with achievable (and concrete) goals, and then build step by step.
@Quroe yeah, I'm not aware of any technology that was ever immediately adopted by the entire global population at once, even wildly successful stuff like chatgpt still isn't used by the whole world even multiple years later