
A "truth economy" is a free and open network-state decision-market economy in which humanity-verified users wager on factual statements, intentionally producing the market data needed to create a decentralized source of truth (controlled by competing prediction markets) for aligning "non-owned" AI: AI that behaves as the market determines it ought to and serves the general public's interest.
We will use this process to secure the only job we don't want AI to do for us (deciding what the future should look like) by finding out what the AI has wrong about the Universe and arguing with it until one of you learns something. If you learn, you earn a little. If it learns, you earn much more. We argue with it about how the process should explicitly function, but the process requires everyone to produce their own private constitution (a list of propositions with corresponding confidences and values) that is used to align the general constitution, which in turn serves as the target for aligning AI. Private constitutions earn rewards based on how well they identify the variance between the current state of the general constitution and its ideal state.
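The structure described above can be sketched in code. This is a minimal illustration, not anything specified by the creator: the `Proposition` fields and the `variance_reward` rule (credit proportional to how much closer a private confidence is to the ideal than the current consensus is) are hypothetical assumptions about how such a mechanism might work.

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    """One line of a private constitution: a claim, a confidence, a stake."""
    claim: str
    confidence: float  # subjective probability in [0, 1]
    value: float       # how much the holder stakes on it, in market units

def variance_reward(private: dict[str, float],
                    current: dict[str, float],
                    ideal: dict[str, float]) -> float:
    """Hypothetical reward rule: a private constitution earns credit for
    each claim where its confidence sits closer to the ideal general
    constitution than the current general constitution does."""
    reward = 0.0
    for claim, p in private.items():
        if claim in current and claim in ideal:
            reward += abs(current[claim] - ideal[claim]) - abs(p - ideal[claim])
    return reward
```

Under this toy rule, a user who is confident in a true claim the consensus underrates earns a positive reward, and a user who merely echoes the current consensus earns nothing.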
It's a living document, embedded in the market.
It will serve as a function that allows individuals to earn a living without the need for capital, simply by proving they are educated on the points of fact that are valuable to society.
Will we do that before AI kills us all? Please.
Here's an example: https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6
You should use this to align yourself to the rest of humanity.
It will be resolved in the affirmative when all economists agree that the only job left is producing data intentionally used to align an AI nobody really owns; all mathematicians can prove the system is safe because the "AI" isn't a real AI (neural nets) learning how to live well "for us" but a GOFAI that simply enables us to communicate with each other; and all historians agree this was the principle that ensured AI became a decentralized public service, with individuals globally prohibited from owning the rights to control the fundamental constitution that governs its behavior.
It will resolve in the negative if someone else also comes up with an interpretable solution to alignment.
Update 2024-12-23 (PST): The requirement for '50% of global GDP' refers to the combined economic output of all prediction markets/futarchy systems collectively, not just a single prediction market. (AI summary of creator comment)
@ChurlishGambit N/A-ing a market claws back all transactions. If you had a profit or loss (either realized or unrealized) on this market, it was removed.
@bens I hate that. This was made N/A because the user deleted their account. I don't understand why my smart trading was unraveled because they didn't want to be here anymore.
@ChurlishGambit that’s how N/A-ing markets work. Given that this was a subjective market with an indeterminate end date where the creator is now unable to resolve it, that was the correct choice of resolution.
@bens Why doesn't it just refund outstanding shares & leave prior transactions alone? When a company goes bankrupt, the NYSE doesn't come take the money back from the shares I sold three months ago.
@ChurlishGambit there are several reasons for this, but it creates bad incentives to have the N/A resolution work as you described. It also means that if there’s a possibility of a future N/A resolution, people would be unable to buy/sell shares early, as they’d worry that they’d be losing out on the rollback of trades.
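The rollback being discussed can be illustrated with a toy ledger. This is a hypothetical sketch, not Manifold's actual implementation: it shows why reversing *every* transaction (rather than only refunding outstanding shares) returns every participant's balance exactly to its pre-market state, including profits realized months earlier.

```python
from collections import defaultdict

def settle_na(trades):
    """Each trade is (buyer, seller, mana). An N/A resolution, modeled
    here as a full rollback, reverses every transaction, so all net
    balance changes from the market cancel out."""
    balances = defaultdict(float)
    for buyer, seller, mana in trades:   # apply the trades as they occurred
        balances[buyer] -= mana
        balances[seller] += mana
    for buyer, seller, mana in trades:   # N/A: claw every trade back
        balances[buyer] += mana
        balances[seller] -= mana
    return dict(balances)
```

A refund-only scheme would instead leave the middle legs of a buy-then-sell sequence untouched, which is exactly the realized profit the rollback removes.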
@ChurlishGambit in any case, the way that the N/A resolution works is described in the Manifold FAQ and documentation.
@bens "It's in the FAQ!" is not really good CS comms. The issue is: A user has experienced an unexpected result, because of a self-described "unilateral" moderator decision. The experience was, profit from MONTHS prior was unwound, because another user deleted their account.
Essentially, due to TWO decisions out of a user's control, a long-past positive event was adversely undone.
I used the website correctly, played the game as expected, & am now being punished for that play because of moderator decisions.
The LAST thing you want to respond with, in a CS issue like this one, is "Well, it's in the FAQ." That's the equivalent of telling a user to fuck off. It doesn't resolve the issue, & compounds negative sentiment.
@ChurlishGambit you bet on a subjective market with no clear resolution criteria from a creator that was clearly suffering from AI psychosis and was then banned. I don't think you can reasonably complain that the market was N/A-ed, to be quite honest. It's not like this market can be clearly resolved one way or another. AI foom has not happened, so it's not possible to determine whether a "truth economy that produces 50% of global GDP" is "critical to survival", lol.
You're welcome to submit a moderator ticket if you feel that this resolution was unfair and I'll flag it for admin review.
@bens I submitted a ticket, hopefully for review by someone who isn't so hostile. The issue isn't "this market shouldn't have had a yes/no decision," it's that "revoking profit months later is not a good resolution."
@ChurlishGambit well, if you make a profit, someone else has to lose, and if this market does not have a proper resolution, why should they lose mana?
@ChurlishGambit when a market is ended, even the market liquidity is returned to the creator, but in this case, if you made a profit on the AMM, I think your profit should be returned because the market creator is banned. Anyway, I'll refund it. How much is it?
@bens I'm going to unilaterally resolve this N/A and that has no bearing on any other markets by this deleted creator.
How should I bet if I strongly believe this won’t happen? (eg no prediction market will exceed 50% of global GDP), but have no opinion on whether it’s necessary for AI?
If positive resolution requires all economists and mathematicians and historians to be in unanimous agreement about something, this market will never resolve. A single dissenting economist would be sufficient to prevent a YES resolution?
@KimberlyWilberLIgt It's a paraphrase of, 'will most economists realize that getting paid to verify whether information is true will be game theoretically the only job AI can't take from us before AI takes over all the jobs?'
Also, 'will humans retain authority over the truth in the development of AI?'
If none of that makes sense to you AND you consider yourself an economist, I'd probably vote no. You'll win if somehow the world is ever ok with passing over control to ASI AND we haven't built a school where everyone can make money by getting an education, watching the news and fact checking each other's philosophy and history. I'm sure ASI will resolve this accordingly.
@Krantz Also, it's not necessary for 'a single prediction market' to produce 50% gdp. I'm talking about a whole new facet of the economy. Futarchy.
The meaning of this is:
Before AI develops extensively, establish a mechanism for objectively and impartially evaluating the world within more than 50% of AI systems.
However, addressing this issue may not necessarily rely on "markets" but rather on a system of rewards and punishments.
Yet, it seems that ultimately, everything still leads back to "money." Human economies are the same, but they are built upon the labor market.
Does AI need "money" to be driven?
Another question is, it seems that there is no absolute truth in the world; only "unchangeability" is the truth. All rules are temporary.
I have been building an open source prediction market, perhaps this could be used as the engine for what you are describing one day. https://github.com/openpredictionmarkets/socialpredict
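I don't know socialpredict's internals, but for anyone curious what a prediction-market "engine" involves, here is a generic sketch of the logarithmic market scoring rule (LMSR), a standard automated market maker for binary prediction markets. The liquidity parameter `b` and function names are illustrative, not taken from that repo.

```python
import math

def lmsr_cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def buy_yes_cost(q_yes: float, q_no: float, shares: float, b: float = 100.0) -> float:
    """Cost of buying `shares` YES shares: the change in the cost function."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)

def yes_probability(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Instantaneous market probability of YES (the cost function's gradient)."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))
```

The price of a YES share rises smoothly toward 1 as more YES shares are bought, which is what lets an AMM quote a probability with no counterparty on the other side of each trade.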
I believe that AI progress will be slow and steady, with exponential advances in computing power required for sublinear increases in intelligence - as has been the case up to now. In such a world, there won't be one AI, nor will any one human be able to consolidate power.
I still haven't seen much evidence that the "foom" scenario is likely, and as time goes on, the evidence seems to point more and more to the opposite. People like Yudkowsky are risking catastrophe with their dangerous overreactions that are probably worse than the actual problem.
Therefore, the resolution will be NO.
My worries don't require foom. Nor any significant gain in intelligence. Quite the contrary.
Also not very worried about bad actors consolidating power.
They won't be able to control them either.
https://x.com/therealkrantz/status/1826658911430070343?t=_35awfrqUqHU4fJuBAFNzQ&s=19