I will resolve this based on some combination of how much it gets talked about in elections, how much money goes to interest groups on both topics, and how much of the "political conversation" seems to be about either.
Analysis:
Scenarios leaning towards a No resolution:
Capabilities rapidly plateau; maybe some AI companies overextend and go bankrupt, leading to less promotion and more use of (maybe weaker, maybe more expensive) open-source models; various obstruction efforts successfully slow adoption enough that in 2028 (less than three years from the time of writing) things still haven't exploded in everyone's faces. AI's rise as a political issue does not accelerate.
AI is de-powered and becomes a Resolved Issue.
PauseAI succeeds, an international treaty is adopted with strict limits on anything more powerful than ~GPT4.
AI rights advocates win, ban AI slavery/brainwashing/execution, eliminating practical uses.
Global catastrophe eliminates or radically reduces technological growth and/or compute-related manufacturing (but still leaves enough tech for Manifold to continue functioning).
Scenarios where resolution is irrelevant:
ASI happens.
ASI is imminent but isn't the global emergency of everyone's focus. The absence of global pause efforts results in certain apocalypse.
Other annihilation of existing technological civilization.
Scenarios leaning towards a Yes resolution:
Imminent X-risk (or other issue pointing towards Pause) becomes obvious to the public, is Not Solved, and becomes the global emergency of everyone's focus.
AI is Resolved, but in a manner leading to longer-term controversy about the details of its resolution.
Capabilities rapidly plateau, but adoption doesn't slow. "Mundane" AI concerns (eg unemployment) overtake the political landscape.
Overall, I'd say the current 60-65% is about right, maybe a bit high.
@MaybeNotDepends if you're at 10%, then you should buy a lot of NO at the current price. A 137% expected profit (= 90ct/38ct - 100%) over three years is nothing to scoff at. I say this partly because then I'd be able to buy quite a lot of YES, and partly because I want to change your mind by pressuring you a bit to back your words with money.
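The arithmetic above can be checked with a tiny sketch (the 38ct NO price and the 90ct believed value are taken from the comment; the assumption that a winning binary share pays out a full 100ct is mine, hypothetical for illustration):

```python
# Expected profit on a binary-market NO position, as a fraction of cost.
# From the comment: NO shares cost ~38 cents, and a bettor who puts the
# YES probability at 10% values each NO share at 90 cents in expectation
# (assuming a winning share pays out 100 cents).
def expected_profit(price_cents: float, believed_value_cents: float) -> float:
    """Return expected profit as a fraction of the purchase price."""
    return believed_value_cents / price_cents - 1.0

profit = expected_profit(38, 90)
print(f"{profit:.0%}")  # prints "137%"
```

Note this is an expected return conditional on the bettor's own 10% estimate being right, not a guaranteed payoff, and it is not annualized over the three-year horizon.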
@Joern You don't think there's something dishonorable about betting in a market that you are worried will resolve incorrectly due to bias?
@RiverBellamy There's nothing dishonorable about doing that - many markets could resolve incorrectly due to bias, just with differing odds of misresolving. And all market participants are on even footing here - I don't think anybody has any insider knowledge about Scott planning to misresolve this market or something like that.
@RiverBellamy @loops Afaict even if Scott were to resolve with bias, he doesn't make more money if more people bet on the market. Maybe he gets more attention/can push some agenda more strongly if the market is bigger, but that effect seems rather small given how big the market already is & how little effect that had.
@loops I think you and I have different notions of honor then. People bet in markets based on the promise that the market will resolve accurately. Betting in a market when there is a serious risk it will resolve inaccurately due to bias is either trying to take advantage of the market maker breaking their promise to steal someone else's mana, or risking someone else exploiting the market maker's broken promise to steal your mana. Knowingly participating in such a transaction seems dishonorable to me, even if it is positive EV.
For the record, I'm not expressing any worry myself that this market will resolve incorrectly due to bias. I am just objecting to Joern's suggestion that MaybeNotDepends ought to bet in this market when MaybeNotDepends has expressed such a worry.
@RiverBellamy If someone came to this market to bet that AI won't eclipse abortion, a price reflecting P(AI is a bigger issue OR Scott misresolves) would be fairer to them; the misresolution risk would be included, so they would get more shares/mana, even if they didn't consider the risk themselves. In the event of a misresolution, Yes bettors are getting an unfair payout anyway, so it's more honorable that they would be getting less.
So you only really have to worry about the unfair price for Yes bettors in the world where the market is resolved correctly. Is it dishonorable to make them pay more for their shares because you believe the market has a high likelihood of misresolving in their favor, given that that belief will prove false? I have no clue, but both of MaybeNotDepends' numbers are much better prices for Yes bettors anyway.
@Frogswap I don't think it's an issue of what prices shares were purchased at; people can see that when they buy and make their own decisions. The scenario where a moral wrong occurs is where the true answer is No, where Scott on the resolution date ought to know that the answer is No, and where he nevertheless resolves the market as Yes. This is the scenario that MaybeNotDepends puts 10% probability on. The No bettors who did not see the risk of a biased resolution are genuine victims here - they have had mana stolen from them.
In that scenario, Scott has broken his promise to all the No bettors and is morally culpable for that broken promise. Further, the Yes bettors who saw the risk of a biased resolution and took advantage of it have essentially stolen whatever mana they are paid out, and are therefore morally culpable for that. Sure, it's less bad than if they had gotten a better return, but it's still bad. The fact that a worse bad might have occurred in some counterfactual universe doesn't negate the badness of the real thing happening. Finally, the No bettors who saw the risk of a biased resolution are in some sense victims - they have had mana stolen from them. But they were also stupid - they knowingly put themselves in a position to have mana stolen from them. And by doing that, they knowingly incentivized the behavior of the culpable Yes bettors, and that also makes them culpable by extension. You shouldn't want to be in any of these positions. And if you believe, as MaybeNotDepends does, that there is a serious risk of this scenario occurring, then the only way to reliably avoid culpability is to not participate in the market at all.
@JoshuaPhillipsLivingReaso if not for AI, would there still be such trade restrictions? Probably, mostly.
Money isn't a great measure. Otherwise ethanol and some other things would be "large issues," when in fact they aren't. There are political issues that are large for rich donors, and those that are large for the general public. Tax cuts for the rich (and capital gains tax rates) are a great example of this - a lot of money goes into reducing taxes on the rich, but you hear very little about it in political debates and news articles.
I'm ready to begin taking a larger YES position on this market in the coming days. I think there is good evidence that AI will begin to become competitive with large chunks of human labor in the next two years, even on relatively timid timelines in my distribution, and I think conditional on this, its political salience will rise a lot.
My view of the YES case:
Hard money specifically related to AI may have a much higher ceiling: huge megacorps, overlaps with so many relevant parties.
AI intersects with national security, jobs, and education in a very pervasive way.
Both parties have incentives to try to improve their gender gap, which I think means less focus on abortion.
I've opened a small YES position, which I think I will continue to build on if the following thesis seems more correct by the day.
Abortion as a political issue is dead. The courts sent it to the states, and Trump has made clear there will be no federal action either way. It galvanized no one in the recent election, and I assume the states will just slowly become more pro-abortion without much fuss. The GOP gains nothing from talking about it anymore, and Dems don't seem to have a strong case anymore, since the answer now is 1) go to a blue state, or 2) change your state's laws. This is no longer a national issue.
I don't think AI will be a big political issue, but abortion is (or will be) the wrong bar.