This is an experimental market where you can add a belief you hold as an answer, and users can bet on whether anyone on the site can convince you otherwise, in the comments of this market or in Manifold DMs. (You can also add answers about other users, but I'd make sure they're willing to participate first.) It's meant to be a rough equivalent of /r/CMV. The idea is for users to scope out others' beliefs, even if they don't particularly intend to debate them, and bet accordingly, creating a rough ordering of which users and subjects are likely to be easier or harder to change minds on.
To resolve an answer YES, simply comment or DM me that you have been convinced and who convinced you. If you naturally change your mind without talking about it with anyone on the site, I will resolve the answer NO. Meta answers about the market itself, like "Two debate answers added by separate users in this market resolve YES", are allowed. I think it's ideal if users do not bet on their own answers, since that adds perverse incentives, but I won't try to stop people from doing so.
The market close date will not change, and all remaining open answers resolve NO at that time.
@dglid The main thing that bothers me about the pivot is that the removal of loans created tons of volatility in long-term markets as people liquidated positions. Manifold used to be excellent for predicting things years in the future; good predictors now have little incentive to participate in these markets, since you can still earn points in short-term markets.
@SaviorofPlant Agree, it's likely the pivot will make long-term markets less active. However, a potential future is that the pivot attracts so many new users that there's just as much activity as there is now. If not, then I'd argue that maybe that's a good reason for Metaculus to stick around - Manifold and Metaculus can serve truly different parts of the market rather than competing as they more or less do now.
My voting philosophy is similar to Scott Alexander's - I vote for Democrats by default, but I'm occasionally willing to vote for a Republican as a protest if they seem well qualified and reasonable.
I can't remember for certain, but I believe I voted for exactly one Republican in 2022. I haven't checked any details yet about the candidates running this year.
@TimothyJohnson5c16 I should add, I won't bet on this, but my initial prior is that it's a little more likely than not - somewhere around 60%.
@TimothyJohnson5c16 There don't seem to be many statewide elections this year: https://ballotpedia.org/California_elections,_2024
What is your opinion on Adam Schiff?
@SaviorofPlant I know basically nothing about Adam Schiff. I voted for Katie Porter in the primary - I don't always agree with her, but the way she uses her whiteboard seems to bring something valuable to discussions that are often lacking in hard data.
@TimothyJohnson5c16 I'm gonna be honest, I don't think I'm going to be able to convince you to vote for Steve Garvey. If only there were a gubernatorial race this year...
@shankypanky As in P(Afterlife) > 0.5? If so, no. But for the sake of the hypothetical, assume the existence of an afterlife were undeniably demonstrated to me before I make my choice.
@Bayesian That seems fairly straightforward, but in order for me to accept that, I would first need to accept that the concept of infinite utility is even meaningful, which I'm not yet convinced of.
@NBAP if you care some amount about each bounded stretch of time, and that caring doesn't decrease at a rate sufficient for the sum to converge over an infinite amount of time, then you necessarily care an infinite amount about an infinite amount of time
@NBAP lol good point. yeah ig i should have said converge over an infinite amount of time. but yeah it's possible that it does, and for any epsilon of caring you can go far enough in the future that you don't care that amount about some bounded amount of time or wtv
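The convergence condition being debated in the last few comments can be sketched with a standard discounting argument (my formalization for illustration, not either commenter's exact claim): with a fixed per-period utility $u$ and discount factor $\gamma$ on how much you care about each successive period, the total over infinite time is finite only if the weights shrink fast enough to be summable.

```latex
% Geometric discounting: converges for 0 <= gamma < 1
\sum_{t=0}^{\infty} \gamma^{t} u \;=\; \frac{u}{1-\gamma} \;<\; \infty

% No discounting (gamma = 1) diverges, and so does even harmonic decay:
\sum_{t=0}^{\infty} u \;=\; \infty, \qquad \sum_{t=1}^{\infty} \frac{u}{t} \;=\; \infty
```

So the "infinite caring" conclusion holds unless per-period caring decays at least as fast as some summable series; the limbo scenario NBAP describes below is precisely the summable case, where an eternity adds up to a finite disutility.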
@Bayesian It seems to me to be possible, in principle, that the quality of being in limbo might be such that an eternity of limbo converges to some finite disutility which might be lesser than the utility of some very idyllic lifetime on Earth. I'm not convinced that this is true, to be clear (hence why I would need to be convinced), but it does seem to me to be possible in principle.
@NBAP This seems plausible to me. You'd go crazy for a while, but eventually your brain would adapt. I've heard (questionable) stories about how people in solitary confinement long enough start to hallucinate, and I can only imagine what would happen after 10,000 years.
I would expect your memories of life to slowly be replaced by memories entirely generated by your brain after a certain point. The description that makes the most sense would be something like an endless dream, although enough time with no stimulus might eventually destroy the ability of your brain to have any thoughts.
@SaviorofPlant I’m not sure I consider this a particularly compelling approach. I regard the prospect of losing all of my memories from my lifetime on Earth (including of the earthly rewards being traded for) as a significant disutility in its own right.
@NBAP Compared to the alternative, you are spending like a hundred times longer with those memories. There's much more time to appreciate them if that's what you value.
@SaviorofPlant Conversely, losing memories over a prolonged period might very plausibly be a much greater disutility than losing them all abruptly (as in the case of a normal death).