Here's how this works. I'm trying to figure out what Manifold norms & consensus are on various behaviors.
A big reason I create markets is that nobody has created them yet, and I want to make bets on those conditions! But sometimes it's tricky to specify super objective resolution criteria from the get-go, and sometimes you can't really escape subjective resolution criteria, but you CAN be honest and up front about it.
So this leaves me in a position where I want to know under what circumstances Manifolders generally consider it okay for a market creator like me to make big bets in a market I myself created, provided it has subjective resolution criteria.
This is the context that drove that discussion -- some folks got confused about the resolution criteria and objected to some bets I had made (I sold my shares at a loss to null out my position in that market):
https://manifold.markets/LarsDoucet/will-the-leftright-culture-war-come#EIiy8w83UsqQfKbdYpCo
Resolution criteria for THIS market:
I won't bet on this market
I will resolve it -- subjectively! -- based on the most persuasive arguments I hear in the comments
I will also take into account the voted probability, but just as weak additional evidence, so don't try to mechanistically manipulate the market resolution outcome on that basis
I'll resolve as soon as I'm convinced or within a week from now
If you have an opinion you'd like to register, note that I take persuasive comments as much stronger evidence than just betting up/down. Betting up/down is essentially a bet about what you think I'll be convinced by and will only minimally influence my resolution.
Based on the comments I've received, I'm convinced that the norms of the Manifold community are that it's generally okay for a market maker to bet on their own market even if the criteria are subjective. A lot of good discussion was had, and this has given me some insights into smart things to do and guidelines for how to behave to foster trust.
Resolving YES.
Just wanted to say that I don't think people in the culture war market were objecting to you making a huge bet by itself. If you just made a huge bet, I would have been cool with that.
The problem was more that after you made the huge bet, you made this comment under a comment describing right-wing commentators hating ChatGPT for mouthing liberal pieties:
"Strongly agree. We'll let this ride out a news cycle or two and see how it shakes out. Even if it was just a temporary blip, if that blip has all the features we're looking for, I'll resolve this YES."
This comment was somewhat vague, but I and others were able to read it in such a way that you were saying that, solely on the basis that the right-wing culture warriors were against AI, that would be sufficient to resolve the bet. This would be an improper resolution because the resolution dictates: "This market resolves YES, if, in my sole opinion, it seems abundantly clear that support for "AI"... becomes politically polarized along left/right lines."
It's not enough for the right wing to be anti-AI, the left-wing has to also be pro-AI. Now, I think you probably realized this and that's what you meant by "if that blip has all the features we're looking for". However, that part of your comment is kinda undercut by your "Strongly agree, ride out a news cycle or two, I'll resolve this YES." That part of your comment seemed to indicate that you were amenable to resolving this YES, which I and other commenters consider incorrect -- because right now, the left is not sufficiently pro-AI to resolve YES.
The left is currently either neutral or hostile to AI, just for different reasons than the right. The left and the right being polarized as to the reasons they don't like something is not enough to resolve the market. Your market criteria made clear that one side had to be pro and one side had to be anti, and this is not currently the case.
I think you recognize this, but your comment was a little too vague. It sounded a bit like you were prepared to incorrectly resolve the market. In my estimation, it wasn't your bet that caused the criticism, it was the comment after the bet that was, rightly or wrongly, interpreted as you forgetting a key part of the market criteria. Hope this helps!
@ForrestTaylor It does help and is about what I figured! Learning stuff about norms and expectations and how to communicate
@Odoacre For myself personally, as an author with a lot of mana and a lot of experience resolving tricky questions, I feel confident in my ability to resolve without bias from my own trades. I am certainly not the average author though. But for me, a lot of the markets I make are ones I myself am interested in predicting, and if I couldn't predict them that would be a noticeable downside.
Also subjectivity vs objectivity is a sliding scale. There are lots of questions that are purely subjective, like this one, but also many that have mostly objective criteria but with the author having to adjudicate what the criteria actually mean and whether what happened matches them (similar to how laws need to be interpreted by courts).
@jack but as a potential bettor I obviously don't know if you are unbiased or not. This results in potential bettors self selecting and the market becomes a self fulfilling prophecy. It does not take much for this to happen, if even only a small fraction of potential bettors are scared off, the market becomes unbalanced.
@Odoacre I don't think that's true, you only need a small amount of "smart money" for the prediction markets to get to a reasonable price
@jack The fact the author has a stake will be priced in, as it's always riskier on average to bet against the author. Do you disagree with that? The market will tend to reflect a model of the author's beliefs. If the market is accurate, it successfully predicted something you, as the market author, already knew.
@Odoacre Yes, you are predicting the author's beliefs, but predicting their future beliefs not their current ones.
> something you, as the market author, already knew.
Most subjective questions aren't about something the author already knew. They might be something like "Will China launch an invasion of Taiwan in 2023?" - what counts as an invasion is subjective, but that subjectivity will be the author deciding whether the future events fit the word "invasion".
@Odoacre I think it depends a lot on the author in question. For instance, my biggest losses have come from markets I created and put large bets on -- and then was wrong about. I resolved them against myself and just took the loss, because my long-term reputation matters more to me than mana.
> what counts as an invasion is subjective, but that subjectivity will be the author deciding whether the future events fit the word "invasion".
As the author, you are in the best position to know what "invasion" means to you. Everyone else has to read your mind.
By betting in the market, you give away information about how you are likely to interpret the wording, and also you create an incentive for yourself to change your future interpretation in a way that suits you.
For both those reasons, as a bettor, I am incentivized to weight my predicted outcome in the direction of whatever you are betting for.
If you created the market to find out what other people think about something, you are doing yourself a disservice by betting in it, as this will skew the results.
If you created the market so you could enjoy betting in it, you have created an environment where you enjoy a massive advantage over everyone else, which should disincentivize people from taking part, especially if they disagree with you.
In my mind, in both cases you are doing yourself a disservice by betting in your own market.
@LarsDoucet I am talking about the average case of course, the case where as a bettor you don't know the author very well or the author is just unknown.
@Odoacre I completely agree that there are some downsides of author trading, but I am making an argument that they aren't always bad enough to disallow it. The start of this thread was a claim that subjective questions were just echo chambers, I disagree with that. I think they still reflect a substantial amount of actual prediction about the facts on the ground, in addition to a substantial amount of prediction about author interpretation.
I do think most authors probably would be better off not predicting in their own markets if a) they are quite subjective and b) not being able to predict in their markets doesn't dissuade them from making them. I don't know how big effect b is, but it seems important to many authors (including Lars based on his statement in the market description).
Also, I continue to advocate for a different and IMO much better solution to this thorny problem: let the resolution be decided by a trusted third party if it's ambiguous! I would rather move away from the model of the author having full control over resolutions, towards one where authors resolve most questions but in cases of ambiguity/dispute the final authority rests with a resolution council of some sort. /jack/will-it-be-possible-to-dispute-and I think that's a better solution to the issues discussed here.
@jack I would definitely make fewer markets if I couldn’t bet in them - otherwise I have to get other people to make bets for weird stuff I care about! And I would submit my own track record of resolving markets against my own bets for some rather big personal losses.
That said, I think I’ve learned some useful ground rules through this discussion about how to set expectations correctly and help people manage risk.
@jack That’s interesting! Ultimately Manifold is a social experiment in trust. It’s trust all the way down. Third party resolution is an interesting new piece of social tech in that regard.
@LarsDoucet It's not really that innovative at all haha! Most prediction markets are resolved by the admins, and this is relatively similar to that -- note I specified trusted third party, not just any random third party. I guess the main new idea I've added is that for Manifold it would likely be better as a community-driven thing, i.e. trusted authors, not the Manifold team itself.
@jack I am not saying it should be disallowed, we agree completely on that. I stand by my echo chamber idea though, and you seem to agree with that as well, just to a lesser degree.
I don't personally like the idea of an official resolution committee controlled by manifold markets the organization, there are other solutions to the problem though and you can do those things right now
you could just ask someone else to make the market
you could make the market yourself but precommit at the outset to honour the resolution by some trusted third party who agrees not to bet; the choice of the third party can even be delayed to some later time, or perhaps be offloaded to a secondary market.
@Odoacre Oh, to clarify I think that if in fact it is better for authors not to trade in their own markets, there should be an actual Manifold feature that controls it, we shouldn't rely on authors following community best practices, because you can't expect authors to know what those are! Which is why I asked this question: https://manifold.markets/jack/would-it-be-a-net-benefit-if-for-mo
The resolution committee, at least as I have proposed it, wouldn't be controlled by Manifold the organization, it would be community-driven. A couple authors have in fact done similar things on their own initiative, and I think it has worked well!
I'm not a fan of the idea of asking someone else to make the market, it's a huge amount of added friction.
Same with asking a trusted third party - it can be done and has on rare occasions, but it's rare because it's a lot of friction too - you have to ask the third party, they have to be willing to judge a potentially complicated subjective question that they themselves aren't the author of, etc.
I think these are potentially viable ideas if they become better supported by the Manifold features or by the community. I do think being able to delegate resolution to a third party is a useful feature, and one that many people have suggested.
This community is in flux as new people join, and there isn't to my knowledge a guide to social norms, so it seems best to have norms that are simple and roughly what someone might expect when presented with the user interface without being informed further. I think this suggests carrying over straightforward implications of normal social norms (don't lie, don't be a jerk), but otherwise presuming things are okay if they are straightforward to do on the site. In this vein, it seems bad to try to have complicated/non-obvious norms, and I think 'don't bet on your own markets' would be that, because the interface invites you to do so, and it's not immediately obvious that it might be objectionable to a new user. On the other hand, 'no insider trading' seems to be a norm in places in the outside world, and seems like it is considered fine here, and I like that.
I think it's generally fine to bet in your own subjective markets, but there are specific situations where it's beneficial to not bet, and to make clear that you are not betting.
On a basic level, I think the philosophy behind prediction markets says more information is always better. If a market resolves to the creator's opinion, and especially if it's intended as a convince-me market, it's useful to see what the creator's current opinion is, and how that opinion changes as they change their holdings in the market.
In my baby name market (What names should make the shortlist for our soon-to-be-born baby boy? | Manifold Markets), I committed to not buying shares for two reasons. First, to incentivize participation--more people participating with genuine ideas is more valuable to me than the mana that I could gain by tricking them. My goal with this market is to gain information, not to gain mana. Second, as a newer user low on the leaderboards, to try to make my intentions clear in the absence of an established reputation.
Yes I would say it's acceptable as long as the market maker discloses from the start what their conflicts of interest are, what their intentions in terms of betting will be, etc. This allows different discussions that are subjective to be had.
I just think that people will look at market makers on a case by case basis and certain market makers will gain a particular reputation if they create markets that are too much of a moving target, never based upon a third party source etc.
I think that markets which are disclosed to be completely subjective will probably only attract smaller bets, hopefully, which makes the individual exposure to any given person smaller, so it's a, "whatever." On the other side, if market makers are not allowed to bet at all, they may be less engaged in a particular market's outcome, they might not listen to particular perspectives. You kind of want a bit of drama to add to the virality of a market, which should attract more research.
@PatrickDelaney Gotcha. So I think the way I might square this myself is to just be a little clearer about what goes into my subjective resolution criteria, and a lot clearer about when I'm just updating on evidence and when I'm ready to pull the trigger on a resolution. Might even write some standard template language I use in all subjective markets.
I think the main thing people objected to was "bait and switch" signals -- wait you SAID resolution was this, but now you're exploiting this thing I didn't understand as being part of it (or at least I THINK that now based on what you just said in the comments, which might just be a misunderstanding) and then WHOA you just made a 2000 M bet, and say "you agree" with that other guy who bet against me, and now I'm nervous you're going to rug me when I was right all along by a textual understanding of the resolution criteria.
Also probably shouldn't make huge bets on principle anyway, this is how I wound up with a giant negative manifold profit hole I will never dig myself out of 😂
@LarsDoucet (Also for people reading I haven't made up my mind yet, this is just me updating based on an interesting comment)
@LarsDoucet I wouldn't worry about your actual manifold total profit, I would worry about becoming better at research and using Manifold as a way to help teach you how to research things and define things more clearly. You may be a Manifold user for years and end up having a huge unexpected loss or win at some point. I don't think it's good to get swept up in the gambling aspect of things but rather to figure out how to reduce our own biases and learn more. Ultimately I think if you do that for people as a market maker your reputation will continue to be solid. Honestly when I see a market by Lars, I don't see it as being anywhere close to like an Isaac King or Destiny, I think...reliable, v.s. those guys, more entertaining.
@PatrickDelaney > I wouldn't worry about your actual manifold total profit
Oh it's way too late for me to start caring about Manifold profit, I don't think it will ever be positive again at this point. I'm much more focused on the creator side of things anyway, and I get plenty in creator rewards to fund ongoing bets.
Honestly I've been doing some informal study of manifold profit as a metric and it doesn't seem to be super meaningful -- the size of bets will swamp outcomes, so it's not really comparable to Brier score, and if you look at the top bettors they tend to have made their profits in ways that track more with "playing the game of Manifold" than with "making good predictions".
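To make that concrete, here's a toy sketch (assumed simplified fixed-odds payouts, NOT Manifold's actual AMM math) of how two equally calibrated bettors end up with the same Brier score but wildly different profit, purely from bet sizing:

```python
import random

# Toy model: both bettors estimate the same probability P on the same
# 1000 markets, so their Brier scores are identical by construction.
# Payouts are simplified fixed odds at price P, not Manifold's real AMM.
random.seed(0)
P = 0.7  # the shared (well-calibrated) probability estimate
outcomes = [1 if random.random() < P else 0 for _ in range(1000)]

# Brier score depends only on the probability estimates, not the stakes.
brier = sum((P - o) ** 2 for o in outcomes) / len(outcomes)

# Per mana staked on YES at price P: win (1-P)/P if YES, lose 1 otherwise.
per_mana = sum((1 - P) / P if o else -1.0 for o in outcomes)

small_profit = 10 * per_mana    # cautious bettor: 10 mana per market
big_profit = 2000 * per_mana    # whale: 2000 mana per market

# Same skill, same Brier score, 200x the profit (or loss).
print(f"Brier={brier:.4f} small={small_profit:.0f} big={big_profit:.0f}")
```

So a profit leaderboard mostly ranks stake size and risk appetite, while Brier score is size-blind.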
If I wanted to get my profit back up in the black ASAP, the best way to do it IMHO risk-free and (mostly?) ethically would be to:
Create a bunch of markets likely to attract lots of volume, but with objective resolution criteria, on short timelines
Wait for the resolution criteria to become very obvious
Plow all of my money into the obvious winning outcome and drive 4% to 0 or 97% to 100
Immediately resolve the market
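The back-of-envelope arithmetic on step 3 (illustrative numbers, assuming simplified fixed-price shares and ignoring Manifold's actual AMM slippage and fees):

```python
# Buying an "obvious" YES at 97% and resolving at 100%.
# Assumed simplified model: shares cost the market price, pay 1 mana on YES.
stake = 10_000           # mana plowed into the near-certain outcome
price = 0.97             # market price once the outcome is already obvious
shares = stake / price   # YES shares bought at that price
profit = shares * 1.0 - stake  # each share pays out 1 mana on resolution

print(round(profit, 2))                # ~309.28 mana of "risk-free" profit
print(round(100 * profit / stake, 2))  # ~3.09% return, uncoupled from skill
```

A few percent per cycle, repeatable on short timelines, and it requires no forecasting ability at all.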
Which is another example of manifold profit being pretty uncoupled from "is Lars a good predictor of the future"
@LarsDoucet Which is to say -- I think Manifold is still pretty well designed, if you think of the MARKETS THEMSELVES as the primary signal it's producing. It's not great at identifying consistently good predictors (at least not through the currently legible metrics), but it's pretty great at producing useful signals about what the "smart money" thinks is most likely about proposed predictions.
@LarsDoucet The best strategy appears to be just algorithmic: not exposing oneself to any given market and trading on momentum, as the bots do. That being said, the bots hold no actual expertise.
The upside of the bots could be...Manifold could delete those accounts randomly over time, which would help fund Manifold because it gets all of those credits back, kind of like a garbage collector. If there are too many bots on Manifold, then no one will want to use it.