Referring to the below Poll (and associated market, if interested):
What does freedom of thought mean to you?
I know that the above five options may not cover your full view. Please pick the one that most closely describes your viewpoint. Please comment below if you would like to give a caveat or can't agree with any of the options.
Open Markets - allowing any topic or event to be predicted, irrespective of its ethical or moral implications. No policing of language whatsoever is desirable, including measures to stop personal insults, racism, sexism, and so on.
These two sentences are unrelated. I strongly agree with the first and strongly disagree with the second; I don't see a good reason to combine them into a single option. It's perfectly possible (and I believe Manifold's current policy) to allow markets to cover any topic while still requiring that descriptions and comments be polite.
Moreover, these answers are not mutually exclusive, and I agree with most of B, C, D, and E as well. I didn't vote for them because there seems to be an implication that if I vote for those, I disagree with the first sentence of A, and I think that's the most important one.
@IsaacKing Ok so you would split A into two questions and pick the first option as your favorite of all, basically? Or would it be one of the other options? Or something else?
@PatrickDelaney I mean they're not mutually exclusive, so I'd ideally want to pick multiple. I want a prediction market that:
Allows nearly maximal free speech when it comes to market subjects, banning only extremely narrow categories of market that >90% of people agree would be vastly more harmful than useful, like markets on a non-public citizen's personal life being used to harass them.
Aggressively enforces civility norms when it comes to the wording used to describe those questions and comments about them, ensuring that people can be as comfortable as reasonably possible while discussing topics that they're inherently uncomfortable with.
Is meritocratic in its leadership, with a focus on avoiding the pitfalls of mob rule and bureaucracy while also preventing dictatorships and "old boys' clubs". (Prediction markets lend themselves especially well to this, as you can simply put the best predictors in charge.)
@IsaacKing I had a comment from another market which you might find interesting https://manifold.markets/PatrickDelaney/prediction-markets-poll-which-is-pr-15bafc832fb2#fNLhfkuxKXmoLzSKG1kQ
@PatrickDelaney So one bar seems to be "markets on a non-public citizen's personal life being used to harass them." Another bar seems to be "Marginalized people are less likely to feel comfortable using a website if they know they could get called slurs in the comments section, or otherwise face an openly hostile environment." Not to create a spectrum fallacy, but the first bar seems lower than the second. Is there a worry that the concept of preventing marginalization could be used cynically to shut down discussion, rather than to explore a topic with the full rigor necessary for accurately predicting whether a numerical threshold will be passed, as opposed to just casually discussing it?
@PatrickDelaney Or even not cynically perhaps, but just due to passions around a topic and preconceived values and beliefs? For example could participants fear engaging in a Gaza/Israel discussion for fear of being accused of marginalizing one side or the other?
@PatrickDelaney Yeah, weaponizing victimhood is a huge problem, and the root of most political disagreements. The problem is that everyone is a victim in some way and not a victim in others, and while some people are actually victimized more than others, it's extremely difficult to tell which is which, and everyone believes it's themselves. (The Nazis believed they were being victimized by the Jews.)
The reason I'm fine with banning a market like "Will Alice from down the street who doesn't like me be raped within a year" is that there will be broad community agreement that such a market serves no useful purpose to society. (Of course the market could serve a useful purpose to Alice, but then she can just consent to it and it's not a problem.) And society also strongly believes that Alice being upset by it is reasonable. (I highly doubt that the existence of the profit incentive would increase the chance of rape by a non-negligible amount, but the simple act of publicizing the question could.)
Statements like "Marginalized people are less likely to feel comfortable using a website if they know they could get called slurs in the comments section, or otherwise face an openly hostile environment" on the other hand do not have widespread agreement, and a large fraction of people do not believe that's a valid excuse. Additionally, even those who are in favor of the statement would likely agree (after dodging the question several times) that the real-world outcome being asked about is an important one, and knowing its probability would be very useful to society. (Even if they think that having a market on such a thing is unethical for other reasons.)
And taking the inside view for a moment, I've almost never seen such a statement made in good faith. If it's actually "this person called me a slur and that's bad" then yeah, I'm absolutely on board for censoring that, but in reality it tends to be stuff like "this person politely disagrees with me and I don't have evidence to back up my position, which is making me uncomfortable so this is not a safe space". (I know this sounds like an exaggeration, but I've seen some absolutely wild arguments like that made in full sincerity. When a given subculture doesn't have a "this is too much deference" line, some of the people being deferred to will exploit that to pretty ridiculous lengths for personal gain.)
Additional poll, more feature-oriented, created:
https://manifold.markets/PatrickDelaney/if-you-had-to-pick-what-criterium-w#
A is closest, though personal insults are low-value and I'm not against discouraging them in principle. Insulting groups, while unpleasant and rarely high-value, is something I prefer to protect categorically, because rarely is not never, and banning it at any time is likely to turn into banning it at all times.
Protect racism, sexism, and so on; do not, beyond that, protect personal insults. This probably necessarily involves protecting darkly hinting at other people's race/sex/etc., which is unfortunate but should still be protected.
@JiSK OK so would it be fair to describe you as, "A., but no hate speech, but you have to be really careful not to misinterpret a personal insult as hate speech."
Is that kind of what you are going for or something else? What would be your second choice if not A., if any of the above? Or are you more saying it would be a combination of high A. + a smidge of prevent harassment?
@PatrickDelaney No, I am in favor of protecting hate speech. Be hateful of people all you like, as long as it's not your interlocutors.
@JiSK OK, so how do you know what your interlocutors find to be hateful ahead of time? How do you account for mistakes (again, in the context of a prediction markets platform)? Is there some kind of guideline that says that people are allowed, and even encouraged, to offend, but need to say sorry or make concessions after the fact if someone brings up a serious issue? Is it some kind of good faith community guidelines? Or would it be more like... repetitive willful behavior (e.g. a user knowingly repeating said issue again and again over time)? Or something else?
@PatrickDelaney You badly misunderstand me. None of that. It doesn't matter. Their opinion of whether the things are hateful is completely irrelevant. The question is whether it's intended as a slight against your interlocutors, which is, while not perfectly objective, much easier to assess.
If I am imputing various disgusting or evil characteristics to Jews, and it so happens that my interlocutor is Jewish, that's fine and should be protected, as long as I didn't know, and didn't have overwhelming reason to suspect, that they were Jewish. If I did know or suspect that, then presumptively it was intended as a sideways attack on them personally and should be heavily discouraged. Adequately disclaiming the personal relevance might sometimes be sufficient to put it back in-bounds but in general that's very hard and maybe not worth bothering to pick out. (Conversely, if I falsely believe that my interlocutor is Jewish and throw around accusations about Jews, that's presumptively out of bounds.)
It is entirely about the person speaking and their intent. Reasonable debate doesn't involve name-calling, and serious name-calling, serious insults, should therefore be out of bounds. Talking about groups not present, or, hell, even 'well most X are evil but you don't seem to be', is just politics, and that should be protected.
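If it helps to make the rule concrete, it could be sketched as a small decision function. This is purely my hypothetical illustration; the function name and its inputs are my invention, not anything JiSK or Manifold specified, and the hard part in practice (assessing intent) is hidden inside the boolean inputs:

```python
def comment_is_in_bounds(targets_group: bool,
                         speaker_knew_or_suspected_member: bool,
                         is_direct_personal_insult: bool) -> bool:
    """Hypothetical sketch of the intent-based rule described above.

    A remark about a group is protected unless the speaker knew (or had
    overwhelming reason to suspect) that an interlocutor belongs to that
    group, in which case it is presumed to be a sideways personal attack.
    Direct personal insults are always out of bounds.
    """
    if is_direct_personal_insult:
        return False
    if targets_group and speaker_knew_or_suspected_member:
        # Presumptively a disguised attack on the interlocutor.
        return False
    return True

# Hateful remark about a group, interlocutor's membership unknown: protected.
assert comment_is_in_bounds(True, False, False)
# Same remark when the speaker knows the interlocutor is a member: out of bounds.
assert not comment_is_in_bounds(True, True, False)
# Serious name-calling aimed at the interlocutor: out of bounds.
assert not comment_is_in_bounds(False, False, True)
```

The point of the sketch is that the rule keys on the speaker's knowledge and intent, not on how the target feels about the remark, which is exactly the distinction drawn in the comment above.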
@JiSK Seems like a strange rule. I'm half Russian, so now that you know that, can you not say anything bad about Russians? I think you should insult me severely if you think I'm wrong. Make it personal & whatever; I like pain.
@JiSK Pretend you are launching a discussion forum. How do you write a rule which results in the outcome you want?
I'm the only psycho so far that chose option A, though I did it with some reluctance, since the one thing I'm 100% absolutist about is not being 100% absolutist about anything. Personal insults are how we say "hello" in my family. I will say I can't stand spammers, however.
I would put option D in 2nd place since that doesn't totally presuppose the "community" is right about what it might desire to censor or suppress. Of course, we've all read Tetlock and understand that Foxes > Turtles, and I think that applies with respect to epistemic foundations as well. Sometimes the raving lunatic racist pervert has something important to add to a prediction.
@AlQuinn What are Tetlock and Foxes > Turtles? Can you go into a bit more detail about that and how it relates to epistemic foundations?
@PatrickDelaney Sorry, got my animals mixed up. Foxes vs Hedgehogs, as Tetlock wrote about in "Superforecasting" and other places, e.g.:
https://longnow.org/seminars/02007/jan/26/why-foxes-are-better-forecasters-than-hedgehogs/
@PatrickDelaney Apologies, that link of mine wasn't the clearest articulation of that concept. Here is a good synopsis:
"Hedgehogs are people who make predictions based on their unshakeable belief in what they see as a few fundamental truths. Foxes, by contrast, are guided in their forecasts by drawing on diverse strands of evidence and ideas. Hedgehogs know a few big things, foxes know lots of little things. When new information comes to light a fox is likely to adjust her forecast, where a hedgehog will likely discount the new data. For a fox, being wrong is an opportunity to learn new things."
@AlQuinn OK so your worry is that under a system such as B, C, or E, Hedgehogs would overwhelm Foxes (regardless of whatever the ideology of the Hedgehogs might be). Is that what you are trying to say? Are you saying that A would be ideal, but obviously you don't want to actively hurt people, so that's why D is a good close second? Hope I'm not reading too much into what you are saying here or trying to fit you into a box too much, just trying to get more of your thoughts on the entire post above that you had written.
@PatrickDelaney I think trying to control discourse beyond banning anything illegal, spam, and a handful of other edge cases, no matter how well-intentioned, could damage the quality of predictions due to the breadth of opinions and perspectives being suppressed. Of course, I wish more people would comment on Manifold about their thought process (no matter how demented I find their thoughts to be!) because that's what I find to be one of the more interesting things on this site.
The problem with option C, for example, is it presupposes we would know how to moderate the site to arrive at good predictions. I believe this is impossible to do in practice, because the set of inputs and perspectives needed to predict an arbitrary set of questions accurately is unknowable a priori.
B is flawed in my view because it relies on "misinformation" being a clearly recognizable thing. Sometimes it is, but often not; it should be open to debate which assertions constitute information versus misinformation.