Will we get an "ought" from an "is" before 2030?
2030 · 36% chance

THIS IS A DRAFT UNTIL 2025; trade at your own risk. Please offer feedback.

Will humanity have strong evidence of a moral proposition that is demonstrably stance-independently true by that time?

This would require such a moral truth to be scientifically observable and/or logically/mathematically inferable using a standard set of axioms those fields already use in 2024.

As a moral non-realist who thinks the is/ought distinction is pretty strong, I have no idea what this would look like to resolve YES, but IMO it would be good news.

I will resolve based on my best judgement. I won't bet on this question.

predicts NO

Premise 1: It is morally bad for you to do X.

Conclusion: You ought not do X.

I think this argument is logically valid and derives an ought from an is.
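To make the form explicit, here is a minimal sketch in Lean 4 (my own illustration; `Act`, `Bad`, and `OughtNot` are hypothetical placeholders, not anything from the thread). The derivation goes through, but only via a bridge principle connecting the two predicates; whether that principle is analytic or question-begging is where the real dispute lies.

```lean
-- Sketch: the argument's form. `Act`, `Bad`, and `OughtNot` are
-- illustrative placeholders for acts and the two moral predicates.
theorem ought_not_of_bad {Act : Type} (Bad OughtNot : Act → Prop)
    -- Bridge principle: whatever is morally bad ought not be done.
    -- Treat this as analytic and the argument is valid; deny it and
    -- Premise 1 alone yields no conclusion about oughts.
    (bridge : ∀ a, Bad a → OughtNot a)
    (x : Act) (hx : Bad x) : OughtNot x :=
  bridge x hx
```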

@PlasmaBallin The argument must be sound as well as valid. Premise 1 looks to me to be begging the question for every X I can think of.

predicts NO

@kenakofer I don't expect this to be resolved based on this argument; it's mainly there to demonstrate that the real difficulty is in how you define "good", not in some logical rule about being unable to derive an ought from an is. If you can define "morally good" and "morally bad", then you should be able to justify certain instances of Premise 1 and thus derive an ought.

@PlasmaBallin You should google normative vs descriptive. We have a whole collection of these (usually) normative words: "good", "just", "right", "evil", "wrong", "ought", "should". Contrast that with these (usually) descriptive words: "pleasure", "balance", "capable", "lawful", "sofa". Since you have a normative premise 1 and a normative conclusion, you are at best getting an ought from an ought. We're looking for a normative conclusion that follows from sound descriptive premises.

predicts NO

@kenakofer If you define an "ought" to mean a normative statement, then yes, the argument I gave above derives an ought from an ought. That's why I said I don't expect the market to be resolved based on it. The point is that there isn't some sort of logical rule against deriving statements that use the word "ought" from ones that use the word "is", as some people seem to think. There's nothing special about "ought" statements, structurally or logically speaking, since they are equivalent to the statement that something is good/bad. This is important because many people think there is an insurmountable logical gap between normative and descriptive statements just because the former tend to use the word "ought" while the latter use the word "is". In reality, if you have a non-circular definition of the word "good", then you get normative claims from descriptive ones automatically.
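A companion sketch of that last point, again in Lean 4 (a hypothetical illustration; `MaximizesWellbeing` is a stand-in for whatever non-circular descriptive analysis of "good" one favors):

```lean
-- Placeholders for a domain of acts and a descriptive predicate.
axiom Act : Type
axiom MaximizesWellbeing : Act → Prop

-- The contested move: a purely descriptive definition of "good".
def Good (a : Act) : Prop := MaximizesWellbeing a

-- Given that definition, a descriptive premise yields the normative
-- conclusion by mere unfolding; no separate "ought" premise is needed.
theorem good_of_wellbeing (x : Act) (hx : MaximizesWellbeing x) : Good x := hx
```

The whole dispute then relocates into whether the `def Good` line is actually non-circular.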

@PlasmaBallin Yeah, 'Define "good" descriptively' is a great angle to approach this question from.

bought Ṁ10 of NO

> As a moral non-realist who thinks the is/ought distinction is pretty strong, I have no idea what this would look like to resolve YES, but IMO it would be good news.

This makes me think that this is likely to resolve NO, not because I don't think you can get an ought from an is, but because I don't think you will believe it's possible by the time the market closes. I can't imagine any new development in the next few years suddenly convincing everyone that it's possible to get an ought from an is, because that's just not how philosophy works.

@PlasmaBallin I made this question because I was floored to find that (at least in 2009) most philosophers believe there are objective moral facts. Some (not sure how many) think that we flesh bags can even perceive some of them! I admit that I find this incomprehensible, but being in the minority (were I a philosopher) gives me pause.

> This would require such a moral truth to be scientifically observable and/or logically/mathematically inferable using a standard set of axioms those fields already use in 2024.

This seems like a bad standard of evidence to use. No one thinks you can derive moral truths from ZFC, so the "mathematical" part is useless, and very few people think you can derive moral truths purely from scientific observation. You also, at the very least, need an analysis of what the word "moral" actually means (which is really the main contention in moral philosophy, rather than scientific facts).

@PlasmaBallin Yeah, and I'm certainly with the purported majority there. I'd love ideas if you think there's a more viable rephrasing of the question or standard of evidence in the neighborhood. It sounds like you're saying the current question obviously resolves NO, but so obviously that you worry I'm defining things in a weird way?

Not sure how to define "moral" or "ought" exactly, but I know moral philosophers use the phrase "stance independent moral truth" a lot. I can do some reading later and lift a definition of "moral" from one of them.

predicts NO

@kenakofer Well, "stance-independent moral truth" is just a fancy way of saying "objective moral truth". It doesn't have anything to do with how the truth is discovered, or if it can even be discovered at all. It's one thing for there to be a moral truth that can be derived from known descriptive facts. It's another thing entirely for everyone to agree that it has been demonstrated.

If you are going to look for a specific definition of "moral" or "ought", you should be careful because most definitions will more-or-less entail a certain theory of morality.

@kenakofer Can you give some examples of how this could resolve YES? Because to me it sounds almost like a category error. If there were a mathematics-style proof, it is hard to imagine the starting point. (The only reason I'm not betting this down is the big uncertainty about whether we understand the question in the same way.)

How would you treat arguments like: all people hate suffering, we are all people, therefore we should minimize suffering? Or would it have to be something deeply woven into mathematical/physical laws? (One could, for example, argue that things like suffering or people are not fundamental concepts but only emergent properties, and therefore strong truisms will likely not be true of them.)

For context: my understanding of morality is that it is basically a generalization of our shared evolutionary instincts for altruism, which gives us a common basis for reaching similar conclusions. It is often argued that artificial intelligences, for example, could have literally any values (the orthogonality thesis) and could therefore arrive at a different morality.

@Irigi Thanks for the good questions and comments.

  • I agree it sounds like a category error; if I still think so in 2030, this will resolve NO. There exist smart people who are moral realists and think physical systems can discover moral truths much like they can discover mathematical truths. I suspect some of those people would say otherwise and perhaps buy YES.

  • Broadly agree. I’ll add that “all people hate suffering, we are all people, therefore we should minimize suffering” lacks a premise like “Given that we all hate X, we should minimize X” to bridge the is/ought gap (“ought to” = “should” as I use it); see the sketch after this list. Perhaps there’s a well-justified reason (a “greater good”) that we shouldn’t minimize X after all.

  • Yeah, I’m tempted to say this question couples inversely with the orthogonality thesis, but I need to think about that some more.
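To make that missing bridge premise concrete, here is the suffering syllogism as a sketch in Lean 4 (my illustration; `Person`, `State`, `Hates`, and `ShouldMinimize` are hypothetical placeholders). Remove the `bridge` hypothesis and the proof no longer goes through.

```lean
-- Sketch: the syllogism with its hidden normative premise made explicit.
theorem minimize_suffering {Person State : Type}
    (Hates : Person → State → Prop)
    (ShouldMinimize : State → Prop)
    (suffering : State)
    -- Descriptive premise: all people hate suffering.
    (h1 : ∀ p : Person, Hates p suffering)
    -- Bridge premise (normative): what everyone hates should be minimized.
    -- This is the line the original syllogism leaves out.
    (bridge : (∀ p : Person, Hates p suffering) → ShouldMinimize suffering) :
    ShouldMinimize suffering :=
  bridge h1
```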

predicts YES

@Irigi @kenakofer When I bet on this question I was thinking of the paper The Source of Normativity (you can find a non-paywalled draft version on John Bengson's website) which proposes an approach for identifying non-normative facts which provide a fully adequate metaphysical explanation of normative facts. I do not think this paper satisfies the criteria in this market, but it does gesture towards a strategy someone could take to resolve it.

Now, I do not claim to understand the argument thoroughly, but it seemed plausible, which is why I bet so much on yes—I expect someone smarter than me will explain why they think it fails!

@Irigi “All people hate suffering” is very clearly not true; there are many belief systems and psychological profiles that value suffering, at least in certain kinds or magnitudes, as an avenue for growth and spiritual enlightenment. Furthermore, even if we all “hate” it, it might nonetheless be good for us as a species or serve some higher purpose for which our species is disposable. So I don’t think you can generalize an impression of locally hating suffering into some sort of broad moral conclusion.

predicts YES

A question of clarification: If humanity has strong evidence of a moral proposition that is dependent on the stance of a single, simple, immutable and eternal god upon which our world depends, then will this market resolve NO?

@Nadja_L Probably correct, especially since you said "stance", which makes it seem like the god could have had some other stance. "Immutable" gives me pause, though. If there were a chain of logic from truth statements to justifying deference to that particular stance (answering the question "Why ought I to obey this god?"), that would obviously suffice.

predicts YES

@kenakofer Thanks, this really clears things up!!!

bought Ṁ1 of YES

"suffering is undesirable" can be thought of as "ought" or as an "is" or as both. whether it is descriptive or prescriptive or both depends on background definitions.

@singer Suffering is indeed undesirable to me, that's a true "is". How do you interpret that prescriptively?

@kenakofer Avoid suffering?

@TheAllMemeingEye given that suffering is undesirable to one, why ought one to avoid suffering?

@kenakofer haha this reminds me of the meme "he who denies the law of non-contradiction shall be beaten and burnt until he admits that to be beaten and burnt is not the same as to not be beaten and burnt"

@kenakofer In all seriousness, I suspect this is a situation where it is only really possible to define ought/should/goodness in terms of desire/wellbeing (of the self or the collective) while keeping their intuitive properties, and it becomes increasingly unclear why one would willingly and consciously choose and pursue a concept whose definition diverges from this in the first place.