Good Tweet or Bad Tweet? Which controversial posts will Manifold think are a "Good Take" this week?
87% · Kevin DeLuca: The cycle of bad/good takes and analysis https://x.com/cantstopkevin/status/1851792202659229955
3% · roon: now more than ever you shouldn’t be paying down tech debt. just wait it out and pray the software agents will do it https://x.com/tszzl/status/1851021282785333595
7% · Aella: Communism seems as bad as Nazism. Why less stigma for being one? Do they mean a different thing? Or is there history amnesia? https://x.com/Aella_Girl/status/1812320840702423451
95% · Aella: fuck the nyt, now they've covered a topic I know, I realize how misrepresenting/unethical they are https://x.com/Aella_Girl/status/1360640220673105930
94%
78% · Franklin: Tier list for candidates in the presidential debate https://x.com/franklinisbored/status/1833551336601723127

You can help us in resolving options by spending at least 1 mana on each tweet you have an opinion on. Buy YES if you think it's a good take and NO if you think it's a bad take.

Many markets come in the form of "is this tweet a good take?" so I thought we'd try just doing the most direct possible version of that.

You can submit any "hot take" tweet, along with a quote from the tweet or a neutral summary of the take. The tweet can be from any time, but I think more recent hot takes would be better.

I may N/A options for quality control, or edit them to provide a more neutral summary.


As a trader, you should buy any amount of YES in tweets you think are Good Takes and any amount of NO in tweets you think are Bad Takes. I will leave the definition of those terms up to you. The number of shares doesn't matter for resolution: one share of YES is one vote, and one hundred shares of YES is also one vote.

If I think you are voting purely as a troll, such as by buying NO in every option, I may block you or disregard your votes. Please vote in good faith! But hey, I can't read your mind. Ultimately this market is on the honor system.

Note that market prices will be a bit strange here, because this is simultaneously a market and a poll. If you sell your shares, you are also removing your vote.

The market will close every Saturday at Noon Pacific. I will then check the positions tab on options that have been submitted.

If there is a clear majority of YES holders, the option resolves YES. If there is a clear majority of NO holders, the option resolves NO. If it's very close and votes are still coming in, the option will remain unresolved. The market will then re-open for new submissions, with a new close date the next week. This continues as long as I think the market is worth running. It does not matter what % the market is at, and bots holding a position are also counted. In a tie, the tweet will not resolve that week.

I may update these exact criteria to better match the spirit of the question if anyone has any good suggestions, so please leave a comment if you do.


Agreed, @Joshua where you at?

While we're waiting for this to reopen, here's a version for takes that aren't from Twitter:

Kevin DeLuca: The cycle of bad/good takes and analysis https://x.com/cantstopkevin/status/1851792202659229955

@PlasmaBallin This is amazingly clever, but on second thought, I'm not convinced it works.

Are the people who do good data-based analysis the same people who become overconfident and produce bad takes? So far, at least in this market, I think we've endorsed everything Nate Silver says.

Aella: fuck the nyt, now they've covered a topic I know, I realize how misrepresenting/unethical they are https://x.com/Aella_Girl/status/1360640220673105930

Anyone know what she's kvetching about here?

@Najawin Can't view the comments without making a xitter account 😞

@Najawin Cade Metz's article on SlateStarCodex/Yudkowskian rationality from 2021. One could argue that Metz in particular is garbage while the paper overall includes many other journalists, but he is in fact still employed there, despite a well-earned reputation and some backlash (measured in subscriber count) from his writing.

bought Ṁ25 Aella: fuck the nyt,... NO

@SeekingEternity If this is why, I think it's a bad take. The SSC article is an outlier in terms of journalistic quality - you can tell by the fact that the vast majority of other articles don't receive the same amount of backlash from people who know about the topic they're reporting on. Claiming that the whole paper is misrepresenting or unethical because of that one article goes too far.

@SeekingEternity Oh, the article that was largely correct? Horrible take then.

Aella: Communism seems as bad as Nazism. Why less stigma for being one? Do they mean a different thing? Or is there history amnesia? https://x.com/Aella_Girl/status/1812320840702423451
bought Ṁ25 Aella: Communism see... NO

I think Nazism killed more people relative to the duration of its existence and the population of people under its rule than communism did.

@PlasmaBallin chatgpt-4o-latest-20240903 calculated:

Nazi Germany: ~0.0115 people killed per person per year

USSR: ~0.0010 people killed per person per year

PRC: ~0.00067 people killed per person per year

So yeah seems plausible
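The Nazi Germany figure, at least, is easy to sanity-check by hand. A minimal sketch, assuming round numbers of roughly 11 million victims, an average population near 80 million, and 12 years of rule (1933-1945); all three inputs are my own assumptions, not figures from the thread:

```python
def deaths_per_person_year(total_deaths: float, avg_population: float, years: float) -> float:
    """Deaths attributable to a regime, normalized by population and duration."""
    return total_deaths / (avg_population * years)

# Rough sanity check with assumed round figures for Nazi Germany:
rate = deaths_per_person_year(11e6, 80e6, 12)
print(rate)  # comes out near the ~0.0115 quoted above
```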

bought Ṁ50 Aella: Communism see... NO

@PlasmaBallin First of all, utilitarianism - skill issue. Second of all, I'm pretty sure Revolutionary Catalonia, while not the greatest place to live, didn't come close. Neither the USSR nor the PRC would even remotely resemble what Marx envisioned.

bought Ṁ20 Aella: Communism see... NO

@Najawin Oh yeah, I totally agree (besides the diss on utilitarianism), authoritarian "communism" is nowhere near what I would consider true communism, but for sake of argument I compared the numbers for the regimes people usually think of

@Najawin I think non-utilitarianism is a skill issue, but I agree that body count reasoning isn't the best way to compare the two ideologies, especially when it comes to stigma. The reason I use it is because that's always the argument for why communism is worse - people say, "Why do people think the Nazis were worse when communists killed more people?" So the fact that this argument actually fails if you make the proper comparison means there are no longer any points in favor of communism being worse, or even just as bad, as Nazism.

@PlasmaBallin The epistemic argument is still definitive. Responses are just varying levels of bullet biting.

@Najawin Of all the arguments against utilitarianism, I'm surprised that's the one you think is good. I've always thought it was one of the worst philosophical arguments ever made. Consequentialism doesn't require you to know what the future holds (or at least, no form of consequentialism that anyone actually holds does), and if this objection actually worked, it wouldn't just implicate consequentialism, but also any form of beneficence whatsoever, which means it would debunk every sane ethical theory in existence.

@PlasmaBallin "For all finite t, it's impossible to know whether actions taken prior to time t are good or bad" seems like a rather damning refutation of any moral theory, no? It certainly doesn't generalize to virtue ethics or deontology - I just don't see how anyone could come to that conclusion.

@Najawin No, it's not even slightly damning. The only sense in which you don't know which actions are right and wrong is the sense in which you don't know which one objectively has the best consequences (i.e., not in the sense of praise/blameworthiness), but consequentialism doesn't hold that you're morally responsible for your actions having bad consequences that there was no way for you to know about. It just makes no sense whatsoever to hold the cluelessness objection against consequentialism when we already know that there are ways to maximize utility under uncertainty. That's what the entire field of decision theory is about.

And yes, this does generalize to all other ethical theories, unless they hold that promoting good consequences is completely morally irrelevant. Since that position would be completely insane, this is a problem for everyone (if it was a real problem at all), not one for consequentialism specifically.

@PlasmaBallin "The only sense in which you don't know which actions are right and wrong is the sense in which you don't know which one objectively has the best consequences (i.e., not in the sense of praise/blameworthiness),"

Yes, so, literally the sense that consequentialism focuses on, since it's about actions rather than the agent. So if you agree to this you agree that consequentialism is impossible to hold. (Unless you're a subjective consequentialist, but this certainly isn't the default position, and it's certainly not the case that everyone holds to subjective consequentialism. And if you're a subjective consequentialist who also dodges out of the other well known arguments against consequentialism by becoming a negative rule util, well, a subjective negative rule util is effectively just a deontologist who's lying to themselves.)

"And yes, this does generalize to all other ethical theories, unless they hold that promoting good consequences is completely morally irrelevant."

No, the distinction here is that virtue ethicists and deontologists both place heavy emphasis on the agent, whereas consequentialism is traditionally focused on evaluating the morality of the act itself. (Again, the one exception being subjective consequentialism, which considers the act as the agent intends for it to be.) (And, yes, deontology does place focus on the agent, or at least Kantianism does. See Groundwork, ftnt 8.) The former two have a much clearer bound on what they have to consider as morally relevant, as such.

@Najawin

Yes, so, literally the sense that consequentialism focuses on, since it's about actions rather than the agent. So if you agree to this you agree that consequentialism is impossible to hold. (Unless you're a subjective consequentialist, but this certainly isn't the default position, and it's certainly not the case that everyone holds to subjective consequentialism. And if you're a subjective consequentialist who also dodges out of the other well known arguments against consequentialism by becoming a negative rule util, well, a subjective negative rule util is effectively just a deontologist who's lying to themselves.)

This is all confused. You're trying to collapse the distinction between the objectively best action and the most prudent action based on your own knowledge. All consequentialists agree that there is a distinction between these things and that, since it's impossible to act directly on the former, consequentialists should do the latter. You're arguing against a theory no one actually holds.

No, the distinction here is that virtue ethicists and deontologists both place heavy emphasis on the agent, whereas consequentialism is traditionally focused on evaluating the morality of the act itself.

Placing more emphasis on the agent doesn't mean they completely ignore the consequences of your actions. If they didn't at least tell you to promote good consequences all else being equal, they would be completely insane moral views.

Again, the one exception being subjective consequentialism, which considers the act as the agent intends for it to be.

The best forms of consequentialism do hold intent to be morally relevant. That doesn't make them "subjective consequentialism" because there is more than one moral property. Intent is relevant to evaluating whether an agent has erred morally (as opposed to erring epistemically or picking the action that happened to have worse consequences through no fault of their own).

the communists defeated the nazis

@PlasmaBallin "This is all confused. You're trying to collapse the distinction between the objectively best action and the most prudent action based on your own knowledge"

Completely wrong. My statement was that right v wrong from a consequentialist point of view is focused on the action rather than the agent. This is the case. Moral motivation, decision procedures, etc, are not questions of right and wrong under consequentialism. So when I said "[f]or all finite t, it's impossible to know whether actions taken prior to time t are good or bad", this was being applied at the level of actually knowing whether actions are right or wrong, not determining what to do.

"You're trying to collapse the distinction between the objectively best action and the most prudent action based on your own knowledge. All consequentialists agree that there is a distinction between these things and that, since it's impossible to act directly on the former, consequentialists should do the latter. "

This is just bullet biting. "Yes, we agree that under consequentialism we are horrifically incapable of determining what things are actually good or bad, but we think you should use this other principle, which has no relation to determining what things are either good or bad, and cannot in principle have a relation under our moral theory."

If you truly hold the position you're claiming to hold, there's no reason why people should even take what you consider to be "the most prudent action based on their own knowledge". You've completely divorced accounts of moral evaluation from moral motivation. The (objective) consequentialist simply can't present a reason to connect the two, since it's already been ceded that we're hopelessly clueless on the first.

"Placing more emphasis on the agent doesn't mean they completely ignore the consequences of your actions. If they didn't at least tell you to promote good consequences all else being equal, they would be completely insane moral views."

Yes they do (insofar as they ignore actual consequences, rather than expected), and this misunderstands the critique. If you focus solely on the intent of the agent you've straightforwardly solved the issue, for instance. The agent doesn't need to calculate 10000 years into the future, their intent to do good and to have good consequences would be sufficient.

"The best forms of consequentialism do hold intent to be morally relevant. That doesn't make them "subjective consequentialism" because there is more than one moral property. Intent is relevant to evaluating whether an agent has erred morally"

...No, these are literally a class of theories called subjective consequentialism. Section 4, Paragraph 11. See also Paragraph 7 for the basic discussion we're having above. The charge is that consequentialism leads to moral skepticism, an inability to know what things are good and bad, and that any decision procedure drawn up is wholly unmoored from a theory of the good. One could just as easily justify hedonism as effective altruism. Or sadism.

@Najawin I mean, I think utilitarianism as a system is just nearly incoherent at the foundations. I don't know if I would even grant that actions have moral values in principle (under utilitarianism)

@Najawin Consequentialism is simply the theory that what actually matters is making things better and not some agent-focused thing like virtue or respecting deontic norms, and that therefore, you should do the best you can to make things better. This doesn't imply that no agent-focused moral properties can exist, just that those properties aren't what actually matters.

You're trying to ignore this fact and pretend that consequentialists can only talk about "rightness" in the sense of, "Which action in fact has the best consequences?" (a property better phrased as "goodness" rather than "rightness") and not in the sense of, "Given what the agent knows, what action morally ought they take?" But the cluelessness objection only makes any sense whatsoever if it's applied to the second question rather than the first. The cluelessness objection is supposed to be that consequentialism can't guide action because we're clueless about what the right action is. But that's obviously false if we have an answer to the second question, which we do.

Apparently, you think making this distinction is "biting the bullet," but there's not even a bullet here to bite - it's a complete non-issue that you've worked yourself into a tizzy over, presumably due to some deontological intuitions that there's something really specially important about choosing the "right" action rather than simply doing the best that you can, an idea that makes no sense under consequentialism.

If you truly hold the position you're claiming to hold, there's no reason why people should even take what you consider to be "the most prudent action based on their own knowledge". You've completely divorced accounts of moral evaluation from moral motivation. The (objective) consequentialist simply can't present a reason to connect the two, since it's already been ceded that we're hopelessly clueless on the first.

This makes no sense whatsoever. If I care about states of affairs that are objectively valuable, which are the only things I should care about according to consequentialism, then of course I should do the action that produces the most value in expectation. Consequentialism says that the thing that actually matters about your action is how much good it produces, which directly and obviously gives you reason to do the thing that produces the most expected good.

If you focus solely on the intent of the agent you've straightforwardly solved the issue, for instance. The agent doesn't need to calculate 10000 years into the future, their intent to do good and to have good consequences would be sufficient.

Nor does a consequentialist agent! A consequentialist agent does whatever has the best expected consequences even if they can't calculate 10,000 years into the future. And this is very obviously what you should do if you conclude that it's the consequences of your action that matter, rather than some agent-focused property.

No, these are literally a class of theories called subjective consequentialism.

You're misunderstanding the point I'm making here. I'm saying that objective vs. subjective consequentialism is a false dichotomy because it assumes that there's only one relevant property to judge by consequentialist standards, which must be either objective or subjective. But in reality, there are two: If we want to judge the action on its own, without considering the agent's moral character or responsibility, then we can talk about the objective consequences. If an agent committed an action with good expected consequences but that happened to actually be bad, then it is unfortunate that they did so, even though they are not morally responsible for this. But if we want to judge the agent themselves, and in particular what action the agent has the most reason to perform, given their knowledge, then we go the subjectivist route. These things do not conflict with each other, and all of the objections to consequentialism based on cluelessness seem to just come from a pigheaded refusal to consider these two things separately.

I think that this entire argument just rests on a false assumption about how consequentialism works. You're assuming that the way a moral theory guides action is by specifying which actions are "right," and that an agent's goal is to do the "right" action, so that an agent then chooses to perform those actions. But that's not how consequentialism works, and trying to force it into that framework is really just a circular argument against consequentialism.

Consequentialism guides action by specifying what goals you should work towards and then telling you to do the best you can at achieving them. The goal of a consequentialist agent is to promote objectively valuable states of affairs, not some agent-relative goal like "to do the thing that is 'right' for me to do in an objective sense" (which would be a circular goal anyway), and certainly not "to pick which actions are permissible on a binary scale of permissibility and impermissibility." Deontic concepts like obligatoriness and permissibility are not the way you evaluate actions in a consequentialist framework, yet this objection tries to force them onto consequentialism in an awkward way and argues that that awkwardness is somehow a problem. In reality, if you want to apply these concepts to consequentialism at all, you should just find a more natural way to do it (treating these as referring to the subjective properties is probably more natural than treating them as referring to the objective ones), and you should still not consider them as important as actual good states of affairs.

@PlasmaBallin "You're trying to ignore this fact and pretend that consequentialists can only talk about "rightness" in the sense of, "Which action in fact has the best consequences?" (a property better phrased as "goodness" rather than "rightness") and not in the sense of, "Given what the agent knows, what action morally ought they take?" But the cluelessness objection only makes any sense whatsoever if it's applied to the second question rather than the first. The cluelessness objection is supposed to be that consequentialism can't guide action because we're clueless about what the right action is"

Once more, section 4, paragraph 7. This is not what the actual objection is. Or, at least, it's not the entirety of the objection. Section II in the original paper is perhaps a little unclear that there's a second problem here - that under consequentialism you're unable to evaluate moral claims at all, but it does say this. Not merely that there's no decision procedure, but that consequentialism outright leads to moral skepticism.

'On a whim of compassion, he orders that her life be spared. But perhaps, by consequentialist standards, he should not have done so. [...] The millions of Hitler’s victims were thus also victims of Richard‘s sparing of Angie. [...] Do Hitler’s crimes mean that Richard acted wrongly, in consequentialist terms? They do not. For Hitler’s crimes may not be the most significant consequence of Richard’s action. Perhaps, had Richard killed Angie, her son Peter would have avenged her, thus causing Richard‘s widowed wife Samantha to get married again to Francis. And perhaps had all this happened Francis and Samantha would have had a descendant 115 generations on, Malcolm the Truly Appalling, who would have conquered the world and in doing so committed crimes vastly more extensive and terrible than those of Hitler.

[Note that we're just evaluating whether by consequentialist standards this action was right or wrong, not whether Richard followed a reasonable decision procedure.]'

So, no, the issue here isn't my supposed "deontological intuitions" (oh how funny that notion is - I'm a natural utilitarian intuitively and I have to consistently stop myself from thinking like one because I fundamentally believe the view is completely wrong on a rational level), it's that for some reason you aren't accepting that I really do mean what I say when I'm telling you that I know what I'm talking about, I know the distinction between the account of the good and the decision procedure, and this is a problem with the account of the good. I'm not critiquing the decision procedure, except insofar as the decision procedure becomes wholly unmoored from any account of the good as a result of the bullets you have to bite. This is a standard version of the cluelessness objection, I gave an SEP cite and everything, so it's not like I'm out here with some wild interpretation I've made up on my own.

"This makes no sense whatsoever. If I care about states of affairs that are objectively valuable, which are the only things I should care about according to consequentialism, then of course I should do the action that produces the most value in expectation."

Ah ah ah. Did you see the sleight of hand there? Caring about states of affairs that are valuable does not imply that you should care about states of affairs that are valuable in expectation, unless you agree that what matters isn't actual consequences.

"Nor does a consequentialist agent! A consequentialist agent does whatever has the best expected consequences even if they can't calculate 10,000 years into the future."

But this doesn't solve the issue. (Objective) Consequentialists don't determine whether actions are good or bad based on the intent of the agent. So a consequentialist can have whatever decision procedure they want, it doesn't change the moral fact of the act. You still have moral cluelessness regardless of their intent. Virtue ethicists and deontologists don't have this problem!

'You're assuming that the way a moral theory guides action is by specifying which actions are "right," and that an agent's goal is to do the "right" action, so that an agent then chooses to perform those actions.'

I really don't know how many more times I have to insist that I'm not doing this. It's frankly disconcerting that I've been so open about the specific objection I'm making, being very specific that it's focused on moral skepticism and how consequentialism fails as an account of the good, and you've failed to accept that this is.... even possible? You think I'm making an objection I'm just not making? I could perhaps understand misreading the original paper, not reading the moral skepticism angle properly. But I was very clear in the last comment about what I meant, and I cited the SEP to show I wasn't off on a limb by myself. Like, let's stop tilting at windmills and address the actual argument.

@Najawin No, you're still using deontological intuitions. In particular, you're using the intuition that what matters is some binary sense of right and wrong, which utilitarians couldn't care less about. What matters is the actual consequences, which means you evaluate actions by the consequences, not by some binary notion of right and wrong. And since you don't know the consequences fully, that evaluation takes the form of expected value. Note that you can have the false intuition of binary right and wrong while still having other intuitions that point in a utilitarian direction, so "But I have utilitarian intuitions" isn't a refutation of the fact that your argument relies on deontic intuitions.

I know the distinction between the account of the good and the decision procedure, and this is a problem with the account of the good.

But you don't seem to understand what the account of the good actually is. You're treating it as if you think what utilitarians care about is the goodness of actions, defined in a binary sense, rather than the goodness of consequences, which is a continuous scalar.

I'm not critiquing the decision procedure, except insofar as the decision procedure becomes wholly unmoored from any account of the good as a result of the bullets you have to bite.

And this criticism is completely false. It relies on your misunderstanding of the account of the good. If you understand what really matters to consequentialists, the decision procedure is obviously the best way to achieve it.

This is a standard version of the cluelessness objection, I gave an SEP cite and everything, so it's not like I'm out here with some wild interpretation I've made up on my own.

I'm not saying you're misinterpreting the cluelessness objection but that the cluelessness objection misinterprets consequentialism.

Ah ah ah. Did you see the sleight of hand there? Caring about states of affairs that are valuable does not imply that you should care about states of affairs that are valuable in expectation, unless you agree that what matters isn't actual consequences.

Nothing I said implies that utilitarians care about states of affairs that are valuable in expectation rather than those that are actually valuable. The only sleight of hand here is you conflating the action recommended by a decision procedure with the terminally valuable states of affairs.

(Objective) Consequentialists don't determine whether actions are good or bad based on the intent of the agent. So a consequentialist can have whatever decision procedure they want, it doesn't change the moral fact of the act.

Once again, you're assuming that the goal of consequentialists is to perform the singular best action rather than to increase utility. If your goal is to increase utility, you obviously want to go with the decision procedure that produces the most utility in expectation.

I really don't know how many more times I have to insist that I'm not doing this.

If you continue to do this, it doesn't matter how many times you insist you're not doing it. Every single objection you make about utilitarians being somehow unable to pick a decision procedure assumes that this is the goal.

it's focused on moral skepticism and how consequentialism fails as an account of the good

If by "moral skepticism," you mean that consequentialism leads to skepticism about the best decision procedure, your objection is completely false, and makes exactly the errors I'm attributing to it. Since you've continually insisted that this is the case, I don't think I'm misinterpreting your objections when I attribute these errors to them. If you just mean, "It's impossible to tell for sure which action is actually the best," then yes, that is true, but it's hardly a bullet to bite, since it has no bearing on what actions a consequentialist decision procedure recommends.

@PlasmaBallin "No, you're still using deontological intuitions. In particular, you're using the intuition that what matters is some binary sense of right and wrong, which utilitarians couldn't care less about. What matters is the actual consequences, which means you evaluate actions by the consequences, not by some binary notion of right and wrong."

To be clear, "was morally good compared to the alternatives at the time", "was morally bad compared to the alternatives at the time" and "increases net utility" are binary evaluations that run head first into the problem of moral skepticism here. The distinction you're attempting to draw is completely irrelevant. (Shockingly, continuous ranges can be used to define binary relations.) Since we're not saying that the decision procedure is the same as the actual statement of fact, the agent's determination can't be used to substitute in for the correct answer, nor can our attempt at evaluation at some finite time t. This argument, again, is an attempt to completely disconnect moral evaluation from moral decisions, and, if true, would justify every possible decision procedure.

"You're treating it as if you think what utilitarians care about is the goodness of actions, defined in a binary sense, rather than the goodness of consequences, which is a continuous scalar."

(To clarify, both of these aren't quite right. It's goodness of acts, determined by consequences. The consequences are not in and of themselves the things considered to be good in the account of the good. It's an evaluation of the goodness of the action through considering the consequences. Saying it's the consequences that are the things that are constitutive of the good is straight up incoherent as a moral theory, obviously.)

"And this criticism is completely false. It relies on your misunderstanding of the account of the good. If you understand what really matters to consequentialists, the decision procedure is obviously the best way to achieve it."

Except the entire point is that consequentialists can't have a theory of the good for specific actions - it reduces to moral skepticism. If you want to address that argument, go ahead. But I'm clearly not misunderstanding the account of the good.

"I'm not saying you're misinterpreting the cluelessness objection but that the cluelessness objection misinterprets consequentialism."

'But the cluelessness objection only makes any sense whatsoever if it's applied to the second question rather than the first. The cluelessness objection is supposed to be that consequentialism can't guide action because we're clueless about what the right action is. But that's obviously false if we have an answer to the second question, which we do. Apparently, you think making this distinction is "biting the bullet," but there's not even a bullet here to bite - it's a complete non-issue that you've worked yourself into a tizzy over, presumably due to some deontological intuitions that there's something really specially important about choosing the "right" action rather than simply doing the best that you can, an idea that makes no sense under consequentialism.'

It's hard to interpret this comment in any other way, given it's focused entirely on the second question, and not the first, when I made it clear that I was discussing the first.

"Once again, you're assuming that the goal of consequentialists is to perform the singular best action rather than to increase utility. If your goal is to increase utility, you obviously want to go with the decision procedure that produces the most utility in expectation"

That doesn't follow; you're equivocating between actual utility and expected utility. Under the moral skepticism that consequentialism leads to, we're hopeless judges of whether actions can increase utility or not. So we have no reason to pick any particular strategy.

"Every single objection you make about utilitarians being somehow unable to pick a decision procedure assumes that this is the goal."

I really don't think this is the goal, and I think subjective consequentialists can avoid this problem easily. But you, personally, have made some wacky commitments that cause this as well as moral skepticism to be a problem for you. Moral skepticism is by far the more damning one generally - I think subjective consequentialists have to effectively cede so much ground in order to shy away from it that they've sacrificed consequentialism on the altar of its name.

But moreover, I'm not even suggesting that the decision procedure you're suggesting for the consequentialist is wrong! I'm just saying that you have to bite the bullet and accept it can't be justified in any way from your theory of the good, and hedonism etc are just as justifiable. They may be incorrect and yours correct, but you just have to accept that you can't get from A to B, is all. That's what I said.

'If you just mean, "It's impossible to tell for sure which action is actually the best,"'

I mean "it's impossible to tell whether any particular action increases or decreases net utility, and whether or not it's a good or bad choice relative to the other options that were possible candidates, regardless of the finite time t at which we reconsider this problem". That's the cluelessness objection.

@Najawin

'To be clear, "was morally good compared to the alternatives at the time", "was morally bad compared to the alternatives at the time" and "increases net utility" are binary evaluations that run head first into the problem of moral skepticism here.'

Great, so you just proved my point. You're assuming consequentialists care about those binary properties, rather than the continuous property of total utility.

Also, "increases net utility" is not even a binary property. You can increase net utility by different amounts, and consequentialists consider it better to increase it by more. That's what I meant when I referred to increasing utility as the goal of consequentialists. If you interpret "increases net utility" in a binary sense, then it's obviously not the goal of consequentialists - utilitarians don't just care about an action having a positive effect of any size - they want the effect to be bigger.

The distinction you're attempting to draw is completely irrelevant. (Shockingly, continuous ranges can be used to define binary relations.)

The distinction is the difference between the view virtually every consequentialist on the planet holds and a strawman you made up. The fact that continuous ranges can be used to define binary relations is what's completely irrelevant here. No consequentialist in the world cares about those binary relations (except insofar as they're instrumentally useful) - we only care about the continuous ranges themselves.

Since we're not saying that the decision procedure is the same as the actual statement of fact, the agent's determination can't be used to substitute in for the correct answer, nor can our attempt at evaluation at some finite time t. This argument, again, is an attempt to completely disconnect moral evaluation from moral decisions, and, if true, would justify every possible decision procedure.

See, you're just repeating the same nonsense here. No one is claiming that an agent's decision procedure is guaranteed to always pick the action that is in fact the most fortuitous. But the decision theory will produce the most fortuitous consequences in expectation. If you have a problem with an agent whose goal is to maximize X* choosing the actions that produce the most expected X, then you have a problem with decision theory itself, not consequentialism. Consequentialism literally just is (Decision theory) + (the goal of having valuable states of affairs, the more valuable the better).

*in the continuous sense, not the sense where they only care about achieving the maximum value and don't care about smaller values, which you're trying to pretend is the same thing

To clarify, neither of these is quite right. It's the goodness of acts, determined by consequences. The consequences are not in and of themselves the things considered to be good in the account of the good.

This is just false. Consequentialists don't consider actions themselves to be bearers of value. The bearers of value are things in the world like a person being happy. Actions are only instrumentally valuable in achieving good states of affairs.

Now, I assume you're using "good" to mean something other than "valuable" here because otherwise your objection makes no sense whatsoever. But that's the sense that I'm using the word "good" in when I talk about "good states of affairs," so objecting to this based on a different meaning of "good" is the equivocation fallacy.

Except the entire point is that consequentialists can't have a theory of the good for specific actions - it reduces to moral skepticism. If you want to address that argument, go ahead. But I'm clearly not misunderstanding the account of the good.

You haven't actually made any coherent argument for this for me to address which I haven't addressed already. You just keep nonsensically saying that consequentialist decision procedures are "unmoored" from the account of the good, and then assume that the consequentialist account of the good is something other than what it actually is.

It's hard to interpret this comment in any other way, given it's focused entirely on the second question, and not the first, when I made it clear that I was discussing the first.

Because the cluelessness objection itself would only have any force if it was focused on the second question! That's why it misinterprets consequentialism. Cluelessness objectors apparently think that consequentialists hold someone to have morally failed if they don't take the action that would in fact produce the most fortuitous consequences, but no consequentialist in the entire world thinks that! Every consequentialist in the world holds that you should morally judge people based on how good their action was expected to be, given what they knew, not based on factors that were outside their knowledge and control. It's just that this moral judgement differs from the actual fortuitousness of their actions. That's why I said earlier that you're conflating two senses of the word "right" (or even of the word "good").

And no, the fact that the objectively most fortuitous action can be different from the most moral action for an agent to take in a given state of knowledge doesn't mean that the decision procedure is unmoored from the account of the good. The decision procedure is, after all, directly aimed at producing fortuitous actions. Unless you object to decision theory itself, then the decision procedure, "Do the action that is the most fortuitous in expectation," is not arbitrary and is obviously not unmoored from the goal of having more value realized (in the continuously graded sense).

That doesn't follow; you're equivocating between actual utility and expected utility. Under the moral skepticism that consequentialism leads to, we're hopeless judges of whether actions can increase utility or not. So we have no reason to pick any particular strategy.

Nope, it directly follows if you don't assume binary goals. Apparently, you misunderstood what I meant by "increasing utility", as the first paragraph you wrote makes clear. You're treating it as if consequentialists only care about whether their actions make the overall utility higher, but don't care how much. That's the only goal under which it would be a problem that we don't actually know which actions increase utility. But under consequentialists' actual goal, which is just, "The more utility, the better," the fact that you don't know which actions will increase utility is completely irrelevant. If your goal is "the more X, the better," decision theory tells you that to achieve your goal as well as possible, you should do whatever produces the highest expected value of X, not whatever has the highest probability of increasing X. Unless you object to decision theory itself, you can't object to this verdict of consequentialism.

I should also clarify here that I'm using "expected value"/"expected utility" as a stand-in for whatever function your decision theory uses to evaluate options, not necessarily for the mathematical expected value of the utility. If you use evidential decision theory, these would be the same, but nothing I've said here assumes evidential decision theory. That's why I said you have to object to decision theory itself, not just to a specific decision theory.
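The contrast between "highest expected value of X" and "highest probability of increasing X" can be made concrete with a toy calculation (the numbers here are my own illustration, not anything claimed in the thread):

```python
# Two hypothetical actions, each a list of (probability, change in utility).
actions = {
    "A": [(0.99, 1.0), (0.01, 0.0)],    # almost certainly a tiny gain
    "B": [(0.50, 10.0), (0.50, -1.0)],  # a coin flip on a large gain
}

def expected_value(outcomes):
    """Probability-weighted average change in utility."""
    return sum(p * u for p, u in outcomes)

def prob_increase(outcomes):
    """Probability that utility goes up at all (the binary reading)."""
    return sum(p for p, u in outcomes if u > 0)

best_by_ev = max(actions, key=lambda a: expected_value(actions[a]))
best_by_prob = max(actions, key=lambda a: prob_increase(actions[a]))

print(best_by_ev)    # prints B: expected value 4.5 beats 0.99
print(best_by_prob)  # prints A: a 0.99 chance of some gain beats 0.50
```

The two criteria pick different actions: the binary goal ("did utility go up?") favors A, while the continuous goal ("how much utility?") favors B, which is exactly the distinction the paragraph above is drawing.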

I really don't think this is the goal, and I think subjective consequentialists can avoid this problem easily.

Then why is this an issue? If the goal is not to pick the most fortuitous action (in a binary sense), then the fact that no one knows how to do this is completely irrelevant. And if subjective consequentialists (which, the way you're defining it, seems to include all consequentialists in existence) can avoid the problem very easily, why did you present it as a fatal objection to consequentialism?

But you, personally, have made some wacky commitments that cause this as well as moral skepticism to be a problem for you.

I have been explicitly saying you should use decision theory this whole time, which implies that there's no skepticism over what action you should take (at least, not in the sense of "should" that matters for a theory guiding action). I think you're just misinterpreting what I've said about certain actions being objectively good in the sense of being the most fortuitous (as opposed to the ones that are the most morally good). But I've been saying the whole time that we should distinguish between these things.

I think subjective consequentialists have to effectively cede so much ground in order to shy away from it that they've sacrificed consequentialism on the altar of its name.

As far as I can tell, what you're calling subjective consequentialism is just what every consequentialist actually believes, so I'm not sure what you think is being sacrificed. I guess it's because you think that it's mutually exclusive with objective consequentialism and that the latter is the "real" version of consequentialism. But in reality, these two views aren't distinct at all - they're just using words in a different way. As far as I can tell, there aren't any consequentialists who actually believe in the strawman version of "objective consequentialism" where you morally judge agents based on whether their actions have the best actual consequences, rather than the best expected ones. When they do use words like "ought" and "good" that apparently sound to some non-consequentialists like they're saying this, it's because they're using it in a different way. See, for example:

https://www.utilitarianism.net/types-of-utilitarianism/#expectational-utilitarianism-versus-objective-utilitarianism

https://www.philosophyetc.net/2010/05/objective-and-subjective-oughts.html

https://www.philosophyetc.net/2021/04/whats-at-stake-in-objectivesubjective.html

Both of these are written by an actual consequentialist and treat the distinction between these two views as a verbal dispute based on using different senses of the word "ought". The problem with the cluelessness objection is that it would only make sense if we were clueless about the rational ought (a particular kind of subjective ought), but we're not. We're clueless about the objective ought, which is not the kind of ought that a morally rational agent makes decisions based off of (well, not directly anyway).

And in case you think this sacrifices consequentialism because it makes it subjective, that's entirely false. Just because you've given this theory the name "subjective consequentialism" doesn't mean it's a subjective moral theory. The "subjective" here just refers to the fact that the actions one subjectively ought to take depend on their state of knowledge, but that's true in every moral theory there is, including uncontroversially objective ones.

But moreover, I'm not even suggesting that the decision procedure you're suggesting for the consequentialist is wrong! I'm just saying that you have to bite the bullet and accept it can't be justified in any way from your theory of the good, and hedonism etc are just as justifiable.

Consequentialism doesn't hold that the decision procedure is justified solely from the theory of the good - decision theory is an entirely separate matter from axiology. But if you accept a certain decision theory, or at least are open to accepting one as you're saying you are here, applying it to a given axiology just gives you a consequentialist decision procedure. That's all consequentialism is.

I mean "it's impossible to tell whether any particular action increases or decreases net utility, and whether or not it's a good or bad choice relative to the other options that were possible candidates, regardless of the finite time t at which we reconsider this problem". That's the cluelessness objection.

The first part, "it's impossible to tell whether any particular action increases or decreases net utility," is true but completely irrelevant, as I already explained. The second part, "whether or not it's a good or bad choice relative to the other options that were possible candidates," is ambiguous. By "good choice," do you mean the most rational choice, i.e., the choice you rationally ought to make? In that case, it's false. On the other hand, if by "good choice," you just mean the choice that will in fact produce good or bad outcomes relative to the others, then it's true but irrelevant. It doesn't make sense to complain about cluelessness for anything other than the rational ought.

@PlasmaBallin "You're assuming consequentialists care about those binary properties, rather than the continuous property of total utility."

So this can be read in two ways. The first is that the continuous property of total utility matters and is then used to find the binary properties, which are what define whether or not acts are morally good or bad. Yes, duh, I agree with this. The second is that these binary properties are themselves irrelevant. This is obviously untrue; nobody thinks that. Not least because it's simply not a theory of morality once you do this - all you're doing is assigning a scalar to actions!

Like, let's just quote Kagan here: "Consequentialism holds that an act is morally permissible if and only if it has the best overall consequences." (p. 142)

You need these binary relations because they're the things that define any normative framework. You can't jettison them without jettisoning morality as a whole.

"The distinction is the difference between the view virtually every consequentialist on the planet holds and a strawman you made up."

I appreciate that Shelly Kagan of all people isn't a consequentialist and is participating in my strawman. Jesus wept. If you actually believe the second horn of the dilemma above, you're not a consequentialist, you just have completely incoherent moral beliefs.

"See, you're just repeating the same nonsense here. No one is claiming that an agent's decision procedure is guaranteed to always pick the action that is in fact the most fortuitous. But the decision theory will produce the most fortuitous consequences in expectation."

You are completely misunderstanding those two sentences. Those two sentences are saying that because the decision procedure and the actual truth of whether an action is right or wrong are distinct, we can't move between them, nor can we move between the agent's evaluation and the actual truth.

'Now, I assume you're using "good" to mean something other than "valuable" here because otherwise your objection makes no sense whatsoever. But that's the sense that I'm using the word "good" in when I talk about "good states of affairs,"'

Yes, correct. With that said, moral permissibility is a reasonable facsimile for what I mean. Whether there's normative weight to do or not do something, actualize certain states of affairs, follow certain rules, etc etc. I will note, however, that if your critique of "You're treating it as if you think what utilitarians care about is the goodness of actions" used "good" to mean "value", it's simply and obviously false.

"You haven't actually made any coherent argument for this for me to address which I haven't addressed already. You just keep nonsensically saying that consequentialist decision procedures are "unmoored" from the account of the good"

That's the second, not the first. The fact you keep making this confusion calls into question the veracity of the first sentence, no?

"Because the cluelessness objection itself would only have any force if it was focused on the second question! That's why it misinterprets consequentialism. Cluelessness objectors apparently think that consequentialists hold someone to have morally failed if they don't take the action that would in fact produce the most fortuitous consequences, but no consequentialist in the entire world thinks that!"

Okay, but just insisting that it has to be about the second doesn't serve as a rebuttal to me saying let's talk about the first. Indeed, it suggests that you can't talk about the first, since this is completely unrelated to it.

"And no, the fact that the objectively most fortuitous action can be different from the most moral action for an agent to take in a given state of knowledge doesn't mean that the decision procedure is unmoored from the account of the good."

"In a given state of knowledge" is irrelevant; it doesn't belong if we're discussing objective consequentialism. If we're discussing subjective consequentialism, these two concepts are the same, so this is incoherent. The issue is that if we're discussing objective consequentialism, and we know that there are end states that have various ranges of values, and we want to get to particular ones, but we simply have no reliable way to steer our path to get to them, because we're hopeless judges of what the good is, then any path is as good as another. Maybe we still decide that one choice is the "morally right" one, but it can't be justified by our desired end result. (And I'm very much not making the confusion you seem to think I'm making.)

"You're treating it as if consequentialists only care about whether their actions make the overall utility higher, but don't care how much."

'To be clear, "was morally good compared to the alternatives at the time", "was morally bad compared to the alternatives at the time" and "increases net utility"' Please note the first one here. I'm not making the mistakes you think I'm making. I'm just not writing out every single thing in excruciating detail because it's really not relevant to the points I'm making.

"Then why is this an issue? If the goal is not to pick the most fortuitous action (in a binary sense), then the fact that no one knows how to do this is completely irrelevant. And if subjective consequentialists (which, the way you're defining it, seems to include all consequentialists in existence) can avoid the problem very easily, why did you present it as a fatal objection to consequentialism?"

To be clear, subjective consequentialists collapse the moral fact / moral evaluation / decision procedure distinction. To a subjective consequentialist, the moral reality of an act is based on the expected outcomes, not the actual outcomes. So the distinction you've so insistently been drawing this entire time is one the subjective consequentialist straightforwardly rejects. If you believe this distinction exists, you are not a subjective consequentialist.

I really need you to understand that I am not equivocating between what it's rational for someone to do and what it's moral for someone to do; when I'm discussing the decision procedure I'll try to be clear that we're discussing decision procedures, but the cluelessness objection as I'm emphasizing it is about what it's moral for someone to do.

"We're clueless about the objective ought, which is not the kind of ought that a morally rational agent makes decisions based off of (well, not directly anyway).

"And in case you think this sacrifices consequentialism because it makes it subjective, that's entirely false. Just because you've given this theory the name "subjective consequentialism" doesn't mean it's a subjective moral theory. The "subjective" here just refers to the fact that the actions one subjectively ought to take depend on their state of knowledge, but that's true in every moral theory there is, including uncontroversially objective ones."

This is not the objection, nor is it what subjective consequentialism is. The objection is that we can never know whether actions were morally right or wrong. Not whether they were rational - whether they were right or wrong. We are forced into moral skepticism. And subjective consequentialism is saying that subjective beliefs about the utility of others are the things that determine the moral reality of rules or acts a person abides by/commits, not the actual fact of the matter about the utility of others.

"But if you accept a certain decision theory, or at least are open to accepting one as you're saying you are here, applying it to a given axiology just gives you a consequentialist decision procedure."

If we're moral skeptics, this is false.

'By "good choice," do you mean the most rational choice, i.e., the choice you rationally ought to make? In that case, it's false. On the other hand, if by "good choice," you just mean that choice that will in fact produce good or bad outcomes relative to the others, then it's true but irrelevant. It doesn't make sense to complain about cluelessness for anything other than the rational ought.'

I mean in the moral sense, not the rational sense. And you can insist that it's irrelevant over and over, but that doesn't make it so. It damns you to fundamentally being incapable of ever knowing whether actions are morally acceptable.
