Mathematically perfect, self-consistent value systems are just not possible to square with human intuition, and since all value systems are arbitrary, there's no particular reason to try to force myself into the most elegant one. As long as I avoid getting Dutch-booked/money-pumped (a toy money pump is sketched below), I should otherwise do whatever I feel like doing.
This rejects utilitarianism, since there's no particular reason why I should care equally about any two humans but shouldn't also care about, say, a rock. The entities that I care about are arbitrarily chosen, and if I care more about some humans than others, there's no logical argument to the contrary.
It does not reject emergent game-theoretical considerations, i.e. instrumental approaches towards satisfying my values. A society of entirely self-interested psychopaths would still agree to ban murder, as it's mutually beneficial to do so.
This also rejects attempts to formalize personal identity; if I care about waking up from sleep but don't care about surviving death via cryonics, that's not inconsistent, just an arbitrary value system like any other.
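To make the money-pump constraint concrete, here is a minimal sketch (the items, trade rule, and fee are illustrative assumptions, not anyone's actual preferences) of how an agent with cyclic preferences gets drained:

```python
# Minimal money-pump sketch. An agent with the cyclic preference
# A > B > C > A will pay a small fee for each "upgrade" and end up
# holding what it started with, strictly poorer.

# Strict pairwise preferences: (x, y) means the agent prefers x to y.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts(current, offered):
    """The agent pays the fee to trade whenever it prefers the offer."""
    return (offered, current) in prefers

money, holding, fee = 100.0, "B", 1.0

# A bookie offers items around the cycle; the agent accepts every time.
for offered in ["A", "C", "B"] * 2:
    if accepts(holding, offered):
        holding, money = offered, money - fee

print(holding, money)  # B 94.0: same item, six fees poorer
```

Transitive preferences admit no such cycle, which is why avoiding the pump is the one structural constraint being signed up for here.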
Best counterargument(s) to the above get a bounty.
1. Moral norms are not different in kind from epistemic norms. (Premise; it can be expanded upon if asked for, but it is defended at great length in Terence Cuneo's book The Normative Web. In particular, metaethical constructivists directly base morality on rationality. The two are joined at the hip.)
2. By (1), if there is a successful argument against nonsubjective moral norms, there will be an analogous argument against epistemic norms with similar force.
3. There are nonsubjective epistemic norms. (Premise, but one presupposed by the argument given above, which appeals to logical argumentation and self-consistency, both examples of things with epistemic normative force.)
4. Thus, there can be no successful argument against epistemic norms: any such argument would have to rely on the very epistemic norms it attacks, undermining itself.
5. Thus, by (2), there can be no successful argument against nonsubjective moral norms.
6. Thus, the argument given above must fail. QED.
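For clarity, a minimal propositional sketch of the argument's skeleton (the symbols M and E are glosses added for exposition, not part of the premises):

```latex
% M: there is a successful argument against nonsubjective moral norms
% E: there is a successful argument against epistemic norms
\begin{align*}
\text{(2)}\quad & M \rightarrow E && \text{from (1): parity of moral and epistemic norms}\\
\text{(4)}\quad & \lnot E         && \text{from (3): any argument presupposes epistemic norms}\\
\text{(5)}\quad & \lnot M         && \text{modus tollens on (2) and (4)}
\end{align*}
```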
This may be a distinction without a difference from your perspective, but I would say that values are contingent aesthetics rather than arbitrary value systems.
Humans have evolved moral/ethical tastes which work in some complicated way to determine their preferences among sets of world states.
There is no particular reason why you should care more about human suffering than about a rock, but your moral choice here is not fully unconstrained, because you do in fact (hopefully) care more about humans than rocks. Under this view, moral progress can still be made, because it is possible to be confused either about what is true of the world or about what is true of our own moral aesthetics.
Confused About The World Example
I would in fact prefer a world where everyone lives in Christian Heaven, but it may simply not be true that convincing others to follow the Ten Commandments will bring about that Heaven world state.
Confused About Aesthetics Example
Exposure to books like Animal Liberation may raise the salience of animal suffering within my own aesthetic preferences and reveal that I do in fact care a great deal about the suffering of animals. It can certainly be debated whether I am revealing a preference I already had or simply changing my existing preferences by reading the book.
This view is not quite the strawman of enlightened self-interest where one does moral things only because doing so will make one happy. One can still have preferences about how the world will look 100 years after one's death, or about outcomes in weird 'do a good thing and then immediately take an amnesia pill' thought experiments.
This still leaves open a lot of value systems, such as that of a truly dedicated psychopath who is correct about their own antisocial aesthetic preferences. There is also a lot of confusion here about what we should do when our own actions will reliably change our existing values.
We do, however, escape the trap of having to justify our value system logically from first principles. Moral progress proceeds, on the one hand, by trying to gain more wisdom and see more clearly how the happiness or suffering of others makes the world more or less beautiful, and on the other hand, by working out practically how our actions can bring about more desirable world states (usually by collaborating with others who have aligned moral aesthetics).
TLDR:
There just is some physical fact about what you do value as an aesthetic preference among world states. This fact is contingent on your biology, psychology, sociology, etc., but it can't be arbitrarily changed at will. There is still plenty of room here for moral progress, but there is nothing written in the stars of the form "maximizing well-being for all conscious creatures, impartially considered, is Good".
It seems like torturing babies for fun is bad. There's no reason to believe it's not. So you have good reason to believe that torturing babies for fun is objectively bad.
I don’t have enough of a vocabulary or an education to fully answer this question, but I’ll give it my best shot.
In order to prove unequivocally that values have an underlying purpose, you would have to unearth the origin of morality.
As you stated, people don’t intuitively have a desire to do only “good”, so we could assume that morality is an arbitrary concept created by certain humans in order to gain wealth, social status, and some form of security. But being “good” doesn’t always pay; sometimes it’s more beneficial to be “bad”.
The way I see it, the only way being “good” would come to be seen as a “good” thing by people is if there were some kind of god who created humanity.
I think the best way to find out whether values are arbitrary would be to find out whether there is a god, and if so, who that god is.
That’s my 2 bits.
Values are in fact arbitrary. In the long term, however, it will be more beneficial for you to be good to people than to do whatever you feel like. We live in a world where your reputation has a massive impact on how you are treated. If you treat people well and build a good reputation, other people will treat you better, and you will be happier. You should care about other humans because your brain is almost certainly hard-wired to feel good when other people care about you. The best way to get people to care about you is to care about them :)
Sounds like you're describing some sort of moral skepticism. Read more here, but this view is generally very unpopular with philosophers.
Interactive mode:
Is it fair to say that your main question above is something like "Why would I care about consistency for consistency's sake"?
It does not reject emergent game-theoretical considerations, i.e. instrumental approaches towards satisfying my values
Where do you draw the line between instrumental and terminal? For instance, in the psychopath society case you mention, wouldn't it also be coherent to agree to find murder repugnant, since that too can be beneficial?
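For concreteness, here is a toy symmetric game (the payoff numbers are illustrative assumptions, not anything from the thread) under which purely self-interested agents who have agreed to a murder ban each do best by keeping it:

```python
# Toy symmetric game with illustrative payoffs. Two self-interested
# agents each choose whether to honor an agreed murder ban.
payoffs = {
    # (my_move, your_move): my_payoff
    ("ban", "ban"): 3,        # mutual safety: the best outcome for each
    ("ban", "murder"): 0,     # honoring the ban alone leaves me exposed
    ("murder", "ban"): 2,     # defecting alone gains a little
    ("murder", "murder"): 1,  # war of all against all
}

def best_response(your_move):
    """A purely self-interested agent's optimal move against a fixed opponent."""
    return max(("ban", "murder"), key=lambda m: payoffs[(m, your_move)])

print(best_response("ban"))     # 'ban': once agreed, the ban is self-enforcing
print(best_response("murder"))  # 'murder': without agreement, defection is also stable
```

Note that the matrix only fixes behavior: whether each agent merely refrains from murder (an instrumental stance) or comes to find it genuinely repugnant (a terminal one) is exactly the line this question is probing.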