Moral Evaluations

When it comes to weighing the moral implications of an action, I take the otherwise uncontroversial stance that we can make meaningful, reliable judgments about the relative morality of competing moral systems. Some systems of moral evaluation are frankly terrible, and we gain nothing by standing by silently when people attempt to justify atrocities with them. I say my position is “otherwise uncontroversial” because some believers assert that atheists cannot make claims to objective morality, suggesting that only members of their own faith possess the One True Morality (a presumption with which I have previously disagreed at some length). Such presumptions are ridiculous and cannot be made by anyone who purports to engage in honest discourse.

Consider the hypothetical case of a soon-to-be rapist who has convinced himself that his victim will enjoy the experience; he believes himself an unparalleled lover, and it is inconceivable to him that his victim will be harmed or traumatized in any way. His intentions are good, even though he is about to perpetrate a terrible misdeed. Would anyone therefore argue that his actions are morally good? I don’t think so. Is our moral evaluation of this man any different from that of a rapist who knows his actions will cause great suffering and proceeds anyway? I think so, yes, because the first does not intend harm while the second does. From a “what now?” standpoint following each rape, however, it should be obvious that both men deserve punishment, and I believe this demonstrates that an action’s consequences (i.e., its effects in the real world) outweigh considerations of intent.

Let’s look at this issue from a different angle, though, by considering four different scenarios:


More on Objective Morality

I’ve been involved in a discussion of objective morality recently, and I’d like to open up my most recent thoughts on the matter to comment. What follows is the argument I’ve laid out for a system of morality based on objective measurements of actions as they relate to a shared value (although it goes without saying that a number of greater minds than mine have made similar arguments). If you’re not familiar with how the term objective varies within the scope of philosophy, I’d urge you to read my previous article on the matter. If you’re interested in a more developed account of non-theistic moral reasoning, you can’t go wrong by reading this book.

To construct our moral system, we require no god, nor any other supernatural claim. We begin with a single goal, which is a subjective value, and build a network of objective standards on top of that value statement. With this goal clearly identified, we can objectively establish whether any given action furthers or impedes that goal. The initial premise does not need to be universally supported, but we can use something that almost everyone would agree with. To demonstrate how this works, I’m going to suggest the most selfish value possible: “I want to be happy.”

This may not be the strongest ethic upon which to build a society, but it’s entirely sufficient. (Indeed, I think we can and should choose better goals, and it’s not actually necessary to restrict ourselves to just a single value, but I’m trying to illustrate a point here.)

What kind of world would be compatible with this end goal? Is a society that allows murder/theft/rape going to make us more likely to be happy? Superficially, if you’re the kind of person who wants to do these things, you might think you’d be better served by answering in the affirmative. But consider the implications of your being unrestricted, and it quickly becomes apparent why the freedom to do these things would not make you happier: if you’re permitted to commit atrocities against other people, they are also permitted to inflict them upon you. The standard applies to all people equally, so the only options are 1) X is okay for everyone or 2) X is okay for no one.* Since having our property stolen and our dogs murdered would not make us happy, the goal is objectively better met if we outlaw theft and at least one form of canicide. So what happens in the case of aberrant behavior? What obligation is there in this system to prefer law and order over chaos and mayhem? Well, if someone criminally victimizes you, you will be less happy, so you have an active incentive to discourage criminality; an objectively efficient way to do this is to maintain a fair, strong, and consistent legal system. And so on.

Again, the goal of self-happiness is not the strongest possible example, but I’m just trying to demonstrate that even selfish values can be used to create “good” systems of morality. Once that goal exists, the moral system is objective in its evaluation of actions by judging their relationship with that goal.
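To make the structure of that claim concrete, here is a toy sketch in Python (entirely my own invention; the action names and happiness scores are hypothetical placeholders, not a serious model of ethics). It illustrates only the point above: the goal is chosen subjectively, but once it is fixed, the verdict on each action follows mechanically.

```python
# Toy sketch: a subjectively chosen goal, then objective evaluation
# of actions against it. All names and scores are hypothetical.

GOAL = "I want to be happy"

# Estimated effect of permitting each action on the chooser's own
# happiness, once the rule applies to everyone equally
# ("X is okay for everyone" or "X is okay for no one").
effects_if_universal = {
    "theft": -1,              # your property can be stolen too
    "murder": -1,             # you can be murdered too
    "honest trade": +1,
    "fair legal system": +1,
}

def furthers_goal(action: str) -> bool:
    """Objective step: given the fixed goal and an estimated effect,
    the verdict follows mechanically from the sign of the effect."""
    return effects_if_universal[action] > 0

permitted = [a for a in effects_if_universal if furthers_goal(a)]
forbidden = [a for a in effects_if_universal if not furthers_goal(a)]
```

The subjectivity lives entirely in `GOAL` and in the estimated effects; swap in a different goal or different estimates and the same mechanical evaluation yields different, but equally objective, verdicts.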

In anticipation of the (ridiculous) question of why someone should be obligated to be moral even if the system isn’t founded on philosophically objective values, I offer this explanation: it’s for the same reason that we would desire a legal system in the “I want to be happy” system of morality; the rest of us are not obligated to let you abuse us. We are and should remain capable of defending ourselves against violation, and it would be self-destructive to allow outside threats to torture and kill at their own discretion.

Believers often profess a desire for a moral system to be immutable, and it’s possible for the kind of system I’ve outlined here to maintain its goals indefinitely, though there’s no reason to demand that it do so. As societies develop, new problems develop with them. Even within the scope of a single goal, we may be forced to revise our understanding of how that goal is best achieved over time. Thus, it is absolutely vital to maintain an element of flexibility. Unless the “perfect” moral system** has been achieved (and I think such a thing is factually impossible given the influence of time), a system whose rules cannot be changed would be undesirable, because it would permanently crystallize whatever inequities it contains.

I believe this sort of moral system is far closer to what we see in reality than any of the claims made by religious texts. If there were really an all-powerful god concerned enough with the world to establish a set of absolute rules and demand obedience, we’d expect to see widespread (or at least 50%!) adherence to those rules. As it is, the world’s largest religion is actively disbelieved by two-thirds of the world’s population. That does not strike me as a statistic that supports the claim that an omnipotent entity wants our obedience, but it is the kind of statistic you’d expect to see if people form their moral values in communities.

 
 * Although we can safely constrain both of these possibilities to say “X is okay/forbidden in situation Y.”
** I actually think the notion of a perfect moral system is not even coherent. Is it possible to have a society in which nothing can be improved? I doubt it.

Objective Morality as a Shell Game

One of my gripes against professional philosophy is that the great lengths philosophers go to in constructing precisely defined arguments have a strong tendency to obscure the intended message to anyone who isn’t versed in the nuances of the field. Let’s be clear here: I would never dream of criticizing someone for endeavoring to communicate effectively, which is precisely the philosopher’s intention in creating these elaborate constructs. Language is one of my things, and I have great respect for people who appreciate it and master its use. The written word is a playground for the mind, and those who are unwilling to pay the price of entry probably wouldn’t enjoy the rides anyway. In this case, that price is the willingness to occasionally dust off an old tome or, far more likely, to take a few seconds to tab over to a dictionary website (and I’ll take no shit from shortsighted, uncreative, reality-challenged pedants for my verbing of that noun, thank you). I’m sorry, but if the prospect of learning a new word so turns you off to reading a relatively short piece you’ve found online, you’ll find that this blog simply is not for you. And that’s okay—for now, anyway. Remember me a few years from now when you’ve come around to my side of things, and pay me a visit.

“But,” you might be tempted to ask, “haven’t you just done the very thing that you disparaged philosophers for doing?” Meh, sort of. I don’t think any of those words is particularly daunting, but even if so, each of them can be looked up easily with a quick jaunt over to any one of these helpful websites. Understanding philosophical concepts takes slightly more work. Let’s carry on.
