I’ve been involved in a discussion of objective morality recently, and I’d like to open my most recent thoughts on the matter up for comment. What follows is the argument I’ve laid out for a system of morality based on objective measurements of actions as they relate to a shared value (although it goes without saying that a number of greater minds than mine have made similar arguments). If you’re not familiar with how the term objective varies within the scope of philosophy, I’d urge you to read my previous article on the matter. If you’re interested in a more developed account of non-theistic moral reasoning, you can’t go wrong by reading this book.
To construct our moral system, we require no god, nor any other supernatural claim. We begin with a single goal, which is a subjective value, and build a network of objective standards on top of that value statement. With this goal clearly identified, we can objectively establish whether any given action furthers or impedes that goal. The initial premise does not need to be universally supported, but we can use something that almost everyone would agree with. To demonstrate how this works, I’m going to suggest the most selfish value possible: “I want to be happy.”
This may not be the strongest ethic upon which to build a society, but it’s entirely sufficient. (Indeed, I think we can and should choose better goals, and it’s not actually necessary to restrict ourselves to just a single value, but I’m trying to illustrate a point here.)
What kind of world would be compatible with this end goal? Is a society that allows murder/theft/rape going to make us more likely to be happy? Superficially, if you’re the kind of person who wants to do these things, you might be tempted to answer in the affirmative, but if you consider that the same permission would extend to everyone, it quickly becomes apparent why your freedom to do these things would not be likely to lead to your happiness. If you’re permitted to commit atrocities against other people, they are equally permitted to inflict them upon you. The standard applies to all people equally, so the only options are either 1) X is okay for everyone or 2) X is okay for no one.* Since having our property stolen and our dogs murdered would not make us happy, the goal is objectively better met if we outlaw theft and at least one form of canicide. So what happens in the case of aberrant behavior? What obligation is there in this system to prefer law and order over chaos and mayhem? Well, if someone criminally victimizes you, you will be less happy, so you have an active incentive to discourage criminality; an objectively efficient way to do this is to maintain a fair, strong, and consistent legal system. And so on.
Again, the goal of self-happiness is not the strongest possible example, but I’m just trying to demonstrate that even selfish values can be used to create “good” systems of morality. Once that goal exists, the moral system is objective in its evaluation of actions by judging their relationship with that goal.
In anticipation of the (ridiculous) question of why someone should be obligated to be moral even if the system isn’t founded on philosophically objective values, I offer this explanation: it’s for the same reason that we would desire a legal system in the “I want to be happy” system of morality; the rest of us are not obligated to let you abuse us. We are and should remain capable of defending ourselves against violation, and it would be self-destructive to allow outside threats to torture and kill at their own discretion.
Believers often profess a desire for a moral system to be immutable, and it’s possible for the kind of moral system I’ve outlined here to maintain its goals indefinitely, although there’s no reason to demand that this be the case. As societies develop, new problems develop with them. Even within the scope of a single goal, we may be forced over time to alter our perceptions of how that goal can best be achieved. Thus, it is absolutely vital to maintain an element of flexibility. Unless the “perfect” moral system** has been achieved (and I think such a thing is factually impossible given the influence of time), a system whose rules are incapable of being changed would be undesirable because it would permanently crystallize any inequities in that system.
I believe this sort of moral system is far closer to what we see in reality than any of the claims made by religious texts. If there were really an all-powerful god concerned enough with the world to establish a set of absolute rules and demand obedience, we’d expect to see widespread (or at least 50%!) adherence to those rules. As it is, the world’s largest religion is actively disbelieved by two-thirds of the world’s population. That does not strike me as a statistic that supports the claim that an omnipotent entity wants our obedience, but it is the kind of statistic you’d expect to see if people form their moral values in communities.
* Although we can safely constrain both of these possibilities to say “X is okay/forbidden in situation Y.”
** I actually think the notion of a perfect moral system is not even coherent. Is it possible to have a society in which nothing can be improved? I doubt it.