But Wasn’t Hitler Evil?

Apologists for objective moral codes often seek to make their point by posing questions such as “Wasn’t Hitler absolutely evil?” or “Wasn’t Stalin absolutely evil?” or some variant thereof. The argument is emotional rather than rational: it relies on the way our moral nature is wired into our brains to deny that morality depends on that very wiring for its existence. In other words, such apologists rely on the fact that our brains cause us to perceive morality as an objective thing to argue that, therefore, it really is an objective thing.

Archaeologist Timothy Taylor presents a variant of the “Wasn’t Hitler Evil?” argument in an essay entitled “The Trouble with Relativism,” which appeared in one of Edge.org’s latest publications, “What Have You Changed Your Mind About?” In this case, the “self-evident” evil he cites is the human sacrifice of children practiced by the Incas. According to Taylor,

In Cambridge at the end of the 1970s, I began to be inculcated with the idea that understanding the internal logic and value system of a past culture was the best way to do archaeology and anthropology… A ritual killing was not to be judged bad but considered valid within a different worldview… But what happens when relativism says that our concepts of right and wrong, good and evil, kindness and cruelty, are inherently inapplicable? Relativism self-consciously divests itself of a series of anthropocentric and anachronistic skins – modern, white, Western, male-focused, individualist, scientific (or “scientific”) – to say that the recognition of such value-concepts is radically unstable, the “objective” outsider opinion a worthless myth.

He then goes on to dismantle the historical myths that claimed “that being ritually killed to join the mountain gods was an honor that the Incan rulers accorded only to their own privileged offspring.” In fact, his research team discovered that they were actually “peasant children, who, a year before death, were given the outward trappings of high status and a much improved diet in order to make them acceptable offerings.”

Taking advantage of the moral high ground thus established, Taylor goes on,

We need relativism as an aid to understanding past cultural logic, but it does not free us from a duty to discriminate morally, and to understand that there are regularities in the negatives of human behavior as well as in its positives. In this case, it seeks to ignore what Victor Nell has described as “the historical and cross-cultural stability of the uses of cruelty for punishment, amusement, and social control.” By denying the basis for a consistent underlying algebra of positive and negative, yet consistently claiming the necessary rightness of the internal cultural conduct of “the Other,” relativism steps away from logic into incoherence.

Taylor is mistaken in equating recognition of the subjective nature of morality with “relativism.” I am familiar with the mentality of the people he describes, and I reject it as much as he does. He is quite right in pointing out the inconsistency of defending moral relativism while claiming at the same time that the internal cultural conduct of “the Other” is necessarily right. However, he also “steps from logic into incoherence” himself when he exploits the emotional impact of the murder of children to defend a “basis for a consistent underlying algebra of positive and negative.” If, in fact, those who would affirm the objective existence of morality have some logically defensible basis in mind, then, as so eloquently suggested by E. O. Wilson in “Consilience,” they should “lay their cards on the table.” They have not been able to do that to date.

Morality is an evolved trait of human beings. We perceive good and evil as absolutes because that is our nature. That is the way we are programmed to perceive them. In reality, they are subjective mental constructs. No moral revulsion or emotional response, no matter how strong, not even to Hitler’s Holocaust, or Stalin’s mass slaughter, or the ritual murder of children, can convert morality from what it really is into that which we perceive it to be.

Israel, Sweden, and the Modern Face of Anti-Semitism

If events in Sweden are any guide, European anti-Semitism is in the process of reverting from the coded “anti-Israel” version to the full-blown “Protocols of the Elders of Zion” version that prevailed in the days before the Third Reich. The “anti-Israel” form of anti-Semitism has been the prevailing flavor on the left for some time. For example, one can usually find artifacts of it in the form of a grotesque double standard on any given day at the BBC’s “news” site. Now, at least one Swedish tabloid has decided to dispense with the mask and promote the “blood libel” version in unvarnished form.

It has been the unfortunate fate of the Jewish people to fit perfectly into the role of an “out-group” for many centuries (see my post on the Amity-Enmity Complex). Obviously, things haven’t changed. Hatred of the Jews, hidden beneath a thin veneer of “anti-Zionist” camouflage for the sake of political correctness, has been a defining characteristic of the ideological left for some time. An interesting expression of this phenomenon has been the wholesale adoption of leftist “anti-imperialist/anti-colonialist” rhetoric by right wing Islamists.

The lesson here is the same one that history has been drumming into our heads for millennia, but that we still stubbornly refuse to learn. It is that the spinners of ideological utopias and the authors of the latest “modern morality” will, inevitably, fail as long as they continue to ignore the facts of human nature. Those facts aren’t going to change any time soon. In the meantime, we must understand and accommodate them. Human beings must have out-groups. They must hate. These things are as much a part of their nature as the “good” aspects of their behavior. Our predisposition to hate and despise an “out-group” must and will have an outlet. One can easily confirm this by visiting any political blog or forum on the ideological right or left and reading the comments posted there. Unless we finally accommodate ourselves to what we really are, we will continue to stumble from one holocaust to another, even as we chase after the latest chimerical ideals. Let us finally accept the fact that we must hate, understand the roots of that hate in our nature as a species, and try to find outlets for it that are not self-destructive. If we fail, self-destruction may well be our fate.

John Stuart Mill and Utilitarianism: The Quest for a Moral Law

Like many others before him, John Stuart Mill sought the summum bonum, the holy grail of the foundation of morality. For him, it was Utility, or “The Greatest Happiness Principle.” What he meant by Utility is neither here nor there as far as this post is concerned. Those interested may find his essay online at Project Gutenberg. The fact that he proposed such a solution to the age-old riddle of the fundamental law of morality is what interests us here. It demonstrates that he, too, was chasing the chimera of morality as an object. He came close but could not quite free himself of the illusion of morality as a thing-in-itself. In fact, he was well aware of the distinction between subjective and objective morality. We see this, for example, in the following passage:

There is, I am aware, a disposition to believe that a person who sees in moral obligation a transcendental fact, an objective reality belonging to the province of “Things in themselves”, is likely to be more obedient to it than one who believes it to be entirely subjective, having its seat in human consciousness only.

Mill would certainly have objected to the claim that he was a “transcendentalist.” Again, quoting from the essay,

Therefore, if the belief in the transcendental origin of moral obligation gives any additional efficacy to the internal sanction, it appears to me that the utilitarian principle has already the benefit of it. On the other hand, if, as is my own belief, the moral feelings are not innate, but acquired, they are not for that reason the less natural.

In spite of this, one constantly runs into artifacts of the implicit assumption that morality corresponds to an object, a real thing. Consider, for example, the following excerpt concerning the basis of right and wrong:

A test of right and wrong must be the means, one would think, of ascertaining what is right or wrong, and not a consequence of having already ascertained it.
The difficulty is not avoided by having recourse to the popular theory of a natural faculty, a sense or instinct, informing us of right and wrong. For – besides that the existence of such a moral instinct is itself one of the matters in dispute – those believers in it who have any pretensions to philosophy, have been obliged to abandon the idea that it discerns what is right or wrong in the particular case in hand, as our other senses discern the sight or sound actually present. Our moral faculty, according to those of its interpreters who are entitled to the name of thinkers, supplies us only with the general principles of moral judgments; it is a branch of our reason, not of our sensitive faculty; and must be looked to for the abstract doctrines of morality, not the perception of it in the concrete.

The implicit assumption here is that there really is something concrete to find. As I have pointed out in my three posts on the Question of Should, that is the fundamental fallacy of all the “transcendental” moralists, the believers in an independent moral law existing of itself. Mill does not include himself among their number, yet he evidently perceives morality in the same way. He explicitly rejected the notion of morality as a real object, yet his entire essay may be understood as an attempt to establish the claim of Utility to serve as a basis for a morality that may not actually be, but would still be perceived as, a real thing.

Still, as can be seen in the excerpt above, he was groping about, tantalizingly close to the answer. He was aware of the notion of what he called a “moral instinct.” However, he could not quite win through to the realization that the “moral instinct” was itself fundamental.

It is interesting to speculate on the impact Darwinian thought might have had on Mill’s theory of morality had he lived 20 or 30 years later. By that time, a thinker as brilliant as he would have had a much more sophisticated appreciation of the concept of morality as an evolved trait. As it was, he realized he was but the most recent of a long line of thinkers who had, so far, all fallen short in their search for the holy grail. As he put it,

And after more than two thousand years the same discussions continue, philosophers are still ranged under the same contending banners, and neither thinkers nor mankind at large seem nearer to being unanimous on the subject, than when the youth Socrates listened to the old Protagoras, and asserted (if Plato’s dialogue be grounded on a real conversation) the theory of utilitarianism against the popular morality of the so-called sophist.

Conceding that all these efforts have been in vain, he says,

To inquire how far the bad effects of this deficiency have been mitigated in practice, or to what extent the moral beliefs of mankind have been vitiated or made uncertain by the absence of any distinct recognition of an ultimate standard, would imply a complete survey and criticism of past and present ethical doctrine. It would, however, be easy to show that whatever steadiness or consistency these moral beliefs have attained, has been mainly due to the tacit influence of a standard not recognized.

The shot flew close to the mark. It is not hard to imagine that, had he read Darwin’s “The Descent of Man,” and “The Expression of Emotions in Man and Animals,” not to mention the works of Huxley and Spencer, the truth about the “standard not recognized” would have dawned on him.

Mill had much else to say of relevance to our modern political predicament. We will take this up in the context of another of his famous essays, “On Liberty,” in a later post.

John Stuart Mill

The Blogosphere Rediscovers Tarzan

Fausta, Katie Baker, et al. wander off the reservation after discovering they’re not really “new Soviet women” after all. As Karol at Alarming News puts it:

And yet, the fact is that “me Tarzan, you Jane” is ultimately what makes us hot. That’s what these feminists, who are trained to really, truly believe they want a man who is mostly like a woman, admit in these posts: “tee hee, I know I’m not supposed to like this, but I kinda do.” You know why? Evo-freaking-lution. Women like the men who take care of them. Whether it’s put food on the table or beat back the saber-tooth tiger. We’re programmed to crave the man who behaves…like a man.

I know, for you connoisseurs of “pop ethology,” this is a bit down in the weeds, but still, the paradigm shift continues. May the day come soon when the neuroscientists can explain this “programming” at a molecular level. What fun it will be to confront the world’s last, hoary behaviorists with the facts about who and what we really are.


Consequences: The Great Question of Should, Part III

In two earlier posts I explored the subjective nature of morality and some of its ramifications for the individual. In this post we will continue that discussion.

I touched earlier on the virtual impossibility of amoral behavior. We are wired to be moral creatures, and there is a moral context to all our interactions with other human beings. It is for this reason that the argument that religion is necessary because without it we would have no reason to act morally is absurd. We don’t need a reason to act morally. We act morally because that is our nature, just as it is the nature of other intelligent animals, which act morally even though they can have no idea of the existence of a God.

Morality did not suddenly appear with the evolution of Homo sapiens. Rather, it evolved in other creatures millions of years before we came on the scene. I suspect the expression of morality in human beings represents the interaction of our high intelligence, which evolved in a relatively short time, with predispositions that have undergone only limited change during the same period. One interesting result of this is the fact that we consciously perceive morality as a “thing” having an objective existence of its own independent of ourselves. An artifact of this perception that we have noted earlier is the adoption of complex “transcendental” moral systems by some of our most famous atheists, who obviously believe their versions of morality represent the “real good,” applicable not only to themselves, but to others as well, in spite of the fact that they lack any logical basis for that belief.

We all act according to our moral nature, almost unconsciously applying rules that correspond to a “good” that seems to be external to and independent of ourselves. I am no different from anyone else in that respect. I can no more act amorally than any other human being. I act according to my own moral principles, just as everyone else does. I have a conscience, I can feel shame, and I can become upset, and even enraged, if others treat me or my own “in-groups” in a way that does not correspond to what I consider “good” or “just.” Anyone doubting that fact need only look through my posts in the archives at Davids Medienkritik. I behave in that way because it is my nature to behave in that way. In fact, if I tried to jettison morality and, instead, rationally weigh each of my actions in accordance with some carefully contrived logical principles, I would only succeed in wasting a great deal of time and making myself appear ludicrous in the process.

However, there are logical consequences to the conclusion that good and evil are not objects that exist on their own, independent of their existence as evolved mental constructs. In the first place, they evolved at a time when the largest social groups were quite small, containing members who were generally genetically related to each other to some extent. They evolved because they promoted the survival of a specific packet of genetic material. That is the only reason they exist. The application of moral standards to the massive human organizations that exist today, such as modern states, is, therefore, logically absurd. Morality evolved in a world where no such organizations existed, and the mere fact that it evolved did not give it any universal legitimacy. We nevertheless attempt to apply morality to international affairs, and to questions of policy within nations involving millions of unrelated people, in spite of the logical disconnect this entails with the reason morality exists to begin with. We do so not because it is reasonable, but because that is how our minds are programmed. Under the circumstances, assuming that we agree survival is a desirable goal, it would seem we should subject such “moral” behavior to ever increasing logical scrutiny as the size of the groups we are dealing with increases. Our goal should be to ensure that our actions actually promote the accomplishment of some reasonable goal more substantial than making us feel virtuous because we have complied with some vague notion of a “universal good.”

When it comes to our personal relationships with other individuals or with the smaller groups we must interact with on a daily basis, we must act according to our moral nature, because, as noted above, it would be impractical to act otherwise. In such cases it seems to me that if our goals are to survive and enjoy life in the process, we should act according to a simple moral code that is in accord with our nature and refrain from attempting to apply contrived “universal moral standards” to our fellow beings that are absurd in the context of the reasons that promoted the evolution of morality in the first place. In other words, we should act in accordance with the well understood principles of what H. L. Mencken referred to as “common decency.”

In the process, we should not lose sight of the dual nature of our moral programming, which can prompt us to act with hostility towards others that is counterproductive in the context of modern civilization. It would behoove us to take steps to channel such behavior as harmlessly as possible, because it will not go away. We cannot afford to ignore the darker side of our nature, or engage in misguided attempts to “reprogram” ourselves based on the mistaken assumption that human nature is infinitely malleable. We must deal with ourselves as we are, not as we want ourselves to be. The formulation of complex new systems of morality that purport to be in accord with the demands of the modern world may seem like a noble endeavor. In reality, the formulation of new “goods” always implies the formulation of new “evils.” It would be better to understand the destructive aspects of our nature and deal with them logically rather than by creating ever more refined moral systems. To the extent that such systems fail to take the innate aspects of human behavior into account, they can be dangerous. Consider, for example, the new moral paradigm of Communism, with its “good” proletariat and “bad” bourgeoisie. The practical application of this noble new system resulted in the deaths of 100 million “bourgeoisie,” and what amounted to the national decapitation of Cambodia and the Soviet Union. In view of such recent historical occurrences, the current fashion of demonizing and reacting with moral indignation to those who disagree with us politically would seem to be ill-advised.

Morality is an evolved trait. Our problem is that we perceive it as an independent object, a transcendental thing-in-itself, something that it is not and cannot ever be. We must act according to our moral nature, but let us consult our logical minds in the process.

The Left and its Holy Causes: The Pose is Everything

As Byron York (via Instapundit) points out,

I attended the first YearlyKos convention, in 2006, and have kept up with later ones, and it’s safe to say that while people who attended those gatherings couldn’t stand George W. Bush in general, their feelings were particularly intense when it came to opposing the war in Iraq. It animated their activism; they hated the war, and they hated Bush for starting it. They weren’t that fond of the fighting in Afghanistan, either. Now, with Obama in the White House, all that has changed. . . . Not too long ago, with a different president in the White House, the left was obsessed with America’s wars. Now, they’re not even watching.

Instapundit adds, “Yeah, funny how the fierce moral urgency drained out of the antiwar movement as soon as a Democrat was elected President.” I suspect the level of “fierce moral urgency” has more to do with personalities than parties. After all, the level of antiwar activism on the left was much greater under Johnson, another Democratic president, than it ever was under Bush. Of course, Johnson lacked Obama’s charisma, but I suspect that the main driver of the left’s “noble commitment to peace” in the ’60s was fear of the draft. Once the draft went away, the level of devotion to the cause of world peace became a great deal more subdued.

In any case, it’s obvious that the level of “moral urgency” of the left’s assorted holy causes has more to do with emotional posing than logic. For the time being, peace must take a back seat to the health care issue, at least until the “progressives” succeed in enlisting state power to force their version of “compassion” on the rest of us. Meanwhile, Cindy Sheehan’s blog has become strangely inactive over at Huffpo, although I suspect she’ll surface again at some point as such useful idiots often do.

Emotion trumps reason when it comes to the left’s other pet causes as well. It never bothered them a couple of years ago that hooded anarchists who threatened violence to counter-demonstrators always tagged along at their “peace demonstrations,” but now let grandma and grandpa hold up signs and get a little raucous at a town hall meeting and they suddenly become “enraged, crazy” nut cases, mindless fools manipulated by “astroturfers.” Meanwhile, they would have us believe that they are really serious about reducing greenhouse gases while they continue to oppose nuclear power, the one most effective step we could take to do just that. They preach to us about saving the environment but, at the same time, wax eloquent in their promotion of illegal immigration to the US and other heavily industrialized countries in spite of the massive increase in global environmental degradation that entails.

Irrational support for holy causes is hardly a monopoly of the left, although it tends to be a more dangerous characteristic of those who want to change the status quo than of those who want to leave it alone. I suspect political proclivities in general are better understood as emotionally conditioned behavior than as a logical response to a given situation. Perhaps we all have innate psychological characteristics that make it more or less likely that we will tend to adopt a “liberal” as opposed to a “conservative” world view. Once we do, our opinions on any given subject will tend to be aligned with the prevailing dogma of our group. Logic will only be brought in as an afterthought to prop up these highly predictable “opinions.”

In a later post I will revisit this subject in the context of an earlier day.

E. O. Wilson: “Consilience,” Ethics and Fate

I first became aware of the work of E. O. Wilson when he published a pair of books in the ’70s (“Sociobiology” in 1975 and “On Human Nature” in 1978) that placed him in the camp of those who, like Ardrey, insisted on the role of genetically programmed predispositions in shaping human behavior. He touches on some of the issues we’ve been discussing here in one of his more recent works, “Consilience.” In a chapter entitled “Ethics and Religion,” he takes up the two competing fundamental assumptions about ethics that, according to Wilson, “make all the difference in the way we view ourselves as a species.” These two contradictory assumptions can be stated as, “I believe in the independence of moral values,” and “I believe that moral values come from humans alone.” This formulation is somewhat imprecise, as animals other than humans act morally. However, I think the general meaning of what Wilson is saying is clear. He refers to these two schools of thought as the “transcendentalists” and “empiricists,” respectively. He then goes on to express a sentiment with which I very heartily agree:

The time has come to turn the cards face up. Ethicists, scholars who specialize in moral reasoning, are not prone to declare themselves on the foundations of ethics, or to admit fallibility. Rarely do you see an argument that opens with the simple statement: This is my starting point, and it could be wrong. Ethicists instead favor a fretful passage from the particular into the ambiguous, or the reverse, vagueness into hard cases. I suspect that almost all are transcendentalists at heart, but they rarely say so in simple declarative sentences. One cannot blame them very much; it is difficult to explain the ineffable, and they evidently do not wish to suffer the indignity of having their personal beliefs clearly understood. So by and large they steer around the foundation issue altogether.

Here he hits the nail on the head. It’s normal for human beings to be “transcendentalists at heart,” because that’s our nature. We’re wired to think of good and evil as having an objective existence independent of our minds. Unfortunately, that perception is not true and yet the “scholars who specialize in moral reasoning,” appear singularly untroubled by the fact. Someone needs to explain to them that we’re living in the 21st century, not the 18th, and their pronouncements that they “hold these truths to be self-evident” don’t impress us anymore. In the meantime, we’ve had a chance to peek at the man behind the curtain. If they really think one thing is good, and another evil, it’s about time they started explaining why.

Wilson declares himself an empiricist, and yet, as was also evident in his earlier works, he is not quite able to make a clean break with the transcendentalist past. I suspect he has imbibed too deeply at the well of traditional philosophy and theology. As a result, he has far more respect for the logic-free notions of today’s moralists than they deserve. I have a great deal of respect for Martin Luther as one of the greatest liberators of human thought who ever lived, and I revere Voltaire as a man who struck the shackles of obscurantism from the human mind. That doesn’t imply that I have to take Luther’s pronouncements about the Jews or Voltaire’s notions about his deist god seriously.

I once had a friend who, when questioned too persistently about something for which he had no better answer would reply, “Because there are no bones in ice cream.” The proposition that morality is an evolved human trait seems just as obvious to me as the proposition that there are no bones in ice cream. If anyone cares to dispute the matter with me, they need to begin by putting a package with bones on the table. Otherwise I will not take them seriously. The same goes for Wilson’s menagerie of philosophers and theologians. I respect them because, unlike so many others, they took the trouble to think. When it comes to ideas, however, we should respect them not because they are hoary and traditional, but because they are true. We have learned a great deal since the days of Kant and St. Augustine. We cannot ignore what we have learned in the intervening years out of respect for their greatness.

In the final chapter of his book, entitled “To What End,” Wilson discusses topics such as the relationship between environmental degradation and overpopulation, and considers the future of genetic engineering. His comments on the former are judicious enough, and it would be well if the developed countries of the world considered them carefully before continuing along the suicidal path of tolerating massive legal and illegal immigration. As for the latter, here, again, I find myself in agreement with him when he says that, “Once established as a practical technology, gene therapy will become a commercial juggernaut. Thousands of genetic defects, many fatal, are already known. More are discovered each year… It is obvious that when genetic repair becomes safe and affordable, the demand for it will grow swiftly. Some time in the next (21st) century that trend will lead into the full volitional period of evolution… Evolution, including genetic progress in human nature and human capacity, will be from (then) on increasingly the domain of science and technology tempered by ethics and political choice.”

As often happens, Wilson reveals his emotional heart of hearts to us with a bit of hyperbole in his final sentence:

And if we should surrender our genetic nature to machine-aided ratiocination, and our ethics and art and our very meaning to a habit of careless discursion in the name of progress, imagining ourselves godlike and absolved from our ancient heritage, we will become nothing.

This is a bit flamboyant, and raises the question of who or what gets to decide our “meaning.” Still, Wilson’s work is full of interesting and thought-provoking ideas, and he is well worth reading.

Sam Harris and his Butterfly Net: An Account of the Capture of the “Real, Objective” Good

The human brain is a wonderful survival mechanism. It endows our species with unrivaled powers of reasoning, allowing us to discern truths about subatomic particles and distant planets that our unaided senses can’t even detect. It has also supplied us with self-constructed, subjective “truths” about things that exist only in our own minds, endowing them with a legitimacy and reality of their own. Morality is such a thing. It does not and cannot have an independent existence of its own, but believing that it does has promoted our survival. Therefore, we believe. Our brains are wired to perceive good and evil as real things, and so we do. In spite of our vaunted logical powers, some of the greatest thinkers among us cannot rid themselves of the illusion. At some level they have grasped the truth that everything about us, including our minds, emotions, and predispositions, has evolved because it has promoted our survival. On the other hand, they truly believe that one such evolved trait, morality, which we happen to share with many other animals, somehow corresponds to a real thing that has an independent reality of its own. Logically, they cannot justify their belief that good and evil are real, objective things, but, still, they believe it. Nature insists.

The “Big Three” among the “new atheists,” Richard Dawkins, Christopher Hitchens, and Sam Harris, provide interesting examples of the phenomenon. None of them would be any more capable of providing a logical basis for their belief that there is a real, objective good and a real, objective evil, and that they know the real, objective difference between the two, than Euthyphro was of demonstrating the same to Socrates. Nonetheless, all three of them are convinced that that which their brains are wired to perceive as real must actually be real. They all believe in the objective existence of good and evil, and they all believe that their own moral standards apply not only to themselves, but to others as well. Read their books and you will find all of them laced with the moral judgments that are the artifacts of this belief.

I have pointed out in earlier posts the logical absurdity of the belief that morality, an evolved emotional trait, not only of humans but of other animals as well, somehow has an existence of its own, independent of the minds that host it. Let us consider how one of the “Big Three,” Sam Harris, has nevertheless managed to convince himself that what he perceives as real must actually be real. Harris is a neuroscience researcher. He set forth his thoughts on the subject in an essay entitled, “Brain Science and Human Values,” that recently appeared at the website of the Edge Foundation. After a discussion of the process of discovering scientific truth, Harris asks,

“But what about meaning and morality? Here we appear to move from questions of truth—which have long been in the domain of science if they are to be found anywhere—to questions of goodness. How should we live? Is it wrong to lie? If so, why and in what sense? Which personal habits, uses of attention, modes of discourse, social institutions, economic systems, governments, etc. are most conducive to human well-being? It is widely imagined that science cannot even pose, much less answer, questions of this sort.”

Here, Harris has begun the process of self-obfuscation. Let us set aside for the time being the issue of what he actually means by “conducive to human well-being” and focus on the question of morality. There is no more logical reason to consider that which is “conducive to human well-being” objectively good than there is to consider it objectively good to follow Pythagoras’ admonition to avoid the eating of beans. However, making the logical leap from fact to fiction is no problem for most of us. We “feel” that “human well-being” is a legitimate good. We might even feel the emotion of shame in denying it. If someone demanded that we defend the assertion that “human well-being” is not objectively good, we would likely feel some embarrassment. It is mentally easy for us to associate “human well-being” with “objective good” in this way. It is also illogical.

Instead of simply claiming that good and evil exist because he feels they must exist, Harris merely adds an intermediate step. He points to a “self-evident” good and props it up as a “gold standard,” as “real good.” In essence, this “gold standard” serves the same purpose as God does for religious believers. They believe that God must really be good, and, because He is the standard of that which is good, His laws must really be good as well. Harris substitutes his “gold standard” for God. It must be “really good,” because, after all, everyone agrees it is good. Who can deny it? Everyone has the same perception, the same conviction, the same feeling. In reality, he is just chasing his tail. Instead of simply claiming that the existence of objective good and evil is self-evident to begin with, he claims that it is self-evident that “human well-being” is an objective good. Once we have accepted this “gold standard,” it follows that, since we have established that it is “really good,” “real good” must exist as well, as the basis for making this determination in the first place. Once he has established this “gold standard,” Harris cuts to the chase:

“Much of humanity is clearly wrong about morality—just as much of humanity is wrong about physics, biology, history, and everything else worth understanding. If, as I believe, morality is a system of thinking about (and maximizing) the well being of conscious creatures like ourselves, many people’s moral concerns are frankly immoral.”

In other words, we are to believe that morality isn’t merely a subjective predisposition, but a real thing. It is simply a question of determining scientifically what it is. Once we have done that, then we really should do good and avoid doing evil. Harris continues:

“Morality—in terms of consciously held precepts, social-contracts, notions of justice, etc.—is a relatively recent invention. Such conventions require, at a minimum, language and a willingness to cooperate with strangers, and this takes us a stride or two beyond the Hobbesian ‘state of nature.’”

Here Harris commits the fallacy of conflating “consciously held precepts, social contracts, notions of justice, etc.,” with morality itself. They are not morality, but merely manifestations of morality in human beings living in the modern world. Morality itself predates human beings by millions of years, and many other animal species besides our own act morally. The most significant difference between us and them is that they lack the capacity to speculate about whether morality is objectively real. Indeed, for them, morality is likely a more effective evolutionary adaptation than it is for us. They simply act as they are wired to act, and feel no need to invent objective reasons for their actions in the form of gods or Harris’ ersatz god, “the imperative to act for the well being of conscious creatures.”

Harris would do well to go back to square one and consider what morality really is. It is an evolved, subjective predisposition that exists because it promoted our survival. Furthermore, it promoted our survival at a time when we existed in small communities of genetically related individuals. It is a dual phenomenon: we apply one standard of right and wrong to our interactions with those within our “in-group,” and another standard to “out-groups.” It is reasonable to assume that the wiring in our brains responsible for our predisposition to behave morally, which evolved at a time when we lived in small hunter-gatherer communities, is not ideally suited to promote our survival in a world of gigantic nation states equipped with nuclear weapons. Instead of understanding this problem and addressing it rationally, Harris claims to have discovered the “real good,” in the form of “that which is conducive to human well-being.”

In reality, Harris is as religious as the most fantastical Southern Baptist. The only difference between him and them is that he believes in a “True Good” instead of a true God. He insists that, instead of understanding our own nature and accommodating ourselves to it, we should all be required to change our nature to conform to his fantasy that a scientifically discernible version of this “True Good” exists. In other words, he wants to take a giant step backwards to the era of the behaviorists and the “new Soviet man,” when it was assumed that human nature was infinitely malleable and could be molded as needed to conform to whatever arbitrary definition of “good” one chose to adopt. He won’t succeed any more than the Communists or all the other architects of heavens on earth have succeeded. Human nature is what it is, and won’t jump through hoops, even for Sam Harris. He thinks he can simply wave his hands and inconvenient aspects of human morality, such as the Amity-Enmity Complex, will just disappear. Others have tried that before him. It doesn’t work. Not only does it not work, but, in a world full of nuclear weapons, it is extremely dangerous. If we are to avoid self-destruction, it will behoove us to understand our own nature. Creating “brave new moralities” out of thin air and insisting that others conform to them does not promote such understanding. Rather, it amounts to a deliberate burying of our heads in the sand.

I can only suggest that Harris go back to his neuroscientific research. Who knows, one day he may turn up at my doorstep and present me with a vial of distilled “Good”. However, I rather suspect it’s more likely he will eventually come to a more rational understanding of human morality. At least I hope he will, and I hope the same for his two illustrious peers, Hitchens and Dawkins. It happens that the latter has a wonderfully designed website with forums for the philosophically minded. It pleases me to see that, based on their comments, some of the brighter visitors to these forums “get it” when it comes to morality. I suggest that Harris, Dawkins, Hitchens, and the rest of the intellectual gentry at Edge.org take the time to read them.

Another Paradigm Shifts: The Hunting Hypothesis, Ardrey, and “Pop Ethology”

In 1976, Robert Ardrey published the last in a series of books about the evolution of human nature, entitled “The Hunting Hypothesis.” Ardrey was one of the great thinkers of the 20th century. Unfortunately, his thoughts were not politically correct at the time. They posed a direct challenge to any number of the ideological sacred cows of belief systems ranging from behaviorist psychology to Marxism. They implied that human nature was not infinitely malleable, but based on innate predispositions that rendered mankind unsuitable for the various and sundry utopias the ideologues were cobbling together. In a word, Ardrey had positioned himself squarely in the out-group of all these ideologically defined in-groups. A great collective shriek went up. As usual in such cases, Ardrey’s challenge was not met with dispassionate logic. Rather, he was vilified as a “fascist,” ridiculed as a “pop ethologist,” and denounced as a dilettante playwright who dared to invade the territory of “real scientists.” One would do well to go back and read his books today, because, as it happens, Ardrey was right and the ideologues posing as “scientists” who vilified him were wrong.

In particular, he was right about the hunting hypothesis. The best argument his opponents could come up with against it was the absurd claim that, other than a few tortoises and other slow-moving animals, our early meat eating had been limited to scavenging. The idea that the rapid growth of brains with ever-increasing energy requirements could have been fueled by the scavenging of four-foot-tall, slow-moving creatures who had somehow managed to beat sharp-eyed vultures and speedy hyenas to their feasts was as absurd then as it is now. Ardrey demolished the notion in the first chapter of his book, but, like a dead man walking, it staggered on for years, propped up by the bitter faith of the ideologues.

I suspected at the time “The Hunting Hypothesis” was published that Ardrey and thinkers like him would eventually be vindicated, assuming free research could continue without ideologically imposed restraints. I never imagined it would happen so soon. It’s still hard for me to believe that we’ve passed through such a thorough paradigm shift, and I’m continually surprised when I see articles such as this one, entitled “Pre-humans had Stomach Cramps,” which appeared on the website of the German magazine “Der Spiegel” today. Among its matter-of-factly presented paragraphs regarding the meat-eating habits of Australopithecus afarensis, a hominid that lived more than two million years ago, one finds,

The question of when meat consumption began is important because of its association with the development of a larger brain in pre- and early humans. In fact, the human brain is three times as big as that of a chimpanzee. In order to build up an organ of such dimensions, a very large and continuous supply of nourishment must be guaranteed, and that requires meat.

Hunting is the only way of systematically bringing down animals, and this, in turn, assumes a bigger brain. As with the question of what came first, the chicken or the egg, one can’t be sure what came first, meat eating or a larger brain. However, anthropologists assume that, in the beginning, there must have been at least occasional consumption of meat, because, without it, the brain could not have expanded in volume for purely physical reasons.

All this is presented in deadpan fashion, as if no other opinion could ever have prevailed, or the matter could ever have been the subject of the least controversy. Sad that Ardrey did not live to see it.

And the moral of the story? Perhaps we should recall the words of T. S. Eliot from “Little Gidding,”

We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.

We live too much in the present, breathlessly awaiting the latest news from the worlds of science and politics. Occasionally, we would do well to recall that some very bright people, with a very different perspective, not to mention very different standards of political correctness, actually lived before our time. It would behoove us to learn from them if we really want to understand the time we’re in now. Never accept the moral certainties of today. Go back to the sources, and find out for yourself.

Even the Psychologists have Noticed Human Nature!

That invaluable bloodhound of the blogosphere, Instapundit, turned up another interesting link this morning. It turned out to be an article on the website of “Psychology Today.” Now, it happens that I was actually a subscriber to PT decades ago, but I stopped reading it after concluding that, if I really wanted to learn something about psychology, my time would be much more profitably spent reading Stendhal. My sedate, philosophical eyebrow rose almost a full notch when, in reading the article in question, I found passages such as,

Most journalists take a number of psychology, sociology, political science, and humanities courses during their early years in college. Unfortunately, these courses have long served as ideological training programs—ignoring biological sources of self-serving, corrupt, and criminal behavior for a number of reasons, including lack of scientific training; postmodern, antiscience bias; and well-intentioned, facts-be-damned desire to have their students view the world from an egalitarian perspective.
But, having worked among the Soviets, I know that large groups of very intelligent people can fall into a collective delusion that what they are doing in certain areas is the right thing, when it’s actually not the right thing at all. It’s rather like the Skinnerian viewpoint on psychology. For a full half century, psychologists insisted it wasn’t proper to posit anything going on inside people’s heads. Advances in psychology ground to a halt during that time, but it was impossible to convince mainstream psychologists that there was anything wrong to their approach. After all–everybody was using Skinner’s approach, and everybody couldn’t be wrong.

Thinking it must be an aberration, or, perhaps, an example of the tokenism so often found in the mainstream media today, I took a closer look at the PT website. Eureka! I soon began turning up links like this. Evolutionary psychology at Psychology Today?! Can you say paradigm shift?

Well, it’s nice to see that progress actually happens, even in psychology, although I suspect I’ll still consult Stendhal as my primary source for the time being. Meanwhile, it would be nice if all the geniuses in the field who had their heads up their collective behaviorist rectums back in the ’60s and ’70s would visit Robert Ardrey’s grave, perhaps decorate it with a rose or two, and murmur, “Sorry for all the abuse, old man. You were right, and we were wrong.”