On the Irrelevance of Objective Morality

I don’t believe in objective morality. In other words, I don’t believe in the independent existence of the categories, “good” and “evil,” nor do I believe that we ought to do some things and ought not to do others by virtue of some moral law that exists as a thing in itself, independent of what anyone merely thinks ought or ought not to be done. I consider the above to be simple facts. As such they don’t imply anything whatever about how we ought or ought not to behave.

Of course, many people disagree with me. Given what morality actually is, that is entirely predictable. It is basically a manifestation of innate behavioral predispositions in creatures with large brains. Those predispositions exist by virtue of natural selection. They enhanced the odds that we would survive and reproduce by spawning a powerful illusion that some behaviors are good and others evil, regardless of what anyone’s opinion about them happens to be. Belief in objective morality is just that: an illusion. It’s an interesting fact that many atheists, who imagine they’ve freed themselves of religious illusions, nevertheless embrace this illusion that good and evil exist as real things. I submit that, if what they believe is true, and there actually is an objective moral law, then it is entirely irrelevant.

Most atheists, including myself, consider evolution by natural selection to be the most plausible explanation for the existence of all the diverse forms of life on our planet. If that theory is true, then we exist because our ancestors were successful at carrying the packets of genes responsible for programming the development of their physical bodies from one generation to the next. Of course, these genes have undergone many changes over the eons, and yet they have existed in an unbroken chain for a period of over two billion years. Each of the physical bodies they spawned in the process only existed for an insignificant fraction of that time, and that will be true of each of us as well. Seen from that perspective, you might say that “we” are our genes, not our conscious minds. They have existed for an unimaginably long time, and are potentially immortal, whereas our conscious selves come and go in the blink of an eye by comparison.

This process that explains our existence has neither a purpose nor a goal. It does not reflect a design, because there is no designer, nor do we or anything about us have a “function,” because a function implies the existence of such a designer. We simply exist as a result of a natural process that would appear to be very improbable, and yet is possible given conditions that are just right on one of the trillions of planets in our vast universe.

Under the circumstances, we must decide for ourselves what goal or purpose we are to have in life. The universe certainly hasn’t assigned one to us, but life would be rather boring without one. This raises the question of what that goal or purpose should be. There is no right or correct choice, because the universe doesn’t care one way or the other. In making it we are completely on our own. I personally have made my goals in life my own survival and reproduction, and the preservation of biological life in general into the indefinite future. It seems to me these goals are in harmony with the reasons I exist to begin with. They are not better or worse than anyone else’s goals, for the simple reason that there is no basis for making that judgment. They are, however, my goals, and I will pursue and defend them accordingly.

Let’s assume for the sake of argument that there is an objective morality, and moral goods and evils exist as real things. Suppose someone were to point out to me that my goals in life are bad according to that objective moral standard. My reply would be, “So what?” No God or other conscious entity is out there, monitoring whether I conform to the moral law or not. The universe has no conscious mind, and so is incapable of punishing or rewarding my behavior. For the same reason it is also completely incapable of assigning that responsibility to others of my species. Any atheist who believes differently is not really an atheist at all, because a universe or some entity in the universe capable of assigning purpose is, for all practical purposes, a God.

Suppose some defender of the objective moral law were to claim that my personal goals were only achievable if I behaved in obedience to that law. In the first place, I would respond that it is remarkable indeed that the objective moral law just happens to be the exact way I should behave in order to achieve my personally assigned goals. In the second, I would take note of the fact that no reliable way has yet been discovered of detecting what the objective moral law actually is. A bewildering array of different moralities exist, and new ones are concocted every day, all claiming to be the “real” moral law. Under the circumstances, it seems to me that it would be much simpler for me to pursue my goals directly rather than trying to pick the “real” objective moral law from among the myriad versions on tap, in the hope that being “good” according to the version I choose will have the indirect effect of promoting my chosen goals.

In short, the question of whether there is an objective morality “out there” or not is a matter of complete indifference. If such an entity does exist, we have been singularly incompetent at detecting what it is, and, as far as the universe is concerned, it doesn’t matter whether we conform to it or not. The universe isn’t keeping score.

Academic Follies: Chasing the Mirage of Objective Morality

The human mind is beset by no more powerful illusion than the belief in objective morality: that good and evil exist as things, independent of how or what we imagine them to be. One of the more whimsical proofs of this is the obvious survival of the illusion in the minds of those who, to all appearances, realize that morality exists because it evolved, and even claim to believe that it is subjective. For example, our purported experts in the behavioral sciences are all afflicted by the mirage, as far as I know without exception, and regardless of what they happen to say about it.

Examples of the above anomaly are particularly easy to find in the case of the denizens of academia. They may pledge their allegiance to Darwin, but they belong to an ingroup that requires their actual allegiance to a moral code that is subject to change from day to day, but is de rigueur regardless. The synthesis of this clash of thesis and antithesis is what George Orwell referred to as “doublethink.” These worthies may claim that morality is subjective, but accept the “objective” moral law of their ingroup without question. We find them declaring that one type of behavior is morally abhorrent, and another kind is “good,” to all appearances blithely unaware that there is anything even remotely contradictory in their behavior.

If Darwin was right, and morality is subjective, then there can be no truly evil or truly good individuals, because no such categories exist. Just as there are no preferred inertial reference frames in an Einsteinian universe, there are no preferred moral reference frames in the moral universe. An individual can certainly say that one thing is good and another evil according to his personal moral reference frame, but he can never claim that one thing is absolutely good and another absolutely evil. In spite of that, academic “experts” make such claims all the time. Under the circumstances, if one of them says that this behavior is morally good, and that behavior is morally evil, it raises the question of why. Logically, the only possible answer must be that the one conforms to their personal moral reference frame, and the other violates it. In reply, one might point out that morality only exists because it happened to enhance the odds that the responsible genes would survive and reproduce, albeit in an environment radically different from the one we live in now. One might then ask, “How does the ‘bad’ thing in question diminish the chances that you will reproduce?”, or “How does the ‘good’ thing in question enhance the odds that you will survive?”

Of course, if one actually asked such questions, one would be met with looks of blank incomprehension. When it comes to morality, academics are just like everyone else. They behave the way they do because it feels good. They act that way because they are inclined by their emotions to act that way. They don’t presume to analyze their behavior any more deeply than that.

I recently read a book that is an excellent example of what I’ve written above. Entitled “A Natural History of Human Morality,” by Michael Tomasello, it claims to be about the evolution of human morality, which is described as “a uniquely human version of cooperation.” The book relentlessly emphasizes what the author imagines to be the “good” aspects of human moral behavior, and glosses over the “bad.” Improbable as it seems, there is nothing in the book to suggest that an evolved trait like morality might not promote the same outcome in the environment of today as it did 100,000 years ago. All that has been neatly taken care of by “gene-culture co-evolution.” We can look forward to a future where our innate altruism has won the day, and mankind lives happily ever after. It goes without saying that the prominent ingroup/outgroup aspect of our behavior is glossed over in spite of its rather too obvious manifestation, for example, in the bitter hatred and contempt of garden variety academics for Trump and all his supporters. Presumably, the future altruistic utopia must await the “liquidation of the Deplorables as a class,” to paraphrase Comrade Stalin.

One need only read the “Conclusion” of this brief book to dispel any doubt about the author’s firm faith in objective Good, existing somewhat incongruously in his mind with his equally firm but logically completely incompatible belief that morality is an evolved behavior. Ingroup/outgroup behavior is certainly mentioned, but is ascribed to such “objective evils” as colonialism:

In addition, there are many other conflicts between different ethnic groups that for various reasons (quite often involving outside influences, e.g., colonialism) have been forced to coexist under the same political umbrella. These are again instances of in-group/out-group conflicts, but again it is almost certain that those involved in them are doing many moral things with their compatriots on a daily basis. And despite all this, it is still the case that warlike conflicts, as well as many other types of violence, are historically on the wane. (Pinker, 2011).

Here one might ask the author what on earth he means by a “moral thing” if there is no such thing as objective Good. Is not loyalty to one’s group and defense of it against evil outsiders a “moral thing?” We learn that the equalist dogmas currently prevailing in academia also belong in the class of “objective Goods.” For example, according to the author,

A final criticism of too much rosiness is that we have posited a sense of equivalence among persons as foundational to human morality. Those who are used to thinking in terms of recorded human history will point out that it is only with the Enlightenment that social theorists in Western societies began promoting the idea of all individuals as in some sense equal, with equal rights. This is of course true in terms of explicit political thinking about the social contract after the rise of civil societies in the past ten thousand years. But the hunter-gatherer societies that existed for the immediately preceding period – for more than ten times that long – were by all indications highly egalitarian (Boehm, 1999).

Where to begin? In the first place, nature does not recognize any objective standard of “rosiness.” However, the author does not qualify the first sentence in the above quote by noting that he is only referring to his own personal moral standards when he claims that “equivalence among persons” is “rosy.” It is stated as an objective fact. Violence may or may not be declining in modern human societies, but no explanation is given for that trend one way or another in terms of evolved human behavioral traits as manifested in modern societies, and, again, there is no objective reason to claim that this development is “rosy” or “not rosy.” It is, of course, just another statement of one of the author’s personal subjective preferences stated as an “objective Good.” It is also one which can quickly become an anachronism with a push of the nuclear button. Nature doesn’t care in the least whether humans are violent or not. As far as equalist dogmas go, one is treading on thin ice with the claim that hunter-gatherer societies “were by all indications highly egalitarian.” They were only “highly egalitarian” according to safely orthodox academics whose evidence for making such claims is questionable, to put it mildly. As we saw, for example, in the case of Napoleon Chagnon, anyone who dares to question such “scientific findings” can expect to be subjected to furious attacks. The author apparently hasn’t noticed. Finally, we read,

No, it is a miracle that we are moral, and it did not have to be that way. It just so happens that, on the whole, those of us who made mostly moral decisions most of the time had more babies. And so, again, we should simply marvel and celebrate the fact that, mirabile dictu (and Nietzsche notwithstanding), morality appears to be somehow good for our species, our cultures, and ourselves – at least so far.

Is it really necessary for me to point out how and where the author refers to “good” as if it were an objective thing in this paragraph? When the author says “we are moral,” he means that we act in a way that is objectively good. He says we should all “marvel and celebrate the fact,” a statement that would be completely irrational if he were only stating a personal, subjective preference. What possible reason could the rest of us have for celebrating his interpretation of what his personal emotions are trying to tell him? Morality could not be unequivocally good for our species unless there were an unequivocal, that is, objective good. No such object exists.  As far as babies are concerned, there is today a demonstrable lack of them among the “good” in the author’s ingroup. I suggest he travel to Utah or Idaho, and note that the opposite is true of the Mormons, a different ingroup that is presumably “not so good” from his point of view.

I note in passing the fashion among modern academics of taking slaps at Nietzsche, a philosopher whom most of them don’t even begin to understand, who in fact can’t be understood outside of the context of his times, and who was anything but “amoral.” His sin was apparently disagreeing with them about what is “good.”

In short, the author is similar to every other modern academic intellectual I’m aware of in that, regardless of what he claims about the nature of morality, he behaves and speaks as if good and evil were objective things. Why is this important? Look around! The author and others like him have virtually complete control over the “moral landscape” as it exists in academia, social and legacy media, the entertainment industry, and among our current rulers. They present their personal moral prejudices as if there were some kind of objective authority and legitimacy behind them, when in fact there is none whatsoever. Based on this false assumption of authority, they are in the habit of denouncing and attacking anyone who disagrees with them. Do you like to be denounced and pushed around? Attacks on others based on a false assumption of moral authority are certainly irrational, but there is nothing objectively “bad” about them. I simply happen to have a personal aversion to them. That’s why I persist in pointing out the lack of legitimacy and authority for such attacks by those making them. Do you have an aversion to being pushed around as well? If so, I suggest you do the same.

Is Secular Humanism a Religion? Is Secular Humanist Morality Really Subjective?

John Staddon, a professor of psychology at Duke, recently published an article at Quillette entitled Is Secular Humanism a Religion? The question of whether secular humanism is a religion is, of course, a matter of how one defines religion. According to Staddon, religions are defined by three elements they possess in common:

  1. Belief in invisible or hidden beings, worlds, and processes – like God, heaven, miracles, reincarnation, and the soul.
  2. Potentially verifiable claims about the real world, such as Noah’s flood, the age of the earth, etc.
  3. Rules for action – prohibitions and requirements – a morality

Many of the commenters on the article leapt to the conclusion that he was answering the question in the affirmative – that secular humanism actually is a religion. In fact, that’s not the case. Staddon actually claims that secular humanism fits only one of the three elements, namely, the third. As he puts it, “In terms of moral rules, secular humanism is indistinguishable from a religion.” However, in his opinion, that’s a very important similarity, because the first two elements have “no bearing on action,” including the very significant matter of action on “legal matters.” That is actually the whole point of the article. Staddon doesn’t attempt to answer the question of whether secular humanism is a religion one way or the other. He limits himself to the claim that, as far as the only element of the three that has a significant bearing on action, including legal action, is concerned, secular humanists are no different from religious believers. He’s right.

A New York Intellectual’s Unwitting Expose; Human Nature Among the Ideologues

Norman Podhoretz is one of the New York literati who once belonged to a group of leftist intellectuals he called the Family. He wrote a series of books, including Making It, Breaking Ranks, and Ex-Friends, describing what happened when he underwent an ideological metamorphosis from leftist radical to neoconservative. In the process he created a wonderful anthropological study of human nature in the context of an ingroup defined by ideology. Behavior within that ingroup was similar to behavior within ingroups defined by race, class, religion, ethnicity, or any of the other often subtle differences that enable ingroups to distinguish themselves from the “others.” The only difference was that, in the case of Podhoretz’ Family, the ingroup was defined by loyalty to ideological dogmas. Podhoretz described a typical denizen as follows:

Such a person takes ideas as seriously as an orthodox religious person takes, or anyway used to take, doctrine or dogma. Though we cluck our enlightened modern tongues at such fanaticism, there is a reason why people have been excommunicated, and sometimes even put to death, by their fellow congregants for heretically disagreeing with the official understanding of a particular text or even of a single word. After all, to the true believer everything important – life in this world as well as life in the next – depends on obedience to these doctrines and dogmas, which in turn depends on an accurate interpretation of their meaning and which therefore makes the spread of heresy a threat of limitless proportions.

This fear and hatred of the heretic, together with the correlative passion to shut him up one way or the other, is (to say the least, and in doing so I am bending over backward) as much a character trait of so-called liberal intellectuals as it is of conservatives… For we have seen that “liberal” intellectuals who tell us that tolerance and pluralism are the highest values, who profess to believe that no culture is superior to any other, and who are on that account great supporters of “multiculturalism” will treat these very notions as sacred orthodoxies, will enforce agreement with them in every venue in which they have the power to do so (the universities being the prime example at the moment), and will severely punish any deviation that dares to make itself known.

Podhoretz may not have been aware of the genetic roots responsible for such behavior, but he was certainly good at describing it. His description of status seeking, virtue signaling, hatred of the outgroup, allergic reaction to heretics, etc., within the Family would be familiar to any student of urban street gangs. As anthropological studies go, his books have the added advantage of being unusually entertaining, if only by virtue of the fact that his ingroup included such lions of literature as Norman Mailer, Hannah Arendt, Mary McCarthy, Allen Ginsberg, and Lionel Trilling.

Podhoretz was editor of the influential cultural and intellectual magazine Commentary from 1960 to 1995. When he took over, the magazine already represented the anti-Communist Left. However, he originally planned to take a more radically leftist line, based on the philosophy of Paul Goodman, a utopian anarchist. In his Growing Up Absurd, Goodman claimed that American society was stuck with a number of “incomplete revolutions.” To escape this “absurdity” it was necessary to complete the revolutions. Podhoretz seized on Goodman’s ideas as the “radical” solution to our social ills he was seeking, and immediately started a three-part serialization of his book in Commentary. Another major influence on Podhoretz at the time was Life Against Death by Norman O. Brown, a late Freudian tract intended to reveal “the psychoanalytical meaning of history.” It is depressing to read these books today in the knowledge that they were once taken perfectly seriously by people who imagined themselves to be the cream of the intellectual crop. Goodman certainly chose the right adjective for them – “absurd.”

In any case, as the decade wore on, the Left did become more radicalized, but not in the way foreseen by Podhoretz. What was known then as the New Left emerged, and began its gradual takeover of the cultural institutions of the country, a process that has continued to this day. When he came of age, most leftists had abandoned the Stalinism or Trotskyism they had flirted with in the 30’s and 40’s, and become largely “pro-American” and anti-Communist as the magnitude of the slaughter and misery in the Soviet Union under Stalin became impossible to ignore. However, as the war in Vietnam intensified, the dogs returned to their vomit, so to speak. Leftists increasingly became useful idiots – effectively pro-Communist whether they admitted it or not. As Israel revealed its ability to effectively defend itself, they also became increasingly anti-Semitic, a development that likewise continues to this day. Then, as now, anti-Semitism was fobbed off as “anti-Zionism,” but Podhoretz, a Jew, as were many of the other members of the Family, was not buying it. He may have been crazy enough to take Goodman and Brown seriously, but he was not crazy enough to believe that it was preferable to live in a totalitarian Communist state than in the “imperialist” United States, nor, in light of the Holocaust, was he crazy enough to believe that the creation of a Jewish state was “unjust.” In the following passage he describes his response when he first began to notice this shift in the Zeitgeist, in this case on the part of an erstwhile “friend”:

I was not afraid of Jason. I never hesitated to cut him off when he began making outrageous statements about others, and once I even made a drunken public scene in a restaurant when he compared the United States to Nazi Germany and Lyndon Johnson to Hitler. This comparison was later to become a commonplace of radical talk, but I had never heard it made before, and it so infuriated me that I literally roared in response.

Today, of course, one no longer roars. One simply concludes that those who habitually resort to Hitler comparisons are imbeciles, and leaves it at that. In any case, Podhoretz began publishing “heretical” articles in Commentary, rejecting these notions, and nibbling away at the shibboleths that defined what had once been his ingroup in the process. In the end, he became a full-blown neoconservative. The behavioral responses to Podhoretz’s “treason” to his ingroup should be familiar to all students of human behavior. His first book-length challenge to his ingroup’s sense of its own purity and righteousness was Making It, published in 1967. As Podhoretz recalls,

In an article about Making It and its reception that was itself none too friendly to the book, Norman Mailer summed up the critical response as “brutal – coarse, intimate, snide, grasping, groping, slavering, slippery of reference, crude and naturally tasteless.” But, he added, “the public reception of Making It was nevertheless still on the side of charity if one compared the collective hooligan verdict to the earlier fulminations of the Inner Clan.” By the “Inner Clan,” Mailer meant the community of New York literary intellectuals I myself had called the Family. According to Mailer, what they had been saying in private about Making It even before it was published made the “horrors” of the public reception seem charitable and kind. “Just about everyone in the Establishment” – i.e., the Family – was “scandalized, shocked, livid, revolted, appalled, disheartened, and enraged.” They were “furious to the point of biting their white icy lips… No fate could prove undeserved for Norman, said the Family in thin quivering late-night hisses.”

Podhoretz notes that academia was the first of the cultural institutions of the country to succumb to the radical Gleichschaltung that has now established such firm control over virtually all the rest, to the point that it has become the new “normalcy.” In his words,

For by 1968 radicalism was so prevalent among college students that any professor who resisted it at the very least risked unpopularity and at the worst was in danger of outright abuse. Indeed it was in the universities that the “terror” first appeared and where it operated most effectively.

By the late 60’s the type of behavior that is now ubiquitous on university campuses was hardly a novelty. “De-platforming” was already part of the campus culture:

By 1968 SDS (the leftist Students for a Democratic Society) had moved from argument and example to shouting down speakers with whom it disagreed on the ground that only the “truth” had a right to be heard. And it also changed its position on violence… and a number of its members had gone beyond advocacy to actual practice in the form of bombings and other varieties of terrorism.

As Podhoretz documents, the War in Vietnam had originally been supported, and indeed started and continued by intellectuals and politicians on the left of the political spectrum. He noted that Robert Kennedy had been prominent among them:

Kennedy too then grew more and more radicalized as radicalism looked more and more like the winning side. Having been one of the architects of the war in Vietnam and a great believer in resistance to Communist power in general, he now managed to suggest that he opposed these policies both in the small and in the large.

However, in one of the rapid changes in party line familiar to those who’ve read the history of Communism in the Soviet Union and memorialized by George Orwell in 1984, the hawks suddenly became doves:

…a point was soon reached where speakers supporting the war were either refused a platform or shouted down when they attempted to speak. A speaker whose criticisms were insufficiently violent could even expect a hard time, as I myself discovered when a heckler at Wayne State in Detroit accused me, to the clear delight of the audience, of not being “that much” against the war because in expressing my opposition to the American role I had also expressed my usual reservations about the virtues of the Communist side.

Of course, there was no Internet in the 60’s, so “de-platforming” assumed a form commensurate with the technology available at the time. Podhoretz describes it as follows:

The word “terror,” like everything else about the sixties, was overheated. No one was arrested or imprisoned or executed; no one was even fired from a job (though there were undoubtedly some who lost out on job opportunities or on assignments or on advances from book publishers they might otherwise have had). The sanctions of this particular reign of “terror” were much milder: one’s reputation was besmirched, with unrestrained viciousness in conversation and, when the occasion arose, by means of innuendo in print. People were written off with the stroke of an epithet – “fink” or “racist” or “fascist” as the case might be – and anyone so written off would have difficulty getting a fair hearing for anything he might have to say. Conversely, anyone who went against the Movement party line soon discovered that the likely penalty was dismissal from the field of discussion.

Seeing others ruthlessly dismissed in this way was enough to prevent most people from voicing serious criticisms of the radical line and – such is the nature of intellectual cowardice – it was enough in some instances to prevent them even from allowing themselves to entertain critical thoughts.

The “terror” is more powerful and pervasive today than it ever was in the 60’s, and its ability to “dismiss from the field of discussion” is far more effective. As a result, denizens of the leftist ingroup or those who depend on them for their livelihood tend to be very cautious about rocking the boat. That’s why young, pre-tenure professors include ritualistic denunciations of the established heretics in their fields before they dare to even give a slight nudge to the approved dogmas. Indeed, I’ve documented similar behavior by academics approaching retirement on this blog, so much do they fear ostracism by their own “Families.” Podhoretz noticed the same behavior early on by one of his erstwhile friends:

As the bad boy of American letters – itself an honorific status in the climate of the sixties – he (Norman Mailer) still held a license to provoke and he rarely hesitated to use it, even if it sometimes meant making a fool of himself in the eyes of his own admirers. But there were limits he instinctively knew how to observe; and he observed them. He might excoriate his fellow radicals on a particular point; he might discomfit them with unexpected sympathies (for right-wing politicians, say, or National Guardsmen on the other side of a demonstration) and equally surprising antipathies (homosexuality and masturbation, for example, he insisted on stigmatizing as vices); he might even on occasion describe himself as (dread word) a conservative. But always in the end came the reassuring gesture, the wink of complicity, the subtle signing of the radical loyalty oath.

So much for Podhoretz’s description of the behavioral traits of the denizens of an ideologically defined ingroup. I highly recommend all three of the books noted above, not only as unwitting but wonderfully accurate studies of “human nature,” but as very entertaining descriptions of some of the many famous personalities Podhoretz crossed paths with during his long career. One of them was Jackie Kennedy, who happened to show up at his door one day in the company of his friend, Richard Goodwin, “who had worked in various capacities for President Kennedy.”

She and I had never met before, but we seemed to strike an instant rapport, and at her initiative I soon began seeing her on a fairly regular basis. We often had tea alone together in her apartment on Fifth Avenue where I would give her the lowdown on the literary world and the New York intellectual community – who was good, who was overrated, who was amusing, who was really brilliant – and she would reciprocate with the dirt about Washington society. She was not in Mary McCarthy‘s league as a bitchy gossip (who was?), but she did very well in her own seemingly soft style. I enjoyed these exchanges, and she (an extremely good listener) seemed to get a kick out of them too.

Elsewhere Podhoretz describes McCarthy as “our leading bitch intellectual.” Alas, she was an unrepentant radical, too, and even did a Jane Fonda in North Vietnam, but I still consider her one of our most brilliant novelists. I guess there’s no accounting for taste when it comes to ingroups.

Robert Plomin’s “Blueprint”: The Reply of the Walking Dead

The significance of Robert Plomin’s Blueprint is not that every word therein is infallible. Some reviewers have questioned his assertions about the relative insignificance of the role that parents, schools, culture, and other environmental factors play in the outcome of our lives, and it seems to me the jury is still out on many of these issues. See, for example, the thoughtful review by Razib Khan in the National Review. What is significant about it is Plomin’s description of new and genuinely revolutionary experimental tools of rapidly increasing power and scope that have enabled us to confirm beyond any reasonable doubt that our DNA has a very significant influence on human behavior. In other words, there is such a thing as “human nature,” and it is important. This truth might seem obvious today. It is also a fact, however, that this truth was successfully suppressed and denied for over half a century by the vast majority of the “scientists” who claimed to be experts on human behavior.

There is no guarantee that such scientific debacles are a thing of the past. Ideologues devoted to the quasi-religious faith that the truth must take a back seat to their equalist ideals are just as prevalent now as they were during the heyday of the Blank Slate. Indeed, they are at least as powerful now as they were then, and they would like nothing better than to breathe new life into the flat earth dogmas they once foisted on the behavioral sciences. Consider, for example, a review of Blueprint by Nathaniel Comfort entitled “Genetic determinism rides again,” that appeared in the prestigious journal Nature. The first paragraph reads as follows:

It’s never a good time for another bout of genetic determinism, but it’s hard to imagine a worse one than this. Social inequality gapes, exacerbated by climate change, driving hostility towards immigrants and flares of militant racism. At such a juncture, yet another expression of the discredited, simplistic idea that genes alone control human nature seems particularly insidious.

Can anyone with an ounce of common sense, not to mention the editors of a journal that purports to speak for “science,” read such a passage and conclude that the author will continue with a dispassionate review of the merits of the factual claims made in a popular science book? One wonders what on earth they were thinking. Apparently Gleichschaltung is sufficiently advanced at Nature that the editors have lost all sense of shame. Consider, for example, the hoary “genetic determinism” canard. A “genetic determinist” is a strawman invented more than 50 years ago by the Blank Slaters of old. These imaginary beings were supposed to believe that our behavior is rigidly programmed by “instincts.” I’ve searched diligently during the ensuing years, but have never turned up a genuine example of one of these unicorns. They are as mythical as witches, but the Blank Slaters never tire of repeating their hackneyed propaganda lines. It would be hard to “discredit” the “simplistic idea that genes alone control human nature” by virtue of the fact that no one ever made such a preposterous claim to begin with, Plomin least of all. Beyond that, what could possibly be the point of dragging in all the familiar dogmas of the “progressive” tribe? Apparently Nature would have us believe that scientific “truth” is to be determined by ideological litmus tests.

In the next paragraph Comfort supplies Plomin, a professor of behavior genetics, with the title “educational psychologist,” and sulks that his emphasis on chromosomal DNA leaves microbiologists, epigeneticists, RNA experts, and developmental biologists out in the cold. Seriously? Since when did these fields manage to hermetically seal themselves off from DNA and become “non-overlapping magisteria?” Do any microbiologists, epigeneticists, RNA experts or developmental biologists actually exist who consider DNA irrelevant to their field?

Comfort next makes the logically questionable claim that, because “Darwinism begat eugenics”, “Mendelism begat worse eugenics,” and medical genetics begat the claim that men with an XYY genotype were violent, behavioral genetics must therefore “beget” progeny that are just as bad. QED.

Genome-wide association (GWA) methods, the increasingly powerful tool described in Blueprint that has now put the finishing touches on the debunking of the Blank Slate, are dismissed as something that “lures scientists” because of the “promise of genetic explanations for complex traits, such as voting behavior or investment strategies.” How Comfort distills this “promise” out of anything that actually appears in the book is beyond me. One wonders if he ever actually read it. That suspicion is greatly strengthened when one reads the following paragraph:

A polygenic score is a correlation coefficient. A GWAS identifies single nucleotide polymorphisms (SNPs) in the DNA that correlate with the trait of interest. The SNPs are markers only. Although they might, in some cases, suggest genomic neighborhoods in which to search for genes that directly affect the trait, the polygenic score itself is in no sense causal. Plomin understands this and says so repeatedly in the book – yet contradicts himself several times by arguing that the scores are in fact, causal.

You have to hand it to Comfort, he can stuff a huge amount of disinformation into a small package. In the first place, the second and third sentences contradict each other. If SNPs are variations in the rungs of DNA that occur between individuals, they are not just markers, and they don’t just “suggest genomic neighborhoods in which to search for genes that directly affect the trait.” If they are reliable and replicable GWA hits, they are one of the actual points at which the trait is affected. Plomin most definitely does not “understand” that polygenic scores are in no sense causal, and nowhere does he say anything of the sort, far less “repeatedly.” What he does say is:

In contrast, correlations between a polygenic score and a trait can only be interpreted causally in one direction – from the polygenic score to the trait. For example, we have shown that the educational attainment polygenic score correlates with children’s reading ability. The correlation means that the inherited DNA differences captured by the polygenic score cause differences between children in their school achievement, in the sense that nothing in our brains, behavior, or environment can change inherited differences in DNA sequence.
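For readers unfamiliar with the terminology being argued over here, the arithmetic behind a polygenic score is simple: it is a weighted sum of a person’s allele counts at trait-associated SNPs, with the weights taken from GWAS effect-size estimates. The sketch below is purely illustrative – the SNP identifiers and weights are invented for the example and have nothing to do with any real GWAS:

```python
# Illustrative sketch of a polygenic score computation.
# The SNP ids and effect-size weights below are made up for this example.

gwas_weights = {"rs0001": 0.8, "rs0002": -0.3, "rs0003": 0.5}

def polygenic_score(genotype):
    """Weighted sum of allele counts; genotype maps SNP id -> 0, 1, or 2."""
    return sum(gwas_weights[snp] * count for snp, count in genotype.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person))  # 0.8*2 + (-0.3)*1 + 0.5*0 = 1.3
```

In a real study, the score computed this way for each individual is then correlated with the measured trait across the sample – which is why the score is built from correlational GWAS hits even though the score itself is just a sum.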

I would be very interested to hear what Comfort finds “illogical” about that passage, and by virtue of what magical mental prestidigitations he proposes to demonstrate that the score is a “mere correlation.” Elsewhere we read,

Hereditarian books such as Charles Murray and Richard Herrnstein’s The Bell Curve (1994) and Nicholas Wade’s 2014 A Troublesome Inheritance (see N. Comfort Nature 513, 306–307; 2014) exploited their respective scientific and cultural moments, leveraging the cultural authority of science to advance a discredited, undemocratic agenda. Although Blueprint is cut from different ideological cloth, the consequences could be just as grave.

In fact, neither The Bell Curve nor A Troublesome Inheritance has ever been discredited, if by that term is meant being proved factually wrong. If books are “discredited” by how many ideological zealots begin foaming at the mouth on reading them, of course, it’s a different matter. Beyond that, if something is true, it does not become false by virtue of Comfort deeming it “undemocratic.” I could go on, but what’s the point? Suffice it to say that Comfort’s favorite “scientific authority” is Richard Lewontin, an obscurantist high priest of the Blank Slate if ever there was one, and author of Not in Our Genes.

I can understand the desire of Nature’s editors to virtue-signal their loyalty to the prevailing politically correct fashions, but this “review” is truly abject. It isn’t that hard to find authors on the left of the political spectrum who can write a book review that is at least a notch above the level of tendentious ideological propaganda. See, for example, Kathryn Paige Harden’s review of Blueprint in the Spectator. Somehow she managed to write it without implying that Plomin is a Nazi in every second sentence.  I suggest that next time they look a little harder.

My initial post about Blueprint tended to emphasize the historical ramifications of the book in the context of the Blank Slate disaster. As a result, my description of the scientific substance of the book was very broad brush. However, there are many good reviews out there that cover that ground, expressing some of my own reservations about Plomin’s conclusions about the importance of environment in the process. See, for example, the excellent review by Razib Khan in the National Review linked above. As I mentioned in my earlier post, the book itself is only 188 pages long, so, by all means, read it.

Morality Whimsy: What the Philosophers “Learned” from Darwin

When he published The Descent of Man, Charles Darwin practically spoon fed the rest of us the truth about human morality. He explained that it was as much a result of evolution by natural selection as any of our more obvious physical features. Similar versions of the heritable mental traits responsible for its existence are also present in other animals. The only difference between us and them is our ability to contemplate what we experience as a result of those traits with our large brains, and communicate our thoughts to others. As the result of a natural process, morality is not fixed, and could potentially be entirely different in other animals that might eventually happen to acquire levels of intelligence close to our own. In other words, it is a purely subjective phenomenon that does not “track” some imaginary “true” version of objective moral law. As a natural phenomenon, there is no reason to expect that it is striving towards some imaginary goal, such as human perfection or ideal virtue. It’s hard to imagine how Darwin could have expressed these facts in simpler or more straightforward terms.

If Darwin’s claim that morality is derived from heritable mental traits that exist by virtue of natural selection is right, it follows that it is not a perfectly malleable manifestation of environment or culture. Human beings cannot be programmed by learning or environment to adopt completely arbitrary versions of morality. It also follows that humans will perceive moral rules as absolutes. Furthermore, human beings are social animals. If morality exists by virtue of evolved mental traits, it follows that it enhances the probability of the survival and reproduction of the responsible genes in a group environment. It would hardly be effective in doing so if it predisposed us to believe that certain of our behaviors are “good” and others “evil” merely as individuals, but that no such rules or categories apply to the behavior of others. In that case altruism would certainly be a losing strategy in the struggle for survival. However, altruism exists. It follows that we must perceive the moral “rules” not only as absolute, and not only as applying to ourselves, but to everyone else as well. In short, belief in objective morality is an entirely predictable illusion, but an illusion regardless. If it were not an illusion, Darwin’s comment that completely different versions of morality could evolve for different intelligent species would necessarily be false. Whatever else one thinks of objective morality, it is certainly un-Darwinian.

In the years that followed, Darwin’s great theory spawned a host of different versions of “evolutionary morality.” One cannot but experience a sinking feeling in reading through them. Not a single one of the authors had a clue what Darwin was talking about. As far as I can tell, every single one of the systems of “evolutionary morality” concocted in the 19th century was based on the assumption of objective moral law. Evolution was merely the “natural” process of mankind’s progress towards the “goal” of compliance with this objective law, and the outcome of this “natural” process would be (of course) human moral perfection, in harmony with assorted versions of “true” morality. In other words, the power of the illusion asserted itself with a vengeance. “Man the wise” proved incapable of putting two and two together. Instead we clung to the old, familiar mirage that good and evil exist as objective things, just as our minds have always portrayed them to us.

One can confirm the above by reviewing some representative samples of the early versions of evolutionary morality. Many of them were described by Charles Mallory Williams in his A Review of the Systems of Ethics Founded on the Theory of Evolution, published in 1893. By that time such systems were hardly a novelty. As Williams put it,

Now every year and almost every month brings with it a fresh supply of books, pamphlets and magazine articles on The Evolution of Morality. So many are the waters which now pour themselves into this common stream that the current threatens soon to become too deep and swift for any but the most expert swimmers.

Noting that it was already impossible to do justice to all the theories in a single book, Williams limited himself to reviewing the systems proposed by the most prominent authors in the field. These included Ernst Haeckel, who suggested substituting a “nature religion” based on evolution for the old “church religions.” According to Haeckel,

The greatest rudeness and barbarity of custom often goes hand in hand with the absolute dominion of an all-powerful church; in confirmation of which assertion one need only remember the Middle Ages. On the other hand, we behold the highest standard of perfection attained by men who have severed connection with every creed. Independent of every confession of faith, there lives in the breast of every human being the germ of a pure nature religion; this is indissolubly bound up with the noblest sides of human life. Its highest commandment is love, the restraint of our natural egoism for the benefit of our fellow-men, and for the good of human society, whose members we are.

The very un-Darwinian assumptions that evolution had resulted in a moral sense that was in tune with some version of ideal goodness, referred to by Haeckel as “a pure nature religion,” and that this moral sense existed to serve “the good of human society,” or the good of the species, are characteristic of all the early versions of “evolutionary morality.” For example, from the system proposed by Herbert Spencer,

From the fundamental laws of life and the conditions of social existence are inducible certain imperative limitations to individual action – limitations which are essential to a perfect life, individual and social, or in other words essential to the greatest possible happiness. And these limitations following inevitably as they do from undeniable first principles deep as the nature of life itself constitute what we may distinguish as absolute morality… In the ideal state towards which evolution tends, any falling short of function implies deviation from perfectly moral conduct.

Spencer’s friend, John Fiske, imagined that Darwin, “properly understood,” pointed in a similar direction:

Man is slowly passing from a primitive social state, in which he was little better than a brute, toward an ultimate social state in which his character shall have become so transformed that nothing of the brute can be detected in it. The “original sin” of theology is the brute inheritance, which is being gradually eliminated; and the message of Christianity: “Blessed are the meek for they shall inherit the earth” will be realized in the state of universal peace towards which mankind is tending. Strife and Sorrow shall disappear. Peace and Love shall reign supreme. The goal of evolution is the perfecting of man, whereby we see, more than ever, that he is the chief object of divine care, the fruition of that creative energy which is manifested throughout the knowable universe.

Another Englishman, Alfred Barratt, proposed an even more confused version of “Darwinian morality:”

The Moral Sense therefore is merely one of the emotions, though the last of all in the order of evolution. It can only claim a life of some two or three centuries, (!) and there are even some who still doubt its existence. Man, at any rate, is the only animal who possesses it in its latest development, for even in horses and dogs we cannot believe that it has passed the intentional or conscious stage. Good with them has no artificial meaning; it is simply identical with the greatest pleasure. Only by complete and perfect obedience to all emotions can perfect freedom from regret be obtained in the gratification of all desire. Man is at present passion’s slave because he is so only in part, for the cause of repentance is never the attainment of some pleasure, but always the non-attainment of more; not the satisfaction of one desire, but the inability to satisfy all. The highest virtue, therefore, consists in being led not by one desire but by all in the complete organization of the Moral Nature.

According to the abstruse version of “Darwinism” proposed by Austrian philosopher Bartolomäus von Carneri, evolution had a “goal.” Happily, it was “the perfection of man.”

When we do away with all concessions to one sided extravagant desires, abstain from placing mind above the universal law of causality, and are content with the facts made known to us by science, we perceive that the absolute True, Beautiful, and Good bears the character of the Universal. In this universal character it has always finally found expression in human life and in this character it will always find expression… There is no absolute Evil in contrast to the absolute Good. Evil is negative. The perfection of man is identical with the attainment of absolute Good through evolution.

So much for “evolutionary morality” in the 19th century.  None of these philosophers had a clue that they were spouting nonsense that flew in the face of what Darwin had actually said about morality.  None of them so much as stopped to think that there is no path from a natural process such as evolution by natural selection to objective “oughts.”  They could not free themselves of the powerful illusion that good and evil are real things. It took a critic of Darwin who rejected the idea that evolution had anything to do with morality to see the blatant fallacies at the bottom of all these systems of “evolutionary morality.” Such a man was Jacob Gould Schurman, who took occasion to point out some of the gaping holes in all these fine theories in his The Ethical Import of Darwinism, published in 1888. The diehard Schurman commented bitterly that,

It is a historical fact that no one nowadays seems to doubt the validity of the general theory of evolution. However, the same cannot be said of natural selection.

He cited several prominent contemporary scientists, including Alfred Russel Wallace, who rejected Darwin’s theory either in whole or in part. Noting that “Darwin is certainly the father of evolutionary ethics,” Schurman then continued with a scathing attack on the whole idea, pointing out gaping holes in the above theories of “evolutionary morality” that are just as applicable to the tantrums of modern SJWs. For example,

It is worse than idle for mechanical evolutionists to talk of the reason or end or ground of morality.

The mental and moral faculties are both reduced to the rank of natural phenomena.

The absolute ought cannot be the product of (evolution).

Will not evolution, then, as thus interpreted, work revolution in our views of the moral nature of man, since it implies that morality is not grounded in the nature of things, but something purely relative to man’s circumstances; a happy device whereby man’s ancestors managed to cohere in a united society, and so kill out rival and disunited groups.

Exactly! If Darwin was right, then the claims of any system of “evolutionary morality” to represent objective moral truths must be dismissed as absurd. It is impossible for objective Good and Evil to be “grounded in the nature of things” if morality is the outcome of a random natural process. Indeed, it is not out of the question that intelligent life may already have evolved on other planets by a process similar to the one that occurred on earth, resulting in entirely different versions of good and evil.  It is a tribute to the power of the illusions that our evolved “moral sense” spawns in our brains that it is only obvious to those who disagree with our preferred version of “moral truth” that we are delusional.

Today we suffer from an infestation of secular “Social Justice Warriors,” who are in the habit of delivering themselves of bombastic moral pronunciamientos, and become furious when the rest of us pay no attention to them. Only Christians and other theists appear capable of noticing that they lack any basis for the legitimacy of their moral claims. In fact, they are behaving just as Darwin would have predicted, blindly responding to innate moral emotions, oblivious to the fact that the consequences of doing so today are highly unlikely to be the same as those that applied in the radically different world in which those emotions evolved. Just as the Darwin critic Schurman immediately recognized that the evolutionary moralists’ fantastic notion that they had discovered a philosopher’s stone to prop up their “absolute ought” was absurd, today’s theists can immediately see that the fine “objective truths” in which secular humanists imagine they’ve arrayed their moralistic emperor are purely figments of their imaginations.  Their emperor is naked.

As far as “evolutionary morality” is concerned, little has changed since the 19th century.  “Evolutionary moralists” flourish even more luxuriantly now than they did then.  Some of them even deny the existence of objective moral truths.  None that I am aware of are to be taken seriously when they make that claim.  In nearly the same breath in which they announce their belief in subjective morality, they will launch into a morally drenched rant against conservatives, or populists, or nationalists, or capitalists, or whoever else has the honor of belonging to their outgroup.  They do this without the least explanation, as if there were nothing at all contradictory about it.  They announce that there are no moral truths, and then proceed to furiously defend whatever flavor of moral truth they happen to prefer. Nothing could be further from their minds than explaining just how they imagine the particular “moral truths” they endorse will enhance the odds that the responsible genes they happen to carry will survive and reproduce. Only the great Edvard Westermarck popped for a brief moment out of the prevailing fog and followed the teachings of Darwin to their logical conclusion.  He was quickly forgotten.

Why is all this important?  I can only answer that question from a personal point of view.  It may not be important to some people.  That said, it is important to me because I find it expedient to know and base my actions and decisions on the truth.  I can’t say with absolute certainty whether anything is true or not, so I settle for what I consider probably true, and I deem it highly probable that there is no such thing as objective moral truth.

Some have argued that acknowledging this particular truth will harm society, because it will lead to moral relativism and moral chaos.  Human history in general, and the historical facts I have cited above in particular, demonstrate that this conclusion is false.  In view of what Darwin wrote about morality, it would seem perfectly clear and perfectly obvious that no system of objective morality can be based on his theory of evolution by natural selection.  This was abundantly clear to many of his opponents.  It remains obvious to the theists who reject his theory today.  However, almost to a man, those who considered themselves “Darwinians” and proposed systems of morality supposedly based on his theory concluded that there are objective moral truths, and that it is the “goal” of evolution to realize these truths! I can think of no rational explanation for this fact other than the existence of a powerful, innate human predisposition to perceive moral rules as independent, objective facts.  The power of this common illusion is demonstrated by the fact that highly intelligent “Darwinian” moral philosophers could not wean themselves from it even after Darwin had, for all practical purposes, told them point blank that they were fooling themselves.  In short, our species faces no danger from moral relativism.  The opposite is true. We are moral absolutists by nature, and will continue to be moral absolutists regardless of the scribblings of philosophers.  The real danger we face is our tendency to blindly follow the promptings of our “moral sense” in an environment that is radically different from the one in which that moral sense evolved.

Demonstrating the truth of the above couldn’t be simpler. Just gather up as many evolutionary moralists, postmodernists, and self-proclaimed believers in subjective morality as you please. Then take a close look at what they’ve actually written.  You’ll quickly find that every single one of them has made and continues to make morally loaded pronouncements that make no sense whatever absent the implicit assumption that there are objective moral truths.  They will announce that someone in their outgroup is immoral, or that we “ought” to do something, not merely as a matter of utility, but because it is the “right” thing to do, or that we have a “duty” to do something and refrain from doing something else.  They will proclaim their desire for “moral progress” or “human flourishing” without feeling in the least embarrassed by their failure to explain how “moral progress” or “human flourishing” will promote the survival of the genes that are the ultimate reason they find these nebulous utopias so attractive to begin with.

I, too, am human, and tend to wander off into such irrationalities myself sometimes.  However, if challenged, I will at least admit that I am merely expressing whims spawned by my own “moral sense,” and that I know of no legitimate basis whatever for claiming that my whims have some magical power to dictate to others what they ought or ought not to do.

We are not threatened by moral relativism.  We are threatened by the pervasive illusion that the objects we refer to as good and evil are real, and that we and the members of our ingroup have a monopoly on the knowledge of what these imaginary objects look like.  We cannot free ourselves of this illusion.  We are moral absolutists by nature.  Under the circumstances, it might behoove us to construct an “absolute morality” that is as benign, useful, and unobtrusive as possible.  If nothing else, it would pull the rug out from under the feet of the pious bullies and self-appointed moral dictators that I personally find an insufferable blight on modern society.  With luck, it might even encourage some of our benighted fellow creatures, who are now rushing down “morally pure” paths to extinction, to think twice about the wisdom of what they are doing, or at least to refrain from insisting that the rest of us accompany them on the journey.

Why the Blank Slate? Let Max Eastman Explain

In my opinion, science, broadly construed, is the best “way of knowing” we have.  However, it is not infallible, is never “settled,” cannot “say” anything, and can be perverted and corrupted for any number of reasons.  The Blank Slate affair was probably the worst instance of the latter in history.  It involved the complete disruption of the behavioral sciences for a period of more than half a century in order to prop up the absurd lie that there is no such thing as human nature.  Its grip on the behavioral sciences hasn’t been completely broken to this day.  It’s stunning when you think about it.  Whole branches of the sciences were derailed to support a claim that must seem ludicrous to any reasonably intelligent child.  Why?  How could such a thing have happened?  At least part of the answer was supplied by Max Eastman in an article that appeared in the June 1941 issue of The Reader’s Digest.  It was entitled, Socialism Doesn’t Jibe with Human Nature.

Who was Max Eastman?  Well, he was quite a notable socialist himself in his younger days.  He edited a radical magazine called The Masses from 1913 until it was suppressed in 1918 for its antiwar content.  In 1922 he traveled to the Soviet Union, and stayed to witness the reality of Communism for nearly two years, becoming friends with a number of Bolshevik worthies, including Trotsky.  Evidently he saw some things that weren’t quite as ideal as he had imagined.  He became increasingly critical of the Stalin regime, and eventually of socialism itself.  In 1941 he became a roving editor for the anti-Communist Reader’s Digest, and the above article appeared shortly thereafter.

In it, Eastman reviewed the history of socialism from its modest beginnings in Robert Owen’s utopian village of New Harmony through a host of similar abortive experiments to the teachings of Karl Marx, and finally to the realization of Marx’s dream in the greatest experiment of them all; the Bolshevik state in Russia.  He noted that all the earlier experiments had failed miserably but, in his words, “The results were not better than Robert Owen’s but a million times worse.”  The outcome of Lenin’s great experiment was,

Officialdom gone mad, officialdom erected into a new and merciless exploiting class which literally wages war on its own people; the “slavery, horrors, savagery, absurdities and infamies of capitalist exploitation” so far outdone that men look back to them as to a picnic on a holiday; bureaucrats everywhere, and behind the bureaucrats the GPU; death for those who dare protest; death for theft – even of a piece of candy; and this sadistic penalty extended by a special law to children twelve years old!  People who still insist that this is a New Harmony are for the most part dolts or mental cowards.  To honest men with courage to face facts it is clear that Lenin’s experiment, like Robert Owen’s, failed.

It would seem the world produced a great many dolts and mental cowards in the years leading up to 1941.  In the 30’s Communism was all the rage among intellectuals, not only in the United States but worldwide.  As Malcolm Muggeridge put it in his book, The Thirties, at the beginning of the decade it was rare to find a university professor who was a Marxist, but at the end of the decade it was rare to find one who wasn’t.  If you won’t take Muggeridge’s word for it, just look at the articles in U.S. intellectual journals such as The Nation, The New Republic, and the American Mercury during, say, the year 1934.  Many of them may be found online.  These were all very influential magazines in the 30’s, and at times during the decade they all took the line that capitalism was dead, and it was now merely a question of finding a suitable flavor of socialism to replace it.  If you prefer reality portrayed in fiction, read the guileless accounts of the pervasiveness of Communism among the intellectual elites of the 1930’s in the superb novels of Mary McCarthy, herself a leftist radical.

Eastman was too intelligent to swallow the “common sense” socialist remedies of the newsstand journals.  He had witnessed the reality of Communism firsthand, and had followed its descent into the hellish bloodbath of the Stalinist purges and mass murder by torture and starvation in the Gulag system.  He knew that socialism had failed everywhere else it had been tried as well.  He also knew the reason why.  Allow me to quote him at length:

Why did the monumental efforts of these three great men (Owen, Marx and Lenin, ed.) and tens of millions of their followers, consecrated to the cause of human happiness – why did they so miserably fail? They failed because they had no science of human nature, and no place in their science for the common sense knowledge of it.

In October 1917, after the news came that Kerensky’s government had fallen, Lenin, who had been in hiding, appeared at a meeting of the Workers and Soldiers’ Soviet of Petrograd.  He mounted the rostrum and, when the long wild happy shouts of greeting had died down, remarked: “We will now proceed to the construction of a socialist society.” He said this as simply as though he were proposing to put up a new cowbarn.  But in all his life he had never asked himself the equally simple question: “How is this newfangled contraption going to fit in with the instinctive tendencies of the animals it was made for?”

Lenin actually knew less about the science of man, after a hundred years, than Robert Owen did.  Owen had described human nature, fairly well for an amateur, as “a compound of animal propensities, intellectual faculties and moral qualities.”  He had written into the preamble of the constitution of New Harmony that “man’s character… is the result of his formation, his location, and of the circumstances within which he exists.”

It seems incredible, but Karl Marx, with all his talk about making socialism “scientific,” took a step back from this elementary notion. He dropped out the factor of man’s hereditary nature altogether.  He dropped out man altogether, so far as he might present an obstacle to social change.  “The individual,” he said, “has no real existence outside the milieu in which he lives.” By which he meant: Change the milieu, change the social relations, and man will change as much as you like.  That is all Marx ever said on the primary question.  And Lenin said nothing.

That is why they failed.  They were amateurs – and worse than amateurs, mystics – in the subject most essential to their success.

To begin with, man is the most plastic and adaptable of animals.  He truly can be changed by his environment, and even by himself, to a unique degree, and that makes extreme ideas of progress reasonable.  On the other hand, he inherits a set of emotional impulses or instincts which, although they can be trained in various ways in the individual, cannot be eradicated from the race.  And no matter how much they may be repressed or redirected by training, they reappear in the original form – as sure as a hedgehog puts out spines – in every baby that is born.

Amazing, considering these words were written in 1941.  Eastman had a naïve faith that science would remedy the situation, and that, as our knowledge of human behavior advanced, mankind would see the truth.  In fact, by 1941, those who didn’t want to hear the inconvenient truth that the various versions of paradise on earth they were busily concocting for the rest of us were foredoomed to failure already had the behavioral sciences well in hand.  They made sure that “science said” what they wanted it to say.  The result was the Blank Slate, a scientific debacle that brought humanity’s efforts to gain self-understanding to a screeching halt for more than half a century, and one that continues to haunt us even now.  Their agenda was simple – if human nature stood in the way of heaven on earth, abolish human nature!  And that’s precisely what they did.  It wasn’t the first time that ideological myths have trumped the truth, and it certainly won’t be the last, but the Blank Slate may well go down in history as the deadliest myth of all.

I note in passing that the Blank Slate was the child of the “progressive Left,” the same people who today preen themselves on their great respect for “science.”  In fact, all the flat earthers, space alien conspiracy nuts, and anti-Darwin religious fanatics combined have never pulled off anything as damaging to the advance of scientific knowledge as the Blank Slate debacle.  It’s worth keeping in mind the next time someone tries to regale you with fairy tales about what “science says.”

Morality and the Floundering Philosophers

In my last post I noted the similarities between belief in objective morality, or the existence of “moral truths,” and traditional religious beliefs. Both posit the existence of things without evidence, with no account of what these things are made of (assuming that they are not things that are made of nothing), and with no plausible explanation of how these things themselves came into existence or why their existence is necessary. In both cases one can cite many reasons why the believers in these nonexistent things want to believe in them. In both cases, for example, the livelihood of myriads of “experts” depends on maintaining the charade. Philosophers are no different from priests and theologians in this respect, but their problem is even bigger. If Darwin gave the theologians a cold, he gave the philosophers pneumonia. Not long after he published his great theory it became clear, not only to him, but to thousands of others, that morality exists because the behavioral traits which give rise to it evolved. The Finnish philosopher Edvard Westermarck formalized these rather obvious conclusions in his The Origin and Development of the Moral Ideas (1906) and Ethical Relativity (1932). At that point, belief in the imaginary entities known as “moral truths” became entirely superfluous. Philosophers have been floundering behind their curtains ever since, trying desperately to maintain the illusion.

An excellent example of the futility of their efforts may be found online in the Stanford Encyclopedia of Philosophy in an entry entitled Morality and Evolutionary Biology. The most recent version was published in 2014.  It’s rather long, but to better understand what follows it would be best if you endured the pain of wading through it.  However, in a nutshell, it seeks to demonstrate that, even if there is some connection between evolution and morality, it’s no challenge to the existence of “moral truths,” which we are to believe can be detected by well-trained philosophers via “reason” and “intuition.”  Quaintly enough, the earliest source given for a biological explanation of morality is E. O. Wilson.  Apparently the Blank Slate catastrophe is as much a bugaboo for philosophers as for scientists.  Evidently it’s too indelicate for either of them to mention that the behavioral sciences were completely derailed for upwards of 50 years by an ideologically driven orthodoxy.  In fact, a great many highly intelligent scientists and philosophers wrote a great deal more than Wilson about the connection between biology and morality before they were silenced by the high priests of the Blank Slate.  Even during the Blank Slate men like Sir Arthur Keith had important things to say about the biological roots of morality.  Robert Ardrey, by far the single most influential individual in smashing the Blank Slate hegemony, addressed the subject at length long before Wilson, as did thinkers like Konrad Lorenz and Niko Tinbergen.  Perhaps if its authors expect to be taken seriously, this “Encyclopedia” should at least set the historical record straight.

It’s already evident in the Overview section that the author will be running with some dubious assumptions.  For example, he speaks of “morality understood as a set of empirical phenomena to be explained,” and the “very different sets of questions and projects pursued by philosophers when they inquire into the nature and source of morality,” as if they were examples of the non-overlapping magisteria once invoked by Stephen Jay Gould. In fact, if one “understands the empirical phenomena” of morality, then the problem of the “nature and source of morality” is hardly “non-overlapping.”  Indeed, it solves itself.  The suggestion that they are non-overlapping depends on the assumption that “moral truth” exists in a realm of its own.  A bit later the author confirms he is making that assumption as follows:

Moral philosophers tend to focus on questions about the justification of moral claims, the existence and grounds of moral truths, and what morality requires of us.  These are very different from the empirical questions pursued by the sciences, but how we answer each set of questions may have implications for how we should answer the other.

He allows that philosophy and the sciences must inform each other on these “distinct” issues.  In fact, neither philosophy nor the sciences can have anything useful to say about these questions, other than to point out that they relate to imaginary things.  “Objects” in the guise of “justification of moral claims,” “grounds of moral truths,” and the “requirements of morality” exist only in fantasy.  The whole burden of the article is to maintain that fantasy, and insist that the mirage is real.  We are supposed to be able to detect that the mirages are real by thinking really hard until we “grasp moral truths,” and “gain moral knowledge.”  It is never explained what kind of a reasoning process leads to “truths” and “knowledge” about things that don’t exist.  Consider, for example, the following from the article:

…a significant amount of moral judgment and behavior may be the result of gaining moral knowledge, rather than just reflecting the causal conditioning of evolution.  This might apply even to universally held moral beliefs or distinctions, which are often cited as evidence of an evolved “universal moral grammar.”  For example, people everywhere and from a very young age distinguish between violations of merely conventional norms and violations of norms involving harm, and they are strongly disposed to respond to suffering with concern.  But even if this partly reflects evolved psychological mechanisms or “modules” governing social sentiments and responses, much of it may also be the result of human intelligence grasping (under varying cultural conditions) genuine morally relevant distinctions or facts – such as the difference between the normative force that attends harm and that which attends mere violations of convention.

It’s amusing to occasionally substitute “the flying spaghetti monster” or “the great green grasshopper god” for the author’s “moral truths.”  The “proofs” of their existence work just as well.  In the above, he is simply assuming the existence of “morally relevant distinctions,” and further assuming that they can be grasped and understood logically.  Such assumptions fly in the face of the work of many philosophers who demonstrated that moral judgments are always grounded in emotions, sometimes referred to by earlier authors as “sentiments” or “passions,” and that it is therefore impossible to arrive at moral truths through reason alone.  Unless some undergraduate wrote the article, one must assume the author had at least a passing familiarity with some of these thinkers.  The Earl of Shaftesbury, for example, demonstrated the decisive role of “natural affections” as the origins of moral judgment in his Inquiry Concerning Virtue or Merit (1699), even noting in that early work the similarities between humans and the higher animals in that regard.  Francis Hutcheson very convincingly demonstrated the impotence of reason alone in detecting moral truths, and the essential role of “instincts and affections” as the origin of all moral judgment, in his An Essay on the Nature and Conduct of the Passions and Affections (1728).  Hutcheson thought that God was the source of these passions and affections.  It remained for David Hume to present similar arguments on a secular basis in his A Treatise of Human Nature (1740).

The author prefers to ignore these earlier philosophers, focusing instead on the work of Jonathan Haidt, who has also insisted on the role of emotions in shaping moral judgment.  Here I must impose on the reader’s patience with a long quote to demonstrate the type of “logic” we’re dealing with.  According to the author,

There are also important philosophical worries about the methodologies by which Haidt comes to his deflationary conclusions about the role played by reasoning in ordinary people’s moral judgments.

To take just one example, Haidt cites a study where people made negative moral judgments in response to “actions that were offensive yet harmless, such as…cleaning one’s toilet with the national flag.” People had negative emotional reactions to these things and judged them to be wrong, despite the fact that they did not cause any harms to anyone; that is, “affective reactions were good predictors of judgment, whereas perceptions of harmfulness were not” (Haidt 2001, 817). He takes this to support the conclusion that people’s moral judgments in these cases are based on gut feelings and merely rationalized, since the actions, being harmless, don’t actually warrant such negative moral judgments. But such a conclusion would be supported only if all the subjects in the experiment were consequentialists, specifically believing that only harmful consequences are relevant to moral wrongness. If they are not, and believe—perhaps quite rightly (though it doesn’t matter for the present point what the truth is here)—that there are other factors that can make an action wrong, then their judgments may be perfectly appropriate despite the lack of harmful consequences.

This is in fact entirely plausible in the cases studied: most people think that it is inherently disrespectful, and hence wrong, to clean a toilet with their nation’s flag, quite apart from the fact that it doesn’t hurt anyone; so the fact that their moral judgment lines up with their emotions but not with a belief that there will be harmful consequences does not show (or even suggest) that the moral judgment is merely caused by emotions or gut reactions. Nor is it surprising that people have trouble articulating their reasons when they find an action intrinsically inappropriate, as by being disrespectful (as opposed to being instrumentally bad, which is much easier to explain).

Here one can but roll one’s eyes.  It doesn’t matter a bit whether the subjects are consequentialists or not.  Haidt’s point is that logical arguments will always break down at some point, whether they are based on harm or not, because moral judgments are grounded in emotions.  Harm plays a purely ancillary role.  One could just as easily ask why the action in question is considered disrespectful, and the chain of logical reasons would break down just as surely.  Whoever wrote the article must know what Haidt is really saying, because Haidt refers explicitly to the ideas of Hume in the same book.  Absent the alternative that the author simply doesn’t know what he’s talking about, we must conclude that he is deliberately misrepresenting what Haidt was trying to say.

One of the author’s favorite conceits is that one can apply “autonomous applications of human intelligence,” meaning applications free of emotional bias, to the discovery of “moral truths” in the same way those logical faculties are applied in such fields as algebraic topology, quantum field theory, population biology, etc.  In his words,

We assume in general that people are capable of significant autonomy in their thinking, in the following sense:

Autonomy Assumption: people have, to greater or lesser degrees, a capacity for reasoning that follows autonomous standards appropriate to the subjects in question, rather than in slavish service to evolutionarily given instincts merely filtered through cultural forms or applied in novel environments. Such reflection, reasoning, judgment and resulting behavior seem to be autonomous in the sense that they involve exercises of thought that are not themselves significantly shaped by specific evolutionarily given tendencies, but instead follow independent norms appropriate to the pursuits in question (Nagel 1979).

This assumption seems hard to deny in the face of such abstract pursuits as algebraic topology, quantum field theory, population biology, modal metaphysics, or twelve-tone musical composition, all of which seem transparently to involve precisely such autonomous applications of human intelligence.

This, of course, leads up to the argument that one can apply this “autonomy assumption” to moral judgment as well.  The problem is that, in the other fields mentioned, one actually has something to reason about.  In mathematics, for example, one starts with a collection of axioms that are simply accepted as true, without worrying about whether they are “really” true or not.  In physics, there are observables that one can measure and record as a check on whether one’s “autonomous application of intelligence” was warranted or not.  In other words, one has physical evidence.  The same goes for the other subjects mentioned.  In each case, one is reasoning about something that actually exists.  In the case of morality, however, “autonomous intelligence” is being applied to a phantom.  Again, the same arguments are just as strong if one applies them to grasshopper gods.  “Autonomous intelligence” is useless if it is “applied” to something that doesn’t exist.  You can “reflect” all you want about the grasshopper god, but he will still stubbornly refuse to pop into existence.  The exact nature of the recondite logical gymnastics one must perform to successfully apply “autonomous intelligence” in this way is never explained.  Perhaps a Ph.D. in philosophy at Stanford is a prerequisite before one can even dare to venture forth on such a daunting logical quest.  Perhaps then, in addition to the sheepskin, they fork over a philosopher’s stone that enables one to transmute lead into gold, create the elixir of life, and extract “moral truths” right out of the vacuum.

In short, the philosophers continue to flounder.  Their logical demonstrations of nonexistent “moral truths” are similar in kind to logical demonstrations of the existence of imaginary super-beings, and just as threadbare.  Why does it matter?  I can’t supply you with any objective “oughts” here, but at least I can tell you my personal prejudices on the matter, and my reasons for them.  We are living in a time of moral chaos, and will continue to do so until we accept the truth about the evolutionary origin of human morality and the implications of that truth.  There are no objective moral truths, and it will be extremely dangerous for us to continue to ignore that fact.  Competing morally loaded ideologies are already demonstrably disrupting our political systems.  It is not at all unlikely that we will once again experience what happens when fanatics stuff their “moral truths” down our throats, as they did in the last century with the morally loaded ideologies of Communism and Nazism.  Do you dislike being bullied by Social Justice Warriors?  I’m sorry to inform you that the bullying will continue unabated until we explode the myth that they are bearers of “moral truths” that they are justified, according to “autonomous logic,” in imposing on the rest of us.  I could go on and on, but do I really need to?  Isn’t it obvious that a world full of fanatical zealots, all utterly convinced that they have a monopoly on “moral truth,” and a perfect right to impose these “truths” on everyone else, isn’t exactly a utopia?  Allow me to suggest that, instead, it might be preferable to live according to a simple and mutually acceptable “absolute” morality, in which “moral relativism” is excluded, and which doesn’t change from day to day in willy-nilly fashion according to the whims of those who happen to control the social means of communication.  As counter-intuitive as it seems, the only practicable way to such an outcome is acceptance of the fact that morality is a manifestation of evolved human nature, and of the truth that there are no such things as “moral truths.”


Morality and the Spiritualism of the Atheists

I’m an atheist.  I concluded there was no God when I was 12 years old, and never looked back.  Apparently many others have come to the same conclusion in western democratic societies where there is access to diverse opinions on the subject, and where social sanctions and threats of force against atheists are no longer as intimidating as they once were.  Belief in traditional religions is gradually diminishing in such societies.  However, those beliefs have hardly been replaced by “pure reason.”  They have merely been replaced by a new form of “spiritualism.”  Indeed, I would maintain that most atheists today have as strong a belief in imaginary things as the religious believers they so often despise.  They believe in the “ghosts” of good and evil.

Most atheists today may be found on the left of the ideological spectrum.  A characteristic trait of leftists today is the assumption that they occupy the moral high ground. That assumption can only be maintained by belief in a delusion, a form of spiritualism, if you will – that there actually is a moral high ground.  Ironically, while atheists are typically blind to the fact that they are delusional in this way, it is often perfectly obvious to religious believers.  Indeed, this insight has led some of them to draw conclusions about the current moral state of society similar to my own.  Perhaps the most obvious conclusion is that atheists have no objective basis for claiming that one thing is “good” and another thing is “evil.”  For example, as noted by Tom Trinko at American Thinker in an article entitled “Imagine a World with No Religion,”

Take the Golden Rule, for example. It says, “Do onto others what you’d have them do onto you.” Faithless people often point out that one doesn’t need to believe in God to believe in that rule. That’s true. The problem is that without God, there can’t be any objective moral code.

My reply would be, that’s quite true, and since there is no God, there isn’t any objective moral code, either.  However, most atheists, far from being “moral relativists,” are highly moralistic.  As a consequence, they are dumbfounded by anything like Trinko’s remark.  It pulls the moral rug right out from under their feet.  Typically, they try to get around the problem by appealing to moral emotions.  For example, they might say something like, “What?  Don’t you think it’s really bad to torture puppies to death?”, or, “What?  Don’t you believe that Hitler was really evil?”  I certainly have a powerful emotional response to Hitler and tortured puppies.  However, no matter how powerful those emotions are, I realize that they can’t magically conjure objects into being that exist independently of my subjective mind.  Most leftists, and hence, most so-called atheists, actually do believe in the existence of such objects, which they call “good” and “evil,” whether they admit it explicitly or not.  Regardless, they speak and act as if the objects were real.

The kinds of speech and actions I’m talking about are ubiquitous and obvious.  For example, many of these “atheists” assume a dictatorial right to demand that others conform to novel versions of “good” and “evil” they may have concocted yesterday or the day before.  If those others refuse to conform, they exhibit all the now familiar symptoms of outrage and virtuous indignation.  Do rational people imagine that they are gods with the right to demand that others obey whatever their latest whims happen to be?  Do they assume that their subjective, emotional whims somehow immediately endow them with a legitimate authority to demand that others behave in certain ways and not in others?  I certainly hope that no rational person would act that way.  However, that is exactly the way that many so-called atheists act.  To the extent that we may consider them rational at all, then, we must assume that they actually believe that whatever versions of “good” or “evil” they happen to favor at the moment are “things” that somehow exist on their own, independently of their subjective minds.  In other words, they believe in ghosts.

Does this make any difference?  I suggest that it makes a huge difference.  I personally don’t enjoy being constantly subjected to moralistic bullying.  I doubt that many people enjoy jumping through hoops to conform to the whims of others.  I submit that it may behoove those of us who don’t like being bullied to finally call out this type of irrational, quasi-religious behavior for what it really is.

It also makes a huge difference because this form of belief in imaginary objects has led us directly into the moral chaos we find ourselves in today.  New versions of “absolute morality” are now popping up on an almost daily basis.  Obviously, we can’t conform to all of them at once, and must therefore put up with the inconvenience of either keeping our mouths shut or risking being furiously condemned as “evil” by whatever faction we happen to offend.  Again, traditional theists are a great deal more clear-sighted than “atheists” about this sort of thing.  For example, in an article entitled, “Moral relativism can lead to ethical anarchy,” Christian believer Phil Schurrer, a professor at Bowling Green State University, writes,

…the lack of a uniform standard of what constitutes right and wrong based on Natural Law leads to the moral anarchy we see today.

Prof. Schurrer is right about the fact that we live in a world of moral anarchy.  I also happen to agree with him that most of us would find it useful and beneficial if we could come up with a “uniform standard of what constitutes right and wrong.”  Where I differ with him is on the rationality of attempting to base that standard on “Natural Law,” because there is no such thing.  For religious believers, “Natural Law” is law passed down by God, and since there is no God, there can be no “Natural Law,” either.  How, then, can we come up with such a uniform moral code?

I certainly can’t suggest a standard based on what is “really good” or “really bad” because I don’t believe in the existence of such objects.  I can only tell you what I would personally consider expedient.  It would be a standard that takes into account what I consider to be some essential facts.  These are as follows.

  • What we refer to as morality is an artifact of “human nature,” or, in other words, innate predispositions that affect our behavior.
  • These predispositions exist because they evolved by natural selection.
  • They evolved by natural selection because they happened to improve the odds that the genes responsible for their existence would survive and reproduce at the time and in the environment in which they evolved.
  • We are now living at a different time, and in a different environment, and it cannot be assumed that blindly responding to the predispositions in question will have the same outcome now as it did when those predispositions evolved.  Indeed, it has been repeatedly demonstrated that such behavior can be extremely dangerous.
  • Outcomes of these predispositions include a tendency to judge the behavior of others as “good” or “evil.”  These categories are typically deemed to be absolute, and to exist independently of the conscious minds that imagine them.
  • Human morality is dual in nature.  Others are perceived in terms of ingroups and outgroups, with different standards applying to what is deemed “good” or “evil” behavior towards those others depending on the category to which they are imagined to belong.

I could certainly expand on this list, but the above are some of the most salient and essential facts about human morality.  If they are true, then it is possible to make at least some preliminary suggestions about how a “uniform standard” might look.  It would be as simple as possible.  It would be derived to minimize the dangers referred to above, with particular attention to the dangers arising from ingroup/outgroup behavior.  It would be limited in scope to interactions between individuals and small groups in cases where the rational analysis of alternatives is impractical due to time constraints, etc.  It would be in harmony with innate human behavioral traits, or “human nature.”  It is our nature to perceive good and evil as real, objective things, even though they are not.  This implies there would be no “moral relativism.”  Once in place, the moral code would be treated as an absolute standard, in conformity with the way in which moral standards are usually perceived.  One might think of it as a “moral constitution.”  As with political constitutions, there would necessarily be some means of amending it if necessary.  However, it would not be open to arbitrary innovations spawned by the emotional whims of noisy minorities.

How would such a system be implemented?  It’s certainly unlikely that any state will attempt it any time in the foreseeable future.  Perhaps it might happen gradually, just as changes to the “moral landscape” have usually happened in the past.  For that to happen, however, it would be necessary for significant numbers of people to finally understand what morality is, and why it exists.  And that is where, as an atheist, I must part company with Mr. Trinko, Prof. Schurrer, and the rest of the religious right.  Progress towards a uniform morality that most of us would find a great deal more useful and beneficial than the versions currently on tap, regardless of what goals or purposes we happen to be pursuing in life, cannot be based on the illusion that a “natural law” exists that has been handed down by an imaginary God, any more than it can be based on the emotional whims of leftist bullies.  It must be based on a realistic understanding of what kind of animals we are, and how we came to be.  However, such self-knowledge will remain inaccessible until we shed the shackles of religion.  Perhaps, as they witness many of the traditional churches increasingly becoming leftist political clubs before their eyes, people on the right of the political spectrum will begin to find it less difficult to free themselves from those shackles.  I hope so.  I think that an Ansatz based on simple, traditional moral rules, such as the Ten Commandments, is more likely to lead to a rational morality than one based on furious rants over who should be allowed to use what bathrooms.  In other words, I am more optimistic that a useful reform of morality will come from the right rather than the left of the ideological spectrum, as it now stands.  Most leftists today are much too heavily invested in indulging their moral emotions to escape from the world of illusion they live in.  To all appearances they seriously believe that blindly responding to these emotions will somehow magically result in “moral progress” and “human flourishing.”  Conservatives, on the other hand, are unlikely to accomplish anything useful in terms of a rational morality until they free themselves of the “God delusion.”  It would seem, then, that for such a moral “revolution” to happen, it will be necessary for those on both the left and the right to shed their belief in “spirits.”


On Legitimizing Moral Laws: “Purpose” as a God Substitute

The mental traits responsible for moral behavior did not evolve because they happened to correspond to “universal moral truths.”  They evolved because they increased the odds that the responsible genes would survive and reproduce.  The evolutionary origins of morality explain why we imagine the existence of “universal moral truths” to begin with.  We imagine that “moral truths” exist as objective things, independent of the minds that imagine them, because there was a selective advantage to perceiving them in that way.  Philosophers have long busied themselves with the futile task of “proving” that these figments of their imaginations really do exist just as they imagine them – as independent things.  Of course, even though they’ve been trying for thousands of years, they’ve never succeeded, for the very good reason that the things whose existence they’ve been trying to prove don’t exist.  No matter how powerfully our imaginations portray these illusions to us as real things, they remain illusions.

God has always served as a convenient prop for objective morality.  It has always seemed plausible to many that, if God says something is morally good, it really is good.  Plato exposed the logical flaws of this claim in his Euthyphro.  However, such quibbles may be conveniently ignored by those who believe that the penalty for meddling with the logical basis of divine law is an eternity in hell.  They dispose of Plato by simply accepting without question the axiom that God is good.  If God is good, then his purposes must be good.  If, as claimed by the 18th-century Scottish philosopher Francis Hutcheson, he endowed us with an innate moral sense, which serves as the fundamental source of morality, then he must have done it for a purpose.  Since that purpose is Godly, and therefore good in itself, moral rules that are true expressions of our God-given moral sense must be good in themselves as well.  QED.

Unfortunately, there is no God, a fact that has become increasingly obvious over the years as the naturalistic explanations of the universe supplied by the advance of science have supplanted supernatural ones at an accelerating rate.  As a result, atheists already make up a large proportion of the population in many countries where threats of violence and ostracism are no longer effective props for the old religions.  However, most of these atheists haven’t yet succeeded in divorcing themselves from the spirit world.  They still believe that disembodied Goods and Evils hover about us in ghostly form, endowed with a magical power to dictate “right” behavior, not only to themselves, but to everyone else as well.

The challenge these latter-day moralists face, of course, is to explain just how the moral rules supplied by their vivid imaginations acquire the right to dictate behavior to the rest of us.  This is no easy task, for anyone who really believes in an objective morality, independent of the subjective minds of individuals, must also account for the “moral law’s” recent disconcerting habit of undergoing drastic changes on an almost daily basis.

In fact, it is an impossible task, since the “objective” ghosts of Good and Evil exist no more in reality than does God.  However, there are powerful incentives to believe in these ghosts, just as there are powerful incentives to believe in God.  As a result, there has been no lack of trying.  One gambit in this direction, entitled Could Morality Have a Transcendent Evolved Purpose?, recently turned up at From Darwin to Eternity, one of the blogs hosted by Psychology Today.  According to the author, Michael Price, the “standard naturalistic conclusion” is that,

It is hard to see how morality could ultimately serve any larger kind of purpose.  Conventional religions sidestep this problem, of course, by positing a supernatural purpose provider.  But that’s an unsatisfactory solution, if you wish to maintain a naturalistic worldview.

Here it is important to notice an implied assumption that becomes increasingly obvious as we read further in the article.  The assumption is that, if we can successfully identify a “larger kind of purpose,” then imagined good is somehow transformed into objective Good, and imagined evil into objective Evil.  There is no basis whatsoever for this assumption, regardless of where the “larger kind of purpose” comes from.  It is important to notice the disconnect, because Price apparently believes that, if morality can be shown to serve a “transcendent naturalistic purpose,” then it must thereby gain objective legitimacy and independent normative power.  He doesn’t say so explicitly, but if he doesn’t believe it, his article is pointless.  He goes on to claim that, according to the “conventional interpretation” of those who accept the fact of evolution by natural selection,

There can be no transcendent purpose, because no widely-understood natural process can generate such purpose. Transcendent purpose is a subject for religion, and maybe for philosophy, but not for science. That’s the standard naturalistic conclusion.

I note in passing that, while this may be “the standard naturalistic conclusion,” it certainly hasn’t stopped the vast majority of its proponents from thinking and acting just as if they believed in objective morality.  I know of not a single exception among contemporary scientists or philosophers of any note.  One can find artifacts in the writings or sayings of all of them that make no sense unless they believe in objective morality, regardless of what their philosophical theories on the subject happen to be.  Typically these artifacts take the form of assertions that some individual or group of individuals is morally good or evil, without any suggestion that the assertion is merely an opinion.  Such statements make no sense absent a belief in some objective Good, generally applicable to others besides themselves, and not merely an artifact of their subjective whims.  The innate illusion of objective Good has been too powerful for any of them to free themselves of the fantasy entirely.  Be that as it may, Price tells us that there is also an “unconventional interpretation.”  He poses the rhetorical question,

Could morality be “universal” in the sense that there is some transcendent moral purpose to human existence itself?… This is a tricky question because natural selection is the only process known to science that can ultimately engineer “purpose” (moral or otherwise). It does so by generating “function,” which is essentially synonymous with “purpose”: the function/purpose of an eye, for example, is to see.

Notice the quotation marks around “purpose” and “function” when they’re first used in this quote.  That’s as it should be, as the terms are only used in this context as a convenient form of shorthand.  They refer to the reasons that the characteristics in question happened to enhance the odds that the responsible genes would survive and reproduce.  However, these shorthand terms should never be confused with a real function or purpose.  In the case of “purpose,” for example, consider the actual definition found in the Merriam-Webster Dictionary:

Purpose: 1: something set up as an object or end to be attained  2: a subject under discussion or an action in course of execution

Clearly, someone must be there to set up the object or end, or to discuss the subject.  In the case of evolution, no “someone” is there.  In other words, there is no purpose to evolution or its outcomes in the proper sense of the term.  However, if you look at the final sentence in the Price quote above, you’ll notice something odd has happened.  The quote marks have disappeared.  “Function/purpose” has suddenly become function/purpose!  One might charitably assume that Price is still using the terms in the same sense, and has simply neglected the quote marks.  If so, one would be wrong.  A bit further on, the “purpose” that we saw change to purpose metastasizes again.  It is now not just a purpose, but a “transcendent naturalistic purpose!”  In Price’s words,

I think the standard naturalistic conclusion is premature, however. There is one way in which transcendent naturalistic purpose could in fact exist.

In the very next sentence, “transcendent naturalistic purpose” has completed the transformation from egg to butterfly, and becomes “transcendent moral purpose!” Again quoting Price,

If selection is the only natural source of purpose, then transcendent moral purpose could exist if selection were operating at some level more fundamental than the biological.  Specifically, transcendent purpose would require a process of cosmological natural selection, with universes being selected from a multiverse based on their reproductive ability, and intelligence emerging (as a subroutine of cosmological evolution) as a higher-level adaptation for universe reproduction.  From this perspective, intelligent life (including its moral systems) would have a transcendent purpose: to eventually develop the sociopolitical and technical expertise that would enable it to cooperatively create new universes…  These ideas are highly speculative and may seem strange, especially if you haven’t heard them before.

That’s for sure! In his conclusion Price gets a bit slippery about whether he personally buys into this extravagant word game. As he puts it,

At any rate, my goal here is not to argue that these ideas are likely to be true, nor that they are likely to be false. I simply want to point out that if they’re false, then it seems like it must also be false – from a naturalistic perspective, at least – that morality could have any transcendent purpose.

This implies that Price accepts the idea that, if “these ideas are likely to be true,” then morality actually could have a “transcendent purpose.”  Apparently we are to assume that moral rules could somehow acquire objective legitimacy by virtue of having a “transcendent purpose.”  The “proof” goes something like this:

1. Morality evolved because it serves a “purpose.”
2. Miracle a happens.
3. Therefore, morality evolved because it serves a purpose.
4. Miracle b happens.
5. Therefore, morality evolved to serve a transcendent naturalistic purpose.
6. Miracle c happens.
7. Therefore, morality evolved to serve a transcendent moral purpose.
8. Miracle d happens.
9. If a transcendent moral purpose exists, then it automatically becomes our duty to obey moral rules that serve that purpose.  The rules acquire objective legitimacy.

So much for a rigorous demonstration that a new God in the form of “transcendent moral purpose” exists to replace the old one.  I doubt much has been gained here.  At least the “proofs” of the old God’s existence didn’t require such a high level of “mental flexibility.”  Would it be impertinent to ask how the emotional responses we normally associate with morality could have become completely divorced from the “transcendent moral purpose” they supposedly exist to serve?  Has anyone told the genes responsible for the predispositions that are the ultimate cause of our moral behavior about this “transcendent moral purpose?”

In short, it’s clear that while belief in God is falling out of fashion, at least in some countries, belief in an equally imaginary “objective morality” most decidedly is not.  We have just reviewed an example of the ludicrous lengths to which our philosophers and “experts on morality” are willing to go to prop up their faith in this particular mirage.  It has been much easier for them to give up the God fantasy than the fantasy of their own moral righteousness.  Indeed, legions of these “experts on morality” would quickly find themselves unemployed if it were generally realized that the subject of their “expertise” is a mere fantasy.  So goes life in the asylum.