Posted on September 30th, 2014
One would think that, at the very least, evolutionary psychologists would have jettisoned their belief in objective morality by now. After all, every day new papers are published about the evolutionary roots of morality, the actual loci in the brain that give rise to different types of moral behavior, and the existence in animals of some of the same traits we associate with morality in humans. Now, if morality evolved, it must have done so because it enhanced the odds that the genes responsible for it would survive and reproduce. It cannot somehow acquire a life of its own and decide that it actually has some other “purpose” in mind. The spectacle of human “experts in ethics” arbitrarily reassigning its purpose in that way is even more ludicrous. In spite of all that, faith in the existence of disembodied good and evil persists in the academy, in defiance of all logic, in evolutionary psychology as in other disciplines. It’s not surprising really. For some time now academics of all stripes have been heavily invested in the myth of their own moral superiority. Eliminate objective morality, and the basis of that myth evaporates like a mirage. Self-righteousness and heroin are both hard habits to kick.
Examples aren’t hard to find. An interesting one turned up in the journal Evolutionary Psychology lately. The paper, entitled Evolutionary Awareness and authored by Gregory Gorelick and Todd Shackelford, carries the following abstract:
In this article, we advance the concept of “evolutionary awareness,” a metacognitive framework that examines human thought and emotion from a naturalistic, evolutionary perspective. We begin by discussing the evolution and current functioning of the moral foundations on which our framework rests. Next, we discuss the possible applications of such an evolutionarily-informed ethical framework to several domains of human behavior, namely: sexual maturation, mate attraction, intrasexual competition, culture, and the separation between various academic disciplines. Finally, we discuss ways in which an evolutionary awareness can inform our cross-generational activities—which we refer to as “intergenerational extended phenotypes”—by helping us to construct a better future for ourselves, for other sentient beings, and for our environment.
Those who’ve developed a nose for such things can already sniff the disembodied good and evil things-in-themselves levitating behind the curtain. The term “better future” is a dead giveaway. No future can be “better” than any other in an objective sense unless there is some legitimate standard of comparison that doesn’t depend on the whim of individuals. As we read on, our suspicions are amply confirmed. As far as its theme is concerned, the paper is just a rehash of what Konrad Lorenz and Robert Ardrey were suggesting back in the 60’s: that there are such things as innate human behavioral predispositions, that on occasion they have promoted warfare and the other forms of mayhem that humans have indulged in over the millennia, and that it would behoove us to take this fact into account and try to find ways to limit the damage. Unfortunately, they did so at a time when the Blank Slate, probably the greatest scientific imposture ever heard of, was at its most preposterous extreme. They were ridiculed and ignored by the “men of science” and forgotten. Now that the Blank Slate orthodoxy has finally collapsed after reigning supreme for the better part of half a century, their ideas are belatedly being taken seriously again, albeit rarely with any credit given to them, or even mention of their names.
There is, however, an important difference. In reading through the paper, one finds that the authors believe not only in evolved morality, necessarily a subjective phenomenon, but are true believers in a shadowy thing-in-itself that exists alongside of it. This thing is objective morality: an independent, even “scientific,” something with a “purpose” quite distinct from the reasons that explain the existence of evolved morality. The “purpose” in high fashion at the moment is usually some version of the “human flourishing” ideology advocated by Sam Harris. No evidence has ever been given for this concoction. Neither Sam Harris nor anyone else has ever been able to capture one of these ghostly “goods” or “evils” and submit it for examination in the laboratory. No matter, their existence is accepted as a matter of faith, accompanied by a host of “proofs” similar to those that are devised to “prove” the existence of God.
Let us examine the artifacts of the faith in these ghosts in the paper at hand. As it happens, it’s lousy with them. On page 785, for example, we read,
Because individual choices lead to cultural movements and social patterns (Kenrick, Li, and Butner, 2003), it is up to every individual to accept the responsibility of an evolutionarily-informed ethics.
Really? If so, where does this “responsibility” come from? How does it manage to acquire legitimacy? Reading a bit further on page 785, we encounter the following remarkable passage:
However, as with any intellectually-motivated course of action, developing an evolutionarily-informed ethics entails an intellectual sacrifice: Are we willing to forego certain reproductive benefits or personal pleasures for the sake of building a more ethical community? Such an intellectual endeavor is not just relevant to academic debates but is also of great practical and ethical importance. To apply the paleontologist G. G. Simpson’s (1951) ethical standard of knowledge and responsibility, evolutionary scientists have the responsibility of ensuring that their findings are disseminated as widely as possible. In addition, evolutionarily-minded researchers should expand their disciplinary boundaries to include the application of an evolutionary awareness to problems of ethical and practical importance. Although deciphering the ethical dimension of life’s varying circumstances is difficult, the fact that there are physical consequences for every one of our actions—consequences on other beings and on the environment—means that, for better or worse, we are all players in constructing the future of our society and that all our actions, be they microscopic or macroscopic, are reflected in the emergent properties of our society (Kenrick et al., 2003).
In other words, not only is the existence of this “other” morality simply assumed, but we also find that its “purpose” actually contradicts the reasons that have resulted in the evolution of morality to begin with. It is supposed to be “evolutionarily-informed,” and yet we are actually to “forego certain reproductive benefits” in its name. Later in the paper, on page 804, we find that this apparent faith in “real” good and evil, existing independently of the subjective variety that has merely evolved, is not just a momentary faux pas. In the authors’ words,
It is not clear what the effects of being evolutionarily aware of our political and social behaviors will be. At the least, we can raise the level of individual and societal self-awareness by shining the light of evolutionary awareness onto our religious, political, and cultural beliefs. Better still, by examining our ability to mentally time travel from an evolutionarily aware perspective, we might envision more humane futures rather than using this ability to further our own and our offspring’s reproductive interests. In this way, we may be able to monitor our individual and societal outcomes and direct them to a more ethical and well-being-enhancing direction for ourselves, for other species, for our—often fragile—environment, and for the future of all three.
Here the authors leave us in no doubt. They have faith in an objective something utterly distinct from evolved morality, and with entirely different “goals.” Not surprisingly, as already noted above, this “something” actually does turn out to be a version of the “scientific” objective morality proposed by Sam Harris. For example, on page 805,
As Sam Harris suggested in The Moral Landscape (2010), science has the power not only to describe reality, but also to inform us as to what is moral and what is immoral (provided that we accept certain utilitarian ethical foundations such as the promotion of happiness, flourishing, and well-being—all of which fall into Haidt’s (2012) “Care/Harm” foundation of morality).
No rational basis is ever provided, by Harris or anyone else, for how these “certain utilitarian ethical foundations” are magically transmuted from the whims of individuals to independent objects, which then somehow hijack human moral emotions and endow them with a “purpose” that has little if anything to do with the reasons that explain the evolution of those emotions to begin with. It’s all sufficiently absurd on the face of it, and yet understandable. Jonathan Haidt gives a brilliant description of the reasons that self-righteousness is such a ubiquitous feature of our species in The Righteous Mind. As a class, academics are perhaps more addicted to self-righteousness than any other. There are, after all, whole departments of “ethical experts” whose very existence becomes a bad joke unless they can maintain the illusion that they have access to some mystic understanding of the abstruse foundations of “real” good and evil, hidden from the rest of us. The same goes for all the assorted varieties of “studies” departments, whose existence is based on the premise that there is a “good” class that is being oppressed by an “evil” class. At least since the heyday of Communism, academics have cultivated a faith in themselves as the special guardians of the public virtue, endowed with special senses that enable them to sniff out “real” morality for the edification of the rest of us.
As one chronicler of the decade recalled:

In 1931, protests were made in Parliament against a broadcast by a Cambridge economist, Mr. Maurice Dobb, on the ground that he was a Marxist; now (at the end of the decade, ed.) the difficulty would be to find an economist employed in any university who was not one.
Of course, this earlier sure-fire prescription for “human flourishing” cost 100 million human lives, give or take, and has hence been abandoned by more forward-looking academics. However, a few hoary leftovers remain on campus, and there is an amusing reminder of the fact in the paper. On page 784 the authors admit that attempts to tinker with human nature in the past have had unfortunate results:
Indeed, totalitarian philosophies, whether Stalinism or Nazism, often fail because of their attempts to radically change human nature at the cost of human beings.
Note the delicate use of the term “Stalinism” instead of Communism. Meanwhile, the proper term is used for Nazism instead of “Hitlerism.” Of course, mass terror was well underway in the Soviet Union under Lenin, long before Stalin took over supreme power, and the people who carried it out weren’t inspired by the “philosophy” of “socialism in one country,” but by a fanatical faith in a brave new world of “human flourishing” under Communism. Nazism in no way sought to “radically change human nature,” but masterfully took advantage of it to gain power. The same could be said of the Communists, the only difference being that they actually did attempt to change human nature once they were in power. I note in passing that some other interesting liberties are taken with history in the paper. For example,
Christianity may have indirectly led to the fall of the Roman Empire by pacifying its population into submission to the Vandals (Frost, 2010), as well as the fall of the early Viking settlers in Greenland to “pagan” Inuit invaders (Diamond, 2005)—two outcomes that collectively highlight the occasional inefficiency (from a gene’s perspective) of cultural evolution.
Of course, the authors apparently only have these dubious speculations second hand from Frost and Diamond, whose comments on the subject I haven’t read, but they would have done well to consult some other sources before setting down these speculations as if they had any authority. The Roman Empire never “fell” to the Vandals. They did sack Rome in 455 with the permission, if not of the people, at least of the gatekeepers, but the reason had a great deal more to do with an internal squabble over who should be emperor than with any supposed passivity due to Christianity. Indeed, the Vandals themselves were Christians, albeit of the Arian flavor, and their north African kingdom was itself permanently crushed by an army under Belisarius sent by the emperor Justinian in 533. Both Belisarius and Justinian certainly considered themselves “Romans,” as the date of 476 for the “fall of the Roman Empire” was not yet in fashion at the time. There are many alternative theories to the supposition that the Viking settlements in Greenland “fell to the Inuits,” and to state this “outcome” as a settled fact is nonsense.
But I digress. To return to the subject of objective morality, it actually appears that the authors can’t comprehend the fact that it’s possible to believe anything else. For example, they write,
Haidt’s approach to the study of human morality is non-judgmental. He argues that the Western, cosmopolitan mindset—morally centered on the Care/Harm foundation—is limited because it is not capable of processing the many “moralities” of non-Western peoples. We disagree with this sentiment. For example, is Haidt really willing to support the expansion of the “Sanctity/Degradation” foundation (and its concomitant increase in ethnocentrism and out-group hostility)? As Pinker (2011) noted, “…right or wrong, retracting the moral sense from its traditional spheres of community, authority, and purity entails a reduction of violence” (p. 637).
Here the authors simply can’t grok the fact that Haidt is stating an “is,” not an “ought.” As a result, this passage is logically incomprehensible as it stands. The authors are disagreeing with a “sentiment” that doesn’t exist. They are incapable of grasping the fact that Haidt, who has repeatedly rejected the notion of objective morality, is merely stating a theory, not some morally loaded “should.”
From my own subjective point of view, it is perhaps unfair to single out these two authors. The academy is saturated with similar irrational attempts to hijack morality in the name of assorted systems designed to promote “human flourishing,” in the fond hope that the results won’t be quite so horrific as were experienced under Communism, the last such attempt to be actually realized in practice. The addiction runs deep. Perhaps we shouldn’t take it too hard. After all, the Blank Slate was a similarly irrational addiction, but it eventually collapsed under the weight of its own absurdity after a mere half a century, give or take. Perhaps, like the state was supposed to do under Communism, faith in the chimera of objective morality, or at least those versions of it not dependent on the existence of imaginary super-beings, will “wither away” in the next 50 years as well. We can but hope.
Posted on September 22nd, 2014
Atheists often scorn those who believe in the God Delusion. The faithful, in turn, scorn those atheists who believe in the Objective Morality Delusion. The scorn is understandable in both cases, but I give the nod to the faithful on this one. Philosophers and theologians have come up with many refined and subtle arguments in favor of the existence of imaginary super beings. The arguments in favor of imaginary objective moralities are threadbare by comparison. I can hardly blame the true believers for laughing at the obvious imposture. They don’t require such a crutch to maintain the illusion of superior virtue. As a result, they see through the charade immediately.
Let me put my own cards on the table. I consider morality to be the expression of a subset of the innate human behavioral traits that exist as a result of evolution by natural selection. It follows that I do not believe that the comments of Darwin, who specifically addressed the subject, can be simply ignored. Neither do I believe that all the books and papers on the evolved wellsprings of morality that have been rolling off the presses lately can be simply ignored. I agree with Hume, who pointed out that reason is a slave of the passions, and with Haidt, who wrote about the emotional dog and its rational tail, and take a dubious view of those who think the points made by either author can be simply ignored. In short, I consider morality a purely subjective phenomenon. There are, of course, many implications of this conclusion that are uncomfortable to the pious faithful and pious atheists alike. However, if what I say is true, their discomfort will not make it untrue.
I’ve discussed the arguments of Sam Harris and several other “objective moralists” in earlier posts. As it happens, Daniel Fincke, another member of the club who writes the Camels with Hammers blog at Patheos.com, has just chimed in. Perhaps his comments on the subject will provide some insight into whether the supercilious smiles of the godly are out of place or not.
Fincke has a Ph.D. in philosophy from Fordham, and teaches interactive philosophy classes online. His comments appeared in the context of a pair of responses to Jerry Coyne, who differed with him on the subject at the latest Pennsylvania State Atheists Humanists Conference. According to Fincke,
When we talk about an endeavor being objective in the main or subjective in the main we’re talking about whether there can be objective principles that can often, at least theoretically, lead to determinations independent of our preferences.
Of course, this statement that objective principles are those principles that are objective is somewhat lacking as a rigorous definition, but it’s on the right track. Objective phenomena exist independently of the experiences or impressions in the minds of individuals. Like Harris, Fincke associates morality with “human flourishing”:
As to the nature of human flourishing, my basic view can be briefly boiled down to this. What we are as individuals is defined by the functional powers that constitute our being. In other words, we do not just “have” the powers of reasoning, emotional life, technological/artistic capacities, sociability, sexuality, our various bodily capabilities, etc., but we exist through such powers. We cannot exist without them. They constitute us ourselves. When they suffer, we suffer. Some humans might be drastically deficient in any number of them and there’s nothing they can do about that but make the best of it. But in general our inherent good is the objectively determinable good functioning of these basic powers (and all the subset powers that compose them and all the combined powers that integrate powers from across these roughly distinguishable kinds).
One can almost guess where this is heading without reading the rest. Like so many other “objective moralists,” Fincke will conflate that which is morally good with that which is “good” in the sense that it serves some useful purpose. This gets us nowhere, because it merely begs the question of why the purpose served is itself morally good. In what follows, our suspicions are amply confirmed. For example, Fincke continues,
Morality comes in at the stage of where any people who live lives impacting each other develop implicit or explicit rules and practices and judgments, etc. geared at cooperative living. Each of us has an interest in morality because we are social beings in vital ways.
First, we socially depend for our basic flourishing on a society that is minimally orderly, where people are trustworthy, where we’re not swamped with chaotic violence, etc.
Second, the more others around us are empowered to develop their functioning in their excellent powers is the more that they provide the means of us doing the same. So a society with greater functioning, powerful people is a society where we’ll be enriched by the things they create—be they technological or social—that help us thrive in our abilities.
and so on. In other words, moral rules are “objectively good” only in the sense that one can demonstrate their objective usefulness in advancing some other, higher “good.” According to Fincke, this “higher good” is a “thriving, flourishing power” in each individual which is “beyond his body and beyond his awareness.” Fine, but in that case the burden is still on him to demonstrate the objective nature of this “higher good.” Unfortunately, he shrugs off the burden. According to Fincke, the “higher good” is “objectively good” just because he says so. For example,
So, moral rules and practices and behaviors are a practical project. What objectively constitutes good instances of these are what lead to our objective good of maximally empowered functioning according to the abilities we have and what leads us to coordinate best with others for mutual empowerment on the long term.
…with no explanation of why the “objective good” referred to is objectively good. In a similar vein,
The good of our powers thriving is inherently good for us because we are our powers. And the inherent good of a power thriving is objectively determinable in the sense that it has a characteristic function that makes it the power that it is.
Again, Fincke doesn’t tell us why this “inherent good” is good in any objective sense, and why we should associate it with moral good at all. Apparently we must simply take his word for it that he’s not just expressing a personal whim, but has some mysterious way of knowing that his “good” is both “objective” and “moral.” Normally, when one claims objective existence for something, it must somehow manifest itself outside of the subjective minds of individuals. If one is to believe in such an entity, one requires evidence of its independent existence. That’s the main argument atheists have against the existence of God. There’s no evidence for it. How, then, is it reasonable for those same atheists to claim the objective existence of moral “good” with a similar lack of evidence? The faithful can at least point to faith, and tell us that they believe because of the grace of God. Atheists don’t have that luxury. One of Fincke’s favorite arguments is as follows:
Within this framework we can reason rationally. Does it mean we will always come to conclusive answers? No, of course not. Reasoning involves dealing with the real world and it’s empirical variables. Science can only go so far too, because we’re stuck with contingencies. You need information, sometimes impossible to precisely ascertain information about the future or the expected consequences of one path or another.
That’s quite true, but science has something to back it up that Fincke can’t claim for his objective morality: data in the form of experimentally repeatable evidence. We can be confident in the objective existence of electrons and photons, and in the fact that they don’t depend on our subjective whims for that existence, because we can observe and measure their physical characteristics. To the best of my knowledge, neither Fincke nor Harris nor any of the rest have ever captured an objective “good” in their butterfly nets and produced any data regarding its physical or other qualities and characteristics. If something is supposed to have an objective existence outside of our subjective minds, but we have not the faintest shred of evidence about it, we have only one alternative if we are to believe in it: blind faith.
For Fincke, morality is infinitely malleable. We can make it up as we go along to serve the “ultimate good” as our cultural and social circumstances change:
Morality is a technological endeavor too. It’s one of determining what should be done for us all to live as well as we can collectively and individually. We should, as naturalists who have learned the lessons of empirical thinking in the hard sciences, determine our moral codes and practices according to what serves our purposes best.
Unfortunately, this flies in the face of everything we have been learning recently about the innate wellsprings of morality. It requires that we simply ignore it. The claim that human flourishing is the ultimate good, and that morality is an objective something that exists to serve this end, excludes any evolutionary contribution to morality whatsoever. Some claim that selection may occur as high as the level of groups, but no process or mathematical model has yet been proposed that predicts it can occur at the level of the human species as a whole.
If Fincke is right, then there can be no analogs of morality in animals, as claimed not only by Darwin, but by many others after him, and as suggested in Wild Justice by Marc Bekoff and Jessica Pierce and in several other recent books on the subject. Objective moral rules as he describes them would only be discoverable by highly intelligent creatures through the exercise of high-powered reasoning that is beyond the capacity of animals or, for that matter, even of humans other than Fincke and a few other enlightened philosophers, on whom we must apparently depend forevermore to explain things to us. No doubt the popes would all have loved this line of reasoning. These purported rules exist to support an end that can never be the direct result of natural selection, as it only applies at a level where selection does not occur.
Again, if Fincke is right, then the emotions we associate with morality become absurd. After all, what room is there for emotion in deriving perfectly rational “moral rules” from some “objective” ultimate good? Why, indeed, do such reactions as virtuous indignation and moral outrage exist? They are, after all, emotional rather than reasonable, and they can be observed across all cultures. If true moral good is only discoverable by gurus like Fincke, and often contradicts our natural appetites and proclivities, where do these emotions come from? Are they, as we were informed by the Blank Slaters of old, merely learned, along with such things as the pleasure we feel from eating when hungry, and the orgasms we experience during sex? If not, how can we possibly explain their existence? Here’s another excerpt from Fincke’s posts that raises some doubts about his “objective morality.”
People seem to recognize this readily with respect to every art–that doing it in the way that evinces excellent ability and has the result effect of empowering others is obviously desirable over the way that doesn’t–except when it comes to something like ruling or acquiring wealth. In those cases, people start talking like they think mere domination and accumulation is sufficiently desirable. But there’s no reason to think that’s correct. The ruler is a failure if they cannot create a powerful citizenry. What is the intrinsic goodness of merely getting your way compared to the actual creative power, the actual excellent ability, to create greater flourishing through your efforts. The great ruler, by the ruler’s own internal standards of success, should obviously be to rule for generations even beyond death. To do that means to be so shrewd in one’s decisions that what one builds outlives you and thrives beyond your mortal coil. It means to be a contributor to the thriving of your citizens while you’re alive so you can take credit for your role in their thriving (and for as many subsequent generations as possible).
Just because some tyrants realize that’s impossible because they’re incompetent to create that and keep power and so instead choose to rule a graveyard through terror doesn’t mean those tyrants are being rational. They’re functioning badly. They’re epically failing to do the actually powerful task of ruling.
Genghis Khan might beg to differ. In spite of recent attempts to rehabilitate him, it’s not an exaggeration to say he ruled a graveyard through terror throughout much of Asia, and was, therefore, an epic failure according to Fincke. However, he left millions of descendants throughout the continent. He would certainly have regarded this outcome as “good” and “powerful.” It’s a human legacy that will certainly last much longer than the constitution of any state, or the opinion harbored by certain intellectuals in the 21st century concerning “human flourishing.” Indeed, it’s a legacy that has the potential to last for billions of years, as demonstrated by the reality of our own existence as descendants of creatures who lived that long ago in the past. How can we detect or identify an objective rule according to which the great Khan’s good is not really good, but evil? Obviously, what we are looking for here is something more compelling than Fincke’s opinion on the matter. According to Fincke,
…we set up moral systems to regulate and make it so people are able to resist the temptation to think in short term, microlevel, temporarily selfish ways about what is good for them.
Again, if moral systems are just something we “set up” at will to serve Fincke’s “inherent and ultimate good,” then Hume must be wrong. Reason can’t be the slave of the passions. Rather, the passions must be suppressed to serve reason. Morality cannot possibly be associated with evolution in any way, because it would be impossible to “set up” the innate predispositions that would presumably be the result. As it happens, our species already has extensive experience with “setting up” just such a moral system as Fincke describes, based on “science” and devoted to the ultimate goal of “human flourishing.” It was called Communism. It didn’t work. As E. O. Wilson famously put it, “Great theory, wrong species.” Am I being paranoid if I would prefer, on behalf of myself and my species, to avoid trying it twice?
In the end, Fincke’s arguments really boil down to a statement of subjective morality in a nutshell: “Human flourishing as defined by me and right-thinking individuals like me is the ultimate good, because I say so.”
Posted on July 27th, 2014
As Hume pointed out long ago, moral emotions are not derived by reason. They exist a priori. They belong, not at the end, but at the beginning of reason: they are not derived by it, but reasoned about. Given the variations in the innate wellsprings of morality among individuals, huge variations in culture and experience, and the imperfections of human reason, the result has been the vast kaleidoscope of human moralities we see today, with all their remarkable similarities and differences.
Most of us understand the concept Good, and most of us also understand the concept Evil. Good and Evil are subjective entities in the minds of individuals, not fundamentally different from any of our other appetites and whims. However, unlike other whims, such as hunger or sexual desire, it is our nature to perceive them as things, existing independently of our subjective minds. We don’t imagine that, if we are hungry, everyone else in the world must be hungry, too. However, we do imagine that if we perceive something as Good, it must be Good for everyone else as well. That’s where reason comes in. We use it in myriad variations to prop up the delusion that our Good possesses independent legitimacy, and therefore applies to everyone. Familiar variations are the God prop, the “Brave New World of the Future” prop, and the “human flourishing” prop. We commonly find even the most brilliant intellectuals among us attempting to hop over the is/ought divide in this way, differing from the rest of us only in the sophistication of their mirages.
Consider, for example, the case of Herbert Spencer. According to his Wiki entry, he was “the single most famous European intellectual in the closing decades of the nineteenth century”. He “developed an all-embracing conception of evolution as the progressive development of the physical world, biological organisms, the human mind, and human culture and societies. He was ‘an enthusiastic exponent of evolution’ and even ‘wrote about evolution before Darwin did.’” Unfortunately, there was a problem with his version of the theory. He could never come up with a coherent explanation of what made evolution work. His attempts were usually based on Lamarckian notions of use-inheritance, but he was no more successful than Lamarck in coming up with an actual mechanism for use-inheritance – something that would actually drive the process. When Darwin came up with the actual mechanism, natural selection, Spencer grasped the concept immediately. It certainly influenced his later work, but could not destroy his faith in evolution as a “theory of everything.” For him, evolution was the mystical wellspring of “progress” in all things. Morality and ethics were no exception.
It’s a testimony to the power of the delusion that the truth was actually staring Spencer in the face. Consider, for example, his comments on what he referred to as “animal ethics.” Like Darwin, Spencer was well aware of the analogs to human moral behavior in animals. He wrote about them in the first two chapters of his Justice, published in 1891, long before such ideas were dropped down the memory hole by the Blank Slaters, and more than a century before they were finally disinterred by the animal behaviorists of our own day. Pick out a paragraph here and a phrase there, and Spencer comes across as a perfectly orthodox Darwinian. For example,
Speaking generally, we may say that gregariousness and cooperation more or less active, establish themselves in a species only because they are profitable to it since otherwise survival of the fittest must prevent establishment of them.
For the association to be profitable the acts must be restrained to such extent as to leave a balance of advantage. Survival of the fittest will else exterminate that variety of the species in which association begins.
Thus then it is clear that acts which are conducive to preservation of offspring or of the individual we consider as good relatively to the species and conversely.
In the third chapter of his book, Spencer makes the obvious connection between sub-human and human morality, pointing out that they form a “continuous whole.”
The contents of the last chapter foreshadow the contents of this. As from the evolution point of view human life must be regarded as a further development of sub-human life it follows that from this same point of view human justice must be a further development of sub-human justice. For convenience the two are here separately treated but they are essentially of the same nature and form parts of a continuous whole.
In a word, Spencer seems to realize that morality is an artifact of evolution by natural selection, that it exists because it enhanced the probability that individuals and their offspring would survive, and that its innate origins manifest themselves in sub-human species as well as human beings. In other words, he seems to have identified just those aspects of morality that establish its subjective nature and the absurdity of the notion that it can somehow transcend the mind of one individual and acquire independent legitimacy or normative power over other individuals. The truth seems to be staring him in the face, and yet, in the end, he evades it. His illusion that his version of human progress, formulated long before Darwin, really is the Good-in-itself, blinds him to the implications of what he has just written. Before long we find him hopelessly enmeshed in the naturalistic fallacy, busily converting “is” into “ought.” First, we find passages like the following that not only have a suspicious affinity with Spencer’s libertarian ideology, but reveal his continued, post-Darwin faith in Lamarckism:
The necessity for observance of the condition that each member of the group, while carrying on self-sustentation and sustentation of offspring, shall not seriously impede the like pursuits of others makes itself so felt where association is established as to mould the species to it. The mischiefs from time to time experienced when the limits are transgressed continually discipline all in such ways as to produce regard for the limits so that such regard becomes in course of time a natural trait of the species.
A little later, the crossing of the is/ought Rubicon is made quite explicit:
To those who take a pessimist view of animal life in general contemplation of these principles can of course yield only dissatisfaction. But to those who take an optimist view or a meliorist view of life in general, and who accept the postulate of hedonism, contemplation of these principles must yield greater or less satisfaction and fulfilment of them must be ethically approved. Otherwise considered these principles are according to the current belief expressions of the Divine will or else according to the agnostic belief indicate the mode in which works the Unknowable Power throughout the universe, and in either case they have the warrant hence derived.
It’s not that Spencer was a stupid man. In fact, he was brilliant. Among other things, he analyzed the flaws in socialist theory and predicted the outcome of the Communist experiment with amazing prescience long before it was actually tried. Rather, Spencer didn’t see the truth that was staring him in the face because he was human. Like all humans, he suffered from the delusion that his version of the Good must surely be the “real” Good, and rationalized that conclusion. It continues to be similarly rationalized in our own day by our own public intellectuals, in spite of a century and more of great advances in evolutionary theory, neuroscience, and understanding of the innate wellsprings of both human and non-human behavior.
I suppose there’s some solace in the fact that, as Jonathan Haidt put it, the emotional dog continues to wag its rational tail, and not vice versa. It certainly lays to rest fears that some fragile thread of religion or philosophy is all that suspends us over the abyss of moral relativism. We will not become moral relativists because it is not our nature to be moral relativists, even if legions of philosophers declare that we are being unreasonable. On the other hand, there are always drawbacks to not recognizing the truth. We experienced two of those drawbacks in the 20th century in the form of the highly moralistic Nazi and Communist ideologies. Perhaps it would be well for us to recognize the obvious before the next messiah turns up on the scene.
Posted on May 9th, 2014
There’s been a lot of chatter on the Internet lately about MSNBC host Krystal Ball’s “re-interpretation” of Animal Farm as an anti-capitalist parable. The money quote from her take in the video below:
At its heart, Animal Farm is about tyranny and the likelihood of those in power to abuse that power. It’s clear that tendency is not only found in the Soviet communist experience. In fact, if you read Animal Farm today, it seems to warn not of some now non-existent communist threat but of the power concentrated in the hands of the wealthy elites and corporations…
As new research shows that we already live in a sort of oligarchy, in which the preferences of the masses literally do not matter and the only thing that counts is the needs and desires of the elites, Animal Farm is a useful cautionary tale warning of the corruption of concentrated power, no matter in whose hands that power rests.
Well, not exactly, Krystal. As astutely pointed out by CJ Ciaramella at The Federalist,
This is such a willfully stupid misreading that it doesn’t warrant much comment. However, for those who haven’t read Animal Farm since high school, as seems to be the case with Ball: The book is a satire of Soviet Russia specifically and a parable about totalitarianism in general. Every major event in the book mirrors an event in Soviet history, from the Bolshevik Revolution to Trotsky fleeing the country to Stalin’s cult of personality.
Indeed. Animal Farm’s Napoleon as the Koch Brothers? Snowball as Thomas Piketty? I don’t think so. True, you have to be completely clueless about the history of the Soviet Union to come up with such a botched interpretation, but, after all, that’s not too surprising. For citizens of our fair Republic, cluelessness about the history of the Soviet Union is probably the norm. The real irony here is that you also have to be completely clueless about Orwell to bowdlerize Animal Farm into an anti-capitalist parable. If that’s your agenda, why not fish out something more appropriate from his literary legacy? Again, quoting Ciaramella,
What is most impressive, though, is that MSNBC couldn’t locate an appropriate reference to inequality in the works of a lifelong socialist. It’s not as if one has to search hard to find Orwell railing against class divisions. He wrote an entire book, The Road to Wigan Pier, about the terrible living conditions in the industrial slums of northern England.
Not to mention Down and Out in Paris and London and four volumes of essays full of rants against the Americans for being so backward about accepting the blessings of socialism. Indeed, Orwell has been “re-interpreted” on the Right just as enthusiastically as on the Left of the political spectrum. For example, from Brendan Bordelon at The Libertarian Republic,
Leaving aside the obvious historical parallels between Animal Farm and the Soviet Union, the inescapable message is that government-enforced equality inevitably leads to oppression and further inequality, as fallible humans (or pigs) use powerful enforcement tools for their own personal gain.
Sorry, Brendan, but that message is probably more escapable than you surmise. Orwell was, in fact, a firm supporter of “government-enforced equality,” at least to the extent that he was a life-long, dedicated socialist. Indeed, he thought the transition to socialism in the United Kingdom was virtually inevitable in the aftermath of World War II.
In short, if you’re really interested in learning what Orwell was trying to “tell” us, whether in Animal Farm or the rest of his work, it’s probably best to read what he had to say about it himself.
Posted on May 2nd, 2014
Should there be a death penalty? Of course, the pros and cons are always trotted out after every botched execution. There was more chatter than usual after the last one because it happened to coincide with the publication of a study in the Proceedings of the National Academy of Sciences (PNAS), according to which more than 4% of death row inmates are innocent. We usually decide such questions by consulting our moral emotions. As the highly moral Communists and Nazis demonstrated, that’s not a good idea.
The 4% study recalls the adage, usually attributed to Blackstone, that “It is better that ten guilty persons escape than that one innocent suffer.” Relying on moral emotions alone, one could probably come up with a host of “trolley problems” to demonstrate that Blackstone’s formulation is either “true” or “false.” I take a more practical view of the matter. If I am dead, it won’t matter a bit to me whether I was killed by a criminal or by the state. Assuming there must be a death penalty, then, I would prefer to minimize the odds that I will be killed by either one. In other words, I would favor a policy which minimizes the number of innocent victims, regardless of whether they are killed by the state or by criminals.
In fact, that’s the reason that I oppose the death penalty. Again, my reasons are entirely practical. I want to minimize the odds that I will suffer an untimely death. The state has always been the most prolific and efficient murderer, and the recent trend has not been towards greater compassion. In the twentieth century, for example, at least two states, Cambodia and the Soviet Union, became so adept at mass murder that they effectively beheaded their own populations. I conclude that it would be better to get states out of the execution business once and for all, and I would not be at all squeamish about exploiting moral emotions to accomplish that end. For example, one might come up with something like a version of the Ten Commandments for states, one article of which would be, “Thou Shalt Not Kill.” One might establish the “human right” not to be executed. The goal would not be the establishment of some abstract standard of “justice,” but self-preservation, pure and simple.
For the entertainment of my readers, I have included an old episode of “The Outer Limits” that explores the philosophical ramifications of this issue in greater detail. Notice that the bad guy is Bruce Dern. He was fantastic in the lead role of the movie “Nebraska,” that came out a few months ago. Sorry about the commercials.
Posted on March 30th, 2014
One of the favorite hobbies of secular philosophers of late has been the fabrication of new and improved systems of morality. Perhaps the best known example is outlined in Sam Harris’ The Moral Landscape. If conscientiously applied, we are promised, they will usher in nebulous utopias in which a common thread is some version of “human flourishing.” We have already completed an experimental investigation of how these fancy theories work in practice. It was called Communism. Many eggs were broken to make that omelet, but the omelet never materialized. That unfortunate experience alone should be enough to dissuade us from poking a stick into the same hornet’s nest again.
The Communists were at least realistic enough to realize that their system wouldn’t work without a radical transformation in human behavior. For that to happen, our behavioral habits had to be almost infinitely malleable, a requirement that spawned many of the 20th century versions of the Blank Slate and perverted the behavioral sciences for more than half a century. Since it became clear, as Trotsky once put it rather euphemistically just before Stalin had him murdered, that Communism had “ended in a utopia,” most of the “not in our genes” crowd have either mercifully died or been dragged kicking and screaming back into the real world. Practitioners of the behavioral “sciences” are now at least generally agreed on the truth of the proposition, sufficiently obvious to any ten-year-old, that there actually is such a thing as human nature.
That hasn’t deterred the inventors of sure-fire new universal moralities. They seem to think that they can finesse the problem by persuading us that we should just ignore those aspects of our nature that stand in the way of “human flourishing.” It won’t work for them any more than it worked for the Communists. This stubborn fact was demonstrated yet again in rather amusing fashion on the occasion of the publication of a somewhat controversial book in Australia.
The title of the book was The Conservative Revolution by Cory Bernardi. The particular aspect of human nature that its release highlighted was our predisposition to adopt dual systems of morality, in which radically different rules apply depending on whether one is dealing with one’s ingroup or one’s outgroup. Robert Ardrey called the phenomenon the “Amity/Enmity Complex,” and it has played a profound and fundamental role in the endemic warfare our species has engaged in since time immemorial. The philosophy outlined in The Conservative Revolution would be familiar to most southern Republicans in the US. His ingroup is the Australian political right. In other words, he is positioned firmly in the outgroup of the political left. When he published the book, “warfare” was not long in coming.
The reaction of the leftist ingroup in Australia was furious. To characterize it as hysterical frothing at the mouth would be putting it mildly. The data demonstrating this enraged reaction has been kindly collected by the people at Amazon in the form of reader reviews of the book. As I write this, there are 554 of them, and virtually all of them, whether “five star” or “one star,” are literary reflections of a two-year-old’s temper tantrum. Here are some excerpts from some of the 421 “one star” reviews:
It’s only 178 pages long, and at the current price of just under $27, it’s quite expensive as well. So already one’s expectations are for a good quality product, given that it costs over 15 cents per page (or 30 cents per sheet, in other words). Just for comparison, my local Woolworths has toilet paper on sale for 20 cents per ONE HUNDRED sheets, or less than 1% the price per sheet of this book!!
It made an excellent liner for my bird cage. I love seeing my rainbow parakeets taking a dump on his head.
The Dark One hungers. In his pit of eternal hatred he squats in the darkness feeding on the screams of the weak. Soon, his blood tide reaches a peak and he will scourge the unbelievers.
…and so on. Here are some of the 105 “five star” reviews:
Many of the rituals I frequently practice – mostly summonings of minor demons – require ‘hate’ as an active ingredient. Before this book, I never really knew what to do. When I attempted to provide the hate myself, I found it difficult to focus and the rituals often went wrong (I even ended up losing a hand once, that was a pain to deal with). After that, I tried kidnapping some of my particularly nasty neighbours, but while that worked considerably better, it certainly wasn’t perfect – often fear would override the hate I needed, and of course I had to kill them afterwards, and disposing of all of the bodies was starting to get really annoying. Then this book came along, and it took away all of the hassle of finding hatred.
“Conservative Revolution” is the much-anticipated release by Cory Bestiality, after the success of his collaborative work on the ‘Real Solutions’ pamphlet. Effortlessly blending the Palaeofantasy, Historical Fiction and Political and Philosophical Satire genres, Bestiality creates a largely effective and revealing expose of the fallacies of Christian Fundamentalism and neoconservative ideology. Whilst lacking the insight and depth of ‘Real Solutions’, Bestiality’s new work is clearly inspired by similar writings, from Adolf Hitler’s stirring call to action, “Mein Kampf”, to Sarah Palin’s “Going Rogue”
Short and succinct! In just over 100 pages I learned that Adolf Hitler was a very moderate, balanced, caring and compassionate man in comparison to Corey Bernardi.
One wonders that there are so many people in Australia who trouble themselves to write such stuff. It’s certainly a tribute to the power of Ardrey’s “Complex.” The sheer irrationality of it is demonstrated by the fact that Bernardi is laughing all the way to the bank. The book has already gone to a second printing, and the publisher is rubbing his hands as copies fly off the book store shelves. The affair is just another data point swimming in an ocean of others, all pointing to a very fundamental truth; the outgroup have ye always with you.
Consider the ingroup responsible for composing most of these furious anathemas. It is the ingroup of the secular left, which lives in more or less the same ideological box in Australia as its analogs in Western Europe and North America. In other words, this stuff is coming from the very ingroup most busily engaged in cobbling together spiffy new moralities which are to be characterized by universal human brotherhood! Sorry my friends – no ingroup without an outgroup. Even if you ushered in the Brave New World of “human flourishing” by exterminating the very significant proportion of the population that agrees with Cory Bernardi, another outgroup would inevitably crop up to take its place. In the absence of an outgroup, it is our nature to simply create another one.
It’s hard to imagine a less promising ingroup to gladden the rest of us with “human flourishing” than the modern secular left. As Catholic philosopher Joseph Bottum notes in his book, An Anxious Age: The Post-Protestant Ethic and the Spirit of America, in the US these people are the direct descendants of the Puritans. The overbearing self-righteousness evident in these “book reviews” seems to confirm that assessment. They are saturated with a level of bile and hatred of the “other” that one normally expects to find only among religious fanatics. And according to Bottum, that is basically what they are. His take is summarized in a review of his book by David Goldman:
Joseph Bottum, by contrast, examines post-Protestant secular religion with empathy, and contends that it gained force and staying power by recasting the old Mainline Protestantism in the form of catechistic worldly categories: anti-racism, anti-gender discrimination, anti-inequality, and so forth. What sustains the heirs of the now-defunct Protestant consensus, he concludes, is a sense of the sacred, but one that seeks the security of personal salvation through assuming the right stance on social and political issues. Precisely because the new secular religion permeates into the pores of everyday life, it sustains the certitude of salvation and a self-perpetuating spiritual aura. Secularism has succeeded on religious terms. That is an uncommon way of understanding the issue, and a powerful one.
Perhaps “human flourishing” would be a bit more plausible if we were all Benjamin Franklins, or Abraham Lincolns, or even Neville Chamberlains. As William Shakespeare put it in Twelfth Night, “Anything but a devil of a Puritan.”
Posted on March 22nd, 2014
There is none. An atheist is someone who doesn’t believe in a God or gods, period! Somehow, that simple definition just never seems to register in the minds of large cohorts of atheists and believers alike. Take, for example, Theo Hobson, who supplies us with his own, idiosyncratic definition in a piece entitled “Atheism is an Offshoot of Deism” that recently turned up in the Guardian:
Atheism is less distinct from deism than it thinks. It inherits the semi-Christian assumptions of this creed.
Atheism derives from religion? Surely it just says that no gods exist, that rationalism, or ‘scientific naturalism’, is to be preferred to any form of supernaturalism. Actually, no: in reality what we call atheism is a form of secular humanism; it presupposes a moral vision, of progressive humanitarianism, of trust that universal moral values will triumph. (Of course there is also the atheism of Nietzsche, which rejects humanism, but this is not what is normally meant by ‘atheism’).
So what we know as atheism should really be understood as an offshoot of deism. For it sees rationalism as a benign force that can liberate our natural goodness. It has a vision of rationalism saving us, uniting us.
Sorry, but I beg to differ with you Theo. There certainly are many delusional atheists who embrace such a “moral vision,” but the notion that all of them do is nonsense. For example, I reject any such “moral vision,” which Michael Rosen accurately described as “Religion Lite.” If you’ll trouble yourself to read the comments after your article, you’ll see I’m not alone. For example, from commenter Whiterthanwhite,
So is my Afairyism an offshoot of my five-year-old’s belief in fairies? Is my Afatherchristmasism an offshoot of her belief in Father Christmas?
Topher chimes in,
Indeed. Certain (rather more arrogant) religious people insist on seeing atheism as a reflection of theism rather than a rejection of it. It makes them feel better I guess, but of course is absolutely misguided.
Yes what a bag of bollocks this is. Atheism is an ‘offshoot’ of deism the way that absence is an offshoot of presence. It seems that what theists can’t stand about atheism is the sheer absence of belief. Get over it.
Can you, and others like you, please stop talking about atheism as if it were a belief system? I don’t believe in God. Doesn’t mean I subscribe to whatever incoherent, ill-thought-out Humanism you’re passing off as philosophy.
There are many similar comments, but as noted above, it never seems to register, even among some atheists. Follow an atheist website long enough, and you’re sure to run across commenters who insist on associating atheism with veganism, progressivism, schemes for gladdening us with assorted visions of “human flourishing,” and miscellaneous secular Puritans of all stripes. No. I don’t think so! Atheism doesn’t even come pre-packaged with “scientific rationalism.” It is merely the absence of a belief in a God or gods – Period! Aus! Schluss! Basta!
If any word is long overdue for a re-definition, it’s “religion,” not “atheism.” Instead of being rigidly associated with theism, it should embrace all forms of belief in imaginary, supernatural entities, or at least those with normative powers. In particular, in addition to a God or gods, it should include belief in such things as Rights, Good, and Evil as things-in-themselves, independent of the subjective impressions of them that exist in the minds of individuals.
Among other things, such a re-definition would add a certain coherence to theories according to which the predisposition to embrace “religion” is an evolved behavior. I rather doubt that we’ll eventually find something quite so specific as “You shall believe in supernatural beings!” hard-wired in our brains. On the other hand, there may be predispositions that make it substantially more likely that belief in such beings will follow once a certain level of intelligence is reached. I suspect that the origins of secular religions such as Communism will eventually be found by rummaging about in the very same behavioral baggage. I’m not the only one who’s seen the affinity. Many others have spoken of the “popes,” “bishops,” and “priesthood” of Communism and its antecedents, for almost as long as they’ve been around.
In any case, not all atheists are secular Puritans who embrace these various versions of “religion lite.” I personally hope our species will eventually grow up enough to jettison them along with the older editions. Darwin immediately grasped the truth, as did many others since him. It follows immediately from his theory. Evolved behavioral traits are the ultimate cause for the existence of morality and the perception of such subjective entities as Good and Evil that go with it. That is the simple truth, and it follows that belief in the existence of Good and Evil as objective things with some kind of a legitimate, independent normative power, whether one’s tastes run to the versions preferred by the “heavy” or “lite” versions of religion, is a chimera.
Does that mean it’s time to jettison morality? No, sorry, our species doesn’t have that option. We will continue to act morally in spite of the vociferous objections of legions of philosophers, because it is our nature to act morally. It’s a “good” thing, too, because even if morality isn’t “real,” we would have a very hard time getting along without it. On the other hand, we do have the option of recognizing the pathologically self-righteous among us for the charlatans they are.
Posted on March 2nd, 2014
The notion that the suite of behavioral traits we associate with morality is dual in nature goes back at least a century. It was first formalized back in the 40’s by Sir Arthur Keith, who wrote,
Human nature has a dual constitution; to hate as well as to love are parts of it; and conscience may enforce hate as a duty just as it enforces the duty of love. Conscience has a two-fold role in the soldier: it is his duty to save and protect his own people and equally his duty to destroy their enemies… Thus conscience serves both codes of group behavior; it gives sanction to practices of the code of enmity as well as the code of amity.
Seeing that all social animals behave in one way to members of their own community and in an opposite manner to those of other communities, we are safe in assuming that early humanity, grouped as it was in the primal world, had also this double rule of behavior. At home they applied Huxley’s ethical code, which is Spencer’s code of amity; abroad their conduct was that of Huxley’s cosmic code, which is Spencer’s code of enmity.
Robert Ardrey combined the two words and gave them a Freudian twist to come up with a term for the phenomenon. Writing during the heyday of the Blank Slate, long before Sociobiology was even a twinkle in E. O. Wilson’s eye, he called it the Amity/Enmity Complex. The truth that it exists is highly corrosive to utopian schemes for “human flourishing” of all stripes, from Communism to the more up-to-date versions favored by the likes of Sam Harris and Joshua Greene. As a result, it is also a truth that has been furiously resisted, obvious explanation though it is for the warfare that has been such a ubiquitous aspect of our history as well as such phenomena as racism, religious bigotry, homophobia, etc., all of which are really just different varieties of outgroup identification.
The current situation in Ukraine, dangerous though it is, presents us with a splendid laboratory for studying the Complex. The underlying manifestation is, of course, nationalism, a form of ingroup identification that has been a thorn in our collective sides since the French Revolution. It was the inspiration for the panacea of “national self-determination” after World War I, based on the disastrously flawed assumption that nice, clean national boundaries could be drawn that would all enclose so many pristine, pure ethnic states. In reality, no such pristine territories existed, and “national self-determination” became a vehicle for the persecution of ethnic minorities all over Europe. Human nature hasn’t changed, and it continues to function in the same way today. For example, in the immediate aftermath of the overthrow of Yanukovych, the Ukrainian majority ingroup quickly began acting like one. A Jewish community center and synagogue were firebombed. The new rump parliament almost immediately voted to eliminate Russian as an official language, relegating Russian speakers to the status of second class citizens.
Such high-handed actions were virtually ignored in western Europe and the United States, where the collective memory of the Russians as outgroup, still strong more than two decades after the fall of Communism, ensured that the Ukrainian nationalists would be perceived as the “good guys.” The western powers were oblivious to the fact that they had quite recently established a precedent by collaborating in the chopping off of a piece of Serbia and handing it to an ethnic minority. They also ignored such theoretical niceties as the principle that, if the Ukrainian majority in the west of the country had a right to vote itself special privileges there, the Russian majority in the east must have similar rights in its own territories. Instead, they began a war of words against the Russians, claiming that the occupation of the Crimea and protection of the Russian majority there was unheard of.
In a word, there is nothing rational about what is going on in Ukraine unless one takes into account the behavioral idiosyncrasies of our species that predispose us to perceive the world in terms of ingroups and outgroups. Such manifestations are hardly unique to Ukraine. Just look around on the Internet a little. It’s full of a bewildering array of ideological, religious, ethnic, and political ingroups, all busily engaged in cementing “amity” within the group, and all with comment sections full of “enmity” directed at their respective outgroups in the form of furious anathemas and denunciations.
The “Complex” is inseparable from human moral behavior. No morality will ever exist that doesn’t come complete with its own outgroup. Think we can “expand our consciousness” until our ingroup includes all mankind? Dream on! The “other” will always be there. The “Complex” is the main reason that puttering away at new moralities is so dangerous. No matter how idealistic their intentions, the outgroup will always remain. We saw how that worked in the 20th century, with the annihilations of millions in the Jewish outgroup of the Nazis, and millions more in the “bourgeois” outgroup of the Communists. Before we start playing with fire again, it would probably behoove us to finally recognize the ways in which it can burn us.
Posted on December 1st, 2013
Morality exists because of evolved behavioral traits. They are its ultimate cause. Without them, morality as we know it, in all of its various complex manifestations, would cease to exist. Without them, the subjective perception in the brains of individuals of such things as good, evil, and rights would disappear as well. We perceive all of these as objects, as independent things-in-themselves, because individuals who perceived them in that way were more likely to survive and reproduce. However, they do not exist as things-in-themselves, a fact that has led to endless confusion in creatures such as ourselves, who are capable of reasoning about these nonexistent objects that seem so real.
It follows that, in spite of the legions of philosophers over the centuries who have presumed to enlighten us about the objective “should,” such an entity is as imaginary as unicorns. There is no objective reason why individuals “should” do anything in order to embrace good and reject evil, because good and evil are not objects. The same applies to the State. From a moral point of view (and it can be assumed in what follows that I am speaking of that point of view when I use the term “should”), there is no objective reason why the State should act one way in order to be good, or should not act another way in order to avoid evil. When an individual says that the state should do one thing, and not another, (s)he is simply expressing a personal desire. That, of course, applies to my own point of view. When I speak of what the State should or should not do, I am merely expressing a personal opinion, based on my own conjecture about the kind of state I would like to live in.
In the first place, we can say that there is no essential connection between the modern State and morality, because no such entity as the modern State existed during the time over which the behavioral traits we associate with morality evolved. However, a State that does not take morality into account is unlikely to be effective at achieving the goals its citizens have set for it, because it is the nature of those citizens to be influenced by moral predispositions. If a sufficient number of them perceive that the State is acting immorally, or violating what seem to them to be their rights, they may resist its laws, or rebel.
If the State is to act “morally,” does it follow that there should be an establishment of religion, whether of the spiritual or the secular variety? Based on the empirical evidence of our history, and what I know of human behavior, it seems to me that it does not. The value to the state of an established moral system lies in its potential for welding all its citizens into a single ingroup. It seems plausible that a single ingroup would be more effective at achieving the common goals of a State’s citizens than a collection of distinct ingroups, each of which might perceive one or more of the others as outgroups. In such cases the expression of hatred and hostility towards the outgroup(s) would likely be disruptive.
Unfortunately, established moral systems throughout history have all tended to be unstable and counterproductive. From the time Christianity became the state religion of the Roman Empire until the final fall of its Byzantine remnant, there was constant strife between Trinitarians and Anti-Trinitarians, iconodules and iconoclasts, those who accepted the Three Chapters and those who condemned them, etc. Later attempts to preserve single ingroup orthodoxy spawned the massacre of the Albigensians, the long decades of the Hussite wars, the century of intermittent warfare between the Catholics and the Huguenots in France, and many another bloody chapter in human history. Established religions became instruments of exploitation in the hands of the powerful, resulting in the bloody reprisals of the French Revolution, the Spanish Civil War, etc. A problem with established religions has always been that people cannot change deeply held beliefs at will, and they resent being forced to pretend they believe things when they don’t. Typically, force is necessary to suppress that resentment, as we have seen in modern Iran. The “right” of freedom of religion is basically a recognition of these drawbacks.
The more recent secular religions have fared no better. The two most familiar examples of the 20th century, Communism and Nazism, both found it necessary to brutally suppress any opposition. The “great rewards” of such religions, whether in the form of a utopian classless society or a Teutonic golden age, are worldly rather than in the great beyond, and eventually become conspicuous by their absence. All moral systems have outgroups as well as ingroups and, in the case of the secular religions, these also tend to be worldly rather than spiritual. In the case of the Communists and the Nazis, this led to the mass slaughter of the “bourgeoisie” and the Jews, respectively, robbing the State of many citizens, who often happened to be among the most intelligent and productive. It would seem that these two dire examples would be enough in themselves to deter us from any further experiments along similar lines. Remarkably, however, as those who have read the books of the likes of Sam Harris and Joshua Greene are aware, we continue to cobble away on new “scientific” versions, seemingly oblivious to the outcomes of our past attempts.
As an anodyne to all these problems, the philosophers of the Enlightenment sought to limit the power of the State by establishing Rights, such as freedom of religion, freedom of speech, freedom of assembly, etc. While these Rights are not things-in-themselves, they are perceived as such. Though they are merely subjective constructs, they can still acquire legitimacy if they are generally accepted and hallowed by tradition. Democracy was held forth as the proper vehicle for promoting these rights and guarding against the abuse of power by autocratic rulers. As implemented, modern democracies have hardly been perfect, but they have been more stable than autocratic forms of government, and have often, although not invariably, survived such challenges as hard economic times and war. However, their drawbacks are also clearly visible. For example, they have recently proved powerless to resist the massive influx of culturally alien populations, which are far more likely to become a source of future civil strife, if not worse, than to be of any long-term benefit to the existing citizens whose welfare these democratic states are supposed to protect. Such populations benefit elites as a source of votes and cheap labor, but are likely to be harmful to society as a whole in the long term. In short, the jury is still out as to whether the post-Enlightenment democracies will eventually be perceived as Good or Evil.
It is not clear what alternative system, if any, might actually be better than democracy. The Chinese oligarchy has certainly had remarkable success in expanding the economic and military power of that country. However, its legitimacy is based on its supposed representation of the bankrupt, foreign ideology of Marxism. Even so, in a traditionalist country like China it may hold onto “the mandate of heaven” for a long time in spite of the glaring contradictions between its professed ideology and its practice.
In general, “virtuous” states – those free of corruption, that do not cheat or steal from their citizens, and that are effective in enforcing laws that are perceived as just – are more effective at promoting the common weal than their opposites. Heraclitus’ dictum that “character is destiny” likely applies to states as well as individuals. I personally think that states are far more likely to be “virtuous” in that sense if their powers are carefully circumscribed and limited. Whenever new moral systems are implemented, “scientific” or otherwise, those limits tend to be dissolved. When it comes to the State, it is probably better to think in terms of “Thou shalt not” than in terms of “Thou shalt.” Two that come to mind include Thou shalt not kill (except, as Voltaire suggested, in large numbers and to the sound of trumpets), and Thou shalt not torture.
Posted on August 27th, 2013 No comments
While strolling through the local Barnes and Noble the other day, I decided to see what I could find on the shelves by Solzhenitsyn. There were two thin copies of One Day in the Life of Ivan Denisovich. That’s it! No Cancer Ward, no The First Circle, and, most depressing of all, no copies of The Gulag Archipelago. So much for the work chosen by Time Magazine as the “best non-fiction book of the 20th century.” If it were up to me, a copy would be in every hotel room along with the Bible. Communism was the greatest secular religion of all time. It came complete with its own “scientific” morality, and by the time it had finished eradicating “evil” in the world in order to clear the way for “good,” it had claimed tens of millions of victims, shot, tortured, and starved to death under conditions of almost inconceivable brutality. Solzhenitsyn was an eyewitness, and Gulag records the accounts of many others.
It would seem, assuming we place any value on our own survival, that every one of us should know something about these events, including historical background, and have more than a vague idea of what happened to some of the individual victims. Millions of those victims, typically including the most intelligent and productive members of the societies in which these events occurred, were murdered in the death cellars and camps, all within living memory and in a relatively short space of time, by the zealots of a religion whose God was a future utopia here on earth rather than a superman in the sky. It is hardly out of the question that something similar could happen again. Assuming we want to avert that possibility, would it not be useful to understand how it happened in the past?
Instead of taking heed and learning from the past when it comes to Communism, we seem to be afflicted with a remarkable level of historical myopia. It’s as if we just wanted to forget the whole subject. Why? Our children are drenched with victimology in our schools and universities, learning versions of history that are often one-sided and distorted. Somehow the Communists are off limits as victimizers. Human morality works in wondrous ways.
I suspect one of the reasons for the blind spot when it comes to Communism is the fact that too many connections still exist to people who collaborated in the crime. For example, Hollywood cheerfully promoted the new faith in the 30’s in spite of the fact that the crimes Solzhenitsyn chronicled were already happening in plain sight. The notion that Communism in Hollywood was just a myth concocted in the fevered imaginations of delusional latter day John Birchers is nonsense. Many stars, directors, and writers made no secret of their Communist connections, and were perfectly open about their promotion of the “great cause.” Their support was a matter of pride, not some guilty secret. Thumb through the copies of The American Mercury for, say, 1939 and 1940, if you seriously believe the whole episode was just a McCarthyite fairy tale. Today we are expected to wring our hands over the fates of those who ended up on the blacklist, suffering damage to their precious careers, but ignore the victims, many thousands of times greater in number, who were arrested and murdered to fill some apparatchik’s quota of bodies while these same stooges cheered on their murderers.
The identity of history’s “victims” is entirely dependent on who is telling the story. We gain some insight into the political complexion of the storytellers from Malcolm Muggeridge in his book, The Thirties, where he writes that, at the beginning of the decade it was rare to find a university professor who was a Marxist, but at its end it was hard to find one who wasn’t. Of course, British intellectuals prided themselves on being way ahead of their dense American cousins in that regard. Fellow travelers were hardly a rarity in many other professions, and they made no secret of their political affinities. They were reliable shills for Stalin, keeping a perfectly straight face during the Great Purge Trials, and swallowing any propaganda he saw fit to feed them, no matter how absurd. Stalin was gone in the 60’s and 70’s, but the intellectual descendants of his earlier apologists were still there, loudly cheering the likes of Ho Chi Minh, Mao Zedong, and Pol Pot. Many of them are still around. Obviously, it is more pleasing for them to pose as the saviors of other victims than to dredge up inconvenient truths about the ones they helped to bury.
The Gulag Archipelago should be required reading for every student in every public high school in the country. If it were necessary for them to learn about the millions who were shot in the back of the head, or had their teeth knocked out and their genitals crushed in brutal interrogations, or were slowly starved or frozen to death in the squalid islands of the Archipelago, perhaps there might be some slight reduction in the chances that they and their children will suffer a similar fate. It’s not likely to happen, though. Instead, if the data point of my local Barnes and Noble is in any way representative, “the best non-fiction book of the 20th century” is being gradually forgotten. Victims are all the rage; just not the victims of Communism, by far the largest and most savagely brutalized class of victims in human history. I suppose I shouldn’t be surprised. As Stalin so astutely pointed out, “The death of one man is a tragedy. The death of millions is a statistic.”