Posted on August 14th, 2016
“Moral progress” is impossible. It is a concept that implies progress towards a goal that doesn’t exist. We exist as a result of evolution by natural selection, a process that has simply happened. Progress implies the existence of an entity sufficiently intelligent to formulate a goal or purpose towards which progress is made. No such entity has directed the process, nor did one even exist over most of the period during which it occurred. The emotional predispositions that are the root cause of what we understand by the term “morality” are as much an outcome of natural selection as our hands or feet. Like our hands and feet, they exist solely because they have enhanced the probability that the genes responsible for their existence would survive and reproduce. Among the “experts on ethics” in our midst, there is increasing acceptance of the fact that morality owes its existence to evolution by natural selection. However, as a rule they have been incapable of grasping the obvious implication of that fact: that the notion of “moral progress” is a chimera. It is a truth that has been too inconvenient for them to bear.
It’s not difficult to understand why. Their social gravitas and often their very livelihood depend on propping up the illusion. This is particularly true of the “experts” in academia, who often lack marketable skills other than their “expertise” in something that doesn’t exist. Their modus operandi consists of hoodwinking the rest of us into believing that satisfying some whim that happens to be fashionable within their tribe represents “moral progress.” Such “progress” has no more intrinsic value than a five year old’s progress towards acquiring a lollipop. Often it can be reasonably expected to lead to outcomes that are the opposite of those that account for the existence of the whim to begin with, resulting in what I have referred to in earlier posts as a morality inversion. Propping up the illusion in spite of recognition of the evolutionary roots of morality in a milieu that long ago dispensed with the luxury of a God with a big club to serve as the final arbiter of what is “really good” and “really evil” is no mean task. Among other things it requires some often amusing intellectual contortions as well as the concoction of an arcane jargon to serve as a smokescreen.
Consider, for example, a paper by Professors Allen Buchanan and Russell Powell entitled Toward a Naturalistic Theory of Moral Progress. It turned up in the journal Ethics, that ever-reliable guide to academic fashion touching on the question of “human flourishing.” Far from denying the existence of human nature after the fashion of the Blank Slaters of old, the authors positively embrace it. They cheerfully admit its relevance to morality, noting in particular the existence of a predisposition in our species to perceive others of our species in terms of ingroups and outgroups; what Robert Ardrey used to call the Amity/Enmity Complex. Now, if these things are true, and absent the miraculous discovery of some contributing “root cause” for morality other than evolution by natural selection, whether in this world or the realm of spirits, it follows logically that “progress” is a term that can no more apply to morality than it does to evolution by natural selection itself. It further follows that objective Good and objective Evil are purely imaginary categories. In other words, unless one is merely referring to the scientific investigation of evolved behavioral traits, “experts on ethics” are experts about nothing. Their claim to possess a philosopher’s stone pointing the way to how we should act is a chimera. For the last several thousand years they have been involved in a sterile game of bamboozling the rest of us, and themselves to boot.
Predictably, the embarrassment and loss of gravitas, not to mention the loss of a regular paycheck, implied by such a straightforward admission of the obvious has been more than the “experts” could bear. They’ve simply gone about their business as if nothing had happened, and no one had ever heard of a man named Darwin. It’s actually been quite easy for them in this puritanical and politically correct age, in which the intellectual life and self-esteem of so many depends on maintaining a constant state of virtuous indignation and moral outrage. Virtuous indignation and moral outrage are absurd absent the existence of an objective moral standard. Since nothing of the sort exists, it is simply invented, and everyone stays outraged and happy.
In view of this pressing need to prop up the moral fashions of the day, then, it follows that no great demands are placed on the rigor of modern techniques for concocting real Good and real Evil. Consider, for example, the paper referred to above. The authors go to a great deal of trouble to assure their readers that their theory of “moral progress” really is “naturalistic.” In this enlightened age, they tell us, they will finally be able to steer clear of the flaws that plagued earlier attempts to develop secular moralities. These all rested on false assumptions “based on folk psychology, flawed attempts to develop empirically based psychological theories, a priori speculation, and reflections on history hampered both by a lack of information and inadequate methodology.” “For the first time,” they tell us, “we are beginning to develop genuinely scientific knowledge about human nature, especially through the development of empirical psychological theories that take evolutionary biology seriously.” This raises the question, of course, of how we’ve managed to avoid acquiring “scientific knowledge about human nature” and “taking evolutionary biology seriously” for so long. But I digress. The important question is, how do the authors manage to establish a rational basis for their “naturalistic theory of moral progress” while avoiding the Scylla of “folk psychology” on the one hand and the Charybdis of “a priori speculation” on the other? It turns out that the “basis” in question hardly demands any complex mental gymnastics. It is simply assumed!
Here’s the money passage in the paper:
A general theory of moral progress could take a more or less ambitious form. The more ambitious form would be to ground an account of which sorts of changes are morally progressive in a normative ethical theory that is compatible with a defensible metaethics… In what follows we take the more modest path: we set aside metaethical challenges to the notion of moral progress, we make no attempt to ground the claim that certain moralities are in fact better than others, and we do not defend any particular account of what it is for one morality to be better than another. Instead, we assume that the emergence of certain types of moral inclusivity are significant instances of moral progress and then use these as test cases for exploring the feasibility of a naturalized account of moral progress.
This is indeed a strange approach to being “naturalistic.” After excoriating the legions of thinkers before them for their faulty mode of hunting the philosopher’s stone of “moral progress,” they simply assume it exists. It exists in spite of the elementary chain of logic leading inexorably to the conclusion that it can’t possibly exist if their own claims about the origins of morality in human nature are true. In what must count as a remarkable coincidence, it exists in the form of “inclusivity,” currently in high fashion as one of the shibboleths defining the ideological box within which most of today’s “experts on ethics” happen to dwell. Those who trouble themselves to read the paper will find that, in what follows, it is hardly treated as a mere modest assumption, but as an established, objective fact. “Moral progress” is alluded to over and over again as if, by virtue of this original, “modest assumption,” the real thing somehow magically popped into existence in the guise of “inclusivity.”
Suppose we refrain from questioning the plot, and go along with the charade. If inclusivity is really to count as moral progress, then it must not only be desirable in certain precincts of academia, but actually feasible. However, if, as the authors agree, humans are predisposed to perceive others of their species in terms of ingroups and outgroups, the feasibility of inclusivity is at least in question. As the authors put it,
Attempts to draw connections between contemporary evolutionary theories of morality and the possibility of inclusivist moral progress begin with the standard evolutionary psychological assertion that the main contours of human moral capacities emerged through a process of natural selection on hunter-gatherer groups in the Pleistocene – in the so-called environment of evolutionary adaptation (EEA)… The crucial claim, which leads some thinkers to draw a pessimistic inference about the possibility of inclusivist moral progress, is that selection pressures in the EEA favored exclusivist moralities. These are moralities that feature robust moral commitments among group members but either deny moral standing to outsiders altogether, relegate out-group members to a substantially inferior status, or assign moral standing to outsiders contingent on strategic (self-serving) considerations.
No matter, according to the authors, this flaw in our evolved moral repertoire can be easily fixed. All we have to do is lift ourselves out of the EEA, achieve universal prosperity so great and pervasive that competition becomes unnecessary, and the predispositions in question will simply fade away, more or less like the state under Communism. Invoking that wonderful term “plasticity,” which seems to pop up with every new attempt to finesse human behavioral traits out of existence, they write,
According to an account of exclusivist morality as a conditionally expressed (adaptively plastic) trait, the suite of attitudes and behaviors associated with exclusivist tendencies develop only when cues that were in the past highly correlated with out-group threat are detected.
In other words, it is the fond hope of the authors that, if only we can make the environment in which inconvenient behavioral predispositions evolved disappear, the traits themselves will disappear as well! They go on to claim that this has actually happened, and that,
…exclusivist moral tendencies are attenuated in populations inhabiting environments in which cues of out-group threat are absent.
Clearly we have seen a vast expansion in the number of human beings that can be perceived as ingroup since the Pleistocene, and the inclusion as ingroup of racial and religious categories that once defined outgroups. There is certainly plasticity in how ingroups and outgroups are actually defined and perceived, as one might expect of traits evolved during times of rapid environmental change in the nature of the “others” one happened to be in contact with or aware of at any given time. However, this hardly “proves” that the fundamental tendency to distinguish between ingroups and outgroups itself will disappear or is likely to disappear in response to any environmental change whatever. Perhaps the best way to demonstrate this is to refer to the paper itself.
Clearly the authors imagine themselves to be “inclusive,” but is that really the case? Hardly! It turns out they have a very robust perception of outgroup. They’ve merely fallen victim to the fallacy that it “doesn’t count” because it’s defined in ideological rather than racial or religious terms. Their outgroup may be broadly defined as “conservatives.” These “conservatives” are mentioned over and over again in the paper, always in the guise of the bad guys who are supposed to reject inclusivism and resist “moral progress.” To cite a few examples,
We show that although current evolutionary psychological understandings of human morality do not, contrary to the contentions of some authors, support conservative ethical and political conclusions, they do paint a picture of human morality that challenges traditional liberal accounts of moral progress.
…there is no good reason to believe conservative claims that the shift toward greater inclusiveness has reached its limit or is unsustainable.
These “evoconservatives,” as we have labeled them, infer from evolutionary explanations of morality that inclusivist moralities are not psychologically feasible for human beings.
At the same time, there is strong evidence that the development of exclusivist moral tendencies – or what evolutionary psychologists refer to as “in-group assortative sociality,” which is associated with ethnocentric, xenophobic, authoritarian, and conservative psychological orientations – is sensitive to environmental cues…
and so on, and so on. In a word, although the good professors are fond of pointing with pride to their vastly expanded ingroup, they have rather more difficulty seeing their vastly expanded outgroup, more or less like the difficulty we have seeing the nose at the end of our face. The fact that the conservative outgroup is perceived with as much fury, disgust, and hatred as ever a Grand Dragon of the Ku Klux Klan felt for blacks or Catholics can be confirmed by simply reading through the comment section of any popular website of the ideological Left. Unless professors employed by philosophy departments live under circumstances more reminiscent of the Pleistocene than I had imagined, this bodes ill for their theory of “moral progress” based on “inclusivity.” More evidence that this is the case is readily available to anyone who cares to search the philosophy department of the local university for “diversity” in the form of even a single professor who could be described as conservative by any stretch of the imagination.
I note in passing another passage in the paper that demonstrates the fanaticism with which the chimera of “moral progress” is pursued in some circles. Again quoting the authors,
Some moral philosophers whom we have elsewhere called “evoliberals,” have tacitly affirmed the evo-conservative view in arguing that biomedical interventions that enhance human moral capacities are likely to be crucial for major moral progress due to evolved constraints on human moral nature.
In a word, the delusion of moral progress is not necessarily just a harmless toy for the entertainment of professors of philosophy, at least as far as those who might have some objection to “biomedical interventions” carried out by self-appointed “experts on ethics” are concerned.
What’s the point? The point is that we are unlikely to make progress of any kind without first accepting the truth about our own nature, and the elementary logical implications of that truth. Darwin saw them, Westermarck saw them, and they are far more obvious today than they were then. We continue to ignore them at our peril.
Posted on June 5th, 2016
It’s heartening to learn that there is a serious basis for recent speculation to the effect that the science of animal cognition may gradually advance to a level long familiar to any child with a pet dog. Frans de Waal breaks the news in his latest book, Are We Smart Enough to Know How Smart Animals Are? In answer to his own question, de Waal writes,
The short answer is “Yes, but you’d never have guessed.” For most of the last century, science was overly cautious and skeptical about the intelligence of animals. Attributing intentions and emotions to animals was seen as naïve “folk” nonsense. We, the scientists, knew better! We never went in for any of this “my dog is jealous” stuff, or “my cat knows what she wants,” let alone anything more complicated, such as that animals might reflect on the past or feel one another’s pain… The two dominant schools of thought viewed animals as either stimulus-response machines out to obtain rewards and avoid punishment or as robots genetically endowed with useful instincts. While each school fought the other and deemed it too narrow, they shared a fundamentally mechanistic outlook: there was no need to worry about the internal lives of animals, and anyone who did was anthropomorphic, romantic and unscientific.
Did we have to go through this bleak period? In earlier days, the thinking was noticeably more liberal. Charles Darwin wrote extensively about human and animal emotions, and many a scientist in the nineteenth century was eager to find higher intelligence in animals. It remains a mystery why these efforts were temporarily suspended, and why we voluntarily hung a millstone around the neck of biology.
Here I must beg to differ with de Waal. It is by no means a “mystery.” This “mechanization” of animals in the sciences was more or less contemporaneous with the Blank Slate debacle, and was motivated by more or less the same ideological imperatives. I invite readers interested in the subject to consult the first few chapters of Robert Ardrey’s African Genesis, published as far back as 1961. Noting a blurb in Scientific American by Marshall Sahlins, more familiar to later readers as a collaborator in the slander of Napoleon Chagnon, to the effect that,
There is a quantum difference, at points a complete opposition, between even the most rudimentary human society and the most advanced subhuman primate one. The discontinuity implies that the emergence of human society required some suppression, rather than direct expression, of man’s primate nature. Human social life is culturally, not biologically determined.
Ardrey, that greatest of all debunkers of the Blank Slate, continues,
Dr. Sahlins’ conclusion is startling to no one but himself. It is a scientific restatement, 1960-style, of the philosophical conclusion of an eighteenth-century Neapolitan monk (Giambattista Vico, ed.): Society is the work of man. It is just another prop, fashioned in the shop of science’s orthodoxies from the lumber of Zuckerman’s myth, to support the fallacy of human uniqueness.
The Zuckerman Ardrey refers to is anthropologist Solly Zuckerman. I invite anyone who doubts the fanaticism with which “science” once insisted on the notion of human uniqueness alluded to in de Waal’s book to read some of Zuckerman’s papers. For example, in The Social Life of Monkeys and Apes, he writes,
It is now generally recognized that anthropomorphic preoccupations do not help the critical development of knowledge, either in fields of physical or biological inquiry.
He exulted in the great “advances” science had made in correcting the “mistakes” of Darwin:
The Darwinian period, in which animal behavior as a distinct study was born, was one in which anthropomorphic interpretation flourished. Anecdotes were regarded in the most generous light, and it was believed that many animals were highly rational creatures, possessed of exalted ethical codes of social behavior.
According to Zuckerman, “science” had now discovered that the very notion of animal “intelligence” was absurd. As he put it,
Until 1890, the study of the social behavior of mammals developed hand in hand with the study of their “intelligence,” and both subjects were usually treated in the same books.
Such comments, which are ubiquitous in the literature of the Blank Slate era, make it hard to understand how de Waal can still be “mystified” about the motivation for the “scientific” denial of animal intelligence. Be that as it may, he presents a wealth of data derived from recent experiments and field studies debunking all the lingering rationale for claims of human uniqueness one by one, whether it be the ability to experience emotion, a “theory of mind,” social problem solving ability, ability to contemplate the past and future, or even consciousness. In the process he documents the methods “science” used to hermetically seal itself off from reality, such as the invention of pejorative terms like “anthropomorphism” to denounce and dismiss anyone who dared to challenge the human uniqueness orthodoxy, and the rejection of all evidence not supplied by members of the club as mere “anecdotes.” In the process he notes,
Needing a new term to make my point, I invented anthropodenial, which is the a priori rejection of humanlike traits in other animals or animallike traits in us.
It’s hard to imagine that anyone could seriously believe that “science” consists of fanatically rejecting similarities between human and animal behavior that are obvious to everyone but “scientists” as “anthropomorphism” and “anecdotes,” and of assuming a priori that they’re of no significance until it can be absolutely proven that everyone else was right all along. This does not strike me as a “parsimonious” approach.
Not the least interesting feature of de Waal’s latest is his “rehabilitation” of several important debunkers of the Blank Slate who were unfortunate enough to publish before the appearance of E. O. Wilson’s Sociobiology in 1975. According to the fairy tale that currently passes for the “history” of the Blank Slate, before 1975 “darkness was on the face of the deep.” Only then did Wilson appear on the scene as the heroic slayer of the Blank Slate dragon. A man named Robert Ardrey was never heard of, and anyone mentioned in his books as an opponent of the Blank Slate before the Wilson “singularity” is to be ignored. The most prominent of them all, a man on whom the anathemas of the Blank Slaters often fell, literally in the same breath as Ardrey, was Konrad Lorenz. Sure enough, in Steven Pinker’s fanciful “history” of the Blank Slate, Lorenz is dismissed, in the same paragraph with Ardrey, no less, as “totally and utterly wrong,” and a delusional believer in “archaic theories such as that aggression was like the discharge of a hydraulic pressure.” De Waal’s response must be somewhat discomfiting to the promoters of Pinker’s official “history.” He simply ignores it!
Astoundingly enough, de Waal speaks of Lorenz as one of the great founding fathers of the modern sciences of animal behavior and cognition. In other words, he tells the truth, as if it had never been disputed in any bowdlerized “history.” Already at the end of the prologue we find the matter-of-fact observation that,
…behavior is, as the Austrian ethologist Konrad Lorenz put it, the liveliest aspect of all that lives.
Reading on, we find that this mention of Lorenz wasn’t just an anomaly designed to wake up drowsy readers. In the first chapter we find de Waal referring to the field of phylogeny,
…when we trace traits across the evolutionary tree to determine whether similarities are due to common descent, the way Lorenz had done so beautifully for waterfowl.
A few pages later he writes,
The maestro of observation, Konrad Lorenz, believed that one could not investigate animals effectively without an intuitive understanding grounded in love and respect.
and notes, referring to the behaviorists, that,
The power of conditioning is not in doubt, but the early investigators had totally overlooked a crucial piece of information. They had not, as recommended by Lorenz, considered the whole organism.
And finally, in a passage that seems to scoff at Pinker’s “totally and utterly wrong” nonsense, he writes,
Given that the facial musculature of humans and chimpanzees is nearly identical, the laughing, grinning, and pouting of both species likely goes back to a common ancestor. Recognition of the parallel between anatomy and behavior was a great leap forward, which is nowadays taken for granted. We all now believe in behavioral evolution, which makes us Lorenzians.
Stunning, really, for anyone who’s followed what’s been going on in the behavioral and animal sciences for any length of time. And that’s not all. Other Blank Slate debunkers who published long before Wilson, like Niko Tinbergen and Desmond Morris, are mentioned with a respect that belies the fact that they, too, were once denounced by the Blank Slaters as right wing fascists and racists in the same breath with Lorenz. I have a hard time believing that someone as obviously well read as de Waal has never seen Pinker’s The Blank Slate. I honestly don’t know what to make of the fact that he can so blatantly contradict Pinker, and yet never trouble himself to mention even the bare existence of such a remarkable disconnect. Is he afraid of Pinker? Does he simply want to avoid hurting the feelings of another member of the academic tribe? I must leave it up to the reader to decide.
And what of Ardrey, who brilliantly described both “anthropodenial” and the reasons that it was by no means a “mystery” more than half a century before the appearance of de Waal’s latest book? Will he be rehabilitated, too? Don’t hold your breath. Unlike Lorenz, Tinbergen and Morris, he didn’t belong to the academic tribe. The fact that it took an outsider to smash the Blank Slate and give a few academics the courage to finally stick their noses out of the hole they’d dug for themselves will likely remain deep in the memory hole. It happens to be a fact that is just too humiliating and embarrassing for them to ever admit. It would seem the history of the affair can be adjusted, but it will probably never be corrected.
Posted on May 12th, 2016
Hardly a day goes by without some pundit bemoaning the decline in religious faith. We are told that great evils will inevitably befall mankind unless we all believe in imaginary super-beings. Of course, these pundits always assume a priori that the particular flavor of religion they happen to favor is true. Absent that assumption, their hand wringing boils down to the argument that we must all somehow force ourselves to believe in God whether that belief seems rational to us or not. Otherwise, we won’t be happy, and humanity won’t flourish.
An example penned by Dennis Prager entitled Secular Conservatives Think America Can Survive the Death of God that appeared recently at National Review Online is typical of the genre. Noting that even conservative intellectuals are becoming increasingly secular, he writes that,
They don’t seem to understand that the only solution to many, perhaps most, of the social problems ailing America and the West is some expression of Judeo-Christian religion.
When, after the fall of the Roman Empire, the West embraced Christianity as a faith superior to all others, as its founder was the Son of God, the West went on to create modern civilization, and then went out and conquered most of the known world.
The truths America has taught the world, of an inherent human dignity and worth, and inviolable human rights, are traceable to a Christianity that teaches that every person is a child of God.
Today, however, with Christianity virtually dead in Europe and slowly dying in America, Western culture grows debased and decadent, and Western civilization is in visible decline.
Both pundits draw attention to a consequence of the decline of traditional religions that is less a figment of their imaginations; the rise of secular religions to fill the ensuing vacuum. The examples typically cited include Nazism and Communism. There does seem to be some innate feature of human behavior that predisposes us to adopt such myths, whether of the spiritual or secular type. It is most unlikely that it comes in the form of a “belief in God” or “religion” gene. It would be very difficult to explain how anything of the sort could pop into existence via natural selection. It seems reasonable, however, that less specialized and more plausible behavioral traits could account for the same phenomenon. Which raises the question, “So what?”
Pundits like Prager and Buchanan are putting the cart before the horse. Before one touts the advantages of one brand of religion or another, isn’t it first expedient to consider the question of whether it is true? If not, then what is being suggested is that mankind can’t handle the truth. We must be encouraged to believe in a pack of lies for our own good. And whatever version of “Judeo-Christian religion” one happens to be peddling, it is, in fact, a pack of lies. The fact that it is a pack of lies, and obviously a pack of lies, explains, among other things, the increasingly secular tone of conservative pundits so deplored by Buchanan and Prager.
It is hard to understand how anyone who uses his brain as something other than a convenient stuffing for his skull can still take traditional religions seriously. The response of the remaining true believers to the so-called New Atheists is telling in itself. Generally, they don’t even attempt to refute their arguments. Instead, they resort to ad hominem attacks. The New Atheists are too aggressive, they have bad manners, they’re just fanatics themselves, etc. They are not arguing against the “real God,” who, we are told, is not an object, a subject, or a thing ever imagined by sane human beings, but some kind of an entity perched so high up on a shelf that profane atheists can never reach Him. All this spares the faithful from making fools of themselves with ludicrous mental flip flops to explain the numerous contradictions in their holy books, tortured explanations of why it’s reasonable to assume the “intelligent design” of something less complicated by simply assuming the existence of something vastly more complicated, and implausible yarns about how an infinitely powerful super-being can be both terribly offended by the paltry sins committed by creatures far more inferior to Him than microbes are to us, and at the same time incapable of just stepping out of the clouds for once and giving us all a straightforward explanation of what, exactly, he wants from us.
In short, Prager and Buchanan would have us somehow force ourselves, perhaps with the aid of brainwashing and judicious use of mind-altering drugs, to believe implausible nonsense, in order to avoid “bad” consequences. One can’t dismiss this suggestion out of hand. Our species is a great deal less intelligent than many of us seem to think. We use our vaunted reason to satisfy whims we take for noble causes, without ever bothering to consider why those whims exist, or what “function” they serve. Some of them apparently predispose us to embrace ideological constructs that correspond to spiritual or secular religions. If we use human life as a metric, Prager and Buchanan would be right to claim that traditional spiritual religions have been less “bad” than modern secular ones, costing only tens of millions of lives via religious wars, massacres of infidels, etc., whereas the modern secular religion of Communism cost, in round numbers, 100 million lives, and in a relatively short time, all by itself. Communism was also “bad” to the extent that we value human intelligence, tending to selectively annihilate the brightest portions of the population in those countries where it prevailed. There can be little doubt that this “bad” tendency substantially reduced the average IQ in nations like Cambodia and the Soviet Union, resulting in what one might call their self-decapitation. Based on such metrics, Prager and Buchanan may have a point when they suggest that traditional religions are “better,” to the extent that one realizes that one is merely comparing one disaster to another.
Can we completely avoid the bad consequences of believing the bogus “truths” of religions, whether spiritual or secular? There seems to be little reason for optimism on that score. The demise of traditional religions has not led to much in the way of rational self-understanding. Instead, as noted above, secular religions have arisen to fill the void. Their ideological myths have often trumped reason in cases where there has been a serious confrontation between the two, occasionally resulting in the bowdlerization of whole branches of the sciences. The Blank Slate debacle was the most spectacular example, but there have been others. As belief in traditional religions has faded, we have gained little in the way of self-knowledge in their wake. On the contrary, our species seems bitterly determined to avoid that knowledge. Perhaps our best course really would be to start looking for a path back inside the “Matrix,” as Prager and Buchanan suggest.
All I can say is that, speaking as an individual, I don’t plan to take that path myself. It has always seemed self-evident to me that, whatever our goals and aspirations happen to be, we are more likely to reach them if we base our actions on an accurate understanding of reality rather than myths, on truth rather than falsehood. A rather fundamental class of truths concerns, among other things, where those goals and aspirations came from to begin with. These are the truths about human behavior; why we want what we want, why we act the way we do, why we are moral beings, why we pursue what we imagine to be noble causes. I believe that the source of all these truths, the “root cause” of all these behaviors, is to be found in our evolutionary history. The “root cause” we seek is natural selection. That fact may seem inglorious or demeaning to those who lack imagination, but it remains a fact for all that. Perhaps, after we sacrifice a few more tens of millions in the process of chasing paradise, we will finally start to appreciate its implications. I think we will all be better off if we do.
Posted on April 24th, 2016 2 comments
When the keepers of the official dogmas in the Academy encounter an inconvenient truth, they refute it by calling it bad names. For example, the fact of human biodiversity is “racist,” and the fact of human nature was “fascist” back in the heyday of the Blank Slate. I encountered another example in the journal Ethics, in one of the articles I discussed in a recent post; Only All Naturalists Should Worry About Only One Evolutionary Debunking Argument, by Tomas Bogardus. It was discreetly positioned in a footnote to the following sentence:
Do these evolutionary considerations generate an epistemic challenge to moral realism, that is, the view that evaluative properties are mind-independent features of reality and we sometimes have knowledge of them?
The footnote reads as follows:
As opposed to nihilism – on which there are no moral truths – and subjectivist constructivism or expressivism, on which moral truths are functions of our evaluative attitudes themselves.
This “scientific” use of the pejorative term “nihilism” to “refute” the conclusion that there are no moral truths fits the usual pattern. According to its Wiki blurb, the term “nihilism” was used in a similar manner when it was first coined by Friedrich Jacobi to “refute” disbelief in the transcendence of God. Wiki gives a whole genealogy of the various uses of the term. However, the most common image the term evokes is probably one of wild-eyed, bomb hurling 19th century Russian radicals. No matter. If something is true, it will remain true regardless of how often it is denounced as racist, fascist, or nihilist.
At this point in time, the truth about morality is sufficiently obvious to anyone who cares to think about it. It is a manifestation of behavioral predispositions that evolved at times very different from the present. It has no purpose. It exists because the genes responsible for its existence happened to improve the odds that the package of genes to which they belonged would survive and reproduce. That truth is very inconvenient. It reduces the “expertise” of the “experts on ethics,” an “expertise” that is the basis of their respect and authority in society, and not infrequently of their gainful employment as well, to an expertise about nothing. It also exposes that which the vast majority of human beings “know in their bones” to be true as an illusion. For all that, it remains true.
To the extent that the term “nihilist” has any meaning in the context of morality at all, it suggests that the world will dissolve in moral chaos unless some basis for objective morality can be extracted from the vacuum. Rape, murder and mayhem will prevail when we all realize we’ve been hoodwinked by the philosophers all these years, and there really is no such basis. The truth is rather more prosaic. Human beings will behave morally regardless of the intellectual fashions prevailing among the philosophers because it is their nature to act morally.
Moral chaos will not result from mankind finally learning the “nihilist” truth about morality. Indeed, it’s hard to imagine a state of moral chaos worse than the one we’re already in. Chaos doesn’t exist because of a gradually spreading understanding of the subjective roots of morality. Rather, it exists as a byproduct of continued attempts to prop up the façade of moral realism. The current “bathroom wars” are an instructive if somewhat ludicrous example. They demonstrate both the strong connection between custom and morality, and the typical post hoc rationalization of moral “truths” described by Jonathan Haidt in his paper, The Emotional Dog and its Rational Tail.
Customs are not merely public habits – the habits of a certain circle of men, a racial or national community, a rank or class of society – but they are at the same time rules of conduct. As Cicero observes, the customs of a people “are precepts in themselves.” We say that “custom commands,” or “custom demands,” and even when custom simply allows the commission of a certain class of actions, it implicitly lays down the rule that such actions are not to be interfered with. And the rule of custom is conceived of as a moral rule, which decides what is right and wrong.
However, the rule of custom can be challenged. Westermarck noted that, as societies became more complex,
Individuals arose who found fault with the moral ideas prevalent in the community to which they belonged, criticizing them on the basis of their own individual feelings… In the course of progressive civilization the moral consciousness has tended towards a greater equalization of rights, towards an expansion of the circle within which the same moral rules are held applicable. And this process has been largely due to the example of influential individuals and their efforts to raise public opinion to their own standard of right.
As Westermarck points out, in both cases the individuals involved are responding to subjective moral emotions, yet in both cases they suffer from the illusion that their emotions somehow correspond to objective facts about good and evil. In the case of the bathroom wars, the defenders of custom rationalize their disapproval after the fact by evoking lurid pictures of perverts molesting little girls. The problem is that, at least to the best of my knowledge, there is no data indicating that anything of the sort involving a transgender person has ever happened. On the other side, the LGBT community points to this disconnect without realizing that they are just as deluded in their belief that their preferred bathroom rules are distilled straight out of objective Good and Evil. In fact, they are nothing but personal preferences, with no more legitimate normative authority than the different rules preferred by others. It seems to me that the term “nihilism” is better applied to this absurd state of affairs than to a correct understanding of what morality is and why it exists.
Suppose that in some future utopia the chimera of “moral realism” were finally exchanged for such a correct understanding, at least by most of us. It would change very little. Our moral emotions would still be there, and we would respond to them as we always have. “Moral relativism” would be no more prevalent than it is today, because it is not our nature to be moral relativists. However, we might have a fighting chance of coming up with a set of moral “customs” that most of us could accept, along with a similarly accepted way to change them if necessary. I would certainly prefer such a utopia to the moral obscurantism that prevails today. If nothing else it would tend to limit the moral exhibitionism and virtuous grandstanding that led directly to the ideological disasters of the 20th century, and yet still pass as the “enlightened” way to alter the moral rules that apply in bathrooms and elsewhere. Perhaps in such a utopia “nihilism” would be rejected even more firmly than it is today, because people would finally realize that, in spite of the subjective, emotional source of all moral rules, human societies can’t exist without them.
Posted on April 13th, 2016 No comments
According to Wikipedia, anti-natalism is “a philosophical position that assigns a negative value to birth.” In general, it includes the claim that having children is immoral. Commenter Simon Elliot asked that I take up the topic again, adding,
I remember you said that you didn’t take it seriously because you thought it demonstrated a “morality inversion” of sorts, but I’ve since spoken to a fellow anti-natalist who has heard that argument many times and has found a way around it.
I’ll gladly take up the topic again. As for the anti-natalist who’s “found a way around it,” all I can say is, more power to him. I don’t peddle objective “oughts” on this blog, because no one has ever succeeded in capturing one and showing it to me. As far as I’m concerned, there are only subjective oughts, and I know of no mechanism whereby the ones that happen to reside inside my skull can manage to escape and acquire normative power over other human beings. My personal ought regarding natalism applies only to myself.
According to that ought, I should have as many children as possible. Since I also believe that I and my descendants would be much better off if the population of the planet were greatly reduced, I certainly don’t want everyone else to share this particular ought. Ideally, I would prefer that only a small percentage of the current population share my opinion on the subject. The subset in question would consist of those individuals whose survival would contribute most to the survival of my own kin in particular, and to the indefinite survival of life as we know it in general.
Simon is right when he says that I consider anti-natalism an example of a “morality inversion.” By that I mean that anti-natalists typically rely on moralistic arguments to render themselves biological dead ends, whereas morality exists because the genes that are its root cause were selected by virtue of the fact that they resulted in just the opposite. Why am I a natalist? You might say it’s a matter of aesthetic taste. I perceive morality inversions as symptoms that a biological entity is sick and dysfunctional. I don’t like to think of myself as sick and dysfunctional. Therefore I tend to avoid morality inversions.
My position on the matter also has to do with my perception of my consciousness. My consciousness is the “me” that I perceive, but it will survive but a short time. On the other hand, there is something about me that has survived 3 billion years, give or take, carried by an unbroken chain of physical entities, culminating in myself. That part of me, my genes, is potentially immortal. I consider them, and not my consciousness, the real “me.” My consciousness is really just an ancillary feature of my current phenotype that exists because it happened to increase the odds that the real “me” would survive. I find the thought that my consciousness might “malfunction” and break the chain disturbing. I would prefer that the chain remain unbroken. Therefore, I am a natalist. However, I have no interest whatsoever in “converting” anti-natalists. Other than the exceptions noted above, the more of them the better as far as I’m concerned.
Good and evil have no objective existence. It is therefore impossible that I could have a “duty” to be either a natalist or an anti-natalist, independent of what is thought to be my duty in my own or anyone else’s subjective mind. It does not occur to me that my personal opinion on the matter has some kind of a normative power on anyone else, nor am I willing to allow anyone else’s opinion to have any normative power over me.
I realize perfectly well that anti-natalists like David Benatar seek to justify their opinions on what they perceive as objective moral standards. However, that perception is an illusion. In view of what moral emotions really are, and the reasons that they exist to begin with, I consider attempts to apply morality to decide this issue not only irrational, but potentially dangerous, at least in terms of the goals in life that are important to me. They are irrational and potentially dangerous for more or less the same reasons that it is irrational and potentially dangerous to blindly consult moral emotions in any situation significantly more complex than the routine interactions of individuals. Western societies are currently in the process of demonstrating the fact by engaging in suicidal behavior that is routinely fobbed off as an expression of moral righteousness. No doubt the verdict of history on the effects of this “righteousness” will be quite educational for whoever happens to occupy the planet a century from now. Unfortunately, the anti-natalists won’t be around to witness what the resulting “human flourishing” will look like in the real world.
In a word, then, my position on the matter is, “anti-natalism for thee, but not for me.” No doubt it is a position that is immoral according to the subjective standards prevailing in the academy and among the like-minded denizens of the ideological Left. However, I am confident I can bear the shame until the individuals in question manage to successfully remove themselves from the gene pool.
Posted on April 12th, 2016 3 comments
Moral realism died with Darwin. He was perfectly well aware that there is such a thing as human nature, and that morality is a manifestation thereof. He also had an extremely pious wife and lived in Victorian England, so was understandably reticent about discussing the subject. However, in one of his less guarded moments he wrote (in The Descent of Man and Selection in Relation to Sex),
If, for instance, to take an extreme case, men were reared under precisely the same conditions as hive-bees, there can hardly be a doubt that our unmarried females would, like the worker bees, think it a sacred duty to kill their brothers, and mothers would strive to kill their fertile daughters, and no one would think of interfering.
Assuming he believed his own theory, Darwin was merely stating the obvious. Francis Hutcheson had demonstrated more than a century earlier that morality is a manifestation of innate moral sentiments. He was echoed by David Hume, who pointed out that morality could not be derived from pure reason operating alone, and suggested that other than divine agencies might explain the existence of the sentiments in question. Darwin supplied the final piece of the puzzle, discovering what that agency was.
Many writers discussed the evolutionary origins of morality in the late 19th and early 20th centuries. Few, however, were prepared to accept the conclusion that logically followed; the non-existence of objective Good and Evil, independent of any human opinion on the matter. One of the few who did accept that conclusion, and outline its implications, was Edvard Westermarck, in his The Origin and Development of the Moral Ideas (1906), and Ethical Relativity (1932). Westermarck was well aware that, although Good and Evil are not real, objective things, human moral emotions are easily strong enough to portray them as such to our imaginations. They are so strong, in fact, that, more than a century after Westermarck took up the subject, the illusion is still alive and well, not only in the public at large, but even among the “experts on ethics.”
Or at least that is the impression one gets on glancing through the pages of the academic journal Ethics. There one commonly finds papers by learned professors who doggedly promote the notion of “moral realism,” and the objective existence of Good and Evil, presumably either as “spirits” or in some higher dimension beyond the ken of our best scientific instruments. True, their jobs and social gravitas depend on how well they can maintain the charade, but I get the distinct impression that some of them actually believe what they write. Lately, however, they have begun to feel the heat, in the form of what is referred to in the business as “evolutionary debunking.”
The obvious implication of Darwin’s theory is that the innate predispositions responsible for human morality evolved, and the various and occasionally gaudy ways in which those predispositions manifest themselves in our behavior are pretty much what one would expect when those emotions are mediated and interpreted in the minds of creatures with large brains. The existence of Good and Evil as independent things is about as likely as the existence of fairies in Richard Dawkins’ garden. How is it, then, that the “experts on ethics” haven’t closed up shop and moved on to less futile occupations? To answer that question, we must again refer to the pages of Ethics.
Two articles that appeared in the most recent issue demonstrate the degree to which the shock waves from the collapse of the Blank Slate have penetrated into even the darkest and most remote nooks of academia. The first, by Tomas Bogardus, is entitled “Only All Naturalists Should Worry About Only One Evolutionary Debunking Argument.” It begins with the rhetorical question, “Do the facts of evolution undermine moral realism?” You think you know the answer, don’t you, dear reader? But wait! Before you jump to conclusions, you should be aware that the bar is set fairly high for “evolutionary debunking” arguments. You may agree with me that the existence of pink unicorns is improbable, but can you absolutely prove it? That’s the kind of standard we’re talking about. It’s not necessary for today’s crop of moral realists to explain the mode of existence of such imaginary categories as Good and Evil. It’s not necessary for them to explain the mysteries of their creation. It’s not necessary for them to explain how moral emotions turned up in human brains, or why the possibility of their evolutionary origins is irrelevant, or how they manage to jump from the skull of one human being onto the back of another with ease. No, “evolutionary debunking” requires that you absolutely prove that there are no pink unicorns.
Let’s refer to Prof. Bogardus’ paper to see how this works in practice. According to the author, one species of evolutionary debunking arguments runs as follows:
Our moral faculty was naturally selected to produce adaptive moral beliefs, and not naturally selected to produce true moral beliefs.
Therefore, it is false that: had the moral truths been different, and had we formed our moral beliefs using the same method we actually used, our moral beliefs would have been different.
Therefore, our moral beliefs are not sensitive.
Therefore, our moral beliefs do not count as knowledge.
In other words, nothing as tiresome as demonstrating that moral realism is the least bit plausible is necessary to defeat evolutionary debunking arguments. All that’s necessary is to show that any of the “therefores” in the above “argument” is at all shaky. In that case, the pink unicorn must still be out there roaming around. Prof. Bogardus reviews other evolutionary debunking arguments, and ends his paper on the hopeful note that one of them, which he describes as the “Argument from Symmetry,” may actually be bulletproof, if only against the assaults of the “Naturalists.” (It turns out there are other, less vulnerable tribes of moral realists, such as “Rationalists” and “Divine Revelationists.”) I’m not as sanguine as the good professor. I suspect that proving a negative will be difficult even with the “Argument from Symmetry.”
In another paper, entitled “Reductionist Moral Realism and the Contingency of Moral Evolution,” author Max Barkhausen reveals some of the astounding intellectual double backflips moral realists routinely perform in order to accept both the evolution of moral emotions and the existence of objective Good and Evil at the same time. For example, one strategy, which he attributes to philosophers Frank Jackson and Philip Pettit and aptly refers to as “Panglossianism,” posits that, while human morality does indeed have evolutionary roots, by pure coincidence the end product just happened to agree with “true” morality. Such luck! Barkhausen assures us that his paper debunks such notions, and I am content to take him at his word.
Here again, however, there is no hint of a suggestion that those who posit the existence of Good and Evil as objective things existing independently of human minds lay their cards on the table and reveal what substance those things consist of, or defend the alternative belief that things can consist of nothing, or suggest what experiments might be performed to actually snag a “Good” or “Evil” as it floats about, whether in the material world or the realm of ghosts. The only standard they are held to is the mere avoidance of absolute proof that their pink unicorns are a figment of their imagination. It stands to reason. After all, as far as the “experts on ethics” are concerned, the closest thing to “absolute Good” they will ever encounter is a tenured position with a substantial and regular paycheck. They would have to sacrifice that particular “absolute Good” if they were ever required to stop waving their hands about objective morality and either explain to the rest of us the mode of existence of these “objects” they’ve been imagining all these years, or admit the sterility of their “expertise.” Barkhausen admits as much, concluding with the sentence,
I believe that it will be a great challenge to construct a meta-ethical theory that accommodates both contingency and our intuitions about objectivity and mind-independence. How to reconcile the two is, no doubt, an issue that merits further thought.
Yes, and no doubt the effort to do so will be a virtually inexhaustible topic for the papers in journals like Ethics that are the coin of the realm in academia. On the other hand, admitting the obvious – that objectivity and mind-independence are illusions – would tend to bring the whole, futile exercise to a screeching halt.
I note in passing that the jargon in use to prop up the illusion is becoming increasingly arcane and abstruse. If you’re masochistic enough to try to read these journals for yourself, be sure to bring along your secret decoder ring. There’s no better way to defend your academic turf than to deny access to anyone who hasn’t mastered the lingo.
Westermarck had it right. Back in 1906 he wrote,
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion.
The presumed objectivity of moral judgments thus being a chimera there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is that the moral concepts are based upon emotions and that the contents of an emotion fall entirely outside the category of truth.
No “moral progress” will be possible until we recognize that salient fact. It’s hard to construe what one finds in the pages of journals like Ethics as “progress” by any rational definition of the term in any case. In the papers referred to above, for example, cultural evolution is referred to as something entirely independent of biological evolution, instead of the manifestation of biological evolution that it actually is. There are constant references to the “function” of morality, as if morality had a “purpose.” One cannot speak of a purpose or a function of something that exists because it happened to increase the odds that particular genes would survive and reproduce. “Function” implies a creator with conscious intent, and nothing of the sort is involved in the process of evolution by natural selection. Such terms may be useful as a form of shorthand for describing what actually happened, but only if one is careful to avoid misunderstanding of the sense in which they are being used. When used carelessly in discussions of moral realism, they serve mainly to distract and obfuscate.
What is really necessary for “moral progress?” For starters, we need to understand why morality exists, and the subjective nature of its existence. We need to understand that it evolved, at least for the most part, in times vastly different from the present. We need to stop pretending that morality’s only “function” is to promote intergroup and intragroup cooperation. Altruism has a real subjective existence in our brains, but so do outgroup identification, hatred, rage and “aggression.” These “immoral” tendencies are seldom mentioned in the pages of Ethics, but we ignore them at our peril. As long as we continue to ignore them, it is premature to speak of “progress.”
Posted on April 9th, 2016 6 comments
Back in the day when the Blank Slaters were putting the finishing touches on the greatest scientific debacle of all time, there was much wringing of hands about “aggression.” The “evolutionary psychologists” of the day, who were bold enough even then to insist that there actually is such a thing as human nature, were suggesting that, in certain circumstances, human beings were predisposed to act aggressively. Not only that, but the warfare that has been such a ubiquitous aspect of our history since the dawn of recorded time might not be just an unfortunate cultural artifact of the transition to agricultural economies. Rather, it might be the predictable manifestation of innate behavioral traits. They suggested that, instead of hoping the traits in question would disappear if we just pretended they didn’t exist, it might be wiser to seek to understand them. If we understood the problem, we might actually be able to take reasonable steps to do something about it.
Fast forward to the present, and the Blank Slate is still with us, but only as a pale shadow of its former self. References to human nature are commonly found in both the popular and academic literature, as if the subject had never been the least bit controversial. The fact that innate predispositions have a significant impact on human behavior is accepted as a matter of course. However, the assumption that, once the power of the Blank Slate orthodoxy was broken, we could begin seriously addressing problems such as warfare that threaten our security and perhaps our very survival turns out to have been a bit premature. In retrospect, it seems the Blank Slaters should have learned to stop worrying and love human nature.
What has happened in evolutionary psychology and the other scientific disciplines that address human behavior may be described by a term that was fashionable during the Third Reich – Gleichschaltung. Literally translated it means “equal switching,” or, in plain English, something like “getting in step.” The Blank Slate was a brute force attempt to sweep undesirable traits under the rug, and portray human behavior as almost perfectly malleable through brainwashing (or “education” and “culture” as it was more delicately put at the time). Such “ideal” creatures would be infinitely adaptable as future denizens of the utopias crafted by the ideological Left. In spite of the manifest absurdity of the Blank Slate dogmas, and the failure over and over again of actual human beings to behave as the Blank Slaters claimed they should, the Blank Slate orthodoxy prevailed in the behavioral sciences over a period of many decades. It turns out that the whole charade may have been completely unnecessary.
In retrospect, the solution was obvious; Gleichschaltung. Today we find the process in full swing. The number of papers currently appearing in the academic journals that take even a sideways glance at “ungood” human behaviors like aggression is vanishingly small. Rather, most of the papers that are published may be broadly grouped into two “safe” subject areas; 1) sex, always good for attracting at least a few of those citations that look so good on academic CVs, and 2) “approved” forms of behavior, such as altruism.
Examples are not hard to find. For example, glance through the articles in recent editions of the journal Evolutionary Psychology. They include such titles as “Are Women’s Mate Preferences for Altruism Also Influenced by Physical Attractiveness?,” “Male and Female Perception of Physical Attractiveness; An Eye Movement Study,” “The Young Male Cigarette and Alcohol Syndrome; Smoking and Drinking as a Short-Term Mating Strategy,” “Effects of Humor Production, Humor Receptivity, and Physical Attractiveness on Partner Desirability,” and “Mating and Memory; Can Mating Cues Enhance Cognitive Performance?” So much for sex. There is also a plentiful supply of papers in the second broad area mentioned above, generally with impeccably politically correct titles that signal the virtue of the authors, such as “Empowering Women; The Next Step in Human Evolution?,” “Upset in Response to a Sibling’s Partner’s Infidelity; A Study With Siblings of Gays and Lesbians, From an Evolutionary Perspective,” and “Western Europe, State Formation, and Genetic Pacification.” The last of these suggests the very rapid evolution of “peaceful” individuals thanks to the fortuitous effects of culture during the last thousand years or so. Occasionally one even finds titles that mix the two categories, such as “Sexual Selection and Humor in Courtship; A Case for Warmth and Extroversion.” The point here is not that the authors of these papers are wrong, but that their findings and theories tend to be “in step.”
When it comes to economic behavior, a subject near and dear to the hearts of those on the ideological Left, recent discoveries about our innate traits are equally reassuring. Ample confirmation may be found at the website of Evonomics, where one finds the following in the “about” blurb; “Orthodox economics is quickly being replaced by the latest science of human behavior and how social systems work. Evonomics is the home for thinkers who are applying the ground-breaking science to their lives and who want to see their ideas influence society.” Here one may find such encouraging titles as “Traditional Economics Failed. Here’s a New Blueprint; Why true self-interest is mutual interest,” “Does Behavioral Economics Undermine the Welfare State?” (of course not! As the author hopefully if somewhat diffidently opines, “Like any field, behavioral economics gives you lots of opportunity to pick and choose, and if you’re willing to be superficial or unscrupulous, you can justify lots of policy positions with it. But on balance I think it cuts in favor of the welfare state.”), and “Why the Economics of ‘Me’ Can’t Replace the Economics of ‘We.'” It turns out that “evolved behavior” deals Conservative and Libertarian heroine Ayn Rand an especially severe smackdown. The author of one article, entitled “What Happens When You Believe in Ayn Rand and Modern Economic Theory,” concludes that, “Our very survival as a species depended on cooperation, and humans excel at cooperative effort. Rather than keeping knowledge, skills and goods ourselves, early humans exchanged them freely across cultural groups.” According to other papers, “science says” that evolved human behavior promotes altruism, not selfishness, and Rand must therefore be all wet. 
See, for example, “What Ayn Rand Got Wrong About Human Nature and Free Markets; When altruism trumps selfishness” and “Ayn Rand Was Wrong about Human Nature; Rand would be surprised by the new science of selfishness and altruism.” Indeed, the “evonomicists” seem obsessed by Rand, going so far as to suggest that a Soviet-style cure might have been called for to treat her ideologically suspect notions. The author of the last article mentioned above writes, “I believe a strong case could be made that Ayn Rand was projecting her own sense of reality into the minds of her fictional protagonists,” and then asks the rhetorical question, “Does this mean that Rand was a sociopath?,” adding remarks in the remainder of the paragraph that leave the reader with the impression that she almost certainly was. In an article entitled, “Let’s Take Objectivism Back From Ayn Rand,” group selection stalwart David Sloan Wilson piles on with, “…it is no secret that the Ayn Rand movement had all the earmarks of a cult.”
Far be it from me to retrospectively assess the mental health of Ayn Rand one way or the other. My point is that, when it comes to innate behavior, the process of Gleichschaltung is well underway. One can already predict with a fair degree of certainty what most of the “discoveries” about innate human behavior will look like for the foreseeable future. Be that as it may, one still detects glimmers of light here and there. As yet, no such “iron curtain” shrouds thought and theory in the behavioral sciences as prevailed during the darkest days of the Blank Slate. One occasionally finds articles that are “noch nicht gleichgeschaltet” (still not in step), both in Evolutionary Psychology and at Evonomics. In the former, for example, see “Book Review: What Men Endure to Be Men: A review of Jonathan Gottschall, The professor in the cage: Why men fight, and why we like to watch,” and in the latter an article by Michael Shermer entitled “Would Darwin be a Socialist or a Libertarian?” that actually has some nice things to say about Friedrich Hayek. It would seem, then, that the process of Gleichschaltung is not yet quite complete, although, given the almost universal lack of ideological diversity in academia, there is no telling how long those few who persist in being “out of step” will still be tolerated.
Perhaps the greatest cause for optimism is the simple fact that the Blank Slate has been crushed. There is no longer a serious debate about whether innate human nature exists. If its existence is accepted as a fact, then psychologists, sociologists, anthropologists, and economists may continue to publish papers portraying it as universally benign and dovetailing perfectly with leftist ideological shibboleths until they are blue in the face. Neuroscientists, evolutionary biologists, and geneticists will still be out there investigating how these innate processes actually work at the microscopic level in the brain. With luck, they may eventually be able to discover ways to isolate a few kernels of truth from the chaff of “just so stories” that are inevitable in the publish-or-perish world of academia. One must hope they will do so sooner rather than later, because it is likely that our very survival will depend on acquiring an accurate knowledge of exactly what kind of creatures we are.
In a world full of nuclear weapons, it is probably more important for us to learn what innate aspects of our nature have contributed to the incessant warfare that has plagued our species since before the dawn of recorded time than it is to know how male eye movements influence female sexual receptiveness. Similarly, it is important for us to be familiar, not just with the “good” innate behaviors commonly found within ingroups, but also with the “ungood” innate behaviors we exhibit towards outgroups, and for that matter, the mere fact that there actually are such things as ingroups and outgroups. One hardly needs the services of a professional evolutionary psychologist to observe the latter. Just read the comments at any liberal or conservative website. There one will find ample documentation of the fact that members of the outgroup are not just wrong, but evil, hateful, and deserving of severe punishment, which is not infrequently imagined in the form of beating, killing, or, as was recently called for in the case of Sarah Palin, gang rape and other forms of sexual assault. In other words, “aggression” is still out there, and it isn’t going anywhere. It might be useful for us to learn how to deal with it without either annihilating ourselves or destroying the planet we live on. Behavioral scientists might want to keep that in mind while they’re composing their next paper on the “nice” aspects of human behavior.
Posted on February 27th, 2016 5 comments
The Blank Slate is not over. True, behavioral scientists, intellectuals, and ideologues of all stripes now grudgingly admit something that has always been obvious to those Donald Trump refers to as the “poorly educated,” not to mention reasonably perceptive children; namely, that there is such a thing as human nature. However, many of them only admit it to the point where it interferes with their imaginary utopias of universal brotherhood and human flourishing, and no further. Allow me to consult the source material to illustrate what I’m talking about. In Man and Aggression, published in 1968, Blank Slate high priest Ashley Montagu wrote,
…man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings… The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless… Human nature is what man learns to become a human being.
A bit later, in 1984, fellow Blank Slater Richard Lewontin generously expanded the repertoire of “innate” human behavior to include urinating and defecating in his Not in Our Genes. One still finds such old-school denialists in the darker nooks of academia today, but now one can at least speak of human nature without being denounced as a fascist, and the existence of such benign aspects thereof as altruism is generally admitted. However, no such tolerance is extended to aspects of our behavior that contradict ideological shibboleths. Here, for example, is a recent quote from a review of Jerry Coyne’s Faith Versus Fact (a good read, by the way, and one I highly recommend) by critic George Scialabba:
For all the vigor with which Coyne pursues his bill of indictment against organized religion, he leaves out one important charge. As he says, the conflict between religion and science is “only one battle in a wider war—a war between rationality and superstition.” There are other kinds of superstition. Coyne mentions astrology, paranormal phenomena, homeopathy, and spiritual healing, but religion “is the most widespread and harmful form.” I’m not so sure. Political forms of superstition, like patriotism, tribalism, and the belief that human nature is unalterably prone to selfishness and violence, seem to me even more destructive.
Aficionados will immediately recognize the provenance of this claim. It is a reworked version of the old “genetic determinism” canard, already hackneyed in the heyday of Ashley Montagu. It serves as a one-size-fits-all accusation applied to anyone who suggests that any aspect of the human behavioral repertoire might be “bad” as opposed to “good.” Patriotism and tribalism are, of course, “bad.” There’s only one problem. If “genetic determinists” exist at all, they must be as rare as unicorns. I’ve never encountered a genuine specimen, and I’ve searched long and hard. In other words, the argument is a straw man. There certainly are, however, people, myself included, who believe that our species is predisposed to behave in ways that can easily lead to such “bad” behaviors as tribalism, selfishness and violence. However, to the best of my knowledge, none of them believe that we are “unalterably prone” to such behavior. What they do believe is that the most destructive forms of human behavior may best be avoided by understanding what causes them rather than denying that those causes exist.
Which finally brings us to the point of this post. Human beings are predisposed to categorize others of their species into ingroups and outgroups. They associate “good” qualities with the ingroup, and “evil” qualities with the outgroup. This fact was familiar to behavioral scientists at the beginning of the 20th century, before the Blank Slate curtain fell, and was elaborated into a formal theory by Sir Arthur Keith in the 1940s. I can think of no truth about the behavior of our species that is so obvious, so important to understand, and at the same time so bitterly denied and resisted by “highly educated” ideologues. Tribalism is not a “superstition,” as Mr. Scialabba would have us believe, but a form of ingroup/outgroup behavior and, as such, a perfectly predictable and natural trait of our species. It has played a major role as the sparkplug for all the bloody and destructive wars that have plagued us since the dawn of recorded time and before. It is also the “root cause” of virtually every ideological controversy ever heard of. It does not make us “unalterably prone” to engage in warfare, or any other aggressive behavior. I have little doubt that we can “alter” and control its most destructive manifestations. Before we can do that, however, we must understand it, and before we can understand it we must accept the fact that it exists. We are far from doing so.
Nowhere is this fact better illustrated today than in the struggle over international borders. Take, for example, the case of Germany. Her “conservative” government, led by Chancellor Angela Merkel, long followed a policy of treating the country’s borders as if they didn’t exist. More than a million culturally alien Moslem “refugees” were allowed to pour across them in a single year. This policy of the “conservative” German government was cheered on by the “leftist” German news media, demonstrating that the pleasant mirage of universal human brotherhood is hardly a monopoly of either extreme of the political spectrum. The masses in Germany reacted more or less the same way they have reacted in every other western European country, demonstrating what some have referred to as an “immune” response. They resisted the influx of immigrants, and insisted that the government reestablish control over the nation’s borders. For this, one finds them condemned every day in both the “right wing” and “left wing” German media as “haters.”
A remarkable fact about all this, at least as far as Germany is concerned, is that the very same German media, whether of the “right” or the “left,” quite recently engaged in a campaign of anti-American hatemongering that would put anything they accuse the local “tribalists” of completely in the shade. The magazine Der Spiegel, now prominent in condemning as “haters” anyone who dares to suggest that uncontrolled immigration might not be an unalloyed blessing, was in the very forefront of this campaign of hate against the United States. One could almost literally feel the spittle flying from the computer screen if one looked at their webpage during the climax of this latest orgy of anti-Americanism. It was often difficult to find any news about Germany among the furious denunciations of the United States for one imagined evil or another. It was hardly “all about Bush,” as sometimes claimed. These rants came complete with quasi-racist stereotyping of all Americans as prudes, gun nuts, religious fanatics, etc. If ever there were a textbook example of what Robert Ardrey once called the “Amity-Enmity Complex,” that was it. After indulging in this orgy of hatemongering, Der Spiegel and the rest are now sufficiently hypocritical to point the finger at others as “haters.”
There is another remarkable twist to this story as far as Germany is concerned. There were a few brave little bloggers and others in Germany who resisted the epidemic of hate. Amid a storm of abuse, they insisted on the truth, exposed the grossly exaggerated and one-sided nature of the media’s anti-American rants, and exposed the attempts in the media to identify Americans as an outgroup. Today one finds the very same people who resisted this media hate campaign among those Der Spiegel and the rest point the finger at as “haters.” In general, they include anyone who insists on the existence of national borders and the sovereign right of the citizens in every country to decide who will be allowed to enter, and who not.
The point here is that the outgroup have ye always with you. Those most prone to strike self-righteous poses and hurl down anathemas on others as “haters” are often the most virulent haters themselves. To further demonstrate that fact, one need only look at the websites, magazines, books, and other media produced by the most ardent proponents of “universal human brotherhood.” If you find a website with comment threads, by all means look at them as well. I guarantee you won’t have to look very far to find the outgroup. It will always be there, decorated with all the usual pejoratives and denunciations we commonly associate with the “immoral,” and the “other.” The “tribe” of “others” can come in many forms. In the case of the proponents of “human flourishing,” the “other” is usually defined in ideological terms. For leftists, one sometimes finds the “Rethugs,” or “Repugs” in the role of outgroup. For rightists, they are “Commies” and “socialists.” It’s never difficult to exhume the hated outgroup of even the most profuse proponents of future borderless utopias as long as one knows where to dig. We are all “tribalists.” Those who think tribalism is just a “superstition” can easily demonstrate the opposite by simply looking in the mirror.
Today we find another interesting artifact of this aspect of human nature in the phenomenon of Donald Trump. The elites of both parties don’t know whether to spit or swallow as they watch him sweep to victory after victory in spite of “gaffes,” “lies,” and all kinds of related “buffoonery,” that would have brought his political career to a screeching halt in the past. The explanation is obvious to the “poorly educated.” Trump has openly called for an end to uncontrolled illegal immigration. The “poorly educated” were long cowed into silence, fearing the usual hackneyed accusations of racism, but now a man who can’t be cowed has finally stepped forward and openly proclaimed what they’ve been thinking all along; that uncontrolled immigration is an evil that will lead to no good in the long run. This fact is as obvious to the “poorly educated” in Europe as it is to the “poorly educated” in the United States.
Ingroups and outgroups are a fundamental manifestation of human morality. There is an objective reason for the existence of that morality. It exists because it has promoted the survival and reproduction of the genes responsible for it in times not necessarily identical to the present. It does not exist for the “purpose” of promoting universal brotherhood, or the “purpose” of promoting “human flourishing,” or the “purpose” of eliminating international boundaries. It has no “purpose” at all. It simply is. I am a moral being myself. I happen to prefer a version of morality that accomplishes ends that I deem in harmony with the reasons that morality exists to begin with. Those ends include my own survival and the survival of others like me. Uncontrolled immigration of culturally alien populations into the United States or any other country is most unlikely to promote either the “flourishing” or the survival of the populations already there. As has been demonstrated countless times in the past, it normally accomplishes precisely the opposite, typically in the form of bitter civil strife, and often in the form of civil war. I happen to consider civil strife and civil war “evil,” from what is admittedly my own, purely subjective point of view. I realize that my resistance to these “evils” really amounts to nothing more than a whim. However, it happens to be a whim that is obviously shared by many of my fellow citizens. I hope this “ingroup” of people who agree with me can make its influence felt, for the very reason that I don’t believe that human beings must forever remain “unalterably prone” to constantly repeating the same mistake of substituting a mirage for reality when it comes to understanding their own behavior. That is what the Blank Slaters have done, and continue to do. I hope they will eventually see the light, for their own “good” as well as mine. We are not “unalterably prone” to anything. However, before one can alter, one must first understand.
Posted on February 14th, 2016 3 comments
Rather than leave my readers in suspense, let me just say it up front. If any of the “little people” working for the government, whether as feds or contractors, used a private computer for official business the way Hillary Clinton did, they would be fired. If they used it to store and send classified information, a lot worse might happen to them. By the letter of the law, they would certainly be punishable with heavy fines and/or jail time. It is one of the more amusing and/or disturbing phenomena of the early 21st century, depending on your point of view, that such a person is even being seriously considered as a candidate for President of the United States.
The latest story about the subject on Fox News is typical of the rampant disinformation being spread on the subject in the mass media. Under the headline, “New batch of Clinton emails released, 84 now marked ‘classified’,” it continues with the byline, “State Department release 551 documents from former Secretary of State Hillary Clinton’s email account, including 84 that are considered to be classified today, but not at the time they were initially sent.” Is it really too much to ask that these people occasionally consult, at the very least, some new hire who’s actually taken the elementary training course in security typically required for anyone who routinely handles sensitive information? One can never assume information is unclassified because it has not been officially declared and marked classified. If there is any doubt on the matter, it cannot simply be blown out to the general public without a second thought. It must be submitted to a competent authority for a decision on whether and at what level it should be protected. Regardless of whether it is classified or otherwise sensitive or not, it is illegal to transfer official government information to a private computer.
Let me explain how this works. There are two major types of classified information: that which is protected by executive order, and that which is protected by statute. Information protected by executive order is known as National Security Information, or NSI. Each President typically releases an order early in his term with details on how such information is to be protected, for how long, etc. The latest such order, E.O. 13526, was issued by President Obama in 2009. Other major types of information, dealing with such things as the design and use of nuclear weapons and the production of special nuclear material such as enriched uranium and plutonium, are classified by statute, namely, the Atomic Energy Act of 1954, as amended. The most sensitive category of this type of information is known as Restricted Data, or RD. The Atomic Energy Act established another category of such information, pertaining mainly to the military use and yield of nuclear weapons, known as Formerly Restricted Data, or FRD. There is also a third, seldom encountered category, dealing mainly with foreign intelligence information, known as Transclassified Foreign Nuclear Information, or TFNI. The levels of classification, from top to bottom in order of sensitivity, are Top Secret, Secret and Confidential. The categories, from top to bottom in order of sensitivity, are RD, FRD, TFNI, and NSI.
Information that is protected by statute, such as RD, is “born classified.” It is under the purview of the Department of Energy (DOE). If there is any doubt whether it must be protected or not, it must be submitted to a “Derivative Classifier,” who consults classification guides within his/her area of competence to decide whether it is classified or not, and at what level. If the guides don’t cover it, it may be submitted to one of the few individuals in the country with Original Classification authority for a determination. RD is never automatically declassified, nor must a date or event be set for its eventual declassification. RD information may occasionally be declassified by proper authority. In that case, another specially trained individual, known as a Derivative Declassifier, is appointed to decide whether documents are no longer classified, or may be classified at a lower level. Note the important distinction between information and documents. In some cases declassifiers are authorized to act alone, and in others declassification decisions must be made by a declassifier and another classifier or declassifier acting together.
NSI is not “born classified.” However, it may not be automatically assumed unclassified, either. Each government agency has authority over its own NSI. Each typically has the equivalent of DOE’s Original Classifiers, Derivative Classifiers, and classification guides. Unlike RD, a date or event must be set for declassification of NSI. Under the current executive order, declassification must occur within 25 years, except under special circumstances. Currently, NSI documents may not be automatically declassified, even when the declassification date has passed or the declassification event has happened. They must first be reviewed by an authorized declassifier.
Besides classified information, there are other types of information which must be protected, and to which legal penalties apply if released deliberately or through negligence. These include Official Use Only (OUO), which must meet one of the nine exemptions to the Freedom of Information Act, or FOIA; Unclassified Controlled Nuclear Information, or UCNI (DOE); Safeguards Information (NRC); Protected Critical Infrastructure Information, or PCII (DHS); etc. It is illegal, and the lowliest employee of the federal government should know it’s illegal, to have any of these types of information on a private computer.
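For readers who think more easily in code, the orderings described in the preceding paragraphs can be captured in a toy sketch. This is purely illustrative; the level and category names and their orderings are taken from the description above, and real classification decisions are of course made by trained Derivative Classifiers consulting official guides, never by a script:

```python
# Toy model of the classification scheme described above.
# Illustrative only: actual decisions are made by authorized
# classifiers using official classification guides, not code.

# Levels, from most to least sensitive.
LEVELS = ["Top Secret", "Secret", "Confidential"]

# Categories, from most to least sensitive.
CATEGORIES = ["RD", "FRD", "TFNI", "NSI"]

def higher_level(a: str, b: str) -> str:
    """Return the more sensitive of two classification levels; a document
    containing information at both levels must be protected at this one."""
    return a if LEVELS.index(a) < LEVELS.index(b) else b

def may_release(reviewed_and_cleared: bool) -> bool:
    """Unmarked information may never be assumed unclassified. Release is
    permissible only after review by a competent authority."""
    return reviewed_and_cleared

print(higher_level("Confidential", "Secret"))  # prints "Secret"
print(may_release(False))                      # prints "False"
```

The point the sketch makes is the asymmetry in the second function: the default answer for unreviewed information is always "no," which is precisely the rule the media accounts described above get backwards.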
So much for a very elementary description of the classification process in the US. Some of the above is relevant to the case of Hillary Clinton, and some not. However, the fact that she simply ignored all the legal and administrative requirements regarding the handling and protection of sensitive information demonstrates that she is incompetent to be a federal mailroom employee, far less President of the United States. It is sad but hardly surprising in this day and age that most journalists and media organizations have such an abject lack of any sense of a responsibility to inform the public that they ignore all these facts. Their main function, as far as they are concerned, is to defeat the hated and despised conservative outgroup. As a result we find them circling the wagons around her, determined to suppress any hint of the real gravity and implications of her incompetence as Secretary of State. This should provide us with a rather clear indication of what they are talking about when they speak of the “moral compass” referred to in my previous post.
As my readers know, I don’t believe in the existence of objective moral truths. However, I am human. As a result, I experience moral emotions. When I contemplate the fact that Hillary Clinton is very likely to become President of my country, I experience a moral emotion that is familiar to all of us. Shame.
Posted on February 13th, 2016 2 comments
It’s important to understand morality. For example, once we finally grasp the fact that it exists solely as an artifact of evolution, it may finally occur to us that attempting to solve international conflicts in a world full of nuclear weapons by consulting moral emotions is probably a bad idea. Syria is a case in point. Consider, for example, an article by Nic Robertson entitled “From Sarajevo to Syria: Where is the world’s moral compass?” that recently turned up on the website of CNN. The author suggests that we “solve” the Syrian civil war by consulting our “moral compass.” In his opinion that is what we did in the Balkans to end the massacres in Bosnia and Kosovo. Apparently we are to believe that the situation in Syria is so similar that all we have to do is check the needle of the “moral compass” to solve that problem as well. I’m not so sure about that.
In the first place, the outcomes of following a “moral compass” haven’t always been as benign as they were in Bosnia and Kosovo. Czar Nicholas was following his “moral compass” when he rushed to the aid of Serbia in 1914, precipitating World War I. Hitler was following his “moral compass” when he attacked Poland in 1939, bringing on World War II. Apparently it’s very important to follow the right “moral compass,” but the author never gets around to specifying which one of the many available we are to choose. We must assume he is referring to his own, personal “moral compass.” He leaves us in doubt regarding its exact nature, but no doubt it has much in common with the “moral compass” of the other journalists who work for CNN. Unlike earlier versions, we must hope that this one is proof against precipitating another world war.
If we examine this particular “moral compass” closely, we find that it possesses some interesting idiosyncrasies. It points to the conclusion that there is nothing wrong with using military force to depose a government recognized as legitimate by the United Nations. According to earlier, now apparently obsolete versions of the “moral compass,” this sort of thing was referred to as naked aggression, and was considered “morally bad.” Apparently all that has changed. Coming to the aid of a government so threatened, as Russia is now doing in Syria, used to be considered “good.” Under the new dispensation, it has become “bad.” It used to be assumed that governments recognized by the international community as legitimate had the right to control their own airspaces. Now the compass needle points to the conclusion that control over airspaces is a matter that should be decided by the journalists at CNN. We must, perforce, assume that they have concocted a “moral compass” superior to anything ever heard of by Plato and Socrates, or any of the other philosophers who plied the trade after them.
I suggest that, before blindly following this particular needle, we consider rationally what the potential outcomes might be. Robertson never lays his cards on the table and tells us exactly what he has in mind. However, we can get a pretty good idea by consulting the article. In his words,
Horror and outrage made the world stand up to Bosnia’s bullies after that imagination and fear had ballooned to almost insurmountable proportion.
Today it is Russia’s President Vladimir Putin whose military stands alongside Syrian President Bashar al-Assad’s army. Together they’ve become a force no nation alone dares challenge. Their power is seemingly set in stone.
It would seem, then, based on the analogies of Bosnia and Kosovo, where we did “good,” that Robertson is suggesting we replace the internationally recognized government of Syria by force and confront Russia, whose actions within Syria’s borders are in response to a request for aid by that government. In the process it would be necessary for us to defeat and humiliate Russia. It was out of fear of humiliation that Russia came to Serbia’s aid in 1914. Are we really positive that Russia will not risk nuclear war to avoid a similar humiliation today? It might be better to avoid pushing our luck to find out.
What of the bright idea of replacing the current Syrian government? It seems to me that similar “solutions” really didn’t work out too well in either Iraq or Libya. Some would have us believe that “moderates” are available in abundance to spring forth and fill the power vacuum. So far, I have seen no convincing evidence of the existence of these “moderates.” Supposing they exist, I suspect the chances that they would be able to control a country brimming over with religious fanatics of all stripes without a massive U.S. military presence are vanishingly small. In other words, I doubt the existence of a benign alternative to Bashar al-Assad. Under the circumstances, is it really out of the question that the best way to minimize civilian casualties is not by creating a power vacuum, or by allowing the current stalemate to drag on, but by ending the civil war in exactly the way Russia is now attempting to do it; by defeating the rebels? Is it really worth risking a nuclear war just so we can try the rather dubious alternatives?
Other pundits (see, for example, here, here, and here) inform us that Turkey “cannot stand idly by” while Syria and her Russian ally regain control over Aleppo, a city within Syria’s own borders. Great shades of the Crimean War! What on earth could lead anyone to believe that Turkey is our “ally” in any way, shape or form other than within the chains of NATO? Turkey is a de facto Islamist state. She actively supports the Palestinians against another of our purported allies, Israel. Remember the Palestinians? Those were the people who danced in the streets when they saw the twin towers falling. She reluctantly granted access to Turkish bases for U.S. airstrikes against ISIS only so she would have a free hand attacking the Kurds, one of the most consistently pro-U.S. factions in the Middle East. She was foolhardy enough to shoot down a Russian plane in Syrian territory, killing its pilot, for the “crime” of violating her airspace for a grand total of 17 seconds. She cynically exploits the flow of refugees to Europe as a form of “politics by other means.” Could there possibly be any more convincing reasons for us to stop playing with fire and get out of NATO? NATO is a ready-made fast track to World War III on behalf of “allies” like Turkey.
But I digress. The point is that the practice of consulting something as imaginary as a “moral compass” to formulate foreign policy is unlikely to end well. It assumes that, after all these centuries, we have finally found the “correct” moral compass, and the equally chimerical notion that “moral truths” exist, floating about as disembodied spirits, quite independent of the subjective imaginations of the employees of CNN. Forget about the “moral compass.” Let us identify exactly what it is we want to accomplish, and the emotional motivation for those desires. Then, assuming we can achieve some kind of agreement on the matter, let us apply the limited intelligence we possess to realize those desires.
Morality exists because the behavioral predispositions responsible for it evolved, and they evolved because they happened to promote the survival of genes in times radically different than the present. It exists for that reason alone. It follows that, if there really were such things as “moral truths,” then nothing could possibly be more immoral than failing to survive. We would do well to keep that consideration in mind in determining the nature of our future relationship with Russia.