Posted on November 26th, 2015
The history of the rise and fall of the Blank Slate is fascinating, and not only as an example of the pathological derailment of whole branches of science in favor of ideological dogmas. The continuing foibles of the “men of science” as they attempt to “readjust” that history are nearly as interesting in their own right. Their efforts at post-debacle damage control are a superb example of an aspect of human nature at work – tribalism. There is much at stake for the scientific “tribe,” not least of which is the myth of the self-correcting nature of science itself. What might be called the latest episode in the sometimes shameless, sometimes hilarious bowdlerization of history has just appeared in the form of another PBS special, E. O. Wilson – Of Ants and Men, which can be watched online.
Before examining the latest twists in this continuously evolving plot, it would be useful to recap what has happened to date. There is copious source material documenting not only the rise of the Blank Slate orthodoxy to hegemony in the behavioral sciences, but also the events that led to its collapse, not to mention the scientific apologetics that followed its demise. In its modern form, the Blank Slate manifested itself as a sweeping denial that innate behavioral traits, or “human nature,” had anything to do with human behavior beyond such basic functions as breathing and the elimination of waste. It was insisted that virtually everything about our behavior was learned, and a reflection of “culture.” By the early 1950s its control of the behavioral sciences was such that any scientist who dared to publish anything in direct opposition to it was literally risking his career. Many scientists have written of the prevailing atmosphere of fear and intimidation, and through the 1950s, ‘60s, and early ‘70s there was little in the way of “self-correction” emanating from within the scientific professions themselves.
The “correction,” when it came, was supplied by an outsider – a playwright by the name of Robert Ardrey who had taken an interest in anthropology. Beginning with African Genesis in 1961, he published a series of four highly popular books that documented the copious evidence for the existence of human nature, and alerted a wondering public to the absurd extent to which its denial had been pursued in the sciences. It wasn’t a hard sell, as that absurdity was obvious enough to any reasonably intelligent child. Following Ardrey’s lead, a few scientists began to break ranks, particularly in Europe where the Blank Slate had never achieved a level of control comparable to that prevailing in the United States. They included the likes of Konrad Lorenz (On Aggression, first published in German in 1963), Desmond Morris (The Naked Ape, 1967), Lionel Tiger (Men in Groups, 1969), and Robin Fox (The Imperial Animal, 1971, with Lionel Tiger). The Blank Slate reaction to these works, not to mention the copious coverage of Ardrey and the rest that began appearing in the popular media, was furious. Man and Aggression, a collection of Blank Slater rants directed mainly at Ardrey and Lorenz, with novelist William Golding thrown in for good measure, is an outstanding piece of historical source material documenting that reaction. Edited by Ashley Montagu and published in 1968, it typifies the usual Blank Slate MO – attacks on straw men combined with accusations of racism and fascism. That, of course, remains the MO of the “progressive” Left to this day.
The Blank Slaters could intimidate the scientific community, but not so the public at large. Thanks to Ardrey and the rest, by the mid-70s the behavioral sciences were in danger of becoming a laughing stock. Finally, in 1975, E. O. Wilson broke ranks and published Sociobiology, a book that was later to gain a notoriety in the manufactured “history” of the Blank Slate out of all proportion to its real significance. Of its 27 chapters, 25 dealt with animal behavior. Only the first and last focused on human behavior. As far as human nature is concerned, nothing in those two chapters, nor in Wilson’s On Human Nature, published in 1978, could reasonably be described as other than an afterthought to the works of Ardrey and others that had appeared much earlier. Its real novelty wasn’t its content, but the fact that it was the first popular science book by a scientist in the United States asserting the existence and importance of human nature that reached a significant audience. This fact was well known to Wilson, not to mention his many Blank Slate detractors. In their diatribe Against Sociobiology, which appeared in the New York Review of Books in 1975, they wrote, “From Herbert Spencer, who coined the phrase ‘survival of the fittest,’ to Konrad Lorenz, Robert Ardrey, and now E. O. Wilson, we have seen proclaimed the primacy of natural selection in determining most important characteristics of human behavior.”
As we know in retrospect, the Blank Slaters were facing a long, losing battle against recognition of the obvious. By the end of the 1990s, even the editors at PBS began scurrying off the sinking ship. Finally, in the scientific shambles left in the aftermath of the collapse of the Blank Slate orthodoxy, Steven Pinker published his The Blank Slate. It was the first major attempt at historical revisionism by a scientist, and it contained most of the fairytales about the affair that are now widely accepted as fact. I had begun reading the works of Ardrey, Lorenz and the rest in the early 70s, and had followed the subsequent unraveling of the Blank Slate with interest. When I began reading The Blank Slate, I assumed I would find a vindication of the seminal role they had played in the 1960s in bringing about its demise. I was stunned to find that, instead, as far as Pinker was concerned, the 60s never happened! Ardrey was mentioned only a single time, and then only with the assertion that “the sociobiologists themselves” had declared him and Lorenz “totally and utterly” wrong! The “sociobiologist” given as the source for this amazing assertion was none other than Richard Dawkins! Quite apart from the fact that Dawkins was never a “sociobiologist,” least of all in 1976 when he published The Selfish Gene, the book from which the “totally and utterly wrong” quote was lifted, he actually praised Ardrey in other parts of that book. He never claimed that Ardrey and the rest were “totally and utterly wrong” because they defended the importance of innate human nature, in Ardrey’s case the overriding theme of all his work. Rather, Dawkins limited that claim to their support of group selection, a fact that Pinker never gets around to mentioning in The Blank Slate. Dropping Ardrey, Lorenz and the rest down the memory hole, Pinker went on to assert that none other than Wilson had been the real knight in shining armor who had brought down the Blank Slate.
As readers who have followed this blog for a while are aware, the kicker came in 2012, in the form of E. O. Wilson’s The Social Conquest of Earth. In the crowning (and amusing) irony of this whole shabby affair, Wilson outed himself as more “totally and utterly wrong” than Ardrey and Lorenz by a long shot. He wholeheartedly embraced – group selection!
Which finally brings me to the latest episode in the readjustment of Blank Slate history. It turned up recently in the form of a PBS special entitled E. O. Wilson – Of Ants and Men. It’s a testament to the fact that Pinker’s deification of Wilson has succeeded beyond his wildest dreams. The only problem is that Pinker himself now appears to be in danger of being tossed on the garbage heap of history. You see, the editors at the impeccably politically correct PBS picked up on the fact that, at least according to Wilson, group selection is responsible for the innate wellsprings of selflessness, love of others (at least in the ingroup), altruism, and all the other endearing characteristics that make the hearts of the stalwart leftists who call the tune at PBS go pitter-pat. Pinker, on the other hand, for reasons that should be obvious by now, must continue to reject group selection, lest his freely concocted “history” become a laughing stock. To see how all this plays out circa 2015, let’s take a closer look at the video itself.
Before I begin, I wish to assure the reader that I have the highest respect for Wilson himself. He is a great scientist, and his publication of Sociobiology was an act of courage regardless of its subsequent exploitation by historical revisionists. As we shall see, he has condoned the portrayal of himself as the “knight in shining armor” invented by Pinker, but that is a forgivable lapse by an aging scientist who is no doubt flattered by the “legacy” manufactured for him.
With that, on to the video. It doesn’t take long for us to run into the first artifact of the Wilson legend. At the 3:45 minute mark, none other than Pinker himself appears, informing us that Wilson “changed the intellectual landscape by challenging the taboo against discussing human nature.” He did no such thing. Ardrey had very effectively “challenged the taboo” in 1961 with his publication of African Genesis, and many others had challenged it in the subsequent years before the publication of Sociobiology. Pinker’s statement isn’t even accurate in terms of U.S. scientists, as several of them, in peripheral fields such as political science, had insisted on the existence and importance of human nature long before 1975, and others, like Tiger and Fox, though foreign born, had worked at U.S. universities. At the 4:10 mark Gregory Carr chimes in with the remarkable assertion that,
If someone develops a theory about human nature or biodiversity, and in common living rooms across the world, it seems like common sense, but in fact, a generation ago, we didn’t understand it, it tells you that that person, in this case Ed Wilson, has changed the way all of us view the world.
One can but shake one’s head at such egregious nonsense. In the first place, Wilson didn’t “develop a theory about human nature.” He simply repeated hypotheses that Darwin himself and many others since him had developed. There is nothing of any significance about human nature in any of his books that cannot also be found in the works of Ardrey. People “in common living rooms” a generation ago understood and accepted the concept of human nature perfectly well. The only ones who were still delusional about it at the time were the so-called “experts” in the behavioral sciences. Many of them were also just as aware as Wilson of the absurdity of the Blank Slate dogmas, but were too intimidated to challenge them.
My readers should be familiar by now with such attempts to inflate Wilson’s historical role, and the reasons for them. The tribe of behavioral scientists has never been able to bear the thought that their “science” was not “self-correcting,” and they would probably still be peddling the Blank Slate dogmas to this day if it weren’t for the “mere playwright,” Ardrey. All their attempts at historical obfuscation won’t alter that fact, and source material is there in abundance to prove it to anyone who has the patience to search it out and look at it. We first get an inkling of the real novelty in this particular PBS offering at around minute 53:15, when Wilson, referring to eusociality in ant colonies, remarks,
This capacity of an insect colony to act like a single super-organism became very important to me when I began to reconsider evolutionary theory later in my career. It made me wonder if natural selection could operate not only on individuals and their genes, but on the colony as a whole. That idea would create quite a stir when I published it, but that was much later.
Which brings us to the most amusing plot twist in this whole sorry farce: PBS’ wholehearted embrace of group selection. Recall that Pinker’s whole rationalization for ignoring Ardrey was based on some good things Ardrey had to say about group selection in his third book, The Social Contract. The subject hardly ever came up in his interviews, and it was certainly not the central theme of his books; that theme, as noted above, was the existence and significance of human nature. Having used group selection to declare Ardrey an unperson, Pinker then elevated Wilson to the role of the “revolutionary” who was the “real destroyer” of the Blank Slate in his place. Wilson, in turn, in what must have seemed to Pinker a supreme act of ingratitude, embraced group selection more decisively than Ardrey ever thought of doing, making it a central and indispensable pillar of his theory regarding the evolution of eusociality. Here’s how the theme plays out in the video.
Wilson at 1:09:50
Humans don’t have to be taught to cooperate. We do it instinctively. Evolution has hardwired us for cooperation. That’s the key to eusociality.
Wilson at 1:13:40
Thinking on this remarkable fact (the evolution of eusociality) has made me reconsider in recent years the theory of natural selection and how it works in complex social animals.
Pinker at 1:18:50
Starting in the 1960s, a number of biologists realized that if you think rigorously about what natural selection does, it operates on replicators. Natural selection, Darwin’s theory, is the theory of what happens when you have an entity that can make a copy of itself, and so it’s very clear that the obvious target of selection in Darwin’s theory is the gene. That became close to a consensus among evolutionary biologists, but I think it’s fair to say that Ed Wilson was always ambivalent about that turn in evolutionary theory.
Wilson:
I never doubted that natural selection works on individual genes or that kin selection is a reality, but I could never accept that that is the whole story. Our group instincts, and those of other eusocial species, go far beyond the urge to protect our immediate kin. After a lifetime studying ant societies, it seemed to me that the group must also have an important role in evolution, whether or not its members are related to each other.
1:20:15 Jonathan Haidt:
So there’ve been a few revolutions in evolutionary thinking. One of them happened in the 1960s and ‘70s, and it was really captured in Dawkins famous book ‘The Selfish Gene,’ where if you just take the gene’s eye view, you have the simplest elements, and then you sort of build up from there, and that works great for most animals, but Ed was studying ants, and of course you can make the gene’s eye view work for ants, but when you’re studying ants, you don’t see the ant as the individual, you don’t see the ant as the organism, you see the colony or the hive as the entity that really matters.
At 1:20:55 Wilson finally spells it out:
Once you see a social insect colony as a superorganism, the idea that selection must work on the group as well as on the individual follows very naturally. This realization transformed my perspective on humanity, too. So I proposed an idea that goes all the way back to Darwin. It’s called group selection.
“Ed was able to see group selection in action. It’s just so clear in the ants, the bees, the wasps, the termites and the humans.”
Wilson: “The fact of group selection gives rise to what I call multilevel evolution, in which natural selection is operating both at the level of the individual and the level of the group…”
“And that got Ed into one of the biggest debates of his career, over multilevel selection, or group selection.”
Ed Wilson did not give up the idea that selection acted on groups, while most of his fellow biologists did. Then, several decades later, he revived that notion in a full-throated manifesto, and I think it would be an understatement to say that he did not convince his fellow biologists.
At this point a picture of Wilson’s The Social Conquest of Earth appears on the screen, shortly followed by stills of a scowling Richard Dawkins. Then we see an image of the cover of his The Selfish Gene. The film describes Dawkins’ furious attack on Wilson for daring to promote group selection.
Wilson:
The brouhaha over group selection has brought me into conflict with defenders of the old faith, like Richard Dawkins and many others who believe that ultimately the only thing that counts in the evolution of complex behavior is the gene, the selfish gene. They believe the gene’s eye view of social evolution can explain all of our groupish behavior. I do not.
And finally, at 1:25, after Wilson notes that Pinker is one of his opponents, Pinker reappears to deny the existence of group selection:
Most people would say that, if there’s a burning building, and your child is in one room and another child is in another room, then you are entitled to rescue your child first, right? There is a special bond between, say, parents and children. This is exactly what an evolutionary biologist would predict, because any gene that would make you favor your child will have a copy of itself sitting in the body of that child. By rescuing your child, the gene for rescuing children, so to speak, will be helping a copy of itself, and so those genes would proliferate in the population. Not just in the extreme case of saving your child from a burning building, but in being generous and loyal to your siblings and your very close cousins. Tribalism, kinship, and family feelings have a perfectly sensible evolutionary basis. (i.e., kin selection)
At this point one can imagine Pinker gazing sadly at the tattered remains of his whole, manufactured “history” of the Blank Slate lying about like a collapsed house of cards, faced with the bitter realization that he had created a monster. Wilson’s group selection schtick was just too good for PBS to pass up. I seriously doubt whether any of their editors really understand the subject well enough to come up with a reasoned opinion about it one way or the other. However, how can you turn your nose up at group selection if, as Wilson claims, it is responsible for altruism and all the other “good” aspects of our nature, whereas the types of selection favored by Pinker, not to mention Dawkins, are responsible for selfishness and all the other “bad” parts of our nature?
And what of Ardrey, whose good words about group selection no longer seem quite as “totally and utterly wrong” as Pinker suggested when he swept him under the historical rug? Have the editors at PBS ever even heard of him? We know very well that they have, and that they are also perfectly well aware of his historical significance, because they went to the trouble of devoting a significant amount of time to him in another recent special covering the discovery of Homo naledi. It took the form of a bitter denunciation of Ardrey for supporting the “Killer Ape Theory,” a term invented by the Blank Slaters of yore to ridicule the notion that pre-human apes hunted and killed during the evolutionary transition from ape to man. This revealing lapse demonstrated the continuing strength of the obsession with the “unperson” Ardrey, the man who was “totally and utterly wrong.” That obsession continues, not only among ancient, unrepentant Blank Slaters, but among behavioral scientists in general who happen to be old enough to know the truth about what happened in the 15 years before Wilson published Sociobiology, in spite of Pinker’s earnest attempt to turn that era into an historical “Blank Slate.”
Dragging in Ardrey was revealing because, in the first place, it was irrelevant in the context of a special about Homo naledi. As far as I know, no one has published any theories about the hunting behavior of that species one way or the other. It was revealing in the second place because of the absurdity of bringing up the “Killer Ape Theory” at all. That straw man was invented back in the 60s, when it was universally believed, even by Ardrey himself, that chimpanzees were, as Ashley Montagu put it, “non-aggressive vegetarians.” That notion, however, was demolished by Jane Goodall, who observed chimpanzees both hunting and killing, not to mention their capacity for extremely aggressive behavior. Today, few people like to mention the vicious, ad hominem attacks she was subjected to at the time for publishing those discoveries, although those attacks, too, are amply documented for anyone who cares to look for them. In the ensuing years, even the impeccably PC Scientific American has admitted the reality of hunting behavior in early man. In other words, the “Killer Ape Theory” debate has long been over, and Ardrey, who spelled out his ideas on the subject in his last book, The Hunting Hypothesis, won it hands down.
Why does all this matter? It seems to me the integrity of historical truth is worth defending in its own right. Beyond that, there is much to learn from the Blank Slate affair and its aftermath regarding the integrity of science itself. It is not invariably self-correcting. It can become derailed, and occasionally outsiders must play an indispensable role in putting it back on the tracks. Ideology can trump reason and common sense, and it did in the behavioral sciences for a period of more than half a century. Science is not infallible. In spite of that, it is still the best way of ferreting out the truth our species has managed to come up with so far. We can’t just turn our back on it, because, at least in my opinion, all of the alternatives are even worse. As we do science, however, it would behoove us to maintain a skeptical attitude and watch for signs of ideology leaking through the cracks.
I note in passing that excellent readings of all of Ardrey’s books are now available at Audible.com.
The Moral Philosophy of G. E. Moore, or Why You Don’t Need to Bother with Aristotle, Hegel, and Kant
Posted on November 7th, 2015
G. E. Moore isn’t exactly a household name these days, except perhaps among philosophers. You may have heard of his most famous concoction, though – the “naturalistic fallacy.” If we are to believe Moore, not only Aristotle, Hegel and Kant, but virtually every other philosopher you’ve ever heard of got morality all wrong because of it. He was the first one who ever got it right. On top of that, his books are quite thin, and he writes in the vernacular. When you think about it, he did us all a huge favor. Assuming he’s right, you won’t have to struggle with Kant, whose sentences can run on for a page and a half before you finally get to the verb at the end, and who is comprehensible, even to Germans, only in English translation. You won’t have to agonize over the correct interpretation of Hegel’s dialectic. Moore has done all that for you. Buy his books, which are little more than pamphlets, and you’ll be able to toss out all those thick tomes and learn all the moral philosophy you will ever need in a week or two.
Or at least you will if Moore got it right. It all hinges on his notion of the “Good-in-itself.” He claims it’s something like what philosophers call qualia. Qualia are the content of our subjective experiences, like colors, smells, pain, etc. They can’t really be defined, but only experienced. Consider, for example, the difficulty of explaining “red” to a blind person. Moore’s description of the Good is even more vague. As he puts it in his rather pretentiously named Principia Ethica,
Let us, then, consider this position. My point is that ‘good’ is a simple notion, just as ‘yellow’ is a simple notion; that, just as you cannot, by any manner of means, explain to any one who does not already know it, what yellow is, so you cannot explain what good is.
In other words, you can’t even define good. If that isn’t slippery enough for you, try this:
They (metaphysicians) have always been much occupied, not only with that other class of natural objects which consists in mental facts, but also with the class of objects or properties of objects, which certainly do not exist in time, are not therefore parts of Nature, and which, in fact, do not exist at all. To this class, as I have said, belongs what we mean by the adjective “good.” …What is meant by good? This first question I have already attempted to answer. The peculiar predicate, by reference to which the sphere of Ethics must be defined, is simple, unanalyzable, indefinable.
Or, as he puts it elsewhere, the Good doesn’t exist. It just is. Which brings us to the naturalistic fallacy. If, as Moore claims, Good doesn’t exist as a natural, or even a metaphysical, object, it can’t be defined with reference to such an object. Attempts to so define it are what he refers to as the naturalistic fallacy. That, in his opinion, is why every other moral philosopher in history, or at least all the ones whose names happen to turn up in his books, have been wrong except him. The fallacy is defined at Wiki and elsewhere on the web, but the best way to grasp what he means is to read his books. For example,
The naturalistic fallacy always implies that when we think “This is good,” what we are thinking is that the thing in question bears a definite relation to some one other thing.
That fallacy, I explained, consists in the contention that good means nothing but some simple or complex notion, that can be defined in terms of natural qualities.
To hold that from any proposition asserting “Reality is of this nature” we can infer, or obtain confirmation for, any proposition asserting “This is good in itself” is to commit the naturalistic fallacy.
In short, all the head scratching of all the philosophers over thousands of years about the question of what is Good has been so much wasted effort. Certainly, the average layman had no chance at all of understanding the subject, or at least he didn’t until the fortuitous appearance of Moore on the scene. He didn’t show up a moment too soon, either, because, as he explains in his books, we all have “duties.” It turns out that not only did the intuition “Good” pop up in his consciousness, more or less after the fashion of “yellow,” or the smell of a rose, but he also “intuited” that it came fully equipped with the power to dictate to other individuals what they ought and ought not to do. Again, I’ll allow the philosopher to explain.
Our “duty,” therefore, can only be defined as that action, which will cause more good to exist in the Universe than any possible alternative… When, therefore, Ethics presumes to assert that certain ways of acting are “duties” it presumes to assert that to act in those ways will always produce the greatest possible sum of good.
But how on earth can we ever even begin to do our duty if we have no clue what Good is? Well, Moore is actually quite coy about explaining it to us, and rightly so, as it turns out. When he finally takes a stab at it in Chapter VI of Principia, it turns out to be paltry enough. Basically, it’s the same “pleasure,” or “happiness” that many other philosophers have suggested, only it’s not described in such simple terms. It must be part of what Moore describes as an “organic whole,” consisting not only of pleasure itself, for example, but also a consciousness capable of experiencing the pleasure, the requisite level of taste to really appreciate it, the emotional equipment necessary to react with the appropriate level of awe, etc. Silly old philosophers! They rashly assumed that, if the Good were defined as “pleasure,” it would occur to their readers that they would have to be conscious in order to experience it without them spelling it out. Little did they suspect the coming of G. E. Moore and his naturalistic fallacy.
When he finally gets around to explaining it to us, we gather that Moore’s Good is more or less what you’d expect the intuition of Good to be in a well-bred English gentleman endowed with “good taste” around the turn of the 20th century. His Good turns out to include nice scenery, pleasant music, and chats with other “good” people. Or, as he put it somewhat more expansively,
We can imagine the case of a single person, enjoying throughout eternity the contemplation of scenery as beautiful, and intercourse with persons as admirable, as can be imagined.
By far the most valuable things which we know or can imagine, are certain states of consciousness, which may be roughly described as the pleasures of human intercourse and the enjoyment of beautiful objects. No one, probably, who has asked himself the question, has ever doubted that personal affection and the appreciation of what is beautiful in Art or Nature, are good in themselves.
Really? No one? One can only surmise that Moore’s circle of acquaintance must have been quite limited. Unsurprisingly, Beethoven’s Fifth is in the mix, but only, of course, as part of an “organic whole.” As Moore puts it,
What value should we attribute to the proper emotion excited by hearing Beethoven’s Fifth Symphony, if that emotion were entirely unaccompanied by any consciousness, either of the notes, or of the melodic and harmonic relations between them?
It would seem, then, that even if you’re such a coarse person that you can’t appreciate Beethoven’s Fifth yourself, it is still your “duty” to make sure that it’s right there on everyone else’s smart phone.
Imagine, if you will, Mother Nature sitting down with Moore, holding his hand, looking directly into his eyes, and revealing to him in all its majesty the evolution of life on this planet, starting from the simplest, one celled creatures more than four billion years ago, and proceeding through ever more complex forms to the almost incredible emergence of a highly intelligent and highly social species known as Homo sapiens. It all happened, she explains to him with a look of triumph on her face, because, over all those four billion years, the chain of life remained unbroken because the creatures that made up the links of that chain survived and reproduced. Then, with a serious expression on her face, she asks him, “Now do you understand the reason for the existence of moral emotions?” “Of course,” answers Moore, “they’re there so I can enjoy nice landscapes and pretty music.” (Loud forehead slap) Mother Nature stands up and walks away shaking her head, consoling herself with the thought that some more advanced species might “get it” after another million years or so of natural selection.
And what of Aristotle, Hegel and Kant? Throw out your philosophy books and forget about them. Imagine being so dense as to commit the naturalistic fallacy!
Posted on October 30th, 2015
One cannot make truth claims about morality, because moral perceptions are subjective manifestations of evolved behavioral traits. That fact should have been obvious to any rational human being shortly after the publication of The Origin of Species in 1859. It was certainly obvious enough to Darwin himself. Edvard Westermarck spelled it out for anyone who still didn’t get it in his The Origin and Development of the Moral Ideas, published in 1906. More than a century later one might think it should be obvious to any reasonably intelligent child. Alas, most of us still haven’t caught on. We still take our occasional fits of virtuous indignation seriously, and expect everyone else to take them seriously, too. As for the “experts” who have assumed the responsibility of explaining to the rest of us when our fits are “really” justified, and when not, well, it seems they’ve never heard of a man named Darwin. Or at least it does to anyone who takes the trouble to thumb through the pages of the journal Ethics.
You might describe Ethics as a playground for academic practitioners of moral philosophy. They use it to regale each other with articles full of rarefied hair splitting and arcane jargon describing the flavor of morality they happen to prefer at the moment. Of course, it also serves as a venue for accumulating the publications upon which academic survival depends. Look through the articles in any given issue, and you’ll find statements like the following:
The reasons why actions are right or wrong sometimes are relatively straightforward, and then explicit moral understanding may be quite easy to achieve.
Since almost all civilians are innocent in war, and since killing innocent civilians is worse than killing soldiers, killing civilians is worse than killing soldiers.
We are constrained, it seems, not only not to treat others in certain ways, but to do so because they have the moral standing to demand that we do so, and to hold us accountable for wronging them if we fail.
Some deontologists claim that harm-enabling is a species of harm-allowing. Others claim that while harm-enabling is properly classified as a species of harm-doing, it is nonetheless morally equivalent, all else equal, to harm-allowing.
Do you notice the common thread here? That’s right! All these statements are dependent on the tacit assumption that there actually is such a thing as moral truth. In the first that assumption comes in the form of a statement that implies that what we call “good” and “evil” actually exist as objective things. In the second it comes in the form of an assumption that there is an objective way to determine guilt or innocence. In the third it manifests itself as a belief that the moral emotions can jump out of the skull of one individual and acquire “standing,” so that they apply to other individuals as well. In the fourth, it turns up in the form of a standard by which it can be determined whether acts are “morally equivalent” or not. Westermarck cut through the fog obfuscating the basis of such claims in the first chapter of his book. As he put it,
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion. The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.
In other words, all the learned articles on the merits of this or that moral system in the pages of Ethics and similar journals are more or less the equivalent of a similar number of articles on the care and feeding of unicorns, or the number of persons, natures and wills of imaginary super-beings. Why don’t these people face the obvious? Well, perhaps first and foremost, because it would put them out of a job. Beyond that, all their laboriously acquired “expertise” would become as futile as the expertise of physicians in the 18th century on the proper technique for bleeding patients suffering from smallpox. For that matter, most of them probably believe their own cant. As Julius Caesar, among many others, pointed out long ago, human beings tend to believe what they want to believe.
Morality is what it is, and won’t become something different even if the articles in learned journals on the subject multiply until the stack reaches the moon. What would happen if the whole world suddenly accepted the fact? Very little, I suspect. We don’t behave morally the way we do because of the scribblings of this or that philosopher. We behave the way we do because that is our nature. Accepting the truth about morality wouldn’t result in a chaos of moral relativism, or an astronomical increase in crime, or even a sudden jolt of the body politic to the right or the left of the political spectrum. With luck, a few people might start considering the implications of the truth, and point out that all the virtue posturing and outbursts of pious wrath that are such a pervasive feature of the age we live in are more or less equivalent to the tantrums of children. The result might be a world that is marginally less annoying to live in. I personally wouldn’t mind living in a world in which the posturing of moral buffoons had become more a source of amusement than annoyance.
Posted on October 4th, 2015 No comments
There’s a reason that the Blank Slaters clung so bitterly to their absurd orthodoxy for so many years. If there is such a thing as human nature, then all the grandiose utopias they concocted for us over the years, from Communism on down, would vanish like so many mirages. That orthodoxy collapsed when a man named Robert Ardrey made a laughing stock of the “men of science.” In this enlightened age, one seldom finds an old school, hard core Blank Slater outside of the darkest, most obscure rat holes of academia. Even PBS and Scientific American have thrown in the towel. Still, one occasionally runs across “makeovers” of the old orthodoxy, in the guise of what one might call Blank Slate Lite.
I recently discovered just such an artifact in the pages of Ethics magazine, which functions after a fashion as an asylum for “experts in ethics” who still cling to the illusion that they have anything relevant to say. Entitled The Limits of Evolutionary Explanations of Morality and Their Implications for Moral Progress, it was written by Prof. Allen Buchanan of Duke and King’s College London, and Asst. Prof. Russell Powell of Boston University. Unfortunately, it’s behind a pay wall, and is quite long, but if you’re the adventurous type you might be able to access it at a local university library. In any case, the short version of the paper might be summarized as follows:
Conservatives have traditionally claimed that “human nature places severe limitations on social and moral reform,” but have “offered little in the way of scientific evidence to support this claim.” Now, however, a newer breed of conservatives, known as “evoconservatives,” has “attempted to fill this empirical gap in the conservative argument by appealing to the prevailing evolutionary explanation of morality to show that it is unrealistic to think that cosmopolitan and other ‘inclusivist’ moral ideals can meaningfully be realized.” However, while evolved psychology can’t be discounted in moral theory, and there is such a thing as human nature, both are supposedly so plastic and malleable that they don’t stand in the way of moral progress.
This, at least, is the argument until one gets to the “Conclusion” section at the end. Then, as if frightened by their own hubris, the authors make noises in a quite contradictory direction, writing, for example,
…we acknowledge that evolved psychological capacities, interacting with particular social and institutional environments, can pose serious obstacles to using our rationality in ways that result in more inclusive moralities. For example, environments that mirror conditions of the EEA (environment of evolutionary adaptation, i.e., the environment in which moral behavioral predispositions presumably evolved, ed.)—such as those characterized by great physical insecurity, high parasite threat, severe intergroup competition for resources, and a lack of institutions for peaceful, mutually beneficial cooperation—will tend to be very unfriendly to the development of inclusivist morality.
However, they conclude triumphantly with the following:
At the same time, however, we have offered compelling reasons, both theoretical and empirical, to believe that human morality is only weakly constrained by human evolutionary history, leaving the potential for substantial moral progress wide open. Our point is not that human beings have slipped the “leash” of evolution, but rather that the leash is far longer than evoconservatives and even many evolutionary psychologists have acknowledged—and no one is in a position at present to know just how elastic it will turn out to be.
Students of the Blank Slate orthodoxy will see that all the main shibboleths are still there, if in somewhat attenuated form. The Blank Slate itself is replaced by a “long leash.” The “genetic determinist” strawman of the Blank Slaters is replaced by “evoconservatives.” These evoconservatives are no longer “fascists and racists,” but merely a nuisance standing in the way of “moral progress.” The overriding goal is no longer anything like the Marxist paradise on earth, but the somewhat less inspiring continued “development of inclusivist morality.”
Readers of this blog should immediately notice the unwarranted assumption that there actually is such a thing as “moral progress.” In that case, there must be a goal towards which morality is progressing. Natural selection occurs without any such goal or purpose. It follows that the authors must implicitly assume some “mysterious, transcendental” origin other than natural evolution to account for this progress. However, they insist they don’t believe in any such “mysterious, transcendental” source. How, then, do they account for the existence of this “thing” they refer to as “moral progress?” What the authors are really referring to when they speak of “moral progress” is “the way we and other good liberals want things.”
By “inclusivist” moralities, the authors mean versions that can be expanded to include very large subsets of the human population that are neither kin to the bearers of that morality nor members of any identifiable group that is likely to reciprocate their good deeds. Presumably the ultimate goal is to expand these subsets to “include” all mankind. The “evoconservatives” we are told, deny the possibility of such “inclusivism” in spite of the fact that one can cite many obvious examples to the contrary. At this point, one begins to wonder who these obtuse evoconservatives really are. The authors are quite coy about identifying them. The footnote following their first mention merely points to a blurb about what the authors will discuss later in the text. No names are named. Much later in the text Jonathan Haidt is finally identified as one of the evoconservatives. As the authors put it,
Leading psychologist Jonathan Haidt, who has stressed the moral psychological significance of in-group loyalty, expresses a related view: ‘It would be nice to believe that we humans were designed to love everyone unconditionally. Nice, but rather unlikely from an evolutionary perspective. Parochial love—love within groups—amplified by similarity, a sense of shared fate, and the suppression of free riders, may be the most we can accomplish.’
In fact, as anyone who has actually read Haidt is aware, he neither believes that “inclusivist” moralities as defined by the authors are impossible, nor does this quote imply anything of the sort. A genuine conservative would doubtless classify Haidt as a liberal, but he has defended, or at least tried to explain, conservative moralities. Apparently that is sufficient to cast him into the outer darkness as an “evoconservative.”
The authors also point the finger at Larry Arnhart. Arnhart is neither a geneticist, nor an evolutionary biologist, nor an evolutionary psychologist, but a political scientist who apparently subscribes to some version of the naturalistic fallacy. Nowhere is it demonstrated that he actually believes that the inclusivist versions of morality favored by the authors are impossible. In a word, the few slim references to individuals who are supposed to fit the description of the evoconservative strawman concocted by the authors actually do nothing of the sort. Yet in spite of the fact that the authors can’t actually name anyone who explicitly embraces their version of evoconservatism, they describe the existence of “inclusivist morality” as a “major flaw in evoconservative arguments.”
A bit later, the authors appear to drop their evoconservative strawman, and expand their field of fire to include anyone who claims that “inclusivist morality” could have resulted from natural selection. For example, quoting from the article:
The key point is that none of these inclusivist features of contemporary morality are plausibly explained in standard selectionist terms, that is, as adaptations or predictable expressions of adaptive features that arose in the environment of evolutionary adaptation (EEA).
Here, “evoconservatives” have been replaced by “standard selectionists.” Invariably, the authors walk back such seemingly undistilled statements of Blank Slate ideology with assurances that no one believes more firmly than they in the evolutionary roots of morality. That, of course, begs the question of how “these inclusivist features,” if they are not explainable in “standard selectionist terms,” are plausibly explained in “non-standard selectionist terms,” and who these “non-standard selectionists” actually are. Apparently the only alternative is that the “inclusivist features” have a “transcendental” explanation, not further elaborated by the authors. This conclusion is not as far-fetched as it seems. Interestingly enough, the authors’ work is partially funded by the Templeton Foundation, an accommodationist outfit with the ostensible goal of proving that religion and science are not mutually exclusive.
In fact, I know of not a single scientist whose specialty is germane to the subject of human morality who would dispute the existence of inclusive moralities. The authors limit themselves to statements to the effect that the work of such and such a person “suggests” that they don’t believe in inclusive moralities, or that the work of some other person “implies” that they don’t believe such moralities are stable. Wouldn’t it be more reasonable to simply go and ask these people what they actually believe regarding these matters, instead of putting words in their mouths?
Left out of all these glowing descriptions of inclusive moralities is the fact that not a single one of them exists without an outgroup. That fact is demonstrated by the authors themselves, whose outgroup obviously includes those they identify as “evoconservatives.” One might also point out that those who have “inclusive” ingroups commonly have “inclusive” outgroups as well, and liberals are commonly found among the most violent outgroup haters on the planet. To confirm this, one need only look at the comments at the websites of Daily Kos, or Talking Points Memo, or the Nation, or any other familiar liberal watering hole.
While I’m somewhat dubious about all the authors’ loose talk about “moral progress,” I think we can at least identify some real progress towards getting at the truth in their version of Blank Slate Lite. After all, it’s a far cry from the old school version. Throughout the article the authors question the ability of natural selection in the environment in which moral behavior presumably evolved in early humans to account for this or that feature of their observed “inclusive morality.” As noted above, however, as often as they do it, they are effusive in assuring the reader that by no means do they wish to imply that they find any fault whatsoever with innate theories of human morality. In the end, what more can one ask than the ability to continue seeking the truth about human moral behavior in every relevant area of science without fear of being denounced and intimidated as guilty of one type of villainy or another? That ability seems more assured if the existence of innate behavior is at least admitted, and is therefore unlikely to be criminalized as it was in the heyday of the Blank Slate. In that respect, Blank Slate Lite really does represent progress.
Of course, there remains the question of why so many of us still take seriously the authors’ fantasies about “moral progress” more than a century after Westermarck pointed out the absurdity of truth claims about morality. I suspect the answer lies in the fact that ending the charade would reduce all the pontifications of all the “experts in morality” catered to by learned journals like Ethics to gibberish. Experts don’t like to be confronted with the truth that their painstakingly acquired expertise is irrelevant. Admitting it would make it a great deal harder to secure grants from the Templeton Foundation.
UPDATE: I failed to mention another intriguing paragraph in the paper that reads as follows:
The human capacity to reflect on and revise our conceptions of duty and moral standing can give us reasons here and now to expand our capacities for moral behavior by developing institutions that economize on sympathy and enhance our ability to take the interests of strangers into account. This same capacity may also give us reasons, in the not-too-distant future, to modify our evolved psychology through the employment of biomedical interventions that enable us to implement new norms that we develop as a result of the process of reflection. In both cases, the limits of our evolved motivational capacities do not translate into a comparable constraint on our capacity for moral action. The fact that we are not currently motivationally capable of acting on the considered moral norms we have come to endorse is not a reason to trim back those norms; it is a reason to enhance our motivational capacity, either through institutional or biomedical means, so that it matches the demands of our considered morality.
Note the wording about “biomedical interventions” and enhancing “our motivational capacity.” I’m not sure what to make of it, dear reader, but it appears that, one way or another, the authors intend to “get our minds right.”
Posted on October 2nd, 2015 5 comments
If you’re a regular reader of this blog, you know my take on morality. It is the manifestation of a subset of our suite of innate behavioral traits. The traits in question exist because they evolved. Absent those traits, morality as we know it would not exist. It follows that attempts to apply moral emotions in order to solve complex problems that arise in an environment that is radically different from the one in which the innate, “root causes” of morality evolved are irrational. That, however, is precisely how the Europeans are attempting to deal with an unprecedented flood of culturally and genetically alien refugees. The result is predictable – a classic morality inversion.
As Jonathan Haidt put it in his well-known paper, “The Emotional Dog and Its Rational Tail,”
…moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached.
In other words, the “emotional dog” makes the judgment. Only after the judgment has been made does the “rational tail” begin “wagging the dog,” concocting good sounding “reasons” for the judgment. One can get a better idea of what’s really going on by tracking down the source of the moral emotions involved.
Let’s consider, then, what’s going on inside the “pro-refugee” brain. As in every other brain, the moral machinery distinguishes between ingroup and outgroup(s). In this case these categories are perceived primarily in ideological terms. The typical pro-refugee individual is often a liberal, as that rather slippery term is generally understood in the context of 21st century western democracies. Such specimens will occasionally claim that they have expanded their ingroup to include “all mankind,” so that it is no longer possible for them to be “haters.” Nothing could be further from the truth. The outgroup have ye always with you. It comes with the human behavioral package.
If anything, the modern liberal hates more violently than any other subgroup. He commonly hates the people within his own culture who disagree with the shibboleths of his ideology. Those particular “others,” almost always constitute at least a part of his outgroup. Outside of his own culture, ideology matters much less as a criterion of outgroup identification, as demonstrated, for example, by the odd affinity between many Western liberals and radical Islamists.
Beyond that, however, he is hardly immune from the more traditional forms of tribalism. For example, European liberals typically hate the United States. The intensity of that hatred tends to rise and fall over time, but can sometimes reach almost incredible levels. The most recent eruption occurred around the year 2000. Interestingly enough, one of the most spectacular examples occurred in Germany, the very country that now takes the cake for moralistic grandstanding in the matter of refugees. Der Spiegel, its number one news magazine, was certainly in the avant-garde of the orgasm of hatred. It was often difficult to find any news about Germany on the homepage of its website, so filled was it with furious, spittle-flinging rants about the imagined evils of “die Amerikaner.” However, virtually every other major German “news” outlet, whether it was nominally “liberal” or “conservative,” eventually joined the howling pack. The most vicious examples of anti-American hate were typically found in just those publications that are now quick to denounce German citizens who express concern about the overwhelming waves of refugees now pouring into the country as “haters.”
On the other hand, refugees, or at least those of the type now pouring into Europe, seldom turn up in any of these common outgroups of the modern liberal. They land squarely in his ingroup. Humans are generally inclined to help ingroup members who, like the refugees, appear to be in trouble. This is doubly true of the liberal, who piques himself on what he imagines to be his moral superiority. Furthermore, as the refugees can be portrayed as victims of colonialism and imperialism, one might say they are a “most favored subset” of the ingroup. Throw in a few pictures of drowned children, impoverished women begging for help, etc., and all the moral ingredients are there to render the liberal an impassioned defender of the masses of humanity drawing a bead on his country. Nothing gives him more self-righteous joy than imagining himself a “savior.” This explains the fact that liberals are eternally in the process of “saving” one group of unfortunates or another without ever getting around to accomplishing anything actually recognizable as salvation. All the pleasure is in the charade. We find the same phenomenon whether it’s a matter of “saving” the environment, “saving” the planet from global warming, or “saving” the poor. For the liberal, the pose is everything, and the reality nothing.
Which brings us back to the theme of this post. All the sublime moral emotions now at play in the “salvation” of the refugees have an uncanny resemblance to many other instances of moral behavior as practiced by the modern liberal. They have a tendency to favor an outcome which is the opposite of what the same moral emotions accomplished at an earlier time, and that led to their preservation by natural selection to begin with. In a word, as noted above, we are witnessing yet another classic morality inversion.
Why an inversion? At the most fundamental level, because it will lead to the diminution or elimination of the genes whose survival a similar response once favored. At the moment, the pro-refugee side is calling the shots. It controls the governments of all the major European states. All of them more or less fit the pattern described above, whether they are nominally “liberal” or “conservative.” Indeed, foremost among them is Germany’s “conservative” regime, which has positively invited a flood of alien refugees across its borders. Based on historical precedent, the outcome of all this altruism isn’t difficult to foresee. In terms of “culture” it will be a future of ethnic and religious strife, possibly leading to civil war. Genetically, it amounts to an attempt at ethnic suicide. I am well aware that these outcomes are disputed by those promoting the refugee inundation. However, I consider it pointless to argue about it. I am content to let history judge.
While we bide our time waiting for the train wreck to unfold, it may be of interest to examine some of the techniques being used to maintain this remarkable instance of moralistic play-acting. I take most of my examples from the German media, which includes some of the most avid refugee cheerleaders. Predictably, outgroup vilification is part of the mix. As noted above, anyone who objects to the flood of refugees is almost universally denounced as a “hater” by just those people who wear their own virulent hatreds on their sleeves while pretending they don’t exist. Of course, there are also the usual hackneyed violations of Godwin’s Law. For example, Jacob Augstein, leftwing stalwart for Der Spiegel, denounces them as “Browns” (i.e., brownshirts, Nazis) in a recent column. On the “positive” side, the “conservative” Frankfurter Allgemeine Zeitung optimistically suggests that the refugees will promote economic growth. According to another article in Der Spiegel, the eastern Europeans, who are not quite so refugee-friendly as the Germans, are “blowing their chance.” The ominous byline reads,
Europe is shrinking. The demographic downtrend is particularly dramatic in the eastern part of the continent, where the population is literally dying out. In spite of that, Hungary, Poland and company are resisting immigration. They will regret it.
In other words, before turning out the lights and committing suicide, the eastern Europeans should make sure an alien culture is in place to take over their territories when they’re gone. Of course, this flies in the face of the impassioned rhetoric the liberals have been feeding us about the need to reduce the surface population if we are to have an environmentally sustainable planet.
I note in passing that the European elites that are driving this process now seem to have taken a step back from the brink. They are having second thoughts. They realize that they don’t have their populations behind them, and that their defiance of popular opinion might eventually threaten their own power. As a result, the number of news articles about the refugees and their plight is only a shadow of what it was only a few weeks ago. Mild reservations about refugee wowserism are starting to appear even in such gung-ho forums as Der Spiegel where, as I write this, the lead article on their homepage is entitled “Now Things Are Getting Uncomfortable.” Ya think!? The byline reads,
There is a change in tone in the refugee crisis. SPD (German Social Democratic Party) chief Gabriel warns about limits to Germany’s ability to absorb refugees. Minister of the Interior de Maizière deplores the misbehavior of many migrants. The pressure on Chancellor Merkel is increasing.
“Ought” the Europeans to alter their behavior? Is what they consider “good” really “evil?” Are they ignoring the real “goal” of natural selection? Certainly not, at least from an objective point of view. There is no objective criterion for determining what anyone “ought” to do, any more than there is an objective way to distinguish between things, such as good and evil, that have no objective existence. They are hardly failing to move towards the “goal” of natural selection, since that process does not have either a purpose or a goal. As you may have gathered, my own subjective whim is to oppose unlimited immigration. I have, however, not the slightest basis for declaring that anyone who doesn’t agree with me is “evil.” At best, I can try to explain my own whims.
I’m what you might call a moral compatibilist. I see myself sitting at the end of a chain of life spawned by genetic material that has evolved over a period of more than three billion years, surviving and reproducing over that incredible gulf of time via an almost infinite array of successive forms, culminating in the species to which I now belong. I consider the whole process, and the universe I live in, awesome and wonderful. Subjectively, it seems to me “good” to act in a way that is compatible with the natural processes that have given me life. It follows that, from my own, individual, subjective point of view, I “should” seek to preserve that life and pass it on into the indefinite future.
I have not the slightest basis for claiming that “my way” is better than the whimsical behavior of those I see around me exultantly pursuing their morality inversions. At best, I must limit myself to observing that “my way” seems more consistent.
Posted on September 10th, 2015 18 comments
Richard Dawkins recently tweeted the following:
US has as much moral duty to accept Syrian refugees as Europe. If not more.
It’s too bad Socrates isn’t still around to “learn” the nature of this “moral duty” from Dawkins the same way he did from Euthyphro. I’m sure the resulting dialog would have been most amusing.
Where on earth does an atheist like Dawkins get the idea that there is such a thing as moral duty? I doubt that he has even thought about it. After all, if moral duty is not just a subjective figment of his imagination and is capable of acquiring the legitimacy to apply not only to himself, but to the entire population of the United States as well, it must somehow exist as an entity in itself. How else could it acquire that legitimacy? There is no logical justification for the claim that mere subjective artifacts of the consciousness of Richard Dawkins, or any other human individual for that matter, are born automatically equipped with the right to dictate “oughts” to other individuals. They cannot possibly acquire the necessary legitimacy simply by virtue of the fact that the physical processes in the brain responsible for their existence have occurred. In what form, then, does “moral duty” exist as an independent thing-in-itself? To claim that “moral duty” is not a thing, or an object, is tantamount to admitting that it doesn’t exist. In what other form can it possibly manifest itself? As a spirit? If that is Dawkins’ claim, then he is every bit as religious as the most delusional speaker in tongues. As dark matter, perhaps? If so then Dawkins must know more about it than the world’s best physicists.
We’re not talking about a deep philosophical issue here. I really can’t understand why the question doesn’t occur immediately to anyone who claims to be an atheist. (Of course, it should occur to religious believers as well, as noted by Socrates well over 2000 years ago. However, the response that they have a “moral duty” because they don’t want to burn in hell for quintillions of years is at least worth considering). In any case, the question certainly occurred to me shortly after I became an atheist at the age of 12. Then, as now, the world was infested with what are commonly referred to today as Social Justice Warriors. Then, as now, they were in a constant state of outrage over one thing or another. And then, as now, they expected the rest of the world to take their tantrums of virtuous indignation seriously. Is it really irrational to pose the simple question, “Why?” I asked myself that question, and quickly came to the conclusion that these people are charlatans.
The question remains just as relevant today as it was then, whether one accepts Darwinian explanations for the origin of morality or not. However, for atheists who have some respect for the methods of science, I would claim that natural selection is at once the most logical as well as the most parsimonious explanation for the existence of morality. It is the root cause from which spring all its gaudy and multifarious guises. If that is the case, then one can only speak of morality in scientific terms as a manifestation of evolved behavioral predispositions. As such, there is no possible way for it to acquire objective legitimacy. In other words, the claim that all Americans, or any other subset of the human population, are subject to a genuine “moral duty” of any kind is a mirage. If anything, this would appear to be doubly true in the case claimed by Dawkins. It is yet another instance of what I have previously referred to as a “morality inversion.” “Morality” is invoked as the reason for doing things that accomplish the opposite of that which accounts for the very existence of morality to begin with.
What? You don’t agree with me? Well, if “moral duties” are not made of anything, then they don’t exist. If they do exist, they must be objects of some kind; they must be made of something. By all means, go out and capture a free-range “moral duty,” and prove me wrong. Show it to me! I hope it’s green. That’s my favorite color.
Posted on August 9th, 2015 2 comments
…the primary moral goal for today’s bioethics can be summarized in a single sentence. Get out of the way.
I would strengthen that a bit to something like, “Stop the mental masturbation and climb back into the real world.” At some level Pinker is aware of the fact that bioethicists and other “experts” in morality are not nearly as useful to the rest of us as they think they are. He just doesn’t understand why. As a result he makes the mistake of conceding the objective relevance of morality in solving problems germane to the field of biotechnology. The fundamental problem is that these people are chasing after imaginary objects, things that aren’t real. They have bamboozled the rest of us into taking them seriously because we have been hoodwinked by our emotional baggage just as effectively as they have. There is no premium on reality as far as evolution is concerned. There is a premium on survival. We perceive “good” and “evil” as real objects, not because they actually are real objects, but because our ancestors were more likely to pass on the relevant genes if they perceived these fantasies as real things. Bioethics is just one of the many artifacts of this delusion.
Consider what the bioethicists are really claiming. They are saying that mental impressions that exist because they happened to improve the evolutionary fitness of a species of advanced, highly social, bipedal apes correspond to real things, commonly referred to as “good” and “evil,” that have some kind of an objective existence independent of the minds of those creatures. Not only that, but if one can but capture these objects, which happen to be extremely elusive and slippery, one can apply them to make decisions in the field of biotechnology, which didn’t exist when the mental equipment that gives rise to the impressions in question evolved. Consider these extracts from the online conversation:
Forget Tuskegee. Forget Willowbrook and Holmesburg Prison. Pay no attention to the research subjects who died at Kano, Auckland Women’s Hospital or the Fred Hutchinson Cancer Center. Never mind about Jesse Gelsinger, Ellen Roche, Nicole Wan, Tracy Johnson or Dan Markingson. According to Steven Pinker, “we already have ample safeguards for the safety and informed consent of patients and research subjects.” So bioethicists should just shut up about abuses and let smart people like him get on with their work.
Indeed, biotechnology has moral implications that are nothing short of stupendous. But they are not the ones that worry the worriers.
What we need is less obstruction of good and ethical research, as Pinker correctly observes, and more vigilance at picking up unethical research. This requires competent, professional and trained bioethicists and improvement of ethics review processes.
Daniel K. Sokol, also at Practical Ethics:
The idea that research that has the potential to cause harm should be subject to ethical review should not be controversial in the 21st century. The words “this project has been reviewed and approved by the Research Ethics Committee” offers some reassurance that the welfare of participants has been duly considered. The thought of biomedical research without ethical review is a frightening one.
A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.”
One imagines oneself in Bedlam. These people are all trying to address what most people would agree is a real problem. They understand that most people don’t want to be victims of anything like the Tuskegee experiments. They also grasp the fact that most people would prefer to live longer, healthier lives. True, these, too, are merely subjective goals, whims if you will, but they are whims that most of us will agree with. The whims aren’t the problem. The problem is that we are trying to apply a useless tool to reach the goals: human moral emotions. We are trying to establish truths by consulting emotions to which no truth claims can possibly apply. Stuart Rennie got it right in spite of himself in his attack on Pinker at his Global Bioethics Blog:
My first reaction was: how is this new bioethics skill taught? Should there be classes that teach it in a stepwise manner, i.e. where you first learn not to butt in, then how to just step a bit aside, followed by somewhat getting out of the way, and culminating in totally screwing off? What would the syllabus look like? Wouldn’t avoiding bioethics class altogether be a sign of success?
Pinker, too, arrives at an entirely rational final sentence in his opinion piece:
Biomedical research will always be closer to Sisyphus than a runaway train — and the last thing we need is a lobby of so-called ethicists helping to push the rock down the hill.
I, too, would prefer not to be a Tuskegee guinea pig. I, too, would like to live longer and be healthier. I simply believe that emotional predispositions that exist because they happened to be successful in regulating the social interactions within and among small groups of hunter-gatherers millennia ago are unlikely to be the best tools to achieve those ends.
Posted on August 8th, 2015 No comments
The nature of morality and the reason for its existence have been obvious for more than a century and a half. Francis Hutcheson demonstrated that it must arise from a “moral sense” early in the 18th century. Hume agreed, and suggested the possibility that there may be a secular explanation for the existence of this moral sense. Darwin demonstrated the nature of this secular explanation for anyone willing to peek over the blindfold of faith and look at the evidence. Westermarck climbed up on the shoulders of these giants, gazed about, and summarized the obvious in his brilliant The Origin and Development of the Moral Ideas. In short, good and evil have no objective existence. They are subjective artifacts of behavioral predispositions that exist because they evolved. Absent that evolved “moral sense,” morality as we know it would not exist. It evolved because it happened to increase the probability that the genes responsible for its existence would survive and reproduce. There exists no mechanism whereby those genes can jump out of the DNA of one individual, grab the DNA of another individual by the scruff of the neck, and dictate what kind of behavior that other DNA should regard as “good” or “evil.”
In the years since Darwin and Westermarck our species has amply demonstrated its propensity to ignore such inconvenient truths. Once upon a time religion provided some semblance of a justification for belief in an objective “good-in-itself.” However, latter day “experts” on ethics and morality have jettisoned such anachronisms, effectively sawing off the branch they were sitting on. Then, with incomparable hubris, they’ve claimed a magical ability to distill objective “goods” and “evils” straight out of the vacuum they were floating in. In our own time the result is visible as a veritable explosion of abstruse algorithms, incomprehensible to all but a few academic scribblers, for doing just that. Encouraged by these “experts,” legions of others have indulged themselves in the wonderfully sweet delusion that the particular haphazard grab bag of emotions they happened to inherit from their ancestors provided them with an infallible touchstone for sniffing out “real good” and “real evil.” The result has been an orgy of secular piety that the religious Puritans of old would have shuddered to behold.
The manifestations of this latter day piety have been bizarre, to say the least. Instead of promoting genetic survival, they accomplish precisely the opposite. Genes that are the end result of an unbroken chain of existence stretching back billions of years into the past now seem intent on committing suicide. It’s not surprising really. Other genes gave rise to an intelligence capable of altering the environment so fast that the rest couldn’t possibly keep up. The result is visible in various forms of self-destructive behavior that can be described as “morality inversions.”
A classic example is the belief that it is “immoral” to have children. Reams of essays, articles, and even books have been written “proving” that, for various reasons, reproduction is “bad-in-itself.” If one searches diligently for the “root cause” of all these counterintuitive artifacts of human nature, one will always find them resting on a soft bed of moral emotions. What physical processes in the brain give rise to these moral emotions, and how, exactly, do they predispose us to act in some ways, but not others? No one knows. It’s a mystery that will probably remain unsolved until we unravel the secret of consciousness. One thing we do know, however. The emotions exist because they evolved, and they evolved because they enhanced the odds that the genes that gave rise to them would reproduce; or at least they did in a particular environment that no longer exists. In the vastly different environment we have now created for ourselves, however, they are obviously capable of promoting an entirely different end, at least in some cases; self destruction.
Of course, self-destruction is not objectively evil because nothing is objectively evil. Neither is it unreasonable, because, as Hume pointed out, reason by itself cannot motivate us to do anything. We are motivated by “sentiments” or “passions” that we experience because it is our nature to experience them. These include the moral passions. Self-destruction is a whim, and reason can be applied to satisfy the whim. I happen to have a different whim. I see myself as a link in a vast chain of millions of living organisms, my ancestors, if you will. All have successfully reproduced, adding another link to the chain. Suppose I were to fail to reproduce, thus becoming the final link in the chain and announcing, in effect, to those who came before me and made my life possible that, thanks to me, all their efforts had ended in a biological dead end. In that case I would see myself as a dysfunctional biological unit or, in a word, sick, the victim of a morality inversion. It follows that I have a different whim: to reproduce. And so I have. There can be nothing that renders my whims in any way objectively superior to those of anyone else. I merely describe them and outline what motivates them. I’m not disturbed by the fact that others have different whims, and choose self-destruction. After all, their choice to remove themselves from the gene pool and stop taking up space on the planet may well be to my advantage.
Another interesting example of a morality inversion is the deep emotional high so many people in Europe and North America seem to get from inviting a deluge of genetically and culturally alien immigrants to ignore the laws of their countries and move in. One can but speculate on the reasons that the moral emotions, mediated by culture as they always are, result in such counterintuitive behavior. There is, of course, such a thing as human altruism, and it exists because it evolved. However, that evolutionary process took place in an environment that made it likely that such behavior would enhance the chances that the responsible genes would survive. People lived in relatively small ingroups surrounded by more or less hostile outgroups. We still categorize others into ingroups and outgroups, but the process has become deranged. Thanks to our vastly expanded knowledge of the world around us combined with vastly improved means of communication, the ingroup may now be perceived as “all mankind.”
Except, of course, for the ever present outgroup. The outgroup hasn’t gone anywhere. It has merely adopted a different form. Now, instead of the clan in the next territory over, the outgroup may consist of liberals, conservatives, Christians, Moslems, atheists, Jews, blacks, whites, or what have you. The many possibilities are familiar to anyone who has read a little history. Obviously, the moral equipment in our brains doesn’t have the least trouble identifying the population of Africa, the Middle East, or Mexico as members of the ingroup, and citizens of one’s own country who don’t quite see them in that light as the outgroup. In that case, anyone who resists a deluge of illegal immigrants is “evil.” If they point out that similar events in the past have led to long periods of ethnic and/or religious strife, occasionally culminating in civil war, or any of the other obvious drawbacks of uncontrolled immigration, they are simply shouted down with the epithets appropriate for describing the outgroup, “racist” being the most familiar and hackneyed example. In short, a morality inversion has occurred. Moral emotions have become dysfunctional, promoting behavior that will almost certainly be self-destructive in the long run. I may be wrong of course. The immigrants now pouring into Europe and North America without apparent limit may all eventually be assimilated into a big, happy, prosperous family. I seriously doubt it. Wait and see.
One could cite many other examples. The faithful, of course, have their own versions, such as removing themselves from the gene pool by acting as human bombs, often taking many others with them in the process. The “good” in this case is the delusional prospect of enjoying the services of 70 of the best Stepford wives ever heard of in the afterlife. Regardless, the point is that the evolved emotional baggage that manifests itself in so many forms as human morality has been left in the dust. It cannot possibly keep up with the frenetic pace of human social and technological progress. The result is morality inversions: behaviors that accomplish more or less the opposite of what they did in the environment in which they evolved. Under the circumstances, the practice of allowing people to wallow in their moral emotions, insisting that they have a monopoly on the “good” and that anyone who opposes them is “evil,” is becoming increasingly problematic. As noted above, I don’t have a problem with these people voluntarily removing themselves from the gene pool. I do have a problem with becoming collateral damage.
Posted on August 2nd, 2015 5 comments
According to the banner on its cover, Ethics is currently “celebrating 125 years.” It describes itself as “an international journal of social, political, and legal philosophy.” Its contributors consist mainly of a gaggle of earnest academics, all chasing about with metaphysical butterfly nets seeking to capture that most elusive quarry, the “Good.” None of them seems to have ever heard of a man named Westermarck, who demonstrated shortly after the journal first appeared that their prey was as imaginary as unicorns, or even Darwin, who was well aware of the fact, but was not indelicate enough to spell it out so blatantly.
The latest issue includes an entry on the “Transmission Principle,” defined in its abstract as follows:
If you ought to perform a certain act, and some other action is a necessary means for you to perform that act, then you ought to perform that other action as well.
As usual, the author never explains how you get to the original “ought” to begin with. In another article entitled “What If I Cannot Make a Difference (and Know It),” the author begins with a cultural artifact that will surely be of interest to future historians:
We often collectively bring about bad outcomes. For example, by continuing to buy cheap supermarket meat, many people together sustain factory farming, and the greenhouse gas emissions of millions of individuals together bring about anthropogenic climate change.
and goes on to note that,
Intuitively, these bad outcomes are not just a matter of bad luck, but the result of some sort of moral shortcoming. Yet in many of these situations, none of the individual agents could have made any difference for the better.
He then demonstrates that, because a equals b, and b equals c, we are still entirely justified in peering down our morally righteous noses at purchasers of cheap meat and emitters of greenhouse gases. His conclusion in academic-speak:
I have shown how Act Consequentialists can find fault with some agent in all cases where multiple agents who have modally robust knowledge of all the relevant facts gratuitously bring about collectively suboptimal outcomes, even if the agents individually cannot make any difference for the better due to the uncooperativeness of others.
The author does not explain the process by which emotions that evolved in a world without cheap supermarket meat have lately acquired the power to prescribe whether buying it is righteous or not.
It has been suggested by some that trading, the exchange of goods and services, is a defining feature of our species. In an article entitled “Markets without Symbolic Limits,” the authors conclude that,
In many cases, we are morally obligated to revise our semiotics in order to allow for greater commodification. We ought to revise our interpretive schemas whenever the costs of holding that schema are significant, without counterweight benefits. It is itself morally objectionable to maintain a meaning system that imbues a practice with negative meanings when that practice would save or improve lives, reduce or alleviate suffering, and so on.
No doubt that very thought occurred to our hunter-gatherer ancestors, enhancing their overall fitness. The happy result was the preservation of the emotional baggage that gave rise to it to later inform the pages of Ethics magazine.
In short, “moral progress,” as reflected in the pages of Ethics, depends on studiously ignoring Darwin, averting our eyes from the profane scribblings of Westermarck, pretending that the recent flood of books and articles on the evolutionary origins of morality and the existence of analogs of human morality in many animals are irrelevant, and gratuitously assuming that there really is some “thing” out there for the butterfly nets to catch. In other words, our “moral progress” has been a progress away from self-understanding. It saddens me, because I’ve always considered self-understanding a “good.” Just another one of my whims.
Posted on June 21st, 2015 33 comments
If we are evolved animals, then it is plausible that we have evolved behavioral traits, and among those traits are a “moral sense.” So much was immediately obvious to Darwin himself. To judge by the number of books that have been published about evolved morality in the last couple of decades, it makes sense to a lot of other people, too. The reason such a sense might have evolved is obvious, especially among highly social creatures such as ourselves. The tendency to act in some ways and not in others enhanced the probability that the genes responsible for those tendencies would survive and reproduce. It is not implausible that this moral sense should be strong, and that it should give rise to such powerful impressions that some things are “really good,” and others are “really evil,” as to produce a sense that “good” and “evil” exist independently as objective things. Such a moral sense is demonstrably very effective at modifying our behavior. It hardly follows that good and evil really are independent, objective things.
If an evolved moral sense really is the “root cause” for the existence of all the various and gaudy manifestations of human morality, is it plausible to believe that this moral sense has somehow tracked an “objective morality” that floats around out there independent of any subjective human consciousness? No. If it really is the root cause, is there some objective mechanism whereby the moral impressions of one human being can leap out of that individual’s skull and gain the normative power to dictate to another human being what is “really good” and “really evil?” No. Can there be any objective justification for outrage? No. Can there be any objective basis for virtuous indignation? No. So much is obvious. Under the circumstances it’s amazing, even given the limitations of human reason, that so many of the most intelligent among us just don’t get it. One can only attribute it to the tremendous power of the moral emotions, the great pleasure we get from indulging them, and the dominant role they play in regulating all human interactions.
These facts were recently demonstrated by the interesting behavior of some of the more prominent intellectuals among us in reaction to some comments at a scientific conference. In case you haven’t been following the story, the commenter in question was Tim Hunt, a biochemist who won a Nobel Prize in 2001 with Paul Nurse and Leland H. Hartwell for discoveries of protein molecules that control the division (duplication) of cells. At a luncheon during the World Conference of Science Journalists in Seoul, South Korea, he averred that women are a problem in labs because “You fall in love with them, they fall in love with you, and when you criticize them, they cry.”
Hunt’s comment evoked furious moral emotions, not least among atheist intellectuals. According to PZ Myers, proprietor of Pharyngula, Hunt’s comments revealed that he is “bad.” Some of his posts on the subject may be found here, here, and here. For example, according to Myers,
Oh, no! There might be a “chilling effect” on the ability of coddled, privileged Nobel prize winners to say stupid, demeaning things about half the population of the planet! What will we do without the ability of Tim Hunt to freely accuse women of being emotional hysterics, or without James Watson’s proud ability to call all black people mentally retarded?
I thought Hunt’s plaintive whines were a big bowl of bollocks.
All I can say is…fuck off, dinosaur. We’re better off without you in any position of authority.
We can glean additional data in the comments to these posts that demonstrate the human version of “othering.” Members of outgroups, or “others,” are not only “bad,” but also typically impure and disgusting. For example,
Glad I wasn’t the only–or even the first!–to mention that long-enough-to-macramé nose hair. I think I know what’s been going on: The female scientists in his lab are always trying hard to not stare at the bales of hay peeking out of his nostrils and he’s been mistaking their uncomfortable, demure behaviour as ‘falling in love with him’.
However, in creatures with brains large enough to cogitate about what their emotions are trying to tell them, the same suite of moral predispositions can easily give rise to stark differences in moral judgments. Sure enough, others concluded that Myers and those who agreed with him were “bad.” Prominent among them was Richard Dawkins, who wrote in an open letter to the London Times,
Along with many others, I didn’t like Sir Tim Hunt’s joke, but ‘disproportionate’ would be a huge underestimate of the baying witch-hunt that it unleashed among our academic thought police: nothing less than a feeding frenzy of mob-rule self-righteousness.
The moral emotions of other Nobel laureates informed them that Dawkins was right. For example, according to the Telegraph,
Sir Andre Geim, of the University of Manchester, who shared the Nobel prize for physics in 2010, said that Sir Tim had been “crucified” by ideological fanatics, and castigated UCL for “ousting” him.
Avram Hershko, an Israeli scientist who won the 2004 Nobel prize in chemistry, said he thought Sir Tim was “very unfairly treated.” He told the Times: “Maybe he wanted to be funny and was jet lagged, but then the criticism in the social media and in the press was very much out of proportion. So was his prompt dismissal — or resignation — from his post at UCL.”
All these reactions have one thing in common. They are completely irrational unless one assumes the existence of “good” and “bad” as objective things rather than subjective impressions. Or would you have me believe, dear reader, that statements like, “fuck off, dinosaur,” and allusions to crucifixion by “ideological fanatics” engaged in a “baying witch-hunt,” are mere cool, carefully reasoned suggestions about how best to advance the officially certified “good” of promoting greater female participation in the sciences? Nonsense! These people aren’t playing a game of charades, either. Their behavior reveals that they genuinely believe, not only in the existence of “good” and “bad” as objective things, but in their own ability to tell the difference better than those who disagree with them. If they don’t believe it, they certainly act like they do. And yet these are some of the most intelligent representatives of our species. One can but despair, and hope that aliens from another planet don’t turn up anytime soon to witness such ludicrous spectacles.
Clearly, we can’t simply dispense with morality. We’re much too stupid to get along without it. Under the circumstances, it would be nice if we could all agree on what we will consider “good” and what “bad,” within the limits imposed by the innate bedrock of morality in human nature. Unfortunately, human societies are now a great deal different than the ones that existed when the predispositions that are responsible for the existence of morality evolved, and they tend to change very rapidly. It stands to reason that it will occasionally be necessary to “adjust” the types of behavior we consider “good” and “bad” to keep up as best we can. I personally doubt that the current practice of climbing up on rickety soap boxes and shouting down anathemas on anyone who disagrees with us, and then making the “adjustment” according to who shouts the loudest, is really the most effective way to accomplish that end. Among other things, it results in too much collateral damage in the form of shattered careers and ideological polarization. I can’t suggest a perfect alternative at the moment, but a little self-knowledge might help in the search for one. Shedding the illusion of objective morality would be a good start.