Posted on August 2nd, 2015
According to the banner on its cover, Ethics is currently “celebrating 125 years.” It describes itself as “an international journal of social, political, and legal philosophy.” Its contributors consist mainly of a gaggle of earnest academics, all chasing about with metaphysical butterfly nets seeking to capture that most elusive quarry, the “Good.” None of them seems to have ever heard of a man named Westermarck, who demonstrated shortly after the journal first appeared that their prey was as imaginary as unicorns, or even Darwin, who was well aware of the fact, but was not indelicate enough to spell it out so blatantly.
The latest issue includes an entry on the “Transmission Principle,” defined in its abstract as follows:
If you ought to perform a certain act, and some other action is a necessary means for you to perform that act, then you ought to perform that other action as well.
As usual, the author never explains how you get to the original “ought” to begin with. In another article entitled “What If I Cannot Make a Difference (and Know It),” the author begins with a cultural artifact that will surely be of interest to future historians:
We often collectively bring about bad outcomes. For example, by continuing to buy cheap supermarket meat, many people together sustain factory farming, and the greenhouse gas emissions of millions of individuals together bring about anthropogenic climate change.
and goes on to note that,
Intuitively, these bad outcomes are not just a matter of bad luck, but the result of some sort of moral shortcoming. Yet in many of these situations, none of the individual agents could have made any difference for the better.
He then demonstrates that, because a equals b, and b equals c, we are still entirely justified in peering down our morally righteous noses at purchasers of cheap meat and emitters of greenhouse gases. His conclusion in academic-speak:
I have shown how Act Consequentialists can find fault with some agent in all cases where multiple agents who have modally robust knowledge of all the relevant facts gratuitously bring about collectively suboptimal outcomes, even if the agents individually cannot make any difference for the better due to the uncooperativeness of others.
The author does not explain the process by which emotions that evolved in a world without cheap supermarket meat have lately acquired the power to prescribe whether buying it is righteous or not.
It has been suggested by some that trading, the exchange of goods and services, is a defining feature of our species. In an article entitled “Markets without Symbolic Limits,” the authors conclude that,
In many cases, we are morally obligated to revise our semiotics in order to allow for greater commodification. We ought to revise our interpretive schemas whenever the costs of holding that schema are significant, without counterweight benefits. It is itself morally objectionable to maintain a meaning system that imbues a practice with negative meanings when that practice would save or improve lives, reduce or alleviate suffering, and so on.
No doubt that very thought occurred to our hunter-gatherer ancestors, enhancing their overall fitness. The happy result was the preservation of the emotional baggage that gave rise to it to later inform the pages of Ethics magazine.
In short, “moral progress,” as reflected in the pages of Ethics, depends on studiously ignoring Darwin, averting our eyes from the profane scribblings of Westermarck, pretending that the recent flood of books and articles on the evolutionary origins of morality and the existence of analogs of human morality in many animals is irrelevant, and gratuitously assuming that there really is some “thing” out there for the butterfly nets to catch. In other words, our “moral progress” has been a progress away from self-understanding. It saddens me, because I’ve always considered self-understanding a “good.” Just another one of my whims.
Posted on June 21st, 2015
If we are evolved animals, then it is plausible that we have evolved behavioral traits, and among those traits is a “moral sense.” So much was immediately obvious to Darwin himself. To judge by the number of books that have been published about evolved morality in the last couple of decades, it makes sense to a lot of other people, too. The reason such a sense might have evolved is obvious, especially among highly social creatures such as ourselves. The tendency to act in some ways and not in others enhanced the probability that the genes responsible for those tendencies would survive and reproduce. It is not implausible that this moral sense should be strong, and that it should give rise to such powerful impressions that some things are “really good,” and others are “really evil,” as to produce a sense that “good” and “evil” exist independently as objective things. Such a moral sense is demonstrably very effective at modifying our behavior. It hardly follows that good and evil really are independent, objective things.
If an evolved moral sense really is the “root cause” for the existence of all the various and gaudy manifestations of human morality, is it plausible to believe that this moral sense has somehow tracked an “objective morality” that floats around out there independent of any subjective human consciousness? No. If it really is the root cause, is there some objective mechanism whereby the moral impressions of one human being can leap out of that individual’s skull and gain the normative power to dictate to another human being what is “really good” and “really evil?” No. Can there be any objective justification for outrage? No. Can there be any objective basis for virtuous indignation? No. So much is obvious. Under the circumstances it’s amazing, even given the limitations of human reason, that so many of the most intelligent among us just don’t get it. One can only attribute it to the tremendous power of the moral emotions, the great pleasure we get from indulging them, and the dominant role they play in regulating all human interactions.
These facts were recently demonstrated by the interesting behavior of some of the more prominent intellectuals among us in reaction to some comments at a scientific conference. In case you haven’t been following the story, the commenter in question was Tim Hunt, a biochemist who won a Nobel Prize in 2001 with Paul Nurse and Leland H. Hartwell for discoveries of protein molecules that control the division (duplication) of cells. At a luncheon during the World Conference of Science Journalists in Seoul, South Korea, he averred that women are a problem in labs because “You fall in love with them, they fall in love with you, and when you criticize them, they cry.”
Hunt’s comment evoked furious moral emotions, not least among atheist intellectuals. According to PZ Myers, proprietor of Pharyngula, Hunt’s comments revealed that he is “bad.” Some of his posts on the subject may be found here, here, and here. For example, according to Myers,
Oh, no! There might be a “chilling effect” on the ability of coddled, privileged Nobel prize winners to say stupid, demeaning things about half the population of the planet! What will we do without the ability of Tim Hunt to freely accuse women of being emotional hysterics, or without James Watson’s proud ability to call all black people mentally retarded?
I thought Hunt’s plaintive whines were a big bowl of bollocks.
All I can say is…fuck off, dinosaur. We’re better off without you in any position of authority.
We can glean additional data demonstrating the human version of “othering” from the comments on these posts. Members of outgroups, or “others,” are not only “bad,” but also typically impure and disgusting. For example,
Glad I wasn’t the only–or even the first!–to mention that long-enough-to-macramé nose hair. I think I know what’s been going on: The female scientists in his lab are always trying hard to not stare at the bales of hay peeking out of his nostrils and he’s been mistaking their uncomfortable, demure behaviour as ‘falling in love with him’.
However, in creatures with brains large enough to cogitate about what their emotions are trying to tell them, the same suite of moral predispositions can easily give rise to stark differences in moral judgments. Sure enough, others concluded that Myers and those who agreed with him were “bad.” Prominent among them was Richard Dawkins, who wrote in an open letter to the London Times,
Along with many others, I didn’t like Sir Tim Hunt’s joke, but ‘disproportionate’ would be a huge underestimate of the baying witch-hunt that it unleashed among our academic thought police: nothing less than a feeding frenzy of mob-rule self-righteousness.
The moral emotions of other Nobel laureates informed them that Dawkins was right. For example, according to the Telegraph,
Sir Andre Geim, of the University of Manchester, who shared the Nobel prize for physics in 2010, said that Sir Tim had been “crucified” by ideological fanatics, and castigated UCL for “ousting” him.
Avram Hershko, an Israeli scientist who won the 2004 Nobel prize in chemistry, said he thought Sir Tim was “very unfairly treated.” He told the Times: “Maybe he wanted to be funny and was jet lagged, but then the criticism in the social media and in the press was very much out of proportion. So was his prompt dismissal — or resignation — from his post at UCL.”
All these reactions have one thing in common. They are completely irrational unless one assumes the existence of “good” and “bad” as objective things rather than subjective impressions. Or would you have me believe, dear reader, that statements like, “fuck off, dinosaur,” and allusions to crucifixion by “ideological fanatics” engaged in a “baying witch-hunt,” are mere cool, carefully reasoned suggestions about how best to advance the officially certified “good” of promoting greater female participation in the sciences? Nonsense! These people aren’t playing a game of charades, either. Their behavior reveals that they genuinely believe, not only in the existence of “good” and “bad” as objective things, but in their own ability to tell the difference better than those who disagree with them. If they don’t believe it, they certainly act like they do. And yet these are some of the most intelligent representatives of our species. One can but despair, and hope that aliens from another planet don’t turn up anytime soon to witness such ludicrous spectacles.
Clearly, we can’t simply dispense with morality. We’re much too stupid to get along without it. Under the circumstances, it would be nice if we could all agree on what we will consider “good” and what “bad,” within the limits imposed by the innate bedrock of morality in human nature. Unfortunately, human societies are now a great deal different than the ones that existed when the predispositions that are responsible for the existence of morality evolved, and they tend to change very rapidly. It stands to reason that it will occasionally be necessary to “adjust” the types of behavior we consider “good” and “bad” to keep up as best we can. I personally doubt that the current practice of climbing up on rickety soap boxes and shouting down anathemas on anyone who disagrees with us, and then making the “adjustment” according to who shouts the loudest, is really the most effective way to accomplish that end. Among other things, it results in too much collateral damage in the form of shattered careers and ideological polarization. I can’t suggest a perfect alternative at the moment, but a little self-knowledge might help in the search for one. Shedding the illusion of objective morality would be a good start.
Posted on June 12th, 2015
The fact that the various gods that mankind has invented over the years, including the currently popular ones, don’t exist has been sufficiently obvious to any reasonably intelligent pre-adolescent who has taken the trouble to think about it since at least the days of Jean Meslier. That unfortunate French priest left us with a Testament that exposed the folly of belief in imaginary super-beings long before the days of Darwin. It included most of the “modern” arguments, including the dubious logic of inventing gods to explain everything we don’t understand, the many blatant contradictions in the holy scriptures, the absurdity of the notion that an infinitely wise and perfect being could be moved to fury or even offended by the pathetic sins of creatures as abject as ourselves, the lack of any need for a supernatural “grounding” for human morality, and many more. Over the years these arguments have been elaborated and expanded by a host of thinkers, culminating in the work of today’s New Atheists. These include Jerry Coyne, whose Faith versus Fact represents their latest effort to talk some sense into the true believers.
Coyne has the usual human tendency, shared by his religious opponents, of “othering” those who disagree with him. However, besides sharing a “sin” that few if any of us are entirely free of, he has some admirable traits as well. For example, he has rejected the Blank Slate ideology of his graduate school professor/advisor, Richard Lewontin, and even goes so far as to directly contradict him in FvF. In spite of the fact that he is an old “New Leftist” himself, he has taken a principled stand against the recent attempts of the ideological Left to dismantle freedom of speech and otherwise decay to its Stalinist ground state. Perhaps best of all as far as a major theme of this blog is concerned, he rejects the notion of objective morality that has been so counter-intuitively embraced by Sam Harris, another prominent New Atheist.
For the most part, Faith versus Fact is a worthy addition to the New Atheist arsenal. It effectively dismantles the “sophisticated Christian” gambit that has encouraged meek and humble Christians of all stripes to imagine themselves on an infinitely higher intellectual plane than such “undergraduate atheists” as Richard Dawkins and Chris Hitchens. It refutes the rapidly shrinking residue of “God of the gaps” arguments, and clearly illustrates the difference between scientific evidence and religious “evidence.” It destroys the comfortable myth that religion is an “other way of knowing,” and exposes the folly of seeking to accommodate religion within a scientific worldview. It was all the more disappointing, after nodding approvingly through most of the book, to suffer one of those “Oh, No!” moments in the final chapter. Coyne ended by wandering off into an ideological swamp with a fumbling attempt to link obscurantist religion with “global warming denialism!”
As it happens, I am a scientist myself. I am perfectly well aware that when an external source of radiation such as that emanating from the sun passes through an ideal earthlike atmosphere that has been mixed with a dose of greenhouse gases such as carbon dioxide, impinges on an ideal earthlike surface, and is re-radiated back into space, the resulting equilibrium temperature of the atmosphere will be higher than if no greenhouse gases were present. I am also aware that we are rapidly adding such greenhouse gases to our atmosphere, and that it is therefore reasonable to be concerned about the potential effects of global warming. In spite of that, it is not altogether irrational to take a close look at whether all the nostrums proposed as solutions to the problem will actually do any good.
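The equilibrium argument above can be sketched in a few lines. The following zero-dimensional energy-balance model is my own illustration, not anything from the original text; the one-layer “grey atmosphere” and the emissivity value of 0.78 are textbook simplifications chosen only to show the effect.

```python
# Zero-dimensional energy-balance sketch: equilibrium surface temperature
# with and without a single absorbing "grey" greenhouse layer.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2
ALBEDO = 0.3             # planetary albedo (fraction of sunlight reflected)

def surface_temp(eps):
    """Equilibrium surface temperature (K) for a one-layer grey atmosphere.

    eps = 0 recovers the bare-rock equilibrium temperature; eps -> 1
    approaches a perfectly absorbing greenhouse layer.
    """
    absorbed = S0 * (1.0 - ALBEDO) / 4.0            # mean absorbed solar flux
    return (absorbed / SIGMA * 2.0 / (2.0 - eps)) ** 0.25

print(f"No greenhouse layer:    {surface_temp(0.0):.0f} K")   # ~255 K
print(f"eps = 0.78 (rough fit): {surface_temp(0.78):.0f} K")  # ~288 K, near the observed mean
```

Even this toy model reproduces the basic point: the absorbing layer raises the surface temperature by some 30 K. What it cannot do, of course, is predict how a real, turbulent, three-dimensional climate responds.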
In fact, the earth does not have an ideal static atmosphere over an ideal static and uniform surface. Our planet’s climate is affected by a great number of complex, interacting phenomena. A deterministic computer model capable of reliably predicting climate change decades into the future is far beyond the current state of the art. It would need to deal with literally millions of degrees of freedom in three dimensions, in many cases using potentially unreliable or missing data. The codes currently used to address the problem are probabilistic, reduced-basis models that can give significantly different answers depending on the choice of initial conditions.
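That sensitivity to initial conditions is easy to demonstrate with a toy system. The sketch below uses the classic Lorenz-63 equations, a drastically simplified convection model chosen purely as an illustration (it is not one of the climate codes referred to above): two trajectories that start one part in a billion apart end up bearing no resemblance to each other.

```python
# Sensitive dependence on initial conditions in the Lorenz-63 system,
# integrated with a simple forward-Euler step (adequate for illustration).
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)    # perturbed by one part in a billion
for _ in range(5000):          # 50 model time units
    a, b = lorenz_step(a), lorenz_step(b)

# The tiny perturbation grows exponentially until it saturates at the
# size of the attractor itself.
sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"separation after 5000 steps: {sep:.3f}")
```

The tiny initial difference is amplified until the two runs are effectively uncorrelated, which is precisely why such models are run as probabilistic ensembles rather than as single deterministic forecasts.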
In a recently concluded physics campaign at Lawrence Livermore National Laboratory, scientists attempted to achieve thermonuclear fusion ignition by hitting tiny targets containing heavy isotopes of hydrogen with the most powerful laser system ever built. The codes they used to model the process should have been far more accurate than any current model of the earth’s climate. These computer models included all the known relevant physical phenomena, and had been carefully benchmarked against similar experiments carried out on less powerful laser systems. In spite of that, the best experimental results didn’t come close to the computer predictions. The actual number of fusion reactions hardly came within two orders of magnitude of expected values. The number of physical approximations that must be used in climate models is far greater than were necessary in the Livermore fusion codes, and their value as predictive tools must be judged accordingly.
In a word, we have no way of accurately predicting the magnitude of the climate change we will experience in coming decades. If we had unlimited resources, the best policy would obviously be to avoid rocking the only boat we have at the moment. However, this is not an ideal world, and we must wisely allocate what resources we do have among competing priorities. Resources devoted to fighting climate change will not be available for medical research and health care, education, building the infrastructure we need to maintain a healthy economy, and many other worthy purposes that could potentially not only improve human well-being but save many lives. Before we succumb to frantic appeals to “do something,” and spend a huge amount of money to stop global warming, we should at least be reasonably confident that our actions will measurably reduce the danger. To what degree can we expect “science” to inform our decisions, whatever they may be?
For starters, we might look at the track record of the environmental scientists who are now sounding the alarm. The Danish scientist Bjorn Lomborg examined that record in his book, The Skeptical Environmentalist, in areas as diverse as soil erosion, storm frequency, deforestation, and declining energy resources. Time after time he discovered that they had been crying “wolf,” distorting and cherry-picking the data to support dire predictions that never materialized. Lomborg’s book did not start a serious discussion of potential shortcomings of the scientific method as applied in these areas. Instead he was bullied and vilified. A kangaroo court, made up of some of the more abject examples of so-called “scientists” in that country, was organized in Denmark and quickly found Lomborg guilty of “scientific dishonesty,” a verdict which the Danish science ministry later had the decency to overturn. In short, the same methods were used against Lomborg as were used decades earlier to silence critics of the Blank Slate orthodoxy in the behavioral sciences, resulting in what was possibly the greatest scientific debacle of all time. At the very least we can conclude that all the scientific checks and balances that Coyne refers to in such glowing terms in Faith versus Fact have not always functioned with ideal efficiency in promoting the cause of truth. There is reason to believe that the environmental sciences are one area in which this has been particularly true.
Under the circumstances it is regrettable that Coyne chose to equate “global warming denialism,” a pejorative term used in ideological squabbles that is by its very nature unscientific, with some of the worst forms of religious obscurantism. Instead of sticking to the message, in the end he let his political prejudices obscure it. Objections to the prevailing climate change orthodoxy are hardly coming exclusively from the religious fanatics who sought to enlighten us with “creation science,” and “intelligent design.” I invite anyone suffering from that delusion to have a look at some of the articles the physicist and mathematician Lubos Motl has written about the subject on his blog, The Reference Frame. Examples may be found here, here and, for an example with a “religious” twist, here. There he will find documented more instances of the type of “scientific” behavior Lomborg cited in The Skeptical Environmentalist. No doubt many readers will find Motl irritating and tendentious, but he knows his stuff. Anyone who thinks he can refute his take on the “science” had better be equipped with more knowledge of the subject than is typically included in the bromides that appear in the New York Times.
Alas, I fear that I am once again crying over spilt milk. I can only hope that Coyne has an arrow or two left in his New Atheist quiver, and that next time he chooses a publisher who will insist on ruthlessly chopping out all the political Nebensächlichkeiten. Meanwhile, have a look at his Why Evolution is True website. In addition to presenting a convincing case for evolution by natural selection and a universe free of wrathful super beings, Professor Ceiling Cat, as he is known to regular visitors for reasons that will soon become apparent to newbies, also posts some fantastic wildlife pictures. And if it’s any consolation, I see his book has been panned by John Horgan. Anyone with enemies like that can’t be all bad. Apparently Horgan’s review was actually solicited by the editors of the Wall Street Journal. Go figure! One wonders what rock they’ve been sleeping under lately.
Posted on June 6th, 2015
Jerry Coyne just launched another New Atheist salvo against the Defenders of the Faith in the form of his latest book, Faith versus Fact. It’s well written and well reasoned, effectively squashing the “sophisticated Christian” gambit of the faithful, and storming some of their few remaining “God of the gaps” redoubts. However, one of its most striking features is its decisive rejection of the Blank Slate. The New Atheists have learned to stop worrying and love innate morality!
Just like the Blank Slaters of yore, the New Atheists may be found predominantly on the left of the political spectrum. In Prof. Coyne’s case the connection is even more striking. As a graduate student, his professor/advisor was none other than Blank Slate kingpin Richard Lewontin of Not In Our Genes fame! In spite of that, in Faith versus Fact he not only accepts but positively embraces evolutionary psychology in general and innate morality in particular. Why?
It turns out that, along with the origin of life, the existence of consciousness, the “fine tuning” of physical constants, etc., one of the more cherished “gaps” in the “God of the gaps” arguments of the faithful is the existence of innate morality. As with the other “gap” gambits, the claim is that it couldn’t exist unless God created it. As noted in an earlier post, the Christian philosopher Francis Hutcheson used a combination of reason and careful observation of his own species to demonstrate the existence of an innate “moral sense,” building on the earlier work of Anthony Ashley-Cooper and others early in the 18th century. The Blank Slaters would have done well to read his work. Instead, they insisted on the non-existence of human nature, thereby handing over this particular “gap” to the faithful by default. Obviously, Prof. Coyne had second thoughts, and decided to snatch it back. However, he doesn’t quite succeed in breaking entirely with the past. Instead, he insists on elevating “cultural morality” to a co-equal status with innate morality, and demonstrates that he has swallowed Steven Pinker’s fanciful “academic version” of the history of the Blank Slate in the process. Allow me to quote at length some of the relevant passages from his book:
Evolution disproves critical parts of both the Bible and the Quran – the creation stories – yet millions have been unable to abandon them. Finally, and perhaps most important, evolution means that human morality, rather than being imbued in us by God, somehow arose via natural processes: biological evolution involving natural selection on behavior, and cultural evolution involving our ability to calculate, foresee, and prefer the results of different behaviors.
Here we encounter the conflation of biological and cultural evolution, which are described as if they were independent factors accounting for the “rise” of human morality. This tendency to embrace innate explanations while at the same time clinging to the “culture and learning” of the Blank Slate as a distinct, quasi-independent determinant of moral behavior is a recurring theme in FvF. A bit later Coyne seems to return to the Darwinian fold, citing his comments on “well-marked social instincts.”
In his 1871 book The Descent of Man, and Selection in Relation to Sex, where Darwin first applied his theory of evolution by natural selection to humans, he did not neglect morality. In chapter 3, he floats what can be considered the first suggestion that our morality may be an elaboration by our large brains of social instincts evolved in our ancestors: “The following proposition seems to me in a high degree probable – namely, that any animal whatever, endowed with well-marked social instincts, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well developed, or nearly as well developed, as in man.”
This impression is apparently confirmed in the following remarkable passage:
A century later, the biologist Edward O. Wilson angered many by asserting the complete hegemony of biology over ethics: “Scientists and humanists should consider together the possibility that the time has come for ethics to be removed temporarily from the hands of the philosophers and biologicized.” Wilson’s statement, in the pathbreaking book Sociobiology: The New Synthesis, really began the modern incursion of evolution into human behavior that has become the discipline of evolutionary psychology. In the last four decades psychologists, philosophers, and biologists have begun to dissect the cultural and evolutionary roots of morality.
Here we find, almost verbatim, Steven Pinker’s bowdlerized version of the “history” of the Blank Slate, featuring E. O. Wilson as the knight in shining armor who came out of nowhere to “begin the modern incursion of evolution into human behavior,” with the publication of Sociobiology in 1975. Anyone with even a faint familiarity with the source material knows that Pinker’s version is really nothing but a longish fairy tale. The “modern incursion of evolution into human behavior” was already well underway in Europe in 1951, when Niko Tinbergen published his The Study of Instinct. It was continued there through the 50’s and 60’s in the work of Konrad Lorenz, Irenäus Eibl-Eibesfeldt, and many others. Long before the appearance of Sociobiology, Robert Ardrey began the publication of a series of four books on evolved human nature that really set in motion the smashing of the Blank Slate orthodoxy in the behavioral sciences. There is literally nothing of any significance in Sociobiology bearing on the “incursion of evolution into human behavior” or the emergence of what came to be called evolutionary psychology that is not merely an echo of work that had been published by Ardrey, Lorenz, Tinbergen, and others many years earlier. No matter. It would seem that Pinker’s fanciful “history” has now been transmogrified into one of Coyne’s “facts.”
But I digress. As noted above, even as Coyne demolishes morality as one of the “gaps” that must be filled by inventing a God by noting its emergence as an evolved trait, and even as he explicitly embraces evolutionary psychology, which has apparently only recently become “respectable,” he can never quite entirely free himself from the stench of the Blank Slate. Finally, as if frightened by his own temerity, and perhaps feeling the withering gaze of his old professor/advisor Lewontin, Coyne executes a partial retreat from the territory he has just attempted to reconquer:
In The Better Angels of Our Nature, Steven Pinker makes a strong case that since the Middle Ages most societies have become much less brutal, due largely to changes in what’s considered moral. So if morality is innate, it’s certainly malleable. And that itself refutes the argument that human morality comes from God, unless the moral sentiments of the deity are equally malleable. The rapid change in many aspects of morality, even in the last century, also suggests that much of its “innateness” comes not from evolution but from learning. That’s because evolutionary change simply doesn’t occur fast enough to explain societal changes like our realization that women are not an inferior moiety of humanity, or that we shouldn’t torture prisoners. The explanation for these changes must reside in reason and learning: our realization that there is no rational basis for giving ourselves moral privilege over those who belong to other groups.
Here we find the good professor behaving for all the world like one of Niko Tinbergen’s famous sticklebacks who, suddenly realizing he has strayed far over the established boundary of his own territory, rushes back to more familiar haunts. Only one of Lewontin’s “genetic determinists” would be obtuse enough to suggest that the meanderings of 21st century morality are caused by “evolution,” and those are as rare as unicorns. Obviously, no such extraordinarily rapid evolution is necessary. The innate wellsprings of human morality need not “evolve” at all to account for these wanderings, which are adequately accounted for by the fact that they represent the mediation of a relatively static “moral sense” in a rapidly changing environment through the consciousness of creatures with large brains. As brilliantly demonstrated by Hutcheson in his An Essay on the Nature and Conduct of the Passions and Affections, absent this “root cause” in the form of evolved behavioral predispositions, “reason and learning” could chug along for centuries without spitting out anything remotely resembling morality. Innate behavioral predispositions are the basis of all moral behavior, and without them morality as we know it would not exist. The only role of “reason and learning” is in interpreting and mediating the “moral passions.” Absent those passions, there would be literally nothing to be reasoned about or learned that would manifest itself as moral behavior. They, and not “reason and learning,” are the sine qua non for the existence of morality.
But let us refrain from looking this particular gift horse in the mouth. In general, as noted above, the New Atheists may be found more or less in the same region of the ideological spectrum as was once occupied by the Blank Slaters. If they are now constrained to add innate behavior to their arsenal as one more weapon in their continuing battle against the faithful, so much the better for all of us. If nothing else it enhances the chances that, at least for the time being, students of human behavior will be able to continue acquiring the knowledge we need to gain self-understanding without fear of being bullied and intimidated for pointing out facts that happen to be politically inconvenient.
Posted on April 19th, 2015
The evolutionary origins of morality and the reasons for its existence have been obvious for over a century. They were no secret to Edvard Westermarck when he published The Origin and Development of the Moral Ideas in 1906, and many others had written books and papers on the subject before his book appeared. However, our species has a prodigious talent for ignoring inconvenient truths, and we have been studiously ignoring that particular truth ever since.
Why is it inconvenient? Let me count the ways! To begin, the philosophers who have taken it upon themselves to “educate” us about the difference between good and evil would be unemployed if they were forced to admit that those categories are purely subjective, and have no independent existence of their own. All of their carefully cultivated jargon on the subject would be exposed as gibberish. Social Justice Warriors and activists the world over, those whom H. L. Mencken referred to collectively as the “Uplift,” would be exposed as so many charlatans. We would begin to realize that the legions of pious prigs we live with are not only an inconvenience, but absurd as well. Gaining traction would be a great deal more difficult for political and religious cults that derive their raison d’être from the fabrication and bottling of novel moralities. And so on, and so on.
Those who experienced these “inconveniences” in one form or another pointed to the supposed dangers of the truth in Westermarck’s time, just as they do today. For example, from his book,
Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism. If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it. This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong. To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief. The unfortunate circumstance that there do exist dangerous things in the world, proves that something may be dangerous and yet true. Another question is whether any scientific truth really is mischievous on the whole, although it may cause much discomfort to certain people. I venture to believe that this, at any rate, is not the case with that form of ethical subjectivism which I am here advocating.
I venture to believe it as well. In the first place, when we accept the truth about morality we make life a great deal more difficult for people of the type described above. Their exploitation of our ignorance about morality has always been an irritant, but has often been a great deal more damaging than that. In the 20th century alone, for example, the Communist and Nazi movements, whose followers imagined themselves at the forefront of great moral awakenings that would lead to the triumph of Good over Evil, resulted in the needless death of tens of millions of people. The victims were drawn disproportionately from among the most intelligent and productive members of society.
Still, just as Westermarck predicted more than a century ago, the bugaboo of “moral relativism” continues to be “repeated as an argument” in our own day. Apparently we are to believe that if the philosophers and theologians all step out from behind the curtain after all these years and reveal that everything they’ve taught us about morality is so much bunk, civilized society will suddenly dissolve in an orgy of rape and plunder.
Such notions are best left behind with the rest of the impedimenta of the Blank Slate. Nothing could be more absurd than the notion that unbridled license and amorality are our “default” state. One can quickly disabuse oneself of that fear by simply reading the comment thread of any popular news website. There one will typically find a gaudy exhibition of moralistic posing and pious one-upmanship. I encourage those who shudder at the thought of such an unpleasant reading assignment to instead have a look at Jonathan Haidt’s The Righteous Mind. As he puts it in the introduction to his book,
I could have titled this book The Moral Mind to convey the sense that the human mind is designed to “do” morality, just as it’s designed to do language, sexuality, music, and many other things described in popular books reporting the latest scientific findings. But I chose the title The Righteous Mind to convey the sense that human nature is not just intrinsically moral, it’s also intrinsically moralistic, critical and judgmental… I want to show you that an obsession with righteousness (leading inevitably to self-righteousness) is the normal human condition. It is a feature of our evolutionary design, not a bug or error that crept into minds that would otherwise be objective and rational.
Haidt also alludes to a potential reason that some of the people already mentioned above continue to evoke the scary mirage of moral relativism:
Webster’s Third New International Dictionary defines delusion as “a false conception and persistent belief in something that has no existence in fact.” As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history: the rationalist delusion. It’s the idea that reasoning is our most noble attribute, one that makes us like the gods (for Plato) or that brings us beyond the “delusion” of believing in gods (for the New Atheists). The rationalist delusion is not just a claim about human nature. It’s also a claim that the rational caste (philosophers or scientists) should have more power, and it usually comes along with a utopian program for raising more rational children.
Human beings are not by nature moral relativists, and they are in no danger of becoming moral relativists merely because they have finally grasped what morality actually is. It is their nature to perceive Good and Evil as real things, independent of the subjective minds that give rise to them, and they will continue to do so even if their reason informs them that what they perceive is a mirage. They will always tend to behave as if these categories were absolute, rather than relative, even if all the theologians and philosophers among them shout at the top of their lungs that they are not being “rational.”
That does not mean that we should leave reason completely in the dust. Far from it! Now that we can finally understand what morality is, and account for the evolutionary origins of the behavioral predispositions that are its root cause, it is within our power to avoid some of the most destructive manifestations of moral behavior. Our moral behavior is anything but infinitely malleable, but we know from the many variations in the way it is manifested in different human societies and cultures, as well as its continuous and gradual change in any single society, that within limits it can be shaped to best suit our needs. Unfortunately, the only way we will be able to come up with an “optimum” morality is by leaning on the weak reed of our ability to reason.
My personal preferences are obvious enough, even if they aren’t set in stone. I would prefer to limit the scope of morality to those spheres in which it is indispensable for lack of a viable alternative. I would prefer a system that reacts to the “Uplift” and unbridled priggishness and self-righteousness with scorn and contempt. I would prefer an educational system that teaches the young the truth about what morality actually is, and why, in spite of its humble origins, we can’t get along without it if we really want our societies to “flourish.” I know; the legions of those whose whole “purpose of life” depends on cultivating the illusion that their own versions of Good and Evil are the “real” ones stand in the way of the realization of these whims of mine. Still, one can dream.
Posted on March 22nd, 2015 17 comments
Let me put my own cards on the table. I consider the Blank Slate affair the greatest debacle in the history of science. Perhaps you haven’t heard of it. I wouldn’t be surprised. Those who are the most capable of writing its history are often also those who are most motivated to sweep the whole thing under the rug. In any case, in the context of this post the Blank Slate refers to a dogma that prevailed in the behavioral sciences for much of the 20th century according to which there is, for all practical purposes, no such thing as human nature. I consider it the greatest scientific debacle of all time because, for more than half a century, it blocked the path of our species to self-knowledge. As we gradually approach the technological ability to commit collective suicide, self-knowledge may well be critical to our survival.
Such histories of the affair as do exist are often carefully and minutely researched by historians familiar with the scientific issues involved. In general, they’ve personally lived through at least some phase of it, and they’ve often been personally acquainted with some of the most important players. In spite of that, their accounts have a disconcerting tendency to wildly contradict each other. Occasionally one finds different versions of the facts themselves, but more often it’s a question of the careful winnowing of the facts to select and record only those that support a preferred narrative.
Obviously, I can’t cover all the relevant literature in a single blog post. Instead, to illustrate my point, I will focus on a single work whose author, Hamilton Cravens, devotes most of his attention to events in the first half of the 20th century, describing the sea change in the behavioral sciences that signaled the onset of the Blank Slate. As it happens, that’s not quite what he intended. What we see today as the darkness descending was for him the light of science bursting forth. Indeed, his book is entitled, somewhat optimistically in retrospect, The Triumph of Evolution: The Heredity-Environment Controversy, 1900-1941. It first appeared in 1978, more or less still in the heyday of the Blank Slate, although murmurings against it could already be detected among academic and professional experts in the behavioral sciences after the appearance of a series of devastating critiques in the popular literature in the 60’s by Robert Ardrey, Konrad Lorenz, and others, topped off by E. O. Wilson’s Sociobiology in 1975.
Ostensibly, the “triumph” Cravens’ title refers to is the demise of what he calls the “extreme hereditarian” interpretations of human behavior that prevailed in the late 19th and early 20th century in favor of a more “balanced” approach that recognized the importance of culture, as revealed by a systematic application of the scientific method. One certainly can’t fault him for being superficial. He introduces us to most of the key movers and shakers in the behavioral sciences in the period in question. There are minutiae about the contents of papers in old scientific journals, comments gleaned from personal correspondence, who said what at long forgotten scientific conferences, which colleges and universities had strong programs in psychology, sociology and anthropology more than 100 years ago, and who supported them, etc., etc. He guides us into his narrative so gently that we hardly realize we’re being led by the nose. Gradually, however, the picture comes into focus.
It goes something like this. In bygone days before the “triumph of evolution,” the existence of human “instincts” was taken for granted. Their importance seemed even more obvious in light of the rediscovery of Mendel’s work. As Cravens put it,
While it would be inaccurate to say that most American experimentalists concluded as the result of the general acceptance of Mendelism by 1910 or so that heredity was all powerful and environment of no consequence, it was nevertheless true that heredity occupied a much more prominent place than environment in their writings.
This sort of “subtlety” is characteristic of Cravens’ writing. Here, he doesn’t accuse the scientists he’s referring to of being outright genetic determinists. They just have an “undue” tendency to overemphasize heredity. It is only gradually, and by dint of occasional reading between the lines that we learn the “true” nature of these believers in human “instinct.” Without ever seeing anything as blatant as a mention of Marxism, we learn that their “science” was really just a reflection of their “class.” For example,
But there were other reasons why so many American psychologists emphasized heredity over environment. They shared the same general ethnocultural and class background as did the biologists. Like the biologists, they grew up in middle class, white Anglo-Saxon Protestant homes, in a subculture where the individual was the focal point of social explanation and comment.
As we read on, we find Cravens is obsessed with white Anglo-Saxon Protestants, or WASPs, noting scores of times that the “wrong” kind of scientists belong to that “class.” Among other things, they dominate the eugenics movement and are innocently referred to as Social Darwinists, as if that term had never been used in a pejorative sense. In general they are supposed to oppose immigration from other than “Nordic” countries, to support “neo-Lamarckian” doctrines, and to believe blindly that intelligence test results are independent of “social circumstances and milieu.” As we read further into Section I of the book, we are introduced to a whole swarm of these instinct-believing WASPs.
In Section II, however, we begin to see the first glimmerings of a new, critical and truly scientific approach to the question of human instincts. Men like Franz Boas, Robert Lowie, and Alfred Kroeber, began to insist on the importance of culture. Furthermore, they believed that their “culture idea” could be studied in isolation in such disciplines as sociology and anthropology, insisting on sharp, “territorial” boundaries that would protect their favored disciplines from the defiling influence of instincts. As one might expect,
The Boasians were separated from WASP culture; several were immigrants, of Jewish background, or both.
A bit later they were joined by John Watson and his behaviorists who, after performing some experiments on animals and human infants, apparently experienced an epiphany. As Cravens puts it,
To his amazement, Watson concluded that the James-McDougall human instinct theory had no demonstrable experimental basis. He found the instinct theorists had greatly overestimated the number of original emotional reactions in infants. For all practical purposes, he realized that there were no human instincts determining the behavior of adults or even of children.
Perhaps more amazing is the fact that Cravens detected not a hint of a tendency to replace science with dogma in all this. As Leibniz might have put it, everything was for the best, in this, the best of all possible worlds. Everything pointed to the “triumph of evolution.” According to Cravens, the “triumph” came with astonishing speed:
By the early 1920s the controversy was over. Subsequently, psychologists and sociologists joined hands to work out a new interdisciplinary model of the sources of human conduct and emotion stressing the interaction of heredity and environment, of innate and acquired characters – in short, the balance of man’s nature and his culture.
Alas, my dear Cravens, the controversy was just beginning. In what follows, he allows us a glimpse at just what kind of “balance” he’s referring to. As we read on into Section III of the book, he finally gets around to setting the hook:
Within two years of the Nazi collapse in Europe Science published an article symptomatic of a profound theoretical reorientation in the American natural and social sciences. In that article Theodosius Dobzhansky, a geneticist, and M. F. Ashley-Montagu, an anthropologist, summarized and synthesized what the last quarter century’s work in their respective fields implied for extreme hereditarian explanations of human nature and conduct. Their overarching thesis was that man was the product of biological and social evolution. Even though man in his biological aspects was as subject to natural processes as any other species, in certain critical respects he was unique in nature, for the specific system of genes that created an identifiably human mentality also permitted man to experience cultural evolution… Dobzhansky and Ashley-Montagu continued, “Instead of having his responses genetically fixed as in other animal species, man is a species that invents its own responses, and it is out of this unique ability to invent… his responses that his cultures are born.”
and, finally, in the conclusions, after assuring us that,
By the early 1940s the nature-nurture controversy had run its course.
Cravens leaves us with some closing sentences that epitomize his “triumph of evolution:”
The long-range, historical function of the new evolutionary science was to resolve the basic questions about human nature in a secular and scientific way, and thus provide the possibilities for social order and control in an entirely new kind of society. Apparently this was a most successful and enduring campaign in American culture.
At this point, one doesn’t know whether to laugh or cry. Apparently Cravens, who has just supplied us with arcane details about who said what at obscure scientific conferences half a century and more before he published his book, was completely unaware of exactly what Ashley Montagu, his herald of the new world order, meant when he referred to “extreme hereditarian explanations,” in spite of the fact that Montagu had spelled it out ten years earlier in an invaluable little pocket guide for the followers of the “new science” entitled Man and Aggression. There Montagu describes the sort of “balance of man’s nature and his culture” he intended as follows:
Man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings.
There is, in fact, not the slightest evidence or ground for assuming that the alleged “phylogenetically adapted instinctive” behavior of other animals is in any way relevant to the discussion of the motive-forces of human behavior. The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.
So much for Cravens’ “balance.” He spills a great deal of ink in his book assuring us that the Blank Slate orthodoxy he defends was the product of “science,” little influenced by any political or ideological bias. Apparently he also didn’t notice that, not only in Man and Aggression, but ubiquitously in the Blank Slate literature, the “new science” is defended over and over and over again with the “argument” that anyone who opposes it is a racist and a fascist, not to mention far right wing.
As it turns out, Cravens didn’t completely lapse into a coma following the publication of Ashley Montagu’s 1947 pronunciamiento in Science. In his “Conclusion” we discover that, after all, he had a vague presentiment of the avalanche that would soon make a shambles of his “new evolutionary science.” In his words,
Of course in recent years something approximating at least a minor revival of the old nature-nurture controversy seems to have arisen in American science and politics. It is certainly quite possible that this will lead to a full scale nature-nurture controversy in time, not simply because of the potential for a new model of nature that would permit a new debate, but also, as one historian has pointed out, because our own time, like the 1920s, has been a period of racial and ethnic polarization. Obviously any further comment would be premature.
Obviously, my dear Cravens. What’s the moral of the story, dear reader? Well, among other things, that if you really want to learn something about the Blank Slate, you’d better not be shy of wading through the source literature yourself. It’s still out there, waiting to be discovered. One particularly rich source of historical nuggets is H. L. Mencken’s American Mercury, which Ron Unz has been so kind as to post online. Mencken took a personal interest in the “nature vs. nurture” controversy, and took care to publish articles by heavy hitters on both sides. For a rather different take than Cravens on the motivations of the early Blank Slaters, see for example, Heredity and the Uplift, by H. M. Parshley. Parshley was an interesting character who took on no less an opponent than Clarence Darrow in a debate over eugenics, and later translated Simone de Beauvoir’s feminist manifesto The Second Sex into English.
Posted on March 15th, 2015 No comments
Human morality is the manifestation of innate behavioral traits in animals with brains large enough to reason about their own emotional reactions. It exists because those traits evolved. They did not evolve to serve any purpose, but purely because they happened to enhance the probability that individuals carrying them would survive and reproduce. In the absence of those traits morality as we know it would not exist. Darwin certainly suspected as much. Now, more than a century and a half after the publication of On the Origin of Species, so much is really obvious.
Scores of books have been published recently on the innate emotional wellsprings of morality. Its analogs have been clearly identified in other animals. Its expression has been demonstrated in infants, long before they could have learned the responses in question via cultural transmission. Unless all these books are pure gibberish, and all these observations are delusions, morality is ultimately the expression of physical phenomena happening in the brains of individuals. In other words, it is subjective. It does not have an independent existence as a thing-in-itself, outside of the minds of individuals. It follows that it cannot somehow jump out of the skulls of those individuals and gain some kind of an independent, legitimate power to prescribe to other individuals what they should or should not do.
In spite of all that, the faith in objective morality persists, in defiance of the obvious. The truth is too jarring, too uncomfortable, too irreconcilable with what we “feel,” and so we have turned away from it. As the brilliant Edvard Westermarck put it in his The Origin and Development of the Moral Ideas,
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivise the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion.
It follows that, as Westermarck puts it,
The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.
If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.
Westermarck wrote those words in 1906. More than a century later, we are still whistling past the graveyard of objective morality. Interested readers can confirm this by a quick trip to their local university library. Browsing through the pages of Ethics, one of the premier journals devoted to the subject, they will find articles on deontological, consequentialist, and several other abstruse flavors of morality. They will find a host of helpful recipes for what should or should not be done in a given situation. They will discover that it is their “duty” to do this, that, or the other thing. Finally, they will find all of the above ensconced in an almost impenetrable smokescreen of academic jargon. In a word, most of the learned contributors to Ethics have ignored Westermarck, and are still chasing their tails, doggedly pursuing a “scientific ethics” that will “fix rules for human conduct” once and for all.
Challenge one of these learned philosophers, and the response is typically threadbare enough. A common gambit is no more complex than the claim that objective morality must exist, because if it didn’t then the things we all know are bad wouldn’t be bad anymore. An example of the genre recently turned up on the opinion pages of The New York Times, entitled, Why Our Children Don’t Think There Are Moral Facts. Its author, Justin McBrayer, an associate professor of philosophy at Fort Lewis College in Durango, Colorado, opens with the line,
What would you say if you found out that our public schools were teaching children that it is not true that it’s wrong to kill people for fun or cheat on tests? Would you be surprised?
Now, as Westermarck pointed out, it is impossible for things to be “true” if they have no objective existence. Read the article carefully, and you’ll see that McBrayer doesn’t even attempt to dispute the logic behind Westermarck’s observation. Rather, he offers the same answer Socrates’ judges gave as they handed him the hemlock: “I’m right and you’re wrong because what you claim is true is bad for the children.” In other words, there must be an objective bad because otherwise it would be bad. Other than that, the only attempt at an argument in the whole article is the following ad hominem remark about any philosopher who denies the existence of objective morality:
There are historical examples of philosophers who endorse a kind of moral relativism, dating back at least to Protagoras who declared that “man is the measure of all things,” and several who deny that there are any moral facts whatsoever. But such creatures are rare.
In other words, objective morality must be true, because those who deny it are “creatures.” No doubt, such “defenses” of objective morality have been around since time immemorial. They certainly were in Westermarck’s day. His response was as valid then as it is now:
Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism. If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it. This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong. To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief.
Obviously, as Westermarck foresaw, the argument is “still repeated” more than a century later. In McBrayer’s case, it goes like this:
Indeed, in the world beyond grade school, where adults must exercise their moral knowledge and reasoning to conduct themselves in the society, the stakes are greater. There, consistency demands that we acknowledge the existence of moral facts. If it’s not true that it’s wrong to murder a cartoonist with whom one disagrees, then how can we be outraged? If there are no truths about what is good or valuable or right, how can we prosecute people for crimes against humanity? If it’s not true that all humans are created equal, then why vote for any political system that doesn’t benefit you over others?
As a philosopher, I already knew that many college-aged students don’t believe in moral facts. While there are no national surveys quantifying this phenomenon, philosophy professors with whom I have spoken suggest that the overwhelming majority of college freshmen in their classrooms view moral claims as mere opinions that are not true or are true only relative to a culture.
One often hears such remarks about the supposed pervasiveness of moral relativism. They are commonly based on the fallacy that human morality is the product of human reason rather than human emotion. The reality is that Mother Nature has been blithely indifferent to the repeated assertions of philosophers that, unless we listen to them, morality will disappear. She designed morality to work, for better or worse, whether we take the trouble to reason about it or not. All these fears of moral relativism can’t even pass the “ho ho” test. They fly in the face of all the observable facts about moral behavior in the real world. Moral relativism on campus, you say? Please! Not since the heyday of the Puritans has there been such a hotbed of extreme, moralistic piety as exists today in academia. No less a comedian than Chris Rock won’t even perform on college campuses anymore because of repeated encounters with the extreme manifestations of priggishness one finds there. One can’t tell a joke without “offending” someone.
Morality isn’t going anywhere. It will continue to function just as it always has, oblivious to whether it has the permission of philosophers or not. As can be seen by the cultural differences in the way that moral emotions are “acted out,” within certain limits morality is malleable. We have some control over whether it is “acted out” by the immolation of enemy pilots and the beheading and crucifixion of “infidels,” or in forms that promote what Sam Harris might call “human flourishing.” Regardless of our choice, I suspect that our chances of successfully shaping a morality that most of us would find agreeable will be enhanced if we base our actions on what morality actually is rather than on what we want it to be.
Posted on February 28th, 2015 No comments
All appearances to the contrary in the popular media, the Blank Slate lives on. Of course, its heyday is long gone, but it slumbers on in the more obscure niches of academia. One of its more recent manifestations just turned up at Scientia Salon in the form of a paper by one Mark Fedyk, an assistant professor of philosophy at Mount Allison University in Sackville, Canada. Entitled, “How (not) to Bring Psychology and Biology Together,” it provides the interested reader with a glimpse at several of the more typical features of the genre as it exists today.
Fedyk doesn’t leave us in doubt about where he’s coming from. Indeed, he lays his cards on the table in plain sight in the abstract, where he writes that, “psychologists should have a preference for explanations of adaptive behavior in humans that refer to learning and other similarly malleable psychological mechanisms – and not modules or instincts or any other kind of relatively innate and relatively non-malleable psychological mechanisms.” Reading on into the body of the paper a bit, we quickly find another trademark trait of both the ancient and modern Blank Slaters; their tendency to invent strawman arguments, attribute them to their opponents, and then blithely ignore those opponents when they point out that the strawmen bear no resemblance to anything they actually believe.
In Fedyk’s case, many of the strawmen are incorporated in his idiosyncratic definition of the term “modules.” Among other things, these “modules” are “strongly nativist,” they don’t allow for “developmental plasticity,” they imply a strong, either-or version of the ancient nature vs. nurture dichotomy, and they are “relatively innate and relatively non-malleable.” In Fedyk’s paper, the latter phrase serves the same purpose as the ancient “genetic determinism” strawman did in the heyday of the Blank Slate. Apparently that’s now become too obvious, and the new jargon is introduced by way of keeping up appearances. In any case, we gather from the paper that all evolutionary psychologists are supposed to believe in these “modules.” It matters not a bit to Fedyk that his “modules” have been blown out of the water literally hundreds of times in the EP literature stretching back over a period of two decades and more. A good example that patiently dissects each of his strawmen one by one is “Modularity in Cognition: Framing the Debate,” published by Barrett and Kurzban back in 2006. It’s available free online, and I invite my readers to have a look at it. It can be Googled up by anyone in a few seconds, but apparently Fedyk has somehow failed to discover it.
Once he has assured us that all EPers have an unshakable belief in his “modules,” Fedyk proceeds to concoct an amusing fairy tale based on that assumption. In the process, he presents his brilliant and original theory of “anticipated consilience.” According to this theory, researchers in new fields, such as EP, should rely on the findings of more mature “auxiliary disciplines,” particularly those which have been “extremely successful” in the past, to inform their own research. In the case of evolutionary psychology, the “auxiliary discipline” turns out to be evolutionary biology. As Fedyk puts it,
One of the more specific ways of doing this is to rely upon what can be called the principle of anticipated consilience, which says that it is rational to have a prima facie preference for those novel theories commended by previous scientific research which are most likely to be subsequently integrated in explanatorily- or inductively-fruitful ways with the relevant discipline as it expands. The principle will be reliable simply because the novel theories which are most likely to be subsequently integrated into the mature scientific discipline as it expands are just those novel theories which are most likely to be true.
He then proceeds to incorporate his strawmen into an illustration of how this “anticipated consilience” would work in practice:
To see how this would work, consider, for example, two fairly general categories of proximate explanations for adaptive behaviors in humans, nativist (i.e., bad, ed.) psychological hypotheses which posit some kind of module (namely the imaginary kind invented by Fedyk, ed.) and non-nativist (i.e., good, ed.) psychological hypotheses, which posit some kind of learning routine (i.e., the Blank Slate, ed.)
As the tale continues, we learn that,
…it is plausible that, for approximately the first decade of research in evolutionary psychology following its emergence out of sociobiology in the 1980s, considerations of anticipated consilience would have likely rationalized a preference for proximate explanations which refer to modules and similar types of proximate mechanisms.
The reason for this given by Fedyk turns out to be the biggest thigh-slapper in this whole, implausible yarn,
So by the time evolutionary psychology emerged in reaction to human sociobiology in the 1980s, (Konrad) Lorenz’s old hydraulic model of instincts really was the last positive model in biology of the proximate causes of adaptive behavior.
Whimsical? Yes, but stunning is probably a better adjective. If we are to believe Fedyk, we are forced to conclude that he never even heard of the Blank Slate! After all, some of that orthodoxy’s very arch-priests, such as Richard Lewontin and Stephen Jay Gould, are/were evolutionary biologists. They, too, had a “positive model in biology of the proximate causes of adaptive behavior,” in the form of the Blank Slate. Fedyk is speaking of a time in which the Blank Slate dogmas were virtually unchallenged in the behavioral sciences, and anyone who got out of line was shouted down as a fascist, or worse. And yet we are supposed to swallow the ludicrous imposture that Lorenz’s hydraulic theory not only overshadowed the Blank Slate dogmas, but was the only game in town! But let’s not question the plot. Continuing on with Fedyk’s adjusted version of history, we discover that (voila!) the evolutionary biologists suddenly recovered from their infatuation with hydraulic theory, and got their minds right:
…what I want to argue is that, in the last decade or so, a new understanding of the biological importance of developmental plasticity has implications for evolutionary psychology. Whereas previously considerations of anticipated consilience with evolutionary biology and cognitive science may have provided support for those proximate hypotheses which posited modules, I argue in this section that these very same considerations now support significantly non-nativist proximate hypotheses. The argument, put simply, is that traits which have high degrees of plasticity will be more evolutionarily robust than highly canalized innately specified non-malleable traits like mental modules. The upshot is that a mind comprised mostly of modules is not plastic in this specific sense, and is therefore ultimately unlikely to be favoured by natural selection. But a mind equipped with powerful, domain general learning routines does have the relevant plasticity.
I leave it as an exercise for the student to pick out all the innumerable strawmen in this parable of the “great change of heart” in evolutionary biology. Suffice it to say that, as a result of this new-found “plasticity,” anticipated consilience now requires evolutionary psychologists to reject their silly notions about human nature in favor of a return to the sheltering haven of the Blank Slate. Fedyk helpfully spells it out for us:
This means that, given a choice between proximate explanations which reflect a commitment to the massive modularity hypothesis and proximate explanations which, instead, reflect an approach to the mind which privileges learning…, the latter is most plausible in light of evolutionary biology.
The kicker here is that if anyone even mildly suggests any connection between this latter day manifestation of cultural determinism and the dogmas of the Blank Slate, the Fedyks of the world scream foul. Apparently we are to believe that the “proximate explanations” of evolutionary psychology aren’t completely excluded as long as one can manage a double back flip over the rather substantial barrier of “anticipated consilience” that blocks the way. How that might actually turn out to be possible is never explained. In spite of these scowling denials, I personally will continue to prefer the naïve assumption that, if something walks like a duck, quacks like a duck, and flaps its wings like a duck, then it actually is a duck, or Blank Slater, as the case may be.
Posted on December 31st, 2014 3 comments
It’s great to see another title by E. O. Wilson. Reading his books is like continuing a conversation with a wise old friend. If you run into him on the street you don’t expect to hear him say anything radically different from what he’s said in the past. However, you always look forward to chatting with him because he’s never merely repetitious or tiresome. He always has some thought-provoking new insight or acute comment on the latest news. At this stage in his life he also delights in puncturing the prevailing orthodoxies, without the least fear of the inevitable anathemas of the defenders of the faith.
In his latest, The Meaning of Human Existence, he continues the open and unabashed defense of group selection that so rattled his peers in his previous book, The Social Conquest of Earth. I’ve discussed some of the reasons for their unease in an earlier post. In short, if it can really be shown that the role of group selection in human evolution has been as prominent as Wilson claims, it will seriously mar the legacy of such prominent public intellectuals as Richard Dawkins and Steven Pinker, as well as a host of other prominent scientists, who have loudly and tirelessly insisted on the insignificance of group selection. It will also require some serious adjustments to the fanciful yarn that currently passes as the “history” of the Blank Slate affair. Obviously, Wilson is firmly convinced that he’s on to something, because he’s not letting up. He dismisses the alternative inclusive fitness interpretation of evolution as unsupported by the evidence and at odds with the most up-to-date mathematical models. In his words,
Although the controversy between natural selection and inclusive fitness still flickers here and there, the assumptions of the theory of inclusive fitness have proved to be applicable only in a few extreme cases unlikely to occur on Earth or any other planet. No example of inclusive fitness has been directly measured. All that has been accomplished is an indirect analysis called the regressive method, which unfortunately has itself been mathematically invalidated.
Interestingly, while embracing group selection, Wilson then explicitly agrees with one of the most prominent defenders of inclusive fitness, Richard Dawkins, on the significance of the gene:
The use of the individual or group as the unit of heredity, rather than the gene, is an even more fundamental error.
Very clever, that, a preemptive disarming of the predictable invention of straw men to attack group selection via the bogus claim that it implies that groups are the unit of selection. The theory of group selection already has a fascinating, not to mention ironical, history, and its future promises to be no less entertaining.
When it comes to the title of the book, Wilson himself lets us know early on that it’s just a forgivable form of “poetic license.” In his words,
In ordinary usage the word “meaning” implies intention. Intention implies design, and design implies a designer. Any entity, any process, or definition of any word itself is put into play as a result of an intended consequence in the mind of the designer. This is the heart of the philosophical worldview of organized religions, and in particular their creation stories. Humanity, it assumes, exists for a purpose. Individuals have a purpose in being on Earth. Both humanity and individuals have meaning.
Wilson is right when he says that this is what most people understand by the term “meaning,” and later in the book he decidedly rejects the notion that such “meaning” is even possible, dismissing religious belief more bluntly than in any of his previous books. He provides himself with a fig leaf in the form of a redefinition of “meaning” as follows:
There is a second, broader way the word “meaning” is used, and a very different worldview implied. It is that the accidents of history, not the intentions of a designer, are the source of meaning.
I rather suspect most philosophers will find this redefinition unpalatable. Beyond that, I won’t begrudge Wilson his fig leaf. After all, if one takes the trouble to write books, one generally also has an interest in selling them.
As noted above, another significant difference between this and Wilson’s earlier books is his decisive support for what one might call the “New Atheist” line, as set forth in books by the likes of Richard Dawkins, Sam Harris, and Christopher Hitchens. Obviously, Wilson has been carefully following the progress of the debate. He rejects religions, significantly in both their secular and their traditional spiritual manifestations, as false and dangerous, mainly because of their inevitable association with tribalism. In his words,
Religious warriors are not an anomaly. It is a mistake to classify believers of particular religious and dogmatic religionlike ideologies into two groups, moderate versus extremist. The true cause of hatred and violence is faith versus faith, an outward expression of the ancient instinct of tribalism. Faith is the one thing that makes otherwise good people do bad things.
and, embracing the ingroup/outgroup dichotomy in human moral behavior I’ve often alluded to on this blog,
The great religions… are impediments to the grasp of reality needed to solve most social problems in the real world. Their exquisitely human flaw is tribalism. The instinctual force of tribalism in the genesis of religiosity is far stronger than the yearning for spirituality. People deeply need membership in a group, whether religious or secular. From a lifetime of emotional experience, they know that happiness, and indeed survival itself, require that they bond with others who share some amount of genetic kinship, language, moral beliefs, geographical location, social purpose, and dress code – preferably all of these but at least two or three for most purposes. It is tribalism, not the moral tenets and humanitarian thought of pure religion, that makes good people do bad things.
Finally, in a passage worthy of New Atheist Jerry Coyne himself, Wilson denounces both “accommodationists” and the obscurantist teachings of the “sophisticated Christians:”
Most serious writers on religion conflate the transcendent quest for meaning with the tribalistic defense of creation myths. They accept, or fear to deny, the existence of a personal deity. They read into the creation myths humanity’s effort to communicate with the deity, as part of the search for an uncorrupted life now and beyond death. Intellectual compromisers one and all, they include liberal theologians of the Niebuhr school, philosophers battening on learned ambiguity, literary admirers of C. S. Lewis, and others persuaded, after deep thought, that there must be Something Out There. They tend to be unconscious of prehistory and the biological evolution of human instinct, both of which beg to shed light on this very important subject.
In a word, Wilson has now positioned himself firmly in the New Atheist camp. This is hardly likely to mollify many of the prominent New Atheists, who will remain bitter because of his promotion of group selection, but at this point in his career, Wilson can take their hostility cum grano salis.
There is much more of interest in The Meaning of Human Existence than I can cover in a blog post, such as Wilson’s rather vague reasons for insisting on the importance of the humanities in solving our problems, his rejection of interplanetary and/or interstellar colonization, and his speculations on the nature of alien life forms. I can only suggest that interested readers buy the book.
Posted on November 19th, 2014 No comments
An article entitled “The Evolution of War – A User’s Guide” recently turned up at “This View of Life,” a website hosted by David Sloan Wilson. Written by Anthony Lopez, it is one of the more interesting artifacts of the ongoing “correction” of the history of the debate over human nature I’ve seen in a while. What makes it so remarkable is this: Wilson himself is one of the foremost proponents of the theory of group selection, and Lopez claims in his article that one of the four “major theoretical positions” in the debate over the evolution of war is occupied by the “group selectionists.” Yet he conforms to the prevailing academic conceit of studiously ignoring the role of Robert Ardrey, who was not only the most influential player in the “origins of war” debate, but overwhelmingly so in the whole “Blank Slate” affair as well. Why should that be so remarkable? Because at the moment the academics’ main rationalization for pretending they never heard of a man named Ardrey is (you guessed it) his support for group selection!
When it comes to the significance of Ardrey, you don’t have to take my word for it. His was the most influential voice in a growing chorus that finally smashed the Blank Slate orthodoxy. The historical source material is all still there for anyone who cares to trouble themselves to check it. One invaluable piece thereof is “Man and Aggression,” a collection of essays edited by arch-Blank Slater Ashley Montagu and aimed mainly at Ardrey, with occasional swipes at Konrad Lorenz, and with William Golding, author of “Lord of the Flies,” thrown in for comic effect. The last I looked you could still pick it up for a penny at Amazon. For example, from one of the essays by psychologist Geoffrey Gorer,
Almost without question, Robert Ardrey is today the most influential writer in English dealing with the innate or instinctive attributes of human nature, and the most skilled populariser of the findings of paleo-anthropologists, ethologists, and biological experimenters… He is a skilled writer, with a lively command of English prose, a pretty turn of wit, and a dramatist’s skill in exposition; he is also a good reporter, with the reporter’s eye for the significant detail, the striking visual impression. He has taken a look at nearly all the current work in Africa of paleo-anthropologists and ethologists; time and again, a couple of his paragraphs can make vivid a site, such as the Olduvai Gorge, which has been merely a name in a hundred articles.
In case you’ve been asleep for the last half a century, the Blank Slate affair was probably the greatest debacle in the history of science. The travails of Galileo and the antics of Lysenko are child’s play in comparison. For decades, whole legions of “men of science” in the behavioral sciences pretended to believe there was no such thing as human nature. As was obvious to any ten year old, that position was not only not “science,” it was absurd on the face of it. However, it was required as a prop for a false political ideology, and so it stood for half a century and more. Anyone who challenged it was quickly slapped down as a “fascist,” a “racist,” or a denizen of the “extreme right wing.” Then Ardrey appeared on the scene. He came from the left of the ideological spectrum himself, but also happened to be an honest man. The main theme of all his work in general, and the four popular books he wrote between 1961 and 1976 in particular, was that there is such a thing as human nature, and that it is important. He insisted on that point in spite of a storm of abuse from the Blank Slate zealots. On that point, on that key theme, he has been triumphantly vindicated. Almost all the “men of science,” in psychology, sociology, and anthropology were wrong, and he was right.
Alas, the “men of science” could not bear the shame. After all, Ardrey was not one of them. Indeed, he was a mere playwright! How could men like Shakespeare, Ibsen, and Moliere possibly know anything about human nature? Somehow, they had to find an excuse for dropping Ardrey down the memory hole, and find one they did! There were actually more than one, but the main one was group selection. Writing in “The Selfish Gene” back in 1976, Richard Dawkins claimed that Ardrey, Lorenz, and Irenäus Eibl-Eibesfeldt were “totally and utterly wrong,” not because they insisted there was such a thing as human nature, but because of their support for group selection! Fast forward to 2002, and Steven Pinker managed the absurd feat of writing a whole tome about the Blank Slate that only mentioned Ardrey in a single paragraph, and then only to assert that he had been “totally and utterly wrong,” period, on Richard Dawkins’ authority, and with no mention of group selection as the reason. That has been the default position of the “men of science” ever since.
Which brings us back to Lopez’ paper. He informs us that one of the “four positions” in the debate over the evolution of war is “The Killer Ape Hypothesis.” In fact, there never was a “Killer Ape Hypothesis” as described by Lopez. It was a strawman, pure and simple, concocted by Ardrey’s enemies. Note that, in spite of alluding to this imaginary “hypothesis,” Lopez can’t bring himself to mention Ardrey. Indeed, so effective has been the “adjustment” of history that, depending on his age, it’s quite possible that he’s never even heard of him. Instead, Konrad Lorenz is dragged in as an unlikely surrogate, even though he never came close to supporting anything even remotely resembling the “Killer Ape Hypothesis.” His main work relevant to the origins of war was “On Aggression,” and he hardly mentioned apes in it at all, focusing instead mainly on the behavior of fish, birds and rats.
And what of Ardrey? As it happens, he did write a great deal about our ape-like ancestors. For example, he claimed that Raymond Dart had presented convincing statistical evidence that one of them, Australopithecus africanus, had used weapons and hunted. That statistical evidence has never been challenged, and continues to be ignored by the “men of science” to this day. Without bothering to even mention it, C. K. Brain presented an alternative hypothesis that the only acts of “aggression” in the caves explored by Dart had been perpetrated by leopards. In recent years, as the absurdities of his hypothesis have been gradually exposed, Brain has been in serious row back mode, and Dart has been vindicated to the point that he is now celebrated as the “father of cave taphonomy.”
Ardrey also claimed that our apelike ancestors had hunted, most notably in his last book, “The Hunting Hypothesis.” When Jane Goodall published her observation of chimpanzees hunting, she was furiously vilified by the Blank Slaters. She, too, has been vindicated. Eventually, even PBS aired a program about hunting behavior in early hominids, and, miraculously, just this year even the impeccably politically correct “Scientific American” published an article confirming the same in the April edition! In a word, we have seen the vindication of these two main hypotheses of Ardrey concerning the behavior of our apelike and hominid ancestors. Furthermore, as I have demonstrated with many quotes from his work in previous posts, he was anything but a “genetic determinist,” and, while he strongly supported the view that innate predispositions, or “human nature,” if you will, have played a significant role in the genesis of human warfare, he clearly did not believe that it was unavoidable or inevitable. In fact, that belief is one of the main reasons he wrote his books. In spite of that, the “Killer Ape” zombie marches on, and turns up as one of the “four positions” that are supposed to “illuminate” the debate over the origins of war, while another of the “positions” is supposedly occupied by, of all things, “group selectionists!” History is nothing if not ironical.
Lopez’ other two “positions” include “The Strategic Ape Hypothesis,” and “The Inventionists.” I leave the value of these remaining “positions” to those who want to “examine the layout of this academic ‘battlefield’”, as he puts it, to the imagination of my readers. Other than that, I can only suggest that those interested in learning the truth, as opposed to the prevailing academic narrative, concerning the Blank Slate debacle would do better to look at the abundant historical source material themselves than to let someone else “interpret” it for them.