Posted on March 22nd, 2015
Let me put my own cards on the table. I consider the Blank Slate affair the greatest debacle in the history of science. Perhaps you haven’t heard of it. I wouldn’t be surprised. Those who are the most capable of writing its history are often also those who are most motivated to sweep the whole thing under the rug. In any case, in the context of this post the Blank Slate refers to a dogma that prevailed in the behavioral sciences for much of the 20th century according to which there is, for all practical purposes, no such thing as human nature. I consider it the greatest scientific debacle of all time because, for more than half a century, it blocked the path of our species to self-knowledge. As we gradually approach the technological ability to commit collective suicide, self-knowledge may well be critical to our survival.
Such histories of the affair as do exist are often carefully and minutely researched by historians familiar with the scientific issues involved. In general, they’ve personally lived through at least some phase of it, and they’ve often been personally acquainted with some of the most important players. In spite of that, their accounts have a disconcerting tendency to wildly contradict each other. Occasionally one finds different versions of the facts themselves, but more often it’s a question of the careful winnowing of the facts to select and record only those that support a preferred narrative.
Obviously, I can’t cover all the relevant literature in a single blog post. Instead, to illustrate my point, I will focus on a single work whose author, Hamilton Cravens, devotes most of his attention to events in the first half of the 20th century, describing the sea change in the behavioral sciences that signaled the onset of the Blank Slate. As it happens, that’s not quite what he intended. What we see today as the darkness descending was for him the light of science bursting forth. Indeed, his book is entitled, somewhat optimistically in retrospect, The Triumph of Evolution: The Heredity-Environment Controversy, 1900-1941. It first appeared in 1978, more or less still in the heyday of the Blank Slate, although murmurings against it could already be detected among academic and professional experts in the behavioral sciences after the appearance of a series of devastating critiques in the popular literature in the 60’s by Robert Ardrey, Konrad Lorenz, and others, topped off by E. O. Wilson’s Sociobiology in 1975.
Ostensibly, the “triumph” Cravens’ title refers to is the demise of what he calls the “extreme hereditarian” interpretations of human behavior that prevailed in the late 19th and early 20th century in favor of a more “balanced” approach that recognized the importance of culture, as revealed by a systematic application of the scientific method. One certainly can’t fault him for being superficial. He introduces us to most of the key movers and shakers in the behavioral sciences in the period in question. There are minutiae about the contents of papers in old scientific journals, comments gleaned from personal correspondence, who said what at long forgotten scientific conferences, which colleges and universities had strong programs in psychology, sociology and anthropology more than 100 years ago, and who supported them, etc., etc. He guides us into his narrative so gently that we hardly realize we’re being led by the nose. Gradually, however, the picture comes into focus.
It goes something like this. In bygone days before the “triumph of evolution,” the existence of human “instincts” was taken for granted. Their importance seemed even more obvious in light of the rediscovery of Mendel’s work. As Cravens put it,
While it would be inaccurate to say that most American experimentalists concluded as the result of the general acceptance of Mendelism by 1910 or so that heredity was all powerful and environment of no consequence, it was nevertheless true that heredity occupied a much more prominent place than environment in their writings.
This sort of “subtlety” is characteristic of Cravens’ writing. Here, he doesn’t accuse the scientists he’s referring to of being outright genetic determinists. They just have an “undue” tendency to overemphasize heredity. It is only gradually, and by dint of occasional reading between the lines that we learn the “true” nature of these believers in human “instinct.” Without ever seeing anything as blatant as a mention of Marxism, we learn that their “science” was really just a reflection of their “class.” For example,
But there were other reasons why so many American psychologists emphasized heredity over environment. They shared the same general ethnocultural and class background as did the biologists. Like the biologists, they grew up in middle class, white Anglo-Saxon Protestant homes, in a subculture where the individual was the focal point of social explanation and comment.
As we read on, we find Cravens is obsessed with white Anglo-Saxon Protestants, or WASPs, noting scores of times that the “wrong” kind of scientists belong to that “class.” Among other things, they dominate the eugenics movement, and are innocently referred to as Social Darwinists, as if these terms had never been used in a pejorative sense. In general they are supposed to oppose immigration from other than “Nordic” countries, and tend to support “neo-Lamarckian” doctrines, and believe blindly that intelligence test results are independent of “social circumstances and milieu.” As we read further into Section I of the book, we are introduced to a whole swarm of these instinct-believing WASPs.
In Section II, however, we begin to see the first glimmerings of a new, critical and truly scientific approach to the question of human instincts. Men like Franz Boas, Robert Lowie, and Alfred Kroeber began to insist on the importance of culture. Furthermore, they believed that their “culture idea” could be studied in isolation in such disciplines as sociology and anthropology, insisting on sharp, “territorial” boundaries that would protect their favored disciplines from the defiling influence of instincts. As one might expect,
The Boasians were separated from WASP culture; several were immigrants, of Jewish background, or both.
A bit later they were joined by John Watson and his behaviorists who, after performing some experiments on animals and human infants, apparently experienced an epiphany. As Cravens puts it,
To his amazement, Watson concluded that the James-McDougall human instinct theory had no demonstrable experimental basis. He found the instinct theorists had greatly overestimated the number of original emotional reactions in infants. For all practical purposes, he realized that there were no human instincts determining the behavior of adults or even of children.
Perhaps more amazing is the fact that Cravens suspected not a hint of a tendency to replace science with dogma in all this. As Leibniz might have put it, everything was for the best, in this, the best of all possible worlds. Everything pointed to the “triumph of evolution.” According to Cravens, the “triumph” came with astonishing speed:
By the early 1920s the controversy was over. Subsequently, psychologists and sociologists joined hands to work out a new interdisciplinary model of the sources of human conduct and emotion stressing the interaction of heredity and environment, of innate and acquired characters – in short, the balance of man’s nature and his culture.
Alas, my dear Cravens, the controversy was just beginning. In what follows, he allows us a glimpse at just what kind of “balance” he’s referring to. As we read on into Section 3 of the book, he finally gets around to setting the hook:
Within two years of the Nazi collapse in Europe Science published an article symptomatic of a profound theoretical reorientation in the American natural and social sciences. In that article Theodosius Dobzhansky, a geneticist, and M. F. Ashley-Montagu, an anthropologist, summarized and synthesized what the last quarter century’s work in their respective fields implied for extreme hereditarian explanations of human nature and conduct. Their overarching thesis was that man was the product of biological and social evolution. Even though man in his biological aspects was as subject to natural processes as any other species, in certain critical respects he was unique in nature, for the specific system of genes that created an identifiably human mentality also permitted man to experience cultural evolution… Dobzhansky and Ashley-Montagu continued, “Instead of having his responses genetically fixed as in other animal species, man is a species that invents its own responses, and it is out of this unique ability to invent… his responses that his cultures are born.”
and, finally, in the conclusions, after assuring us that,
By the early 1940s the nature-nurture controversy had run its course.
Cravens leaves us with some closing sentences that epitomize his “triumph of evolution:”
The long-range, historical function of the new evolutionary science was to resolve the basic questions about human nature in a secular and scientific way, and thus provide the possibilities for social order and control in an entirely new kind of society. Apparently this was a most successful and enduring campaign in American culture.
At this point, one doesn’t know whether to laugh or cry. Apparently Cravens, who has just supplied us with arcane details about who said what at obscure scientific conferences half a century and more before he published his book, was completely unaware of exactly what Ashley Montagu, his herald of the new world order, meant when he referred to “extreme hereditarian explanations,” in spite of the fact that he spelled it out ten years earlier in an invaluable little pocket guide for the followers of the “new science” entitled Man and Aggression. There Montagu describes the sort of “balance of man’s nature and his culture” he intended as follows:
Man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings.
There is, in fact, not the slightest evidence or ground for assuming that the alleged “phylogenetically adapted instinctive” behavior of other animals is in any way relevant to the discussion of the motive-forces of human behavior. The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.
So much for Cravens’ “balance.” He spills a great deal of ink in his book assuring us that the Blank Slate orthodoxy he defends was the product of “science,” little influenced by any political or ideological bias. Apparently he also didn’t notice that, not only in Man and Aggression, but ubiquitously in the Blank Slate literature, the “new science” is defended over and over and over again with the “argument” that anyone who opposes it is a racist and a fascist, not to mention far right wing.
As it turns out, Cravens didn’t completely lapse into a coma following the publication of Ashley Montagu’s 1947 pronunciamiento in Science. In his “Conclusion” we discover that, after all, he had a vague presentiment of the avalanche that would soon make a shambles of his “new evolutionary science.” In his words,
Of course in recent years something approximating at least a minor revival of the old nature-nurture controversy seems to have arisen in American science and politics. It is certainly quite possible that this will lead to a full scale nature-nurture controversy in time, not simply because of the potential for a new model of nature that would permit a new debate, but also, as one historian has pointed out, because our own time, like the 1920s, has been a period of racial and ethnic polarization. Obviously any further comment would be premature.
Obviously, my dear Cravens. What’s the moral of the story, dear reader? Well, among other things, that if you really want to learn something about the Blank Slate, you’d better not be shy of wading through the source literature yourself. It’s still out there, waiting to be discovered. One particularly rich source of historical nuggets is H. L. Mencken’s American Mercury, which Ron Unz has been so kind as to post online. Mencken took a personal interest in the “nature vs. nurture” controversy, and took care to publish articles by heavy hitters on both sides. For a rather different take than Cravens on the motivations of the early Blank Slaters, see for example, Heredity and the Uplift, by H. M. Parshley. Parshley was an interesting character who took on no less an opponent than Clarence Darrow in a debate over eugenics, and later translated Simone de Beauvoir’s feminist manifesto The Second Sex into English.
Posted on March 15th, 2015
Human morality is the manifestation of innate behavioral traits in animals with brains large enough to reason about their own emotional reactions. It exists because those traits evolved. They did not evolve to serve any purpose, but purely because they happened to enhance the probability that individuals carrying them would survive and reproduce. In the absence of those traits morality as we know it would not exist. Darwin certainly suspected as much. Now, more than a century and a half after the publication of On the Origin of Species, so much is really obvious.
Scores of books have been published recently on the innate emotional wellsprings of morality. Its analogs have been clearly identified in other animals. Its expression has been demonstrated in infants, long before they could have learned the responses in question via cultural transmission. Unless all these books are pure gibberish, and all these observations are delusions, morality is ultimately the expression of physical phenomena happening in the brains of individuals. In other words, it is subjective. It does not have an independent existence as a thing-in-itself, outside of the minds of individuals. It follows that it cannot somehow jump out of the skulls of those individuals and gain some kind of an independent, legitimate power to prescribe to other individuals what they should or should not do.
In spite of all that, the faith in objective morality persists, in defiance of the obvious. The truth is too jarring, too uncomfortable, too irreconcilable with what we “feel,” and so we have turned away from it. As the brilliant Edvard Westermarck put it in his The Origin and Development of the Moral Ideas,
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivise the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion.
It follows that, as Westermarck puts it,
The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.
If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.
Westermarck wrote those words in 1906. More than a century later, we are still whistling past the graveyard of objective morality. Interested readers can confirm this by a quick trip to their local university library. Browsing through the pages of Ethics, one of the premier journals devoted to the subject, they will find articles on deontological, consequentialist, and several other abstruse flavors of morality. They will find a host of helpful recipes for what should or should not be done in a given situation. They will discover that it is their “duty” to do this, that, or the other thing. Finally, they will find all of the above ensconced in an almost impenetrable smokescreen of academic jargon. In a word, most of the learned contributors to Ethics have ignored Westermarck, and are still chasing their tails, doggedly pursuing a “scientific ethics” that will “fix rules for human conduct” once and for all.
Challenge one of these learned philosophers, and their response is typically threadbare enough. A common gambit is no more complex than the claim that objective morality must exist, because if it didn’t then the things we all know are bad wouldn’t be bad anymore. An example of the genre recently turned up on the opinion pages of The New York Times, entitled, Why Our Children Don’t Think There Are Moral Facts. Its author, Justin McBrayer, an associate professor of philosophy at Fort Lewis College in Durango, Colorado, opens with the line,
What would you say if you found out that our public schools were teaching children that it is not true that it’s wrong to kill people for fun or cheat on tests? Would you be surprised?
Now, as Westermarck pointed out, it is impossible for things to be “true” if they have no objective existence. Read the article carefully, and you’ll see that McBrayer doesn’t even attempt to dispute the logic behind Westermarck’s observation. Rather, he gives Westermarck the same answer Socrates’ judges gave Socrates as they handed him the hemlock: “I’m right and you’re wrong because what you claim is true is bad for the children.” In other words, there must be an objective bad because otherwise it would be bad. Other than that, the only attempt at an argument in the whole article is the following ad hominem remark about any philosopher who denies the existence of objective morality:
There are historical examples of philosophers who endorse a kind of moral relativism, dating back at least to Protagoras who declared that “man is the measure of all things,” and several who deny that there are any moral facts whatsoever. But such creatures are rare.
In other words, objective morality must be true, because those who deny it are “creatures.” No doubt, such “defenses” of objective morality have been around since time immemorial. They certainly were in Westermarck’s day. His response was as valid then as it is now:
Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism. If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it. This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong. To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief.
Obviously, as Westermarck foresaw, the argument is “still repeated” more than a century later. In McBrayer’s case, it goes like this:
Indeed, in the world beyond grade school, where adults must exercise their moral knowledge and reasoning to conduct themselves in the society, the stakes are greater. There, consistency demands that we acknowledge the existence of moral facts. If it’s not true that it’s wrong to murder a cartoonist with whom one disagrees, then how can we be outraged? If there are no truths about what is good or valuable or right, how can we prosecute people for crimes against humanity? If it’s not true that all humans are created equal, then why vote for any political system that doesn’t benefit you over others?
As a philosopher, I already knew that many college-aged students don’t believe in moral facts. While there are no national surveys quantifying this phenomenon, philosophy professors with whom I have spoken suggest that the overwhelming majority of college freshmen in their classrooms view moral claims as mere opinions that are not true or are true only relative to a culture.
One often hears such remarks about the supposed pervasiveness of moral relativism. They are commonly based on the fallacy that human morality is the product of human reason rather than human emotion. The reality is that Mother Nature has been blithely indifferent to the repeated assertions of philosophers that, unless we listen to them, morality will disappear. She designed morality to work, for better or worse, whether we take the trouble to reason about it or not. All these fears of moral relativism can’t even pass the “ho ho” test. They fly in the face of all the observable facts about moral behavior in the real world. Moral relativism on campus, you say? Please! Not since the heyday of the Puritans has there been such a hotbed of extreme, moralistic piety as exists today in academia. No less a comedian than Chris Rock won’t even perform on college campuses anymore because of repeated encounters with the extreme manifestations of priggishness one finds there. One can’t tell a joke without “offending” someone.
Morality isn’t going anywhere. It will continue to function just as it always has, oblivious to whether it has the permission of philosophers or not. As can be seen by the cultural differences in the way that moral emotions are “acted out,” within certain limits morality is malleable. We have some control over whether it is “acted out” by the immolation of enemy pilots and the beheading and crucifixion of “infidels,” or in forms that promote what Sam Harris might call “human flourishing.” Regardless of our choice, I suspect that our chances of successfully shaping a morality that most of us would find agreeable will be enhanced if we base our actions on what morality actually is rather than on what we want it to be.
Posted on February 28th, 2015
All appearances to the contrary in the popular media, the Blank Slate lives on. Of course, its heyday is long gone, but it slumbers on in the more obscure niches of academia. One of its more recent manifestations just turned up at Scientia Salon in the form of a paper by one Mark Fedyk, an assistant professor of philosophy at Mount Allison University in Sackville, Canada. Entitled, “How (not) to Bring Psychology and Biology Together,” it provides the interested reader with a glimpse at several of the more typical features of the genre as it exists today.
Fedyk doesn’t leave us in doubt about where he’s coming from. Indeed, he lays his cards on the table in plain sight in the abstract, where he writes that, “psychologists should have a preference for explanations of adaptive behavior in humans that refer to learning and other similarly malleable psychological mechanisms – and not modules or instincts or any other kind of relatively innate and relatively non-malleable psychological mechanisms.” Reading on into the body of the paper a bit, we quickly find another trademark trait of both the ancient and modern Blank Slaters; their tendency to invent strawman arguments, attribute them to their opponents, and then blithely ignore those opponents when they point out that the strawmen bear no resemblance to anything they actually believe.
In Fedyk’s case, many of the strawmen are incorporated in his idiosyncratic definition of the term “modules.” Among other things, these “modules” are “strongly nativist,” they don’t allow for “developmental plasticity,” they imply a strong, either-or version of the ancient nature vs. nurture dichotomy, and they are “relatively innate and relatively non-malleable.” In Fedyk’s paper, the latter phrase serves the same purpose as the ancient “genetic determinism” strawman did in the heyday of the Blank Slate. Apparently that’s now become too obvious, and the new jargon is introduced by way of keeping up appearances. In any case, we gather from the paper that all evolutionary psychologists are supposed to believe in these “modules.” It matters not a bit to Fedyk that his “modules” have been blown out of the water literally hundreds of times in the EP literature stretching back over a period of two decades and more. A good example that patiently dissects each of his strawmen one by one is “Modularity in Cognition: Framing the Debate,” published by Barrett and Kurzban back in 2006. It’s available free online, and I invite my readers to have a look at it. It can be Googled up by anyone in a few seconds, but apparently Fedyk has somehow failed to discover it.
Once he has assured us that all EPers have an unshakable belief in his “modules,” Fedyk proceeds to concoct an amusing fairy tale based on that assumption. In the process, he presents his brilliant and original theory of “anticipated consilience.” According to this theory, researchers in new fields, such as EP, should rely on the findings of more mature “auxiliary disciplines,” particularly those which have been “extremely successful” in the past, to inform their own research. In the case of evolutionary psychology, the “auxiliary discipline” turns out to be evolutionary biology. As Fedyk puts it,
One of the more specific ways of doing this is to rely upon what can be called the principle of anticipated consilience, which says that it is rational to have a prima facie preference for those novel theories commended by previous scientific research which are most likely to be subsequently integrated in explanatorily- or inductively-fruitful ways with the relevant discipline as it expands. The principle will be reliable simply because the novel theories which are most likely to be subsequently integrated into the mature scientific discipline as it expands are just those novel theories which are most likely to be true.
He then proceeds to incorporate his strawmen into an illustration of how this “anticipated consilience” would work in practice:
To see how this would work, consider, for example, two fairly general categories of proximate explanations for adaptive behaviors in humans, nativist (i.e., bad, ed.) psychological hypotheses which posit some kind of module (namely the imaginary kind invented by Fedyk, ed.) and non-nativist (i.e., good, ed.) psychological hypotheses, which posit some kind of learning routine (i.e., the Blank Slate, ed.)
As the tale continues, we learn that,
…it is plausible that, for approximately the first decade of research in evolutionary psychology following its emergence out of sociobiology in the 1980s, considerations of anticipated consilience would have likely rationalized a preference for proximate explanations which refer to modules and similar types of proximate mechanisms.
The reason for this given by Fedyk turns out to be the biggest thigh-slapper in this whole, implausible yarn,
So by the time evolutionary psychology emerged in reaction to human sociobiology in the 1980s, (Konrad) Lorenz’s old hydraulic model of instincts really was the last positive model in biology of the proximate causes of adaptive behavior.
Whimsical? Yes, but stunning is probably a better adjective. If we are to believe Fedyk, we are forced to conclude that he never even heard of the Blank Slate! After all, some of that orthodoxy’s very arch-priests, such as Richard Lewontin and Stephen Jay Gould are/were evolutionary biologists. They, too, had a “positive model in biology of the proximate causes of adaptive behavior,” in the form of the Blank Slate. Fedyk is speaking of a time in which the Blank Slate dogmas were virtually unchallenged in the behavioral sciences, and anyone who got out of line was shouted down as a fascist, or worse. And yet we are supposed to swallow the ludicrous imposture that Lorenz’ hydraulic theory not only overshadowed the Blank Slate dogmas, but was the only game in town! But let’s not question the plot. Continuing on with Fedyk’s adjusted version of history, we discover that (voila!) the evolutionary biologists suddenly recovered from their infatuation with hydraulic theory, and got their minds right:
…what I want to argue is that, in the last decade or so, a new understanding of the biological importance of developmental plasticity has implications for evolutionary psychology. Whereas previously considerations of anticipated consilience with evolutionary biology and cognitive science may have provided support for those proximate hypotheses which posited modules, I argue in this section that these very same considerations now support significantly non-nativist proximate hypotheses. The argument, put simply, is that traits which have high degrees of plasticity will be more evolutionarily robust than highly canalized innately specified non-malleable traits like mental modules. The upshot is that a mind comprised mostly of modules is not plastic in this specific sense, and is therefore ultimately unlikely to be favoured by natural selection. But a mind equipped with powerful, domain general learning routines does have the relevant plasticity.
I leave it as an exercise for the student to pick out all the innumerable strawmen in this parable of the “great change of heart” in evolutionary biology. Suffice it to say that, as a result of this new-found “plasticity,” anticipated consilience now requires evolutionary psychologists to reject their silly notions about human nature in favor of a return to the sheltering haven of the Blank Slate. Fedyk helpfully spells it out for us:
This means that, given a choice between proximate explanations which reflect a commitment to the massive modularity hypothesis and proximate explanations which, instead, reflect an approach to the mind which privileges learning…, the latter is most plausible in light of evolutionary biology.
The kicker here is that if anyone even mildly suggests any connection between this latter day manifestation of cultural determinism and the dogmas of the Blank Slate, the Fedyks of the world scream foul. Apparently we are to believe that the “proximate explanations” of evolutionary psychology aren’t completely excluded as long as one can manage a double back flip over the rather substantial barrier of “anticipated consilience” that blocks the way. How that might actually turn out to be possible is never explained. In spite of these scowling denials, I personally will continue to prefer the naïve assumption that, if something walks like a duck, quacks like a duck, and flaps its wings like a duck, then it actually is a duck, or Blank Slater, as the case may be.
Posted on December 31st, 2014
It’s great to see another title by E. O. Wilson. Reading his books is like continuing a conversation with a wise old friend. If you run into him on the street you don’t expect to hear him say anything radically different from what he’s said in the past. However, you always look forward to chatting with him because he’s never merely repetitious or tiresome. He always has some thought-provoking new insight or acute comment on the latest news. At this stage in his life he also delights in puncturing the prevailing orthodoxies, without the least fear of the inevitable anathemas of the defenders of the faith.
In his latest, The Meaning of Human Existence, he continues the open and unabashed defense of group selection that so rattled his peers in his previous book, The Social Conquest of Earth. I’ve discussed some of the reasons for their unease in an earlier post. In short, if it can really be shown that the role of group selection in human evolution has been as prominent as Wilson claims, it will seriously mar the legacy of such prominent public intellectuals as Richard Dawkins and Steven Pinker, as well as a host of other prominent scientists, who have loudly and tirelessly insisted on the insignificance of group selection. It will also require some serious adjustments to the fanciful yarn that currently passes as the “history” of the Blank Slate affair. Obviously, Wilson is firmly convinced that he’s on to something, because he’s not letting up. He dismisses the alternative inclusive fitness interpretation of evolution as unsupported by the evidence and at odds with the most up-to-date mathematical models. In his words,
Although the controversy between natural selection and inclusive fitness still flickers here and there, the assumptions of the theory of inclusive fitness have proved to be applicable only in a few extreme cases unlikely to occur on Earth or any other planet. No example of inclusive fitness has been directly measured. All that has been accomplished is an indirect analysis called the regressive method, which unfortunately has itself been mathematically invalidated.
Interestingly, while embracing group selection, Wilson then explicitly agrees with one of the most prominent defenders of inclusive fitness, Richard Dawkins, on the significance of the gene:
The use of the individual or group as the unit of heredity, rather than the gene, is an even more fundamental error.
Very clever, that, a preemptive disarming of the predictable invention of straw men to attack group selection via the bogus claim that it implies that groups are the unit of selection. The theory of group selection already has a fascinating, not to mention ironical, history, and its future promises to be no less entertaining.
When it comes to the title of the book, Wilson himself lets us know early on that it’s just a forgivable form of “poetic license.” In his words,
In ordinary usage the word “meaning” implies intention. Intention implies design, and design implies a designer. Any entity, any process, or definition of any word itself is put into play as a result of an intended consequence in the mind of the designer. This is the heart of the philosophical worldview of organized religions, and in particular their creation stories. Humanity, it assumes, exists for a purpose. Individuals have a purpose in being on Earth. Both humanity and individuals have meaning.
Wilson is right when he says that this is what most people understand by the term “meaning,” and later in the book he decisively rejects the notion that such “meaning” is even possible, dismissing religious belief more bluntly than in any of his previous books. He provides himself with a fig leaf in the form of a redefinition of “meaning” as follows:
There is a second, broader way the word “meaning” is used, and a very different worldview implied. It is that the accidents of history, not the intentions of a designer, are the source of meaning.
I rather suspect most philosophers will find this redefinition unpalatable. Beyond that, I won’t begrudge Wilson his fig leaf. After all, if one takes the trouble to write books, one generally also has an interest in selling them.
As noted above, another significant difference between this and Wilson’s earlier books is his decisive support for what one might call the “New Atheist” line, as set forth in books by the likes of Richard Dawkins, Sam Harris, and Christopher Hitchens. Obviously, Wilson has been carefully following the progress of the debate. He rejects religions, significantly in their secular as well as their traditional spiritual manifestations, as both false and dangerous, mainly because of their inevitable association with tribalism. In his words,
Religious warriors are not an anomaly. It is a mistake to classify believers of particular religious and dogmatic religionlike ideologies into two groups, moderate versus extremist. The true cause of hatred and violence is faith versus faith, an outward expression of the ancient instinct of tribalism. Faith is the one thing that makes otherwise good people do bad things.
and, embracing the ingroup/outgroup dichotomy in human moral behavior I’ve often alluded to on this blog,
The great religions… are impediments to the grasp of reality needed to solve most social problems in the real world. Their exquisitely human flaw is tribalism. The instinctual force of tribalism in the genesis of religiosity is far stronger than the yearning for spirituality. People deeply need membership in a group, whether religious or secular. From a lifetime of emotional experience, they know that happiness, and indeed survival itself, require that they bond with others who share some amount of genetic kinship, language, moral beliefs, geographical location, social purpose, and dress code – preferably all of these but at least two or three for most purposes. It is tribalism, not the moral tenets and humanitarian thought of pure religion, that makes good people do bad things.
Finally, in a passage worthy of New Atheist Jerry Coyne himself, Wilson denounces both “accommodationists” and the obscurantist teachings of the “sophisticated Christians:”
Most serious writers on religion conflate the transcendent quest for meaning with the tribalistic defense of creation myths. They accept, or fear to deny, the existence of a personal deity. They read into the creation myths humanity’s effort to communicate with the deity, as part of the search for an uncorrupted life now and beyond death. Intellectual compromisers one and all, they include liberal theologians of the Niebuhr school, philosophers battening on learned ambiguity, literary admirers of C. S. Lewis, and others persuaded, after deep thought, that there must be Something Out There. They tend to be unconscious of prehistory and the biological evolution of human instinct, both of which beg to shed light on this very important subject.
In a word, Wilson has now positioned himself firmly in the New Atheist camp. This is hardly likely to mollify many of the prominent New Atheists, who will remain bitter because of his promotion of group selection, but at this point in his career, Wilson can take their hostility cum grano salis.
There is much more of interest in The Meaning of Human Existence than I can cover in a blog post, such as Wilson’s rather vague reasons for insisting on the importance of the humanities in solving our problems, his rejection of interplanetary and/or interstellar colonization, and his speculations on the nature of alien life forms. I can only suggest that interested readers buy the book.
Posted on November 19th, 2014 No comments
An article entitled “The Evolution of War – A User’s Guide” recently turned up at “This View of Life,” a website hosted by David Sloan Wilson. Written by Anthony Lopez, it is one of the more interesting artifacts of the ongoing “correction” of the history of the debate over human nature I’ve seen in a while. One of the reasons it’s so remarkable is that Wilson himself is one of the foremost proponents of the theory of group selection. Lopez claims in his article that one of the four “major theoretical positions” in the debate over the evolution of war is occupied by the “group selectionists,” and yet he conforms to the prevailing academic conceit of studiously ignoring the role of Robert Ardrey, who was not only the most influential player in the “origins of war” debate, but overwhelmingly so in the whole “Blank Slate” affair as well. Why should that be so remarkable? Because at the moment the academics’ main rationalization for pretending they never heard of a man named Ardrey is (you guessed it) his support for group selection!
When it comes to the significance of Ardrey, you don’t have to take my word for it. His was the most influential voice in a growing chorus that finally smashed the Blank Slate orthodoxy. The historical source material is all still there for anyone who cares to trouble themselves to check it. One invaluable piece thereof is “Man and Aggression,” a collection of essays edited by arch-Blank Slater Ashley Montagu and aimed mainly at Ardrey, with occasional swipes at Konrad Lorenz, and with William Golding, author of “Lord of the Flies,” thrown in for comic effect. The last I looked you could still pick it up for a penny at Amazon. For example, from one of the essays by psychologist Geoffrey Gorer,
Almost without question, Robert Ardrey is today the most influential writer in English dealing with the innate or instinctive attributes of human nature, and the most skilled populariser of the findings of paleo-anthropologists, ethologists, and biological experimenters… He is a skilled writer, with a lively command of English prose, a pretty turn of wit, and a dramatist’s skill in exposition; he is also a good reporter, with the reporter’s eye for the significant detail, the striking visual impression. He has taken a look at nearly all the current work in Africa of paleo-anthropologists and ethologists; time and again, a couple of his paragraphs can make vivid a site, such as the Olduvai Gorge, which has been merely a name in a hundred articles.
In case you’ve been asleep for the last half a century, the Blank Slate affair was probably the greatest debacle in the history of science. The travails of Galileo and the antics of Lysenko are child’s play in comparison. For decades, whole legions of “men of science” in the behavioral sciences pretended to believe there was no such thing as human nature. As was obvious to any ten-year-old, that position was not only not “science,” it was absurd on the face of it. However, it was required as a prop for a false political ideology, and so it stood for half a century and more. Anyone who challenged it was quickly slapped down as a “fascist,” a “racist,” or a denizen of the “extreme right wing.” Then Ardrey appeared on the scene. He came from the left of the ideological spectrum himself, but also happened to be an honest man. The main theme of all his work in general, and the four popular books he wrote between 1961 and 1976 in particular, was that there is such a thing as human nature, and that it is important. He insisted on that point in spite of a storm of abuse from the Blank Slate zealots. On that point, on that key theme, he has been triumphantly vindicated. Almost all the “men of science,” in psychology, sociology, and anthropology were wrong, and he was right.
Alas, the “men of science” could not bear the shame. After all, Ardrey was not one of them. Indeed, he was a mere playwright! How could men like Shakespeare, Ibsen, and Moliere possibly know anything about human nature? Somehow, they had to find an excuse for dropping Ardrey down the memory hole, and find one they did! There was actually more than one, but the main one was group selection. Writing in “The Selfish Gene” back in 1976, Richard Dawkins claimed that Ardrey, Lorenz, and Irenäus Eibl-Eibesfeldt were “totally and utterly wrong,” not because they insisted there was such a thing as human nature, but because of their support for group selection! Fast forward to 2002, and Steven Pinker managed the absurd feat of writing a whole tome about the Blank Slate that only mentioned Ardrey in a single paragraph, and then only to assert that he had been “totally and utterly wrong,” period, on Richard Dawkins’ authority, and with no mention of group selection as the reason. That has been the default position of the “men of science” ever since.
Which brings us back to Lopez’ article. He informs us that one of the “four positions” in the debate over the evolution of war is “The Killer Ape Hypothesis.” In fact, there never was a “Killer Ape Hypothesis” as described by Lopez. It was a strawman, pure and simple, concocted by Ardrey’s enemies. Note that, in spite of alluding to this imaginary “hypothesis,” Lopez can’t bring himself to mention Ardrey. Indeed, so effective has been the “adjustment” of history that, depending on his age, it’s quite possible that he’s never even heard of him. Instead, Konrad Lorenz is dragged in as an unlikely surrogate, even though he never came close to supporting anything even remotely resembling the “Killer Ape Hypothesis.” His main work relevant to the origins of war was “On Aggression,” and he hardly mentioned apes in it at all, focusing instead mainly on the behavior of fish, birds and rats.
And what of Ardrey? As it happens, he did write a great deal about our ape-like ancestors. For example, he claimed that Raymond Dart had presented convincing statistical evidence that one of them, Australopithecus africanus, had used weapons and hunted. That statistical evidence has never been challenged, and continues to be ignored by the “men of science” to this day. Without bothering to even mention it, C. K. Brain presented an alternative hypothesis that the only acts of “aggression” in the caves explored by Dart had been perpetrated by leopards. In recent years, as the absurdities of his hypothesis have been gradually exposed, Brain has been in serious rowback mode, and Dart has been vindicated to the point that he is now celebrated as the “father of cave taphonomy.”
Ardrey also claimed that our apelike ancestors had hunted, most notably in his last book, “The Hunting Hypothesis.” When Jane Goodall published her observation of chimpanzees hunting, she was furiously vilified by the Blank Slaters. She, too, has been vindicated. Eventually, even PBS aired a program about hunting behavior in early hominids, and, miraculously, just this year even the impeccably politically correct “Scientific American” published an article confirming the same in the April edition! In a word, we have seen the vindication of these two main hypotheses of Ardrey concerning the behavior of our apelike and hominid ancestors. Furthermore, as I have demonstrated with many quotes from his work in previous posts, he was anything but a “genetic determinist,” and, while he strongly supported the view that innate predispositions, or “human nature,” if you will, have played a significant role in the genesis of human warfare, he clearly did not believe that it was unavoidable or inevitable. In fact, that belief is one of the main reasons he wrote his books. In spite of that, the “Killer Ape” zombie marches on, and turns up as one of the “four positions” that are supposed to “illuminate” the debate over the origins of war, while another of the “positions” is supposedly occupied by, of all things, “group selectionists!” History is nothing if not ironical.
Lopez’ other two “positions” are “The Strategic Ape Hypothesis” and “The Inventionists.” I leave the value of these remaining “positions” to the imagination of those readers who care to “examine the layout of this academic ‘battlefield’”, as he puts it. Other than that, I can only suggest that those interested in learning the truth, as opposed to the prevailing academic narrative, concerning the Blank Slate debacle would do better to look at the abundant historical source material themselves than to let someone else “interpret” it for them.
Posted on October 26th, 2014 No comments
It’s become fashionable in some quarters to claim that philosophy is useless. I wouldn’t go that far. Philosophers have at least been astute enough to notice some of the more self-destructive tendencies of our species, and to come up with more or less useful formulas for limiting the damage. However, they have always had a tendency to overreach. We are not intelligent enough to reliably discover truth far from the realm of repeatable experiments. When we attempt to do so, we commonly wander off into intellectual swamps. That is where one often finds philosophers.
The above is well illustrated by the history of thought touching on the subject of morality in the decades immediately following the publication of On the Origin of Species in 1859. It was certainly realized in short order that Darwin’s theory was relevant to the subject of morality. Perhaps no one at the time saw it better than Darwin himself. However, the realization that the search for the “ultimate Good” was now over once and for all, because the object sought did not exist, was slow in coming. Indeed, for the most part, it’s still not realized to this day. The various “systems” of morality in the decades after Darwin’s book appeared kept stumbling forward towards the non-existent goal, like dead men walking. For the most part, their creators never grasped the significance of the term “natural selection.” Against all odds, they obstinately persisted in the naturalistic fallacy: the irrational belief that, to the extent that morality had evolved, it had done so “for the good of the species.”
An excellent piece of historical source material documenting these developments can be found at Google Books. Entitled, A Review of the Systems of Ethics Founded on the Theory of Evolution, it was written by one C. M. Williams, and published in 1893. According to one version on Google Books, “C. M.” stands for “Cora Mae,” apparently a complete invention. The copying is botched, so that every other page of the last part of the book is unreadable. The second version, which is at least readable, claims the author was Charles Mallory Williams and, indeed, that name is scribbled after the initials “C. M.” in the version copied. There actually was a Charles Mallory Williams. He was a medical doctor, born in 1872, and would have been 20 years old at the time the book was published. The chances that anyone so young wrote the book in question are vanishingly small. Unfortunately, I must leave it to some future historian to clear up the mystery of who “C. M.” actually was, and move on to consider what he wrote.
According to the author, by 1893 a flood of books and papers had already appeared addressing the connection between Darwin’s theory and morality. In his words,
Of the Ethics founded on the theory of Evolution, I have considered only the independent theories which have been elaborated to systems. I have omitted consideration of many works which bear on Evolutional Ethics as practical or exhortative treatises or compilations of facts, but which involve no distinctly worked out theory of morals.
The authors who made the cut include Alfred Russel Wallace, Ernst Haeckel, Herbert Spencer, John Fiske, W. H. Rolph, Alfred Barratt, Leslie Stephen, Bartholomäus von Carneri, Harald Hoffding, Georg von Gizycki, Samuel Alexander, and, last but not least, Darwin himself. Williams cites the books of each that bear on the subject, and most of them have a Wiki page. Wallace, of course, is occasionally mentioned as the “co-inventor” of the theory of evolution by natural selection with Darwin. Collectors of historical trivia may be interested to know that Barratt’s work was edited by Carveth Read, who was probably the first to propose a theory of the hunting transition from ape to man. Leslie Stephen was the father of Virginia Woolf, and Harald Hoffding was the friend and philosophy teacher of Niels Bohr.
I don’t intend to discuss the work of each of these authors in detail. However, certain themes are common to most, if not all, of them, and most of them, not to mention Williams himself, still clung to Lamarckism and other outmoded versions of evolution. It took the world a long time to catch up to Darwin. For example, in the case of Haeckel,
Even in the first edition of his Naturliche Schopfungsgeschichte Haeckel makes a distinction between conservative and progressive inheritance, and in the edition of 1889 he still maintains this division against Weismann and others, claiming the heredity of acquired habit under certain circumstances and showing conclusively that even wounds and blemishes received during the life of an individual may be in some instances inherited by descendants.
For Williams’ own Lamarckism, see chapter 1 of Volume II, in which he seems convinced that Darwin himself believes in inheritance of acquired characteristics, and that Lamarck’s theories are supported by abundant evidence. We are familiar with an abundance of similar types of “evidence” in our own day.
More troublesome than these vestiges of earlier theories of evolution are the vestiges of earlier systems of morality. Every one of the authors cited above has a deep background in the theories of morality concocted by philosophers, both ancient and modern. In general, they have adopted some version of one of these theories as their own. As a result, they have a tendency to fit evolution by natural selection into the Procrustean bed of their earlier theories, often as a mere extension of them. An interesting manifestation of this tendency is the fact that, almost to a man, they believed that evolution promoted the “good of the species.” For example, quoting Stephen:
The quality which makes a race survive may not always be a source of advantage to every individual, or even to the average individual. Since the animal which is better adapted for continuing its species will have an advantage in the struggle even though it may not be so well adapted for pursuing its own happiness, an instinct grows and decays not on account of its effects on the individual, but on account of its effects upon the race.
The case of Carneri, who happened to be a German, is even more interesting. Starting with the conclusion that “evolution by natural selection” must inevitably favor the species over the individual,
Every man has his own ends, and in the attempt to attain his ends, does not hesitate to set himself in opposition to all the rest of mankind. If he is sufficiently energetic and cunning, he may even succeed for a time in his endeavors to the harm of humanity. Yet to have the whole of humanity against oneself is to endeavor to proceed in the direction of greater resistance, and the process must sooner or later result in the triumph of the stronger power. In the struggle for existence in its larger as well as its smaller manifestations, the individual seeks with all his power to satisfy the impulse to happiness which arises with conscious existence, while the species as the complex of all energies developed by its parts has an impulse to self preservation of its own.
It follows, at least for Carneri, that Darwin’s theory is a mere confirmation of utilitarianism:
The “I” extends itself to an “I” of mankind, so that the individual, in making self his end, comes to make the whole of mankind his end. The ideal cannot be fully realized; the happiness of all cannot be attained; so that there is always choice between two evils, never choice of perfect good, and it is necessary to be content with the greatest good of the greatest number as principle of action.
which, in turn, leads to a version of morality worthy of Bismarck himself. As paraphrased by Williams,
He lays further stress upon the absence of morality, not only among the animals, in whom at least general ethical feelings in distinction from those towards individuals are not found, but also among savages, morality being not the incentive to, but the product of the state.
Alexander gives what is perhaps the most striking example of this perceived syncretism between Darwinism and pre-existing philosophies, treating it as a mere afterthought to Hegel and Kant:
Nothing is more striking at the present time than the convergence of different schools of Ethics. English Utilitarianism developing into Evolutional Ethics on the one hand, and the idealism associated with the German philosophy derived from Kant on the other. The convergence is not of course in mere practical precepts, but in method also. It consists in an objectivity or impartiality of treatment commonly called scientific. There is also a convergence in general results which consists in a recognition of a kind of proportion between individual and society, expressed by the phrase “organic connection.” The theory of egoism pure and simple has been long dead. Utilitarianism succeeded it and enlarged the moral end. Evolution continued the process of enlarging the individual interest, and has given precision to the relation between the individual and the moral law. But in this it has added nothing new, for Hegel in the early part of the century, gave life to Kant’s formula by treating the law of morality as realized in the society and the state.
Alexander continues by confirming that he shares a belief common to all the rest as well, in one form or another – in the reality of objective morality:
The convergence of dissimilar theories affords us some prospect of obtaining a satisfactory statement of the ethical truths towards which they seem to move.
Gizycki embraces this version of the naturalistic fallacy even more explicitly:
Natural selection is therefore a power of judgment, in that it preserves the just and lets the evil perish. Will this war of the good with the evil always continue? Or will the perfect kingdom of righteousness one day prevail? We hope this last, but we cannot know certainly.
There is much more of interest in this book by an indeterminate author. Of particular note is the section on Alfred Russel Wallace, but I will leave that for a later post. One might mention as an “extenuating circumstance” for these authors that none of them had the benefit of the scientific community’s belated recognition of the significance of Mendel’s discoveries. It’s well known that Darwin himself struggled to come up with a logical mechanism to explain how it was possible for natural selection to even happen. The notions of these moral philosophers on the subject must have been hopelessly vague by comparison. Their ideas about “evolution for the good of the species” must be seen in that context. The concocters of the modern “scientific” versions of morality can offer no such excuse.
Posted on October 12th, 2014 No comments
Frans de Waal’s The Bonobo and the Atheist is interesting for several reasons. As the title of this post suggests, it demonstrates the disconnect between the theory and practice of morality in the academy. It’s one of the latest brickbats in the ongoing spat between the New Atheists and the “accommodationist” atheists. It documents the current progress of the rearrangement of history in the behavioral sciences in the aftermath of the Blank Slate debacle. It’s a useful reality check on the behavior of bonobos, the latest “noble savage” among the primates. And, finally, it’s an entertaining read.
In theory, de Waal is certainly a subjective moralist. As he puts it, “the whole point of my book is to argue a bottom up approach” to morality, as opposed to the top down approach: “The view of morality as a set of immutable principles, or laws, that are ours to discover.” The “bottom” de Waal refers to are evolved emotional traits. In his words,
The moral law is not imposed from above or derived from well-reasoned principles; rather, it arises from ingrained values that have been there since the beginning of time.
My views are in line with the way we know the human mind works, with visceral reactions arriving before rationalizations, and also with the way evolution produces behavior. A good place to start is with an acknowledgment of our background as social animals, and how this background predisposes us to treat each other. This approach deserves attention at a time in which even avowed atheists are unable to wean themselves from a semireligious morality, thinking that the world would be a better place if only a white-coated priesthood could take over from the frocked one.
So far, so good. I happen to be a subjective moralist myself, and agree with de Waal on the origins of morality. However, reading on, we find confirmation of a prediction made long ago by Friedrich Nietzsche. In Human, All Too Human, he noted the powerful human attachment to religion and the “metaphysics” of the old philosophers. He likened the expansion of human knowledge to a ladder, or tree, up which humanity was gradually climbing. As we reached the top rungs, however, we would begin to notice that the old beliefs that had supplied us with such great emotional satisfaction in the past were really illusions. At that point, our tendency would be to recoil from this reality. The “tree” would begin to grow “sprouts” in reverse. We would balk at “turning the last corner.” Nietzsche imagined that developing a new philosophy that could accommodate the world as it was instead of the world as we wished it to be would be the task of “the great thinkers of the next century.” Alas, a century is long past since he wrote those words, yet to all appearances we are still tangled in the “downward sprouts.”
Nowhere else is this more apparent than in the academy, where a highly moralistic secular Puritanism prevails. Top down, objective morality is alive and well, and the self-righteous piety of the new, secular priesthood puts that of the old-fashioned religious Puritans in the shade. All this modern piety seems to be self-supporting, levitating in thin air, with none of the props once supplied by religion. As de Waal puts it,
…the main ingredients of a moral society don’t require religion, since they come from within.
Clearly, de Waal can see where morality comes from, and how it evolved, and why it exists, but, even with these insights, he too recoils from “climbing the last rungs,” and “turning the final corner.” We find artifacts of the modern objective morality prevalent in the academy scattered throughout his book. For example,
Science isn’t the answer to everything. As a student, I learned about the “naturalistic fallacy” and how it would be the zenith of arrogance for scientists to think that their work could illuminate the distinction between right and wrong. This was not long after World War II, mind you, which had brought us massive evil justified by a scientific theory of self-directed evolution. Scientists had been much involved in the genocidal machine, conducting unimaginable experiments.
American and British scientists were not innocent, however, because they were the ones who earlier in the century had brought us eugenics. They advocated racist immigration laws and forced sterilization of the deaf, blind, mentally ill, and physically impaired, as well as criminals and members of minority races.
I am profoundly skeptical of the moral purity of science, and feel that its role should never exceed that of morality’s handmaiden.
One can consider humans as either inherently good but capable of evil or as inherently evil yet capable of good. I happen to belong to the first camp.
None of these statements make any sense in the absence of objective good and evil. If, as de Waal claims repeatedly elsewhere in his book, morality is ultimately an expression of emotions or “gut feelings,” analogs of which we share with many other animals, and which exist because they evolved, then the notions that scientists are or were evil, period, or that science itself can be morally impure, period, or that humans can be good, period, or evil, period, are obvious non sequiturs. De Waal has climbed up the ladder, peeked at what lay just beyond the top rungs, and jumped back down onto Nietzsche’s “backward growing sprouts.” Interestingly enough, in spite of that de Waal admires the strength of one who was either braver or more cold-blooded, and kept climbing: Edvard Westermarck. But I will have more to say of him later.
The Bonobo and the Atheist is also interesting from a purely historical point of view. The narrative concocted to serve as the “history” of the behavioral sciences continues to be adjusted and readjusted in the aftermath of the Blank Slate catastrophe, probably the greatest scientific debacle of all time. As usual, the arch-villain is Robert Ardrey, who committed the grave sin of being right about human nature when virtually all the behavioral scientists and professionals, at least in the United States, were wrong. Imagine the impertinence of a mere playwright daring to do such a thing! Here’s what de Waal has to say about him:
Confusing predation with aggression is an old error that recalls the time that humans were seen as incorrigible murderers on the basis of signs that our ancestors ate meat. This “killer ape” notion gained such traction that the opening scene of Stanley Kubrick’s movie 2001: A Space Odyssey showed one hominin bludgeoning another with a zebra femur, after which the weapon, flung triumphantly into the air, turned into an orbiting spacecraft. A stirring image, but based on a single puncture wound in the fossilized skull of an ancestral infant, known as the Taung Child. Its discoverer had concluded that our ancestors must have been carnivorous cannibals, an idea that the journalist Robert Ardrey repackaged in African Genesis by saying that we are risen apes rather than fallen angels. It is now considered likely, however, that the Taung Child had merely fallen prey to a leopard or eagle.
I had to smile when I read this implausible yarn. After all, anyone can refute it by simply looking up the source material, not to mention the fact that there’s no lack of people who’ve actually read Ardrey, and are aware that the “Killer Ape Theory” is a mere straw man concocted by his enemies. De Waal is not one of them. He has obviously not read Ardrey, and probably knows of him only at third or fourth hand. If he had read him, he’d realize that he was basically channeling Ardrey in the rest of his book. Indeed, much of The Bonobo and the Atheist reads as if it had been lifted from Ardrey’s last book, The Hunting Hypothesis, complete with the ancient origins of morality, Ardrey’s anticipation of de Waal’s theme that humans are genuinely capable of altruism and cooperation, resulting in part, as de Waal also claims, from our ancestors’ adoption of a hunting lifestyle, and his rejection of what de Waal calls “Veneer Theory,” the notion that human morality is merely a thin veneer covering an evil and selfish core. For example, according to de Waal,
Hunting and meat sharing are at the root of chimpanzee sociality in the same way that they are thought to have catalyzed human evolution. The big-game hunting of our ancestors required even tighter cooperation.
This conclusion is familiar to those who have actually read Ardrey, but was anathema to the “Men of Science” as recently as 15 years ago. Ardrey was, of course, never a journalist, and his conclusion that the Australopithecine apes had hunted was based, not on the “single puncture wound” in the Taung child’s skull, but mainly on a statistical anomaly: a particular type of bone that might have been used as a weapon was found in association with the ape remains in numbers far exceeding what random deposition would predict. To date, no one has ever explained that anomaly, and it remains carefully swept under the rug. In a word, the idea that Ardrey based his hypothesis entirely “on a single puncture wound” is poppycock. In the first place, there were two puncture wounds, not one. Apparently, de Waal is also unaware that Raymond Dart, the man who discovered this evidence, has been rehabilitated, and is now celebrated as the father of cave taphonomy, whereas those who disputed his conclusions about what he had found, such as C. K. Brain, who claimed that the wounds were caused by a leopard, are now in furious rowback mode. For example, from the abstract of a paper in which Brain’s name appears at the end of the list of authors,
The ca. 1.0 myr old fauna from Swartkrans Member 3 (South Africa) preserves abundant indication of carnivore activity in the form of tooth marks (including pits) on many bone surfaces. This direct paleontological evidence is used to test a recent suggestion that leopards, regardless of prey body size, may have been almost solely responsible for the accumulation of the majority of bones in multiple deposits (including Swartkrans Member 3) from various Sterkfontein Valley cave sites. Our results falsify that hypothesis and corroborate an earlier hypothesis that, while the carcasses of smaller animals may have been deposited in Swartkrans by leopards, other kinds of carnivores (and hominids) were mostly responsible for the deposition of large animal remains.
Meanwhile, we find that none other than Stephen Jay Gould has been transmogrified into a “hero.” As documented by Steven Pinker in The Blank Slate, Gould was basically a radical Blank Slater, unless one cares to give him a pass because he grudgingly admitted that eating, sleeping, urinating and defecating might not, after all, be purely learned behaviors. The real Stephen Jay Gould rejected evolutionary psychology root and branch, and was a co-signer of the Blank Slater manifesto that appeared in the New York Times in response to claims about human nature as reserved as those of E. O. Wilson in his Sociobiology. He famously invented the charge of “just so stories” to apply to any and all claims for the existence of human behavioral predispositions. Now, in The Bonobo and the Atheist, we find Gould reinvented as a good evolutionary psychologist. His “just so stories” only apply to the “excesses” of evolutionary psychology. We find the real Gould, who completely rejected the idea of “human nature,” softened to a new, improved Gould who merely “vehemently resisted the idea that every single human behavior deserves an evolutionary account.” If anyone was a dyed-in-the-wool habitué of the Blank Slate establishment in its heyday, it was Gould, but suddenly we learn that “Several skirmishes between him and the evolutionary establishment unfolded in the pages of the New York Review of Books in 1997.” I can only suggest that anyone who honestly believes that a new “establishment” had already replaced the Blank Slate prior to 1997 should read Napoleon Chagnon’s Noble Savages: My Life Among Two Dangerous Tribes – The Yanomamö and the Anthropologists, published as recently as last year. No matter; according to de Waal, “The greatest public defender of evolution this country has ever known was Stephen Jay Gould.”
Perhaps one can best understand the Gould panegyrics in connection with another of the major themes of de Waal’s book: his rejection of Richard Dawkins and the rest of the New Atheists. De Waal is what New Atheist Jerry Coyne would refer to as an “accommodationist,” that is, an atheist who believes that the atheist lions should lie down with the religious sheep. As it happens, Gould was the Ur-accommodationist, and inventor of the phrase “nonoverlapping magisteria,” or NOMA, to describe his claim that science and religion occupy separate spheres of knowledge. One can find a good summary of the objections to NOMA from the likes of “New Atheists” Dawkins, Christopher Hitchens, Sam Harris and Coyne on Prof. Coyne’s website, Why Evolution is True, for example, here and here.
It’s hard to understand de Waal’s bitter opposition to atheist activism as anything other than yet another example of Nietzsche’s climbing back down onto the “backward growing sprouts.” Indeed, as one might expect from such instances of “turning back,” it’s not without contradictions. For example, he writes,
Religion looms as large as an elephant in the United States, to the point that being nonreligious is about the biggest handicap a politician running for office can have, bigger than being gay, unmarried, thrice married, or black.
And yet he objects to the same kind of activism among atheists that has been the most effective antidote to such bigotry when directed at, for example, gays and blacks. For some reason, atheists are just supposed to smile and take it. De Waal accuses Dawkins, Harris and the rest of being “haters,” but I know of not a single New Atheist to whom that term can accurately be applied, and certainly not the likes of Dawkins, Harris or Coyne. Vehement, on occasion, yes, but haters of the religious per se? I don’t think so. De Waal agrees with David Sloan Wilson that “religion” evolved. I can certainly believe that predispositions evolved that have the potential to manifest themselves as religion, but “religion” per se, complete with imaginary spiritual beings? Not likely. Nevertheless, de Waal claims it is part of our “social skin.” And yet, in spite of this claim that religion “evolved,” a bit later we find him taking note of a social phenomenon that apparently directly contradicts this conclusion:
The secular model is currently being tried out in northern Europe, where it has progressed to the point that children naively ask why there are so many “plus signs” on large buildings called “churches.”
Apparently, then, “evolved religion” only infected a portion of our species in northern Europe, and they all moved to the United States. Finally, in his zeal to defend religion, de Waal comes up with some instances of “moral equivalence” that are truly absurd. For example,
I am as sickened (by female genital mutilation, ed.) as the next person, but if Harris’s quest is to show that religion fails to promote morality, why pick on Islam? Isn’t genital mutilation common in the United States, too, where newborn males are routinely circumcised without their consent? We surely don’t need to go all the way to Afghanistan to find valleys in the moral landscape.
As it happens I know of several instances in which my undergraduate classmates voluntarily had themselves circumcised, not for any religious motive, but because otherwise their girlfriends wouldn’t agree to oral sex. One wonders whether de Waal can cite similar instances involving FGM.
Oh, well, I suppose I shouldn’t look a gift horse in the mouth. Anyone who believes in a “bottom up” version of subjective morality can’t be all bad, according to my own subjective judgment, of course. Indeed, de Waal even has the audacity to point out that bonobos, those paragons of primate virtue so often extolled as role models for our own species, do occasionally fight. Along with Jonathan Haidt, he’s probably the closest thing to a “kindred spirit” I’m likely to find in academia. The icing on the cake is that he is aware of and admires the brilliant work of Edvard Westermarck on morality. What of Westermarck, you ask? Well, I’ll take that up in another post.
Posted on April 29th, 2014
You might say Leon Trotsky was the “best” of the old Bolsheviks. He was smart, was familiar with the work of a host of important thinkers beyond the usual Marx and Hegel, and wrote in a style that was a great deal more entertaining than the cock-sure, “scientific” certainties of Lenin or the quasi-liturgical screeds of Stalin. He also had a very rational understanding of morality, right up to the point where his embrace of Marxism forced him to stumble across the is-ought divide. He set down his essential thought on the subject in an essay entitled Their Morals and Ours, which appeared in the June 1938 edition of The New International.
Trotsky begins by jettisoning objective morality, summarizing in a nutshell a truth that is perfectly obvious to religious believers but that atheist moralists so often seem unable to grasp:
Let us admit for the moment that neither personal nor social ends can justify the means. Then it is evidently necessary to seek criteria outside of historical society and those ends which arise in its development. But where? If not on earth, then in the heavens. In divine revelation popes long ago discovered faultless moral criteria. Petty secular popes speak about eternal moral truths without naming their original source. However, we are justified in concluding: since these truths are eternal, they should have existed not only before the appearance of half-monkey-half-man upon the earth but before the evolution of the solar system. Whence then did they arise? The theory of eternal morals can in nowise survive without god.
It is a tribute to the power of human moral emotions that the Sam Harris school of atheists continues doggedly concocting “scientific” theories of morality in spite of this simple and seemingly self-evident truth. It follows immediately from rejection of the God hypothesis. In spite of that, legions of atheists reject it because they “feel in their bones” that the chimeras of Good and Evil that Mother Nature has seen fit to dangle before their eyes must be real. It just can’t be that all their noble ideals are mere artifacts of evolution, and so they continue tinkering on their hopeless systems as the “ignorant” religious fundamentalists smirk in the background.
The very title of Trotsky’s essay reveals that he understood another fundamental aspect of human morality – its dual nature. In spite of approaching the subject via Marx instead of Darwin, he understood the difference between ingroups and outgroups. In the jargon of Marxism, these became “classes.” Thus, Trotsky’s ingroup was the proletariat, and his outgroup the bourgeoisie, and he found the notion that identical moral criteria should be applied to “oppressors” and “oppressed” alike absurd:
Whoever does not care to return to Moses, Christ or Mohammed; whoever is not satisfied with eclectic hodge-podges must acknowledge that morality is a product of social development; that there is nothing invariable about it; that it serves social interests; that these interests are contradictory; that morality more than any other form of ideology has a class character.
Let us note in justice that the most sincere and at the same time the most limited petty bourgeois moralists still live even today in the idealized memories of yesterday and hope for its return. They do not understand that morality is a function of the class struggle; that democratic morality corresponds to the epoch of liberal and progressive capitalism; that the sharpening of the class struggle in passing through its latest phase definitively and irrevocably destroyed this morality; that in its place came the morality of fascism on one side, on the other the morality of proletarian revolution.
Trotsky was quite familiar with Darwinian explanations of morality. One might say that, like so many Marxists who came after him, he was a “Blank Slater,” but certainly not in the same rigid, dogmatic sense as the later versions who denied the very existence of human behavioral predispositions. He allowed that there might be such a thing as “human nature,” but only to the extent that it didn’t get in the way of the proper development of “history.” For example,
But do not elementary moral precepts exist, worked out in the development of mankind as an integral element necessary for the life of every collective body? Undoubtedly such precepts exist but the extent of their action is extremely limited and unstable. Norms “obligatory upon all” become the less forceful the sharper the character assumed by the class struggle. The highest pitch of the class struggle is civil war which explodes into mid-air all moral ties between the hostile classes.
He didn’t realize that these “elementary moral precepts” were just as capable of accommodating the Marxist “classes” as ingroups and outgroups as they are of enabling more “natural” perceptions of one’s own clan of hunter-gatherers and the next one over in the same roles. His conclusion that these “precepts” were relatively unimportant in the overall scheme of things was reinforced by the fact that he was also familiar with and had a predictable allergic reaction to the work of those who derived imaginary, quasi-objective and un-Marxist “natural laws” from “human nature”:
Moralists of the Anglo-Saxon type, in so far as they do not confine themselves to rationalist utilitarianism, the ethics of bourgeois bookkeeping, appear conscious or unconscious students of Viscount Shaftesbury, who at the beginning of the 18th century deduced moral judgments from a special “moral sense” supposedly once and for all given to man.
The “evolutionary” utilitarianism of Spencer likewise abandons us half-way without an answer, since, following Darwin, it tries to dissolve the concrete historical morality in the biological needs or in the “social instincts” characteristic of a gregarious animal, and this at a time when the very understanding of morality arises only in an antagonistic milieu, that is, in a society torn by classes.
Other than the concocters of “natural law,” there was another powerful barrier in the way of Trotsky’s grasping the fundamental significance of his “elementary moral precepts” – his own, powerful moral emotions. According to his autobiography, these manifested themselves at a very young age as powerful reactions to what he perceived as the oppression of the weak by the strong. As Jonathan Haidt might have predicted, they were concentrated in the “Care/harm,” “Liberty/oppression,” and “Fairness/cheating” “foundations” of morality described in his The Righteous Mind as characteristic of the ideologues of the Left. It was inconceivable to Trotsky that the ultimate cause of these exalted emotions was to be found in a subset of the evolved behavioral traits of our species that have no “purpose,” and exist purely because they happened to increase the odds that his ancestors would survive and reproduce. And so it was that, as noted above, he skipped cheerfully across the is-ought divide, hardly noticing that he’d even crossed the line. At the end of the essay we discover that this sober rejecter of all absolute and objective moralities has somehow discovered a magical philosopher’s stone that enabled him to distinguish “higher” from “lower” moralities:
To a revolutionary Marxist there can be no contradiction between personal morality and the interests of the party, since the party embodies in his consciousness the very highest tasks and aims of mankind… Does it not seem that “amoralism” in the given case is only a pseudonym for higher human morality?
Not all will reach that shore, many will drown. But to participate in this movement with open eyes and with an intense will – only this can give the highest moral satisfaction to a thinking being!
Let us say that it provided Trotsky with moral satisfaction, and leave it at that. It is certainly easier to forgive him such a non sequitur than the more puritanical among the New Atheists of today, who, having witnessed the collapse of the Blank Slate, have no excuse for failing to understand where morality “comes from,” and yet still insist on edifying the rest of us with their freshly minted universal and “scientific” moral systems.
As it happens, there is a poignant footnote to Trotsky’s essay. Even at the time he wrote it, he probably knew in his heart of hearts that his earthly god had failed. By then, he could only maintain his defiant faith in Marxism by some convoluted theoretical revisions that must have seemed implausible to a man of his intelligence. According to the dogma of his “Fourth International,” the Bolshevik coup of 1917 had, indeed, been a genuine proletarian revolution. However, soon after seizing power, the proletariat had somehow gone to sleep, and allowed the sly bourgeoisie to regain control, using Stalin as their tool. The historical precedent for this remarkable historical double back flip was the Thermidorian reaction of the French Revolution. As all good Marxists know, this had ended in the defeat of Robespierre and the Jacobins, who were the “real revolutionaries,” by the dark minions of the ancien regime. A more realistic interpretation of the events of 9 Thermidor is that it was a logical response on the part of perfectly sensible men to the realization that, if they did nothing, they were sure to be the next victims of Madame Guillotine. No matter, like the pastor of some tiny fundamentalist sect who insists that only his followers are “true Christians,” and only they will go to heaven, Trotsky insisted that only his followers were the “true revolutionaries” of 1917.
The fact that he took such license with Marxist dogma didn’t prevent Trotsky from grasping what was going on in the 1930’s much more clearly than the “parlor pink” Stalinist apologists of the time. Here’s what he had to say about the Duranty school of Stalinist stooges:
The King’s Counselor, Pritt, who succeeded with timeliness in peering under the chiton of the Stalinist Themis and there discovered everything in order, took upon himself the shameless initiative. Romain Rolland, whose moral authority is highly evaluated by the Soviet publishing house bookkeepers, hastened to proclaim one of his manifestos where melancholy lyricism unites with senile cynicism. The French League for the Rights of Man, which thundered about the “amoralism of Lenin and Trotsky” in 1917 when they broke the military alliance with France, hastened to screen Stalin’s crimes in 1936 in the interests of the Franco-Soviet pact. A patriotic end justifies, as is known, any means. The Nation and The New Republic closed their eyes to Yagoda’s exploits since their “friendship” with the U.S.S.R. guaranteed their own authority. Yet only a year ago these gentlemen did not at all declare Stalinism and Trotskyism to be one and the same. They openly stood for Stalin, for his realism, for his justice and for his Yagoda. They clung to this position as long as they could.
Until the moment of the execution of Tukhachevsky, Yakir, and the others, the big bourgeoisie of the democratic countries, not without pleasure, though blanketed with fastidiousness, watched the execution of the revolutionists in the U.S.S.R. In this sense The Nation and The New Republic, not to speak of Duranty, Louis Fischer, and their kindred prostitutes of the pen, fully responded to the interests of “democratic” imperialism. The execution of the generals alarmed the bourgeoisie, compelling them to understand that the advanced disintegration of the Stalinist apparatus lightened the tasks of Hitler, Mussolini and the Mikado. The New York Times cautiously but insistently began to correct its own Duranty.
Those who don’t understand what Trotsky is getting at with his imputations of Stalinism regarding The Nation and The New Republic need only read a few back issues of those magazines from the mid to late 1930’s. It won’t take them long to get the point.
Even if the gallant old Bolshevik still firmly believed in his own revisions of Marxism in 1938, there can be little doubt that the scales had fallen from his eyes shortly before Stalin had him murdered in 1940. By then, World War II was already underway. In an essay that appeared in his last book, a collection of essays entitled In Defense of Marxism, he wrote,
If, however, it is conceded that the present war will provoke not revolution but a decline of the proletariat, then there remains another alternative; the further decay of monopoly capitalism, its further fusion with the state and the replacement of democracy wherever it still remained by a totalitarian regime. The inability of the proletariat to take into its hands the leadership of society could actually lead under these conditions to the growth of a new exploiting class from the Bonapartist fascist bureaucracy. This would be, according to all indications, a regime of decline, signaling the eclipse of civilization… Then it would be necessary in retrospect to establish that in its fundamental traits the present USSR was the precursor of a new exploiting regime on an international scale… If (this) prognosis proves to be correct, then, of course, the bureaucracy will become a new exploiting class. However onerous the second perspective may be, if the world proletariat should actually prove incapable of fulfilling the mission placed upon it by the course of development, nothing else would remain except only to recognize that the socialist program, based on the internal contradictions of capitalist society, ended as a Utopia.
The assassin who ended Trotsky’s life with an ice pick was perhaps the most merciful of Stalin’s many executioners. There could have been little joy for the old Bolshevik in witnessing the bloody dictator’s triumph in 1945, and the final collapse of all his glorious dreams.
Posted on April 16th, 2014
One Thomas Rodham Wells, who apparently fancies himself a philosopher, has posted an article entitled Is Parenthood Morally Respectable? over at 3quarksdaily. It explains to the rest of us benighted souls why it’s immoral to have children, except in situations where the number is limited to one, and the prospective parents’ motives in having children are scrutinized for moral purity, presumably by a board of philosophers appointed by Wells. Such tracts have been popping up in increasing numbers lately, mainly emanating from the left of the ideological spectrum. I really don’t know whether to laugh or cry when I see them. They’re the ultimate expressions of what one might call a morality inversion – morality as a negation of the very basis of morality itself. Moral objections to parenthood are hardly the only manifestations of such suicidal inversions observable in modern society. For example, often the very same people who consider parenthood “evil” also consider unlimited illegal immigration “good.” I suppose one shouldn’t be surprised. Jury-rigging a large brain on a creature with a pre-existing set of behavioral traits, and then expecting the moral emotions to catch up with the change overnight would be a dubious proposition even in a static environment. Plump that creature down in the environment of today, radically different as it is from the one in which its moral equipment evolved, and such “anomalies” are only to be expected.
On the other hand, Darwin happened. He certainly had no trouble making the connection between his revolutionary theory and moral behavior. It was immediately obvious to him that morality exists because it evolved. The connection has been just as obvious to many others who have come and gone in the intervening century and a half. In this post-Blank Slate era the fact should be as obvious as the nose on your face. It should serve as a check on the intellectual hubris of our species that, in spite of that, so many of us still don’t get it.
I won’t go into too much detail about how Wells rationalized himself into a morality inversion. It’s the usual stuff. Parenthood is selfish because it imposes social costs on those who choose not to have children. Parenthood is irresponsible because the carbon footprint of children will melt the planet. Parenthood is unfair because the burden other people’s children impose on the childless outweighs any advantages they bring. And so on, and so on. As usual, all this completely misses the point. The “point” is that the ultimate reason that morality exists to begin with, and absent which it would not exist, is that it increased the probability that individuals of our species would survive and have children who would also survive. In other words, using morality to encourage genetic suicide is manifestly absurd. It is basically the same thing as using one’s evolved hand to shoot oneself, or using one’s evolved feet to jump off a cliff. One can only conclude that, in the midst of all his complex moral reasoning, Wells never bothered to consider why, exactly, there is such a thing as morality.
Should one go to the trouble of pointing all this out to him? What on earth for? The rest of us should be overjoyed that he and as many others like him as possible are delusional. If anything, we should encourage them to remain delusional. If they have no children, we won’t have to feed or educate them, the planet may not melt after all, and, best of all, there will be more room for our children. As for my pointing this out to my readers, I admit, it does seem somewhat counterintuitive. On the other hand, so far there aren’t enough of you to seriously risk melting the planet, and if you’re smart enough to “get it,” it’s probably worth my while to keep you around to provide a little quality genetic diversity in any case.
Posted on March 29th, 2014
People worry about a “grounding” for morality. There’s really no need to. As Marc Bekoff and Jessica Pierce pointed out in Wild Justice – The Moral Lives of Animals, there are analogs of moral behavior in many species besides our own. Eventually some bright Ph.D. will design an experiment to scan the brains of chimpanzees as they make morally loaded decisions, and discover that the relevant equipment in their brains is located more or less in the same places as in ours. Other animals don’t wonder why one thing is good and another evil. They’re not intelligent enough to worry about it. Hominids are Mother Nature’s first experiment with creatures that are smart enough to worry about it. The result of this cobbling of big brains onto the already existing mental equipment responsible for moral emotions and perceptions hasn’t been entirely happy. In fact, it has caused endless confusion through the ages.
We can’t just perceive one thing as good, and another as evil, and leave it at that like other animals. We’re too smart for that. We have to invent a story to explain why. We perceive Good and Evil as things independent of ourselves, so we need to come up with some kind of myth about how they got there. It’s an impossible task, because Good and Evil don’t exist as independent things. They are subjective impressions. It is our nature to perceive them as things because morality has always worked best that way, at least until now. Hence the ages-long efforts of philosophers and theologians to grasp the mirage.
We are much like the patients described in Michael Gazzaniga’s The Ethical Brain, who had their left and right brain hemispheres severed from each other to relieve severe epilepsy. According to Gazzaniga,
Beyond the finding…that the left hemisphere makes strange input logical, it includes a special region that interprets the inputs we receive every moment and weaves them into stories to form the ongoing narrative of our self-image and our beliefs. I have called this area of the left hemisphere the interpreter because it seeks explanations for internal and external events and expands on the actual facts we experience to make sense of, or interpret, the events of our life.
Experiments on split-brain patients reveal how readily the left brain interpreter can make up stories and beliefs. In one experiment, for example, when the word walk was presented only to the right side of a patient’s brain, he got up and started walking. When he was asked why he did this, the left brain (where language is stored and where the word walk was not presented) quickly created a reason for the action: “I wanted to go get a Coke.”
We constantly invent similar stories to rationalize to ourselves why something we have just perceived as good really is Good, or why something we have perceived as evil really is Evil. Jonathan Haidt describes the same phenomenon in his The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Noting that he will present evidence in the paper to back up his claims, he writes,
These findings offer four reasons for doubting the causality of reasoning in moral judgment: 1) there are two cognitive processes at work — reasoning and intuition — and the reasoning process has been overemphasized; 2) reasoning is often motivated; 3) the reasoning process constructs post-hoc justifications, yet we experience the illusion of objective reasoning; and 4) moral action covaries with moral emotion more than with moral reasoning.
The most common post-hoc justification, of course, has always been God. Coming up with a God-based narrative is a piece of cake compared to the alternative. After all, if the big guy upstairs wants one thing to be Good and another Evil, and promises to fry you in hell forever if you beg to differ with him, it’s easy to find reasons to agree with Him. Take him out of the mix, however, and things get more complicated. We come up with all kinds of amusing and flimsy rationalizations to demonstrate the existence of the non-existent.
Consider, for example, the matter of Rights which, like Good and Evil, exist as subjective impressions that our mind portrays to us as objective things. The website of the Foundation for Economic Education has a regular “Arena” feature hosting debates on various topics, and a while back the question was, “Do Natural Rights Exist?” The affirmative side was taken by Tibor Machan in a piece entitled, “Natural Rights Come From Human Nature.” If you get the sinking feeling on reading this that you’re about to see yet another version of the naturalistic fallacy, unfortunately you would be right. Machan sums up his argument in the final two paragraphs as follows:
We are all dependent upon knowing the nature of things so that we can organize our knowledge of the world. We know, for example, that there are fruits (a class of some kind of beings) and games (another class) and subatomic particles (yet another class) and so on. These classes or natures of things are not something separate from the things being classified, but constitute their common features, ones without which they wouldn’t be what they are. Across the world, for example, apples and dogs and chickens and tomatoes and, yes, human beings are all recognized for what they are because we know their natures even when some cases are difficult to identify fully, completely, or when there are some oddities involved.
So there is good reason that governments do not create rights for us—we have them, instead, by virtue of our human nature. And this puts a limit on what governments may do, including do to us. They need to secure our rights, and as they do so they must also respect them.
Is it just me, or is this transparent conflation of “is” and “ought” sufficiently obvious to any ten-year old? Well, it must be me, because according to the poll accompanying the debate, 66% of the respondents thought that Machan “won” with this argument, according to which Natural Rights “evolved” right along with our hands and feet. Obviously, since people “know in their bones” that Rights are real things, it doesn’t take a very profound argument to convince them that “it must be true.” In a word, if you think that the world will sink into a fetid sewer of moral relativism and debauchery because there is no “grounding of morality,” I have good news for you. It ain’t so. If our moral equipment works perfectly well even when the only thing propping it up is such a flimsy post-hoc rationalization, it can probably get along just as well without one.