Robert Plomin’s “Blueprint” – The Blank Slate and the Behavioral Genetics Insurgency

Robert Plomin’s Blueprint is a must-read. That would be true even if it were “merely” an account of the recent stunning breakthroughs that have greatly expanded our understanding of the links between our DNA and behavior. Beyond that, however, it reveals an aspect of history that has been little appreciated to date: the guerrilla warfare carried on by behavioral geneticists against the Blank Slate orthodoxy from a very early date. You might say the book is an account of the victorious end of that warfare. From now on, those who deny the existence of heritable genetic effects on human behavior will self-identify as belonging to the same category as the seedier televangelists, or even professors in university “studies” departments.

Let’s begin with the science. We have long known, by virtue of thousands of twin and adoption studies, that many complex human traits, including psychological traits, are more or less heritable due to differences in DNA. These methods also enable us to come up with a ballpark estimate of the degree to which these traits are influenced by genetics. However, until very recently we have not been able to detect exactly which inherited differences in DNA sequences are actually responsible for the variations we see in these traits. That’s where the “revolution” in genetics described by Plomin comes in. It turns out that detecting these differences was a far more challenging task than optimistic scientists expected at first. As he put it,

When the hunt began twenty-five years ago everyone assumed we were after big game – a few genes of large effect that were mostly responsible for heritability. For example, for heritabilities of about 50 per cent, ten genes each accounting for 5 per cent of the variance would do the job. If the effects were this large, it would require a sample size of only 200 to have sufficient power to detect them.

This fond hope turned out to be wishful thinking. As noted in the book, some promising genes were studied, and some claims were occasionally made in the literature that a few such “magic” genes had been found. The result, according to Plomin, was a fiasco. The studies could not be replicated. It was clear by the turn of the century that a much broader approach would be necessary. This, however, would require the genotyping of tens of thousands of single-nucleotide polymorphisms, or SNPs (pronounced “snips”). A SNP is a change in a single one of the billions of rungs of the DNA ladder each of us carries. SNPs are one of the main reasons for differences in the DNA sequence among different human beings. To make matters worse, it was expected that samples of a thousand or more individuals would have to be genotyped in this way to accumulate enough data to be statistically useful. At the time, such genome-wide association (GWA) studies would have been prohibitively expensive. Plomin notes that he attempted such an approach to find the DNA differences associated with intelligence, with the aid of a few shortcuts. He devoted two years to the study, only to be disappointed again. It was a second false start. Not a single DNA association with intelligence could be replicated.

Then, however, a major breakthrough began to make its appearance in the form of SNP chips. According to Plomin, these could “genotype many SNPs for an individual quickly and inexpensively. SNP chips triggered the explosion of genome-wide association studies.” He saw their promise immediately, and went back to work attempting to find SNP associations with intelligence. The result? A third false start. The chips available at the time were still too expensive, and could identify too few SNPs. Many other similar GWA studies failed miserably as well. Eventually, one did succeed, but there was a cloud within the silver lining. The effect sizes of the SNP associations found were all extremely small. Then things began to snowball. Chips were developed that could identify hundreds of thousands instead of just tens of thousands of SNPs, and sample sizes in the tens of thousands became feasible. Today, sample sizes can be in the hundreds of thousands. As a result of all this, revolutionary advances have been made in just the past few years. Numerous genome-wide significant hits have been found for a host of psychological traits. And now we know the reason why the initial studies were so disappointing. In Plomin’s words,

For complex traits, no genes have been found that account for 5 per cent of the variance, not even 0.5 per cent of the variance. The average effect sizes are in the order of 0.01 per cent of the variance, which means that thousands of SNP associations will be needed to account for heritabilities of 50 per cent… Thinking about so many SNPs with such small effects was a big jump from where we started twenty-five years ago. We now know for certain that heritability is caused by thousands of associations of incredibly small effect. Nonetheless, aggregating these associations in polygenic scores that combine the effects of tens of thousands of SNPs makes it possible to predict psychological traits such as depression, schizophrenia and school achievement.
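The arithmetic behind those sample sizes can be sketched with a back-of-the-envelope power calculation. The snippet below is my own illustration, not from the book: it uses the standard Fisher z-transform approximation to estimate roughly how many subjects are needed to detect, with 80 per cent power at the conventional 5 per cent significance level, an association explaining a given fraction of the variance. The `required_n` helper and its thresholds are assumptions for illustration only.

```python
import math

def required_n(r2, z_alpha=1.959964, z_beta=0.841621):
    """Approximate sample size needed to detect a correlation that
    explains a fraction r2 of the variance, using the Fisher
    z-transform rule of thumb: N ~ ((z_alpha + z_beta)/atanh(r))^2 + 3.
    z_alpha and z_beta default to 5% two-sided significance, 80% power."""
    r = math.sqrt(r2)
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

# A gene explaining 5 per cent of the variance: detectable with a
# sample in the low hundreds, just as Plomin's early optimists assumed.
print(required_n(0.05))

# A SNP explaining 0.01 per cent of the variance: a sample in the
# tens of thousands is needed before it can even be seen.
print(required_n(0.0001))
```

Under this rough approximation, the 5-per-cent effect needs only a couple of hundred subjects, while a 0.01-per-cent effect needs tens of thousands, which is why GWA studies only began to succeed once samples of that size became affordable.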

In short, we now have a tool that, as I write this, is rapidly increasing in power, and that enables falsifiable predictions regarding many psychological traits based on DNA alone. As Plomin puts it,

The DNA revolution matters much more than merely replicating results from twin and adoption studies. It is a game-changer for science and society. For the first time, inherited DNA differences across our entire genome of billions of DNA sequences can be used to predict psychological strengths and weaknesses for individuals, called personal genomics.
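Mechanically, a polygenic score of the kind Plomin describes is just a weighted sum: each person’s count of effect alleles (0, 1, or 2) at each SNP, multiplied by that SNP’s estimated effect size, summed across all SNPs. Here is a minimal sketch with simulated data; every number in it is invented for illustration, whereas real scores are built from published GWA summary statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 1_000, 10_000

# Hypothetical per-SNP effect sizes ("betas") from a GWA study;
# each one is tiny, mirroring the minuscule effects Plomin describes.
betas = rng.normal(loc=0.0, scale=0.01, size=n_snps)

# Simulated genotypes: each person carries 0, 1, or 2 copies of the
# effect allele at each SNP.
genotypes = rng.integers(0, 3, size=(n_people, n_snps))

# A polygenic score is the weighted sum of allele counts across SNPs.
polygenic_scores = genotypes @ betas

print(polygenic_scores.shape)  # (1000,) -- one score per person
```

Predictions then come from ranking individuals by these scores: no single SNP matters much, but tens of thousands of tiny weights add up to a usable signal.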

As an appreciable side benefit, thanks to this revolution we can now officially declare the Blank Slate stone cold dead. It’s noteworthy that this revolutionary advance in our knowledge of the heritable aspects of our behavior did not happen in the field of evolutionary psychology, as one might expect. Diehard Blank Slaters have been directing their ire in that direction for some time. They could have saved themselves the trouble. While the evolutionary psychologists have been amusing themselves inventing inconsequential just-so stories about the more abstruse aspects of our sexual behavior, a fifth column that germinated long ago in the field of behavioral genetics was about to drive the decisive nail into the Blank Slate’s coffin. Obviously, it would have been an inappropriate distraction for Plomin to expand on the fascinating history behind this development in Blueprint. Read between the lines, though, and it’s quite clear that he knows what’s been going on.

It turns out that the behavioral geneticists became astute at dodging the baleful attention of the high priests of the Blank Slate, flying just beneath their radar, at a very early date. A useful source document recounting some of that history, entitled Origins of Behavior Genetics: The Role of The Jackson Laboratory, was published in 2009 by Donald Dewsbury, emeritus professor of psychology at the University of Florida. He notes that,

A new field can be established and coalesce around a book that takes loosely evolving material and organizes it into a single volume. Examples include Watson’s (1914) Behavior: An Introduction to Comparative Psychology and Wilson’s (1975) Sociobiology. It is generally agreed that Fuller and Thompson’s 1960 Behavior Genetics served a similar function in establishing behavior genetics as a separate field.

However, research on the effects of genes on behavior had begun much earlier. According to the paper, in the 1930s, when the Blank Slate already had a firm grip on the behavioral sciences, Harvard alumnus Alan Gregg, who was Director of the Medical Sciences Division of the Rockefeller Foundation,

…developed a program of “psychobiology” or “mental hygiene” at the Foundation. Gregg viewed mental illness as a fundamental problem in society and believed that there were strong genetic influences. There was a firm belief that the principles to be discovered in nonhuman animals would generalize to humans. Thus, fundamental problems of human behavior might be more conveniently and effectively studied in other species.

The focus on animals turned out to be a very wise decision. For many years it enabled the behavioral geneticists to carry on their work while taking little flak from the high priests of the Blank Slate, whose ire was concentrated on scientists who were less discreet about their interest in humans, in fields such as ethology. Eventually Gregg teamed up with Clarence Little, head of the Jackson Laboratory in Bar Harbor, Maine, and established a program to study mice, rabbits, guinea pigs, and, especially, dogs. Gregg wrote papers about selective breeding of dogs for high intelligence and good disposition. However, as his colleagues were aware, another of his goals “was conclusively to demonstrate a high heritability of human intelligence.”

Fast forward to the 1960s. It was a decade in which the Blank Slate hegemony began to slowly crumble under the hammer blows of the likes of Konrad Lorenz, Niko Tinbergen, Robert Trivers, Irenäus Eibl-Eibesfeldt, and especially the outsider and “mere playwright” Robert Ardrey. In 1967 the Institute for Behavioral Genetics (IBG) was established at the University of Colorado by Prof. Jerry McClearn with his colleagues Kurt Schlesinger and Jim Wilson. In the beginning, McClearn et al. were a bit coy, conducting “harmless” research on the behavior of mice, but by the early 1970s they had begun to publish papers that were explicitly about human behavior. It finally dawned on the Blank Slaters what they were up to, and they were subjected to the usual “scientific” accusations of fascism, Nazism, and serving as running dogs of the bourgeoisie, but by then it was too late. The Blank Slate had already become a laughing stock among lay people who were able to read and had an ounce of common sense. Only the “experts” in the behavioral sciences would be rash enough to continue futile attempts to breathe life back into the corpse.

Would that some competent historian could reconstruct what was going through the minds of McClearn and the rest when they made their bold and potentially career-ending decision to defy the Blank Slate and establish the IBG. I believe Jim Wilson is still alive, and he no doubt could tell some wonderful stories about this nascent insurgency. In any case, in 1974 Robert Plomin made the very bold decision, for a young professor, to join the Institute. One of the results of that fortuitous decision was the superb book that is the subject of this post. As noted above, a digression into the Blank Slate affair would only have been a distraction from the truly revolutionary developments revealed in his book. However, there is no question that he was perfectly well aware of what had been going on in the “behavioral sciences” for many years. Consider, for example, the following passage, about why research results in behavioral genetics are so robust and replicate so strongly:

Another reason seems paradoxical: behavioral genetics has been the most controversial topic in psychology during the twentieth century. The controversy and conflict surrounding behavioral genetics raised the bar for the quality and quantity of research needed to convince people of the importance of genetics. This has had the positive effect of motivating bigger and better studies. A single study was not enough. Robust replication across studies tipped the balance of opinion.

As the Germans say, “Was mich nicht umbringt, macht mich stärker” (What doesn’t kill me makes me stronger). If you were looking for a silver lining to the Blank Slate, there you have it. What more can I say? The book is a short 188 pages, but in those pages is concentrated a wealth of knowledge bearing on the critical need of our species to understand itself. If you would know yourself, then by all means, buy the book.

Morality and Reason – Why Do We Do the Things We Do?

Consider the evolution of life from the very beginning. Why did the first stirrings of life – molecules that could reproduce themselves – do what they did? The answer is simple – chemistry. As life forms became more complex, they eventually acquired the ability to exploit external sources of energy, such as the sun or thermal vents, to survive and reproduce. They improved the odds of survival even further by acquiring the ability to move towards or away from such resources. One could easily program a machine to perform such simple tasks. Eventually these nascent life forms increased the odds that they would survive and reproduce even further by acquiring the ability to extract energy from other life forms. Those other life forms could only survive by virtue of acquiring mechanisms to defend themselves from such attacks. This process of refining the traits necessary to survive continues to this day. We refer to it as natural selection. Survival tools of astounding complexity have evolved in this way, such as the human brain, with its ability to evoke consciousness of such things as the information received from our sense organs, drives such as thirst, hunger, and sexual desire, and our emotional responses to, for example, our own behavior and the behavior of others. Being conscious of these things, we can also reason about them, considering how best to satisfy our appetites for food, water, sex, etc., and how to interpret the emotions we experience as we interact with others of our species.

A salient feature of all these traits, from simple to complex, is the reason they exist to begin with. They exist because at the time and in the environment in which they evolved, they enhanced the odds that we would survive, or at least they did to the extent that they were relevant to our survival at all. They exist for no other reason. Our emotions and predispositions to behave in some ways and not others are certainly no exception. They are innate, in the sense that their existence depends on genetic programming. Thanks to natural selection, we also possess consciousness and the ability to reason. As a result, we can reason about what these emotions and predispositions mean, and how we should respond to them. They are not rigid instincts, and they do not “genetically determine” our behavior. In the case of a subset of them, we refer to the outcome of this process of reasoning about and seeking to interpret them as morality. It is these emotions and predispositions that are the root cause for the existence of morality. Without them, morality as we know it would not exist. They exist by virtue of natural selection. At some time and in some environment, they promoted our survival and reproduction. It can hardly be assumed that they will accomplish the same result at a later date and in a different environment. In fact, it is quite apparent that in the drastically different environment we live in today, they often accomplish the opposite. For a sizable subset of the human population, morality has become maladaptive.

The remarkable success of our species in expanding from a small cohort of African apes to cover virtually the entire planet is due in large part to our ability to deal with rapid changes in the environment. We can thrive in the tropics or the arctic, and in deserts or rain forests. However, when it comes to morality, we face a very fundamental problem in dealing with such radical changes. Our brain spawns illusions that make it extremely difficult for us to grasp the nature of the problem we are dealing with. We perceive Good, Evil, Rights, etc., as real, objective things. These illusions are extremely powerful, because by being powerful they could most effectively regulate our behavior in ways that promoted survival. Now, in many cases, the illusions have become a threat to our survival, but we can’t shake them, or see them for what they really are. What they are is subjective constructs that are completely incapable of existing independently outside of the minds of individuals. Even those few who claim to see through the illusion are found defending various “Goods,” “Evils,” “Rights,” “Duties,” and other “Oughts” in the very next breath as if they were referring to real, objective things. They often do so in support of behaviors that are palpably maladaptive, if not suicidal.

An interesting feature of such maladaptive behaviors is the common claim that they are justified by “reason.” Almost 300 years ago, the Scotch-Irish philosopher Francis Hutcheson explained very convincingly why moral claims can’t be based on reason alone. As David Hume put it somewhat later, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Reason alone can never do anything but chase its own tail. After all, computers don’t program themselves. There must be something to reason about. In the case of human behavior, the chain of reasons can be as long and as elaborate as you please, but it must always and invariably originate in an innate predisposition or drive, whether it be hunger, thirst, lust, or what is occasionally referred to as our “moral sense.” Understood in that way, all of our actions are “unreasonable,” because reason can never, ever serve as the cause of our actions itself. Reasoning about good and evil is equivalent to reasoning about the nature of God. In both cases one is reasoning about imaginary things. Behavior can never be objectively good or evil, because those categories exist only as illusions. It can, however, be objectively described as adaptive or maladaptive, depending on whether it enhances the odds of genetic survival or not.

In the case of morality, maladaptive behavior is seldom limited to a single individual. Morality is always other-regarding. The illusion that Good, Evil, etc., exist as independent, objective things implies that, not just we ourselves, but everyone else “ought” to behave in ways that embrace the “Good,” and resist “Evil.” As a result we assume a “right” to dictate potentially maladaptive and/or suicidal behavior to others. If we are good at manipulating the relevant emotions, those others may quite possibly agree with us. If we can convince them to believe our version of the illusion, they may accept our reasoning about what our moral emotions are “really” trying to tell us, and become convinced that they must act in ways detrimental to their own survival as well. They may clearly see that they are being induced to behave in a way that is not to their advantage, but the illusion would tend to paralyze any attempt to behave differently. The only means of resistance would be to manipulate the moral sense so as to evoke different illusions of what good and evil “really” are.

If, as noted above, there is nothing objectively good or evil about anything, it follows that there is nothing objectively good or evil about any of these behaviors. They are simply biological facts that happen to be observable at a given time and in a given environment. However, whatever one seeks to accomplish in life, one will be more likely to succeed by basing one’s actions on facts rather than illusions. That applies to the illusions associated with our moral sense as much as to any others. The vast majority of us, including myself, have an almost overwhelming sense that the illusions are real, and that good and evil are objective things. However, it is becoming increasingly dangerous, if not suicidal, to continue to cling to these illusions, assuming one places any value on survival.

Most of us have goals in life. In most cases those goals are based on illusions such as those described above. Human beings tend to stumble blindly through life, without a clue about the fundamental reasons they behave the way they do. Occasionally one sees them jumping off cliffs, stridently insisting that others must jump off the cliff too, because it is “good,” or it is their “duty.” Perhaps Socrates had such behavior in mind when he muttered, “The unexamined life is not worth living” at his trial. Before jumping off a cliff, would it not be wise to closely examine your reasons for doing so, following those reasons to their emotional source, and considering why those emotions exist to begin with? I, too, have goals. Paramount among my personal goals are survival and reproduction. There is nothing intrinsically or objectively better about those goals than anyone else’s, including the goal of jumping off a cliff. I have them because I perceive them to be in harmony with the reasons I exist to begin with. Those who do not wish to survive and reproduce appear to me to be sick and dysfunctional biological units. I do not care to be such a unit. As corollary goals, I wish for the continued evolution of my species to become ever more capable of survival, and beyond that for the continued existence of biological life in general. I have no basis for claiming that my goals are “correct,” or that the goals of others are “wrong.” Mine are just as much expressions of emotion as anyone else’s. Call them whims, if you will, but at least they have the virtue of being whims that aren’t self-destructive.

Supposing you have similar goals, I suggest that it would behoove you to shed the illusion of objective morality. That is by no means the same thing as dispensing with morality entirely, nor does it imply that you can’t treat a version of morality you deem conducive to your survival as an absolute. In other words, it doesn’t imply “moral relativism.” It is our nature to perceive whatever version of morality we happen to favor as absolute. Understanding why that is our nature will not result in moral nihilism, but it will have the happy effect of pulling the rug out from under the feet of the moralistic bullies who have always assumed a right to dictate behavior to the rest of us. To understand morality is to realize that the “moral high ground” they imagine they’re standing on doesn’t exist.

It is unlikely that any of us will be able to resist or significantly influence the massive shifts in population, ideology and the other radical changes to the world we live in that are happening at an ever increasing rate merely by virtue of the fact that we recognize morality and the illusions of objective good and evil associated with it for what they really are. However, it seems to me that recognizing the truth will at least enhance our ability to cope with those changes. In other words, it will help us survive, and, after all, survival is the reason that morality exists to begin with.

How a “Study” Repaired History and the Evolutionary Psychologists Lived Happily Ever After

It’s a bit of a stretch to claim that those who have asserted the existence and importance of human nature have never been the victims of ideological bias. If that claim were true, then the Blank Slate debacle could never have happened. However, we know that it happened, based not only on the testimony of those who saw it for the ideologically motivated debasement of science that it was, such as Steven Pinker and Carl Degler, but also on that of the ideological zealots responsible for it themselves, such as Hamilton Cravens, who portrayed it as The Triumph of Evolution. The idea that the Blank Slaters were “unbiased” is absurd on the face of it, and can be immediately debunked by simply counting the number of times they accused their opponents of being “racists,” “fascists,” etc., in books such as Richard Lewontin’s Not in Our Genes and Ashley Montagu’s Man and Aggression. More recently, the discipline of evolutionary psychology has experienced many similar attacks, as detailed, for example, by Robert Kurzban in an article entitled, Alas poor evolutionary psychology.

The reasons for this bias have never been a mystery, either to the Blank Slaters and their latter-day leftist descendants, or to evolutionary psychologists and other proponents of the importance of human nature. Leftist ideology requires not only that human beings be equal before the law, but that the menagerie of human identity groups they have become obsessed with over the years actually be equal in intelligence, creativity, degree of “civilization,” and every other conceivable measure of human achievement. On top of that, human beings must be “malleable” and “plastic,” and therefore perfectly adaptable to whatever revolutionary rearrangement of society happens to be in fashion. The existence and importance of human nature has always been perceived as a threat to all these romantic mirages, as indeed it is. Hence the obvious and seemingly indisputable bias.

Enter Jeffrey Winking of the Department of Anthropology at Texas A&M, who assures us that it’s all a big mistake, and there’s really no bias at all! Not only that, but he “proves” it with a “study” in a paper entitled Exploring the Great Schism in the Social Sciences, which recently appeared in the journal Evolutionary Psychology. We must assume that, in spite of his background in anthropology, Winking has never heard of a man named Napoleon Chagnon, or run across an article entitled Darkness’s Descent on the American Anthropological Association, by Alice Dreger.

Winking begins his article by noting that “The nature-nurture debate is one that biologists often dismiss as a false dichotomy,” but adds, “However, such dismissiveness belies the long-standing debate that is unmistakable throughout the biological and social sciences concerning the role of biological influences in the development of psychological and behavioral traits in humans.” I agree entirely. One can’t simply hand-wave away the Blank Slate affair and a century of bitter ideological debate by turning up one’s nose and asserting the term isn’t helpful from a purely scientific point of view.

We also find that Winking isn’t completely oblivious to examples of bias on the “nature” side of the debate. He cites the Harvard study group which “evaluated the merits of sociobiology, and which included intellectual giants like Stephen Jay Gould and Richard Lewontin.” I am content to let history judge whether Gould and Lewontin were really “intellectual giants.” Regardless, if Winking actually read these “evaluations,” he cannot have failed to notice that they contained vicious ad hominem attacks on E. O. Wilson and others that it is extremely difficult to construe as anything but biased. Winking goes on to note similar instances of bias by other authors in various disciplines, such as,

Many researchers use [evolutionary approaches to the study of international relations] to justify the status quo in the guise of science.

The totality [of sociobiology and evolutionary psychology] is a myth of origin that is compelling precisely because it resonates strongly with Euro American presuppositions about the nature of the world.

…in the social sciences (with the exception of primatology and psychology) sociobiology appeals most to right-wing social scientists.

These are certainly compelling examples of bias. Now, however, Winking attempts to demonstrate that those who point out the bias, and correctly interpret the reasons for it, are just as biased themselves. As he puts it,

Conversely, those who favor biological approaches have argued that those on the other side are rendered incapable of objective assessment by their ideological promotion of equality. They are alleged to erroneously reject evidence of biological influences because such evidence suggests that social outcomes are partially explained by biology, and this might inhibit the realization of equality. Their critiques of biological approaches are therefore often blithely dismissed as examples of the moralistic/naturalistic fallacy. This line of reason is exemplified in the quote by biologist Jerry Coyne

If you can read the [major Evolutionary Psychology review paper] and still dismiss the entire field as worthless, or as a mere attempt to justify scientists’ social prejudices, then I’d suggest your opinions are based more on ideology than judicious scientific inquiry.

I can’t imagine what Winking finds “blithe” about that statement! Is it really “blithe” to so much as suggest that people who dismiss entire fields of science as worthless may be ideologically motivated? I note in passing that Coyne must have thought long and hard about that statement, because his Ph.D. advisor was none other than Richard Lewontin, whom he still honors and admires! Add to that the fact that Coyne is about as far as you can imagine from “right wing,” as anyone can see by simply visiting his Why Evolution is True website, and the notion that he is being “blithe” here is ludicrous. Winking’s other examples of “blitheness” are similarly dubious, including,

For critics, the heart of the intellectual problem remains an ideological adherence to the increasingly implausible view that human behavior is strictly determined by socialization… Should [social] hierarchies result strictly from culture, then the possibilities for an egalitarian future were seen to be as open and boundless as our ever-malleable brains might imagine.

Like the Church, a number of contemporary thinkers have also grounded their moral and political views in scientific assumptions about… human nature, specifically that there isn’t one.

Unlike the “comparable” statements by the Blank Slaters, these statements do not accuse those who deny the existence of human nature of being Nazis, nor is evidence lacking to back them up. On the contrary, one could cite a mountain of such evidence supplied by the Blank Slaters themselves. Winking soon supplies us with the reason for this strained attempt to establish “moral equivalence” between “nature” and “nurture.” It appears in his “hypothesis,” as follows:

It is entirely possible that confirmation bias plays no role in driving disagreement and that the overarching debate in academia is driven by sincere disagreements concerning the inferential value of the research designs informing the debate.

Wait a minute! Don’t roll your eyes like that! Winking has a “study” to back up this hypothesis. Let me explain it to you. He invented some “mock results” of studies which purported to establish, for example, the increased prevalence of an allele associated with “appetitive aggression” in populations with African ancestry. Subtle, no? Then he used Mechanical Turk and social media to come up with a sample of 365 people with master’s degrees or Ph.D.s for a survey on what they thought of the “inferential power” of the fake data. Another sample of 71 was scraped together for another survey on “research design.” In the larger sample, 307 described themselves as either only “somewhat” on the “nature” side, or “somewhat” on the “nurture” side. Only 57 claimed they leaned strongly one way or the other. The triumphant results of the study included, for example, that,

Participants perceptions of inferential value did not vary by the degree to which results supported a particular ideology, suggesting that ideological confirmation bias is not affecting participant perceptions of inferential value.

Seriously? Even the author admits that the statistical power of his “study” is low because of the small sample sizes. However, statistical power only applies where the samples are truly random, meaning, in this case, where the participants are unequivocally on either the “nature” or the “nurture” side. That is hardly the case. Mechanical Turk samples, for example, are biased towards a younger and more liberal demographic. Most of the participants were on the fence between nature and nurture. In other words, there’s no telling what their true opinions were even if they were honest about them. Even the most extreme Blank Slaters admitted that nature plays a significant role in such bodily functions as urinating, defecating, and breathing, and so could easily have described themselves as “somewhat bioist.” Perhaps most importantly, any high school student could easily have seen what this “study” was about. There is no doubt whatsoever that holders of master’s and doctoral degrees in related disciplines a) had no trouble inferring what the study was about, and b) had an interest in making sure that the results demonstrated that they were “unbiased.” In other words, we’re not exactly talking “double blind” here.

I think the author was well aware that most readers would have no trouble detecting the blatant shortcomings of his “study.”  Apparently, to ward off ridicule, he wrote,

Regardless of one’s position, it is important to remind scholars that if they believe a group of intelligent and informed academics could be so unknowingly blinded by ideology that they wholeheartedly subscribe to an unquestionably erroneous interpretation of an entire body of research, then they must acknowledge they themselves are equally as capable of being so misguided.

Kind of reminds you of the curse over King Tut’s tomb, doesn’t it?  “May those who question my study be damned to dwell among the misguided forever!”  Sorry, my dear Winking, but “a group of intelligent and informed academics” not only could, but were “so unknowingly blinded by ideology that they wholeheartedly subscribed to an unquestionably erroneous interpretation of an entire body of research.”  It was called the Blank Slate, and it derailed the behavioral sciences for more than half a century.  That’s what Pinker’s book was about.  That’s what Degler’s book was about, and yes, that’s even what Cravens’ book was about.  They all did an excellent job of documenting the debacle.  I suggest you read them.

Or not.  You could decide to believe your study instead.  I have to admit, it would have its advantages.  History would be “fixed,” the lions would lie down with the lambs, and the evolutionary psychologists would live happily ever after.

Of Ingroups and Outgroups and the Hatreds they Spawn

Did it ever strike you as odd that the end result of Communism, a philosophy that was supposed to usher in a paradise of human brotherhood, was the death of 100 million people, give or take, and the self-decapitation of countries like Cambodia and the former Soviet Union?  Does it seem counter-intuitive that the adherents of a religion that teaches “blessed are the peacemakers” should have launched wars that killed tens of millions?  Is it bewildering that another one, promoted as the “religion of peace,” should have launched its zealots out of Arabia, killing millions more, and becoming the most successful vehicle of colonialism and imperialism ever heard of?  Do you find the theory that human warfare resulted from purely environmental influences that were the unfortunate outcome of the transition to Neolithic economies somewhat implausible?  In fact, all of these “anomalies” are predictable manifestations of what is perhaps both the most important and the most dangerous aspect of innate human behavior: our tendency to perceive others in terms of ingroups and outgroups.

Our tendency to associate the good with our ingroup, and all that is evil, disgusting and contemptible with outgroups, is a most inconvenient truth for moral philosophy.  You might call it the universal solvent of all moral systems concocted to date.  It is a barrier standing in the way of all attempts to manipulate human moral emotions, to force them to serve a “higher purpose,” or to cajole them into promoting the goal of “human flourishing.”  Because it is such an inconvenient truth it was vehemently denied as one aspect of the Blank Slate catastrophe.  Attempts were made to scare it away by calling it bad names.  Its specific manifestations were branded racism, bigotry, xenophobia, and so on.  The result was something like squeezing jello.  The harder we squeezed, the faster the behavior slipped through our fingers in new forms.  New outgroups emerged to take the place of the old ones, but the hatred remained, often more virulent than before.

It is impossible to understand human behavior without first determining who are the ingroups, and who are their associated outgroups.  Consider, for example, recent political events in the United States.  Wherever one looks, whether in news media, social media, on college campuses, or in the “jokes” of comedians, one finds manifestations of a furious hatred directed at Trump and his supporters.  There is jubilation when they are murdered in effigy on stage, or shot in reality on baseball fields.  The ideologically defined ingroup responsible for all this hatred justifies its behavior with a smokescreen of epithets, associating all sorts of “bad” qualities with its outgroup, following a pattern that should be familiar to anyone who has studied a little history.  In fact, their hate is neither rational, nor does it result from any of these “bad” things.  They hate for the same reason that humans have always hated; because they have identified Trump and his supporters as an outgroup.

Going back several decades, one can see the same phenomenon unfolding under the rubric of the Watergate Affair.  In that case, of course, Nixon and his supporters were the outgroup, and the ingroup can be more specifically identified with the “mainstream media” of the day.  According to the commonly peddled narrative, Nixon was a very bad man who committed very terrible crimes.  I doubt it, but it doesn’t matter one way or the other.  Nixon was deposed in what we are informed was a “triumph of justice” by some heroic reporters.  In fact, it was a successful coup d’état carried out behind a façade of legality.  The idea that what Nixon did or didn’t do had anything to do with it can be immediately exposed as a fiction by anyone who is aware of the type of human behavior described in this post, and who bothers to read through the front pages of the Washington Post and the New York Times during the 18 months or so the affair lasted.  There he will not find a conscientious attempt to keep readers informed about affairs in the world that might be important to them.  Rather, he will see an unrelenting obsession with Watergate, inexplicable as other than the manifestation of a deep hatred.  The result was a dangerous destabilization of the U.S. government, leading to further attempts to depose legitimately elected Presidents, as we saw in the case of Clinton, and as we now see underway in the case of Trump.  In Nixon’s day the mainstream media controlled the narrative.  They were able to fob off their coup d’état as the triumph of virtue and justice.  That won’t happen this time around.  Now there are powerful voices on the other side, and the outcome of such a “nice and legal” coup d’état carried out against Trump will be the undermining of the trust of the American people in the legitimacy of their political system at best.  At worst, some are suggesting we will find ourselves in the middle of a civil war.

Those still inclined to believe that the behavior in question really can be explained by the rationalizations used to justify it need only look a bit further back in history.  There they will find descriptions of exactly the same behavior, but rationalized in ways that appear incomprehensible and absurd to modern readers.  For example, read through the accounts of the various heresies that afflicted Christianity over the years.  Few Christians today could correctly identify the “orthodox” number of persons, natures, and wills of the Godhead, or the “orthodox” doctrines regarding the form of Communion or the efficacy of faith, and yet such issues have spawned ingroup/outgroup identification accompanied by the usual hatreds, resulting in numerous orgies of mass murder and warfare.

I certainly don’t mean to claim that issues and how they are decided never matter in themselves.  However, when it comes to human behavior, their role often becomes a mere pretext, a façade used to rationalize hatred that is actually a manifestation of innate emotional predispositions.  Read the comments following articles about politics and you will get the impression that half the population wakes up in the morning determined to deliberately commit as many bad deeds as they possibly can, and the other half is heroically struggling to stop them and secure the victory of the Good.  Does that really make sense?  Is it really so difficult to see that such a version of reality represents a delusion, explicable only if one accepts human nature for what it is?  Would you understand what’s going on in the world?  Then for starters you need to identify the ingroups and outgroups.  Lacking that fundamental insight, you will be stumbling in the dark.  In the dark it’s very difficult to see that you, too, are a hater, simply by virtue of the fact that you belong to the species Homo sapiens, and to understand why you hate.  Hatred is a destructive force.  It would behoove us to learn to control it no matter what our goals happen to be, but we will have a very difficult time controlling it unless we finally understand why it exists.

More Whimsical History of the Blank Slate

As George Orwell wrote in 1984, “Who controls the past controls the future. Who controls the present controls the past.”  The history of the Blank Slate is a perfect illustration of what he meant.  You might say there are two factions in the academic ingroup: those who are deeply embarrassed by the Blank Slate, and those who are still bitterly clinging to it.  History as it actually happened is damaging to both factions, so they’ve both created imaginary versions that support their preferred narratives.  At this point the “official” histories have become hopelessly muddled.  I recently ran across an example of how this affects younger academics who are trying to make sense of what’s going on in their own fields in an article entitled “Sociology’s Stagnation” at the Quillette website.  It was written by Brian Boutwell, Associate Professor of Criminology and Criminal Justice at St. Louis University.

Boutwell cites an article published back in 1990 by sociologist Pierre van den Berghe excoriating the practitioners in his own specialty.  Van den Berghe was one of those rare sociologists who insisted on the relevance of evolved behavioral traits to his field.  He did not mince words.  Boutwell quotes several passages from the article, including the following:

Such a theoretical potpourri is premised on the belief that, in the absence of a powerful simplifying idea, all ideas are potentially good, especially if they are turgidly presented, logically opaque, and empirically irrefutable. This sorry state of theoretical affairs in sociology is probably the clearest evidence of the discipline’s intellectual bankruptcy. But let my colleagues rest assured: intellectual bankruptcy never spelled the doom of an academic discipline. Those within it are professionally deformed not to recognize it, and those outside of it could not care less. Sociology is safe for at least a few more decades.

In response, Boutwell writes,

Intellectually bankrupt? Those are strong words. Can a field survive like this? It can, and it has. Hundreds of new sociology PhDs are minted every year across the country (not to mention the undergraduate and graduate degrees that are conferred as well). How many students were taught that human beings evolved about around 150,000 years ago in Africa? How many know what a gene is? How many can describe Mendel’s laws, or sexual selection? The answer is very few. And, what is worse, many sociologists do not think this ignorance matters.

In other words, Boutwell thinks the prevailing malaise in Sociology continues because sociologists don’t know about Darwin.  He may be right in some cases, but that’s not really the problem.  The problem is that the Blank Slate still prevails in sociology.  It is probably the most opaque of all the behavioral “sciences.”  In fact, it is just an ideological narrative pretending to be a science, just as psychology was back in the day when van den Berghe wrote his article.  Psychologists deal with individuals.  As a result they have to look at behavior a lot closer to the source of what motivates it.  As most reasonably intelligent lay people have been aware for millennia, it is motivated by human nature.  By the end of the 90’s, naturalists, neuroscientists, and evolutionary psychologists had heaped up such piles of evidence supporting that fundamental fact that psychologists who tried to prop up the threadbare shibboleths of the Blank Slate ran the risk of becoming laughing stocks.  By 2000 most of them had thrown in the towel.  Not so the sociologists.  They deal with masses of human beings.  It was much easier for them to insulate themselves from the truth by throwing up a smokescreen of “culture.”  They’ve been masturbating with statistics ever since.

Boutwell thinks the solution is for them to learn some evolutionary biology.  I’m not sure which version of the “history” gave him that idea.  However, if he knew how the Blank Slate really went down, he might change his mind.  Evolutionary biologists and scientists in related fields were part of the heart and soul of the Blank Slate orthodoxy.  They knew all about genes, Mendel’s laws, and sexual selection, but it didn’t help.  Darwin?  They simply redacted those parts of his work that affirmed the relationship between natural selection, human nature in general, and morality in particular.  No matter that Darwin himself was perfectly well aware of the connections.  For these “scientists,” an ideological narrative trumped scientific integrity until the mass of evidence finally rendered the narrative untenable.

Of course, one could always claim that I’m just supporting an ideological narrative of my own.  Unfortunately, that claim would have to explain away a great deal of source material, and because the events in question are so recent, the source material is still abundant and easily accessible.  If Prof. Boutwell were to consult it he would find that evolutionary biologists like Stephen Jay Gould, geneticists like Richard Lewontin, and many others like them considered the Blank Slate the very “triumph of evolution.”  I suggest that anyone with doubts on that score have a look at a book that bears that title by scientific historian Hamilton Cravens published in 1978 during the very heyday of the Blank Slate.  It is very well researched, cites scores of evolutionary biologists, geneticists, and behavioral scientists, and concludes that all the work of these people who were perfectly familiar with Darwin culminated in the triumphant establishment of the Blank Slate as “scientific truth,” or, as announced by the title of his book, “The Triumph of Evolution.”  His final paragraph gives a broad hint about how something so ridiculous could ever have been accepted as an unquestionable dogma.  It reads,

The long-range, historical function of the new evolutionary science was to resolve the basic questions about human nature in a secular and scientific way, and thus provide the possibilities for social order and control in an entirely new kind of society.  Apparently this was a most successful and enduring campaign in American culture.

Here, unbeknownst to himself, Cravens hit the nail on the head.  Social control was exactly what the Blank Slate was all about.  It was essential that the ideal denizens of the future utopias that the Blank Slaters had in mind for us have enough “malleability” and “plasticity” to play their assigned parts.  “Human nature” in the form of genetically transmitted behavioral predispositions would only gum things up.  They had to go, and go they did.  Ideology trumped and derailed science, and kept it derailed for more than half a century.  As Boutwell has noticed, it remains derailed in sociology and a few other specialties that have managed to develop similarly powerful allergic reactions to the real world.  Reading Darwin isn’t likely to help a bit.

One of the best books on the genesis of the Blank Slate is In Search of Human Nature, by Carl Degler.  It was published in 1991, well after the grip of the Blank Slate on the behavioral sciences had begun to loosen, and presents a somewhat more sober and realistic portrayal of the affair than Cravens’ triumphalist account.  Among other things it explains how the Blank Slate got its start.  As portrayed by Degler, in the beginning it hadn’t yet become such a blatant tool for social control.  One could better describe it as an artifact of idealistic cravings.  Then, as now, one of the most important of these was the desire for human equality, not only under the law, but in a much more real, physical sense, among both races and individuals.  If human nature existed and was important, then such equality was out of the question.  Perfect equality was only possible if every human mind started out as a Blank Slate.

Degler cites the work of several individuals as examples of this nexus between the ideal of equality and the Blank Slate, but I will focus on one in particular: John B. Watson, the founder of behaviorism.  One of the commenters to an earlier post suggested that the behaviorists weren’t Blank Slaters.  I think that he, too, is suffering from historical myopia.  Again, it’s always useful to look at the source material for yourself.  In his book, Behaviorism, published in 1924, Watson notes that all human beings breathe, sneeze, have hearts that beat, etc., but have no inherited traits that might reasonably be described as human nature.  In those days, psychologists like William James referred to hereditary behavioral traits as “instincts.”  According to Watson,

In this relatively simple list of human responses there is none corresponding to what is called an “instinct” by present-day psychologists and biologists.  There are then for us no instincts – we no longer need the term in psychology.  Everything we have been in the habit of calling an “instinct” today is the result largely of training – belongs to man’s learned behavior.

A bit later on he writes,

The behaviorist recognizes no such things as mental traits, dispositions or tendencies.  Hence, for him, there is no use in raising the question of the inheritance of talent in its old form.

In case we’re still in doubt about his Blank Slate bona fides, a few pages later he adds,

I should like to go one step further now and say, “Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.”  I am going beyond my facts and I admit it, but so have the advocates of the contrary and they have been doing it for many thousands of years.  Please note that when this experiment is made I am to be allowed to specify the way the children are to be brought up and the type of world they have to live in.

Here, in a nutshell, we can see the genesis of hundreds of anecdotes about learned professors dueling over the role of “nature” versus “nurture,” in venues ranging from highbrow intellectual journals to several episodes of The Three Stooges.  Watson seems to be literally pulling at our sleeves and insisting, “No, really, I’m a Blank Slater.”  Under the circumstances I’m somewhat dubious about the claim that Watson, Skinner, and the rest of the behaviorists don’t belong in that category.

What motivated Watson and others like him to begin this radical reshaping of the behavioral sciences?  I’ve already alluded to the answer above.  To make a long story short, they wanted to create a science that was “fair.”  For example, Watson was familiar with the history of the Jukes family, outlined in a study by Richard Dugdale published in 1877.  It documented unusually high levels of all kinds of criminal behavior in the family.  Dugdale himself insisted on the role of environmental as well as hereditary factors in explaining the family’s criminality, but later interpreters of his work focused on heredity alone.  Apparently Watson considered such a hereditary burden unfair.  He decided to demonstrate “scientifically” that a benign environment could have converted the entire family into model citizens.  Like many other scientists in his day, Watson abhorred the gross examples of racial discrimination in his society, as well as the crude attempts of the Social Darwinists to justify it.  He concluded that “science” must support a version of reality that banished all forms of inequality.  The road to hell is paved with good intentions.

I could go on and on about the discrepancies one can find between the “history” of the Blank Slate and source material that’s easily available to anyone willing to do a little searching.  Unfortunately, I’ve already gone on long enough for a single blog post.  Just be a little skeptical the next time you read an account of the affair in some textbook.  It ain’t necessarily so.

Morality and the Ideophobes

In our last episode I pointed out that, while some of the most noteworthy public intellectuals of the day occasionally pay lip service to the connection between morality and evolution by natural selection, they act and speak as if they believed the opposite.  If morality is an expression of evolved traits, it is necessarily subjective.  The individuals mentioned speak as if it were objective, and probably believe that it is.  What do I mean by that?  As the Finnish philosopher Edvard Westermarck put it,

The supposed objectivity of moral values, as understood in this treatise (his Ethical Relativity, ed.) implies that they have a real existence apart from any reference to a human mind, that what is said to be good or bad, right or wrong, cannot be reduced merely to what people think to be good or bad, right or wrong.  It makes morality a matter of truth and falsity, and to say that a judgment is true obviously means something different from the statement that it is thought to be true.

All of the individuals mentioned in my last post are aware that there is a connection between morality and its evolutionary roots.  If pressed, some of them will even admit the obvious consequence of this fact: that morality must be subjective.  However, neither they nor any other public intellectual that I am aware of actually behaves or speaks as if that consequence meant anything or, indeed, as if it were even true.  One can find abundant evidence of this simply by reading their own statements, some of which I quoted.  For example, according to Daniel Dennett, Trump supporters are “guilty.”  Richard Dawkins speaks of the man in pejorative terms that imply a moral judgment rather than rational analysis of his actions.  Sam Harris claims that Trump is “unethical,” and Jonathan Haidt says that he is “morally wrong,” without any qualification to the effect that they are just making subjective judgments, and that the subjective judgments of others may be different and, for that matter, just as “legitimate” as theirs.

A commenter suggested that I was merely quoting tweets, and that the statements may have been taken out of context, or would have reflected the above qualifications if more space had been allowed.  Unfortunately, I have never seen a single example of an instance where one of the quoted individuals made a similar statement, and then qualified it as suggested.  They invariably speak as if they were stating objective facts when making such moral judgments, with the implied assumption that individuals who don’t agree with them are “bad.”

A quick check of the Internet will reveal that there are legions of writers out there commenting on the subjective nature of morality.  Not a single one I am aware of seems to realize that, if morality is subjective, their moral judgments lack any objective normative power or legitimacy whatsoever when applied to others.  Indeed, one commonly finds them claiming that morality is subjective, and as a consequence one is “morally obligated” to do one thing, and “morally obligated” not to do another, in the very same article, apparently oblivious to the fact that they are stating a glaring non sequitur.

None of this should be too surprising.  We are not a particularly rational species.  We give ourselves far more credit for being “wise” than is really due.  Most of us simply react to atavistic urges, and seek to satisfy them.  Our imaginations portray Good and Evil to us as real, objective things, and so we thoughtlessly assume that they are.  It is in our nature to be judgmental, and we take great joy in applying these imagined standards to others.  Unfortunately, this willy-nilly assigning of others to the above imaginary categories is very unlikely to accomplish the same thing today as it did when the responsible behavioral predispositions evolved.  I would go further.  I would claim that this kind of behavior is no longer “adaptive.”  In fact, it has become extremely dangerous.

The source of the danger is what I call “ideophobia.”  So far, at least, it hasn’t had a commonly recognized name, but it is by far the most dangerous form of all the different flavors of “bigotry” that afflict us today.  By “bigotry” I really mean outgroup identification.  We all do it, without exception.  Some of the most dangerous manifestations of it exist in just those individuals who imagine they are immune to it.  All of us hate, despise, and are disgusted by the individuals in whatever outgroup happens to suit our fancy.  The outgroup may be defined by race, religion, ethnic group, nationality, and even sex.  I suspect, however, that by far the most common form of outgroup (and ingroup) identification today is by ideology.

Members of ideologically defined ingroups have certain ideas and beliefs in common.  Taken together, they form the intellectual shack the ingroup in question lives in.  The outgroup consists of those who disagree with these core beliefs, and especially those who define their own ingroup by opposing beliefs.  Ideophobes hate and despise such individuals.  They indulge in a form of bigotry that is all the more dangerous because it has gone so long without a name.  Occasionally they will imagine that they advocate universal human brotherhood, and “human flourishing.”  In reality, “brotherhood” is the last thing ideophobes want when it comes to “thought crime.”  They do not disagree rationally and calmly.  They hate the “other,” to the point of reacting with satisfaction and even glee if the “other” suffers physical harm.  They often imagine themselves to be great advocates of diversity, and yet are blithely unaware of the utter lack of it in the educational, media, entertainment, and other institutions they control when it comes to diversity of opinion.  As for the ideological memes of the ingroup, they expect rigid uniformity.  What Dennett, Dawkins, Harris and Haidt thought they were doing was upholding virtue.  What they were really doing was better called “virtue signaling.”  They were assuring the other members of their ingroup that they “think right” about some of its defining “correct thoughts,” and registering the appropriate allergic reaction to the outgroup.

I cannot claim that ideophobia is objectively immoral.  I do believe, however, that it is extremely dangerous, not only to me, but to everyone else on the planet.  I propose that it’s high time that we recognized the phenomenon as a manifestation of human nature that has long outlived its usefulness.  We need to recognize that ideophobia is essentially the same thing as racism, sexism, anti-Semitism, homophobia, xenophobia, or what have you.  The only difference is in the identifying characteristics of the outgroup.  The kind of behavior described is a part of what we are, and will remain a part of what we are.  That does not mean that it can’t be controlled.

What evidence do I have that this type of behavior is dangerous?  There were two outstanding examples in the 20th century.  The Communists murdered 100 million people, give or take, weighted in the direction of the most intelligent and capable members of society, because they belonged to their outgroup, commonly referred to as the “bourgeoisie.”  The Nazis murdered tens of millions of Jews, Slavs, gypsies, and members of any other ethnicity that they didn’t recognize as belonging to their own “Aryan” ingroup.  There are countless examples of similar mayhem, going back to the beginnings of recorded history, and ample evidence that the same thing was going on much earlier.  As many of the Communists and Nazis discovered, what goes around comes around.  Millions of them became victims of their own irrational hatred.

No doubt Dennett, Dawkins, Harris, Haidt and legions of others like them see themselves as paragons of morality and rationality.  I have my doubts.  With the exception of Haidt, they have made no attempt to determine why those they consider “deplorables” think the way they do, or to calmly analyze what might be their desires and goals, and to search for common ground and understanding.  As for Haidt, his declaration that the goals of his outgroup are “morally wrong” flies in the face of all the fine theories he recently discussed in his The Righteous Mind.  I would be very interested to learn how he thinks he can square this circle.  Neither he nor any of the others have given much thought to whether the predispositions that inspire their own desires and goals will accomplish the same thing now as when they evolved, and appear unconcerned about the real chance that they will accomplish the opposite.  They have not bothered to consider whether it even matters, and why, or whether the members of their outgroup may be acting a great deal more consistently in that respect than they do.  Instead, they have relegated those who disagree with them to the outgroup, slamming shut the door on rational discussion.

In short, they have chosen ideophobia.  It is a dangerous choice, and may turn out to be a disastrous one, assuming we value survival.  I personally would prefer that we all learn to understand and seek to control the worst manifestations of our dual system of morality; our tendency to recognize ingroups and outgroups and apply different standards of good and evil to individuals depending on the category to which they belong.  I doubt that anything of the sort will happen any time soon, though.  Meanwhile, we are already witnessing the first violent manifestations of this latest version of outgroup identification.  It’s hard to say how extreme it will become before the intellectual fashions change again.  Perhaps the best we can do is sit back and collect the data.

Moral Nihilism, Moral Chaos, and Moral Truth

The truth about morality is both simple and obvious.  It exists as a result of evolution by natural selection.  From that it follows that it cannot possibly have a purpose or goal, and from that it follows that one cannot make “progress” towards fulfilling that nonexistent purpose or reaching that nonexistent goal.  Simple and obvious as it is, no truth has been harder for mankind to accept.

The reason for this has to do with the nature of moral emotions themselves.  They portray Good and Evil to us as real things that exist independent of human consciousness, when in fact they are subjective artifacts of our imaginations.  That truth has always been hard for us to accept.  It is particularly hard when self-esteem is based on the illusion of moral superiority.  That illusion is obviously alive and well at a time when a large fraction of the population is capable of believing that another large fraction is “deplorable.”  The fact that the result of indulging such illusions in the past has not infrequently been mass murder suggests that, as a matter of public safety, it may be useful to stop indulging them.

The “experts on ethics” delight in concocting chilling accounts of what will happen if we do stop indulging them.  We are told that a world without objective moral truths will be a world of moral nihilism and moral chaos.  The most obvious answer to such fantasies is, “So what?”  Is the truth really irrelevant?  Are we really expected to force ourselves to believe in lies because the truth is just too scary for us to face?  Come to think of it, what, exactly, do we have now if not moral nihilism and moral chaos?

We live in a world in which every two-bit social justice warrior can invent some new “objective evil,” whether “cultural appropriation,” failure to memorize the 57 different flavors of gender, or some arcane “micro-aggression,” and work himself into a fine fit of virtuous indignation if no one takes him seriously.  The very illusion that Good and Evil are objective things is regularly exploited to justify the crude bullying that is now used to enforce new “moral laws” that have suddenly been concocted out of the ethical vacuum.  The unsuspecting owners of mom-and-pop bakeries wake up one morning to learn that they are now “deplorable,” and so “evil” that their business must be destroyed with a huge fine.

We live in a world in which hundreds of millions believe that other hundreds of millions who associate the word “begotten” with the “son of God,” or believe in the Trinity, are so evil that they will certainly burn in hell forever.  These other hundreds of millions believe that heavenly bliss will be denied to anyone who doesn’t believe in a God with these attributes.

We live in a world in which the regime in charge of the most powerful country in the world believes it has such a monopoly on the “objective Good” that it can ignore international law, send its troops to occupy parts of another sovereign state, and dictate to the internationally recognized government of that state which parts of its territory it is allowed to control, and which not.  It persists in this dubious method of defending the “Good” even though it risks launching a nuclear war in the process.  The citizens in that country who happen to support one candidate for President don’t merely consider the citizens who support the opposing candidate wrong.  They consider them objectively evil according to moral “laws” that apparently float about as insubstantial spirits, elevating themselves by their own bootstraps.

We live in a world in which evolutionary biologists, geneticists, and neuroscientists who are perfectly well aware of the evolutionary roots of morality nevertheless persist in cobbling together new moral systems that lack even so much as the threadbare semblance of a legitimate basis.  The faux legitimacy that the old religions at least had the common decency to supply in the form of imaginary gods is thrown to the winds without a thought.  In spite of that, these same scientists expect the rest of us to take them seriously when they announce that, at long last, they’ve discovered the philosopher’s stone of objective Good and Evil, whether in the form of some whimsical notion of “human flourishing,” or perhaps a slightly retouched version of utilitarianism.  In almost the same breath, they affirm the evolutionary basis of morality, and then proceed to denounce anyone who doesn’t conform to their newly minted moral “laws.”  When it comes to morality, it is hard to imagine a more nihilistic and chaotic world.

I find it hard to believe that a world in which the subjective nature and rather humble evolutionary roots of all our exalted moral systems were commonly recognized, along with the obvious implications of these fundamental truths, could possibly be even more nihilistic and chaotic than the one we already live in.  I doubt that “moral relativity” would prevail in such a world, for the simple reason that it is not in our nature to be moral relativists.  We might even be able to come up with a set of “absolute” moral rules that would be obeyed, not because humanity had deluded itself into believing they were objectively true, but because of a common determination to punish free riders and cheaters.  We might even be able to come up with some rational process for changing and adjusting the rules when necessary by common consent, rather than by the current “enlightened” process of successful bullying.

We would all be aware that even the most “exalted” and “noble” moral emotions, even those accompanied by stimulating music and rousing speeches, have a common origin: their tendency to improve the odds that the genes responsible for them would survive in a Pleistocene environment.  Under the circumstances, it would be reasonable to doubt, not only their ability to detect “objective Good” and “objective Evil,” but the wisdom of paying any attention to them at all.  Instead of swallowing the novel moral concoctions of pious charlatans without a murmur, we would begin to habitually greet them with the query, “Exactly what innate whim are you trying to satisfy?”  We would certainly be very familiar with the tendency of every one of us, described so eloquently by Jonathan Haidt in The Righteous Mind, to begin rationalizing our moral emotions as soon as we experience them, whether in response to “social injustice” or a rude driver who happened to cut us off on the way to work.  We would realize that that very tendency also exists by virtue of evolution by natural selection, not because it is actually capable of unmasking social injustice, or distinguishing “evil” from “good” drivers, but merely because it improved our chances of survival when there were no cars, and no one had ever heard of such a thing as social justice.

I know, I’m starting to ramble.  I’m imagining a utopia, but one can always dream.

G. E. Moore Contra Edvard Westermarck

Many pre-Darwinian philosophers realized that the source of human morality was to be found in innate “sentiments,” or “passions,” often speculating that they had been put there by God.  Hume put the theory on a more secular basis.  Darwin realized that the “sentiments” were there because of natural selection, and that human morality was the result of their expression in creatures with large brains.  Edvard Westermarck, perhaps at the same time the greatest and the most unrecognized moral philosopher of them all, put it all together in a coherent theory of human morality, supported by copious evidence, in his The Origin and Development of the Moral Ideas.

Westermarck is all but forgotten today, probably because his insights were so unpalatable to the various academic and professional tribes of “experts on ethics.”  They realized that, if Westermarck were right, and morality really is just the expression of evolved behavioral predispositions, they would all be out of a job.  Under the circumstances, it’s interesting that his name keeps surfacing in modern works about evolved morality, innate behavior, and evolutionary psychology.  For example, I ran across a mention of him in famous primatologist Frans de Waal’s latest book, The Bonobo and the Atheist.  People like de Waal who know something about the evolved roots of behavior are usually quick to recognize the significance of Westermarck’s work.

Be that as it may, G. E. Moore, the subject of my last post, holds a far more respected place in the pantheon of moral philosophers.  That’s to be expected, of course.  He never suggested anything as disconcerting as the claim that all the mountains of books and papers they had composed over the centuries might as well have been written about the nature of unicorns.  True, he did insist that everyone who had written about the subject of morality before him was delusional, having fallen for the naturalistic fallacy, but at least he didn’t claim that the subject they were writing about was a chimera.

Most of what I wrote about in my last post came from the pages of Moore’s Principia Ethica.  That work was published in 1903.  Nine years later he published another little book, entitled Ethics.  As it happens, Westermarck’s Origin appeared between those two dates, in 1906.  In all likelihood, Moore read Westermarck, because parts of Ethics appear to be direct responses to his book.  Moore had only a vague understanding of Darwin, and the implications of his work on the subject of human behavior.  He did, however, understand Westermarck, who wrote in the Origin,

If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.  It has been said by Bentham and others that moral principles cannot be proved because they are first principles which are used to prove everything else.  But the real reason for their being inaccessible to demonstration is that, owing to their very nature, they can never be true.  If the word “Ethics,” then, is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.

Now that got Moore’s attention.  Responding to Westermarck’s theory, or something very like it, he wrote:

Even apart from the fact that they lead to the conclusion that one and the same action is often both right and wrong, it is, I think, very important that we should realize, to begin with, that these views are false; because, if they were true, it would follow that we must take an entirely different view as to the whole nature of Ethics, so far as it is concerned with right and wrong, from what has commonly been taken by a majority of writers.  If these views were true, the whole business of Ethics, in this department, would merely consist in discovering what feelings and opinions men have actually had about different actions, and why they have had them.  A good many writers seem actually to have treated the subject as if this were all that it had to investigate.  And of course questions of this sort are not without interest, and are subjects of legitimate curiosity.  But such questions only form one special branch of Psychology or Anthropology; and most writers have certainly proceeded on the assumption that the special business of Ethics, and the questions which it has to try to answer, are something quite different from this.

Indeed they have.  The question is whether they’ve actually been doing anything worthwhile in the process.  Note the claim that Westermarck’s views were “false.”  This claim was based on what Moore called a “proof,” appearing in the preceding pages, that they couldn’t be true.  Unfortunately, this “proof” is transparently flimsy to anyone who isn’t inclined to swallow it because it defends the relevance of their “expertise.”  Quoting directly from his Ethics, it goes something like this:

1. It is absolutely impossible that any one single, absolutely particular action can ever be both right and wrong, either at the same time or at different times.
2. If the whole of what we mean to assert, when we say that an action is right, is merely that we have a particular feeling towards it, then plainly, provided only we really have this feeling, the action must really be right.
3. For if this is so, and if, when a man asserts an action to be right or wrong, he is always merely asserting that he himself has some particular feeling towards it, then it absolutely follows that one and the same action has sometimes been both right and wrong – right at one time and wrong at another, or both simultaneously.
4. But if this is so, then the theory we are considering certainly is not true.  (QED)

Note that this “proof” requires the positive assertion that it is possible to claim that an action can be right or wrong, in this case because of “feelings.”  A second, similar proof, also offered in Chapter III of Ethics, “proves” that an action can’t possibly be right merely because one “thinks” it right, either.  With that, Moore claims that he has “proved” that Westermarck, or someone with identical views, must be wrong.  The only problem with the “proof” is that Westermarck specifically pointed out in the passage quoted above that it is impossible to make truth claims about “moral principles.”  Therefore, it is out of the question that he could ever be claiming that any action “is right,” or “is wrong,” because of “feelings” or for any other reason.  In other words, Moore’s “proof” is nonsense.

The fact that Moore was responding specifically to evolutionary claims about morality is also evident in the same Chapter of Ethics.  Allow me to quote him at length.

…it is supposed that there was a time, if we go far enough back, when our ancestors did have different feelings towards different actions, being, for instance, pleased with some and displeased with others, but when they did not, as yet, judge any actions to be right or wrong; and that it was only because they transmitted these feelings, more or less modified, to their descendants, that those descendants at some later stage, began to make judgments of right and wrong; so that, in a sense, our moral judgments were developed out of mere feelings.  And I can see no objection to the supposition that this was so.  But, then, it seems also to be supposed that, if our moral judgments were developed out of feelings – if this was their origin – they must still at this moment be somehow concerned with feelings; that the developed product must resemble the germ out of which it was developed in this particular respect.  And this is an assumption for which there is, surely, no shadow of ground.

In fact, there was a “shadow of ground” when Moore wrote those words, and the “shadow” has grown a great deal longer in our own day.  Moore continues,

Thus, even those who hold that our moral judgments are merely judgments about feelings must admit that, at some point in the history of the human race, men, or their ancestors, began not merely to have feelings but to judge that they had them:  and this alone means an enormous change.

Why was this such an “enormous change?”  Why, of course, because as soon as our ancestors judged that they had feelings, those feelings could suddenly no longer be a basis for morality, because of the “proof” given above.  Moore concludes triumphantly,

And hence, the theory that moral judgments originated in feelings does not, in fact, lend any support at all to the theory that now, as developed, they can only be judgments about feelings.

If Moore’s reputation among them is any guide, such “ironclad logic” is still taken seriously by today’s crop of “experts on ethics.”  Perhaps it’s time they started paying more attention to Westermarck.

The Moral Philosophy of G. E. Moore, or Why You Don’t Need to Bother with Aristotle, Hegel, and Kant

G. E. Moore isn’t exactly a household name these days, except perhaps among philosophers.  You may have heard of his most famous concoction, though – the “naturalistic fallacy.”  If we are to believe Moore, not only Aristotle, Hegel and Kant, but virtually every other philosopher you’ve ever heard of got morality all wrong because of it.  He was the first one who ever got it right.  On top of that, his books are quite thin, and he writes in the vernacular.  When you think about it, he did us all a huge favor.  Assuming he’s right, you won’t have to struggle with Kant, whose sentences can run on for a page and a half before you finally get to the verb at the end, and who is comprehensible, even to Germans, only in English translation.  You won’t have to agonize over the correct interpretation of Hegel’s dialectic.  Moore has done all that for you.  Buy his books, which are little more than pamphlets, and you’ll be able to toss out all those thick tomes and learn all the moral philosophy you will ever need in a week or two.

Or at least you will if Moore got it right.  It all hinges on his notion of the “Good-in-itself.”  He claims it’s something like what philosophers call qualia.  Qualia are the content of our subjective experiences, like colors, smells, pain, etc.  They can’t really be defined, but only experienced.  Consider, for example, the difficulty of explaining “red” to a blind person.  Moore’s description of the Good is even more vague.  As he puts it in his rather pretentiously named Principia Ethica,

Let us, then, consider this position.  My point is that ‘good’ is a simple notion, just as ‘yellow’ is a simple notion; that, just as you cannot, by any manner of means, explain to any one who does not already know it, what yellow is, so you cannot explain what good is.

In other words, you can’t even define good.  If that isn’t slippery enough for you, try this:

They (metaphysicians) have always been much occupied, not only with that other class of natural objects which consists in mental facts, but also with the class of objects or properties of objects, which certainly do not exist in time, are not therefore parts of Nature, and which, in fact, do not exist at all.  To this class, as I have said, belongs what we mean by the adjective “good.” …What is meant by good?  This first question I have already attempted to answer.  The peculiar predicate, by reference to which the sphere of Ethics must be defined, is simple, unanalyzable, indefinable.

Or, as he puts it elsewhere, the Good doesn’t exist.  It just is.  Which brings us to the naturalistic fallacy.  If, as Moore claims, Good doesn’t exist as a natural, or even a metaphysical, object, it can’t be defined with reference to such an object.  Attempts to so define it are what he refers to as the naturalistic fallacy.  That, in his opinion, is why every other moral philosopher in history, or at least all the ones whose names happen to turn up in his books, have been wrong except him.  The fallacy is defined at Wiki and elsewhere on the web, but the best way to grasp what he means is to read his books.  For example,

The naturalistic fallacy always implies that when we think “This is good,” what we are thinking is that the thing in question bears a definite relation to some one other thing.

That fallacy, I explained, consists in the contention that good means nothing but some simple or complex notion, that can be defined in terms of natural qualities.

To hold that from any proposition asserting “Reality is of this nature” we can infer, or obtain confirmation for, any proposition asserting “This is good in itself” is to commit the naturalistic fallacy.

In short, all the head scratching of all the philosophers over thousands of years about the question of what is Good has been so much wasted effort.  Certainly, the average layman had no chance at all of understanding the subject, or at least he didn’t until the fortuitous appearance of Moore on the scene.  He didn’t show up a moment too soon, either, because, as he explains in his books, we all have “duties.”  It turns out that not only did the intuition “Good” pop up in his consciousness, more or less after the fashion of “yellow,” or the smell of a rose.  He also “intuited” that it came fully equipped with the power to dictate to other individuals what they ought and ought not to do.  Again, I’ll allow the philosopher to explain.

Our “duty,” therefore, can only be defined as that action, which will cause more good to exist in the Universe than any possible alternative… When, therefore, Ethics presumes to assert that certain ways of acting are “duties” it presumes to assert that to act in those ways will always produce the greatest possible sum of good.

But how on earth can we ever even begin to do our duty if we have no clue what Good is?  Well, Moore is actually quite coy about explaining it to us, and rightly so, as it turns out.  When he finally takes a stab at it in Chapter VI of Principia, it turns out to be paltry enough.  Basically, it’s the same “pleasure,” or “happiness” that many other philosophers have suggested, only it’s not described in such simple terms.  It must be part of what Moore describes as an “organic whole,” consisting not only of pleasure itself, for example, but also a consciousness capable of experiencing the pleasure, the requisite level of taste to really appreciate it, the emotional equipment necessary to react with the appropriate level of awe, etc.  Silly old philosophers!  They rashly assumed that, if the Good were defined as “pleasure,” it would occur to their readers that they would have to be conscious in order to experience it without them spelling it out.  Little did they suspect the coming of G. E. Moore and his naturalistic fallacy.

When he finally gets around to explaining it to us, we gather that Moore’s Good is more or less what you’d expect the intuition of Good to be in a well-bred English gentleman endowed with “good taste” around the turn of the 20th century.  His Good turns out to include nice scenery, pleasant music, and chats with other “good” people.  Or, as he put it somewhat more expansively,

We can imagine the case of a single person, enjoying throughout eternity the contemplation of scenery as beautiful, and intercourse with persons as admirable, as can be imagined.

and

By far the most valuable things which we know or can imagine, are certain states of consciousness, which may be roughly described as the pleasures of human intercourse and the enjoyment of beautiful objects.  No one, probably, who has asked himself the question, has ever doubted that personal affection and the appreciation of what is beautiful in Art or Nature, are good in themselves.

Really?  No one?  One can only surmise that Moore’s circle of acquaintance must have been quite limited.  Unsurprisingly, Beethoven’s Fifth is in the mix, but only, of course, as part of an “organic whole.”  As Moore puts it,

What value should we attribute to the proper emotion excited by hearing Beethoven’s Fifth Symphony, if that emotion were entirely unaccompanied by any consciousness, either of the notes, or of the melodic and harmonic relations between them?

It would seem, then, that even if you’re such a coarse person that you can’t appreciate Beethoven’s Fifth yourself, it is still your “duty” to make sure that it’s right there on everyone else’s smart phone.

Imagine, if you will, Mother Nature sitting down with Moore, holding his hand, looking directly into his eyes, and revealing to him in all its majesty the evolution of life on this planet, starting from the simplest, one-celled creatures more than four billion years ago, and proceeding through ever more complex forms to the almost incredible emergence of a highly intelligent and highly social species known as Homo sapiens.  It all happened, she explains to him with a look of triumph on her face, because over all those four billion years the creatures that made up the links of the chain of life survived and reproduced, leaving it unbroken.  Then, with a serious expression on her face, she asks him, “Now do you understand the reason for the existence of moral emotions?”  “Of course,” answers Moore, “they’re there so I can enjoy nice landscapes and pretty music.”  (Loud forehead slap)  Mother Nature stands up and walks away shaking her head, consoling herself with the thought that some more advanced species might “get it” after another million years or so of natural selection.

And what of Aristotle, Hegel and Kant?  Throw out your philosophy books and forget about them.  Imagine being so dense as to commit the naturalistic fallacy!

…And One More Thing about James Burnham: On Human Nature

There’s another thing about James Burnham’s Suicide of the West that’s quite fascinating: his take on human nature.  In fact, Chapter III is entitled “Human Nature and the Good Society.”  Here are a few excerpts from that chapter:

However varied may be the combination of beliefs that it is psychologically possible for an individual liberal to hold, it remains true that liberalism is logically committed to a doctrine along the lines that I have sketched:  viewing human nature as not fixed but plastic and changing; with no pre-set limit to potential development; with no innate obstacle to the realization of a society of peace, freedom, justice and well-being.  Unless these things are true of human nature, the liberal doctrine and program for government, education, reform and so on are an absurdity.

But in the face of what man has done and does, it is only an ideologue obsessed with his own abstractions who can continue to cling to the vision of an innately uncorrupt, rational and benignly plastic human nature possessed of an unlimited potential for realizing the good society.

Quite true, which makes it all the more remarkable that virtually all the “scientists” in the behavioral “sciences” at the time Burnham wrote these lines were “clinging to that vision,” at least in the United States.  See, for example, The Triumph of Evolution, in which one of these “men of science,” author Hamilton Cravens, documents the fact.  Burnham continues,

No, we must repeat:  if human nature is scored by innate defects, if the optimistic account of man is unjustified, then is all the liberal faith in vain.

Here we get a glimpse of the reason that the Blank Slaters insisted so fanatically that there is no such thing as human nature, at least as commonly understood, for so many years, in defiance of all reason, and despite the fact that any ten-year-old could have told them their anthropological theories were ludicrous.  The truth stood in the way of their ideology.  Therefore, the truth had to yield.

All this raises the question of how, as early as 1964, Burnham came up with such a “modern” understanding of the Blank Slate.  Reading on in the chapter, we find some passages that are even more intriguing.  Have a look at this:

It is not merely the record of history that speaks in unmistakable refutation of the liberal doctrine of man.  Ironically enough – ironically, because it is liberalism that has maintained so exaggerated a faith in science – almost all modern scientific studies of man’s nature unite in giving evidence against the liberal view of man as a creature motivated, once ignorance is dispelled, by the rational search for peace, freedom and plenty.  Every modern school of biology and psychology and most schools of sociology and anthropology conclude that men are driven chiefly by profound non-rational, often anti-rational, sentiments and impulses, whose character and very existence are not ordinarily understood by conscious reason.  Many of these drives are aggressive, disruptive, and injurious to others and to society.

!!!

The bolding and italics are mine.  How on earth did Burnham come up with such ideas?  By all means, dear reader, head for your local university library, fish out the ancient microfiche, and search through the scientific and professional journals of the time yourself.  Almost without exception, the Blank Slate called the tune.  Clearly, Burnham didn’t get the notion that “almost all modern scientific studies of man’s nature” contradicted the Blank Slate from actually reading the literature himself.  Where, then, did he get it?  Only Burnham and the wild goose know, and Burnham’s dead, but my money is on Robert Ardrey.  True, Konrad Lorenz’ On Aggression was published in Germany in 1963, but it didn’t appear in English until 1966.  The only other really influential popular science book published before Suicide of the West that suggested anything like what Burnham wrote in the above passage was Ardrey’s African Genesis, published in 1961.

What’s that you say?  I’m dreaming?  No one of any significance ever challenged the Blank Slate orthodoxy until E. O. Wilson’s stunning and amazing publication of Sociobiology in 1975?  I know, it must be true, because it’s all right there in Wikipedia.  As George Orwell put it, “Who controls the present controls the past.”