The world as I see it
  • The Damore Affair and the Ghost of the Blank Slate

    Posted on August 12th, 2017 Helian No comments

    So you thought the Blank Slate was dead, did you? Check out this post about the Damore affair by Jerry Coyne at his Why Evolution is True website:

    Salon dismisses Google memo as “biological determinism” that can “slip into eugenicist doctrines”

    Coyne is a professor emeritus at the University of Chicago. He’s also a leftist of great honesty and intellectual integrity. You should read him should you believe that such creatures went the way of unicorns long ago.  Among other things, he’s a strong supporter of the University of Chicago’s steadfast stance in favor of freedom of speech.  Coyne takes issue with an article by one Keith A. Spencer entitled, The ugly, pseudoscientific history behind that sexist Google manifesto, condemning Damore. Here’s the money quote:

    The Salon article is “The ugly pseudoscientific history behind that sexist Google manifesto”, and is by Keith A. Spencer, a Salon writer whose scientific training appears to be a B.A. in astrophysics/English at Oberlin (double major) and then subsequent work in the humanities and writing ever since (he also has a master’s degree in literary and cultural studies from Carnegie Mellon).  Although I’m not a credentials monger, perhaps Spencer’s lack of biological training is shown in the way he refutes Damore’s “pseudoscience”: his refutation relies on a single book published in 1984: Not in Our Genes, by Richard Lewontin, Steven Rose, and Leon Kamin (henceforth LRK). I am well familiar with that book, as the first author was my Ph.D. supervisor, and I have to note two things. First, the book is not a dispassionate review of the literature: the authors wrote it because they were committed to dispelling biological determinism, and were certainly diehard opponents of evolutionary psychology, then called “sociobiology”. You cannot count on that book to be an objective review of the literature, as it’s a polemic. It should not have been used by Spencer as an authoritative refutation of gender differences.

    Second, the book is outdated. It is now 33 years old, and a considerable literature has accumulated since then. Not one thing is cited from that literature save in support of the absence of two sexes (see below)—Spencer just emits quote after quote from that book. And he uses it to refute three assertions that, he claims, Damore makes—at least implicitly…

    Note that Lewontin was Coyne’s Ph.D. supervisor. I know from other posts that Coyne admires and respects him personally, and reveres him as an educator in the field of evolutionary biology. The fact that he would take issue with Lewontin in this way is, among other things, what I mean by honesty and intellectual integrity.

    But just check out the quote. Here we have someone citing “Not in Our Genes” as a respectable scientific tract. It’s stunning! Even such reliable stalwarts of the Left as Scientific American and PBS threw in the towel and accepted the fact that there actually is such a thing as human nature long ago, flinging Not in Our Genes on the garbage heap of history.  How can one account for such an absurd historical anomaly?  Well, if you read Damore’s manifesto, you’ll notice that he actually uses the term “evolutionary psychology,” and in a supportive fashion, no less.  Of course, the fundamental premise of evolutionary psychology is the reality and importance of human nature, and insisting on that fact is tantamount to waving a red flag in the face of hoary Blank Slaters like Spencer.  These people are like the Bourbons; they’ve learned nothing and forgotten nothing. They still quote their ancient texts as if nothing had happened since those golden days of yesteryear, when the Blank Slate orthodoxy controlled the academy, the media, and the behavioral sciences virtually unchallenged for upwards of half a century. They also still recall those who smashed their hegemony with unabated bitterness. Foremost among them was Robert Ardrey.  Sure enough, he popped up in a PBS special about Homo naledi as an evil proponent of the “Killer Ape Theory” even though no one, to the best of my knowledge, ever suggested that Homo naledi hunted or even ate meat. For more on that similarly incongruous fossil of the Blank Slate, see my post, PBS Answers the Burning Question:  What Does Robert Ardrey have to do with Homo naledi?

    It’s not hard to find similar artifacts these days.  Indeed, they pop up on both the Left and the Right, as evolutionary psychology has a way of deflating cherished narratives on both ends of the ideological spectrum.  However, those responsible for the mutilation of the behavioral sciences we recall as the Blank Slate were primarily leftist ideologues.  Given the Left’s current all but unchallenged hegemony in the academy, I wouldn’t be surprised to see a concerted attempt to turn back the clock and restore the Blank Slate orthodoxy at some point along the line.

  • More Whimsical History of the Blank Slate

    Posted on March 12th, 2017 Helian 10 comments

    As George Orwell wrote in 1984, “Who controls the past controls the future. Who controls the present controls the past.”  The history of the Blank Slate is a perfect illustration of what he meant.  You might say there are two factions in the academic ingroup; those who are deeply embarrassed by the Blank Slate, and those who are still bitterly clinging to it.  History as it actually happened is damaging to both factions, so they’ve both created imaginary versions that support their preferred narratives.  At this point the “official” histories have become hopelessly muddled.  I recently ran across an example of how this affects younger academics who are trying to make sense of what’s going on in their own fields in an article entitled, Sociology’s Stagnation at the Quillette website.  It was written by Brian Boutwell, Associate Professor of Criminology and Criminal Justice at St. Louis University.

    Boutwell cites an article published back in 1990 by sociologist Pierre van den Berghe excoriating the practitioners in his own specialty.  Van den Berghe was one of those rare sociologists who insisted on the relevance of evolved behavioral traits to his field.  He did not mince words.  Boutwell quotes several passages from the article, including the following:

    Such a theoretical potpourri is premised on the belief that, in the absence of a powerful simplifying idea, all ideas are potentially good, especially if they are turgidly presented, logically opaque, and empirically irrefutable. This sorry state of theoretical affairs in sociology is probably the clearest evidence of the discipline’s intellectual bankruptcy. But let my colleagues rest assured: intellectual bankruptcy never spelled the doom of an academic discipline. Those within it are professionally deformed not to recognize it, and those outside of it could not care less. Sociology is safe for at least a few more decades.

    In response, Boutwell writes,

    Intellectually bankrupt? Those are strong words. Can a field survive like this? It can, and it has. Hundreds of new sociology PhDs are minted every year across the country (not to mention the undergraduate and graduate degrees that are conferred as well). How many students were taught that human beings evolved around 150,000 years ago in Africa? How many know what a gene is? How many can describe Mendel’s laws, or sexual selection? The answer is very few. And, what is worse, many sociologists do not think this ignorance matters.

    In other words, Boutwell thinks the prevailing malaise in Sociology continues because sociologists don’t know about Darwin.  He may be right in some cases, but that’s not really the problem.  The problem is that the Blank Slate still prevails in sociology.  It is probably the most opaque of all the behavioral “sciences.”  In fact, it is just an ideological narrative pretending to be a science, just as psychology was back in the day when van den Berghe wrote his article.  Psychologists deal with individuals.  As a result they have to look at behavior a lot closer to the source of what motivates it.  As most reasonably intelligent lay people have been aware for millennia, it is motivated by human nature.  By the end of the 90’s, naturalists, neuroscientists, and evolutionary psychologists had heaped up such piles of evidence supporting that fundamental fact that psychologists who tried to prop up the threadbare shibboleths of the Blank Slate ran the risk of becoming laughing stocks.  By 2000 most of them had thrown in the towel.  Not so the sociologists.  They deal with masses of human beings.  It was much easier for them to insulate themselves from the truth by throwing up a smokescreen of “culture.”  They’ve been masturbating with statistics ever since.

    Boutwell thinks the solution is for them to learn some evolutionary biology.  I’m not sure which version of the “history” gave him that idea.  However, if he knew how the Blank Slate really went down, he might change his mind.  Evolutionary biologists and scientists in related fields were part of the heart and soul of the Blank Slate orthodoxy.  They knew all about genes, Mendel’s laws, and sexual selection, but it didn’t help.  Darwin?  They simply redacted those parts of his work that affirmed the relationship between natural selection, human nature in general, and morality in particular.  No matter that Darwin himself was perfectly well aware of the connections.  For these “scientists,” an ideological narrative trumped scientific integrity until the mass of evidence finally rendered the narrative untenable.

    Of course, one could always claim that I’m just supporting an ideological narrative of my own.  Unfortunately, that claim would have to explain away a great deal of source material, and because the events in question are so recent, the source material is still abundant and easily accessible.  If Prof. Boutwell were to consult it he would find that evolutionary biologists like Stephen Jay Gould, geneticists like Richard Lewontin, and many others like them considered the Blank Slate the very “triumph of evolution.”  I suggest that anyone with doubts on that score have a look at a book that bears that title by scientific historian Hamilton Cravens published in 1978 during the very heyday of the Blank Slate.  It is very well researched, cites scores of evolutionary biologists, geneticists, and behavioral scientists, and concludes that all the work of these people who were perfectly familiar with Darwin culminated in the triumphant establishment of the Blank Slate as “scientific truth,” or, as announced by the title of his book, “The Triumph of Evolution.”  His final paragraph gives a broad hint about how something so ridiculous could ever have been accepted as an unquestionable dogma.  It reads,

    The long-range, historical function of the new evolutionary science was to resolve the basic questions about human nature in a secular and scientific way, and thus provide the possibilities for social order and control in an entirely new kind of society.  Apparently this was a most successful and enduring campaign in American culture.

    Here, unbeknownst to himself, Cravens hit the nail on the head.  Social control was exactly what the Blank Slate was all about.  It was essential that the ideal denizens of the future utopias that the Blank Slaters had in mind for us have enough “malleability” and “plasticity” to play their assigned parts.  “Human nature” in the form of genetically transmitted behavioral predispositions would only gum things up.  They had to go, and go they did.  Ideology trumped and derailed science, and kept it derailed for more than half a century.  As Boutwell has noticed, it remains derailed in sociology and a few other specialties that have managed to develop similarly powerful allergic reactions to the real world.  Reading Darwin isn’t likely to help a bit.

    One of the best books on the genesis of the Blank Slate is In Search of Human Nature, by Carl Degler.  It was published in 1991, well after the grip of the Blank Slate on the behavioral sciences had begun to loosen, and presents a somewhat more sober and realistic portrayal of the affair than Cravens’ triumphalist account.  Among other things it gives an excellent account of the genesis of the Blank Slate.  As portrayed by Degler, in the beginning it hadn’t yet become such a blatant tool for social control.  One could better describe it as an artifact of idealistic cravings.  Then, as now, one of the most important of these was the desire for human equality, not only under the law, but in a much more real, physical sense, among both races and individuals.  If human nature existed and was important, then such equality was out of the question.  Perfect equality was only possible if every human mind started out as a Blank Slate.

    Degler cites the work of several individuals as examples of this nexus between the ideal of equality and the Blank Slate, but I will focus on one in particular; John B. Watson, the founder of behaviorism.  One of the commenters to an earlier post suggested that the behaviorists weren’t Blank Slaters.  I think that he, too, is suffering from historical myopia.  Again, it’s always useful to look at the source material for yourself.  In his book, Behaviorism, published in 1924, Watson notes that all human beings breathe, sneeze, have hearts that beat, etc., but have no inherited traits that might reasonably be described as human nature.  In those days, psychologists like William James referred to hereditary behavioral traits as “instincts.”  According to Watson,

    In this relatively simple list of human responses there is none corresponding to what is called an “instinct” by present-day psychologists and biologists.  There are then for us no instincts – we no longer need the term in psychology.  Everything we have been in the habit of calling an “instinct” today is the result largely of training – belongs to man’s learned behavior.

    A bit later on he writes,

    The behaviorist recognizes no such things as mental traits, dispositions or tendencies.  Hence, for him, there is no use in raising the question of the inheritance of talent in its old form.

    In case we’re still in doubt about his Blank Slate bona fides, a few pages later he adds,

    I should like to go one step further now and say, “Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.”  I am going beyond my facts and I admit it, but so have the advocates of the contrary and they have been doing it for many thousands of years.  Please note that when this experiment is made I am to be allowed to specify the way the children are to be brought up and the type of world they have to live in.

    Here, in a nutshell, we can see the genesis of hundreds of anecdotes about learned professors dueling over the role of “nature” versus “nurture,” in venues ranging from highbrow intellectual journals to several episodes of The Three Stooges.  Watson seems to be literally pulling at our sleeves and insisting, “No, really, I’m a Blank Slater.”  Under the circumstances I’m somewhat dubious about the claim that Watson, Skinner, and the rest of the behaviorists don’t belong in that category.

    What motivated Watson and others like him to begin this radical reshaping of the behavioral sciences?  I’ve already alluded to the answer above.  To make a long story short, they wanted to create a science that was “fair.”  For example, Watson was familiar with the history of the Jukes family outlined in an account of a study by Richard Dugdale published in 1877.  It documented unusually high levels of all kinds of criminal behavior in the family.  Dugdale himself insisted on the role of environmental as well as hereditary factors in explaining the family’s criminality, but later interpreters of his work focused on heredity alone.  Apparently Watson considered such an hereditary burden unfair.  He decided to demonstrate “scientifically” that a benign environment could have converted the entire family into model citizens.  Like many other scientists in his day, Watson abhorred the gross examples of racial discrimination in his society, as well as the crude attempts of the Social Darwinists to justify it.  He concluded that “science” must support a version of reality that banished all forms of inequality.  The road to hell is paved with good intentions.

    I could go on and on about the discrepancies one can find between the “history” of the Blank Slate and source material that’s easily available to anyone willing to do a little searching.  Unfortunately, I’ve already gone on long enough for a single blog post.  Just be a little skeptical the next time you read an account of the affair in some textbook.  It ain’t necessarily so.

     

  • On the Unsubjective Morality and Unscientific Scientism of Alex Rosenberg

    Posted on February 26th, 2017 Helian 7 comments

    In a recent post I pointed out the irrational embrace of objective morality by some public intellectuals in spite of their awareness of morality’s evolutionary roots.  In fact, I know of only one scientist/philosopher who has avoided this non sequitur; Edvard Westermarck.  A commenter suggested that Alex Rosenberg was another example of such a philosopher.  In fact, he’s anything but.  He’s actually a perfect example of the type I described in my earlier post.

    A synopsis of Rosenberg’s philosophy may be found in his book, The Atheist’s Guide to Reality.  Rosenberg is a proponent of “scientism.”  He notes the previous, pejorative use of the term, but announces that he will expropriate it.  In his words,

    …we’ll call the worldview that all us atheists (and even some agnostics) share “scientism.”  This is the conviction that the methods of science are the only reliable ways to secure knowledge of anything; that science’s description of the world is correct in its fundamentals; and that when “complete,” what science tells us will not be surprisingly different from what it tells us today… Science provides all the significant truths about reality, and knowing such truths is what real understanding is all about.

    Well, I’m “one of us atheists,” and while I would agree that science is the best and most effective method to secure knowledge of anything, I hardly agree that it is the only method, nor do I agree that it is always reliable.  For that matter, I doubt that Rosenberg believes it either.  He dismisses all the humanities with a wave of the hand as alternate ways of knowing, with particular emphasis on history.  In fact, one of his chapters is entitled, “History Debunked.”  In spite of that, his book is laced with allusions to history and historical figures.

    For that matter, we could hardly do without history as a “way of knowing” just what kind of a specimen we’re dealing with.  It turns out that, whether knowingly or not, Rosenberg is an artifact of the Blank Slate.  I reached convulsively for my crucifix as I encountered the telltale stigmata.  As those who know a little history are aware, the Blank Slate was a massive corruption of science involving what amounted to the denial of the existence of human nature that lasted for more than half a century.  It was probably the greatest scientific debacle of all time.  It should come as no surprise that Rosenberg doesn’t mention it, and seems blithely unaware that it ever happened.  It flies in the face of the rosy picture of science he’s trying to paint for us.

    We first get an inkling of where Rosenberg fits in the context of scientific history when he refers approvingly to the work of Richard Lewontin, who is described as a “well-known biologist.”  That description is a bit disingenuous.  Lewontin may well be a “well-known biologist,” but he was also one of the high priests of the Blank Slate.  As Steven Pinker put it in his The Blank Slate,

    Gould and Lewontin seem to be saying that the genetic components of human behavior will be discovered primarily in the “generalizations of eating, excreting, and sleeping.”  The rest of the slate, presumably, is blank.

    Lewontin embraced “scientific” Marxism, and alluded to the teachings of Marx often in his work.  His “scientific” method of refuting those who disagreed with him was to call them racists and fascists.  He even insisted that a man with such sterling leftist bona fides as Robert Trivers be dismissed as a lackey of the bourgeoisie.  It seems to me these facts are worth mentioning about anyone we may happen to tout as a “scientific expert.”  Rosenberg never gets around to it.

    A bit further on, Rosenberg again refers approvingly to another of the iconic figures of the Blank Slate; B. F. Skinner.  He cites Skinner’s theories as if there had never been anything the least bit controversial about them.  In fact, as primatologist Frans de Waal put it in his Are We Smart Enough to Know How Smart Animals Are?,

    Skinner… preferred language of control and domination.  He spoke of behavioral engineering and manipulation, and not just in relation to animals.  Later in life he sought to turn humans into happy, productive, and “maximally effective” citizens.

    and

    B. F. Skinner was more interested in experimental control over animals than spontaneous behavior.  Stimulus-response contingencies were all that mattered.  His behaviorism dominated animal studies for much of the last century.  Loosening its theoretical grip was a prerequisite for the rise of evolutionary cognition.

    Behaviorism, with its promise of the almost perfect malleability of behavior in humans and other animals, was a favorite prop of the Blank Slate orthodoxy.  Such malleability was a prerequisite for the creation of “maximally effective” citizens to occupy the future utopias they were concocting for us.

    Reading on, we find Rosenberg relating another of the favorite yarns of the Blank Slaters of old, the notion that our Pleistocene ancestors’ primary source of meat came from scavenging.  They would scamper out, we are told, and steal choice bones from the kills of large predators, then scamper back to their hiding places and smash the bones with rocks to get at the marrow.  This fanciful theory was much in fashion back in the 60’s when books disputing Blank Slate ideology and insisting on the existence and significance of human nature first started to appear.  These often mentioned aggression as one aspect of human behavior, an assertion that never failed to whip the Blank Slaters into a towering rage.  Hunting, of course, might be portrayed as a form of aggression.  Therefore it was necessary to deny that it ever happened early enough to have an effect on evolved human behavioral traits.  In those days, of course, we were so ignorant of primate behavior that Blank Slater Ashley Montagu was able to write with a perfectly straight face that chimpanzees are,

    …anything but irascible.  All the field observers agree that these creatures are amiable and quite unaggressive, and there is no reason to suppose that man’s pre-human primate ancestors were in any way different.

    We’ve learned a few things in the ensuing years.  Jane Goodall observed both organized hunting behavior and murderous attacks on neighboring bands carried out by these “amiable” creatures.  For reporting these observations she was furiously denounced and insulted in the most demeaning terms.  Meanwhile, chimps have been observed using sticks as thrusting spears, and fire-hardened spears were found associated with a Homo erectus campsite dated to some 400,000 years ago.  There is evidence that stone-tipped spears were used as far back as 500,000 years ago, and much more similar evidence of early hunting behavior has surfaced.  Articles about early hunting behavior have even appeared in the reliably politically correct Scientific American, not to mention that stalwart pillar of progressive ideology, PBS.  In other words, the whole scavenging thing is moot.  Apparently no one bothered to pass the word to Rosenberg.  No matter, he still includes enough evolutionary psychology in his book to keep up appearances.

    In spite of the fact that he writes with the air of a scientific insider who is letting us in on all kinds of revelations that we are to believe have been set in stone by “science” in recent years, and that we should never dare to question, Rosenberg shows similar signs of being a bit wobbly when it comes to actually knowing what he’s talking about elsewhere in the book.  For example, he seems to have a fascination with fermions and bosons, mentioning them often in the book.  He tells us that,

    …everything is made up of these two kinds of things.  Roughly speaking, fermions are what matter is composed of, while bosons are what fields of force are made of.

    Well, not exactly.  If matter isn’t composed of bosons, it will come as news to the helium atoms engaging in one of the neat tricks only bosons are capable of in the Wiki article on superfluidity.  As it happens, one of the many outcomes of the fundamental difference between bosons and fermions is that bosons are usually force carriers, but the notion that it actually is the fundamental difference is just disinformation, and a particularly unfortunate instance thereof at that.  I say that because our understanding of that difference is the outcome of an elegant combination of theoretical insight and mathematics.  I lack the space to go into detail here, but it follows from the indistinguishability of quantum particles.  I suggest that anyone interested in the real difference between bosons and fermions consult an elementary quantum textbook.  These usually boil the necessary math down to a level that should be accessible to any high school graduate who has taken an honors course or two in the subject.
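
    For what it’s worth, here is a minimal sketch of the argument I’m alluding to, glossing over spin and the spin-statistics theorem, so take it as a rough outline rather than the textbook treatment.  If two particles are truly indistinguishable, swapping them can change the two-particle wavefunction by at most a phase, and in three dimensions swapping them twice must restore the original state, so that phase can only be plus or minus one:

    $$\psi(x_2, x_1) = \pm\,\psi(x_1, x_2)$$

    The plus sign defines bosons, the minus sign fermions.  Setting $x_1 = x_2$ in the antisymmetric case gives $\psi(x, x) = -\psi(x, x) = 0$, which is just the Pauli exclusion principle: no two identical fermions can occupy the same state.  Bosons face no such restriction, which is why arbitrarily many of them can pile into a single state and pull off tricks like superfluidity.  That, and not “matter versus force carrier,” is the fundamental distinction.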

    There are some more indications of the real depth of Rosenberg’s scientific understanding in his description of some of the books he recommends to his readers so they can “come up to speed” with him.  For example, he tells us that Steven Pinker’s The Blank Slate, “…argues for a sophisticated evolutionary account of several cognitive capacities critical for speech.”  Well, not really.  As the title implies Pinker’s The Blank Slate is about The Blank Slate.  I can only conclude that cognitive dissonance must have set in when Rosenberg read it, because that apocalypse in the behavioral sciences doesn’t fit too well in his glowing tale of the triumphant progress of science.  Elsewhere he tells us that,

    At its outset, human history might have been predictable just because the arms races were mainly biological.  That’s what enabled Jared Diamond to figure out how and why western Europeans came to dominate the globe over a period that lasted 8000 years or so in Guns, Germs, and Steel (1999).  Though he doesn’t acknowledge it, Diamond is only applying an approach to human history made explicit by sociobiologist E. O. Wilson in On Human Nature more than 30 years ago (1978)…

    Seriously?  Guns, Germs and Steel was actually an attempt to explain differences between human cultures in terms of environmental factors, whereas in On Human Nature Wilson doubled down on his mild assault on the Blank Slate orthodoxy in the first and last chapters of his Sociobiology, insisting on the existence and significance of evolved human behavioral traits.  I can only conclude that, assuming Rosenberg actually read the books, he didn’t comprehend what he was reading.

    With that let’s consider what Rosenberg has to say about morality.  He certainly seems to “get it” in the beginning of the book.  He describes himself as a “nihilist” when it comes to morality.  I consider that a bad choice of words, but whatever.  According to Rosenberg,

    Nihilism rejects the distinction between acts that are morally permitted, morally forbidden, and morally required.  Nihilism tells us not that we can’t know which moral judgments are right, but that they are all wrong.  More exactly, it claims, they are all based on false, groundless presuppositions.  Nihilism says that the whole idea of “morally responsible” is untenable nonsense.  As such, it can hardly be accused of holding that “everything is morally permissible.”  That, too, is untenable nonsense.

    Moreover, nihilism denies that there is really any such thing as intrinsic moral value.  People think that there are things that are intrinsically valuable, not just as a means to something else:  human life or the ecology of the planet or the master race or elevated states of consciousness, for example.  But nothing can have that sort of intrinsic value – the very kind of value morality requires.  Nihilism denies that there is anything at all that is good in itself or, for that matter, bad in itself.  Therefore, nihilism can’t be accused of advocating the moral goodness of, say, political violence or anything else.

    A promising beginning, no?  Sounds very Westermarckian.  But don’t jump to conclusions!  Before the end of the book we will find Rosenberg doing a complete intellectual double back flip when it comes to this so-called “nihilism.”  We will witness him chanting a few magic words over the ghost of objective morality, and then see it rise zombie-like from the grave he just dug for it.

    Rosenberg begins the pilgrimage from subjectivity to objectivity by invoking what he calls “core morality.”  He presents us with two premises about it, namely,

    First premise:  All cultures, and almost everyone in them, endorse most of the same core principles as binding on everyone.

    and

    Second premise:  The core moral principles have significant consequences for humans’ biological fitness – for our survival and reproduction.

    Seems harmless enough, doesn’t it?  But then we learn some things that appear a bit counterintuitive about core morality.  For example,

    There is good reason to think that there is a moral core that is almost universal to almost all humans.  Among competing core moralities, it was the one that somehow came closest to maximizing the average fitness of our ancestors over a long enough period that it became almost universal.  For all we know, the environment to which our core morality constitutes an adaptation is still with us.  Let’s hope so, at any rate, since core morality is almost surely locked in by now.

    Are you kidding me?  There is not even a remote chance that “the environment to which our core morality constitutes an adaptation is still with us.”  Here, Rosenberg is whistling past the graveyard when it comes to the role he has in store for his “core morality.”  He is forced to make this patently absurd statement about our supposedly static environment because otherwise “core morality” couldn’t perform its necessary role in bringing the zombie back to life.  How can it perform that neat trick?  Well, according to Rosenberg,

    Along with everyone else, the most scientistic among us accept these core principles as binding. (!!)

    Some nihilism, no?  Suddenly, Rosenberg’s “core morality” has managed to jump right out of his skull onto our backs and is “binding” us!  Of course, it would be too absurd even for Rosenberg to insist that this “binding” feature was still in effect in spite of the radical changes in the environment that have obviously happened since “core morality” supposedly evolved.  Hence, he has to deny the obvious with his ludicrous suggestion that the environment hasn’t changed.  Meanwhile, the distinction noted by Westermarck between that which is thought to be binding, and that which actually is binding, has become very fuzzy.  We are well on the way back to the safe haven of objective morality.

    To sweeten the pill, Rosenberg assures us that core morality is “nice,” and cites all sorts of game theory experiments to prove it.  He wonders,

    Once it’s saddled with nihilism, can scientism make room for the moral progress that most of us want the world to make?  No problem.

    “Moral progress?”  That is a contradiction in terms unless morality and its rules exist as objective things in themselves.  How is “progress” possible if morality is really an artifact of evolution, and consequently has neither purpose nor goal?  Rosenberg puts stuff like this right in the middle of his pronouncements that morality is really subjective.  You could easily get whiplash reading his book.  The icing on the cake of “niceness” turns out to be altruistic behavior towards non-kin, which is also supposed to have evolved to enhance “fitness.”  Since one rather fundamental difference between the environment “then” and “now” is that “then” humans normally lived in, and interacted mainly with, communities of only about 150 people, the idea that they were really dealing with non-kin, and certainly any idea that similar behavior must work just as well between nations consisting of millions of far less closely related individuals, is best taken with a grain of salt.

    Other than a few very perfunctory references, Rosenberg shows a marked reticence to discuss human behavior that is not so nice.  Of course, there is no mention of the ubiquitous occurrence of warfare between human societies since the dawn of recorded time.  After all, that would be history, and hasn’t Rosenberg told us that history is bunk?  He never mentions such “un-nice” traits as ingroup-outgroup behavior or territoriality.  That’s odd, since we can quickly identify his own outgroup, thanks to some virtue-signaling remarks about “Thatcherite Republicans,” and science-challenged conservatives.  As for those who get too far out of line he writes,

    Recall the point made early in this chapter that even most Nazis may have really shared a common moral code with us.  The qualification “most” reflects the fact that a lot of them, especially at the top of the SS, were just psychopaths and sociopaths with no core morality.

    Really?  What qualifies Rosenberg to make such a statement?  Did he examine their brains?  Did neuroscientists subject them to experiments before they died?  It would seem that if we don’t “get our minds right” about core morality we could well look forward to being “cured” the way “psychopaths and sociopaths” were “cured” in the old Soviet Union.

    By the time we get to the end of the book, the subjective façade has been entirely dismantled, and the “core morality” zombie has jettisoned the last of its restraints.  Rosenberg’s continued insistence on the non-existence of objective good and bad has deteriorated to a mere matter of semantics.  Consider, for example, the statement,

    Once science reveals the truths about human beings that may be combined with core morality, we can figure out what our morality does and does not require of us.  Of course, as nihilists, we have to remember that core morality’s requiring something of us does not make it right – or wrong.  There is no such thing.

    That should be comforting news to the inmates of the asylum who didn’t do what was “required” of them. We learn that,

    Almost certainly, when all these facts are decided, it will turn out that core morality doesn’t contain any blanket prohibition or permission of abortion as such.  Rather, together with the facts that science can at least in principle uncover, core morality will provide arguments in favor of some abortions and against other abortions, depending on the circumstances.

    The pro-life people shouldn’t entirely despair, however, because,

    Scientism allows that sometimes the facts of a case will combine with core morality to prohibit abortion, even when the woman demands it as a natural right.

    That’s about as wild and crazy as Rosenberg gets, though.  In fact, he’s not a scientist but a leftist ideologue, and we soon find him scurrying back to the confines of his ideologically defined ingroup, core morality held firmly under his arm.  He assures us that,

    …when you combine our core morality with scientism, you get some serious consequences, especially for politics.  In particular, you get a fairly left-wing agenda.  No wonder most scientists in the United States are Democrats and in the United Kingdom are Labour Party supporters or Liberal Democrats.

    Core morality reaches out its undead hand for the criminal justice system as well:

    There are other parts of core morality that permit or even require locking people up – for example, to protect others and to deter, reform, rehabilitate, and reeducate the wrongdoer.

    That would be a neat trick – reeducating wrongdoers if there really isn’t such a thing as wrong.  No matter, core morality is now not only alive but is rapidly turning into a dictator with “requirements.”

    Core morality may permit unearned inequalities, but it is certainly not going to require them without some further moral reason to do so.  In fact, under many circumstances, core morality is going to permit the reduction of inequalities, for it requires that wealth and income that people have no right to be redistributed to people in greater need.  Scientism assures us that no one has any moral rights.  Between them, core morality and scientism turn us into closet egalitarians.

    Did you get that?  Your “selfish genes” are now demanding that you give away your money to unrelated people even if the chances that this will ever help those genes to survive and reproduce are vanishingly small.  Rosenberg concludes,

    So, scientism plus core morality turn out to be redistributionist and egalitarian, even when combined with free-market economics.  No wonder Republicans in the United States have such a hard time with science.

    Did his outgroup just pop up on your radar screen?  It should have.  At this point any rational consequences of the evolved origins and subjective nature of morality have been shown the door.  The magical combination of scientism and core morality has us in a leftist full nelson.  They “require” us to do the things that Rosenberg considers “nice,” and refrain from doing the things he considers “not nice.”  In principle, he dismisses the idea of free will.  However, in this case we will apparently be allowed just a smidgeon of it if we happen to be “Thatcherite Republicans.”  Just enough to get our minds right and return us to a “nice” deterministic track.

    In a word, Rosenberg is no Westermarck.  In fact, he is a poster boy for leftist ideologues who like to pose as “moral nihilists,” but get an unholy pleasure out of dictating moral rules to the rest of us.  His “scientific” pronouncements are written with all the cocksure hubris characteristic of ideologues, and sorely lack the reticence more appropriate for real scientists.  There is no substantial difference between the illusion that there are objective moral laws, and Rosenberg’s illusion that a “core morality” utterly divorced from its evolutionary origins is capable of dictating what we ought and ought not to do.

    It’s not really that hard to understand.  The ingroup, or tribe, if you will, of leftist ideologues like Rosenberg and the other examples I mentioned in recent posts, lives in a box defined by ideological shibboleths.  Its members can make as many bombastic pronouncements about moral nihilism as they like, but in the end they must either kowtow to the shibboleths or be ostracized from the tribe.  That’s a sacrifice that none of them, at least to the best of my knowledge, has ever been willing to make.  If my readers are aware of any other “counter-examples,” I would be happy to examine them in my usual spirit of charity.

  • “Five Easy Pieces” and the Ghost of Robert Ardrey

    Posted on December 22nd, 2016 Helian 1 comment

    I know.  You think I’m too obsessed with Robert Ardrey.  Perhaps, but when I stumble across little historical artifacts of his existence, I can’t resist recording them.  Who else will?  Besides, I have moral emotions, too.  I’m not sure where I sit on the spectrum of Jonathan Haidt’s moral foundations, but when I consider Ardrey’s shabby treatment in the “official” histories, they all start howling at once.  Ardrey shouldn’t be forgotten.  He was the most significant player in the events that come to mind when one hears the term “Blank Slate.”

    What was the “Blank Slate?”  I’d call it the greatest scientific debacle of all time.  The behavioral sciences were derailed for fifty years and more by the ideologically motivated denial of human nature.  Unfortunately, its history will probably never be written, or at least not in a form that bears some resemblance to the truth.  Perhaps the most important truth that will be redacted from future accounts of the Blank Slate is the seminal role of Robert Ardrey in dismantling it.  That role was certainly recognized by the high priests of the Blank Slate themselves.  Their obsession with Ardrey can be easily documented.  In spite of that he is treated as an unperson today, and his historical role has been denied or suppressed.  I have discussed reasons for this remarkable instance of historical amnesia elsewhere.  They usually have something to do with the amour-propre of the academic tribe.  See, for example, here, here and here.

    If there are grounds for optimism that the real story will ever see the light of day, it lies in the ease with which the elaborate fairy tale that currently passes as the “history” of the Blank Slate can be exposed.  According to this official “history,” the Blank Slate prevailed virtually unchallenged until the mid-70’s.  Then, suddenly, E. O. Wilson appeared on the scene as the knight in shining armor who slew the Blank Slate dragon almost single-handedly with the publication of Sociobiology in 1975.  As I’ve noted in earlier posts, there’s a great deal of source material in both the academic and popular literature whose existence is very difficult to account for if one takes this sanitized version of the affair seriously.  I’ve occasionally cited some of the numerous examples of articles about or by Ardrey, both pro and con, in popular magazines including the highbrow Encounter, the more professionally oriented Saturday Review, the once popular Life, the “recreational” Penthouse, and many others, all of which appeared long before the publication of Sociobiology.  I recently stumbled across another amusing example in one of Jack Nicholson’s earlier flicks, and probably one of his best; Five Easy Pieces.

    I hadn’t watched the film since 1970, the year it was released.  I thought it was entertaining at the time, especially the iconic restaurant scene with the uncooperative waitress.  However, I certainly didn’t notice any connection to the Blank Slate.  It was a bit early for that.  However, I happened to watch the film again a couple of days ago.  This time I noticed something.  There was the ghost of Robert Ardrey, with an amused look on his face, waving at me right out of the screen.

    The great debunker of the Blank Slate turns up around 1:20:25 into the film.  Bobby (Jack Nicholson), his somewhat trashy girlfriend, Rayette, and a few other family members and guests are gathered in the living room of Bobby’s childhood home.  A pompous, insufferable woman by the name of Samia Glavia is holding forth about the nature of man.  The dialogue goes like this:

    Samia Glavia/Irene Dailey: But you see, man is born into the world with his existent adversary from the first. It is his historic, lithic inheritance. So, is it startling? Aggression is prehistoric. An organism behaves according to its nature, and its nature derives from the circumstances of its inheritance. The fact remains that primitive man took absolute delight in tearing his adversary apart. And there is where I think the core of the problem resides.

    John Ryan/Spicer: Doesn’t that seem unnecessarily apocalyptic?

    Glavia: I do not make poetry.

    Rayette: Is there a TV in the house?

    Glavia: I remarked to John, that rationality is not a device to alter facts. But moreover I think of it as an extraneous tool, a gadget, somewhat like… the television. To look at it any other way is ridiculous.

    Rayette: There’s some good things on it, though.

    Glavia: I beg your pardon? (Condescendingly)

    Rayette: There’s some good things on it sometimes.

    Glavia: I have strong doubts. Nevertheless, I am not discussing media.  (Icy, condescending smile)

    Susan Anspach/Catherine van Oost: I think these cold, objective discussions are aggressive.

    As Catherine leaves the room, Glavia rants on: There seems to be less aggression, or violence, if you like, among the higher classes, and loftier natures.

    Nicholson/Bobby Dupea: You pompous celibate. You’re totally full of shit.

    Great shades of Raymond Dart!  “Aggression” was a key buzzword at the time in any discussion of innate human nature.  Naturalist Konrad Lorenz had published the English version of his On Aggression a few years earlier.  Ardrey had highlighted the theories of Dart, according to which Australopithecus africanus was an aggressive hunting ape, in his African Genesis, published in 1961.  The scientific establishment, firmly in the grip of the Blank Slate ideologues, had been furiously blasting back, condemning Ardrey, Lorenz, and anyone else who dared to suggest the existence of anything as heretical as human nature as a fascist and a Nazi, not to mention very right wing.  (sound familiar?)  See, for example, the Blank Slate tract Man and Aggression, published in 1968.

    I’m not sure whether director Bob Rafelson or screenwriter Carole Eastman or both were responsible for the lines in question, but there’s no doubt about one thing – whoever wrote them had been well coached by the Blank Slaters.  Their favorite memes were all there.  The grotesque, exaggerated “Killer Ape Theory?” Check!  The socially objectionable nature of the messenger?  Check!  Their association with the “exploiting classes,” or, as Samia Glavia put it, “the higher classes and loftier natures?” Check!  As a final subtle touch, the very name “Glavia” is Latin for a type of sword or spear, a weapon of “aggression.”

    I’m sure there are many more of these artifacts of reality out there, awaiting discovery by some future historian bold enough to dispute the “orthodox” account of the Blank Slate.  According to that account, nothing much happened to disturb the hegemony of the Blank Slaters until E. O. Wilson turned up.  Then, as noted above, the whole charade supposedly popped like a soap bubble.  Well, as the song goes, “It ain’t necessarily so.”  Ardrey and friends had already reduced the Blank Slate to a laughing stock among the lay public long before Wilson happened along.  The “Men of Science” knew the game was up.  Still, they couldn’t bear to admit that a “mere playwright” like Ardrey had forced them to admit that the elaborate Blank Slate fairy tale they had been propping up for the last 50 years with thousands of “scientific” papers in hundreds of learned academic and professional journals was a hoax.  They needed some “graceful” way to rejoin the real world.  They seized on Wilson as the “way.”  Any port in a storm.  As a member of the academic tribe himself, he made it respectable for other “Men of Science” to disengage themselves from the Blank Slate dogmas.  Be that as it may, as anyone who was around at the time and was paying attention was aware, the man who was the real nemesis of the Blank Slate was Robert Ardrey.  If you’re looking for proof, I recommend Five Easy Pieces as both a revealing and entertaining place to start your search.

    Why is all this important?  Because the Blank Slate affair was a disfigurement and corruption of the integrity of science on an unprecedented scale.  It clearly demonstrated what can happen when ideological imperatives are allowed to trump the scientific method.  For half a century and more it blocked our path to self-understanding, and with it our ability to understand and cope with some of the more destructive aspects of our nature.  Under the circumstances it might behoove us to at least get the history right.

    Robert Ardrey

  • “On Aggression” Revisited

    Posted on October 3rd, 2016 Helian 8 comments

    Once upon a time, half a century ago and more, several authors wrote books according to which certain animals, including human beings, are, at least in certain circumstances, predisposed to aggressive behavior.  Prominent among them was On Aggression, published in English in 1966 by Konrad Lorenz.  Other authors included Desmond Morris (The Naked Ape, 1967), Lionel Tiger (Men in Groups, 1969) and Robin Fox (The Imperial Animal, co-authored with Tiger, 1971).  The most prominent and widely read of all was the inimitable Robert Ardrey (African Genesis, 1961, The Territorial Imperative, 1966, The Social Contract, 1970, and The Hunting Hypothesis, 1976).  Why were these books important, or even written to begin with?  After all, the fact of innate aggression, then as now, was familiar to any child who happened to own a dog.  Well, because the “men of science” disagreed.  They insisted that there were no innate tendencies to aggression, in man or any of the other higher animals.  It was all the fault of unfortunate cultural developments back around the start of the Neolithic era, or of the baneful environmental influence of “frustration.”

    Do you think I’m kidding?  By all means, read the source literature! For example, according to a book entitled Aggression by “dog expert” John Paul Scott published in 1958 by the University of Chicago Press,

    All research findings point to the fact that there is no physiological evidence of any internal need or spontaneous driving force for fighting; that all stimulation for aggression eventually comes from the forces present in the external environment.

    A bit later, in 1962 in a book entitled Roots of Behavior he added,

    All our present data indicate that fighting behavior among the higher mammals, including man, originates in external stimulation and that there is no evidence of spontaneous internal stimulation.

    Ashley Montagu added the following “scientific fact” about apes (including chimpanzees!) in his “Man and Aggression,” published in 1968:

    The field studies of Schaller on the gorilla, of Goodall on the chimpanzee, of Harrison on the orang-utan, as well as those of others, show these creatures to be anything but irascible. All the field observers agree that these creatures are amiable and quite unaggressive, and there is not the least reason to suppose that man’s pre-human primate ancestors were in any way different.

    When Goodall dared to contradict Montagu and report what she had actually seen, she was furiously denounced in vile attacks by the likes of Brian Deer, who chivalrously recorded in an article published in the Sunday Times in 1997,

    …the former waitress had arrived at Gombe, ordered the grass cut and dumped vast quantities of trucked-in bananas, before documenting a fractious pandemonium of the apes. Soon she was writing about vicious hunting parties in which our cheery cousins trapped colobus monkeys and ripped them to bits, just for fun.

    This remarkable transformation from Montagu’s expert in the field to Deer’s “former waitress” was typical of the way “science” was done by the Blank Slaters in those days.  This type of “science” should be familiar to modern readers, who have witnessed what happens to anyone who dares to challenge the current climate change dogmas.

    Fast forward to 2016.  A paper entitled The phylogenetic roots of human lethal violence has just been published in the prestigious journal Nature.  The first figure in the paper has the provocative title, “Evolution of lethal aggression in non-human mammals.”   It not only accepts the fact of “spontaneous internal stimulation” of aggression without a murmur, but actually quantifies it in no less than 1024 species of mammals!  According to the abstract,

    Here we propose a conceptual approach towards understanding these roots based on the assumption that aggression in mammals, including humans, has a significant phylogenetic component. By compiling sources of mortality from a comprehensive sample of mammals, we assessed the percentage of deaths due to conspecifics and, using phylogenetic comparative tools, predicted this value for humans. The proportion of human deaths phylogenetically predicted to be caused by interpersonal violence stood at 2%.
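
    Just to make the quantity being estimated concrete, here is a toy back-of-the-envelope illustration (mine, not the authors’; made-up numbers in Python, with none of the phylogenetic comparative machinery the paper actually uses to predict the human value) of what “percentage of deaths due to conspecifics” means:

        # Toy illustration only: "percentage of deaths due to conspecifics"
        # computed from made-up counts for a few hypothetical species.  The paper
        # itself compiles real mortality records for 1,024 mammal species and uses
        # phylogenetic comparative methods to predict the human value; none of
        # that is reproduced here.
        mortality_records = {
            # species: (deaths caused by members of the same species, total recorded deaths)
            "hypothetical_species_A": (12, 1000),
            "hypothetical_species_B": (3, 450),
            "hypothetical_species_C": (70, 800),
        }

        for species, (conspecific, total) in mortality_records.items():
            pct = 100.0 * conspecific / total
            print(f"{species}: {pct:.1f}% of recorded deaths caused by conspecifics")

    The 2 percent figure for humans quoted above is the paper’s phylogenetic prediction for our species, not something a simple calculation like this can reproduce; the sketch only shows what sort of number is being predicted.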

    All this and more is set down in the usual scientific deadpan without the least hint that the notion of such a “significant phylogenetic component” was ever seriously challenged.  Unfortunately the paper itself is behind Nature’s paywall, but there’s a free review with extracts from the paper by Ed Yong on the website of The Atlantic, and Jerry Coyne also reviewed the paper over at his Why Evolution is True website.

    It’s likely that primates are especially violent because we are both territorial and social—two factors that respectively provide motive and opportunity for murder.  So it goes for humans.  As we moved from small bands to medium-sized tribes to large chiefdoms, our rates of lethal violence increased.

    “Territorial and social!?”  Whoever wrote such stuff?  Oh, now I remember!  It was a guy named Robert Ardrey, who happened to be the author of The Territorial Imperative and The Social Contract.  Chalk up another one for the “mere playwright.”  Yet again, he was right, and almost all the “men of science” were wrong.  Do you ever think he’ll get the credit he deserves from our latter day “men of science?”  Naw, neither do I.  Some things are just too embarrassing to admit.

  • The “Moral Progress” Delusion

    Posted on August 14th, 2016 Helian 7 comments

    “Moral progress” is impossible.  It is a concept that implies progress towards a goal that doesn’t exist.  We exist as a result of evolution by natural selection, a process that has simply happened.  Progress implies the existence of an entity sufficiently intelligent to formulate a goal or purpose towards which progress is made.  No such entity has directed the process, nor did one even exist over most of the period during which it occurred.  The emotional predispositions that are the root cause of what we understand by the term “morality” are as much an outcome of natural selection as our hands or feet.  Like our hands and feet, they exist solely because they have enhanced the probability that the genes responsible for their existence would survive and reproduce.  Among the “experts on ethics” among us there is increasing acceptance of the fact that morality owes its existence to evolution by natural selection.  However, as a rule they have been incapable of grasping the obvious implication of that fact; that the notion of “moral progress” is a chimera.  It is a truth that has been too inconvenient for them to bear.

    It’s not difficult to understand why.  Their social gravitas and often their very livelihood depend on propping up the illusion.  This is particularly true of the “experts” in academia, who often lack marketable skills other than their “expertise” in something that doesn’t exist.  Their modus operandi consists of hoodwinking the rest of us into believing that satisfying some whim that happens to be fashionable within their tribe represents “moral progress.”  Such “progress” has no more intrinsic value than a five year old’s progress towards acquiring a lollipop.  Often it can be reasonably expected to lead to outcomes that are the opposite of those that account for the existence of the whim to begin with, resulting in what I have referred to in earlier posts as a morality inversion.  Propping up the illusion in spite of recognition of the evolutionary roots of morality in a milieu that long ago dispensed with the luxury of a God with a big club to serve as the final arbiter of what is “really good” and “really evil” is no mean task.  Among other things it requires some often amusing intellectual contortions as well as the concoction of an arcane jargon to serve as a smokescreen.

    Consider, for example, a paper by Professors Allen Buchanan and Russell Powell entitled Toward a Naturalistic Theory of Moral Progress.  It turned up in the journal Ethics, that ever-reliable guide to academic fashion touching on the question of “human flourishing.”  Far from denying the existence of human nature after the fashion of the Blank Slaters of old, the authors positively embrace it.  They cheerfully admit its relevance to morality, noting in particular the existence of a predisposition in our species to perceive others of our species in terms of ingroups and outgroups; what Robert Ardrey used to call the Amity/Enmity Complex.  Now, if these things are true, and absent the miraculous discovery of any other contributing “root cause” for morality other than evolution by natural selection, whether in this world or the realm of spirits, it follows logically that “progress” is a term that can no more apply to morality than it does to evolution by natural selection itself.  It further follows that objective Good and objective Evil are purely imaginary categories.  In other words, unless one is merely referring to the scientific investigation of evolved behavioral traits, “experts on ethics” are experts about nothing.  Their claim to possess a philosopher’s stone pointing the way to how we should act is a chimera.  For the last several thousand years they have been involved in a sterile game of bamboozling the rest of us, and themselves to boot.

    Predictably, the embarrassment and loss of gravitas, not to mention the loss of a regular paycheck, implied by such a straightforward admission of the obvious have been more than the “experts” could bear.  They’ve simply gone about their business as if nothing had happened, and no one had ever heard of a man named Darwin.  It’s actually been quite easy for them in this puritanical and politically correct age, in which the intellectual life and self-esteem of so many depend on maintaining a constant state of virtuous indignation and moral outrage.  Virtuous indignation and moral outrage are absurd absent the existence of an objective moral standard.  Since nothing of the sort exists, it is simply invented, and everyone stays outraged and happy.

    In view of this pressing need to prop up the moral fashions of the day, then, it follows that no great demands are placed on the rigor of modern techniques for concocting real Good and real Evil.  Consider, for example, the paper referred to above.  The authors go to a great deal of trouble to assure their readers that their theory of “moral progress” really is “naturalistic.”  In this enlightened age, they tell us, they will finally be able to steer clear of the flaws that plagued earlier attempts to develop secular moralities.  These all rested on false assumptions “based on folk psychology, flawed attempts to develop empirically based psychological theories, a priori speculation, and reflections on history hampered both by a lack of information and inadequate methodology.”  “For the first time,” they tell us, “we are beginning to develop genuinely scientific knowledge about human nature, especially through the development of empirical psychological theories that take evolutionary biology seriously.”  This begs the question, of course, of how we’ve managed to avoid acquiring “scientific knowledge about human nature” and “taking evolutionary biology seriously” for so long.  But I digress.  The important question is, how do the authors manage to establish a rational basis for their “naturalistic theory of moral progress” while avoiding the Scylla of “folk psychology” on the one hand and the Charybdis of “a priori speculation” on the other?  It turns out that the “basis” in question hardly demands any complex mental gymnastics.  It is simply assumed!

    Here’s the money passage in the paper:

    A general theory of moral progress could take a more or a less ambitious form.  The more ambitious form would be to ground an account of which sorts of changes are morally progressive in a normative ethical theory that is compatible with a defensible metaethics… In what follows we take the more modest path:  we set aside metaethical challenges to the notion of moral progress, we make no attempt to ground the claim that certain moralities are in fact better than others, and we do not defend any particular account of what it is for one morality to be better than another.  Instead, we assume that the emergence of certain types of moral inclusivity are significant instances of moral progress and then use these as test cases for exploring the feasibility of a naturalized account of moral progress.

    This is indeed a strange approach to being “naturalistic.”  After excoriating the legions of thinkers before them for their faulty mode of hunting the philosopher’s stone of “moral progress,” they simply assume it exists.  It exists in spite of the elementary chain of logic leading inexorably to the conclusion that it can’t possibly exist if their own claims about the origins of morality in human nature are true.  In what must count as a remarkable coincidence, it exists in the form of “inclusivity,” currently in high fashion as one of the shibboleths defining the ideological box within which most of today’s “experts on ethics” happen to dwell.  Those who trouble themselves to read the paper will find that, in what follows, it is hardly treated as a mere modest assumption, but as an established, objective fact.  “Moral progress” is alluded to over and over again as if, by virtue of this original “modest assumption,” the real thing somehow magically popped into existence in the guise of “inclusivity.”

    Suppose we refrain from questioning the plot, and go along with the charade.  If inclusivity is really to count as moral progress, then it must not only be desirable in certain precincts of academia, but actually feasible.  However, if, as the authors agree, humans are predisposed to perceive others of their species in terms of ingroups and outgroups, the feasibility of inclusivity is at least in question.  As the authors put it,

    Attempts to draw connections between contemporary evolutionary theories of morality and the possibility of inclusivist moral progress begin with the standard evolutionary psychological assertion that the main contours of human moral capacities emerged through a process of natural selection on hunter-gatherer groups in the Pleistocene – in the so-called environment of evolutionary adaptation (EEA)… The crucial claim, which leads some thinkers to draw a pessimistic inference about the possibility of inclusivist moral progress, is that selection pressures in the EEA favored exclusivist moralities.  These are moralities that feature robust moral commitments among group members but either deny moral standing to outsiders altogether, relegate out-group members to a substantially inferior status, or assign moral standing to outsiders contingent on strategic (self-serving) considerations.

    No matter, according to the authors, this flaw in our evolved moral repertoire can be easily fixed.  All we have to do is lift ourselves out of the EEA, achieve universal prosperity so great and pervasive that competition becomes unnecessary, and the predispositions in question will simply fade away, more or less like the state under Communism.  Invoking that wonderful term “plasticity,” which seems to pop up with every new attempt to finesse human behavioral traits out of existence, they write,

    According to an account of exclusivist morality as a conditionally expressed (adaptively plastic) trait, the suite of attitudes and behaviors associated with exclusivist tendencies develop only when cues that were in the past highly correlated with out-group threat are detected.

    In other words, it is the fond hope of the authors that, if only we can make the environment in which inconvenient behavioral predispositions evolved disappear, the traits themselves will disappear as well!  They go on to claim that this has actually happened, and that,

    …exclusivist moral tendencies are attenuated in populations inhabiting environments in which cues of out-group threat are absent.

    Clearly we have seen a vast expansion in the number of human beings that can be perceived as ingroup since the Pleistocene, and the inclusion as ingroup of racial and religious categories that once defined outgroups.  There is certainly plasticity in how ingroups and outgroups are actually defined and perceived, as one might expect of traits that evolved during times when the nature of the “others” one happened to be in contact with, or aware of, could change rapidly.  However, this hardly “proves” that the fundamental tendency to distinguish between ingroups and outgroups itself will disappear or is likely to disappear in response to any environmental change whatever.  Perhaps the best way to demonstrate this is to refer to the paper itself.

    Clearly the authors imagine themselves to be “inclusive,” but is that really the case?  Hardly!  It turns out they have a very robust perception of outgroup.  They’ve merely fallen victim to the fallacy that it “doesn’t count” because it’s defined in ideological rather than racial or religious terms.  Their outgroup may be broadly defined as “conservatives.”  These “conservatives” are mentioned over and over again in the paper, always in the guise of the bad guys who are supposed to reject inclusivism and resist “moral progress.”  To cite a few examples,

    We show that although current evolutionary psychological understandings of human morality do not, contrary to the contentions of some authors, support conservative ethical and political conclusions, they do paint a picture of human morality that challenges traditional liberal accounts of moral progress.

    …there is no good reason to believe conservative claims that the shift toward greater inclusiveness has reached its limit or is unsustainable.

    These “evoconservatives,” as we have labeled them, infer from evolutionary explanations of morality that inclusivist moralities are not psychologically feasible for human beings.

    At the same time, there is strong evidence that the development of exclusivist moral tendencies – or what evolutionary psychologists refer to as “in-group assortative sociality,” which is associated with ethnocentric, xenophobic, authoritarian, and conservative psychological orientations – is sensitive to environmental cues…

    and so on, and so on.  In a word, although the good professors are fond of pointing with pride to their vastly expanded ingroup, they have rather more difficulty seeing their equally expanded outgroup, more or less like the difficulty we have seeing the nose at the end of our face.  The fact that the conservative outgroup is perceived with as much fury, disgust, and hatred as ever a Grand Dragon of the Ku Klux Klan felt for blacks or Catholics can be confirmed by simply reading through the comment section of any popular website of the ideological Left.  Unless professors employed by philosophy departments live under circumstances more reminiscent of the Pleistocene than I had imagined, this bodes ill for their theory of “moral progress” based on “inclusivity.”  More evidence that this is the case is readily available to anyone who cares to look for “diversity,” in the form of a professor who could be described as conservative by any stretch of the imagination, in the philosophy department of the local university.

    I note in passing another passage in the paper that demonstrates the fanaticism with which the chimera of “moral progress” is pursued in some circles.  Again quoting the authors,

    Some moral philosophers, whom we have elsewhere called “evoliberals,” have tacitly affirmed the evo-conservative view in arguing that biomedical interventions that enhance human moral capacities are likely to be crucial for major moral progress due to evolved constraints on human moral nature.

    In a word, the delusion of moral progress is not necessarily just a harmless toy for the entertainment of professors of philosophy, at least as far as those who might have some objection to “biomedical interventions” carried out by self-appointed “experts on ethics” are concerned.

    What’s the point?  The point is that we are unlikely to make progress of any kind without first accepting the truth about our own nature, and the elementary logical implications of that truth.  Darwin saw them, Westermarck saw them, and they are far more obvious today than they were then.  We continue to ignore them at our peril.

  • Frans de Waal on Animal Smartness and the Rehabilitation of Konrad Lorenz

    Posted on June 5th, 2016 Helian 17 comments

    It’s heartening to learn that there is a serious basis for recent speculation to the effect that the science of animal cognition may gradually advance to a level long familiar to any child with a pet dog.  Frans de Waal breaks the news in his latest book, Are We Smart Enough to Know How Smart Animals Are?  In answer to his own question, de Waal writes,

    The short answer is “Yes, but you’d never have guessed.”  For most of the last century, science was overly cautious and skeptical about the intelligence of animals.  Attributing intentions and emotions to animals was seen as naïve “folk” nonsense.  We, the scientists, knew better!  We never went in for any of this “my dog is jealous” stuff, or “my cat knows what she wants,” let alone anything more complicated, such as that animals might reflect on the past or feel one another’s pain… The two dominant schools of thought viewed animals as either stimulus-response machines out to obtain rewards and avoid punishment or as robots genetically endowed with useful instincts.  While each school fought the other and deemed it too narrow, they shared a fundamentally mechanistic outlook:  there was no need to worry about the internal lives of animals, and anyone who did was anthropomorphic, romantic and unscientific.

    Did we have to go through this bleak period?  In earlier days, the thinking was noticeably more liberal.  Charles Darwin wrote extensively about human and animal emotions, and many a scientist in the nineteenth century was eager to find higher intelligence in animals.  It remains a mystery why these efforts were temporarily suspended, and why we voluntarily hung a millstone around the neck of biology.

    Here I must beg to differ with de Waal.  It is by no means a “mystery.”  This “mechanization” of animals in the sciences was more or less contemporaneous with the Blank Slate debacle, and was motivated by more or less the same ideological imperatives.  I invite readers interested in the subject to consult the first few chapters of Robert Ardrey’s African Genesis, published as far back as 1961.  Noting a blurb in Scientific American by Marshall Sahlins, more familiar to later readers as a collaborator in the slander of Napoleon Chagnon, to the effect that,

    There is a quantum difference, at points a complete opposition, between even the most rudimentary human society and the most advanced subhuman primate one.  The discontinuity implies that the emergence of human society required some suppression, rather than direct expression, of man’s primate nature.  Human social life is culturally, not biologically determined.

    Ardrey, that greatest of all debunkers of the Blank Slate, continues,

    Dr. Sahlins’ conclusion is startling to no one but himself.  It is a scientific restatement, 1960-style, of the philosophical conclusion of an eighteenth-century Neapolitan monk (Giambattista Vico, ed.):  Society is the work of man.  It is just another prop, fashioned in the shop of science’s orthodoxies from the lumber of Zuckerman’s myth, to support the fallacy of human uniqueness.

    The Zuckerman Ardrey refers to is anthropologist Solly Zuckerman.  I invite anyone who doubts the fanaticism with which “science” once insisted on the notion of human uniqueness alluded to in de Waal’s book to read some of Zuckerman’s papers.  For example, in The Social Life of Monkeys and Apes, he writes,

    It is now generally recognized that anthropomorphic preoccupations do not help the critical development of knowledge, either in fields of physical or biological inquiry.

    He exulted in the great “advances” science had made in correcting the “mistakes” of Darwin:

    The Darwinian period, in which animal behavior as a distinct study was born, was one in which anthropomorphic interpretation flourished.  Anecdotes were regarded in the most generous light, and it was believed that many animals were highly rational creatures, possessed of exalted ethical codes of social behavior.

    According to Zuckerman, “science” had now discovered that the very notion of animal “intelligence” was absurd.  As he put it,

    Until 1890, the study of the social behavior of mammals developed hand in hand with the study of their “intelligence,” and both subjects were usually treated in the same books.

    Such comments, which are ubiquitous in the literature of the Blank Slate era, make it hard to understand how de Waal can still be “mystified” about the motivation for the “scientific” denial of animal intelligence.  Be that as it may, he presents a wealth of data derived from recent experiments and field studies debunking the lingering rationales for claims of human uniqueness one by one, whether it be the ability to experience emotion, a “theory of mind,” social problem-solving ability, the ability to contemplate the past and future, or even consciousness.  In the process he documents the methods “science” used to hermetically seal itself off from reality, such as the invention of pejorative terms like “anthropomorphism” to denounce and dismiss anyone who dared to challenge the human uniqueness orthodoxy, and the rejection of all evidence not supplied by members of the club as mere “anecdotes.”  Along the way he notes,

    Needing a new term to make my point, I invented anthropodenial, which is the a priori rejection of humanlike traits in other animals or animallike traits in us.

    It’s hard to imagine that anyone could seriously believe that “science” consists of fanatically rejecting similarities between human and animal behavior that are obvious to everyone but “scientists” as “anthropomorphism” and “anecdotes” and assuming a priori that they’re of no significance until it can be absolutely proven that everyone else was right all along.  This does not strike me as a “parsimonious” approach.

    Not the least interesting feature of de Waal’s latest is his “rehabilitation” of several important debunkers of the Blank Slate who were unfortunate enough to publish before the appearance of E. O. Wilson’s Sociobiology in 1975.  According to the fairy tale that currently passes for the “history” of the Blank Slate, before 1975 “darkness was on the face of the deep.”  Only then did Wilson appear on the scene as the heroic slayer of the Blank Slate dragon.  A man named Robert Ardrey was never heard of, and anyone mentioned in his books as an opponent of the Blank Slate before the Wilson “singularity” is to be ignored.  The most prominent of them all, a man on whom the anathemas of the Blank Slaters often fell, literally in the same breath as Ardrey, was Konrad Lorenz.  Sure enough, in Steven Pinker’s fanciful “history” of the Blank Slate, Lorenz is dismissed, in the same paragraph with Ardrey, no less, as “totally and utterly wrong,” and a delusional believer in “archaic theories such as that aggression was like the discharge of a hydraulic pressure.”  De Waal’s response must be somewhat discomfiting to the promoters of Pinker’s official “history.”  He simply ignores it!

    Astoundingly enough, de Waal speaks of Lorenz as one of the great founding fathers of the modern sciences of animal behavior and cognition.  In other words, he tells the truth, as if it had never been disputed in any bowdlerized “history.”  Already at the end of the prologue we find the matter-of-fact observation that,

    …behavior is, as the Austrian ethologist Konrad Lorenz put it, the liveliest aspect of all that lives.

    Reading on, we find that this mention of Lorenz wasn’t just an anomaly designed to wake up drowsy readers.  In the first chapter we find de Waal referring to the field of phylogeny,

    …when we trace traits across the evolutionary tree to determine whether similarities are due to common descent, the way Lorenz had done so beautifully for waterfowl.

    A few pages later he writes,

    The maestro of observation, Konrad Lorenz, believed that one could not investigate animals effectively without an intuitive understanding grounded in love and respect.

    and notes, referring to the behaviorists, that,

    The power of conditioning is not in doubt, but the early investigators had totally overlooked a crucial piece of information.  They had not, as recommended by Lorenz, considered the whole organism.

    And finally, in a passage that seems to scoff at Pinker’s “totally and utterly wrong” nonsense, he writes,

    Given that the facial musculature of humans and chimpanzees is nearly identical, the laughing, grinning, and pouting of both species likely goes back to a common ancestor.  Recognition of the parallel between anatomy and behavior was a great leap forward, which is nowadays taken for granted.  We all now believe in behavioral evolution, which makes us Lorenzians.

    Stunning, really, for anyone who’s followed what’s been going on in the behavioral and animal sciences for any length of time.  And that’s not all.  Other Blank Slate debunkers who published long before Wilson, like Niko Tinbergen and Desmond Morris, are mentioned with a respect that belies the fact that they, too, were once denounced by the Blank Slaters as right-wing fascists and racists in the same breath with Lorenz.  I have a hard time believing that someone as obviously well-read as de Waal has never seen Pinker’s The Blank Slate.  I honestly don’t know what to make of the fact that he can so blatantly contradict Pinker, and yet never trouble himself to mention even the bare existence of such a remarkable disconnect.  Is he afraid of Pinker?  Does he simply want to avoid hurting the feelings of another member of the academic tribe?  I must leave it up to the reader to decide.

    And what of Ardrey, who brilliantly described both “anthropodenial” and the reasons that it was by no means a “mystery” more than half a century before the appearance of de Waal’s latest book?  Will he be rehabilitated, too?  Don’t hold your breath.  Unlike Lorenz, Tinbergen and Morris, he didn’t belong to the academic tribe.  The fact that it took an outsider to smash the Blank Slate and give a few academics the courage to finally stick their noses out of the hole they’d dug for themselves will likely remain deep in the memory hole. It happens to be a fact  that is just too humiliating and embarrassing for them to ever admit.  It would seem the history of the affair can be adjusted, but it will probably never be corrected.

  • The God Myth and the “Humanity Can’t Handle The Truth” Gambit

    Posted on May 12th, 2016 Helian 5 comments

    Hardly a day goes by without some pundit bemoaning the decline in religious faith.  We are told that great evils will inevitably befall mankind unless we all believe in imaginary super-beings.  Of course, these pundits always assume a priori that the particular flavor of religion they happen to favor is true.  Absent that assumption, their hand wringing boils down to the argument that we must all somehow force ourselves to believe in God whether that belief seems rational to us or not.  Otherwise, we won’t be happy, and humanity won’t flourish.

    An example penned by Dennis Prager entitled Secular Conservatives Think America Can Survive the Death of God that appeared recently at National Review Online is typical of the genre.  Noting that even conservative intellectuals are becoming increasingly secular, he writes that,

    They don’t seem to understand that the only solution to many, perhaps most, of the social problems ailing America and the West is some expression of Judeo-Christian religion.

    In another article entitled If God is Dead…, Pat Buchanan echoes Prager, noting, in a rather selective interpretation of history, that,

    When, after the fall of the Roman Empire, the West embraced Christianity as a faith superior to all others, as its founder was the Son of God, the West went on to create modern civilization, and then went out and conquered most of the known world.

    The truths America has taught the world, of an inherent human dignity and worth, and inviolable human rights, are traceable to a Christianity that teaches that every person is a child of God.

    Today, however, with Christianity virtually dead in Europe and slowly dying in America, Western culture grows debased and decadent, and Western civilization is in visible decline.

    Both pundits draw attention to a consequence of the decline of traditional religions that is less a figment of their imaginations: the rise of secular religions to fill the ensuing vacuum.  The examples typically cited include Nazism and Communism.  There does seem to be some innate feature of human behavior that predisposes us to adopt such myths, whether of the spiritual or secular type.  It is most unlikely that it comes in the form of a “belief in God” or “religion” gene.  It would be very difficult to explain how anything of the sort could pop into existence via natural selection.  It seems reasonable, however, that less specialized and more plausible behavioral traits could account for the same phenomenon.  Which begs the question, “So what?”

    Pundits like Prager and Buchanan are putting the cart before the horse.  Before one touts the advantages of one brand of religion or another, isn’t it first expedient to consider the question of whether it is true?  If not, then what is being suggested is that mankind can’t handle the truth.  We must be encouraged to believe in a pack of lies for our own good.  And whatever version of “Judeo-Christian religion” one happens to be peddling, it is, in fact, a pack of lies.  The fact that it is a pack of lies, and obviously a pack of lies, explains, among other things, the increasingly secular tone of conservative pundits so deplored by Buchanan and Prager.

    It is hard to understand how anyone who uses his brain as something other than a convenient stuffing for his skull can still take traditional religions seriously.  The response of the remaining true believers to the so-called New Atheists is telling in itself.  Generally, they don’t even attempt to refute their arguments.  Instead, they resort to ad hominem attacks.  The New Atheists are too aggressive, they have bad manners, they’re just fanatics themselves, etc.  They are not arguing against the “real God,” who, we are told, is not an object, a subject, or a thing ever imagined by sane human beings, but some kind of an entity perched so high up on a shelf that profane atheists can never reach Him.  All this spares the faithful from making fools of themselves with ludicrous mental flip flops to explain the numerous contradictions in their holy books, tortured explanations of why it’s reasonable to assume the “intelligent design” of something less complicated by simply assuming the existence of something vastly more complicated, and implausible yarns about how an infinitely powerful super-being can be both terribly offended by the paltry sins committed by creatures far more inferior to Him than microbes are to us, and at the same time incapable of just stepping out of the clouds for once and giving us all a straightforward explanation of what, exactly, he wants from us.

    In short, Prager and Buchanan would have us somehow force ourselves, perhaps with the aid of brainwashing and judicious use of mind-altering drugs, to believe implausible nonsense, in order to avoid “bad” consequences.  One can’t dismiss this suggestion out of hand.  Our species is a great deal less intelligent than many of us seem to think.  We use our vaunted reason to satisfy whims we take for noble causes, without ever bothering to consider why those whims exist, or what “function” they serve.  Some of them apparently predispose us to embrace ideological constructs that correspond to spiritual or secular religions.  If we use human life as a metric, P&B would be right to claim that traditional spiritual religions have been less “bad” than modern secular ones, costing only tens of millions of lives via religious wars, massacres of infidels, etc., whereas the modern secular religion of Communism cost, in round numbers, 100 million lives, and in a relatively short time, all by itself.  Communism was also “bad” to the extent that we value human intelligence, tending to selectively annihilate the brightest portions of the population in those countries where it prevailed.  There can be little doubt that this “bad” tendency substantially reduced the average IQ in nations like Cambodia and the Soviet Union, resulting in what one might call their self-decapitation.  Based on such metrics, Prager and Buchanan may have a point when they suggest that traditional religions are “better,” to the extent that one realizes that one is merely comparing one disaster to another.

    Can we completely avoid the bad consequences of believing the bogus “truths” of religions, whether spiritual or secular?  There seems to be little reason for optimism on that score.  The demise of traditional religions has not led to much in the way of rational self-understanding.  Instead, as noted above, secular religions have arisen to fill the void.  Their ideological myths have often trumped reason in cases where there has been a serious confrontation between the two, occasionally resulting in the bowdlerization of whole branches of the sciences.  The Blank Slate debacle was the most spectacular example, but there have been others.  As belief in traditional religions has faded, we have gained little in the way of self-knowledge in their wake.  On the contrary, our species seems bitterly determined to avoid that knowledge.  Perhaps our best course really would be to start looking for a path back inside the “Matrix,” as Prager and Buchanan suggest.

    All I can say is that, speaking as an individual, I don’t plan to take that path myself.  It has always seemed self-evident to me that, whatever our goals and aspirations happen to be, we are more likely to reach them if we base our actions on an accurate understanding of reality rather than myths, on truth rather than falsehood.  A rather fundamental class of truths are those that concern, among other things, where those goals and aspirations came from to begin with.  These are the truths about human behavior: why we want what we want, why we act the way we do, why we are moral beings, why we pursue what we imagine to be noble causes.  I believe that the source of all these truths, the “root cause” of all these behaviors, is to be found in our evolutionary history.  The “root cause” we seek is natural selection.  That fact may seem inglorious or demeaning to those who lack imagination, but it remains a fact for all that.  Perhaps, after we sacrifice a few more tens of millions in the process of chasing paradise, we will finally start to appreciate its implications.  I think we will all be better off if we do.

  • Morality and the Truth as “Nihilism”

    Posted on April 24th, 2016 Helian 3 comments

    When the keepers of the official dogmas in the Academy encounter an inconvenient truth, they refute it by calling it bad names.  For example, the fact of human biodiversity is “racist,” and the fact of human nature was “fascist” back in the heyday of the Blank Slate.  I encountered another example in “Ethics” journal in one of the articles I discussed in a recent post: Only All Naturalists Should Worry About Only One Evolutionary Debunking Argument, by Tomas Bogardus.  It was discreetly positioned in a footnote to the following sentence:

    Do these evolutionary considerations generate an epistemic challenge to moral realism, that is, the view that evaluative properties are mind-independent features of reality and we sometimes have knowledge of them?

    The footnote reads as follows:

    As opposed to nihilism – on which there are no moral truths – and subjectivist constructivism or expressivism, on which moral truths are functions of our evaluative attitudes themselves.

    This “scientific” use of the pejorative term “nihilism” to “refute” the conclusion that there are no moral truths fits the usual pattern.  According to its Wiki blurb, the term “nihilism” was used in a similar manner when it was first coined by Friedrich Jacobi to “refute” disbelief in the transcendence of God.  Wiki gives a whole genealogy of the various uses of the term.  However, the most common image the term evokes is probably one of wild-eyed, bomb hurling 19th century Russian radicals.  No matter.  If something is true, it will remain true regardless of how often it is denounced as racist, fascist, or nihilist.

    At this point in time, the truth about morality is sufficiently obvious to anyone who cares to think about it.  It is a manifestation of behavioral predispositions that evolved at times very different from the present.  It has no purpose.  It exists because the genes responsible for its existence happened to improve the odds that the package of genes to which they belonged would survive and reproduce.  That truth is very inconvenient.  It reduces the “expertise” of the “experts on ethics,” an “expertise” that is the basis of their respect and authority in society, and not infrequently of their gainful employment as well, to an expertise about nothing.  It also exposes that which the vast majority of human beings “know in their bones” to be true as an illusion.  For all that, it remains true.

    To the extent that the term “nihilist” has any meaning in the context of morality at all, it suggests that the world will dissolve in moral chaos unless some basis for objective morality can be extracted from the vacuum.  Rape, murder and mayhem will prevail when we all realize we’ve been hoodwinked by the philosophers all these years, and there really is no such basis.  The truth is rather more prosaic.  Human beings will behave morally regardless of the intellectual fashions prevailing among the philosophers because it is their nature to act morally.

    Moral chaos will not result from mankind finally learning the “nihilist” truth about morality.  Indeed, it’s hard to imagine a state of moral chaos worse than the one we’re already in.  Chaos doesn’t exist because of a gradually spreading understanding of the subjective roots of morality.  Rather, it exists as a byproduct of continued attempts to prop up the façade of moral realism.  The current “bathroom wars” are an instructive if somewhat ludicrous example.  They demonstrate both the strong connection between custom and morality, and the typical post hoc rationalization of moral “truths” described by Jonathan Haidt in his paper, The Emotional Dog and its Rational Tail.

    Edvard Westermarck explored the custom/morality connection in his Ethical Relativity, an indispensable text for anyone interested in the subject of moral behavior.  According to Westermarck,

    Customs are not merely public habits – the habits of a certain circle of men, a racial or national community, a rank or class of society – but they are at the same time rules of conduct.  As Cicero observes, the customs of a people “are precepts in themselves.”  We say that “custom commands,” or “custom demands,” and even when custom simply allows the commission of a certain class of actions, it implicitly lays down the rule that such actions are not to be interfered with.  And the rule of custom is conceived of as a moral rule, which decides what is right and wrong.

    However, the rule of custom can be challenged.  Westermarck noted that, as societies became more complex,

    Individuals arose who found fault with the moral ideas prevalent in the community to which they belonged, criticizing them on the basis of their own individual feelings… In the course of progressive civilization the moral consciousness has tended towards a greater equalization of rights, towards an expansion of the circle within which the same moral rules are held applicable.  And this process has been largely due to the example of influential individuals and their efforts to raise public opinion to their own standard of right.

    As Westermarck points out, in both cases the individuals involved are responding to subjective moral emotions, yet in both cases they suffer from the illusion that their emotions somehow correspond to objective facts about good and evil.  In the case of the bathroom wars, the defenders of custom rationalize their disapproval after the fact by evoking lurid pictures of perverts molesting little girls.  The problem is that, at least to the best of my knowledge, there is no data indicating that anything of the sort involving a transgender person has ever happened.  On the other side, the LGBT community points to this disconnect without realizing that they are just as deluded in their belief that their preferred bathroom rules are distilled straight out of objective Good and Evil.  In fact, they are nothing but personal preferences, with no more legitimate normative authority than the different rules preferred by others.  It seems to me that the term “nihilism” is better applied to this absurd state of affairs than to a correct understanding of what morality is and why it exists.

    Suppose that in some future utopia the chimera of “moral realism” were finally exchanged for such a correct understanding, at least by most of us.  It would change very little.  Our moral emotions would still be there, and we would respond to them as we always have.  “Moral relativism” would be no more prevalent than it is today, because it is not our nature to be moral relativists.  However, we might have a fighting chance of coming up with a set of moral “customs” that most of us could accept, along with a similarly accepted way to change them if necessary.  I would certainly prefer such a utopia to the moral obscurantism that prevails today.  If nothing else it would tend to limit the moral exhibitionism and virtuous grandstanding that led directly to the ideological disasters of the 20th century, and yet still pass as the “enlightened” way to alter the moral rules that apply in bathrooms and elsewhere.  Perhaps in such a utopia “nihilism” would be rejected even more firmly than it is today, because people would finally realize that, in spite of the subjective, emotional source of all moral rules, human societies can’t exist without them.

  • More Fun with “Ethics” Journal; Of Moral Realism and Evolutionary Debunking

    Posted on April 12th, 2016 Helian 3 comments

    Moral realism died with Darwin.  He was perfectly well aware that there is such a thing as human nature, and that morality is a manifestation thereof.  He also had an extremely pious wife and lived in Victorian England, so was understandably reticent about discussing the subject.  However, in one of his less guarded moments he wrote (in The Descent of Man and Selection in Relation to Sex),

    If, for instance, to take an extreme case, men were reared under precisely the same conditions as hive-bees, there can hardly be a doubt that our unmarried females would, like the worker bees, think it a sacred duty to kill their brothers, and mothers would strive to kill their fertile daughters, and no one would think of interfering.

    Assuming he believed his own theory, Darwin was merely stating the obvious.  Francis Hutcheson had demonstrated more than a century earlier that morality is a manifestation of innate moral sentiments.  He was echoed by David Hume, who pointed out that morality could not be derived from pure reason operating alone, and suggested that other than divine agencies might explain the existence of the sentiments in question.  Darwin supplied the final piece of the puzzle, discovering what that agency was.

    Many writers discussed the evolutionary origins of morality in the late 19th and early 20th centuries.  Few, however, were prepared to accept the conclusion that logically followed; the non-existence of objective Good and Evil, independent of any human opinion on the matter.  One of the few who did accept that conclusion, and outline its implications, was Edvard Westermarck, in his The Origin and Development of the Moral Ideas (1906), and Ethical Relativity (1932).  Westermarck was well aware that, although Good and Evil are not real, objective things, human moral emotions are easily strong enough to portray them as such to our imaginations.  They are so strong, in fact, that, more than a century after Westermarck took up the subject, the illusion is still alive and well, not only in the public at large, but even among the “experts on ethics.”

    Or at least that is the impression one gets on glancing through the pages of the academic journal Ethics.  There one commonly finds papers by learned professors who doggedly promote the notion of “moral realism,” and the objective existence of Good and Evil, presumably either as “spirits” or in some higher dimension beyond the ken of our best scientific instruments.  True, their jobs and social gravitas depend on how well they can maintain the charade, but I get the distinct impression that some of them actually believe what they write.  Lately, however, they have begun to feel the heat, in the form of what is referred to in the business as “evolutionary debunking.”

    The obvious implication of Darwin’s theory is that the innate predispositions responsible for human morality evolved, and the various and occasionally gaudy ways in which those predispositions manifest themselves in our behavior are pretty much what one would expect when those predispositions are mediated and interpreted in the minds of creatures with large brains.  The existence of Good and Evil as independent things is about as likely as the existence of fairies in Richard Dawkins’ garden.  How is it, then, that the “experts on ethics” haven’t closed up shop and moved on to less futile occupations?  To answer that question, we must again refer to the pages of Ethics.

    Two articles that appeared in the most recent issue demonstrate the degree to which the shock waves from the collapse of the Blank Slate have penetrated into even the darkest and most remote nooks of academia.  The first, by Tomas Bogardus, is entitled “Only All Naturalists Should Worry About Only One Evolutionary Debunking Argument.”  It begins with the rhetorical question, “Do the facts of evolution undermine moral realism?”  You think you know the answer, don’t you, dear reader?  But wait!  Before you jump to conclusions, you should be aware that the bar is set fairly high for “evolutionary debunking” arguments.  You may agree with me that the existence of pink unicorns is improbable, but can you absolutely prove it?  That’s the kind of standard we’re talking about.  It’s not necessary for today’s crop of moral realists to explain the mode of existence of such imaginary categories as Good and Evil.  It’s not necessary for them to explain the mysteries of their creation.  It’s not necessary for them to explain how moral emotions turned up in human brains, or why the possibility of their evolutionary origins is irrelevant, or how they manage to jump from the skull of one human being onto the back of another with ease.  No, “evolutionary debunking” requires that you absolutely prove that there are no pink unicorns.

    Let’s refer to Prof. Bogardus’ paper to see how this works in practice.  According to the author, one species of evolutionary debunking arguments runs as follows:

    Our moral faculty was naturally selected to produce adaptive moral beliefs, and not naturally selected to produce true moral beliefs.

    Therefore, it is false that:  had the moral truths been different, and had we formed our moral beliefs using the same method we actually used, our moral beliefs would have been different.

    Therefore, our moral beliefs are not sensitive.

    Therefore, our moral beliefs do not count as knowledge.

    In other words, nothing as tiresome as demonstrating that moral realism is the least bit plausible is necessary to defeat evolutionary debunking arguments.  All that’s necessary is to show that any of the “therefores” in the above “argument” is at all shaky.  In that case, the pink unicorn must still be out there roaming around.  Prof. Bogardus reviews other evolutionary debunking arguments, and ends his paper on the hopeful note that one of them, which he describes as the “Argument from Symmetry,” may actually be bulletproof, if only to the assaults of the “Naturalists.”  (It turns out there are other, less vulnerable tribes of moral realists, such as “Rationalists” and “Divine Revelationists.”)  I’m not as sanguine as the good professor.  I suspect that proving a negative will be difficult even with the “Argument from Symmetry.”

    In another paper, entitled “Reductionist Moral Realism and the Contingency of Moral Evolution,” author Max Barkhausen reveals some of the astounding intellectual double back flips moral realists routinely perform in order to accept both the evolution of moral emotions and the existence of objective Good and Evil at the same time.  For example, one strategy, which he attributes to philosophers Frank Jackson and Philip Pettit and aptly refers to as “Panglossianism,” posits that, while human morality does indeed have evolutionary roots, by pure coincidence the end product just happened to agree with “true” morality.  Such luck!  Barkhausen assures us that his paper debunks such notions, and I am content to take him at his word.

    Here again, however, there is no hint of a suggestion that those who posit the existence of Good and Evil as objective things existing independently of human minds lay their cards on the table and reveal what substance those things consist of, or defend the alternative belief that things can consist of nothing, or suggest what experiments might be performed to actually snag a “Good” or “Evil” as it floats about, whether in the material world or the realm of ghosts.  The only standard they are held to is the mere avoidance of absolute proof that their pink unicorns are a figment of their imagination.  It stands to reason.  After all, as far as the “experts on ethics” are concerned, the closest thing to “absolute Good” they will ever encounter is a tenured position with a substantial and regular paycheck.  They would have to sacrifice that particular “absolute Good” if they were ever required to stop waving their hands about objective morality and either explain to the rest of us the mode of existence of these “objects” they’ve been imagining all these years, or admit the sterility of their “expertise.”  Barkhausen admits as much, concluding with the sentence,

    I believe that it will be a great challenge to construct a meta-ethical theory that accommodates both contingency and our intuitions about objectivity and mind-independence.  How to reconcile the two is, no doubt, an issue that merits further thought.

    Yes, and no doubt the effort to do so will be a virtually inexhaustible topic for the papers in journals like Ethics that are the coin of the realm in academia.  On the other hand, admitting the obvious – that objectivity and mind-independence are illusions – would tend to bring the whole, futile exercise to a screeching halt.

    I note in passing that the jargon in use to prop up the illusion is becoming increasingly arcane and abstruse.  If you’re masochistic enough to try to read these journals for yourself, be sure to bring along your secret decoder ring.  There’s no better way to defend your academic turf than to deny access to anyone who hasn’t mastered the lingo.

    Westermarck had it right.  Back in 1906 he wrote,

    As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity.  The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments.  The intensity of his emotions makes him the victim of an illusion.

    The presumed objectivity of moral judgments thus being a chimera there can be no moral truth in the sense in which this term is generally understood.  The ultimate reason for this is that the moral concepts are based upon emotions and that the contents of an emotion fall entirely outside the category of truth.

    No “moral progress” will be possible until we recognize that salient fact.  It’s hard to construe what one finds in the pages of journals like Ethics as “progress” by any rational definition of the term in any case.  In the papers referred to above, for example, cultural evolution is referred to as something entirely independent of biological evolution, instead of the manifestation of biological evolution that it actually is.  There are constant references to the “function” of morality, as if morality had a “purpose.”  One cannot speak of a purpose or a function of something that exists because it happened to increase the odds that particular genes would survive and reproduce.  “Function” implies a creator with conscious intent, and nothing of the sort is involved in the process of evolution by natural selection.  Such terms may be useful as a form of shorthand for describing what actually happened, but only if one is careful to avoid misunderstanding of the sense in which they are being used.  When used carelessly in discussions of moral realism, they serve mainly to distract and obfuscate.

    What is really necessary for “moral progress?”  For starters, we need to understand why morality exists, and the subjective nature of its existence.  We need to understand that it evolved, at least for the most part, in times vastly different from the present.  We need to stop pretending that morality’s only “function” is to promote intergroup and intragroup cooperation.   Altruism has a real subjective existence in our brains, but so do outgroup identification, hatred, rage and “aggression.”  These “immoral” tendencies are seldom mentioned in the pages of Ethics, but we ignore them at our peril.  As long as we continue to ignore them, it is premature to speak of “progress.”