Helian Unbound

The world as I see it
  • On the Continuing Adventures of the “Killer Ape Theory” Zombie

    Posted on November 19th, 2014 by Helian

    An article entitled “The Evolution of War – A User’s Guide” recently turned up at “This View of Life,” a website hosted by David Sloan Wilson. Written by Anthony Lopez, it is one of the more interesting artifacts of the ongoing “correction” of the history of the debate over human nature I’ve seen in a while. One of the reasons it’s so remarkable is that Wilson himself is one of the foremost proponents of the theory of group selection, and Lopez claims in his article that one of the four “major theoretical positions” in the debate over the evolution of war is occupied by the “group selectionists.” Yet Lopez conforms to the prevailing academic conceit of studiously ignoring the role of Robert Ardrey, who was not only the most influential player in the “origins of war” debate, but overwhelmingly so in the whole “Blank Slate” affair as well. Why should that be so remarkable? Because at the moment the academics’ main rationalization for pretending they never heard of a man named Ardrey is (you guessed it) his support for group selection!

    When it comes to the significance of Ardrey, you don’t have to take my word for it. His was the most influential voice in a growing chorus that finally smashed the Blank Slate orthodoxy. The historical source material is all still there for anyone who cares to trouble themselves to check it. One invaluable piece thereof is “Man and Aggression,” a collection of essays edited by arch-Blank Slater Ashley Montagu and aimed mainly at Ardrey, with occasional swipes at Konrad Lorenz, and with William Golding, author of “Lord of the Flies,” thrown in for comic effect. The last I looked you could still pick it up for a penny at Amazon. For example, from one of the essays by psychologist Geoffrey Gorer,

    Almost without question, Robert Ardrey is today the most influential writer in English dealing with the innate or instinctive attributes of human nature, and the most skilled populariser of the findings of paleo-anthropologists, ethologists, and biological experimenters… He is a skilled writer, with a lively command of English prose, a pretty turn of wit, and a dramatist’s skill in exposition; he is also a good reporter, with the reporter’s eye for the significant detail, the striking visual impression. He has taken a look at nearly all the current work in Africa of paleo-anthropologists and ethologists; time and again, a couple of his paragraphs can make vivid a site, such as the Olduvai Gorge, which has been merely a name in a hundred articles.

    In case you’ve been asleep for the last half century, the Blank Slate affair was probably the greatest debacle in the history of science. The travails of Galileo and the antics of Lysenko are child’s play in comparison. For decades, whole legions of “men of science” in the behavioral sciences pretended to believe there was no such thing as human nature. As was obvious to any ten-year-old, that position was not only not “science,” it was absurd on the face of it. However, it was required as a prop for a false political ideology, and so it stood for half a century and more. Anyone who challenged it was quickly slapped down as a “fascist,” a “racist,” or a denizen of the “extreme right wing.” Then Ardrey appeared on the scene. He came from the left of the ideological spectrum himself, but also happened to be an honest man. The main theme of all his work in general, and the four popular books he wrote between 1961 and 1976 in particular, was that there is such a thing as human nature, and that it is important. He insisted on that point in spite of a storm of abuse from the Blank Slate zealots. On that point, on that key theme, he has been triumphantly vindicated. Almost all the “men of science” in psychology, sociology, and anthropology were wrong, and he was right.

    Alas, the “men of science” could not bear the shame. After all, Ardrey was not one of them. Indeed, he was a mere playwright! How could men like Shakespeare, Ibsen, and Moliere possibly know anything about human nature? Somehow, they had to find an excuse for dropping Ardrey down the memory hole, and find one they did! There was actually more than one, but the main one was group selection. Writing in “The Selfish Gene” back in 1976, Richard Dawkins claimed that Ardrey, Lorenz, and Irenäus Eibl-Eibesfeldt were “totally and utterly wrong,” not because they insisted there was such a thing as human nature, but because of their support for group selection! Fast forward to 2002, and Steven Pinker managed the absurd feat of writing a whole tome about the Blank Slate that only mentioned Ardrey in a single paragraph, and then only to assert that he had been “totally and utterly wrong,” period, on Richard Dawkins’ authority, and with no mention of group selection as the reason. That has been the default position of the “men of science” ever since.

    Which brings us back to Lopez’ paper. He informs us that one of the “four positions” in the debate over the evolution of war is “The Killer Ape Hypothesis.” In fact, there never was a “Killer Ape Hypothesis” as described by Lopez. It was a strawman, pure and simple, concocted by Ardrey’s enemies. Note that, in spite of alluding to this imaginary “hypothesis,” Lopez can’t bring himself to mention Ardrey. Indeed, so effective has been the “adjustment” of history that, depending on his age, it’s quite possible that he’s never even heard of him. Instead, Konrad Lorenz is dragged in as an unlikely surrogate, even though he never came close to supporting anything even remotely resembling the “Killer Ape Hypothesis.” His main work relevant to the origins of war was “On Aggression,” and he hardly mentioned apes in it at all, focusing instead mainly on the behavior of fish, birds and rats.

    And what of Ardrey? As it happens, he did write a great deal about our ape-like ancestors. For example, he claimed that Raymond Dart had presented convincing statistical evidence that one of them, Australopithecus africanus, had used weapons and hunted. That statistical evidence has never been challenged, and continues to be ignored by the “men of science” to this day. Without bothering to even mention it, C. K. Brain presented an alternative hypothesis that the only acts of “aggression” in the caves explored by Dart had been perpetrated by leopards. In recent years, as the absurdities of his hypothesis have been gradually exposed, Brain has been in serious rowback mode, and Dart has been vindicated to the point that he is now celebrated as the “father of cave taphonomy.”

    Ardrey also claimed that our apelike ancestors had hunted, most notably in his last book, “The Hunting Hypothesis.” When Jane Goodall published her observation of chimpanzees hunting, she was furiously vilified by the Blank Slaters. She, too, has been vindicated. Eventually, even PBS aired a program about hunting behavior in early hominids, and, miraculously, just this year even the impeccably politically correct “Scientific American” published an article confirming the same in the April edition! In a word, we have seen the vindication of these two main hypotheses of Ardrey concerning the behavior of our apelike and hominid ancestors. Furthermore, as I have demonstrated with many quotes from his work in previous posts, he was anything but a “genetic determinist,” and, while he strongly supported the view that innate predispositions, or “human nature,” if you will, have played a significant role in the genesis of human warfare, he clearly did not believe that it was unavoidable or inevitable.  In fact, that belief is one of the main reasons he wrote his books.  In spite of that, the “Killer Ape” zombie marches on, and turns up as one of the “four positions” that are supposed to “illuminate” the debate over the origins of war, while another of the “positions” is supposedly occupied by, of all things, “group selectionists!”  History is nothing if not ironical.

    Lopez’ other two “positions” include “The Strategic Ape Hypothesis” and “The Inventionists.”  I leave the value of these remaining “positions” to the imagination of those of my readers who want to “examine the layout of this academic ‘battlefield’,” as he puts it.  Other than that, I can only suggest that those interested in learning the truth, as opposed to the prevailing academic narrative, concerning the Blank Slate debacle would do better to look at the abundant historical source material themselves than to let someone else “interpret” it for them.

  • Why are Philosophers Marginalized?

    Posted on November 4th, 2014 by Helian

    Modern philosophers are a touchy bunch.  They resent their own irrelevance.  The question is, why have they become so marginalized?  After all, it wasn’t always so.  Consider, for example, the immediate and enduring impact of the French philosophes of the 18th century.  I can’t presume to give a complete answer in this blog post, but an article by Uri Bram entitled “This Philosopher Wants to Change How You Think About Doing Good,” which recently turned up at Café.com, might at least contain a few hints.

    It’s an account of the author’s encounter with a young philosopher named Will MacAskill who, not uncharacteristically, has a job in the Academy, in his case at Cambridge.  Bram assures us that “he’s already a superstar among his generation of philosophers.”  We learn he also has a “fondness for mild ales, a rollicking laugh, a warm Scottish accent and a manner that reminds you of the kid everyone likes in senior year of high school—not the popular kid, mind, but the kid everyone actually likes.”  If you get the sinking feeling that you’re about to read a hagiography, you won’t be mistaken.  It reminded me of what Lenin was talking about when he referred to “the silly lives of the saints.”

    According to Bram, MacAskill had already sensed the malaise in modern philosophy by the time he began his graduate studies:

     “I kept going to academics and actively trying to find people who were taking these ideas seriously and trying to make a difference, trying to put them into practice,” he says. But (for better or worse), academic philosophy as a whole is not generally focused on having a direct, practical impact.  “Someone studying the philosophy of linguistics or logic is probably doing it as a pure intellectual enterprise,” says MacAskill, “but what surprised me was the extent to which even applied philosophers weren’t having any impact on the world. I spoke to a giant in the field of practical ethics, one of the most successful applied philosophers out there, and asked him what impact he thought he’d had with his work; he replied that someone had once sent him an email saying they’d signed up for the organ donor register based on something he’d written. And that made me sad.”

    Then he had an epiphany, inspired by a conversation with fellow graduate student Toby Ord:

    One of the things that most impressed MacAskill about Ord was the extent to which the latter was walking the talk on his philosophical beliefs, manifested by his pledge to give away everything he earned above a certain modest threshold to organizations working effectively towards reducing global suffering (a story about Ord in the BBC’s news magazine became a surprise hit in 2010).

    Ord, it turns out, was a modern incarnation of Good King Wenceslas; he had pledged to give away a million pounds to charity in the course of his career.  To make a long story short, MacAskill decided to make a similar pledge, and founded an organization with Ord to get other people to do the same.  He has since been going about doing similar good works, while at the same time publishing the requisite number of papers in all the right philosophical journals.

    As it happens, I ran across this article thanks to a reference at 3quarksdaily, and my thoughts about it were the same as those of some of the commenters there.  For example, from one who goes by the name of Lemoncookies,

    I see nothing particularly original or profound in this young man’s suggestion, which basically amounts to: give more to charity. Lots of people have made this their clarion call, and lots of people already and will give to charity.

    Another named Paul chimes in,

    I find the suggestion humorous that a 27-year-old is going “to revolutionize the way you think about doing good.” What effort the philosophers will go to in order to maintain their hegemony on moral reasoning. Unfortunately, I think they missed the boat 150 years ago by ignoring evolution and biology. They have been treading water ever since yet still manage to attract followers.

    He really hits the nail on the head with that one.  It’s ludicrous to write hagiographies about people who are doing “good” unless you understand what “good” is, and there has been no excuse for not understanding what “good” is since Darwin published “On the Origin of Species.”  Darwin himself saw the light immediately.  Morality is a manifestation of evolved “human nature.”  It exists purely because the features in the brain that are responsible for that nature happened to improve the odds that the genes responsible for those features would survive and reproduce.  “Good” exists as a subjective perception in the mind of individuals, and there is no way in which it can climb out of the skull of individuals and magically acquire a life of its own.  Philosophers, with a few notable exceptions, have rejected that truth.  That’s one of the reasons, and a big one at that, why they’re marginalized.

    It was a truth they couldn’t bear to face.  It’s not really that the truth made philosophy itself irrelevant.  The way to the truth had been pointed out long before Darwin by philosophers like Shaftesbury, Hutcheson, and Hume.  At least a part of the problem was that this truth smashed the illusion that philosophers, or anyone else for that matter, could be genuinely better, more virtuous, or more righteous in some objective sense than anyone else.  They’ve been fighting the truth ever since.  The futility of that fight is demonstrated by the threadbare nature of the ideas that have been used to justify it.

    For example, there’s “moral realism.”  It goes like this:  Everyone knows that two plus two equals four.  However, numbers do not exist in the material world.  Moral truths don’t exist in the material world either.  Therefore, moral truths are also real.  QED.  Then there’s utilitarianism, which was demolished by Westermarck with the aid of the light provided courtesy of Darwin.  Its greatest proponent, John Stuart Mill, had the misfortune to write his book about it before the significance of Darwin’s great theory had time to sink in.  If it had, I doubt he would ever have written it.  He was too smart for that.  Sam Harris’ “scientific morality” is justified mainly by bullying anyone who doesn’t go along with charges of being “immoral.”

    With the aid of such stuff, modern philosophy has wandered off into the swamp.  Commenter Paul was right.  They need to stop concocting fancy new moral systems once and for all, and do a radical rewind, if not to Darwin, then at least to Westermarck.  They’ll never regain their relevance by continuing to ignore the obvious.

  • Oswald Spengler got it Wrong

    Posted on November 1st, 2014 by Helian

    Sometimes the best measure of public intellectuals is the short articles they write for magazines.  There are page limits, so they have to get to the point.  It isn’t as easy to camouflage vacuous ideas behind a smoke screen of verbiage.  Take, for example, the case of Oswald Spengler.  His “Decline of the West” was hailed as the inspired work of a prophet in the years following its publication in 1918.  Read Spengler’s Wiki entry and you’ll see what I mean.  He should have quit while he was ahead.

    Fast forward to 1932, and the Great Depression was at its peak.  The Decline of the West appeared to be a fait accompli.  Spengler would have been well-advised to rest on his laurels.  Instead, he wrote an article for The American Mercury, still edited at the time by the Sage of Baltimore, H. L. Mencken, with the reassuring title, “Our Backs are to the Wall!”  It was a fine synopsis of the themes Spengler had been harping on for years, and a prophecy of doom worthy of Jeremiah himself.  It was also wrong.

    According to Spengler, high technology carried within itself the seeds of its own collapse.  Man had dared to “revolt against nature.”  Now the very machines he had created in the process were revolting against man.  At the time he wrote the article he summed up the existing situation as follows:

    A group of nations of Nordic blood under the leadership of British, German, French, and Americans command the situation.  Their political power depends on their wealth, and their wealth consists in their industrial strength.  But this in turn is bound up with the existence of coal.  The Germanic peoples, in particular, are secured by what is almost a monopoly of the known coalfields…

    Spengler went on to explain that,

    Countries industrially poor are poor all around; they cannot support an army or wage a war; therefore they are politically impotent; and the workers in them, leaders and led alike, are objects in the economic policy of their opponents.

    No doubt he would have altered this passage somewhat had he been around to witness the subsequent history of places like Vietnam, Algeria, and Cambodia.  Willpower, ideology, and military genius have trumped political and economic power throughout history.  Spengler simply assumed they would be ineffective against modern technology because the “Nordic” powers had not been seriously challenged in the 50 years before he wrote his book.  It was a rash assumption.  Even more rash were his assumptions about the early demise of modern technology.  He “saw” things happening in his own times that weren’t really happening at all.  For example,

    The machine, by its multiplication and its refinement, is in the end defeating its own purpose.  In the great cities the motor-car has by its numbers destroyed its own value, and one gets on quicker on foot.  In Argentina, Java, and elsewhere the simple horse-plough of the small cultivator has shown itself economically superior to the big motor implement, and is driving the latter out.  Already, in many tropical regions, the black or brown man with his primitive ways of working is a dangerous competitor to the modern plantation-technic of the white.

    Unfortunately, motor cars and tractors can’t read, so they went right on multiplying without paying any attention to Spengler’s book.  At least he wasn’t naïve enough to believe that modern technology would end because of the exhaustion of the coalfields.  He knew that we were quite clever enough to come up with alternatives.  However, in making that very assertion, he stumbled into what was perhaps the most fundamental of all his false predictions: the imminence of the “collapse of the West.”

    It is, of course, nonsense to talk, as it was fashionable to do in the Nineteenth Century, of the imminent exhaustion of the coal-fields within a few centuries and of the consequences thereof – here, too, the materialistic age could not but think materially.  Quite apart from the actual saving of coal by the substitution of petroleum and water-power, technical thought would not fail ere long to discover and open up still other and quite different sources of power.  It is not worth while thinking ahead so far in time.  For the west-European-American technology will itself have ended by then.  No stupid trifle like the absence of material would be able to hold up this gigantic evolution.

    Alas, “so far in time” came embarrassingly fast, with the discovery of nuclear fission a mere six years later.  Be that as it may, among the reasons that this “gigantic evolution” was unstoppable was what Spengler referred to as “treason to technics.”  As he put it,

    Today more or less everywhere – in the Far East, India, South America, South Africa – industrial regions are in being, or coming into being, which, owing to their low scales of wages, will face us with a deadly competition.  The unassailable privileges of the white races have been thrown away, squandered, betrayed.

    In other words, the “treason” consisted of the white race failing to keep its secrets to itself, but bestowing them on the brown and black races.  They, however, were only interested in using this technology against the original creators of the “Faustian” civilization of the West.  Once the whites were defeated, they would have no further interest in it:

    For the colored races, on the contrary, it is but a weapon in their fight against the Faustian civilization, a weapon like a tree from the woods that one uses as scaffolding, but discards as soon as it has served its purpose.  This machine-technic will end with the Faustian civilization and one day will lie in fragments, forgotten – our railways and steamships as dead as the Roman roads and the Chinese wall, our giant cities and skyscrapers in ruins, like old Memphis and Babylon.  The history of this technic is fast drawing to its inevitable close.  It will be eaten up from within.  When, and in what fashion, we so far know not.

    Spengler was wise to include the Biblical caveat that, “…about that day or hour no one knows, not even the angels in heaven, nor the Son, but only the Father” (Matthew 24:36).  However, he had too much the spirit of the “end time” Millennialists, who have cropped up like clockwork every few decades for the last 2000 years predicting the imminent end of the world, to leave it at that.  Like so many other would-be prophets, his predictions were distorted by a grossly exaggerated estimate of the significance of the events of his own time.  Christians, for example, have commonly assumed that reports of war, famine and pestilence in their own time are somehow qualitatively different from the war, famine and pestilence that have been a fixture of our history for the last 2000 years, and conclude that they are witnessing the signs of the end times, when, “…nation shall rise against nation, and kingdom against kingdom: and there shall be famines, and pestilences, and earthquakes, in divers places” (Matthew 24:7).  In Spengler’s case, the “sign” was the Great Depression, which was at its climax when he wrote the article:

    The center of gravity of production is steadily shifting away from them, especially since even the respect of the colored races for the white has been ended by the World War.  This is the real and final basis of the unemployment that prevails in the white countries.  It is no mere crisis, but the beginning of a catastrophe.

    Of course, Marxism was in high fashion in 1932 as well.  Spengler tosses it in for good measure, agreeing with Marx on the inevitability of revolution, but not on its outcome:

    This world-wide mutiny threatens to put an end to the possibility of technical economic work.  The leaders (bourgeoisie, ed.) may take to flight, but the led (proletariat, ed.) are lost.  Their numbers are their death.

    Spengler concludes with some advice, not for us, or our parents, or our grandparents, but for our great-grandparents’ generation:

    Only dreamers believe that there is a way out.  Optimism is cowardice… Our duty is to hold on to the lost position, without hope, without rescue, like that Roman soldier whose bones were found in front of a door in Pompeii, who, during the eruption of Vesuvius, died at his post because they forgot to relieve him.  That is greatness.  That is what it means to be a thoroughbred.  The honorable end is the one thing that can not be taken from a man.

    One must be grateful that later generations of cowardly optimists donned their rose-colored glasses in spite of Spengler, went right on using cars, tractors, and other mechanical abominations, and created a world in which yet later generations of Jeremiahs could regale us with updated predictions of the end of the world.  And who can blame them?  After all, eventually, at some “day or hour no one knows, not even the angels in heaven,” they are bound to get it right, if only because our sun will eventually die.  When that happens, those who are still around are bound to dust off their ancient history books, smile knowingly, and say, “See, Spengler was right after all!”

  • Post-Darwinian, “Evolutional” Theories of Morality in the 19th Century

    Posted on October 26th, 2014 by Helian

    It’s become fashionable in some quarters to claim that philosophy is useless.  I wouldn’t go that far.  Philosophers have at least been astute enough to notice some of the more self-destructive tendencies of our species, and to come up with more or less useful formulas for limiting the damage.  However, they have always had a tendency to overreach.  We are not intelligent enough to reliably discover truth far from the realm of repeatable experiments.  When we attempt to do so, we commonly wander off into intellectual swamps.  That is where one often finds philosophers.

    The above is well illustrated by the history of thought touching on the subject of morality in the decades immediately following the publication of On the Origin of Species in 1859.  It was certainly realized in short order that Darwin’s theory was relevant to the subject of morality.  Perhaps no one at the time saw it better than Darwin himself.  However, the realization that the search for the “ultimate Good” was now over once and for all, because the object sought did not exist, was slow in coming.  Indeed, for the most part, it’s still not realized to this day.  The various “systems” of morality in the decades after Darwin’s book appeared kept stumbling forward towards the non-existent goal, like dead men walking.  For the most part, their creators never grasped the significance of the term “natural selection.”  Against all odds, they obstinately persisted in the naturalistic fallacy: the irrational belief that, to the extent that morality had evolved, it had done so “for the good of the species.”

    An excellent piece of historical source material documenting these developments can be found at Google Books.  Entitled, A Review of the Systems of Ethics Founded on the Theory of Evolution, it was written by one C. M. Williams, and published in 1893.  According to one version on Google Books, “C. M.” stands for “Cora Mae,” apparently a complete invention.  The copying is botched, so that every other page of the last part of the book is unreadable.  The second version, which is at least readable, claims the author was Charles Mallory Williams and, indeed, that name is scribbled after the initials “C. M.” in the version copied.  There actually was a Charles Mallory Williams.  He was a medical doctor, born in 1872, and would have been 20 years old at the time the book was published.  The chances that anyone so young wrote the book in question are vanishingly small.  Unfortunately, I must leave it to some future historian to clear up the mystery of who “C. M.” actually was, and move on to consider what he wrote.

    According to the author, by 1893 a flood of books and papers had already appeared addressing the connection between Darwin’s theory and morality.  In his words,

    Of the Ethics founded on the theory of Evolution, I have considered only the independent theories which have been elaborated to systems. I have omitted consideration of many works which bear on Evolutional Ethics as practical or exhortative treatises or compilations of facts, but which involve no distinctly worked out theory of morals.

    The authors who made the cut include Alfred Russel Wallace, Ernst Haeckel, Herbert Spencer, John Fiske, W. H. Rolph, Alfred Barratt, Leslie Stephen, Bartholomäus von Carneri, Harald Hoffding, Georg von Gizycki, Samuel Alexander, and, last but not least, Darwin himself.  Williams cites the books of each that bear on the subject, and most of them have a Wiki page.  Wallace, of course, is occasionally mentioned as the “co-inventor” of the theory of evolution by natural selection with Darwin.  Collectors of historical trivia may be interested to know that Barratt’s work was edited by Carveth Read, who was probably the first to propose a theory of the hunting transition from ape to man.  Leslie Stephen was the father of Virginia Woolf, and Harald Hoffding was the friend and philosophy teacher of Niels Bohr.

    I don’t intend to discuss the work of each of these authors in detail.  However, certain themes are common to most, if not all, of them.  Most of them, not to mention Williams himself, still clung to Lamarckism and other outmoded versions of evolution.  It took the world a long time to catch up to Darwin.  For example, in the case of Haeckel,

    Even in the first edition of his Natürliche Schöpfungsgeschichte Haeckel makes a distinction between conservative and progressive inheritance, and in the edition of 1889 he still maintains this division against Weismann and others, claiming the heredity of acquired habit under certain circumstances and showing conclusively that even wounds and blemishes received during the life of an individual may be in some instances inherited by descendants.

    For Williams’ own Lamarckism, see chapter 1 of Volume II, in which he seems convinced that Darwin himself believes in inheritance of acquired characteristics, and that Lamarck’s theories are supported by abundant evidence.  We are familiar with an abundance of similar types of “evidence” in our own day.

    More troublesome than these vestiges of earlier theories of evolution are the vestiges of earlier systems of morality.  Every one of the authors cited above has a deep background in the theories of morality concocted by philosophers, both ancient and modern.  In general, they have adopted some version of one of these theories as their own.  As a result, they have a tendency to fit evolution by natural selection into the Procrustean bed of their earlier theories, often as a mere extension of them.  An interesting manifestation of this tendency is the fact that, almost to a man, they believed that evolution promoted the “good of the species.”  For example, quoting Stephen:

    The quality which makes a race survive may not always be a source of advantage to every individual, or even to the average individual.  Since the animal which is better adapted for continuing its species will have an advantage in the struggle even though it may not be so well adapted for pursuing its own happiness, an instinct grows and decays not on account of its effects on the individual, but on account of its effects upon the race.

    The case of Carneri, who happened to be an Austrian, is even more interesting.  Starting with the conclusion that “evolution by natural selection” must inevitably favor the species over the individual,

    Every man has his own ends, and in the attempt to attain his ends, does not hesitate to set himself in opposition to all the rest of mankind.  If he is sufficiently energetic and cunning, he may even succeed for a time in his endeavors to the harm of humanity.  Yet to have the whole of humanity against oneself is to endeavor to proceed in the direction of greater resistance, and the process must sooner or later result in the triumph of the stronger power. In the struggle for existence in its larger as well as its smaller manifestations, the individual seeks with all his power to satisfy the impulse to happiness which arises with conscious existence, while the species as the complex of all energies developed by its parts has an impulse to self preservation of its own.

    It follows, at least for Carneri, that Darwin’s theory is a mere confirmation of utilitarianism:

    The “I” extends itself to an “I” of mankind, so that the individual, in making self his end, comes to make the whole of mankind his end. The ideal cannot be fully realized; the happiness of all cannot be attained; so that there is always choice between two evils, never choice of perfect good, and it is necessary to be content with the greatest good of the greatest number as principle of action.

    which, in turn, leads to a version of morality worthy of Bismarck himself.  As paraphrased by Williams,

    He lays further stress upon the absence of morality, not only among the animals, in whom at least general ethical feelings in distinction from those towards individuals are not found, but also among savages, morality being not the incentive to, but the product of the state.

    Alexander gives what is perhaps the most striking example of this perceived syncretism between Darwinism and pre-existing philosophies, treating it as a mere afterthought to Hegel and Kant:

     Nothing is more striking at the present time than the convergence of different schools of Ethics. English Utilitarianism developing into Evolutional Ethics on the one hand, and the idealism associated with the German philosophy derived from Kant on the other.  The convergence is not of course in mere practical precepts, but in method also. It consists in an objectivity or impartiality of treatment commonly called scientific.  There is also a convergence in general results which consists in a recognition of a kind of proportion between individual and society, expressed by the phrase “organic connection.”  The theory of egoism pure and simple has been long dead.  Utilitarianism succeeded it and enlarged the moral end. Evolution continued the process of enlarging the individual interest, and has given precision to the relation between the individual and the moral law.  But in this it has added nothing new, for Hegel in the early part of the century, gave life to Kant’s formula by treating the law of morality as realized in the society and the state.

    Alexander continues by confirming that he shares a belief common to all the rest as well, in one form or another – in the reality of objective morality:

    The convergence of dissimilar theories affords us some prospect of obtaining a satisfactory statement of the ethical truths towards which they seem to move.

    Gizycki embraces this version of the naturalistic fallacy even more explicitly:

    Natural selection is therefore a power of judgment, in that it preserves the just and lets the evil perish.  Will this war of the good with the evil always continue?  Or will the perfect kingdom of righteousness one day prevail?  We hope this last, but we cannot know certainly.

    There is much more of interest in this book by an indeterminate author.  Of particular note is the section on Alfred Russel Wallace, but I will leave that for a later post.  One might mention as an “extenuating circumstance” for these authors that none of them had the benefit of the scientific community’s belated recognition of the significance of Mendel’s discoveries.  It’s well known that Darwin himself struggled to come up with a logical mechanism to explain how it was possible for natural selection to even happen.  The notions of these moral philosophers on the subject must have been hopelessly vague by comparison.  Their ideas about “evolution for the good of the species” must be seen in that context.  The concocters of the modern “scientific” versions of morality can offer no such excuse.

  • Edvard Westermarck on Morality: The Light Before the Darkness Fell

    Posted on October 18th, 2014 by Helian

    The nature of morality became obvious to anyone who cared to think about it after Darwin published his great theory, including Darwin himself.  In short, it became clear that the “root causes” of morality were to be found in “human nature,” our species’ collection of evolved behavioral predispositions.  As the expression of evolved traits, morality has no purpose, unless one cares to use that term as shorthand for the apparent biological function it serves.  It exists because it enhanced the probability that the creatures with the genetic endowment that gave rise to it would survive and reproduce in the conditions that existed when those genes appeared.  As a result, there are no moral “truths.”  Rather, morality is a subjective phenomenon with emotional rather than logical origins.

    So much became obvious to many in the decades following the publication of On the Origin of Species in 1859.  One man spelled out the truth more explicitly, clearly, and convincingly than any other.  That man was Edvard Westermarck.

    Westermarck was a Finnish philosopher and sociologist who published his seminal work on morality, The Origin and Development of the Moral Ideas, in 1906.  As we now know in retrospect, the truths in that great book were too much for mankind to bear.  The voices repeating those truths became fewer, and were finally silenced.  The darkness returned, and more than a century later we are still struggling to find our way out of the fog.  It should probably come as no surprise.  It goes without saying that the truth was unpalatable to believers in imaginary super beings.  Beyond that, the truth relegated the work of most of the great moral philosophers of the past to the status of historical curiosities.  Those who interpreted their thought for the rest of us felt the ground slipping from beneath their feet.  Experts in ethics and morality became the equivalent of experts in astrology, and a step below the level of doctors of chiropractic.  Zealots of Marxism and the other emerging secular versions of religion rejected a truth that exposed the absurdity of attempts to impose new versions of morality from on high.  As for the average individuals of the species Homo sapiens, they rejected the notion that the “Good” and “Evil” objects that their emotions portrayed so realistically, and that moved them so profoundly, were mere fantasies.

    The result was more or less predictable.  Westermarck and the rest were shouted down.  The Blank Slate debacle turned the behavioral sciences into so many strongholds of an obscurantist orthodoxy.  The blind exploitation of moral emotions in the name of such newly concocted “Goods” as Nazism and Communism resulted in the deaths of tens of millions, and misery on a vast scale.  The Academy became the spawning ground of a modern, secular version of Puritanism, more intolerant and bigoted than the last.  In the case of Westermarck, the result has, at least, been more amusing.  He has been hidden in plain sight.  On his Wiki page, for example, he is described as one who “studied exogamy and incest taboo.”  To the extent that his name is mentioned at all, it is usually in connection with the Westermarck Effect, according to which individuals in close proximity in the early years of life become sexually desensitized to each other.  So much for the legacy of the man who has a good claim to be the most profound thinker on the subject of morality to appear since the days of Hume.

    Let us cut to the chase and consider what Westermarck actually said.  In the first place, he stressed a point often completely overlooked by modern researchers in the behavioral sciences; the complex emotions we now associate with morality did not suddenly appear fully formed like Athena from the forehead of Zeus.  Rather, they represent the results of a continuous process of evolution from simpler emotional responses that Westermarck grouped into the categories of “resentment” and “approval.”  These had existed in many animal species long before hominids appeared on the scene.  They were there as a result of natural selection.  As Westermarck put it:

    As to their origin, the evolutionist can hardly entertain a doubt. Resentment, like protective reflex action, out of which it has gradually developed, is a means of protection for the animal. Its intrinsic object is to remove a cause of pain, or, what is the same, a cause of danger. Two different attitudes may be taken by an animal towards another which has made it feel pain: it may either shun or attack its enemy. In the former case its action is prompted by fear, in the latter by anger, and it depends on the circumstances which of these emotions is the actual determinant. Both of them are of supreme importance for the preservation of the species, and may consequently be regarded as elements in the animal’s mental constitution which have been acquired by means of natural selection in the struggle for existence.

    From what has been said above it is obvious that moral resentment is of extreme antiquity in the human race, nay that the germ of it is found even in the lower animal world among social animals capable of feeling sympathetic resentment.  The origin of custom as a moral rule no doubt lies in a very remote period of human history.

    This is followed by another remarkable passage, which showcases another aspect of Westermarck’s genius that appears repeatedly in his books; his almost incredible erudition.  His knowledge of the intellectual and historical antecedents of his own ideas is not limited to a narrow field, but is all-encompassing, and highly useful to anyone who cares to study the relevant source material on his own:

    This view is not new. More than one hundred and fifty years before Darwin, Shaftesbury wrote of resentment in these words:  “Notwithstanding its immediate aim be indeed the ill or punishment of another, yet it is plainly of the sort of those [affections] which tend to the advantage and interest of the self-system, the animal himself; and is withal in other respects contributing to the good and interest of the species.”  A similar opinion is expressed by Butler, according to whom the reason and end for which man was made liable to anger is, that he might be better qualified to prevent and resist violence and opposition, while deliberate resentment “is to be considered as a weapon, put into our hands by nature, against injury, injustice, and cruelty.”  Adam Smith, also, believes that resentment has “been given us by nature for defence, and for defence only,” as being “the safeguard of justice and the security of innocence.”  Exactly the same view is taken by several modern evolutionists as regards the “end” of resentment, though they, of course, do not rest contented with saying that this feeling has been given us by nature, but try to explain in what way it has developed. “Among members of the same species,” says Mr. Herbert Spencer, “those individuals which have not, in any considerable degree, resented aggressions, must have ever tended to disappear, and to have left behind those which have with some effect made counter-aggressions.”

    All these references are accompanied by citations of the works in which they appear in the footnotes.  Westermarck then went on to derive conclusions from the evolutionary origins of morality that are both simple and obvious, but which modern behavioral scientists and philosophers have a daunting capacity to ignore.  He concluded that morality is subjective.  It may be reasoned about, but is the product of emotion, not reason.  It follows that there are no such things as moral “truths,” and that the powerful moral emotions that we so cling to, and that cause the chimeras of “Good” and “Evil” to hover in our consciousness as palpable, independent objects, are, in fact, illusions.  In Westermarck’s own words:

    As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity.  The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments.  The intensity of his emotions makes him the victim of an illusion.

    The presumed objectivity of moral judgments thus being a chimera there can be no moral truth in the sense in which this term is generally understood.  The ultimate reason for this is that the moral concepts are based upon emotions and that the contents of an emotion fall entirely outside the category of truth.

    Consider the significance of these passages, almost incredible when looked back on through the Puritanical mist of the 21st century.  In one of the blurbs I ran across while searching the name “Westermarck,” his work was referred to as “outdated.”  I suppose that, in a sense, that conclusion is quite true, but not in the way intended.  I know of not a single modern thinker, scientist, or philosopher who has even come close to Westermarck in the simplicity and clarity with which he presents these conclusions, so obvious to anyone who has read and understood Darwin.  Here are some more passages that reinforce that conclusion:

    If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.  It has been said by Bentham and others that moral principles cannot be proved because they are first principles which are used to prove everything else.  But the real reason for their being inaccessible to demonstration is that, owing to their very nature, they can never be true.  If the word, “Ethics,” then, is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.

    To put it more bluntly, and to reveal some of my own purely subjective moral emotions in the process, the flamboyant peacocks currently strutting about among us peddling their idiosyncratic flavors of virtuous indignation and moral outrage based on a supposed monopoly on moral “truths” are, in reality, so many charlatans and buffoons.  To take them seriously is to embrace a lie, and one that, as has been clearly and repeatedly demonstrated in the past, and will almost certainly be abundantly demonstrated again in the future, is not only irritating, but extremely dangerous.  The above, by the way, appears in the context of a shattering rebuttal of utilitarianism in Chapter 1 that is as applicable to the modern versions being concocted for our edification by the likes of Sam Harris and Joshua Greene as it is to the earlier theories of John Stuart Mill and others.  In reading Westermarck’s book, one is constantly taken aback by insights that are stunning in view of the time at which they were written.  Consider, for example, the following in light of recent research on mirror neurons:

    That a certain act causes pleasure or pain to the bystander is partly due to the close association which exists between these feelings and their outward expressions.  The sight of a happy face tends to produce some degree of pleasure in him who sees it.  The sight of the bodily signs of suffering tends to produce a feeling of pain.  In either case the feeling of the spectator is the result of a process of reproduction, the perception of the physical manifestation of the feeling recalling the feeling itself on account of the established association between them.

    I fear we will have a very long wait before our species grasps the significance of Westermarck’s ideas and adjusts its perceptions of the nature and significance of morality accordingly.  As Jonathan Haidt pointed out in his The Righteous Mind, we are far too fond of the delightful joys of self-righteousness to admit the less than exalted truths about its origins without a struggle.  There are some grounds for optimism in the fact that a “Happy Few” are still around who understand that the significance of Westermarck completely transcends anything he had to say about sexual attraction and marriage.  As it happens, Frans de Waal, whose latest book is the subject of one of my recent posts, is one of them.  I personally became aware of him thanks to a reference to his book in Nietzsche’s “Human, All Too Human.”  I don’t think Nietzsche ever quite grasped what Westermarck was saying.  He had too much the soul of an artist and a poet rather than a scientist for that.  Yet, somehow, he had a sixth sense for ferreting out the wheat from the chaff in human thought.  As it happens, I began reading Stendhal, my favorite novelist, thanks to a reference in Nietzsche as well.  I may not exactly be on board as far as his ramblings about morality are concerned, but at least I owe him a tip of the hat for that.  As for Westermarck, I can but hope that many more will read and grasp the significance of his theories.  His book is available free online at Google Books for anyone who cares to look at it.

    UPDATE:  Apparently I became too “dizzy with success” at discovering Westermarck to notice a “minor” temporal anomaly in the above post.  A commenter just pointed it out to me.  Westermarck wrote his book in 1906, and Nietzsche died in 1900!  He was actually referring to a book by Paul Ree entitled, “The Origin of the Moral Sensations,” which appeared in 1877.  Check Ree’s Wiki page, and you’ll see he’s the guy standing in front of a cart with Nietzsche in the famous picture with Lou Andreas-Salome sitting in the cart holding a whip.  Of course, it’s a spoof on Nietzsche’s famous dictum, “You go to women? Do not forget the whip!”  I was reading the German version of his “Human, all too Human.”  The quote referred to appears in Section 37, as follows:

    Welches ist doch der Hauptsatz, zu dem einer der kühnsten und kältesten Denker, der Verfasser des Buches “Über den Ursprung der moralischen Empfindungen” vermöge seiner ein- und durchschneidenden Analysen des menschlichen Handelns gelangt?

    In my English version of the book, the above quote is translated as,

    Which principle did one of the keenest and coolest thinkers, the author of the book On the Origin of the Moral Feelings, arrive at through his incisive and piercing analysis of human actions?

    I translated the title on the fly as “On the Origin of the Moral Emotions,” and when you search that title on Bing, the first link that comes up points to Westermarck’s book.  In a word, my discovery of Westermarck was due to serendipity or bungling, take your pick.  The shade of Nietzsche must be chuckling somewhere.  Now I feel obligated to have a look at Ree’s book as well.  I’ll let you know what I think of him in a later post, and I promise not to claim I discovered him thanks to a reference in Aristotle’s “Ethics.”

  • Frans de Waal’s “The Bonobo and the Atheist”: The Objective Morality of a Subjective Moralist

    Posted on October 12th, 2014 by Helian

    Frans de Waal’s The Bonobo and the Atheist is interesting for several reasons.  As the title of this post suggests, it demonstrates the disconnect between the theory and practice of morality in the academy.  It’s one of the latest brickbats in the ongoing spat between the New Atheists and the “accommodationist” atheists.  It documents the current progress of the rearrangement of history in the behavioral sciences in the aftermath of the Blank Slate debacle.  It’s a useful reality check on the behavior of bonobos, the latest “noble savage” among the primates.  And, finally, it’s an entertaining read.

    In theory, de Waal is certainly a subjective moralist.  As he puts it, “the whole point of my book is to argue a bottom up approach” to morality, as opposed to the top down approach:  “The view of morality as a set of immutable principles, or laws, that are ours to discover.”  The “bottom” de Waal refers to consists of evolved emotional traits.  In his words,

    The moral law is not imposed from above or derived from well-reasoned principles; rather, it arises from ingrained values that have been there since the beginning of time.

    My views are in line with the way we know the human mind works, with visceral reactions arriving before rationalizations, and also with the way evolution produces behavior.  A good place to start is with an acknowledgment of our background as social animals, and how this background predisposes us to treat each other.  This approach deserves attention at a time in which even avowed atheists are unable to wean themselves from a semireligious morality, thinking that the world would be a better place if only a white-coated priesthood could take over from the frocked one.

    So far, so good.  I happen to be a subjective moralist myself, and agree with de Waal on the origins of morality.  However, reading on, we find confirmation of a prediction made long ago by Friedrich Nietzsche.  In Human, All Too Human, he noted the powerful human attachment to religion and the “metaphysics” of the old philosophers.  He likened the expansion of human knowledge to a ladder, or tree, up which humanity was gradually climbing.  As we reached the top rungs, however, we would begin to notice that the old beliefs that had supplied us with such great emotional satisfaction in the past were really illusions.  At that point, our tendency would be to recoil from this reality.  The “tree” would begin to grow “sprouts” in reverse.  We would balk at “turning the last corner.”  Nietzsche imagined that developing a new philosophy that could accommodate the world as it was instead of the world as we wished it to be would be the task of “the great thinkers of the next century.”  Alas, more than a century has passed since he wrote those words, yet to all appearances we are still tangled in the “downward sprouts.”

    Nowhere is this more apparent than in the academy, where a highly moralistic secular Puritanism prevails.  Top down, objective morality is alive and well, and the self-righteous piety of the new, secular priesthood puts that of the old-fashioned religious Puritans in the shade.  All this modern piety seems to be self-supporting, levitating in thin air, with none of the props once supplied by religion.  As de Waal puts it,

    …the main ingredients of a moral society don’t require religion, since they come from within.

    Clearly, de Waal can see where morality comes from, and how it evolved, and why it exists, but, even with these insights, he too recoils from “climbing the last rungs,” and “turning the final corner.”  We find artifacts of the modern objective morality prevalent in the academy scattered throughout his book.  For example,

     Science isn’t the answer to everything.  As a student, I learned about the “naturalistic fallacy” and how it would be the zenith of arrogance for scientists to think that their work could illuminate the distinction between right and wrong.  This was not long after World War II, mind you, which had brought us massive evil justified by a scientific theory of self-directed evolution.  Scientists had been much involved in the genocidal machine, conducting unimaginable experiments.

    American and British scientists were not innocent, however, because they were the ones who earlier in the century had brought us eugenics.  They advocated racist immigration laws and forced sterilization of the deaf, blind, mentally ill, and physically impaired, as well as criminals and members of minority races.

    I am profoundly skeptical of the moral purity of science, and feel that its role should never exceed that of morality’s handmaiden.

    One can consider humans as either inherently good but capable of evil or as inherently evil yet capable of good.  I happen to belong to the first camp.

    None of these statements make any sense in the absence of objective good and evil.  If, as de Waal claims repeatedly elsewhere in his book, morality is ultimately an expression of emotions or “gut feelings,” analogs of which we share with many other animals, and which exist because they evolved, then the notions that scientists are or were evil, period, or that science itself can be morally impure, period, or that humans can be good, period, or evil, period, are obvious non sequiturs.  De Waal has climbed up the ladder, peeked at what lay just beyond the top rungs, and jumped back down onto Nietzsche’s “backward growing sprouts.”  Interestingly enough, in spite of that, de Waal admires the strength of one who was either braver or more cold-blooded and kept climbing: Edvard Westermarck.  But I will have more to say of him later.

    The Bonobo and the Atheist is also interesting from a purely historical point of view.  The narrative concocted to serve as the “history” of the behavioral sciences continues to be adjusted and readjusted in the aftermath of the Blank Slate catastrophe, probably the greatest scientific debacle of all time.  As usual, the arch-villain is Robert Ardrey, who committed the grave sin of being right about human nature when virtually all the behavioral scientists and professionals, at least in the United States, were wrong.  Imagine the impertinence of a mere playwright daring to do such a thing!  Here’s what de Waal has to say about him:

    Confusing predation with aggression is an old error that recalls the time that humans were seen as incorrigible murderers on the basis of signs that our ancestors ate meat.  This “killer ape” notion gained such traction that the opening scene of Stanley Kubrick’s movie 2001:  A Space Odyssey showed one hominin bludgeoning another with a zebra femur, after which the weapon, flung triumphantly into the air, turned into an orbiting spacecraft.  A stirring image, but based on a single puncture wound in the fossilized skull of an ancestral infant, known as the Taung Child.  Its discoverer had concluded that our ancestors must have been carnivorous cannibals, an idea that the journalist Robert Ardrey repackaged in African Genesis by saying that we are risen apes rather than fallen angels.  It is now considered likely, however, that the Taung Child had merely fallen prey to a leopard or eagle.

    I had to smile when I read this implausible yarn.  After all, anyone can refute it by simply looking up the source material, not to mention the fact that there’s no lack of people who’ve actually read Ardrey, and are aware that the “Killer Ape Theory” is a mere straw man concocted by his enemies.  De Waal is not one of them.  Not only has he obviously not read Ardrey, but he probably knows of him only at third or fourth hand.  If he had read him, he’d realize that he was basically channeling Ardrey in the rest of his book.  Indeed, much of The Bonobo and the Atheist reads as if it had been lifted from Ardrey’s last book, The Hunting Hypothesis, complete with the ancient origins of morality, Ardrey’s anticipation of de Waal’s theme that humans are genuinely capable of altruism and cooperation, resulting in part, as both authors claim, from man’s adoption of a hunting lifestyle, and his rejection of what de Waal calls “Veneer Theory,” the notion that human morality is merely a thin veneer covering an evil and selfish core.  For example, according to de Waal,

    Hunting and meat sharing are at the root of chimpanzee sociality in the same way that they are thought to have catalyzed human evolution.  The big-game hunting of our ancestors required even tighter cooperation.

    This conclusion is familiar to those who have actually read Ardrey, but was anathema to the “Men of Science” as recently as 15 years ago.  Ardrey was, of course, never a journalist, and his conclusion that Australopithecine apes had hunted was based, not on the “single puncture wound” in the Taung child’s skull, but mainly on a statistical anomaly:  bones of a particular type that might have been used as weapons were found in association with the ape remains in numbers far in excess of what would be expected if they had been deposited randomly.  To date, no one has ever explained that anomaly, and it remains carefully swept under the rug.  In a word, the idea that Ardrey based his hypothesis entirely “on a single puncture wound” is poppycock.  In the first place, there were two puncture wounds, not one.  Apparently, de Waal is also unaware that Raymond Dart, the man who discovered this evidence, has been rehabilitated, and is now celebrated as the father of cave taphonomy, whereas those who disputed his conclusions about what he had found, such as C. K. Brain, who claimed that the wounds were caused by a leopard, are now in furious rowback mode.  For example, from the abstract of a paper in which Brain’s name appears at the end of the list of authors,

    The ca. 1.0 myr old fauna from Swartkrans Member 3 (South Africa) preserves abundant indication of carnivore activity in the form of tooth marks (including pits) on many bone surfaces. This direct paleontological evidence is used to test a recent suggestion that leopards, regardless of prey body size, may have been almost solely responsible for the accumulation of the majority of bones in multiple deposits (including Swartkrans Member 3) from various Sterkfontein Valley cave sites. Our results falsify that hypothesis and corroborate an earlier hypothesis that, while the carcasses of smaller animals may have been deposited in Swartkrans by leopards, other kinds of carnivores (and hominids) were mostly responsible for the deposition of large animal remains.

    Meanwhile, we find that none other than Stephen Jay Gould has been transmogrified into a “hero.”  As documented by Steven Pinker in The Blank Slate, Gould was basically a radical Blank Slater, unless one cares to give him a pass because he grudgingly admitted that eating, sleeping, urinating and defecating might not be purely learned behaviors, after all.  The real Stephen Jay Gould rejected evolutionary psychology root and branch, and was a co-signer of the Blank Slater manifesto that appeared in the New York Review of Books in response to claims about human nature as reserved as those of E. O. Wilson in his Sociobiology.  He famously invented the charge of “just so stories” to apply to any and all claims for the existence of human behavioral predispositions.  Now, in The Bonobo and the Atheist, we find Gould reinvented as a good evolutionary psychologist.  His “just so stories” only apply to the “excesses” of evolutionary psychology.  We find the real Gould, who completely rejected the idea of “human nature,” softened to a new, improved Gould who merely “vehemently resisted the idea that every single human behavior deserves an evolutionary account.”  If anyone was a dyed-in-the-wool habitué of the Blank Slate establishment in its heyday, it was Gould, but suddenly we learn that “Several skirmishes between him and the evolutionary establishment unfolded in the pages of the New York Review of Books in 1997.”  I can only suggest that anyone who honestly believes that a new “establishment” had already replaced the Blank Slate prior to 1997 should read Napoleon Chagnon’s Noble Savages: My Life Among Two Dangerous Tribes – The Yanomamö and the Anthropologists, published as recently as last year.  No matter, according to de Waal, “The greatest public defender of evolution this country has ever known was Stephen Jay Gould.”

    Perhaps one can best understand the Gould panegyrics in connection with another of the major themes of de Waal’s book:  his rejection of Richard Dawkins and the rest of the New Atheists.  De Waal is what New Atheist Jerry Coyne would refer to as an “accommodationist,” that is, an atheist who believes that the atheist lions should lie down with the religious sheep.  As it happens, Gould was the Ur-accommodationist, and inventor of the phrase “nonoverlapping magisteria,” or NOMA, to describe his claim that science and religion occupy separate spheres of knowledge.  One can find a good summary of the objections to NOMA from the likes of “New Atheists” Dawkins, Christopher Hitchens, Sam Harris and Coyne on Prof. Coyne’s website, Why Evolution is True, for example, here and here.

    It’s hard to understand de Waal’s bitter opposition to atheist activism as other than yet another example of Nietzsche’s “climbing down onto the backward pointing shoots.”  Indeed, as one might expect from such instances of “turning back,” it’s not without contradictions.  For example, he writes,

    Religion looms as large as an elephant in the United States, to the point that being nonreligious is about the biggest handicap a politician running for office can have, bigger than being gay, unmarried, thrice married, or black.

    And yet he objects to the same kind of activism among atheists that has been the most effective antidote to such bigotry directed at, for example, gays and blacks.  For some reason, atheists are just supposed to smile and take it.  De Waal accuses Dawkins, Harris and the rest of being “haters,” but I know of not a single New Atheist to whom that term can accurately be applied, and certainly not the likes of Dawkins, Harris or Coyne.  Vehement, on occasion, yes, but haters of the religious per se?  I don’t think so.  De Waal agrees with David Sloan Wilson that “religion” evolved.  I can certainly believe that predispositions evolved that have the potential to manifest themselves as religion, but “religion” per se, complete with imaginary spiritual beings?  Not likely.  Nevertheless, de Waal claims it is part of our “social skin.”  And yet, in spite of this claim that religion “evolved,” a bit later we find him taking note of a social phenomenon that apparently directly contradicts this conclusion:

    The secular model is currently being tried out in northern Europe, where it has progressed to the point that children naively ask why there are so many “plus signs” on large buildings called “churches.”

    Apparently, then, “evolved religion” only infected a portion of our species in northern Europe, and they all moved to the United States.  Finally, in his zeal to defend religion, de Waal comes up with some instances of “moral equivalence” that are truly absurd.  For example,

    I am as sickened (by female genital mutilation, ed.) as the next person, but if Harris’s quest is to show that religion fails to promote morality, why pick on Islam?  Isn’t genital mutilation common in the United States, too, where newborn males are routinely circumcised without their consent?  We surely don’t need to go all the way to Afghanistan to find valleys in the moral landscape.

    As it happens, I know of several instances in which my undergraduate classmates voluntarily had themselves circumcised, not for any religious motive, but because otherwise their girlfriends wouldn’t agree to oral sex.  One wonders whether de Waal can cite similar instances involving FGM.

    Oh, well, I suppose I shouldn’t look a gift horse in the mouth.  Anyone who believes in a “bottom up” version of subjective morality can’t be all bad, according to my own subjective judgment, of course.  Indeed, de Waal even has the audacity to point out that bonobos, those paragons of primate virtue extolled so often as role models for our own species, do occasionally fight.  Along with Jonathan Haidt, he’s probably the closest thing to a “kindred spirit” I’m likely to find in academia.  The icing on the cake is that he is aware of and admires the brilliant work of Edvard Westermarck on morality.  What of Westermarck, you ask?  Well, I’ll take that up in another post.

  • Of Smug Germans and Sinful Australians: Global Warming Update

    Posted on October 4th, 2014 Helian No comments

    No doubt the outcome of the Nazi unpleasantness resulted in attitude adjustment in Germany on a rather large scale.  Clearly, however, it didn’t teach the Germans humility.  At a time when a secular mutation of Puritanism has become the dominant ideology in much of Europe and North America, the Germans take the cake for pathological piety.  Not that long ago the fashionable evil du jour was the United States, and anti-American hate mongering in the German media reached levels that would make your toes curl.  In the last years of the Clinton and the first years of the following Bush administrations it was often difficult to find anything about Germany on the home pages of popular German news magazines like Der Spiegel because the available space was taken up by furious rants against the United States for the latest failures to live up to German standards of virtue.  Eventually the anti-American jihad choked on its own excess, and other scapegoats were found.  Clearly, however, German puritanism is still alive and well.  An amusing example just turned up in the Sydney Morning Herald under the headline, “Merkel adviser lashes Abbott’s ‘suicide strategy’ on coal.”  The advisor in question, one Hans Joachim Schellnhuber, is Chancellor Merkel’s lead climate advisor.  A picture of him posing as the apotheosis of smugness accompanies the article, according to which he,

    …attacked Australia’s complacency on global warming and described the Abbott government’s championing of the coal industry as an economic “suicide strategy”.

    Alas, we learn that Schellnhuber’s anathemas also fell on our neighbor to the north.  The SMH quotes him as saying,

    Similar to Canada, Australia for the time being is not part of the international community which is cooperating to achieve greenhouse gas emission reductions.

    Tears run down our cheeks as Schellnhuber describes Australia’s fall from grace:

     …it had been disappointing to see Australia’s retreat on climate policy after it became “the darling of the world” when Kevin Rudd ratified the Kyoto Protocol in 2007.

    As readers who were around at the time may recall, the Kyoto Protocol conformed perfectly to German standards of “fairness.”  It would have required states like the United States and Canada to meet exactly the same percentage reduction in emissions from the base year 1990 as the countries in the European Union, in spite of the facts that their economies had expanded at a faster rate than most of Europe’s during the period, that they did not enjoy the same access to cheap, clean-burning natural gas as the Europeans in those pre-fracking days, and, “fairest” of all, that they weren’t the beneficiaries of massive emission reductions from the closing of obsolete east European factories following the demise of Communism.  In other words, it was “fair” for the US and Canada to shed tens of thousands of manufacturing jobs in order to meet grossly disproportionate emissions standards while Germany and the rest of the Europeans cheered from the sidelines.

    What is one to think of this latest instance of ostentatious German piety?  I don’t know whether to laugh or cry.  For one thing, the apparent concern about climate change in Germany is about 99% moralistic posing and 1% real.  Solzhenitsyn used a word in The First Circle that describes the phenomenon very well:  sharashka.  Basically, it denotes a shady enterprise built on a lie so big that even those telling it eventually begin to believe it.  The German decision to shut down their nuclear power plants demonstrated quite clearly that they’re not serious about fighting global warming.  Baseload sources of energy are needed for when renewables are unavailable because the wind isn’t blowing or the sun isn’t shining.  Practical alternatives for filling in the gaps include nuclear and fossil fuel.  Germany has rejected the former and chosen one of the dirtiest forms of the latter:  coal-fired plants using her own sources of lignite.  She plans to build no fewer than 26 of them in the coming years!

    It’s stunning, really.  These plants will pump millions of tons of CO2 and other greenhouse gases into the atmosphere that wouldn’t have been there if Germany had kept her nuclear plants on line.  Not only that, they represent a far greater radioactive danger than nuclear plants, because coal contains several parts per million of radioactive thorium and uranium.  The extent of German chutzpah is further demonstrated by a glance at recent emission numbers.  Germany is now the worst polluter in the EU.  Her CO2 emissions have risen substantially lately, due mainly to those new lignite plants beginning to come on line.  Coal-generated energy in Germany is now around 50% of the mix, the highest it’s been since 1990.  Even as the German government shook its collective head at the sinful Australians, telling them to mend their evil ways or bear the guilt for wars and revolution, not to mention the bleaching of the coral in the Great Barrier Reef, Germany’s own CO2 emissions rose 1.5% in 2013 over the previous year, while Australia’s fell by 0.8% in the same period!
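    For those inclined to check such claims, the arithmetic is simple enough.  The following figures are illustrative assumptions on my part rather than measurements from any particular German plant:  a large one-gigawatt coal plant burns on the order of four million tons of coal per year, and typical coals contain roughly three parts per million of uranium and thorium combined.  In that case,

    \[
    4 \times 10^{6}\ \tfrac{\text{tons coal}}{\text{yr}} \times 3 \times 10^{-6}\ \tfrac{\text{tons U+Th}}{\text{ton coal}} \approx 12\ \tfrac{\text{tons U+Th}}{\text{yr}},
    \]

    all of it dispersed into ash heaps and flue gases rather than sealed inside a reactor vessel.  Multiply by 26 new plants, and the radioactive comparison with nuclear speaks for itself.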

    In a word, dear reader, for the German “Greens,” the pose is everything, and the reality nothing.

  • Morality Addiction in the Academy

    Posted on September 30th, 2014 Helian No comments

    One would think that, at the very least, evolutionary psychologists would have jettisoned their belief in objective morality by now.  After all, every day new papers are published about the evolutionary roots of morality, the actual loci in the brain that give rise to different types of moral behavior, and the existence in animals of some of the same traits we associate with morality in humans.  Now, if morality evolved, it must have done so because it enhanced the odds that the genes responsible for it would survive and reproduce.  It cannot somehow acquire a life of its own and decide that it actually has some other “purpose” in mind.  The spectacle of human “experts in ethics” arbitrarily reassigning its purpose in that way is even more ludicrous.  In spite of all that, faith in the existence of disembodied good and evil persists in the academy, in defiance of all logic, in evolutionary psychology as in other disciplines.  It’s not surprising really.  For some time now academics of all stripes have been heavily invested in the myth of their own moral superiority.  Eliminate objective morality, and the basis of that myth evaporates like a mirage.  Self-righteousness and heroin are both hard habits to kick.

    Examples aren’t hard to find.  An interesting one turned up in the journal Evolutionary Psychology lately.  Entitled Evolutionary Awareness, it was submitted by authors Gregory Gorelik and Todd Shackelford.  The abstract reads as follows:

    In this article, we advance the concept of “evolutionary awareness,” a metacognitive framework that examines human thought and emotion from a naturalistic, evolutionary perspective. We begin by discussing the evolution and current functioning of the moral foundations on which our framework rests. Next, we discuss the possible applications of such an evolutionarily-informed ethical framework to several domains of human behavior, namely: sexual maturation, mate attraction, intrasexual competition, culture, and the separation between various academic disciplines. Finally, we discuss ways in which an evolutionary awareness can inform our cross-generational activities—which we refer to as “intergenerational extended phenotypes”—by helping us to construct a better future for ourselves, for other sentient beings, and for our environment.

    Those who’ve developed a nose for such things can already sniff the disembodied good and evil things-in-themselves levitating behind the curtain.  The term “better future” is a dead giveaway.  No future can be “better” than any other in an objective sense unless there is some legitimate standard of comparison that doesn’t depend on the whim of individuals.  As we read on, our suspicions are amply confirmed.  As far as its theme is concerned, the paper is just a rehash of what Konrad Lorenz and Robert Ardrey were suggesting back in the 60’s:  that there are such things as innate human behavioral predispositions, that on occasion they have promoted warfare and the other forms of mayhem that humans have indulged in over the millennia, and that it would behoove us to take this fact into account and try to find ways to limit the damage.  Unfortunately, they did so at a time when the Blank Slate, probably the greatest scientific imposture ever heard of, was at its most preposterous extreme.  They were ridiculed and ignored by the “men of science” and forgotten.  Now that the Blank Slate orthodoxy has finally collapsed after reigning supreme for the better part of half a century, their ideas are belatedly being taken seriously again, albeit with no credit given to, or even mention of, the men who originated them.

    There is, however, an important difference.  In reading through the paper, one finds that the authors not only believe in evolved morality, necessarily a subjective phenomenon, but are also true believers in a shadowy thing-in-itself that exists alongside it.  This thing is objective morality, as noted above:  an independent, and even “scientific,” something with a “purpose” quite distinct from the reasons that explain the existence of evolved morality.  The “purpose” in high fashion at the moment is usually some version of the “human flourishing” ideology advocated by Sam Harris.  No evidence has ever been given for this concoction.  Neither Sam Harris nor anyone else has ever been able to capture one of these ghostly “goods” or “evils” and submit it for examination in the laboratory.  No matter, their existence is accepted as a matter of faith, accompanied by a host of “proofs” similar to those that are devised to “prove” the existence of God.

    Let us examine the artifacts of the faith in these ghosts in the paper at hand.  As it happens, it’s lousy with them.  On page 785, for example, we read,

    Because individual choices lead to cultural movements and social patterns (Kenrick, Li, and Butner, 2003), it is up to every individual to accept the responsibility of an evolutionarily-informed ethics.

    Really?  If so, where does this “responsibility” come from?  How does it manage to acquire legitimacy?  Reading a bit further on page 785, we encounter the following remarkable passage:

    However, as with any intellectually-motivated course of action, developing an evolutionarily-informed ethics entails an intellectual sacrifice: Are we willing to forego certain reproductive benefits or personal pleasures for the sake of building a more ethical community? Such an intellectual endeavor is not just relevant to academic debates but is also of great practical and ethical importance. To apply the paleontologist G. G. Simpson’s (1951) ethical standard of knowledge and responsibility, evolutionary scientists have the responsibility of ensuring that their findings are disseminated as widely as possible. In addition, evolutionarily-minded researchers should expand their disciplinary boundaries to include the application of an evolutionary awareness to problems of ethical and practical importance. Although deciphering the ethical dimension of life’s varying circumstances is difficult, the fact that there are physical consequences for every one of our actions—consequences on other beings and on the environment—means that, for better or worse, we are all players in constructing the future of our society and that all our actions, be they microscopic or macroscopic, are reflected in the emergent properties of our society (Kenrick et al., 2003).

    In other words, not only is the existence of this “other” morality simply assumed, but we also find that its “purpose” actually contradicts the reasons that have resulted in the evolution of morality to begin with.  It is supposed to be “evolutionarily-informed,” and yet we are actually to “forego certain reproductive benefits” in its name.  Later in the paper, on page 804, we find that this apparent faith in “real” good and evil, existing independently of the subjective variety that has merely evolved, is not just a momentary faux pas.  In the author’s words,

    It is not clear what the effects of being evolutionarily aware of our political and social behaviors will be. At the least, we can raise the level of individual and societal self-awareness by shining the light of evolutionary awareness onto our religious, political, and cultural beliefs. Better still, by examining our ability to mentally time travel from an evolutionarily aware perspective, we might envision more humane futures rather than using this ability to further our own and our offspring’s reproductive interests. In this way, we may be able to monitor our individual and societal outcomes and direct them to a more ethical and well-being-enhancing direction for ourselves, for other species, for our—often fragile—environment, and for the future of all three.

    Here the authors leave us in no doubt.  They have faith in an objective something utterly distinct from evolved morality, and with entirely different “goals.”  Not surprisingly, as already noted above, this “something” actually does turn out to be a version of the “scientific” objective morality proposed by Sam Harris.  For example, on page 805,

    As Sam Harris suggested in The Moral Landscape (2010), science has the power not only to describe reality, but also to inform us as to what is moral and what is immoral (provided that we accept certain utilitarian ethical foundations such as the promotion of happiness, flourishing, and well-being—all of which fall into Haidt’s (2012) “Care/Harm” foundation of morality).

    No rational basis is ever provided, by Harris or anyone else, for how these “certain utilitarian ethical foundations” are magically transmuted from the whims of individuals to independent objects, which then somehow hijack human moral emotions and endow them with a “purpose” that has little if anything to do with the reasons that explain the evolution of those emotions to begin with.  It’s all sufficiently absurd on the face of it, and yet understandable.  Jonathan Haidt gives a brilliant description of the reasons that self-righteousness is such a ubiquitous feature of our species in The Righteous Mind.  As a class, academics are perhaps more addicted to self-righteousness than any other.  There are, after all, whole departments of “ethical experts” whose very existence becomes a bad joke unless they can maintain the illusion that they have access to some mystic understanding of the abstruse foundations of “real” good and evil, hidden from the rest of us.  The same goes for all the assorted varieties of “studies” departments, whose existence is based on the premise that there is a “good” class that is being oppressed by an “evil” class.  At least since the heyday of Communism, academics have cultivated a faith in themselves as the special guardians of the public virtue, endowed with special senses that enable them to sniff out “real” morality for the edification of the rest of us.

    Apropos Communism, it actually used to be the preferred version of “human flourishing.”  As Malcolm Muggeridge put it in his entertaining look at The Thirties,

    In 1931, protests were made in Parliament against a broadcast by a Cambridge economist, Mr. Maurice Dobb, on the ground that he was a Marxist; now (at the end of the decade, ed.) the difficulty would be to find an economist employed in any university who was not one.

    Of course, this earlier sure-fire prescription for “human flourishing” cost 100 million human lives, give or take, and has hence been abandoned by more forward-looking academics.  However, a few hoary leftovers remain on campus, and there is an amusing reminder of the fact in the paper.  On page 784 the authors admit that attempts to tinker with human nature in the past have had unfortunate results:

    Indeed, totalitarian philosophies, whether Stalinism or Nazism, often fail because of their attempts to radically change human nature at the cost of human beings.

    Note the delicate use of the term “Stalinism” instead of Communism.  Meanwhile, the proper term is used for Nazism instead of “Hitlerism.”  Of course, mass terror was well underway in the Soviet Union under Lenin, long before Stalin took over supreme power, and the people who carried it out weren’t inspired by the “philosophy” of “socialism in one country,” but by a fanatical faith in a brave new world of “human flourishing” under Communism.  Nazism in no way sought to “radically change human nature,” but masterfully took advantage of it to gain power.  The same could be said of the Communists, the only difference being that they actually did attempt to change human nature once they were in power.  I note in passing that some other interesting liberties are taken with history in the paper.  For example,

    Christianity may have indirectly led to the fall of the Roman Empire by pacifying its population into submission to the Vandals (Frost, 2010), as well as the fall of the early Viking settlers in Greenland to “pagan” Inuit invaders (Diamond, 2005)—two outcomes that collectively highlight the occasional inefficiency (from a gene’s perspective) of cultural evolution.

    Of course, the authors apparently only have these dubious speculations second hand from Frost and Diamond, whose comments on the subject I haven’t read, but they would have done well to consider some other sources before setting them down as if they had any authority.  The Roman Empire never “fell” to the Vandals.  They did sack Rome in 455 with the permission, if not of the people, at least of the gatekeepers, but the reason had a great deal more to do with an internal squabble over who should be emperor than with any supposed passivity due to Christianity.  Indeed, the Vandals themselves were Christians, albeit of the Arian flavor, and their north African kingdom was itself permanently crushed by an army under Belisarius sent by the emperor Justinian in 533.  Both Belisarius and Justinian certainly considered themselves “Romans,” as the date of 476 for the “fall of the Roman Empire” was not yet in fashion at the time.  There are many alternative theories to the supposition that the Viking settlements in Greenland “fell to the Inuits,” and to state this “outcome” as a settled fact is nonsense.

    But I digress.  To return to the subject of objective morality, it actually appears that the authors can’t comprehend the fact that it’s possible to believe anything else.  For example, they write,

     Haidt’s approach to the study of human morality is non-judgmental. He argues that the Western, cosmopolitan mindset—morally centered on the Care/Harm foundation—is limited because it is not capable of processing the many “moralities” of non-Western peoples. We disagree with this sentiment. For example, is Haidt really willing to support the expansion of the “Sanctity/Degradation” foundation (and its concomitant increase in ethnocentrism and out-group hostility)? As Pinker (2011) noted, “…right or wrong, retracting the moral sense from its traditional spheres of community, authority, and purity entails a reduction of violence” (p. 637).

    Here the authors simply can’t grok the fact that Haidt is stating an “is,” not an “ought.”  As a result, this passage is logically incomprehensible as it stands.  The authors are disagreeing with a “sentiment” that doesn’t exist.  They are incapable of grasping the fact that Haidt, who has repeatedly rejected the notion of objective morality, is merely stating a theory, not some morally loaded “should.”

    From my own subjective point of view, it is perhaps unfair to single out these two authors.  The academy is saturated with similar irrational attempts to hijack morality in the name of assorted systems designed to promote “human flourishing,” in the fond hope that the results won’t be quite so horrific as were experienced under Communism, the last such attempt to be actually realized in practice.  The addiction runs deep.  Perhaps we shouldn’t take it too hard.  After all, the Blank Slate was a similarly irrational addiction, but it eventually collapsed under the weight of its own absurdity after a mere half a century, give or take.  Perhaps, like the state was supposed to do under Communism, faith in the chimera of objective morality, or at least those versions of it not dependent on the existence of imaginary super-beings, will “wither away” in the next 50 years as well.  We can but hope.

  • Mencken Trilogy Republished: Some New Words of Wisdom from the Sage of Baltimore

    Posted on September 27th, 2014 Helian No comments

    Readers who loathe the modern joyless version of Puritanism, shorn of its religious impedimenta, that has become the dominant dogma of our time, and who would like to escape for a while to a happier time in which ostentatious public piety was not yet de rigueur, are in luck.  An expanded version of H. L. Mencken’s “Days” trilogy has just been published, edited by Marion Elizabeth Rodgers.  It includes Happy Days, Newspaper Days, and Heathen Days, and certainly ranks as one of the most entertaining autobiographies ever written.  The latest version actually contains a bonus for Mencken fans.  As noted in the book’s Amazon blurb,

    …unknown to the legions of Days books’ admirers, Mencken continued to add to them after publication, annotating and expanding each volume in typescripts sealed to the public for twenty-five years after his death. Until now, most of this material—often more frank and unvarnished than the original Days books—has never been published.  (This latest version contains) nearly 200 pages of previously unseen writing, and is illustrated with photographs from Mencken’s archives, many taken by Mencken himself.

    Infidel that he was, the Sage of Baltimore would have smiled to see the hardcover version.  It comes equipped with not one, but two of those little string bookmarks normally found in family Bibles.  I’ve read an earlier version of the trilogy, but that was many years ago.  I recalled many of Mencken’s anecdotes as I encountered them again, and perhaps with a bit more insight.  I know a great deal more about the author than I did the first time through, not to mention the times in which he lived.   There’ve been some changes made since then, to say the least.  For example, Mencken recalls that maids were paid $10 a month plus room and board in the 1880’s, but no less than $12 a month from about 1890 on.  Draught beer was a nickel, and a first class businessman’s lunch at a downtown hotel with soup, a meat dish, two side dishes, pie and coffee, was a quarter.  A room on the “American plan,” complete with three full meals a day, was $2.50.

    Mencken was already beginning to notice the transition to today’s “kinder, gentler” mode of raising children in his later days, but experienced few such ameliorations in his own childhood.  Children weren’t “spared the rod,” either by their parents or their teachers.  Mencken recalls that the headmaster of his first school, one Prof. Friedrich Knapp, had a separate ritual for administering corporal punishment to boys and girls, and wore out a good number of rattan switches in the process.  Even the policemen had strips of leather dangling from their clubs, with which they chastised juveniles who ran afoul of the law.  Parents took all this as a matter of course, and the sage never knew any of his acquaintance to complain.  When school started, the children were given one dry run on the local horse car accompanied by their parents, and were sent out on their own thereafter.  Of course, Mencken and his sister got lost on their first try, but were set on the right track by a policeman and some Baltimore stevedores.  No one thought of such a thing as supervising children at play. One encounters many similar changes in the social scene as one progresses through the trilogy, but the nature of the human beast hasn’t changed much.  All the foibles and weaknesses Mencken describes are still with us today.  He was, of course, one of the most prominent atheists in American history, and often singled out the more gaudy specimens of the faithful for special attention.  His description of the Scopes monkey trial in Heathen Days is a classic example.  I suspect he would have taken a dim view of the New Atheists.  In his words,

    No male of the Mencken family, within the period that my memory covers, ever took religion seriously enough to be indignant about it.  There were no converts from the faith among us, and hence no bigots or fanatics.  To this day I have a distrust of such fallen-aways, and when one of them writes in to say that some monograph of mine has aided him in throwing off the pox of Genesis my rejoicing over the news is very mild indeed.

    Of course, if one possesses the wit of a Mencken or a Voltaire, one has the luxury of fighting the bigotry and fanaticism coming from the other side very effectively without using the same weapons.

    I certainly encourage those who haven’t read Mencken to pick up a copy of this latest release of his work.  Those interested in more detail about the content may consult the work of professional reviewers that I’m sure will soon appear.  I will limit myself to one more observation.  It never fails that when some new bit of Menckeniana appears, the self-appointed guardians of the public virtue climb up on their soapboxes and condemn him as a racist.  Anyone who reads the Days will immediately see where this charge comes from.  Mencken makes free use of the N word and several other terms for African-Americans that have been banned from the lexicon over the ensuing years.  No matter that he didn’t use more flattering terms to describe other subgroups of the population, and certainly not the white “boobeoisie” of the cities, or the “hinds” and “yokels” of the country.

    Nothing could be more untrue or unfair than this charge of “racism,” but, alas, to give the lie to it one must actually read Mencken’s work, and few of the preening moralists of our own day are willing to go to the trouble.  That’s sad, because none of them have contributed anywhere near as much as Mencken to the cause of racial equality.  He did that by ignoring the racist conventions of his own day and cultivating respect for black thinkers and intellectuals by actively seeking them out and publishing their work, most notably in the American Mercury, which he edited from its inception in 1924 until he turned over the reins to Charles Angoff in 1933.  He didn’t publish them out of condescension or pity, or as their self-appointed savior, or out of an inordinate love of moralistic grandstanding of the sort that has become so familiar in our own day.  He paid them a much higher compliment.  He published them because, unlike so many others in his own time, he was not blind to their intellectual gifts, and rightly concluded that their work was not only worthy of, but would enhance the value of the Mercury, one of the premier intellectual, political and literary journals of the time.  As a result, the work of a host of African-American intellectuals, professionals, and poets appeared in Mencken’s magazine, eclipsing the Nation, The New Republic, The Century, or any other comparable journal of the day in that regard.  All this can be easily fact-checked, because every issue of the Mercury published during Mencken’s tenure as editor can now be read online.  For example, there are contributions by W. E. B. Du Bois in the issue of October 1924, a young poet named Countee P. Cullen in November 1924, newspaper reporter and editor Eugene Gordon in June 1926, James Weldon Johnson, diplomat, author, lawyer, and former leader of the NAACP in April 1927, George Schuyler, author and social commentator in December 1927, Langston Hughes, poet, author, and activist in November 1933, and many others.

    Most issues of the Mercury included an Americana section devoted to ridiculing absurdities discovered in various newspapers and other publications listed by state.  Mencken used it regularly to heap scorn on genuine racists.  For example, from the March 1925 issue:

    North Carolina

    Effects of the war for democracy among the Tar Heels, as reported in a dispatch from Goldsboro:

    Allen Moses and his wife, wealthy Negroes, left here in Pullman berths tonight for Washington and New York.  This is the first time in the history of this city that Negroes have “had the nerve,” as one citizen expressed it, to buy sleeper tickets here.  White citizens are aroused, and it is said the Ku Klux Klan will be asked to give Moses a warm reception on his return.

    From the May 1926 issue:

    North Carolina

    The rise of an aristocracy among the defenders of 100% Americanism, as revealed by a dispatch from Durham:

    “According to reports being circulated here the Ku Klux Klan has added a new wrinkle to its activities and are now giving distinguished service crosses to member of the hooded order of the reconstruction days.  In keeping with this new custom, it is reported that two Durham citizens were recipients of this honor recently.  The medal, as explained by the honorable klansman making the award, is of no intrinsic value, ‘but the sentiment attached to it and the heart throbs that go with it are as measureless as the sands of the sea.'”

    From the August 1928 issue:

    District of Columbia

    The Hon. Cole L. Blease, of South Carolina, favors his colleagues in the Senate with a treatise on southern ethics:

    “There are not enough marines in or outside of the United States Army or Navy, in Nicaragua, and all combined, to make us associate with niggers.  We never expect to.  We never have; but we treat them fairly.  If you promise one of the $5 for a days work, if he does the days work, I believe you should pay him.”

    So much for the alleged “racism” of H. L. Mencken.  It reminds me of a poster that was prominently displayed in an office I once worked in.  It bore the motto, “No good deed goes unpunished.”


  • Comments on Some Comments on the National Ignition Facility

    Posted on September 23rd, 2014 Helian No comments

    We live in a dauntingly complex world.  Progress in the world of science is relevant to all of us, yet it is extremely difficult, although certainly not impossible, for the intelligent layperson to gain a useful understanding of what is actually going on.  I say “not impossible” because I believe it’s possible for non-experts to gain enough knowledge to usefully contribute to the conversation about the technological and social relevance of a given scientific specialty, if not of its abstruse details, assuming they are willing to put in the effort.  Indeed, when it comes to social relevance it’s not out of the question for them to become more knowledgeable than the scientists themselves, narrowly focused as they often are on a particular specialty.

    To illustrate my point, I invite my readers to take a look at a post that recently appeared on the blog LLNL – The True Story.  LLNL, or Lawrence Livermore National Laboratory, is one of the nation’s three major nuclear weapons research laboratories.  It is also the home of the National Ignition Facility, which, as its name implies, was designed to achieve fusion “ignition” by focusing a giant assembly of 192 powerful laser beams on tiny targets containing a mixture of deuterium and tritium fuel.  The process itself is called inertial confinement fusion, or ICF.  Ignition is variously defined, but as far as the NIF is concerned, LLNL officially accepted the definition of ignition as fusion energy out equal to total laser energy in, in the presence of members of a National Academy of Sciences oversight committee.  This is a definition that puts it on a level playing field with the competing magnetic confinement approach to fusion.
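    Written as a formula, the definition LLNL accepted amounts to a target gain of at least unity, where the denominator is the total laser energy delivered to the target, roughly 1.8 megajoules for a full NIF shot (the figure is approximate, quoted here from memory):

    \[
    G \equiv \frac{E_{\text{fusion}}}{E_{\text{laser}}} \geq 1, \qquad E_{\text{laser}} \approx 1.8\ \text{MJ}.
    \]

    Definitions that count only the energy actually absorbed by the capsule set a far lower bar, which is why the choice of denominator, and hence the “level playing field” with magnetic fusion, matters.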

    According to the blurb that appears on the home page of LLNL – The True Story, its purpose is “for LLNL present and past employees, friends of LLNL and anyone impacted by the privatization of the Lab to express their opinions and expose the waste, wrongdoing and any kind of injustice against employees and taxpayers by LLNS/DOE/NNSA.”  The post in question is entitled ICF Program is now Officially Owned by WCI (Weapons and Concepts Integration).  It’s certainly harmless enough as it stands, consisting only of the line,

    ICF program is now officially owned by WCI.  A step forward or an attempt to bury it out of sight?

    This is followed by an apparently broken link to the story referred to.  Its gist can probably be found here.  Presumably the author suspects LLNL might want to “bury it out of sight” because the first attempt to achieve ignition, known as the National Ignition Campaign, or NIC, failed to achieve its goal.  What’s really of interest is not the post itself, but the comments following it.  The commenters are all listed as “anonymous,” but given the nature of the blog we can probably assume that most of them are scientists of one tribe or another.  Let’s take a look at what they have to say.  According to the first “anonymous,”

    If (takeover of NIF by WCI) is an attempt to keep funding flowing by switching milestones from energy independence to weapons research.  “Contingency Plan B”.

    Another “anonymous” writes in a similar vein:

    Reading between the lines it is clear that the new energy source mission of the NIF is over and now it’s time to justify the unjustifiable costs by claiming it’s a great too for weapons research.

    Perhaps the second commenter would have done better to read the lines as they stand rather than between them.  In that case he would have noticed that energy independence was never an official NIF milestone, not to mention its “mission.”  NIF was funded for the purpose of weapons research from the start.  This fact was never in any way a deep, dark secret, and has long been obvious to anyone willing to take the trouble to consult the relevant publicly accessible documents.  The Inertial Confinement Fusion Advisory Committee, a Federal Advisory Committee that met intermittently in the early to mid-90’s, and whose members included a bevy of heavyweights in plasma physics and related specialties, was certainly aware of the fact, and recommended funding of the facility on that basis, with the single dissenting vote coming from Tim Coffey, then Director of the Naval Research Laboratory.

    Be that as it may, the claim that the technology could also end our dependence on fossil fuel, often made by the NIF’s defenders, is credible.  By “credible” I mean that many highly capable scientists have long held and continue to hold that opinion.  As it happens, I don’t.  Assuming we find a way to achieve ignition and high gain in the laboratory, it will certainly become scientifically feasible to generate energy with ICF power plants.  However, IMHO it will never be economically feasible, for reasons I have outlined in earlier posts.  Regardless, from a public relations standpoint, it was obviously preferable to evoke the potential of the NIF as a clean source of energy rather than as a weapons project designed to maintain the safety and reliability of our nuclear arsenal, as essential as that capability may actually be.  In spite of my own personal opinion on the subject, these claims were neither disingenuous nor mere “hype.”

    Another “anonymous” writes,

    What’s this user facility bullshit about?  Only Livermore uses the facility.  Cost recovery demands that a university would have to pay $1 million for a shot.  How can it be a user facility if it’s run by the weapons program?  This isn’t exactly SLAC we’re talking about.

    Here, again, the commenter is simply wrong.  Livermore is not the only user of NIF, and it is, in fact, a user facility.  Users to date include a team from MIT headed by Prof. Richard Petrasso.  I’m not sure how the users are currently funded, but in the past funds for experiments on similar facilities were allocated through a proposal process, similar to that used to fund other government-funded academic research.  The commenter continues,

    By the way, let’s assume NIF wants to be a “user facility” for stockpile stewardship.  Since ignition is impossible, the EOS (Equation of State, relevant to the physics of nuclear weapons, ed.) work is garbage, and the temperatures are not relevant to anything that goes bang, what use is this machine?

    NIF does not “want to be a user facility for stockpile stewardship.”  Stress has always been on high energy density physics (HEDP), which has many other potential applications besides stockpile stewardship.  I was not surprised that NIF did not achieve ignition immediately.  In fact, I predicted as much in a post on this blog two years before the facility became operational.  However, many highly competent scientists disagreed with me, and for credible scientific reasons.  The idea that ignition is “impossible” just because it wasn’t achieved in the first ignition campaign using the indirect drive approach is nonsense.  Several other credible approaches have not yet even been tried, including polar direct drive, fast ignitor, and hitting the targets with green (frequency doubled) rather than blue (frequency tripled) light.  The latter approach would enable a substantial increase in the available laser energy on target.  The EOS work is not garbage, as any competent weapons designer will confirm as long as they are not determined to force the resumption of nuclear testing by hook or by crook, and some of the best scientists at Livermore confirmed long ago that the temperatures achievable on the NIF are indeed relevant to things that go bang, whether it achieves ignition or not.  In fact, the facility allows us to access physical conditions that can be approached in the laboratory nowhere else on earth, giving us a significant leg up over the international competition in maintaining a safe and reliable arsenal, as long as testing is not resumed.
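    For readers unfamiliar with the jargon, the colors refer to harmonics of NIF’s neodymium glass lasers, which operate at a fundamental infrared wavelength of about 1053 nanometers.  Nonlinear crystals double or triple the frequency before the beams reach the target, dividing the wavelength accordingly:

    \[
    \lambda_{n\omega} = \frac{\lambda_{1\omega}}{n}, \qquad \lambda_{2\omega} \approx \frac{1053\ \text{nm}}{2} \approx 527\ \text{nm (green)}, \qquad \lambda_{3\omega} \approx 351\ \text{nm (ultraviolet; the “blue” referred to above)}.
    \]

    Since the final conversion to 3ω costs energy and the ultraviolet light is harder on the optics, running at 2ω would, as noted above, allow substantially more laser energy to be delivered to the target.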

    Anonymous number 4 chimes in,

    I love this quote (apparently from the linked article, ed.):

    “the demonstration of laboratory ignition and its use to support the Stockpile Stewardship Program (SSP) is a major goal for this program”

    Hey guys, this has already failed.  Why are we still spending money on this?  A lot of other laboratories could use the $$.  You’re done.

    The quote this “anonymous” loves is a simple statement of fact.  For the reasons already cited, the idea that ignition on the NIF is hopeless is nonsense.  The (very good) reason we’re still spending money on the project is that NIF is and will continue into the foreseeable future to be one of the most capable and effective above ground experimental (AGEX) facilities in the world.  It can access physical conditions relevant to nuclear weapons regardless of whether it achieves ignition or not.  For that reason it is an invaluable tool for maintaining our arsenal unless one’s agenda happens to be the resumption of nuclear testing.  Hint:  The idea that no one in DOE, NNSA, or the national weapons laboratories wants to resume testing belongs in the realm of fantasy.  Consider, for example, what the next “anonymous” is actually suggesting:

    Attempting to get funding for NIF and computations’s big machines was made easier by claiming dual purposes but I always felt that the real down and dirty main purpose was weapons research.  If you want to get support from the anti-weapon Feinstein/Boxer/Pelosi contingent you need to put the “energy” lipstick on the pig.  Or we could go back to testing.  Our cessation of testing doesn’t seem to have deterred North Korea and Iran that much.

    Yes, Virginia, even scientists occasionally do have agendas of their own.  What can I say?  To begin with, I suppose, one should never be intimidated by the pontifications of scientists.  The specimens on display here clearly don’t have a clue what they’re talking about.  Any non-technical observer of middling intelligence could become more knowledgeable than they are on the topics they’re discussing by devoting a few hours to researching them on the web.  As to how the non-technical observer is to acquire enough knowledge to actually know that he knows more than the scientific specialists, I can offer no advice, other than to head to your local university and acquire a Ph.D.  I am, BTW, neither employed by nor connected in any other way with LLNL.