Helian Unbound

The world as I see it
  • Morality Addiction in the Academy

    Posted on September 30th, 2014 Helian No comments

    One would think that, at the very least, evolutionary psychologists would have jettisoned their belief in objective morality by now.  After all, every day new papers are published about the evolutionary roots of morality, the actual loci in the brain that give rise to different types of moral behavior, and the existence in animals of some of the same traits we associate with morality in humans.  Now, if morality evolved, it must have done so because it enhanced the odds that the genes responsible for it would survive and reproduce.  It cannot somehow acquire a life of its own and decide that it actually has some other “purpose” in mind.  The spectacle of human “experts in ethics” arbitrarily reassigning its purpose in that way is even more ludicrous.  In spite of all that, faith in the existence of disembodied good and evil persists in the academy, in defiance of all logic, in evolutionary psychology as in other disciplines.  It’s not surprising really.  For some time now academics of all stripes have been heavily invested in the myth of their own moral superiority.  Eliminate objective morality, and the basis of that myth evaporates like a mirage.  Self-righteousness and heroin are both hard habits to kick.

    Examples aren’t hard to find.  An interesting one turned up lately in the journal Evolutionary Psychology:  a paper entitled Evolutionary Awareness, by Gregory Gorelick and Todd Shackelford.  Its abstract reads as follows:

    In this article, we advance the concept of “evolutionary awareness,” a metacognitive framework that examines human thought and emotion from a naturalistic, evolutionary perspective. We begin by discussing the evolution and current functioning of the moral foundations on which our framework rests. Next, we discuss the possible applications of such an evolutionarily-informed ethical framework to several domains of human behavior, namely: sexual maturation, mate attraction, intrasexual competition, culture, and the separation between various academic disciplines. Finally, we discuss ways in which an evolutionary awareness can inform our cross-generational activities—which we refer to as “intergenerational extended phenotypes”—by helping us to construct a better future for ourselves, for other sentient beings, and for our environment.

    Those who’ve developed a nose for such things can already sniff the disembodied good and evil things-in-themselves levitating behind the curtain.  The term “better future” is a dead giveaway.  No future can be “better” than any other in an objective sense unless there is some legitimate standard of comparison that doesn’t depend on the whim of individuals.  As we read on, our suspicions are amply confirmed.  As far as its theme is concerned, the paper is just a rehash of what Konrad Lorenz and Robert Ardrey were suggesting back in the 60’s:  that there are such things as innate human behavioral predispositions, that on occasion they have promoted warfare and the other forms of mayhem that humans have indulged in over the millennia, and that it would behoove us to take this fact into account and try to find ways to limit the damage.  Unfortunately, they did so at a time when the Blank Slate, probably the greatest scientific imposture ever heard of, was at its most preposterous extreme.  They were ridiculed and ignored by the “men of science” and forgotten.  Now that the Blank Slate orthodoxy has finally collapsed after reigning supreme for the better part of half a century, their ideas are belatedly being taken seriously again, albeit without any credit given to them, or any mention of their names.

    There is, however, an important difference.  In reading through the paper, one finds that the authors not only believe in evolved morality, necessarily a subjective phenomenon, but are also true believers in a shadowy thing-in-itself that exists alongside it.  This thing is objective morality, as noted above:  an independent, and even “scientific,” something that has a “purpose” quite distinct from the reasons that explain the existence of evolved morality.  The “purpose” in high fashion at the moment is usually some version of the “human flourishing” ideology advocated by Sam Harris.  No evidence has ever been given for this concoction.  Neither Sam Harris nor anyone else has ever been able to capture one of these ghostly “goods” or “evils” and submit it for examination in the laboratory.  No matter; their existence is accepted as a matter of faith, accompanied by a host of “proofs” similar to those that are devised to “prove” the existence of God.

    Let us examine the artifacts of the faith in these ghosts in the paper at hand.  As it happens, it’s lousy with them.  On page 785, for example, we read,

    Because individual choices lead to cultural movements and social patterns (Kenrick, Li, and Butner, 2003), it is up to every individual to accept the responsibility of an evolutionarily-informed ethics.

    Really?  If so, where does this “responsibility” come from?  How does it manage to acquire legitimacy?  Reading a bit further on page 785, we encounter the following remarkable passage:

    However, as with any intellectually-motivated course of action, developing an evolutionarily-informed ethics entails an intellectual sacrifice: Are we willing to forego certain reproductive benefits or personal pleasures for the sake of building a more ethical community? Such an intellectual endeavor is not just relevant to academic debates but is also of great practical and ethical importance. To apply the paleontologist G. G. Simpson’s (1951) ethical standard of knowledge and responsibility, evolutionary scientists have the responsibility of ensuring that their findings are disseminated as widely as possible. In addition, evolutionarily-minded researchers should expand their disciplinary boundaries to include the application of an evolutionary awareness to problems of ethical and practical importance. Although deciphering the ethical dimension of life’s varying circumstances is difficult, the fact that there are physical consequences for every one of our actions—consequences on other beings and on the environment—means that, for better or worse, we are all players in constructing the future of our society and that all our actions, be they microscopic or macroscopic, are reflected in the emergent properties of our society (Kenrick et al., 2003).

    In other words, not only is the existence of this “other” morality simply assumed, but we also find that its “purpose” actually contradicts the reasons that have resulted in the evolution of morality to begin with.  It is supposed to be “evolutionarily-informed,” and yet we are actually to “forego certain reproductive benefits” in its name.  Later in the paper, on page 804, we find that this apparent faith in “real” good and evil, existing independently of the subjective variety that has merely evolved, is not just a momentary faux pas.  In the authors’ words,

    It is not clear what the effects of being evolutionarily aware of our political and social behaviors will be. At the least, we can raise the level of individual and societal self-awareness by shining the light of evolutionary awareness onto our religious, political, and cultural beliefs. Better still, by examining our ability to mentally time travel from an evolutionarily aware perspective, we might envision more humane futures rather than using this ability to further our own and our offspring’s reproductive interests. In this way, we may be able to monitor our individual and societal outcomes and direct them to a more ethical and well-being-enhancing direction for ourselves, for other species, for our—often fragile—environment, and for the future of all three.

    Here the authors leave us in no doubt.  They have faith in an objective something utterly distinct from evolved morality, and with entirely different “goals.”  Not surprisingly, as already noted above, this “something” actually does turn out to be a version of the “scientific” objective morality proposed by Sam Harris.  For example, on page 805,

    As Sam Harris suggested in The Moral Landscape (2010), science has the power not only to describe reality, but also to inform us as to what is moral and what is immoral (provided that we accept certain utilitarian ethical foundations such as the promotion of happiness, flourishing, and well-being—all of which fall into Haidt’s (2012) “Care/Harm” foundation of morality).

    No rational basis is ever provided, by Harris or anyone else, for how these “certain utilitarian ethical foundations” are magically transmuted from the whims of individuals to independent objects, which then somehow hijack human moral emotions and endow them with a “purpose” that has little if anything to do with the reasons that explain the evolution of those emotions to begin with.  It’s all sufficiently absurd on the face of it, and yet understandable.  Jonathan Haidt gives a brilliant description of the reasons that self-righteousness is such a ubiquitous feature of our species in The Righteous Mind.  As a class, academics are perhaps more addicted to self-righteousness than any other.  There are, after all, whole departments of “ethical experts” whose very existence becomes a bad joke unless they can maintain the illusion that they have access to some mystic understanding of the abstruse foundations of “real” good and evil, hidden from the rest of us.  The same goes for all the assorted varieties of “studies” departments, whose existence is based on the premise that there is a “good” class that is being oppressed by an “evil” class.  At least since the heyday of Communism, academics have cultivated a faith in themselves as the special guardians of the public virtue, endowed with special senses that enable them to sniff out “real” morality for the edification of the rest of us.

    Apropos Communism, it actually used to be the preferred version of “human flourishing.”  As Malcolm Muggeridge put it in his entertaining look at The Thirties,

    In 1931, protests were made in Parliament against a broadcast by a Cambridge economist, Mr. Maurice Dobb, on the ground that he was a Marxist; now (at the end of the decade, ed.) the difficulty would be to find an economist employed in any university who was not one.

    Of course, this earlier sure-fire prescription for “human flourishing” cost 100 million human lives, give or take, and has hence been abandoned by more forward-looking academics.  However, a few hoary leftovers remain on campus, and there is an amusing reminder of the fact in the paper.  On page 784 the authors admit that attempts to tinker with human nature in the past have had unfortunate results:

    Indeed, totalitarian philosophies, whether Stalinism or Nazism, often fail because of their attempts to radically change human nature at the cost of human beings.

    Note the delicate use of the term “Stalinism” instead of Communism.  Meanwhile, the proper term is used for Nazism instead of “Hitlerism.”  Of course, mass terror was well underway in the Soviet Union under Lenin, long before Stalin took over supreme power, and the people who carried it out weren’t inspired by the “philosophy” of “socialism in one country,” but by a fanatical faith in a brave new world of “human flourishing” under Communism.  Nazism in no way sought to “radically change human nature,” but masterfully took advantage of it to gain power.  The same could be said of the Communists, the only difference being that they actually did attempt to change human nature once they were in power.  I note in passing that some other interesting liberties are taken with history in the paper.  For example,

    Christianity may have indirectly led to the fall of the Roman Empire by pacifying its population into submission to the Vandals (Frost, 2010), as well as the fall of the early Viking settlers in Greenland to “pagan” Inuit invaders (Diamond, 2005)—two outcomes that collectively highlight the occasional inefficiency (from a gene’s perspective) of cultural evolution.

    Of course, the authors apparently only have these dubious speculations second hand from Frost and Diamond, whose comments on the subject I haven’t read, but they would have done well to consult some other sources before setting them down as if they had any authority.  The Roman Empire never “fell” to the Vandals.  They did sack Rome in 455 with the permission, if not of the people, at least of the gatekeepers, but the reason had a great deal more to do with an internal squabble over who should be emperor than with any supposed passivity due to Christianity.  Indeed, the Vandals themselves were Christians, albeit of the Arian flavor, and their north African kingdom was itself permanently crushed by an army under Belisarius sent by the emperor Justinian in 533.  Both Belisarius and Justinian certainly considered themselves “Romans,” as the date of 476 for the “fall of the Roman Empire” was not yet in fashion at the time.  There are many alternative theories to the supposition that the Viking settlements in Greenland “fell to the Inuit,” and to state this “outcome” as a settled fact is nonsense.

    But I digress.  To return to the subject of objective morality, it actually appears that the authors can’t comprehend the fact that it’s possible to believe anything else.  For example, they write,

     Haidt’s approach to the study of human morality is non-judgmental. He argues that the Western, cosmopolitan mindset—morally centered on the Care/Harm foundation—is limited because it is not capable of processing the many “moralities” of non-Western peoples. We disagree with this sentiment. For example, is Haidt really willing to support the expansion of the “Sanctity/Degradation” foundation (and its concomitant increase in ethnocentrism and out-group hostility)? As Pinker (2011) noted, “…right or wrong, retracting the moral sense from its traditional spheres of community, authority, and purity entails a reduction of violence” (p. 637).

    Here the authors simply can’t grok the fact that Haidt is stating an “is,” not an “ought.”  As a result, this passage is logically incomprehensible as it stands.  The authors are disagreeing with a “sentiment” that doesn’t exist.  They are incapable of grasping the fact that Haidt, who has repeatedly rejected the notion of objective morality, is merely stating a theory, not some morally loaded “should.”

    From my own subjective point of view, it is perhaps unfair to single out these two authors.  The academy is saturated with similar irrational attempts to hijack morality in the name of assorted systems designed to promote “human flourishing,” in the fond hope that the results won’t be quite so horrific as those experienced under Communism, the last such attempt to be actually realized in practice.  The addiction runs deep.  Perhaps we shouldn’t take it too hard.  After all, the Blank Slate was a similarly irrational addiction, but it eventually collapsed under the weight of its own absurdity after a mere half a century, give or take.  Perhaps, as the state was supposed to do under Communism, faith in the chimera of objective morality, or at least those versions of it not dependent on the existence of imaginary super-beings, will “wither away” in the next 50 years as well.  We can but hope.

  • Mencken Trilogy Republished: Some New Words of Wisdom from the Sage of Baltimore

    Posted on September 27th, 2014 Helian No comments

    Readers who loathe the modern joyless version of Puritanism, shorn of its religious impedimenta, that has become the dominant dogma of our time, and who would like to escape for a while to a happier time in which ostentatious public piety was not yet de rigueur, are in luck.  An expanded version of H. L. Mencken’s “Days” trilogy has just been published, edited by Marion Elizabeth Rogers.  It includes Happy Days, Newspaper Days, and Heathen Days, and certainly ranks as one of the most entertaining autobiographies ever written.  The latest version actually contains a bonus for Mencken fans.  As noted in the book’s Amazon blurb,

    …unknown to the legions of Days books’ admirers, Mencken continued to add to them after publication, annotating and expanding each volume in typescripts sealed to the public for twenty-five years after his death. Until now, most of this material—often more frank and unvarnished than the original Days books—has never been published.  (This latest version contains) nearly 200 pages of previously unseen writing, and is illustrated with photographs from Mencken’s archives, many taken by Mencken himself.

    Infidel that he was, the Sage of Baltimore would have smiled to see the hardcover version.  It comes equipped with not one, but two of those little string bookmarks normally found in family Bibles.  I’ve read an earlier version of the trilogy, but that was many years ago.  I recalled many of Mencken’s anecdotes as I encountered them again, and perhaps with a bit more insight.  I know a great deal more about the author than I did the first time through, not to mention the times in which he lived.   There’ve been some changes made since then, to say the least.  For example, Mencken recalls that maids were paid $10 a month plus room and board in the 1880’s, but no less than $12 a month from about 1890 on.  Draught beer was a nickel, and a first class businessman’s lunch at a downtown hotel with soup, a meat dish, two side dishes, pie and coffee, was a quarter.  A room on the “American plan,” complete with three full meals a day, was $2.50.

    Mencken was already beginning to notice the transition to today’s “kinder, gentler” mode of raising children in his later days, but experienced few such ameliorations in his own childhood.  Children weren’t “spared the rod,” either by their parents or their teachers.  Mencken recalls that the headmaster of his first school, one Prof. Friedrich Knapp, had a separate ritual for administering corporal punishment to boys and girls, and wore out a good number of rattan switches in the process.  Even the policemen had strips of leather dangling from their clubs, with which they chastised juveniles who ran afoul of the law.  Parents took all this as a matter of course, and the sage never knew any of his acquaintance to complain.  When school started, the children were given one dry run on the local horse car accompanied by their parents, and were sent out on their own thereafter.  Of course, Mencken and his sister got lost on their first try, but were set on the right track by a policeman and some Baltimore stevedores.  No one thought of such a thing as supervising children at play. One encounters many similar changes in the social scene as one progresses through the trilogy, but the nature of the human beast hasn’t changed much.  All the foibles and weaknesses Mencken describes are still with us today.  He was, of course, one of the most prominent atheists in American history, and often singled out the more gaudy specimens of the faithful for special attention.  His description of the Scopes monkey trial in Heathen Days is a classic example.  I suspect he would have taken a dim view of the New Atheists.  In his words,

    No male of the Mencken family, within the period that my memory covers, ever took religion seriously enough to be indignant about it.  There were no converts from the faith among us, and hence no bigots or fanatics.  To this day I have a distrust of such fallen-aways, and when one of them writes in to say that some monograph of mine has aided him in throwing off the pox of Genesis my rejoicing over the news is very mild indeed.

    Of course, if one possesses the wit of a Mencken or a Voltaire, one has the luxury of fighting the bigotry and fanaticism coming from the other side very effectively without using the same weapons.

    I certainly encourage those who haven’t read Mencken to pick up a copy of this latest release of his work.  Those interested in more detail about the content may consult the work of professional reviewers that I’m sure will soon appear.  I will limit myself to one more observation.  It never fails that when some new bit of Menckeniana appears, the self-appointed guardians of the public virtue climb up on their soapboxes and condemn him as a racist.  Anyone who reads the Days will immediately see where this charge comes from.  Mencken makes free use of the N word and several other terms for African-Americans that have been banned from the lexicon over the ensuing years.  No matter that he used no more flattering terms to describe other subgroups of the population, certainly including the white “boobeoisie” of the cities and the “hinds” and “yokels” of the country.

    Nothing could be more untrue or unfair than this charge of “racism,” but, alas, to give the lie to it one must actually read Mencken’s work, and few of the preening moralists of our own day are willing to go to the trouble.  That’s sad, because none of them have contributed anywhere near as much as Mencken to the cause of racial equality.  He did that by ignoring the racist conventions of his own day and cultivating respect for black thinkers and intellectuals by actively seeking them out and publishing their work, most notably in the American Mercury, which he edited from its inception in 1924 until he turned over the reins to Charles Angoff in 1933.  He didn’t publish them out of condescension or pity, or as their self-appointed savior, or out of an inordinate love of moralistic grandstanding of the sort that has become so familiar in our own day.  He did them a much higher favor.  He published them because, unlike so many others in his own time, he was not blind to their intellectual gifts, and rightly concluded that their work was not only worthy of the Mercury, one of the premier intellectual, political and literary journals of the time, but would enhance its value.  As a result, the work of a host of African-American intellectuals, professionals, and poets appeared in Mencken’s magazine, eclipsing the Nation, The New Republic, The Century, or any other comparable journal of the day in that regard.  All this can be easily fact-checked, because every issue of the Mercury published during Mencken’s tenure as editor can now be read online.  For example, there are contributions by W. E. B. Du Bois in the issue of October 1924, a young poet named Countee P. Cullen in November 1924, newspaper reporter and editor Eugene Gordon in June 1926, James Weldon Johnson, diplomat, author, lawyer, and former leader of the NAACP in April 1927, George Schuyler, author and social commentator, in December 1927, Langston Hughes, poet, author, and activist, in November 1933, and many others.

    Most issues of the Mercury included an Americana section devoted to ridiculing absurdities discovered in various newspapers and other publications listed by state.  Mencken used it regularly to heap scorn on genuine racists.  For example, from the March 1925 issue:

    North Carolina

    Effects of the war for democracy among the Tar Heels, as reported in a dispatch from Goldsboro:

    Allen Moses and his wife, wealthy Negroes, left here in Pullman berths tonight for Washington and New York.  This is the first time in the history of this city that Negroes have “had the nerve,” as one citizen expressed it, to buy sleeper tickets here.  White citizens are aroused, and it is said the Ku Klux Klan will be asked to give Moses a warm reception on his return.

    From the May 1926 issue:

    North Carolina

    The rise of an aristocracy among the defenders of 100% Americanism, as revealed by a dispatch from Durham:

    “According to reports being circulated here the Ku Klux Klan has added a new wrinkle to its activities and are now giving distinguished service crosses to member of the hooded order of the reconstruction days.  In keeping with this new custom, it is reported that two Durham citizens were recipients of this honor recently.  The medal, as explained by the honorable klansman making the award, is of no intrinsic value, ‘but the sentiment attached to it and the heart throbs that go with it are as measureless as the sands of the sea.'”

    From the August 1928 issue:

    District of Columbia

    The Hon. Cole L. Blease, of South Carolina, favors his colleagues in the Senate with a treatise on southern ethics:

    “There are not enough marines in or outside of the United States Army or Navy, in Nicaragua, and all combined, to make us associate with niggers.  We never expect to.  We never have; but we treat them fairly.  If you promise one of the $5 for a days work, if he does the days work, I believe you should pay him.”

    So much for the alleged “racism” of H. L. Mencken.  It reminds me of a poster that was prominently displayed in an office I once worked in.  It bore the motto, “No good deed goes unpunished.”

     

  • Comments on Some Comments on the National Ignition Facility

    Posted on September 23rd, 2014 Helian No comments

    We live in a dauntingly complex world.  Progress in the world of science is relevant to all of us, yet it is extremely difficult, although certainly not impossible, for the intelligent layperson to gain a useful understanding of what is actually going on.  I say “not impossible” because I believe it’s possible for non-experts to gain enough knowledge to usefully contribute to the conversation about the technological and social relevance of a given scientific specialty, if not of its abstruse details, assuming they are willing to put in the effort.  Indeed, when it comes to social relevance it’s not out of the question for them to become more knowledgeable than the scientists themselves, narrowly focused as they often are on a particular specialty.

    To illustrate my point, I invite my readers to take a look at a post that recently appeared on the blog LLNL – The True Story.  LLNL, or Lawrence Livermore National Laboratory, is one of the nation’s three major nuclear weapons research laboratories.  It is also home of the National Ignition Facility, which, as its name implies, was designed to achieve fusion “ignition” by focusing a giant assembly of 192 powerful laser beams on tiny targets containing a mixture of deuterium and tritium fuel.  The process itself is called inertial confinement fusion, or ICF.  Ignition is variously defined, but as far as the NIF is concerned, LLNL officially accepted, in the presence of members of a National Academy of Sciences oversight committee, the definition of ignition as fusion energy out equal to total laser energy in.  This is a definition that puts it on a level playing field with the competing magnetic confinement approach to fusion.
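
    For readers who like to see a definition pinned down, here is a minimal sketch, in Python, of that ignition criterion.  The numbers in the example at the bottom are illustrative assumptions, not experimental data (NIF’s laser delivers on the order of 1.8 megajoules to the target):

    # A minimal sketch (not LLNL's code) of the ignition criterion described above:
    # target gain = fusion energy out / total laser energy in; ignition means gain >= 1.

    def target_gain(fusion_energy_mj: float, laser_energy_mj: float) -> float:
        """Return the target gain, with both energies in megajoules."""
        return fusion_energy_mj / laser_energy_mj

    def ignited(fusion_energy_mj: float, laser_energy_mj: float) -> bool:
        """Ignition, by the definition accepted by LLNL, requires gain >= 1."""
        return target_gain(fusion_energy_mj, laser_energy_mj) >= 1.0

    # Illustrative numbers only: a ~1.8 MJ laser shot yielding ~0.025 MJ of fusion
    # energy falls far short of ignition.
    print(ignited(fusion_energy_mj=0.025, laser_energy_mj=1.8))  # False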

    According to the blurb that appears on the home page of LLNL – The True Story, its purpose is “for LLNL present and past employees, friends of LLNL and anyone impacted by the privatization of the Lab to express their opinions and expose the waste, wrongdoing and any kind of injustice against employees and taxpayers by LLNS/DOE/NNSA.”  The post in question is entitled ICF Program is now Officially Owned by WCI (Weapons and Concepts Integration).  It’s certainly harmless enough as it stands, consisting only of the line,

    ICF program is now officially owned by WCI.  A step forward or an attempt to bury it out of sight?

    This is followed by an apparently broken link to the story referred to.  The gist can probably be found here.  Presumably the author suspects LLNL might want to “bury it out of sight” because the first attempt to achieve ignition, known as the National Ignition Campaign, or NIC, failed to achieve its goal.  What’s really of interest is not the post itself, but the comments following it.  The commenters are all listed as “anonymous,” but given the nature of the blog we can probably assume that most of them are scientists of one tribe or another.  Let’s take a look at what they have to say.  According to the first “anonymous,”

    If (takeover of NIF by WCI) is an attempt to keep funding flowing by switching milestones from energy independence to weapons research.  “Contingency Plan B”.

    Another “anonymous” writes in a similar vein:

    Reading between the lines it is clear that the new energy source mission of the NIF is over and now it’s time to justify the unjustifiable costs by claiming it’s a great too for weapons research.

    Perhaps the second commenter would have done better to read the lines as they stand rather than between them.  In that case he would have noticed that energy independence was never an official NIF milestone, not to mention its “mission.”  NIF was funded for the purpose of weapons research from the start.  This fact was never in any way a deep, dark secret, and has long been obvious to anyone willing to take the trouble to consult the relevant publicly accessible documents.  The Inertial Confinement Fusion Advisory Committee, a Federal Advisory Committee that met intermittently in the early to mid-90’s, and whose members included a bevy of heavyweights in plasma physics and related specialties, was certainly aware of the fact, and recommended funding of the facility based on that awareness, with the single dissenting vote of Tim Coffey, then Director of the Naval Research Laboratory.

    Be that as it may, the claim that the technology could also end our dependence on fossil fuels, often made by the NIF’s defenders, is credible.  By “credible” I mean that many highly capable scientists have long held and continue to hold that opinion.  As it happens, I don’t.  Assuming we find a way to achieve ignition and high gain in the laboratory, it will certainly become scientifically feasible to generate energy with ICF power plants.  However, IMHO it will never be economically feasible, for reasons I have outlined in earlier posts.  Regardless, from a public relations standpoint, it was obviously preferable to evoke the potential of the NIF as a clean source of energy rather than as a weapons project designed to maintain the safety and reliability of our nuclear arsenal, as essential as that capability may actually be.  In spite of my own personal opinion on the subject, these claims were neither disingenuous nor mere “hype.”

    Another “anonymous” writes,

    What’s this user facility bullshit about?  Only Livermore uses the facility.  Cost recovery demands that a university would have to pay $1 million for a shot.  How can it be a user facility if it’s run by the weapons program?  This isn’t exactly SLAC we’re talking about.

    Here, again, the commenter is simply wrong.  Livermore is not the only user of NIF, and it is, in fact, a user facility.  Users to date include a team from MIT headed by Prof. Richard Petrasso.  I’m not sure how the users are currently funded, but in the past funds for experiments on similar facilities were allocated through a proposal process, similar to that used to fund other government-funded academic research.  The commenter continues,

    By the way, let’s assume NIF wants to be a “user facility” for stockpile stewardship.  Since ignition is impossible, the EOS (Equation of State, relevant to the physics of nuclear weapons, ed.) work is garbage, and the temperatures are not relevant to anything that goes bang, what use is this machine?

    NIF does not “want to be a user facility for stockpile stewardship.”  The stress has always been on high energy density physics (HEDP), which has many other potential applications besides stockpile stewardship.  I was not surprised that NIF did not achieve ignition immediately.  In fact I predicted as much in a post on this blog two years before the facility became operational.  However, many highly competent scientists disagreed with me, and for credible scientific reasons.  The idea that ignition is “impossible” just because it wasn’t achieved in the first ignition campaign using the indirect drive approach is nonsense.  Several other credible approaches have not yet even been tried, including polar direct drive, fast ignitor, and hitting the targets with green (frequency doubled) rather than ultraviolet (frequency tripled) light.  The latter approach would enable a substantial increase in the available laser energy on target.  The EOS work is not garbage, as any competent weapons designer will confirm, as long as they are not determined to force the resumption of nuclear testing by hook or by crook.  Some of the best scientists at Livermore confirmed long ago that the temperatures achievable on the NIF are indeed relevant to things that go bang, whether it achieves ignition or not.  In fact, the facility allows us to access physical conditions that can be approached in the laboratory nowhere else on earth, giving us a significant leg up over the international competition in maintaining a safe and reliable arsenal, as long as testing is not resumed.
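
    To make the frequency argument concrete, here is a minimal sketch of the tradeoff.  The conversion efficiencies below are assumptions chosen purely for illustration, not measured NIF values; the point is only that a smaller loss in converting the infrared beams to green light leaves more energy on target:

    # Frequency conversion tradeoff, with assumed (illustrative) efficiencies.
    # NIF's infrared (1-omega) beams are converted to shorter wavelengths before
    # reaching the target; a less efficient conversion means less energy on target.

    INFRARED_ENERGY_MJ = 4.0   # assumed 1-omega energy available from the laser
    EFF_GREEN_2W = 0.80        # assumed conversion efficiency to green (2-omega)
    EFF_UV_3W = 0.50           # assumed conversion efficiency to ultraviolet (3-omega)

    energy_green = INFRARED_ENERGY_MJ * EFF_GREEN_2W   # 3.2 MJ on target
    energy_uv = INFRARED_ENERGY_MJ * EFF_UV_3W         # 2.0 MJ on target

    print(f"green (2-omega): {energy_green:.1f} MJ, ultraviolet (3-omega): {energy_uv:.1f} MJ")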

    Anonymous number 4 chimes in,

    I love this quote (apparently from the linked article, ed.):

    “the demonstration of laboratory ignition and its use to support the Stockpile Stewardship Program (SSP) is a major goal for this program”

    Hey guys, this has already failed.  Why are we still spending money on this?  A lot of other laboratories could use the $$.  You’re done.

    The quote this “anonymous” loves is a simple statement of fact.  For the reasons already cited, the idea that ignition on the NIF is hopeless is nonsense.  The (very good) reason we’re still spending money on the project is that NIF is, and will remain for the foreseeable future, one of the most capable and effective above ground experimental (AGEX) facilities in the world.  It can access physical conditions relevant to nuclear weapons regardless of whether it achieves ignition.  For that reason it is an invaluable tool for maintaining our arsenal, unless one’s agenda happens to be the resumption of nuclear testing.  Hint:  The idea that no one in DOE, NNSA, or the national weapons laboratories wants to resume testing belongs in the realm of fantasy.  Consider, for example, what the next “anonymous” is actually suggesting:

    Attempting to get funding for NIF and computations’s big machines was made easier by claiming dual purposes but I always felt that the real down and dirty main purpose was weapons research.  If you want to get support from the anti-weapon Feinstein/Boxer/Pelosi contingent you need to put the “energy” lipstick on the pig.  Or we could go back to testing.  Our cessation of testing doesn’t seem to have deterred North Korea and Iran that much.

    Yes, Virginia, even scientists occasionally do have agendas of their own.  What can I say?  To begin, I suppose, that one should never be intimidated by the pontifications of scientists.  The specimens on display here clearly don’t have a clue what they’re talking about.  Any non-technical observer of middling intelligence could become more knowledgeable than they are on the topics they’re discussing by devoting a few hours to researching them on the web.  As to how the non-technical observer is to acquire enough knowledge to actually know that he knows more than the scientific specialists, I can offer no advice, other than to head to your local university and acquire a Ph.D.  I am, BTW, neither employed by nor connected in any other way with LLNL.

     

  • The Objective Morality Delusion

    Posted on September 22nd, 2014 Helian 2 comments

    Atheists often scorn those who believe in the God Delusion.  The faithful, in turn, scorn those atheists who believe in the Objective Morality Delusion.  The scorn is understandable in both cases, but I give the nod to the faithful on this one.  Philosophers and theologians have come up with many refined and subtle arguments in favor of the existence of imaginary super beings.  The arguments in favor of imaginary objective moralities are threadbare by comparison.  I can hardly blame the true believers for laughing at the obvious imposture.  They don’t require such a crutch to maintain the illusion of superior virtue.  As a result, they see through the charade immediately.

    Let me put my own cards on the table.  I consider morality to be the expression of a subset of the innate human behavioral traits that exist as a result of evolution by natural selection.  It follows that I do not believe that the comments of Darwin, who specifically addressed the subject, can be simply ignored.  Neither do I believe that all the books and papers on the evolved wellsprings of morality that have been rolling off the presses lately can be simply ignored.  I agree with Hume, who pointed out that reason is a slave of the passions, and with Haidt, who wrote about the emotional dog and its rational tail, and take a dubious view of those who think the points made by either author can be simply ignored.  In short, I consider morality a purely subjective phenomenon.  There are, of course, many implications of this conclusion that are uncomfortable to the pious faithful and pious atheists alike.  However, if what I say is true, their discomfort will not make it untrue.

    I’ve discussed the arguments of Sam Harris and several other “objective moralists” in earlier posts.  As it happens, Daniel Fincke, another member of the club, who writes the Camels with Hammers blog at Patheos.com, has just chimed in.  Perhaps his comments on the subject will provide some insight into whether the supercilious smiles of the godly are out of place or not.

    Fincke has a Ph.D. in philosophy from Fordham, and teaches interactive philosophy classes online.  His comments appeared in the context of a pair of responses to Jerry Coyne, who differed with him on the subject at the latest Pennsylvania State Atheists Humanists Conference.  According to Fincke,

    When we talk about an endeavor being objective in the main or subjective in the main we’re talking about whether there can be objective principles that can often, at least theoretically, lead to determinations independent of our preferences.

    Of course, this statement that objective principles are those principles that are objective is somewhat lacking as a rigorous definition, but it’s on the right track.  Objective phenomena exist independently of the experiences or impressions in the minds of individuals.  Like Harris, Fincke associates morality with “human flourishing”:

    As to the nature of human flourishing, my basic view can be briefly boiled down to this. What we are as individuals is defined by the functional powers that constitute our being. In other words, we do not just “have” the powers of reasoning, emotional life, technological/artistic capacities, sociability, sexuality, our various bodily capabilities, etc., but we exist through such powers. We cannot exist without them. They constitute us ourselves. When they suffer, we suffer. Some humans might be drastically deficient in any number of them and there’s nothing they can do about that but make the best of it. But in general our inherent good is the objectively determinable good functioning of these basic powers (and all the subset powers that compose them and all the combined powers that integrate powers from across these roughly distinguishable kinds).

    One can almost guess where this is heading without reading the rest.  Like so many other “objective moralists,” Fincke will conflate that which is morally good with that which is “good” in the sense that it serves some useful purpose.  This gets us nowhere, because it merely begs the question of why the purpose served is itself morally good.  In what follows, our suspicions are amply confirmed.  For example, Fincke continues,

    Morality comes in at the stage of where any people who live lives impacting each other develop implicit or explicit rules and practices and judgments, etc. geared at cooperative living. Each of us has an interest in morality because we are social beings in vital ways.

    First, we socially depend for our basic flourishing on a society that is minimally orderly, where people are trustworthy, where we’re not swamped with chaotic violence, etc.

    Second, the more others around us are empowered to develop their functioning in their excellent powers is the more that they provide the means of us doing the same. So a society with greater functioning, powerful people is a society where we’ll be enriched by the things they create—be they technological or social—that help us thrive in our abilities.

    and so on.  In other words, moral rules are “objectively good” only in the sense that one can demonstrate their objective usefulness in advancing some other, higher “good.”  According to Fincke, this “higher good” is a “thriving, flourishing power” in each individual which is “beyond his body and beyond his awareness.”  Fine, but in that case the burden is still on him to demonstrate the objective nature of this “higher good.”  Unfortunately, he shrugs off the burden.  According to Fincke, the “higher good” is “objectively good” just because he says so.  For example,

    So, moral rules and practices and behaviors are a practical project. What objectively constitutes good instances of these are what lead to our objective good of maximally empowered functioning according to the abilities we have and what leads us to coordinate best with others for mutual empowerment on the long term.

    …with no explanation of why the “objective good” referred to is objectively good.  In a similar vein,

    The good of our powers thriving is inherently good for us because we are our powers. And the inherent good of a power thriving is objectively determinable in the sense that it has a characteristic function that makes it the power that it is.

    Again, Fincke doesn’t tell us why this “inherent good” is good in any objective sense, and why we should associate it with moral good at all.  Apparently we must simply take his word for it that he’s not just expressing a personal whim, but has some mysterious way of knowing that his “good” is both “objective” and “moral.”  Normally, when one claims objective existence for something, it must somehow manifest itself outside of the subjective minds of individuals.  If one is to believe in such an entity, one requires evidence of its independent existence.  That’s the main argument atheists have against the existence of God.  There’s no evidence for it.  How, then, is it reasonable for those same atheists to claim the objective existence of moral “good” with a similar lack of evidence?  The faithful can at least point to faith, and tell us that they believe because of the grace of God.  Atheists don’t have that luxury.  One of Fincke’s favorite arguments is as follows:

    Within this framework we can reason rationally. Does it mean we will always come to conclusive answers? No, of course not. Reasoning involves dealing with the real world and it’s empirical variables. Science can only go so far too, because we’re stuck with contingencies. You need information, sometimes impossible to precisely ascertain information about the future or the expected consequences of one path or another.

    That’s quite true, but science has something to back it up that Fincke can’t claim for his objective morality:  data in the form of experimentally repeatable evidence.  We can be confident in the objective existence of electrons and photons, and in the fact that they don’t depend on our subjective whims for that existence, because we can observe and measure their physical characteristics.  To the best of my knowledge, neither Fincke nor Harris nor any of the rest have ever captured an objective “good” in their butterfly nets and produced any data regarding its physical or other qualities and characteristics.  If something is supposed to have an objective existence outside of our subjective minds, but we have not the faintest shred of evidence about it, we have only one alternative if we are to believe in it:  blind faith.

    For Fincke, morality is infinitely malleable.  We can make it up as we go along to serve the “ultimate good” as our cultural and social circumstances change:

    Morality is a technological endeavor too. It’s one of determining what should be done for us all to live as well as we can collectively and individually. We should, as naturalists who have learned the lessons of empirical thinking in the hard sciences, determine our moral codes and practices according to what serves our purposes best.

    Unfortunately, this flies in the face of everything we have been learning recently about the innate wellsprings of morality.  It requires that we simply ignore it.  The claim that human flourishing is the ultimate good, and that morality is an objective something that exists to serve this end, excludes any evolutionary contribution to morality whatsoever.  Some claim that selection may occur as high as the level of groups, but no process or mathematical model has yet been heard of that predicts that it can occur at the level of the human species as a whole.

    If Fincke is right, then there can be no analogs of morality in animals, as claimed not only by Darwin, but by many others after him, and as suggested in Wild Justice by Marc Bekoff and Jessica Pierce and in several other recent books on the subject.  Objective moral rules as he describes them would only be discoverable by highly intelligent creatures through the exercise of high-powered reasoning that is beyond the capacity of animals or, for that matter, even humans other than Fincke and a few other enlightened philosophers, whom we must apparently depend on forevermore to explain things to us.  No doubt the popes would all have loved this line of reasoning.  These purported rules exist to support an end that can never be the direct result of natural selection, as it only applies at a level where selection does not occur.

    Again, if Fincke is right, then the emotions we associate with morality become absurd.  After all, what room is there for emotion in deriving perfectly rational “moral rules” from some “objective” ultimate good?  Why, indeed, do such reactions as virtuous indignation and moral outrage exist?  They are, after all, emotional rather than reasonable, and they can be observed across all cultures.  If true moral good is only discoverable by gurus like Fincke, and often contradicts our natural appetites and proclivities, where do these emotions come from?  Are they, as we were informed by the Blank Slaters of old, merely learned, along with such things as the pleasure we feel from eating when hungry, and the orgasms we experience during sex?  If not, how can we possibly explain their existence?  Here’s another excerpt from Fincke’s posts that raises some doubts about his “objective morality.”

    People seem to recognize this readily with respect to every art–that doing it in the way that evinces excellent ability and has the result effect of empowering others is obviously desirable over the way that doesn’t–except when it comes to something like ruling or acquiring wealth. In those cases, people start talking like they think mere domination and accumulation is sufficiently desirable. But there’s no reason to think that’s correct. The ruler is a failure if they cannot create a powerful citizenry. What is the intrinsic goodness of merely getting your way compared to the actual creative power, the actual excellent ability, to create greater flourishing through your efforts. The great ruler, by the ruler’s own internal standards of success, should obviously be to rule for generations even beyond death. To do that means to be so shrewd in one’s decisions that what one builds outlives you and thrives beyond your mortal coil. It means to be a contributor to the thriving of your citizens while you’re alive so you can take credit for your role in their thriving (and for as many subsequent generations as possible).

    and,

    Just because some tyrants realize that’s impossible because they’re incompetent to create that and keep power and so instead choose to rule a graveyard through terror doesn’t mean those tyrants are being rational. They’re functioning badly. They’re epically failing to do the actually powerful task of ruling.

    Genghis Khan might beg to differ.  In spite of recent attempts to rehabilitate him, it’s not an exaggeration to say he ruled a graveyard through terror throughout much of Asia, and was, therefore, an epic failure according to Fincke.  However, he left millions of descendants throughout the continent.  He would certainly have regarded this outcome as “good” and “powerful.”  It’s a human legacy that will certainly last much longer than the constitution of any state, or the opinion harbored by certain intellectuals in the 21st century concerning “human flourishing.”  Indeed, it’s a legacy that has the potential to last for billions of years, as demonstrated by the reality of our own existence as descendants of creatures who lived that long ago in the past.  How can we detect or identify an objective rule according to which the great Khan’s good is not really good, but evil?  Obviously, what we are looking for here is something more compelling than Fincke’s opinion on the matter.  According to Fincke,

    …we set up moral systems to regulate and make it so people are able to resist the temptation to think in short term, microlevel, temporarily selfish ways about what is good for them.

    Again, if moral systems are just something we “set up” at will to serve Fincke’s “inherent and ultimate good,” then Hume must be wrong.  Reason can’t be the slave of the passions.  Rather, the passions must be suppressed to serve reason.  Morality cannot possibly be associated with evolution in any way, because it would be impossible to “set up” the innate predispositions that would presumably be the result.  As it happens, our species already has extensive experience with “setting up” just such a moral system as Fincke describes, based on “science” and devoted to the ultimate goal of “human flourishing.”  It was called Communism.  It didn’t work.  As E. O. Wilson famously put it, “Great theory, wrong species.”  Am I being paranoid if I would prefer, on behalf of myself and my species, to avoid trying it twice?

    In the end, Fincke’s arguments really boil down to a statement of subjective morality in a nutshell:  “Human flourishing as defined by me and right-thinking individuals like me is the ultimate good, because I say so.”

     

  • Herbert Spencer, Animal Justice, and the Irrationality of Morality

    Posted on July 27th, 2014 Helian No comments

    As Hume pointed out long ago, moral emotions are not derived by reason. They exist a priori. They belong, not at the end, but at the beginning of reason; they are not derived by it, but are reasoned about. Given the variations in the innate wellsprings of morality among individuals, huge variations in culture and experience, and the imperfections of human reason, the result has been the vast kaleidoscope of human moralities we see today, with all their remarkable similarities and differences.

    Most of us understand the concept Good, and most of us also understand the concept Evil. Good and Evil are subjective entities in the minds of individuals, not fundamentally different from any of our other appetites and whims. However, unlike other whims, such as hunger or sexual desire, it is our nature to perceive them as things, existing independently of our subjective minds. We don’t imagine that, if we are hungry, everyone else in the world must be hungry, too. However, we do imagine that if we perceive something as Good, it must be Good for everyone else as well. That’s where reason comes in. We use it in myriad variations to prop up the delusion that our Good possesses independent legitimacy, and therefore applies to everyone. Familiar variations are the God prop, the “Brave New World of the Future” prop, and the “human flourishing” prop. We commonly find even the most brilliant intellectuals among us attempting to hop over the is/ought divide in this way, differing from the rest of us only in the sophistication of their mirages.

    Consider, for example, the case of Herbert Spencer. According to his Wiki entry, he was “the single most famous European intellectual in the closing decades of the nineteenth century”. He “developed an all-embracing conception of evolution as the progressive development of the physical world, biological organisms, the human mind, and human culture and societies. He was ‘an enthusiastic exponent of evolution’ and even ‘wrote about evolution before Darwin did.’” Unfortunately, there was a problem with his version of the theory. He could never come up with a coherent explanation of what made evolution work. His attempts were usually based on Lamarckian notions of use-inheritance, but he was no more successful than Lamarck in coming up with an actual mechanism for use-inheritance – something that would actually drive the process. When Darwin came up with the actual mechanism, natural selection, Spencer grasped the concept immediately. It certainly influenced his later work, but could not destroy his faith in evolution as a “theory of everything.” For him, evolution was the mystical wellspring of “progress” in all things. Morality and ethics were no exception.

    It’s a testimony to the power of the delusion that the truth was actually staring Spencer in the face. Consider, for example, his comments on what he referred to as “animal ethics.” Like Darwin, Spencer was well aware of the analogs to human moral behavior in animals. He wrote about them in the first two chapters of his Justice, published in 1891, long before such ideas were dropped down the memory hole by the Blank Slaters, and more than a century before they were finally disinterred by the animal behaviorists of our own day. Pick out a paragraph here and a phrase there, and Spencer comes across as a perfectly orthodox Darwinian. For example,

     Speaking generally, we may say that gregariousness and cooperation more or less active, establish themselves in a species only because they are profitable to it since otherwise survival of the fittest must prevent establishment of them.

    For the association to be profitable the acts must be restrained to such extent as to leave a balance of advantage. Survival of the fittest will else exterminate that variety of the species in which association begins.

    Thus then it is clear that acts which are conducive to preservation of offspring or of the individual we consider as good relatively to the species and conversely.

    In the third chapter of his book, Spencer makes the obvious connection between sub-human and human morality, pointing out that they form a “continuous whole.”

    The contents of the last chapter foreshadow the contents of this. As from the evolution point of view human life must be regarded as a further development of sub-human life it follows that from this same point of view human justice must be a further development of sub-human justice. For convenience the two are here separately treated but they are essentially of the same nature and form parts of a continuous whole.

    In a word, Spencer seems to realize that morality is an artifact of evolution by natural selection, that it exists because it enhanced the probability that individuals and their offspring would survive, and that its innate origins manifest themselves in sub-human species as well as human beings. In other words, he seems to have identified just those aspects of morality that establish its subjective nature and the absurdity of the notion that it can somehow transcend the mind of one individual and acquire independent legitimacy or normative power over other individuals. The truth seems to be staring him in the face, and yet, in the end, he evades it. His illusion that his version of human progress, formulated long before Darwin, really is the Good-in-itself, blinds him to the implications of what he has just written. Before long we find him hopelessly enmeshed in the naturalistic fallacy, busily converting “is” into “ought.” First, we find passages like the following that not only have a suspicious affinity with Spencer’s libertarian ideology, but reveal his continued, post-Darwin faith in Lamarckism:

    The necessity for observance of the condition that each member of the group, while carrying on self-sustentation and sustentation of offspring, shall not seriously impede the like pursuits of others makes itself so felt where association is established as to mould the species to it. The mischiefs from time to time experienced when the limits are transgressed continually discipline all in such ways as to produce regard for the limits so that such regard becomes in course of time a natural trait of the species.

    A little later, the crossing of the is/ought Rubicon is made quite explicit:

    To those who take a pessimist view of animal life in general contemplation of these principles can of course yield only dissatisfaction. But to those who take an optimist view or a meliorist view of life in general, and who accept the postulate of hedonism, contemplation of these principles must yield greater or less satisfaction and fulfilment of them must be ethically approved. Otherwise considered these principles are according to the current belief expressions of the Divine will or else according to the agnostic belief indicate the mode in which works the Unknowable Power throughout the universe, and in either case they have the warrant hence derived.

    It’s not that Spencer was a stupid man. In fact, he was brilliant. Among other things, he analyzed the flaws in socialist theory and predicted the outcome of the Communist experiment with amazing prescience long before it was actually tried. Rather, Spencer didn’t see the truth that was staring him in the face because he was human. Like all humans, he suffered from the delusion that his version of the Good must surely be the “real” Good, and rationalized that conclusion. It continues to be similarly rationalized in our own day by our own public intellectuals, in spite of a century and more of great advances in evolutionary theory, neuroscience, and understanding of the innate wellsprings of both human and non-human behavior.

    I suppose there’s some solace in the fact that, as Jonathan Haidt put it, the emotional dog continues to wag its rational tail, and not vice versa. It certainly lays to rest fears that some fragile thread of religion or philosophy is all that suspends us over the abyss of moral relativism. We will not become moral relativists because it is not our nature to be moral relativists, even if legions of philosophers declare that we are being unreasonable. On the other hand, there are always drawbacks to not recognizing the truth. We experienced two of those drawbacks in the 20th century in the form of the highly moralistic Nazi and Communist ideologies. Perhaps it would be well for us to recognize the obvious before the next messiah turns up on the scene.

  • Nature vs. Nurture at the Movies: Hollywood Turns on the Blank Slate

    Posted on May 26th, 2014 Helian 3 comments

    If Hollywood is any guide, we can put a fork in the Blank Slate.  I refer, of course, to the delusional orthodoxy that was enforced by the “Men of Science” in the behavioral sciences for more than half a century, according to which there is no such thing as human nature.  Consider, for example, the movie Divergent.  It belongs to the dystopian genre beloved of American audiences, and is set in post-apocalyptic Chicago.  A semblance of order has been restored by arranging the surviving population into five factions based on what the evolutionary psychologists might call their innate predispositions.  They include Candor, whose supreme values are honesty and trustworthiness, and from whose ranks come the legal scholars and lawyers.  The brave and daring are assigned to the Dauntless faction, and become the defenders of the little city-state.  At the opposite extreme is Amity, the home of those who value kindness, forgiveness and trust, and whose summum bonum is peace.  Their admiration for self-reliance suits them best for the agricultural chores.  Next comes Abnegation, composed of the natural do-gooders of society.  So selfless that they can only bear to look in a mirror for a few seconds, they are deemed so incorruptible that they are entrusted with the leadership and government of the city.  Finally, the intelligent and curious are assigned to the Erudite faction.  They fill such roles as doctors, scientists, and record-keepers.  They are also responsible for technological advances, which include special “serums,” some of which are identified with particular factions.  One of these is a “simulation serum,” used to induce imaginary scenarios that test a subject’s aptitude for the various factions.

    As it happens, the simulation serum doesn’t always work.  When the heroine, Tris, takes the test, she discovers that she can “finesse” the simulation.  She is a rare instance of an individual whose nature does not uniquely qualify her for any faction, but who is adaptable enough to fit adequately into several of them.  In other words, she is a “Divergent,” and as such, a free thinker and a dire threat to anyone who might just happen to have plans to misuse the serums to gain absolute control over the city.

    Alas, there’s trouble in paradise.  The “factions” are groups, and where there are groups, there are ingroups and outgroups.  Sure enough, each “in-faction” has its own “out-faction.”  This aspect of the plot is introduced matter-of-factly, as if it were the most natural thing in the world.  And, of course, since it can be assumed that the audience will consist largely of the species Homo sapiens, it is.  Most of us, with the exception of a few aging behavioral scientists, are familiar with the fact that it is our nature to apply different versions of morality depending on whether we are dealing with one of “us” or one of the “others.”  It turns out that Abnegation is the outgroup of Erudite, who consider them selfish poseurs, and weak and cowardly to boot.  That being the case, it follows that Abnegation is completely unsuited to run the government of Chicago or any other post-apocalyptic city state.  That role should belong to Erudite.

    Which brings us, of course, to the “bad guy.”  You’ll never guess who the bad guy is, so I’ll just spill the beans.  It’s none other than Kate Winslet!  She plays the cold and nefarious Erudite leader Jeanine Matthews.  These smarties are planning to overthrow Abnegation and seize control for themselves with the aid of the martial Dauntless, whose members have been conveniently mind-controlled with the aid of one of Erudite’s serums.  Eventually, Jeanine unmasks Tris and her amorous partner, Four, as Divergents.  And with them in her power, she treats them to a remarkable soliloquy, which nearly caused me to choke on my butter-slathered popcorn.  Once Erudite is in the saddle, she explains, they will eliminate human nature.  Using a combination of re-education a la Joseph Stalin and mind control drugs, all citizens will become latter day versions of Homo sovieticus, perfectly adapted to fit into the Brave New World planned by the Erudites.  The utopia envisioned by generations of Blank Slaters will be realized at last!

    There’s no need for me to reveal any more of the plot.  It’s a very entertaining movie so, by all means, see it yourself.  Suffice it to say that, if Hollywood now associates the denial of human nature with evil bad guys, then the Blank Slate must be stone cold dead.  Or at least it is with the exception of a few ancient Blank Slater bats still hanging in the more dark and obscure belfries of academia.

    For the benefit of the history buffs among my readers, I note in passing that Hollywood never quite succumbed to Gleichschaltung.  They were always just a bit out of step, even in the heyday of the Blank Slate orthodoxy.  Consider, for example, Sam Peckinpah's 1971 movie Straw Dogs.  It was directly inspired by the work of none other than that greatest of bêtes noires of the Blank Slaters, Robert Ardrey.  The first to taste of the forbidden fruit was Strother Martin, best known for his portrayal of the sadistic "Captain" in Cool Hand Luke ("What we have here is a failure to communicate").  He, in turn, passed on Ardrey's African Genesis to Peckinpah, with the remark that the two seemed to share similar attitudes about violence in human beings.  Peckinpah was fascinated, and later said,

    Robert Ardrey is a writer I admire tremendously.  I read him after Wild Bunch and have reread his books since because Ardrey really knows where it’s at, Baby.  Man is violent by nature, and we have to learn to live with it and control it if we are to survive.

    That statement, rough around the edges though it is, actually shows more insight into the thought of Ardrey than that revealed by about 99.9% of the learned book reviewers and “Men of Science” who have deigned to comment on his work in the ensuing 45 years.  Specifically, Peckinpah understood that Ardrey was no “genetic determinist,” and that he believed that aggressive human predispositions could be controlled by environment, or “culture.”  As it happens, that is a theme he elaborated on repeatedly in every one of his books.  The theme of Straw Dogs was taken directly out of The Territorial Imperative.  According to Ardrey,

    There is a law of territorial behavior as true of the single roebuck defending his private estate as it is of a band of howling monkeys defending its domain held in common.  Huxley long ago observed that any territory is like a rubber disc:  the tighter it is compressed, the more powerful will be the pressure outward to spring it back into shape.  A proprietor’s confidence is at its peak in the heartland, as is an intruder’s at its lowest.  Here the proprietor will fight hardest, chase fastest.

    In Straw Dogs, Peckinpah’s diminutive hero, timid mathematician David Sumner, played by Dustin Hoffman, travels from the sheltered campus of an American university to be with his young wife, Amy, in her native village in England.  To make a long story short, she is raped by three of the locals.  Eventually, these muscular miscreants are joined by other townspeople in besieging Sumner in his territory, his house, in the mistaken belief that he is knowingly harboring a murderer.  Ardrey’s territorial boost takes over with a vengeance, and Sumner draws on unimagined reserves of strength, courage, and resourcefulness to annihilate the attackers one by one.  As badly behind the PC curve as any Disney film, Hollywood eventually repented and in 2000 churned out an alternative version of Straw Dogs, in which all the violent behavior was “learned.”  By then, however, getting in step meant getting out of step.  Even the Public Broadcasting Network had given the Blank Slate the heave ho years earlier.

    Straw Dogs was hardly the first time Hollywood took up the subject of nature versus nurture.  For those whose tastes run more to the intellectual and profound, I have attached a short film below dealing with that theme that predates Peckinpah by almost a quarter of a century.

  • Privilege and Morality

    Posted on May 25th, 2014 Helian 2 comments

    A Princeton freshman named Tal Fortgang recently made quite a stir with an essay on privilege.  Entitled Checking My Privilege: Character as the Basis of Privilege, it described his encounters with racism and sexism rationalized by the assumption that one is privileged simply by virtue of being white and male.  In his words,

    There is a phrase that floats around college campuses, Princeton being no exception, that threatens to strike down opinions without regard for their merits, but rather solely on the basis of the person that voiced them. “Check your privilege,” the saying goes, and I have been reprimanded by it several times this year…  “Check your privilege,” they tell me in a command that teeters between an imposition to actually explore how I got where I am, and a reminder that I ought to feel personally apologetic because white males seem to pull most of the strings in the world.

    As it happens, Fortgang is Jewish, and his ancestors were victims, not only of the Nazis, but of Stalin and several of the other horrific if lesser known manifestations of anti-Semitism in 20th century Europe.  His grandfather and grandmother managed to survive the concentration camps of Stalin and Hitler, respectively, and emigrate to the U.S.  Again quoting Fortgang,

    Perhaps my privilege is that those two resilient individuals came to America with no money and no English, obtained citizenship, learned the language and met each other; that my grandfather started a humble wicker basket business with nothing but long hours, an idea, and an iron will—to paraphrase the man I never met: “I escaped Hitler. Some business troubles are going to ruin me?” Maybe my privilege is that they worked hard enough to raise four children, and to send them to Jewish day school and eventually City College.

    In a word, there are some rather obvious objections to the practice of applying crude metrics of "privilege" based on race and gender to Fortgang or anyone else, for that matter.  When pressed on these difficulties, those who favor the "check your privilege" meme typically throw out a smokescreen in the form of a complex calculus for determining "genuine privilege."  For example, in a piece at The Wire entitled What the Origin of 'Check Your Privilege' Tells Us About Today's Privilege Debates, author Arit John notes that it was,

    Peggy McIntosh, a former women's studies scholar whose 1988 paper on white privilege and male privilege took "privilege" mainstream.

    and that MacIntosh’s take was actually quite nuanced:

    What McIntosh classifies as a privilege goes deeper and more specific than most online commentators. There's older or younger sibling privilege, body type privilege, as well as privileges based on "your athletic abilities, or your relationship to written and spoken words, or your parents' places of origin, or your parents' relationship to education and to English, or what is projected onto your religious or ethnic background," she says. Men, even straight, white, cis gender men, are disadvantaged by the pressure to be tougher than they might be.

    The key is acknowledging everyone’s advantages and disadvantages, which is why Fortgang is both very wrong and (kind of) right: those telling him to check his privilege have privileges too, and are likely competing in the privilege Olympics. At the same time, it wouldn’t hurt him to check his privilege.

    Which brings us to the point of this post.  Our species isn't good at nuance.  The "privilege" debate will and must take place in a morally charged context.  It is not possible to sanitize the discussion by scrubbing it free of moral emotions.  That is one of the many reasons why it is so important to understand what morality is and why it exists.  It does not exist as a transcendental entity that happened to pop into existence with the big bang, nor does it exist because the Big Man upstairs wants it that way.  It exists because it evolved.  It evolved because at a certain time, in an environment unlike the one we live in today, individuals with the innate behavioral traits that give rise to what we generalize as "morality" happened to be more likely to survive and procreate.  That is the only reason for its existence.  Furthermore, human moral behavior is dual.  It is our nature to view others in terms of ingroups and outgroups.  That dual nature is not optional.  It is all-pervasive, and artifacts of its existence can easily be found by glancing at any of the myriads of Internet comment threads relevant to privilege or any other controversial topic.
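
    Since the logic here is purely mechanical, a toy simulation can make it concrete.  The sketch below is my own illustration, not anything drawn from the studies or authors discussed in this post, and its numbers (the population size, the assumed five percent reproductive edge) are arbitrary assumptions.  It shows a heritable behavioral trait spreading through a population for no other "reason" than that its bearers are more likely to survive and reproduce:

        import random

        # Illustrative toy model (my own sketch, with arbitrary numbers).
        # A heritable "moral" trait is assumed to confer a small edge in
        # the odds of surviving and reproducing; nothing else is modeled.
        POP_SIZE = 1000
        GENERATIONS = 200
        FITNESS = {"moral": 1.05, "amoral": 1.00}  # assumed 5% advantage

        def next_generation(population):
            """Sample the next generation, weighting parents by fitness."""
            weights = [FITNESS[trait] for trait in population]
            return random.choices(population, weights=weights, k=POP_SIZE)

        population = ["moral"] * 100 + ["amoral"] * 900  # trait starts rare
        for _ in range(GENERATIONS):
            population = next_generation(population)

        print("Final frequency of the 'moral' trait:",
              population.count("moral") / POP_SIZE)

    Set the assumed advantage to 1.00 and the trait merely drifts; set it below 1.00 and it disappears.  At no point does the process consult a purpose, which is precisely the point.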

    The above insights have certain implications concerning the matter of privilege.  It is certainly not out of the question that, in general, it is to an individual's advantage to be male and white.  However, as pointed out by Ms. McIntosh, there are countless other ways in which one individual may be privileged over another in modern society.  As a result, it is hardly out of the question for a person of color or a female to be more privileged than a white or a male.  Given the nature of human morality, however, that's almost never how the question of privilege is actually perceived.  As pointed out by Jonathan Haidt in his The Righteous Mind, we are a highly self-righteous species.  It is our nature to rationalize why we are "good" and those who oppose us are "bad," and not vice versa.  Furthermore, we tend to lump the "good" and the "bad" together into ingroups and outgroups.  That, in turn, is the genesis of sexism, racism, and all the other manifestations of "othering."

    It would seem, then, that we are faced with a dilemma.  Privilege exists.  It is probable that there are privileges associated with being white, and with being male, and certainly, as Thomas Piketty just pointed out for the umpteenth time in his "Capital in the Twenty-First Century," with being wealthy.  However, insisting that the playing field be leveled can lead and often has led in the past to racism, sexism, and class hatred.  The examples of Nazism and Communism have recently provided us with experimental data on the effectiveness of racism and class hatred in eliminating privilege.  Fortunately, I know of no manifestations of sexism that have been quite that extreme.

    What “should” we do under the circumstances?  There is no objective answer to that question.  At best I can acquaint you with my personal whims.  In general, I am uncomfortable with what I refer to as “morality inversions.”  A “morality inversion” occurs when our moral emotions prompt us to do things that are a negation of the reasons for the existence of moral emotions themselves.  For example, they might be actions that reduce rather than enhance our chances of survival.  Giving up a privilege without compensation is an instance of such an action.  Furthermore, I object to the irrational assumption by the habitually sanctimonious and the pathologically pious among us that their moral emotions apply to me.  When the implication of those moral emotions is that I am evil because of my race or sex, then, like Tal Fortgang, my inclination is to fight back.

    On the other hand, I take a broad view of "compensation."  For example, "compensation" can take the form of being able to live in a society that is peaceful and harmonious because of the general perception that the playing field is level and the distribution of the necessities and luxuries of life is fair.  Nazism and Communism aren't the only ways of dealing with privilege.  I now enjoy many advantages my ancestors didn't share, acquired through processes that were a good deal less drastic, even though they required the sacrifice of privilege by, for example, hereditary nobilities.

    However, like Mr. Fortgang, I reject the notion that I owe anyone special favors or reparations based on my race.  In that case, the probability of “compensation” in any form would be essentially zero.  Other than whites, I know of no other race or ethnic group that has ever sacrificed its “privileges” in a similar fashion.  Millions of whites have been enslaved by Mongols, Turks, and Arabs, not to mention other whites, over periods lasting many centuries.  The last I heard, none of those whose ancestors inflicted slavery on my race has offered to sacrifice any of its “privileges” by way of compensation.  I would be embarrassed and ashamed to ask for such reparations.  I am satisfied with equality before the law.

    Beyond that, I don’t insist that the dismantling of certain privileges can never be to our collective advantage.  I merely suggest that, if dismantle we must, it be done in the light of a thorough understanding of the origins and nature of human morality, lest our moral emotions once again blow up in our faces.

  • Troublesome Nick and the Timid Echoes of the Blank Slate

    Posted on May 14th, 2014 Helian No comments

    You can still get in trouble for saying things that are true, or, for that matter, even obvious.  Consider, for example, Nicholas Wade’s A Troublesome Inheritance: Genes, Race and Human History.  I haven’t yet read the book, so have no comment on whether any of the specific hypotheses therein are scientifically credible or not.  However, according to the blurb at Amazon, the theme of the book is that there actually is such a thing as human biodiversity (hbd).  So much is, of course, not only true, but obvious.  The problem is that such truths have implications.  If there are significant genetic differences between human groups, then it is unlikely that the influence of those differences on the various metrics of human “success” will be zero.  In other words, we are dealing with a truth that is not only inconvenient, but immoral.  It violates the principle of equality. 

    It is only to be expected that there will be similarities between the reaction to this particular immoral truth and those that have been observed in response to other immoral truths in the past.  Typical reactions among those whose moral emotions have been aroused by such truths have been denial, vilification of the messenger, and the invention of straw men that are easier targets than the truth itself.  All these reactions occurred in response to what is probably the most familiar example of an immoral truth: the fact that genes influence behavior, or, if you will, that there actually is such a thing as "human nature."  In that case, denial took the form of the Blank Slate orthodoxy, which perverted and derailed progress in the behavioral sciences for more than half a century.  The messengers were condemned, not only with the long since hackneyed accusation of racism, but with a host of other political and moral shortcomings.  The most familiar straw man was, of course, the "genetic determinist."

    Predictably, the response to Wade's book has been similar.  Not so predictable has been the muted nature of that response.  Compared to the vicious attacks on the messengers who debunked the Blank Slate, it has been pianissimo, and even apologetic.  It would almost seem as if the current paragons of moral purity among us have actually been chastened by the collapse of that quasi-religious orthodoxy.  Allow me to illustrate with an example from the past.  It took the form of a response to the publication of E. O. Wilson's Sociobiology in 1975.  Entitled Against "Sociobiology", it appeared in the New York Review of Books shortly after Wilson's book was published.  As it happens, it didn't have just one author.  It had a whole gang, including such high priests of the Blank Slate as Stephen Jay Gould and Richard Lewontin.  The gang authorship conveyed the message, of course, that it was all right-thinking people, and not just a mere individual, who agreed that the book belonged on the proscribed list.  Anathemas were rained down on the head of Wilson with all the pious self-assurance of those who were cocksure they controlled the message of "science."  For example,

     The reason for the survival of these recurrent determinist theories is that they consistently tend to provide a genetic justification of the status quo and of existing privileges for certain groups according to class, race or sex. Historically, powerful countries or ruling groups within them have drawn support for the maintenance or extension of their power from these products of the scientific community.

     Wilson joins the long parade of biological determinists whose work has served to buttress the institutions of their society by exonerating them from responsibility for social problems.

    and, of course, the de rigueur association of Wilson with the Nazis:

     These theories provided an important basis for the enactment of sterilization laws and restrictive immigration laws by the United States between 1910 and 1930 and also for the eugenics policies which led to the establishment of gas chambers in Nazi Germany.

    Now, fast forward the better part of four decades and consider a similar diatribe by one of the current crop of the self-appointed morally pure.  The paragon of righteousness in question is Andrew Gelman, and the title of his bit is The Paradox of Racism. True, Gelman doesn’t leave us in suspense about whether he’s in the ranks of the just and good or not. He can’t even wait until he’s past the title of his article to accuse Wade of racism. However, having established his bona fides, he adopts a conciliatory, and almost apologetic tone. For example,

    Wade is clearly intelligent and thoughtful, and his book is informed by the latest research in genetics.

    Wade does not characterize himself as a racist, writing, “no one has the right or reason to assert superiority over a person of a different race.” But I characterize his book as racist based on the dictionary definition: per Merriam-Webster, “a belief that race is the primary determinant of human traits and capacities and that racial differences produce an inherent superiority of a particular race.” Wade’s repeated comments about creativity, intelligence, tribalism, and so forth seem to me to represent views of superiority and inferiority.

    That said, I can’t say that Wade’s theories are wrong. As noted above, racial explanations of current social and economic inequality are compelling, in part because it is always natural to attribute individuals’ successes and failures to their individual traits, and to attribute the successes and failures of larger societies to group characteristics. And genes provide a mechanism that supplies a particularly flexible set of explanations when linked to culture.

    Obviously, Gelman hasn’t been asleep for the last 20 years.  Here we find him peering back over his shoulder, appearing for all the world as if he’s afraid the truth might catch up with him.  He’s aware of the collapse of the Blank Slate orthodoxy, perhaps the greatest debunking of the infallible authority of “science” of all time.  He allows that we might not merely be dealing with a racist individual here, but a racist truth. He even acknowledges that, in that case, something might actually be done about it, implicitly dropping the “genetic determinism” canard:

     Despite Wade’s occasional use of politically conservative signifiers (dismissive remarks about intellectuals and academic leftists, an offhand remark about “global cooling”), I believe him when he writes that “this book is an attempt to understand the world as it is, not as it ought to be.” If researchers ever really can identify ethnic groups with genetic markers for short-term preferences, low intelligence, and an increased proclivity to violence, and other ethnic groups with an affinity for authoritarianism, this is something that more peaceful, democratic policymakers should be aware of.

    Indeed, unlike the authors of the earlier paper, Gelman can’t even bring himself to summon up the ghost of Hitler.  He concludes,

     Wade’s arguments aren’t necessarily wrong, just because they look like various erroneous arguments from decades past involving drunken Irishmen, crafty Jews, hot-blooded Spaniards, lazy Africans, and the like.

    In a word, Gelman’s remarks are rather more nuanced than the fulminations of his predecessors.  Am I making too much of this apparent change of tone? I don’t think so. True, there are still plenty of fire-breathing Blank Slaters lurking in the more obscure echo chambers of academia, but, like the Communists, they are doing us the favor of gradually dying off. Their latter day replacements, having seen whole legions of behavioral “scientists” exposed as charlatans, are rather less self-assured in their virtuous indignation. Some of them have even resorted to admitting that, while it may be true that there is such a thing as human biodiversity, the masses should be sheltered from that truth. Predictably, they have appointed themselves gatekeepers of the forbidden knowledge.

    I note in passing the historical value of the attack on Wilson mentioned above.  Like many similar bits and pieces of source material published in the decade prior to 1975, still easily accessible to anyone who cares to do a little searching, it blows the modern mythology concocted by the evolutionary psychologists to account for the origins of their science completely out of the water.  According to that mythology, it all began with the “big bang” of Wilson’s publication of Sociobiology.  The whole yarn may be found summarized in a nutshell in the textbook Evolutionary Psychology by David Buss.  According to Buss, Sociobiology was “monumental in both size and scope.” It “synthesized under one umbrella a tremendous diversity of scientific endeavors and gave the emerging field (sociobiology) a visible name.”  And so on and so on.  At least that’s the version in my 2009 edition of the book.  “History” might have changed a bit since Wilson’s embrace of group selection in his The Social Conquest of Earth, published in 2012.  We’ll have to wait and see when the next edition of the textbook is published.

    Be that as it may, the fact is that the reason for the original notoriety of Sociobiology, and the reason it is not virtually forgotten today, had nothing to do with all the good stuff Wilson packed into the middle 25 chapters of his book that was subsequently the subject of Buss’ panegyrics.  That reason was Wilson’s insistence in the first and last of his 27 chapters that there actually is such a thing as “human nature.” There was nothing in the least novel, original, or revolutionary in that insistence. In fact, it was merely a repetition of what other authors had been writing for more than a decade. Those authors were neither obscure nor ignored, and were recognized by Blank Slaters like Gould and Lewontin as their most influential and effective opponents. They, and not any novel “scientific synthesis,” were the reason that such worthies paid any attention to Wilson’s book at all. And, much as I admire the man, they, and not Wilson, were most influential in unmasking the absurdities of the Blank Slate, causing it to stumble and eventually collapse. Those facts were certainly no secret to the authors of the article.  Allow me to quote them by way of demonstration:

    From Herbert Spencer, who coined the phrase “survival of the fittest,” to Konrad Lorenz, Robert Ardrey, and now E. O. Wilson, we have seen proclaimed the primacy of natural selection in determining most important characteristics of human behavior.

    Each time these ideas have resurfaced the claim has been made that they were based on new scientific information.

     The latest attempt to reinvigorate these tired theories comes with the alleged creation of a new discipline, sociobiology.

    In a word, the Blank Slaters themselves certainly perceived nothing "novel" in Sociobiology.  By the time it was published the hypotheses about human nature they objected to in its content were already old hat.  They were merely trying to silence yet another voice proclaiming the absence of the emperor's new clothes.  Again, just do a little searching through the historical source material and you'll find that the loudest and most influential voice of all, and the one that drew the loudest bellows of rage from the Blank Slaters, belonged to one of Wilson's predecessors mentioned by name in the above quotes: Robert Ardrey.  Of course, Ardrey was a "mere playwright," and it's a well-known fact that, once one has been a playwright, one is automatically disqualified from becoming a scientist or writing anything that counts as science ever after.  Add to that the fact that Ardrey was right when all the scientists with their Ph.D.'s were wrong about human nature, and I think it's obvious why making him anything like the "father of evolutionary psychology" would be in bad taste.  Wilson fits that role nicely, or at least he did until his flirtation with group selection escalated into a full scale romance.  Ergo, Ardrey was declared "totally and utterly wrong," became an unperson, and Wilson stepped up to fill his ample shoes.  Alas, if past history is any guide, I fear that it eventually may become necessary to drop poor Nick down the memory hole as well.  True, at least the man isn't a playwright, but he isn't sporting a Ph.D., either.  Really, how "scientific" can you be if you don't have a Ph.D.?  In any case, the mythology that passes for the history of evolutionary psychology began "just so."

  • MSNBC’s Orwellian Take on “Animal Farm”

    Posted on May 9th, 2014 Helian No comments

    There’s been a lot of chatter on the Internet lately about MSNBC host Krystal Ball’s “re-interpretation” of Animal Farm as an anti-capitalist parable.  The money quote from her take in the video below:

    At its heart, Animal Farm is about tyranny and the likelihood of those in power to abuse that power. It’s clear that tendency is not only found in the Soviet communist experience. In fact, if you read Animal Farm today, it seems to warn not of some now non-existent communist threat but of the power concentrated in the hands of the wealthy elites and corporations…

    As new research shows that we already live a sort of oligarchy that the preferences of the masses literally do not matter and that the only thing that counts is the needs and desires of the elites, Animal Farm is a useful cautionary tale warning of the corruption of concentrated power, no matter in whose hands that power rests.

    Well, not exactly, Krystal.  As astutely pointed out by CJ Ciaramella at The Federalist,

    This is such a willfully stupid misreading that it doesn’t warrant much comment. However, for those who haven’t read Animal Farm since high school, as seems to be the case with Ball: The book is a satire of Soviet Russia specifically and a parable about totalitarianism in general. Every major event in the book mirrors an event in Soviet history, from the Bolshevik Revolution to Trotsky fleeing the country to Stalin’s cult of personality.

    Indeed.  Animal Farm's Napoleon as the Koch Brothers?  Snowball as Thomas Piketty?  I don't think so.  True, you have to be completely clueless about the history of the Soviet Union to come up with such a botched interpretation, but, after all, that's not too surprising.  For citizens of our fair Republic, cluelessness about the history of the Soviet Union is probably the norm.  The real irony here is that you also have to be completely clueless about Orwell to bowdlerize Animal Farm into an anti-capitalist parable.  If that's your agenda, why not fish out something more appropriate from his literary legacy?  Again, quoting Ciaramella,

    What is most impressive, though, is that MSNBC couldn’t locate an appropriate reference to inequality in the works of a lifelong socialist. It’s not as if one has to search hard to find Orwell railing against class divisions. He wrote an entire book, The Road to Wigan Pier, about the terrible living conditions in the industrial slums of northern England.

    Not to mention Down and Out in Paris and London and four volumes of essays full of rants against the Americans for being so backward about accepting the blessings of socialism.  Indeed, Orwell has been "re-interpreted" on the Right just as enthusiastically as on the Left of the political spectrum.  For example, from Brendan Bordelon at The Libertarian Republic,

    Leaving aside the obvious historical parallels between Animal Farm and the Soviet Union, the inescapable message is that government-enforced equality inevitably leads to oppression and further inequality, as fallible humans (or pigs) use powerful enforcement tools for their own personal gain.

    Sorry, Brendan, but that message is probably more escapable than you surmise.  Orwell was, in fact, a firm supporter of “government-enforced equality,” at least to the extent that he was a life-long, dedicated socialist.  Indeed, he thought the transition to socialism in the United Kingdom was virtually inevitable in the aftermath of World War II.

    In short, if you’re really interested in learning what Orwell was trying to “tell” us, whether in Animal Farm or the rest of his work, it’s probably best to read what he had to say about it himself.


  • “On Aggression” and the Continuing Vindication of the Unpersons

    Posted on May 8th, 2014 Helian No comments

    The vindication just keeps coming for the unpersons of the Blank Slate.  First, Robert Ardrey's "Territorial Imperative" is confirmed in an article in the journal International Security.  The authors actually deign to mention Ardrey, but claim that, even though their "novel ideas" are all remarkably similar to the main themes of a book he published almost half a century ago, it doesn't count.  You see, unlike all the other scientists who ever lived, Ardrey wasn't infallible, so he can be ignored, and his legacy appropriated at will.  Shortly thereafter, Ardrey's "Hunting Hypothesis" is confirmed yet again, and in the pages of Scientific American, no less!  The article in question bears the remarkably Ardreyesque title How Hunting Made Us Human.  It does not mention Ardrey.

    Now another major theme from the work of yet another unperson whose life work and legacy don't count because Richard Dawkins said he was "totally and utterly wrong" has been (yet again) confirmed!  The unperson in question is Konrad Lorenz, a Nobel laureate who dared to suggest that genes might have some influence on human aggression in his book, On Aggression, published back in 1966.  According to the authors of a recent Penn State study, there is now some doubt about whether Lorenz was "totally and utterly wrong" after all.  Here are some blurbs from an account in the Penn State News:

    Aggression-causing genes appeared early in animal evolution and have maintained their roles for millions of years and across many species, even though animal aggression today varies widely from territorial fighting to setting up social hierarchies, according to researchers from Iowa State University, Penn State and Grand Valley State University.

    If these “mean genes” keep their roles in different animals and in different contexts, then perhaps model organisms — such as bees and mice — can provide insights into the biological basis of aggression in all animals, including humans, the researchers said.

    Do you think Lorenz will get any credit?  Dream on!  After all, he wasn’t infallible (what was it he was wrong about now?  The “hydraulic theory” or something), and it’s a “well known fact,” as Stalin always used to say, that any scientist who wasn’t as infallible as the Almighty should be ignored and forgotten and his work freely appropriated.  Or at least that’s the rule generally applied by the modern “historians” of the Blank Slate to scientists whose existence is “inconvenient” to their narrative.

    BTW, the title typically used for articles about the study is very amusing.  In most cases, it's simply copied from the one used in the Penn State News: "Wasps use ancient aggression genes to create social groups".  Move along, people!  There's nothing interesting here.  It's just a dull study about wasps.

    No matter, studies on the influence of genes on human behavior continue to stream out of the Academy, demonstrating that, for the most part, such work can now be done without fear of retribution.  That, and not any vindicated or unvindicated scientific hypothesis, is the real legacy of Ardrey, Lorenz, and the other great unpersons of the Blank Slate.