Helian Unbound

The world as I see it
  • Of the War on Christmas and the Thinness of Leftist Skins

    Posted on December 20th, 2014 Helian No comments

    ‘Twas the month before Christmas, and Bill O’Reilly launched his usual jihad against the purported “War on Christmas.” It drew the predictable counterblasts from the Left, and I just happened to run across one that appeared back on December 4 on Huffpo, entitled “A War on Reason, Not on Christmas.” I must admit I find the “War on Christmas” schtick tiresome. Conservatives rightly point to the assorted liberal cults of victimization as so much pious grandstanding. It would be nice if they practiced what they preach and refrained from concocting similar cults of their own. Be that as it may, I found the article in question somewhat more unctuous and self-righteous than usual, and left a comment to that effect. It was immediately deleted.

    My comment included no ad hominem attacks, nor was it abusive. I simply disagreed with the author on a few points, and noted that the political Left has an exaggerated opinion of its devotion to reason. The main theme of the article was the nature of the political divide in the U.S. According to the author, it is less between rich and poor than between “reasonable” liberals and “irrational” conservatives. As he put it,

    Before imploding in the face of his sordid extramarital trysts, presidential candidate John Edwards based his campaign on the idea of two Americas, one rich, the other poor. He was right about the idea that America is divided, but wrong about the nature of the division. The deeper and more important split is defined by religiosity, not riches.

    The conflict between these two world views is made apparent in the details of our voting booth preferences. Religiosity alone is the most important, obvious and conclusive factor in determining voter behavior. Simply put, church goers tend to vote Republican. Those who instead go to the hardware store on Sunday vote Democrat by wide margins.

    He then continued,

    Those who accept the idea of god tend to divide the world into believers and atheists. Yet that is incorrect. Atheist means “without god” and one cannot be without something that does not exist. Atheism is really a pejorative term that defines one world view as the negative of another, as something not what something else is.

    This evoked my first comment, which seemed to me rather harmless on the face of it. I merely said that as an atheist myself, I had no objection to the term, and would prefer to avoid the familiar game of inventing ever more politically correct replacements until we ended up with some abomination seven or eight syllables long. However, what followed was even more remarkable. The author proceeded to deliver himself of a pronouncement about the nature of morality that might have been lifted right out of one of Ardrey’s books. In a section entitled, “Secular and Religious Morality,” he writes,

    Traits that we view as moral are deeply embedded in the human psyche. Honesty, fidelity, trustworthiness, kindness to others and reciprocity are primeval characteristics that helped our ancestors survive. In a world of dangerous predators, early man could thrive only in cooperative groups. Good behavior strengthened the tribal bonds that were essential to survival. What we now call morality is really a suite of behaviors favored by natural selection in an animal weak alone but strong in numbers. Morality is a biological necessity and a consequence of human development, not a gift from god.

    Exactly! Now, as I’ve often pointed out to my readers, if morality really is the expression of evolved traits as the author suggests, it exists because it happened to enhance the chances that certain genes we carry would survive and reproduce in the environment in which they happened to appear. There is no conceivable way in which they could somehow acquire the magic quality of corresponding to some “real, objective” morality in the sky. There is no way in which they could assume a “purpose” attributed to them by anyone, whether on the left or the right of the political spectrum. Finally, there is no way in which they could acquire the independent legitimacy to dictate to anyone the things they “really” ought or ought not to do. So much is perfectly obvious. Assuming one really is “reasonable,” it follows immediately from what the author of the article says about the evolved origins of morality above. That, of course, is not how the Left is spinning the narrative these days.

    No, for a large faction on the secular Left, the fact that morality is evolved means not merely that the God-given morality of the Christians and other religious sects is “unreasonable.” For them, it follows that whatever whims they happen to tart up as the secular morality du jour become “reasonable.” That means that they are not immoral, or amoral. They are, by default, the bearers of the “true morality.”  In the article in question it goes something like this:

    The species-centric arrogance of religion cultivates a dangerous attitude about our relationship with the environment and the resources that sustain us. Humanists tend to view sustainability as a moral imperative while theists often view environmental concerns as liberal interference with god’s will. Conservative resistance to accepting the reality of climate change is just one example, and another point at which religious and secular morality diverge, as the world swelters.

    It’s wonderful, really. The Left has always been addicted to moralistic posing, and now they don’t have to drop the charade! Now they can be as self-righteous as ever, as devotees of this secular version of morality that has miraculously acquired the power to become a thing-in-itself, presumably drifting up there in the clouds somewhere beyond the profane ken of the unenlightened Christians. As it happens, at the moment my neighbors are largely Mormon, and I must say their dogmas appear to me to be paragons of “reason” compared to this secular version of morality in the sky.

    Of course, I couldn’t include all these observations in the Huffpo comment section. I merely pointed out that what the author had said about morality would have branded him as a heretic no more than 20 years ago, and evoked frenzied charges of “racism” and “fascism” from the same political Left in which he now imagines himself so comfortably ensconced. That’s because 20 years ago the behavioral sciences were still in thrall to the Blank Slate orthodoxy, as they had been for 50 years and more at the time. That orthodoxy was the greatest debacle in the history of science, and it was the gift, not of the Right, but of the “reasonable” secular Left. That was the point I made in the comment section, along with the observation that liberals would do well to keep it in mind before they break their arms patting themselves on the back for being so “reasonable.”

    The author concluded his article with the following:

    There is no war on Christmas; the idea is absurd at every level. Those who object to being forced to celebrate another’s religion are drowning in Christmas in a sea of Christianity dominating all aspects of social life. An 80 percent majority can claim victimhood only with an extraordinary flight from reality. You are probably being deafened by a rendition of Jingle Bells right now. No, there is no war on Christmas, but make no mistake: the Christian right is waging a war against reason. And they are winning. O’Reilly is riding the gale force winds of crazy, and his sails are full.

    I must agree that the beloved Christian holiday does have a fighting chance of surviving the “War on Christmas.” Indeed, Bill O’Reilly himself has recently been so sanguine as to declare victory.  When it comes to popular delusions, however, I suspect the Left’s delusion that it has a monopoly on “reason” is likely to be even more enduring.  As for the deletion of my comment, we all know about the Left’s proclivity for suppressing speech that they find “offensive.”  Thin skins are encountered in those political precincts at least as frequently as the characteristic delusions about “reason.”

  • Does It Matter If You Believe In God?

    Posted on December 14th, 2014 Helian No comments

    In an open thread that was posted today at Professor Ceiling Cat’s Why Evolution is True website, he asked his readers,

    …to tell me why, in the absence of data, they were so sure that religion was bad for the world. That is, how do they know that if the world had never had religion, it would be better than it is now?

    and added,

    That would seem to be an empirical question, resolvable only with data. Yet as far as I can see (and I haven’t read every comment), most readers feel that the question can be resolved not with data, but with logic or from first principles. Or, they cite anecdotes like religiously-inspired violence (my response would be that it’s easy to measure deaths, but not so easy to measure the consolation and well being that, believers claim, religion brings them). But pointing out that religion does bad stuff doesn’t answer the question if it’s been harmful on the whole.

    As an atheist myself, my answer would be that the question is neither empirical nor resolvable with logic from first principles, because it implies an objective standard whereby such terms as “bad,” “better,” and “harmful” can be defined.  No such objective standard exists.  At best, one can identify the consequences and then decide whether they are “good” or “bad” based on one’s personal subjective whims.  As long as it is clearly understood that my reply is based on that standard, I would say that religion is “bad.”

    Supernatural beings either exist or they don’t.  I don’t claim to know the truth of the matter with absolute certainty.  I don’t claim to know anything with absolute certainty.  I base my actions and my goals in life on what I consider probable rather than absolute truths, and I consider the chance that a God or other supernatural beings exist to be vanishingly small.

    The question then becomes, do I, again from my personal point of view, consider it a good thing for other people to believe in supernatural beings even though I consider that belief an illusion?  In short, the answer is no.  It will never be possible for us to know and understand ourselves, either as individuals or as a species, if we believe things that are false, and yet have a profound impact on our understanding of where we come from, what the future holds for us, what morality is and why it exists, the nature of our most cherished goals, and how we live our lives.  Our very survival may depend on whether or not we have an accurate knowledge of ourselves.  I want my species to survive, and therefore I want as many of us as possible to have that knowledge.

    According to a current manifestation of the naturalistic fallacy, religion “evolved,” and therefore it is “good.”  Among other places, articles to this effect have appeared at the This View of Life website, edited by David Sloan Wilson, a noted proponent of group selection.  Examples may be found here and here.  According to the latter:

    For Darwin, an inevitable conflict between evolution and religion could not exist for the simple reason that religiosity and religions had been biocultural products of evolution themselves! He realized in the 19th century what many religious Creationists and so-called “New Atheists” are trying to ignore in their odd alliance to this day: If evolutionary theory is true, it must be able to explain the emergence of our cognitive tendencies to believe in supernatural agencies and the forms and impacts of its cultural products.

    I’m not sure which passages from the work of Darwin the article’s author construed to mean that he believed that “an inevitable conflict between evolution and religion could not exist,” but the idea is nonsense in any case.  Many flavors of both Christianity and Islam explicitly deny the theory of evolution, and therefore a conflict most certainly does exist.  That conflict will not disappear whether religiosity and religions are biocultural products of evolution or not.  Assuming for the sake of argument that they are, that mere fact would be irrelevant to the questions of whether religiosity and religions are “good,” or whether supernatural beings actually exist or not.

    In any case, I doubt that religiosity and religion are biocultural products of evolution in any but a very limited sense.  It is most unlikely that genes could be “smart enough” to distinguish between supernatural and non-supernatural agencies in the process of installing innate behavioral tendencies in our brains.  Some subset of our suite of innate behavioral predispositions might make it more likely for us to respond to and behave towards “leaders” in some ways and not in others.  Once we became sufficiently intelligent to imagine supernatural beings, it became plausible that we might imagine one as “leader,” and culture could take over from there to come up with the various versions of God or gods that have turned up from time to time.  That does not alter the fact that the “root cause” of these manifestations almost certainly does not directly “program” belief in the supernatural.

    This “root cause,” supposing it exists, is to be found in our genes, and our genes are not in the habit of rigidly determining what we believe or how we act.  In other words, our genes cannot force us to believe in imaginary beings, as should be obvious from the prevalence of atheists on the planet.  Because of our genes we may “tend” to believe in imaginary beings, but it is at least equally likely that because of them we “tend” to engage in warfare.  Supposing both tendencies exist, that mere fact hardly ensures that they are also “good.”  Insisting that the former is “good” is equivalent to the belief that it is “good” for us to believe certain lies.  This raises the question of how anyone is to acquire the legitimate right to determine for the rest of us that it is “good” for us to believe in lies, not to mention which particular version of the lie is “most good.”

    One can argue ad nauseam about whether, on balance, religion has been “good” because of the comfort and consolation it provides in this vale of tears, the art products it has spawned, and the sense of community it has encouraged, or “bad” because of the wars, intolerance, bigotry, and social strife that can be chalked up to its account.  In the end, it seems to me that the important question is not who “wins” this argument, but whether religious claims are true or not.  If, as I maintain, they are not, then, from my personal point of view, it is “good” that we should know it.  It matters in terms of answering such questions as what we want to do with our lives and why.

    Consider, for example, the question of life after death.  Most of us don’t look forward to the prospect of death with any particular relish, and it is certainly plausible to claim that religion provides us with the consolation of an afterlife.  Suppose we look at the question from the point of view of our genes.  They have given rise to our consciousness, along with most of the other essential features of our physical bodies, because consciousness has made it more probable that those genes would survive and reproduce.  When we fear death, we fear the death of our consciousness, but as far as the genes are concerned, consciousness is purely ancillary – a means to an end.  If they “program” an individual to become a Catholic priest in order to inherit eternal life, and that individual fails to have children as a result, then, from the genes’ “point of view,” they have botched it.

    In a sense, it is more rational to claim that “we” are our genes rather than that “we” are this ancillary entity we refer to as consciousness.  In that case, “we” have never died.  “Our” lives have existed in an unbroken chain, passed from one physical form to another for billions of years.  The only way “we” can die is for the last physical “link in the chain” to fail to have children.  Of course, genes don’t really have a point of view, nor do they have a purpose.  They simply are.  I merely point out that it would be absurd to imagine that “we” suddenly spring into existence when we are born, and that “we” then die and disappear forever with the physical death of our bodies.  Why on earth would Mother Nature put up with such nonsense?  It seems to me that such an irrational surmise must be based on a fundamental confusion about who “we” actually are.

  • What Made the “blank slate” the Blank Slate?

    Posted on December 7th, 2014 Helian No comments

    The Blank Slate affair was probably the greatest scientific debacle in history.  For half a century, give or take, an enforced orthodoxy prevailed in the behavioral sciences, promoting the dogma that there is no such thing as human nature.  So traumatic was the affair that no accurate history of it has been written to this day.  What was it about the Blank Slate affair that transmuted what was originally just another false hypothesis into a dogma that derailed progress in the behavioral sciences for much of the 20th century?  After all, the blank slate as a theory has been around since the time of Aristotle.  A host of philosophers have supported it in one form or another, including John Locke, Jean-Jacques Rousseau, and John Stuart Mill.  Many others had opposed them, including such prominent British moral philosophers as Shaftesbury, Hutcheson, Hume, and Mackintosh.

    Sometimes the theories of these pre-Darwinian philosophers were remarkably advanced.  Hume, of course, is often cited by evolutionary psychologists in our own time for pointing out that such human behavioral phenomena as morality cannot be derived by reason, and are rooted in emotion, or “passions.”  In his words, “Reason is wholly inactive, and can never be the source of so active a principle as conscience, or a sense of morals.”  The relative sophistication of earlier thinkers can also be demonstrated by comparing them with the rigid dogmas of the Blank Slaters of the 20th century who followed them.  For example, the latter day dogmatists invented the “genetic determinist” straw man.  Anyone who insisted, however mildly, on the existence of human nature was automatically denounced as a “genetic determinist,” that is, one who believes that human “instincts” are as rigid as those of a spider building its nest, and we are powerless to control them rationally.  Real “genetic determinists” must be as rare as unicorns, because in spite of a diligent search I have never encountered one personally.  The opponents of the Blank Slate against whom the charge of “genetic determinism” was most commonly leveled were anything but.  They all insisted repeatedly that human behavior was influenced, not by rigid instincts that forced us to engage in warfare and commit acts of “aggression,” but by predispositions that occasionally worked against each other and could be positively directed or controlled by reason.  As it happens, this aspect of the nature of our “nature” was also obvious to earlier thinkers long before Darwin.  For example, 19th century British moral philosopher William Whewell, referring to the work of his co-philosopher Henry Sidgwick, writes,

    The celebrated comparison of the mind to a sheet of white paper is not just, except we consider that there may be in the paper itself many circumstances which affect the nature of the writing.  A recent writer, however, appears to me to have supplied us with a much more apt and beautiful comparison.  Man’s soul at first, says Professor Sidgwick, is one unvaried blank, till it has received the impressions of external experience.  “Yet has this blank,” he adds, “been already touched by a celestial hand; and, when plunged in the colors which surround it, it takes not its tinge from accident but design, and comes out covered with a glorious pattern.”  This modern image of the mind as a prepared blank is well adapted to occupy a permanent place in opposition to the ancient sheet of white paper.

    Note that Sidgwick was a utilitarian, and is often referred to as a “blank slater” himself.  Obviously, he had a much more nuanced interpretation of “human nature” than the Blank Slaters of a later day, and was much closer both to the thought of Darwin and to that of modern evolutionary psychologists than they were.  This, by the by, illustrates the danger of willy-nilly throwing all the thinkers who have ever mentioned some version of the blank slate into a common heap, or of ordering them all in a neat row, as if each one since the time of Aristotle “begat” the next after the fashion of a Biblical genealogy.

    In any case, these pre-Darwinian thinkers and philosophers could occasionally discuss their differences without stooping to ad hominem attacks, and even politely.  That, in my opinion, is a fundamental difference between them and the high priests of the Blank Slate orthodoxy.  The latter day Blank Slaters were ideologues, not scientists.  They derailed the behavioral sciences because their ideological narrative invariably trumped science, and common sense, for that matter.  Their orthodoxy was imposed and enforced, not by “good science,” but by the striking of moralistic poses, and the vicious vilification of anyone who opposed them.  And for a long time, it worked.

    By way of example, it will be illuminating to look at the sort of “scientific” writings produced by one of these high priests, Richard Lewontin.  Steven Pinker’s book, The Blank Slate, is occasionally flawed, but it does do a good job of describing the basis of Lewontin’s Blank Slate credentials.  Interested readers are encouraged to check the index.  As Pinker puts it,

    So while Gould, Lewontin, and Rose deny that they believe in a blank slate, their concessions to evolution and genetics – that they let us eat, sleep, urinate, defecate, grow bigger than a squirrel, and bring about social change – reveal them to be empiricists more extreme than Locke himself, who at least recognized the need for an innate faculty of “understanding.”

    Anyone doubting the accuracy of this statement can easily check the historical source material to confirm it.  For example, in a rant against E. O. Wilson’s Sociobiology in the New York Review of Books, which Lewontin co-authored with Gould and others, we find, along with copious references to the “genetic determinist” bugbear,

    We are not denying that there are genetic components to human behavior. But we suspect that human biological universals are to be discovered more in the generalities of eating, excreting and sleeping than in such specific and highly variable habits as warfare, sexual exploitation of women and the use of money as a medium of exchange.

    Anyone still inclined to believe that Lewontin wasn’t a “real” Blank Slater need only consult the title of his most significant book on the subject, Not In Our Genes, published in 1984.  What on earth was he referring to as “not in our genes,” if not innate behavior?  As it happens, that book is an excellent reference for anyone who cares to examine the idiosyncratic fashion in which the Blank Slaters were in the habit of doing “science.”  Here are some examples, beginning with the “genetic determinist” bogeyman:

    Biological determinism (biologism) has been a powerful mode of explaining the observed inequalities of status, wealth, and power in contemporary industrial capitalist societies, and of defining human “universals” of behavior as natural characteristics of these societies.  As such, it has been gratefully seized upon as a political legitimator by the New Right, which finds its social nostrums so neatly mirrored in nature; for if these inequalities are biologically determined, they are therefore inevitable and immutable.

    Biological determinist ideas are part of the attempt to preserve the inequalities of our society and to shape human nature in their own image.  The exposure of the fallacies and political content of those ideas is part of the struggle to eliminate those inequalities and to transform our society.

    All of these recent political manifestations of biological determinism have in common that they are directly opposed to the political and social demands of those without power.

    The Nobel Prize laureate Konrad Lorenz, in a scientific paper on animal behavior in 1940 in Germany during the Nazi extermination campaign said:  “The selection of toughness, heroism, social utility… must be accomplished by some human institutions if mankind in default of selective factors, is not to be ruined by domestication induced degeneracy.  The racial idea as the basis of the state has already accomplished much in this respect.”  He was only applying the view of the founder of eugenics, Sir Francis Galton, who sixty years before wondered that “there exists a sentiment, for the most part quite unreasonable, against the gradual extinction of an inferior race.”  What for Galton was a gradual process became rather more rapid in the hands of Lorenz’s efficient friends.  As we shall see, Galton and Lorenz are not atypical.

    Of course, Lewontin is a Marxist.  Apparently, by applying the “dialectic,” he has determined that the fact that the process was even more rapid and efficient in the hands of his Communist friends doesn’t have quite the same “ideological” significance.  As far as eugenics is concerned, it was primarily promoted by leftists and “progressives” in its heyday.  Apparently Lewontin “forgot” that as well, for, continuing in the same vein, he writes:

    The sorry history of this century of insistence on the iron nature of biological determination of criminality and degeneracy, leading to the growth of the eugenics movement, sterilization laws, and the race science of Nazi Germany has frequently been told.

    The claim that “human nature” guarantees that inherited differences between individuals and groups will be translated into a hierarchy of status, wealth, and power completes the total ideology of biological determinism.  To justify their original ascent to power, the new middle class had to demand a society in which “intrinsic merit” could be rewarded.  To maintain their position they now claim that intrinsic merit, once free to assert itself, will be rewarded, for it is “human nature” to form hierarchies of power and reward.

    Biological determinism, as we have been describing it, draws its human nature ideology largely from Hobbes and the Social Darwinists, since these are the principles on which bourgeois political economy are founded.

    Everyone had to be stretched or squeezed to fit on the Procrustean bed of Lewontin’s Marxist dogma. In the process, E. O. Wilson became a “bourgeois” like all the rest:

    More, by emphasizing that even altruism is the consequence of selection for reproductive selfishness, the general validity of individual selfishness in behaviors is supported.  E. O. Wilson has identified himself with American neoconservative liberalism, which holds that society is best served by each individual acting in a self-serving manner, limited only in the case of extreme harm to others.  Sociobiology is yet another attempt to put a natural scientific foundation under Adam Smith.  It combines vulgar Mendelism, vulgar Darwinism, and vulgar reductionism in the service of the status quo.

    This, then, was the type of “scientific” criticism favored by the ideologues of the Blank Slate.  They had an ideological agenda, and so assumed that everything that anyone else thought, wrote, or said, must be part of an ideological agenda as well.  There could be no such thing as “mere disagreement.”  Disagreement implied a different agenda, one opposed to clearing the path to the Brave New World favored by the Blank Slaters.  Such an agenda, by definition, sought to institutionalize inequality, racism, and the evil status quo, and was therefore criminal.

    It’s hard to imagine anything more important than getting the historical record of the Blank Slate affair straight.  We possess the means of committing suicide as a species.  Self-knowledge is critical if we are to avoid that fate.  The Blank Slate orthodoxy planted itself firmly in the path of any advance in human self-knowledge for a great many more years than we could afford to squander.  In spite of that, the bowdlerization of history continues.  Lewontin and the other high priests of the Blank Slate are being reinvented as paragons of reason, who were anything but “blank slaters” themselves, but merely applied some salutary adult supervision to the worst excesses of evolutionary psychology.  Often, they left themselves an “out” for this eventual rehabilitation by protesting that they weren’t “blank slaters” at all.  For example, again quoting from Lewontin:

    Yet, at the same time, we deny that human beings are born tabulae rasae, which they evidently are not, and that individual human beings are simple mirrors of social circumstances.  If that were the case, there could be no social evolution.

    One can easily see through this threadbare charade by merely taking the trouble to actually read Lewontin.  What Pinker has to say as noted above about the degree to which he was “not a blank slater” is entirely accurate.  I know of not a single instance in which he has ever agreed that anything commonly referred to in the vernacular as “human nature,” as opposed to urinating, defecating, being taller than a squirrel, etc., is real.  Throughout his career he has rejected the behavioral hypotheses of ethology (yes, I am referring to the behavior of animals other than man, as well as our own species), sociobiology, and evolutionary psychology root and branch.

    It has been said that those who do not learn from history are doomed to repeat it.  However, it’s not out of the question that we don’t have enough time left to enjoy the luxury of making the same mistake twice.  Under the circumstances, we would be well-advised to take a very dim view of any future saviors of the world who show signs of adopting political vilification as their way of “doing science.”

  • The New Atheists as Imperialist Warmongers; Leftist Islamophilia in the Afterglow of Communism

    Posted on December 3rd, 2014 Helian 1 comment

    The human types afflicted with the messianic itch have never been too choosy about the ideology they pick to scratch it.  For example, the Nazis turned up some of their most delirious converts among the ranks of former Communists, and vice versa.  The “true believer” can usually make do with whatever is available.  The main thing is that whatever “ism” he chooses enables him to maintain the illusion that he is saving the world and clearing the path to some heavenly or terrestrial paradise, and at the same time supplies him with an ingroup of like-minded zealots.  In the 20th century both Communism and Nazism/fascism, which had served admirably in their time, collapsed, leaving an ideological vacuum behind.  As we all know, nature abhors a vacuum, and something had to fill it.  Paradoxically, that “something” turned out to be radical Islam.  For the true believers, it is now pretty much the only game in town.  The result of this ideological sea change has been quite spectacular.  The “human types” one would normally have expected to find in the ranks of the atheist Communists 50 or 75 years ago are now powerfully attracted to the latest manifestation of industrial strength religious fanaticism.

    So far the ideological gap between the secular left that supplied the Communists of yesteryear and the jihadis of today has been a bit too wide for most western “progressives” to hop across.  Instead, they’ve been forced to settle for casting longing gazes at the antics of the less inhibited zealots on the other side of the chasm.  They can’t quite manage the ideological double back flip between the culture they come from and obscurantist Islam.  Instead, they seize on surrogates, defending the “oppressed” Palestinians against the “apartheid” Israelis, meanwhile furiously denouncing anyone who dares to criticize the new inamorata they are forced to love from afar as “islamophobic.”

    An interesting manifestation of this phenomenon recently turned up on the website of The Jacobin Magazine,  which styles itself, “The leading voice of the American left.”  In an article entitled “Old Atheism, New Empire,” one Luke Savage, described as “a student of political theory and formerly the editor of Canada’s largest student newspaper,” demonstrates that the New Atheists are not really the paladins of Enlightenment they claim to be, but are actually conducting a clever underground campaign to defend imperialism and provide a “smokescreen for the injustice of global capitalism!”  Similar attacks on such New Atheist stalwarts as Richard Dawkins, Sam Harris, and the late Christopher Hitchens are becoming increasingly common as the Left’s love affair with radical Islam continues to blossom.  The New Atheists, in turn, are finding that the firm ground on the left of the ideological spectrum they thought they were standing on has turned to quicksand.

    It isn’t hard to detect the Islamist pheromones in the article in question.  We notice, for example, that Savage isn’t particularly concerned about New Atheist attacks on religion in general.  He hardly mentions Christianity.  When it comes to Islam, however, it’s a different story.  As Savage puts it,

    It is against the backdrop of the war on terror, with its violent and destructive adventurism, that the notion of a monolithic evil called “Islam” has found a sizable constituency in the circles of liberal respectability.

    As one might expect, this is followed by the de rigueur charge of racism:

    The excessive focus on Islam as something at once monolithic and exceptionally bad, whose backwards followers need to have their rights in democratic societies suppressed and their home countries subjected to a Western-led civilizing process, cannot be called anything other than racist.

    Moslem zealots, we find, aren’t really the enemy of, but actually belong in the pantheon of, officially endorsed and certified victim groups:

    Criticisms of the violence carried out by fundamentalists of any kind – honor killings, suicide bombings, systemic persecution of women or gay people, or otherwise – are neither coherent nor even likely to be effective when they falsely attribute such phenomena to some monolithic orthodoxy.

    The cognoscenti will have immediately noticed some amusing similarities between this rhetoric and that used to defend Communism in a bygone era.  Notice, for example, the repeated insistence that Islam is not “monolithic.”  Back in the day, one of the most hackneyed defenses of Communism was also that it was not “monolithic.”  No doubt it was a great comfort to the millions slowly starving to death in the Gulag, or on their way to get a bullet in the back of the neck, that they at least weren’t the victims of a “monolithic” assassin.  In case that’s too subtle for you, Savage spells it out, quoting from a book by Richard Seymour:

    The function of [Hitchens’] antitheism was structurally analogous to what Irving Howe characterized as Stalinophobia…the Bogey-Scapegoat of Stalinism justified a new alliance with the right, obliviousness towards the permanent injustices of capitalist society, and a tolerance for repressive practices conducted in the name of the “Free World.”  In roughly isomorphic fashion Hitchens’ preoccupation with religion…authorized not just a blind eye to the injustices of capitalism and empire but a vigorous advocacy of the same.

    One would think that defending “the opiate of the masses” would be a bitter pill for any dedicated fighter against “capitalism and empire” to swallow, but Savage manages it with aplomb.  Channeling the likes of Karen Armstrong, David Bentley Hart, and the rest of the “sophisticated Christians,” he writes,

    Whether directed at Catholicism, Paganism, or Islam, the methodology employed to expose the inherent “irrationality” of all religions betrays a fundamental misunderstanding (or perhaps misrepresentation) of the nature of religious discourses, beliefs, and practices.

    If that’s not quite rarified enough for you, how about this:

    Moreover, the core assertion that forms the discursive nucleus of books like The God Delusion, God is Not Great, and The End of Faith – namely, that religious texts can be read as literal documents containing static ideas, and that the ensuing practices are uniform – is born out by neither real, existing religion or by its historical reality as a socially and ideologically heterogeneous phenomenon.

    and this:

    This is particularly significant in relation to the New Atheists’ denunciations of what they call “the doctrine of Islam” because it renders bare their false ontology of religion – one which more or less assumes that fundamentalism is the product of bad ideas rather than particular social and material conditions.

    So Stalin wasn’t a bad boy.  He just had a bad environment.  See how that works?  At this point Marx must be spinning in his grave, so we’ll leave these eloquent defenses of religion at that, and let the old man get some rest.  In point of fact Marxism was itself a religion for all practical purposes.  It just happened to be a secular one, with an earthly rather than a heavenly paradise.  In its heyday, Communism had to damn the older, spiritual versions because messianic religions are never tolerant.  Now that it’s defunct as an effective vehicle for militant zealotry, it’s pointless to continue trying to defend it from its spiritual competition.

    In any case, the “progressive” flirtation with medieval obscurantism continues unabated.  Will it ever become a full-fledged embrace?  I suppose it’s not completely out of the question, but a lot of ideological baggage will have to be ditched along the way to that consummation.  As for the New Atheists, one might say that they’ve just had a religious experience in spite of themselves.  They’ve all been excommunicated.

    [Cartoon: Happyjar]

    Thanks to Tom at Happyjar.com for the cartoon.  Check out his store!

  • Do We Really Need New Nukes?

    Posted on December 2nd, 2014 Helian No comments

    If an article that just appeared in the LA Times is any indication, the agitation for jump-starting the nuclear weapons program at the Department of Energy (DOE) and the three nuclear weapons laboratories (Lawrence Livermore, Los Alamos, and Sandia National Laboratories) continues unabated. Entitled “New nuclear weapons needed, many experts say, pointing to aged arsenal,” it cites all the usual talking points of the weaponeers. For example,

    Warheads in the nation’s stockpile are an average of 27 years old, which raises serious concerns about their reliability, they say. Provocative nuclear threats by Russian President Vladimir Putin have added to the pressure to not only design new weapons but conduct underground tests for the first time since 1992.

    “It seems like common sense to me if you’re trying to keep an aging machine alive that’s well past its design life, then you’re treading on thin ice,” said Rep. Mac Thornberry (R-Texas), chairman-elect of the House Armed Services Committee. “Not to mention, we’re spending more and more to keep these things going.”

    Thornberry also offered support for renewed testing, saying, “You don’t know how a car performs unless you turn the key over. Why would we accept anything less from a weapon that provides the foundation for which all our national security is based on?”

    Such comments are entirely typical. They would make a lot of sense if the U.S. nuclear weapons program existed in a vacuum. However, it doesn’t. It exists in a world with several other major nuclear powers, and they all have the same problems. Under the circumstances, the fact that such problems exist and are shared by all the nuclear powers is less significant than the question of which nuclear power is best equipped to deal with them. Who would benefit from building new weapons and resuming nuclear testing depends on the answer to that question. If one country has a significant advantage over its rivals in dealing with a common problem as long as the status quo is maintained, then it would be very ill-advised to initiate a change to the status quo that would allow those rivals to catch up.  At the moment, the United States is the country with that advantage. As noted in the article,

    The U.S. has by far the greatest archive of test data, having conducted 1,032 nuclear tests. Russia conducted 715 and China only 45.

    Beyond that, we have the ability to conduct tests with conventional explosives that mimic what goes on in the initial stages of a nuclear explosion, and superb diagnostics to extract a maximum of data from those tests. Perhaps more importantly, we have an unrivaled above ground experimental, or AGEX, capability. I refer to machines like Z at Sandia National Laboratories, or the NIF at Livermore, which are far more capable and powerful than similar facilities anywhere else in the world. Those who say these facilities can’t access physical conditions relevant to those that occur in exploding nuclear weapons, or that they are useless for weapon effects or weapon physics experiments, either don’t know what they’re talking about or are attempting to deceive.

    As far as the NIF is concerned, it is quite true that it has so far failed to achieve its fusion ignition milestone, but that by no means rules out the possibility that it eventually will. More importantly, the NIF will remain a highly useful AGEX facility whether it achieves ignition or not. Indeed, before it was built, many of the weapons designers showed little interest in ignition. It would merely “muddy the waters,” making it more difficult for the diagnostics to precisely record the results of an experiment. The NIF could access weapons-relevant conditions without it. In fact, in spite of its failure to achieve ignition to date, the NIF has been a spectacular success as far as achieving its specifications is concerned. It is more than an order of magnitude more powerful than any previously existing laser system, its 192 laser beams are highly accurate, and its diagnostic suite is superb.

    Another problem with the resumption of testing is that it will lead to the development of weapons that are much more likely to be used. Once the nuclear genie is out of the bottle, it will likely prove very difficult to put it back in. For example, again quoting the article,

    John S. Foster Jr., former director of Lawrence Livermore National Laboratory and chief of Pentagon research during the Cold War, said the labs should design, develop and build prototype weapons that may be needed by the military in the future, including a very low-yield nuclear weapon that could be used with precision delivery systems, an electromagnetic pulse weapon that could destroy an enemy’s communications systems and a penetrating weapon to destroy deeply buried targets.

    The commonly heard narrative at DOE goes something like this: “We need to develop small, precise, penetrating nuclear weapons because they will be a much better deterrent than the existing ones. Potential enemies are unlikely to believe that we would ever use one of the high yield weapons that are all that remain in the current arsenal. They would be far more likely to believe that we might use a small bunker buster that would minimize the possibility of significant collateral damage.” The problem with that narrative is that it’s true. We would be far more likely to use such a weapon than the ones in the current arsenal, and there would be no lack of voices within DOE and DoD calling for its use if an appropriate opportunity ever arose.

    I can understand the agitation for a resumption of testing. It’s a lot sexier to make things that go boom than to serve as custodians for an aging pile of existing nukes. Unfortunately, the latter course is the wiser one. By resuming nuclear testing we would really be unilaterally surrendering a huge advantage, playing into the hands of our enemies and destabilizing the nuclear landscape at the same time.

  • On the Continuing Adventures of the “Killer Ape Theory” Zombie

    Posted on November 19th, 2014 Helian No comments

    An article entitled “The Evolution of War – A User’s Guide” recently turned up at “This View of Life,” a website hosted by David Sloan Wilson. Written by Anthony Lopez, it is one of the more interesting artifacts of the ongoing “correction” of the history of the debate over human nature I’ve seen in a while. One of the reasons it’s so remarkable is that, although Wilson himself is one of the foremost proponents of the theory of group selection, and Lopez claims in his article that one of the four “major theoretical positions” in the debate over the evolution of war is occupied by the “group selectionists,” Lopez nevertheless conforms to the prevailing academic conceit of studiously ignoring the role of Robert Ardrey, who was not only the most influential player in the “origins of war” debate, but overwhelmingly so in the whole “Blank Slate” affair as well. Why should that be so remarkable? Because at the moment the academics’ main rationalization for pretending they never heard of a man named Ardrey is (you guessed it) his support for group selection!

    When it comes to the significance of Ardrey, you don’t have to take my word for it. His was the most influential voice in a growing chorus that finally smashed the Blank Slate orthodoxy. The historical source material is all still there for anyone who cares to trouble themselves to check it. One invaluable piece thereof is “Man and Aggression,” a collection of essays edited by arch-Blank Slater Ashley Montagu and aimed mainly at Ardrey, with occasional swipes at Konrad Lorenz, and with William Golding, author of “Lord of the Flies,” thrown in for comic effect. The last I looked you could still pick it up for a penny at Amazon. For example, from one of the essays by psychologist Geoffrey Gorer,

    Almost without question, Robert Ardrey is today the most influential writer in English dealing with the innate or instinctive attributes of human nature, and the most skilled populariser of the findings of paleo-anthropologists, ethologists, and biological experimenters… He is a skilled writer, with a lively command of English prose, a pretty turn of wit, and a dramatist’s skill in exposition; he is also a good reporter, with the reporter’s eye for the significant detail, the striking visual impression. He has taken a look at nearly all the current work in Africa of paleo-anthropologists and ethologists; time and again, a couple of his paragraphs can make vivid a site, such as the Olduvai Gorge, which has been merely a name in a hundred articles.

    In case you’ve been asleep for the last half a century, the Blank Slate affair was probably the greatest debacle in the history of science. The travails of Galileo and the antics of Lysenko are child’s play in comparison. For decades, whole legions of “men of science” in the behavioral sciences pretended to believe there was no such thing as human nature. As was obvious to any ten year old, that position was not only not “science,” it was absurd on the face of it. However, it was required as a prop for a false political ideology, and so it stood for half a century and more. Anyone who challenged it was quickly slapped down as a “fascist,” a “racist,” or a denizen of the “extreme right wing.” Then Ardrey appeared on the scene. He came from the left of the ideological spectrum himself, but also happened to be an honest man. The main theme of all his work in general, and the four popular books he wrote between 1961 and 1976 in particular, was that there is such a thing as human nature, and that it is important. He insisted on that point in spite of a storm of abuse from the Blank Slate zealots. On that point, on that key theme, he has been triumphantly vindicated. Almost all the “men of science” in psychology, sociology, and anthropology were wrong, and he was right.

    Alas, the “men of science” could not bear the shame. After all, Ardrey was not one of them. Indeed, he was a mere playwright! How could men like Shakespeare, Ibsen, and Moliere possibly know anything about human nature? Somehow, they had to find an excuse for dropping Ardrey down the memory hole, and find one they did! There were actually more than one, but the main one was group selection. Writing in “The Selfish Gene” back in 1976, Richard Dawkins claimed that Ardrey, Lorenz, and Irenäus Eibl-Eibesfeldt were “totally and utterly wrong,” not because they insisted there was such a thing as human nature, but because of their support for group selection! Fast forward to 2002, and Steven Pinker managed the absurd feat of writing a whole tome about the Blank Slate that only mentioned Ardrey in a single paragraph, and then only to assert that he had been “totally and utterly wrong,” period, on Richard Dawkins’ authority, and with no mention of group selection as the reason. That has been the default position of the “men of science” ever since.

    Which brings us back to Lopez’ paper. He informs us that one of the “four positions” in the debate over the evolution of war is “The Killer Ape Hypothesis.” In fact, there never was a “Killer Ape Hypothesis” as described by Lopez. It was a strawman, pure and simple, concocted by Ardrey’s enemies. Note that, in spite of alluding to this imaginary “hypothesis,” Lopez can’t bring himself to mention Ardrey. Indeed, so effective has been the “adjustment” of history that, depending on his age, it’s quite possible that he’s never even heard of him. Instead, Konrad Lorenz is dragged in as an unlikely surrogate, even though he never came close to supporting anything even remotely resembling the “Killer Ape Hypothesis.” His main work relevant to the origins of war was “On Aggression,” and he hardly mentioned apes in it at all, focusing instead mainly on the behavior of fish, birds and rats.

    And what of Ardrey? As it happens, he did write a great deal about our ape-like ancestors. For example, he claimed that Raymond Dart had presented convincing statistical evidence that one of them, Australopithecus africanus, had used weapons and hunted. That statistical evidence has never been challenged, and continues to be ignored by the “men of science” to this day. Without bothering to even mention it, C. K. Brain presented an alternative hypothesis that the only acts of “aggression” in the caves explored by Dart had been perpetrated by leopards. In recent years, as the absurdities of his hypothesis have been gradually exposed, Brain has been in serious row back mode, and Dart has been vindicated to the point that he is now celebrated as the “father of cave taphonomy.”

    Ardrey also claimed that our apelike ancestors had hunted, most notably in his last book, “The Hunting Hypothesis.” When Jane Goodall published her observation of chimpanzees hunting, she was furiously vilified by the Blank Slaters. She, too, has been vindicated. Eventually, even PBS aired a program about hunting behavior in early hominids, and, miraculously, just this year even the impeccably politically correct “Scientific American” published an article confirming the same in the April edition! In a word, we have seen the vindication of these two main hypotheses of Ardrey concerning the behavior of our apelike and hominid ancestors. Furthermore, as I have demonstrated with many quotes from his work in previous posts, he was anything but a “genetic determinist,” and, while he strongly supported the view that innate predispositions, or “human nature,” if you will, have played a significant role in the genesis of human warfare, he clearly did not believe that it was unavoidable or inevitable.  In fact, that belief is one of the main reasons he wrote his books.  In spite of that, the “Killer Ape” zombie marches on, and turns up as one of the “four positions” that are supposed to “illuminate” the debate over the origins of war, while another of the “positions” is supposedly occupied by, of all things, “group selectionists!” History is nothing if not ironical.

    Lopez’ other two “positions” include “The Strategic Ape Hypothesis” and “The Inventionists.” I leave it to the imagination of my readers to judge the value of these remaining “positions” for anyone who wants to “examine the layout of this academic ‘battlefield’,” as he puts it. Other than that, I can only suggest that those interested in learning the truth, as opposed to the prevailing academic narrative, concerning the Blank Slate debacle would do better to look at the abundant historical source material themselves than to let someone else “interpret” it for them.

  • Why are Philosophers Marginalized?

    Posted on November 4th, 2014 Helian No comments

    Modern philosophers are a touchy bunch.  They resent their own irrelevance.  The question is, why have they become so marginalized?  After all, it wasn’t always so.  Consider, for example, the immediate and enduring impact of the French philosophes of the 18th century.  I can’t presume to give a complete answer in this blog post, but an article by Uri Bram that recently turned up at Café.com, entitled “This Philosopher Wants to Change How You Think About Doing Good,” might at least contain a few hints.

    It’s an account of the author’s encounter with a young philosopher named Will MacAskill who, not uncharacteristically, has a job in the Academy, in his case at Cambridge.  Bram assures us that “he’s already a superstar among his generation of philosophers.”  We learn he also has a “fondness for mild ales, a rollicking laugh, a warm Scottish accent and a manner that reminds you of the kid everyone likes in senior year of high school—not the popular kid, mind, but the kid everyone actually likes.”  If you get the sinking feeling that you’re about to read a hagiography, you won’t be mistaken.  It reminded me of what Lenin was talking about when he referred to “the silly lives of the saints.”

    According to Bram, MacAskill had already sensed the malaise in modern philosophy by the time he began his graduate studies:

     “I kept going to academics and actively trying to find people who were taking these ideas seriously and trying to make a difference, trying to put them into practice,” he says. But (for better or worse), academic philosophy as a whole is not generally focused on having a direct, practical impact.  “Someone studying the philosophy of linguistics or logic is probably doing it as a pure intellectual enterprise,” says MacAskill, “but what surprised me was the extent to which even applied philosophers weren’t having any impact on the world. I spoke to a giant in the field of practical ethics, one of the most successful applied philosophers out there, and asked him what impact he thought he’d had with his work; he replied that someone had once sent him an email saying they’d signed up for the organ donor register based on something he’d written. And that made me sad.”

    Then he had an epiphany, inspired by a conversation with fellow graduate student Toby Ord:

    One of the things that most impressed MacAskill about Ord was the extent to which the latter was walking the talk on his philosophical beliefs, manifested by his pledge to give away everything he earned above a certain modest threshold to organizations working effectively towards reducing global suffering (a story about Ord in the BBC’s news magazine became a surprise hit in 2010).

    Ord, it turns out, was a modern incarnation of Good King Wenceslas, having pledged to give away a million pounds to charity in the course of his career.  To make a long story short, MacAskill decided to make a similar pledge, and founded an organization with Ord to get other people to do the same.  He has since been going about doing similar good works, while at the same time publishing the requisite number of papers in all the right philosophical journals.

    As it happens, I ran across this article thanks to a reference at 3quarksdaily, and my thoughts about it were the same as some of the commenters there.  For example, from one who goes by the name of Lemoncookies,

    I see nothing particularly original or profound in this young man’s suggestion, which basically amounts to: give more to charity. Lots of people have made this their clarion call, and lots of people already and will give to charity.

    Another named Paul chimes in,

    I find the suggestion humorous that a 27-year-old is going “to revolutionize the way you think about doing good.” What effort the philosophers will go to in order to maintain their hegemony on moral reasoning. Unfortunately, I think they missed the boat 150 years ago by ignoring evolution and biology. They have been treading water ever since yet still manage to attract followers.

    He really hits the nail on the head with that one.  It’s ludicrous to write hagiographies about people who are doing “good” unless you understand what “good” is, and there has been no excuse for not understanding what “good” is since Darwin published “On the Origin of Species.”  Darwin himself saw the light immediately.  Morality is a manifestation of evolved “human nature.”  It exists purely because the features in the brain that are responsible for that nature happened to improve the odds that the genes responsible for those features would survive and reproduce.  “Good” exists as a subjective perception in the mind of individuals, and there is no way in which it can climb out of the skull of individuals and magically acquire a life of its own.  Philosophers, with a few notable exceptions, have rejected that truth.  That’s one of the reasons, and a big one at that, why they’re marginalized.

    It was a truth they couldn’t bear to face.  It’s not really that the truth made philosophy itself irrelevant.  The way to the truth had been pointed out long before Darwin by philosophers of the likes of Shaftesbury, Hutcheson, and Hume.  At least a part of the problem was that this truth smashed the illusion that philosophers, or anyone else for that matter, could be genuinely better, more virtuous, or more righteous in some objective sense than anyone else.  They’ve been fighting the truth ever since.  The futility of that fight is demonstrated by the threadbare nature of the ideas that have been used to justify it.

    For example, there’s “moral realism.”  It goes like this:  Everyone knows that two plus two equals four.  However, numbers do not exist in the material world, and yet mathematical truths are real.  Moral truths don’t exist in the material world either.  Therefore, moral truths are also real.  QED.  Then there’s utilitarianism, which was demolished by Westermarck with the aid of the light provided courtesy of Darwin.  Its greatest proponent, John Stuart Mill, had the misfortune to write his book about it before the significance of Darwin’s great theory had time to sink in.  If it had, I doubt he would ever have written it.  He was too smart for that.  Sam Harris’ “scientific morality” is justified mainly by bullying anyone who doesn’t go along with charges of being “immoral.”

    With the aid of such stuff, modern philosophy has wandered off into the swamp.  Commenter Paul was right.  They need to stop concocting fancy new moral systems once and for all, and do a radical rewind, if not to Darwin, then at least to Westermarck.  They’ll never regain their relevance by continuing to ignore the obvious.

  • Oswald Spengler got it Wrong

    Posted on November 1st, 2014 Helian 3 comments

    Sometimes the best metrics for public intellectuals are the short articles they write for magazines.  There are page limits, so they have to get to the point.  It isn’t as easy to camouflage vacuous ideas behind a smoke screen of verbiage.  Take, for example, the case of Oswald Spengler.  His “Decline of the West” was hailed as the inspired work of a prophet in the years following its publication in 1918.  Read Spengler’s Wiki entry and you’ll see what I mean.  He should have quit while he was ahead.

    Fast forward to 1932, and the Great Depression was at its peak.  The Decline of the West appeared to be a fait accompli.  Spengler would have been well-advised to rest on his laurels.  Instead, he wrote an article for The American Mercury, still edited at the time by the Sage of Baltimore, H. L. Mencken, with the reassuring title, “Our Backs are to the Wall!”  It was a fine synopsis of the themes Spengler had been harping on for years, and a prophecy of doom worthy of Jeremiah himself.  It was also wrong.

    According to Spengler, high technology carried within itself the seeds of its own collapse.  Man had dared to “revolt against nature.”  Now the very machines he had created in the process were revolting against man.  At the time he wrote the article he summed up the existing situation as follows:

    A group of nations of Nordic blood under the leadership of British, German, French, and Americans command the situation.  Their political power depends on their wealth, and their wealth consists in their industrial strength.  But this in turn is bound up with the existence of coal.  The Germanic peoples, in particular, are secured by what is almost a monopoly of the known coalfields…

    Spengler went on to explain that,

    Countries industrially poor are poor all around; they cannot support an army or wage a war; therefore they are politically impotent; and the workers in them, leaders and led alike, are objects in the economic policy of their opponents.

    No doubt he would have altered this passage somewhat had he been around to witness the subsequent history of places like Vietnam, Algeria, and Cambodia.  Willpower, ideology, and military genius have trumped political and economic power throughout history.  Spengler simply assumed they would be ineffective against modern technology because the “Nordic” powers had not been seriously challenged in the 50 years before he wrote his book.  It was a rash assumption.  Even more rash were his assumptions about the early demise of modern technology.  He “saw” things happening in his own times that weren’t really happening at all.  For example,

    The machine, by its multiplication and its refinement, is in the end defeating its own purpose.  In the great cities the motor-car has by its numbers destroyed its own value, and one gets on quicker on foot.  In Argentina, Java, and elsewhere the simple horse-plough of the small cultivator has shown itself economically superior to the big motor implement, and is driving the latter out.  Already, in many tropical regions, the black or brown man with his primitive ways of working is a dangerous competitor to the modern plantation-technic of the white.

    Unfortunately, motor cars and tractors can’t read, so they went right on multiplying without paying any attention to Spengler’s book.  At least he wasn’t naïve enough to believe that modern technology would end because of the exhaustion of the coalfields.  He knew that we were quite clever enough to come up with alternatives.  However, in making that very assertion, he stumbled into what was perhaps the most fundamental of all his false predictions: the imminence of the “collapse of the West.”

    It is, of course, nonsense to talk, as it was fashionable to do in the Nineteenth Century, of the imminent exhaustion of the coal-fields within a few centuries and of the consequences thereof – here, too, the materialistic age could not but think materially.  Quite apart from the actual saving of coal by the substitution of petroleum and water-power, technical thought would not fail ere long to discover and open up still other and quite different sources of power.  It is not worth while thinking ahead so far in time.  For the west-European-American technology will itself have ended by then.  No stupid trifle like the absence of material would be able to hold up this gigantic evolution.

    Alas, “so far in time” came embarrassingly fast, with the discovery of nuclear fission a mere six years later.  Be that as it may, among the reasons that this “gigantic evolution” was unstoppable was what Spengler referred to as “treason to technics.”  As he put it,

    Today more or less everywhere – in the Far East, India, South America, South Africa – industrial regions are in being, or coming into being, which, owing to their low scales of wages, will face us with a deadly competition.  The unassailable privileges of the white races have been thrown away, squandered, betrayed.

    In other words, the “treason” consisted of the white race failing to keep its secrets to itself, but bestowing them on the brown and black races.  They, however, were only interested in using this technology against the original creators of the “Faustian” civilization of the West.  Once the whites were defeated, they would have no further interest in it:

    For the colored races, on the contrary, it is but a weapon in their fight against the Faustian civilization, a weapon like a tree from the woods that one uses as scaffolding, but discards as soon as it has served its purpose.  This machine-technic will end with the Faustian civilization and one day will lie in fragments, forgotten – our railways and steamships as dead as the Roman roads and the Chinese wall, our giant cities and skyscrapers in ruins, like old Memphis and Babylon.  The history of this technic is fast drawing to its inevitable close.  It will be eaten up from within.  When, and in what fashion, we so far know not.

    Spengler was wise to include the Biblical caveat that, “…about that day or hour no one knows, not even the angels in heaven, nor the Son, but only the Father”  (Matthew 24:36).  However, he had too much the spirit of the “end time” Millennialists who have cropped up like clockwork every few decades for the last 2000 years, predicting the imminent end of the world, to leave it at that.  Like so many other would-be prophets, his predictions were distorted by a grossly exaggerated estimate of the significance of the events of his own time.  Christians, for example, have commonly assumed that reports of war, famine and pestilence in their own time are somehow qualitatively different from the war, famine and pestilence that have been a fixture of our history for the last 2000 years, and conclude that they are witnessing the signs of the end times, when, “…nation shall rise against nation, and kingdom against kingdom: and there shall be famines, and pestilences, and earthquakes, in divers places” (Matthew 24:7).  In Spengler’s case, the “sign” was the Great Depression, which was at its climax when he wrote the article:

    The center of gravity of production is steadily shifting away from them, especially since even the respect of the colored races for the white has been ended by the World War.  This is the real and final basis of the unemployment that prevails in the white countries.  It is no mere crisis, but the beginning of a catastrophe.

    Of course, Marxism was in high fashion in 1932 as well.  Spengler tosses it in for good measure, agreeing with Marx on the inevitability of revolution, but not on its outcome:

    This world-wide mutiny threatens to put an end to the possibility of technical economic work.  The leaders (bourgeoisie, ed.) may take to flight, but the led (proletariat, ed.) are lost.  Their numbers are their death.

    Spengler concludes with some advice, not for us, or our parents, or our grandparents, but for our great-grandparents’ generation:

    Only dreamers believe that there is a way out.  Optimism is cowardice… Our duty is to hold on to the lost position, without hope, without rescue, like that Roman soldier whose bones were found in front of a door in Pompeii, who, during the eruption of Vesuvius, died at his post because they forgot to relieve him.  That is greatness.  That is what it means to be a thoroughbred.  The honorable end is the one thing that can not be taken from a man.

    One must be grateful that later generations of cowardly optimists donned their rose-colored glasses in spite of Spengler, went right on using cars, tractors, and other mechanical abominations, and created a world in which yet later generations of Jeremiahs could regale us with updated predictions of the end of the world.  And who can blame them?  After all, eventually, at some “day or hour no one knows, not even the angels in heaven,” they are bound to get it right, if only because our sun finally burns out.  When that happens, those who are still around are bound to dust off their ancient history books, smile knowingly, and say, “See, Spengler was right after all!”

  • Post-Darwinian, “Evolutional” Theories of Morality in the 19th Century

    Posted on October 26th, 2014 Helian No comments

    It’s become fashionable in some quarters to claim that philosophy is useless.  I wouldn’t go that far.  Philosophers have at least been astute enough to notice some of the more self-destructive tendencies of our species, and to come up with more or less useful formulas for limiting the damage.  However, they have always had a tendency to overreach.  We are not intelligent enough to reliably discover truth far from the realm of repeatable experiments.  When we attempt to do so, we commonly wander off into intellectual swamps.  That is where one often finds philosophers.

    The above is well illustrated by the history of thought touching on the subject of morality in the decades immediately following the publication of On the Origin of Species in 1859.  It was certainly realized in short order that Darwin’s theory was relevant to the subject of morality.  Perhaps no one at the time saw it better than Darwin himself.  However, the realization that the search for the “ultimate Good” was now over once and for all, because the object sought did not exist, was slow in coming.  Indeed, for the most part, it’s still not realized to this day.  The various “systems” of morality in the decades after Darwin’s book appeared kept stumbling forward towards the non-existent goal, like dead men walking.  For the most part, their creators never grasped the significance of the term “natural selection.”  Against all odds, they obstinately persisted in the naturalistic fallacy: the irrational belief that, to the extent that morality had evolved, it had done so “for the good of the species.”

    An excellent piece of historical source material documenting these developments can be found at Google Books.  Entitled, A Review of the Systems of Ethics Founded on the Theory of Evolution, it was written by one C. M. Williams, and published in 1893.  According to one version on Google Books, “C. M.” stands for “Cora Mae,” apparently a complete invention.  The copying is botched, so that every other page of the last part of the book is unreadable.  The second version, which is at least readable, claims the author was Charles Mallory Williams and, indeed, that name is scribbled after the initials “C. M.” in the version copied.  There actually was a Charles Mallory Williams.  He was a medical doctor, born in 1872, and would have been 20 years old at the time the book was published.  The chances that anyone so young wrote the book in question are vanishingly small.  Unfortunately, I must leave it to some future historian to clear up the mystery of who “C. M.” actually was, and move on to consider what he wrote.

    According to the author, by 1893 a flood of books and papers had already appeared addressing the connection between Darwin’s theory and morality.  In his words,

    Of the Ethics founded on the theory of Evolution, I have considered only the independent theories which have been elaborated to systems. I have omitted consideration of many works which bear on Evolutional Ethics as practical or exhortative treatises or compilations of facts, but which involve no distinctly worked out theory of morals.

    The authors who made the cut include Alfred Russel Wallace, Ernst Haeckel, Herbert Spencer, John Fiske, W. H. Rolph, Alfred Barratt, Leslie Stephen, Bartholomäus von Carneri, Harald Hoffding, Georg von Gizycki, Samuel Alexander, and, last but not least, Darwin himself.  Williams cites the books of each that bear on the subject, and most of them have a Wiki page.  Wallace, of course, is occasionally mentioned as the “co-inventor” of the theory of evolution by natural selection with Darwin.  Collectors of historical trivia may be interested to know that Barratt’s work was edited by Carveth Read, who was probably the first to propose a theory of the hunting transition from ape to man.  Leslie Stephen was the father of Virginia Woolf, and Harald Hoffding was the friend and philosophy teacher of Niels Bohr.

    I don’t intend to discuss the work of each of these authors in detail.  However, certain themes are common to most, if not all, of them.  Most of these authors, not to mention Williams himself, still clung to Lamarckism and other outmoded versions of evolution.  It took the world a long time to catch up to Darwin.  For example, in the case of Haeckel,

    Even in the first edition of his Naturliche Schopfungsgeschichte Haeckel makes a distinction between conservative and progressive inheritance, and in the edition of 1889 he still maintains this division against Weismann and others, claiming the heredity of acquired habit under certain circumstances and showing conclusively that even wounds and blemishes received during the life of an individual may be in some instances inherited by descendants.

    For Williams’ own Lamarckism, see chapter 1 of Volume II, in which he seems convinced that Darwin himself believes in inheritance of acquired characteristics, and that Lamarck’s theories are supported by abundant evidence.  We are familiar with an abundance of similar types of “evidence” in our own day.

    More troublesome than these vestiges of earlier theories of evolution are the vestiges of earlier systems of morality.  Every one of the authors cited above has a deep background in the theories of morality concocted by philosophers, both ancient and modern.  In general, they have adopted some version of one of these theories as their own.  As a result, they have a tendency to fit evolution by natural selection into the Procrustean bed of their earlier theories, often as a mere extension of them.  An interesting manifestation of this tendency is the fact that, almost to a man, they believed that evolution promoted the “good of the species.”  For example, quoting Stephen:

    The quality which makes a race survive may not always be a source of advantage to every individual, or even to the average individual.  Since the animal which is better adapted for continuing its species will have an advantage in the struggle even though it may not be so well adapted for pursuing its own happiness, an instinct grows and decays not on account of its effects on the individual, but on account of its effects upon the race.

    The case of Carneri, who happened to be a German, is even more interesting.  Starting with the conclusion that “evolution by natural selection” must inevitably favor the species over the individual,

    Every man has his own ends, and in the attempt to attain his ends, does not hesitate to set himself in opposition to all the rest of mankind.  If he is sufficiently energetic and cunning, he may even succeed for a time in his endeavors to the harm of humanity.  Yet to have the whole of humanity against oneself is to endeavor to proceed in the direction of greater resistance, and the process must sooner or later result in the triumph of the stronger power. In the struggle for existence in its larger as well as its smaller manifestations, the individual seeks with all his power to satisfy the impulse to happiness which arises with conscious existence, while the species as the complex of all energies developed by its parts has an impulse to self preservation of its own.

    It follows, at least for Carneri, that Darwin’s theory is a mere confirmation of utilitarianism:

    The “I” extends itself to an “I” of mankind, so that the individual, in making self his end, comes to make the whole of mankind his end. The ideal cannot be fully realized; the happiness of all cannot be attained; so that there is always choice between two evils, never choice of perfect good, and it is necessary to be content with the greatest good of the greatest number as principle of action.

    which, in turn, leads to a version of morality worthy of Bismarck himself.  As paraphrased by Williams,

    He lays further stress upon the absence of morality, not only among the animals, in whom at least general ethical feelings in distinction from those towards individuals are not found, but also among savages, morality being not the incentive to, but the product of the state.

    Alexander gives what is perhaps the most striking example of this perceived syncretism between Darwinism and pre-existing philosophies, treating it as a mere afterthought to Hegel and Kant:

     Nothing is more striking at the present time than the convergence of different schools of Ethics. English Utilitarianism developing into Evolutional Ethics on the one hand, and the idealism associated with the German philosophy derived from Kant on the other.  The convergence is not of course in mere practical precepts, but in method also. It consists in an objectivity or impartiality of treatment commonly called scientific.  There is also a convergence in general results which consists in a recognition of a kind of proportion between individual and society, expressed by the phrase “organic connection.”  The theory of egoism pure and simple has been long dead.  Utilitarianism succeeded it and enlarged the moral end. Evolution continued the process of enlarging the individual interest, and has given precision to the relation between the individual and the moral law.  But in this it has added nothing new, for Hegel in the early part of the century, gave life to Kant’s formula by treating the law of morality as realized in the society and the state.

    Alexander continues by confirming that he shares a belief common to all the rest as well, in one form or another – in the reality of objective morality:

    The convergence of dissimilar theories affords us some prospect of obtaining a satisfactory statement of the ethical truths towards which they seem to move.

    Gizycki embraces this version of the naturalistic fallacy even more explicitly:

    Natural selection is therefore a power of judgment, in that it preserves the just and lets the evil perish.  Will this war of the good with the evil always continue?  Or will the perfect kingdom of righteousness one day prevail.  We hope this last but we cannot know certainly.

    There is much more of interest in this book by an indeterminate author.  Of particular note is the section on Alfred Russel Wallace, but I will leave that for a later post.  One might mention as an “extenuating circumstance” for these authors that none of them had the benefit of the scientific community’s belated recognition of the significance of Mendel’s discoveries.  It’s well known that Darwin himself struggled to come up with a logical mechanism to explain how it was possible for natural selection to even happen.  The notions of these moral philosophers on the subject must have been hopelessly vague by comparison.  Their ideas about “evolution for the good of the species” must be seen in that context.  The concocters of the modern “scientific” versions of morality can offer no such excuse.

  • Edvard Westermarck on Morality: The Light Before the Darkness Fell

    Posted on October 18th, 2014 Helian 3 comments

    The nature of morality became obvious to anyone who cared to think about it after Darwin published his great theory, including Darwin himself.  In short, it became clear that the “root causes” of morality were to be found in “human nature,” our species’ collection of evolved behavioral predispositions.  As the expression of evolved traits, morality has no purpose, unless one cares to use that term as shorthand for the apparent biological function it serves.  It exists because it enhanced the probability that the creatures with the genetic endowment that gave rise to it would survive and reproduce in the conditions that existed when those genes appeared.  As a result, there are no moral “truths.”  Rather, morality is a subjective phenomenon with emotional rather than logical origins.

    So much became obvious to many during the decades that followed the publication of On the Origin of Species in 1859.  One man spelled out the truth more explicitly, clearly, and convincingly than any other.  That man was Edvard Westermarck.

    Westermarck was a Finnish philosopher and sociologist who published his seminal work on morality, The Origin and Development of the Moral Ideas, in 1906.  As we now know in retrospect, the truths in that great book were too much for mankind to bear.  The voices repeating those truths became fewer, and were finally silenced.  The darkness returned, and more than a century later we are still struggling to find our way out of the fog.  It should probably come as no surprise.  It goes without saying that the truth was unpalatable to believers in imaginary super beings.  Beyond that, the truth relegated the work of most of the great moral philosophers of the past to the status of historical curiosities.  Those who interpreted their thought for the rest of us felt the ground slipping from beneath their feet.  Experts in ethics and morality became the equivalent of experts in astrology, and a step below the level of doctors of chiropractic.  Zealots of Marxism and the other emerging secular versions of religion rejected a truth that exposed the absurdity of attempts to impose new versions of morality from on high.  As for the average individuals of the species Homo sapiens, they rejected the notion that the “Good” and “Evil” objects that their emotions portrayed so realistically, and that moved them so profoundly, were mere fantasies.

    The result was more or less predictable.  Westermarck and the rest were shouted down.  The Blank Slate debacle turned the behavioral sciences into so many strongholds of an obscurantist orthodoxy.  The blind exploitation of moral emotions in the name of such newly concocted “Goods” as Nazism and Communism resulted in the deaths of tens of millions, and misery on a vast scale.  The Academy became the spawning ground of a modern, secular version of Puritanism, more intolerant and bigoted than the last.  In the case of Westermarck, the result has, at least, been more amusing.  He has been hidden in plain sight.  On his Wiki page, for example, he is described as one who “studied exogamy and incest taboo.”  To the extent that his name is mentioned at all, it is usually in connection with the Westermarck Effect, according to which individuals in close proximity in the early years of life become sexually desensitized to each other.  So much for the legacy of the man who has a good claim to be the most profound thinker on the subject of morality to appear since the days of Hume.

    Let us cut to the chase and consider what Westermarck actually said.  In the first place, he stressed a point often completely overlooked by modern researchers in the behavioral sciences; the complex emotions we now associate with morality did not suddenly appear fully formed like Athena from the forehead of Zeus.  Rather, they represent the results of a continuous process of evolution from simpler emotional responses that Westermarck grouped into the categories of “resentment” and “approval.”  These had existed in many animal species long before hominids appeared on the scene.  They were there as a result of natural selection.  As Westermarck put it:

    As to their origin, the evolutionist can hardly entertain a doubt. Resentment, like protective reflex action, out of which it has gradually developed, is a means of protection for the animal. Its intrinsic object is to remove a cause of pain, or, what is the same, a cause of danger. Two different attitudes may be taken by an animal towards another which has made it feel pain: it may either shun or attack its enemy. In the former case its action is prompted by fear, in the latter by anger, and it depends on the circumstances which of these emotions is the actual determinant. Both of them are of supreme importance for the preservation of the species, and may consequently be regarded as elements in the animal’s mental constitution which have been acquired by means of natural selection in the struggle for existence.

    From what has been said above it is obvious that moral resentment is of extreme antiquity in the human race, nay that the germ of it is found even in the lower animal world among social animals capable of feeling sympathetic resentment.  The origin of custom as a moral rule no doubt lies in a very remote period of human history.

    This is followed by another remarkable passage, which showcases another aspect of Westermarck’s genius that appears repeatedly in his books; his almost incredible erudition.  His knowledge of the intellectual and historical antecedents of his own ideas is not limited to a narrow field, but is all-encompassing, and highly useful to anyone who cares to study the relevant source material on his own:

    This view is not new. More than one hundred and fifty years before Darwin, Shaftesbury wrote of resentment in these words:  “Notwithstanding its immediate aim be indeed the ill or punishment of another, yet it is plainly of the sort of those [affections] which tend to the advantage and interest of the self-system, the animal himself; and is withal in other respects contributing to the good and interest of the species.”  A similar opinion is expressed by Butler, according to whom the reason and end for which man was made liable to anger is, that he might be better qualified to prevent and resist violence and opposition, while deliberate resentment “is to be considered as a weapon, put into our hands by nature, against injury, injustice, and cruelty.”  Adam Smith, also, believes that resentment has “been given us by nature for defence, and for defence only,” as being “the safeguard of justice and the security of innocence.”  Exactly the same view is taken by several modern evolutionists as regards the “end” of resentment, though they, of course, do not rest contented with saying that this feeling has been given us by nature, but try to explain in what way it has developed. “Among members of the same species,” says Mr. Herbert Spencer, “those individuals which have not, in any considerable degree, resented aggressions, must have ever tended to disappear, and to have left behind those which have with some effect made counter-aggressions.”

    All these references are accompanied by citations of the works in which they appear in the footnotes.  Westermarck then went on to derive conclusions from the evolutionary origins of morality that are both simple and obvious, but which modern behavioral scientists and philosophers have a daunting capacity to ignore.  He concluded that morality is subjective.  It may be reasoned about, but is the product of emotion, not reason.  It follows that there are no such things as moral “truths,” and that the powerful moral emotions we so cling to only create illusions: the chimeras of “Good” and “Evil” that hover in our consciousness as palpable, independent objects.  In Westermarck’s own words:

    As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity.  The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments.  The intensity of his emotions makes him the victim of an illusion.

    The presumed objectivity of moral judgments thus being a chimera there can be no moral truth in the sense in which this term is generally understood.  The ultimate reason for this is that the moral concepts are based upon emotions and that the contents of an emotion fall entirely outside the category of truth.

    Consider the significance of these passages, which seem almost incredible looking back through the Puritanical mist of the 21st century.  In one of the blurbs I ran across while searching the name “Westermarck,” his work was referred to as “outdated.”  I suppose that, in a sense, that conclusion is quite true, but not in the way intended.  I know of not a single modern thinker, scientist, or philosopher who has even come close to Westermarck in the simplicity and clarity with which he presents these conclusions, so obvious to anyone who has read and understood Darwin.  Here are some more passages that reinforce that conclusion:

    If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.  It has been said by Bentham and others that moral principles cannot be proved because they are first principles which are used to prove everything else.  But the real reason for their being inaccessible to demonstration is that, owing to their very nature, they can never be true.  If the word, “Ethics,” then, is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.

    To put it more bluntly, and to reveal some of my own purely subjective moral emotions in the process, the flamboyant peacocks currently strutting about among us peddling their idiosyncratic flavors of virtuous indignation and moral outrage based on a supposed monopoly on moral “truths” are, in reality, so many charlatans and buffoons.  To take them seriously is to embrace a lie, and one that, as has been clearly and repeatedly demonstrated in the past, and will almost certainly be abundantly demonstrated again in the future, is not only irritating, but extremely dangerous.  The above, by the way, appears in the context of a shattering rebuttal of utilitarianism in Chapter 1 that is as applicable to the modern versions being concocted for our edification by the likes of Sam Harris and Joshua Greene as it is to the earlier theories of John Stuart Mill and others.  In reading Westermarck’s book, one is constantly taken aback by insights that are stunning in view of the time at which they were written.  Consider, for example, the following in light of recent research on mirror neurons:

    That a certain act causes pleasure or pain to the bystander is partly due to the close association which exists between these feelings and their outward expressions.  The sight of a happy face tends to produce some degree of pleasure in him who sees it.  The sight of the bodily signs of suffering tends to produce a feeling of pain.  In either case the feeling of the spectator is the result of a process of reproduction, the perception of the physical manifestation of the feeling recalling the feeling itself on account of the established association between them.

    I fear we will have a very long wait before our species grasps the significance of Westermarck’s ideas and adjusts its perceptions of the nature and significance of morality accordingly.  As Jonathan Haidt pointed out in his The Righteous Mind, we are far too fond of the delightful joys of self-righteousness to admit the less than exalted truths about its origins without a struggle.  There are some grounds for optimism in the fact that a “Happy Few” are still around who understand that the significance of Westermarck completely transcends anything he had to say about sexual attraction and marriage.  As it happens, Frans de Waal, whose latest book is the subject of one of my recent posts, is one of them.  I personally became aware of Westermarck thanks to a reference to his book in Nietzsche’s “Human, All Too Human.”  I don’t think Nietzsche ever quite grasped what Westermarck was saying.  He had too much the soul of an artist and a poet rather than a scientist for that.  Yet, somehow, he had a sixth sense for separating the wheat from the chaff in human thought.  Incidentally, I began reading Stendhal, my favorite novelist, thanks to a reference in Nietzsche as well.  I may not exactly be on board as far as his ramblings about morality are concerned, but at least I owe him a tip of the hat for that.  As for Westermarck, I can but hope that many more will read and grasp the significance of his theories.  His book is available free online at Google books for anyone who cares to look at it.

    UPDATE:  Apparently I became too “dizzy with success” at discovering Westermarck to notice a “minor” temporal anomaly in the above post.  A commenter just pointed it out to me.  Westermarck wrote his book in 1906, and Nietzsche died in 1900!  He was actually referring to a book by Paul Ree entitled, “The Origin of the Moral Sensations,” which appeared in 1877.  Check Ree’s Wiki page, and you’ll see he’s the guy standing in front of a cart with Nietzsche in the famous picture with Lou Andreas-Salome sitting in the cart holding a whip.  Of course, it’s a spoof on Nietzsche’s famous dictum, “You go to women? Do not forget the whip!”  I was reading the German version of his “Human, all too Human.”  The quote referred to appears in Section 37, as follows:

    Welches ist doch der Hauptsatz, zu dem einer der kühnsten und kältesten Denker, der Verfasser des Buches “Über den Ursprung der moralischen Empfindungen” vermöge seiner ein-und durchschneidenden Analysen des menschlichen Handelns gelangt?

    In my English version of the book above the quote is translated as,

    Which principle did one of the keenest and coolest thinkers, the author of the book On the Origin of the Moral Feelings, arrive at through his incisive and piercing analysis of human actions?

    I translated the title on the fly as “On the Origin of the Moral Emotions,” and when you search that title on Bing, the first link that comes up points to Westermarck’s book.  In a word, my discovery of Westermarck was due to serendipity or bungling, take your pick.  The shade of Nietzsche must be chuckling somewhere.  Now I feel obligated to have a look at Ree’s book as well.  I’ll let you know what I think of him in a later post, and I promise not to claim I discovered him thanks to a reference in Aristotle’s “Ethics.”
