The world as I see it
  • Extreme Altruism – The Case of the Pathological Do-Gooder

    Posted on September 26th, 2015 Helian 14 comments

    The Guardian just published an article by Larissa MacFarquhar entitled, “Extreme altruism: should you care for strangers at the expense of your family?”  The byline reads as follows:

    The world is full of needless suffering. How should each of us respond? Should we live as moral a life as possible, even giving away most of our earnings? A new movement argues that we are not doing enough to help those in need.

    It’s a tribute to the power of the emotions responsible for what we call morality that, more than a century after Westermarck published The Origin and Development of the Moral Ideas, questions like the one in the title are still considered rational, and that a “moral life” is equated with “giving away most of our earnings.”  Westermarck put it this way:

    As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivise the moral estimate to which it gives rise, in other words, to assign to it universal validity.  The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments.  The intensity of his emotions makes him the victim of an illusion.

    and

    The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood.  The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.

    The article tells the tale of one Julia Wise, whom MacFarquhar refers to as a “do-gooder.”  She doesn’t use the term in the usual pejorative sense, but defines a “do-gooder” as,

    …a human character who arouses conflicting emotions. By “do-gooder” here I do not mean a part-time, normal do-gooder – someone who has a worthy job, or volunteers at a charity, and returns to an ordinary family life in the evenings. I mean a person who sets out to live as ethical a life as possible. I mean a person who is drawn to moral goodness for its own sake. I mean someone who commits himself wholly, beyond what seems reasonable. I mean the kind of do-gooder who makes people uneasy.

    Julia is just such a person.  MacFarquhar describes her as follows:

    Julia believed that because each person was equally valuable, she was not entitled to care more for herself than for anyone else; she believed that she was therefore obliged to spend much of her life working for the benefit of others. That was the core of it; as she grew older, she worked out the implications of this principle in greater detail. In college, she thought she might want to work in development abroad somewhere, but then she realised that probably the most useful thing she could do was not to become a white aid worker telling people in other countries what to do, but, instead, to earn a salary in the US and give it to NGOs that could use it to pay for several local workers who knew what their countries needed better than she did. She reduced her expenses to the absolute minimum so she could give away 50% of what she earned. She felt that nearly every penny she spent on herself should have gone to someone else who needed it more. She gave to whichever charity seemed to her (after researching the matter) to relieve the most suffering for the least money.

    Interestingly, Julia became an atheist at the age of eleven.  In other words, she must have been quite intelligent by human standards.  In spite of that, it apparently never occurred to her to question the objectivity of moral judgments.  I’ve always found it surprising that so many religious believers who become atheists don’t reason a bit further and grasp the fact that they no longer have a legitimate basis for making moral judgments.  They commonly consider themselves smarter than religious believers, and yet they cling to the illusion that the basis is still there, as solid as ever.  Religious believers can usually detect the charade immediately, and notice with a chuckle that the atheist has just sawed off the branch they thought they were sitting on.  Alas, the faithful are no less delusional than the infidels.  Again quoting Westermarck,

    To the verdict of a perfect intellect, that is, an intellect which knows everything existing, all would submit; but we can form no idea of a moral consciousness which could lay claim to a similar authority.  If the believers in an all-good God, who has revealed his will to mankind, maintain that they in this revelation possess a perfect moral standard, and that, consequently, what is in accordance with such a standard must be objectively right, it may be asked what they mean by an “all-good” God.  And in their attempt to answer this question, they would inevitably have to assume the objectivity they wanted to prove.

    In any event, Julia’s case is a perfect example of why it is useful to understand what morality actually is, and why it exists.  The truth was obvious enough to Darwin, and of course, to Westermarck and several other great thinkers who followed him.  Morality is the manifestation of evolved behavioral traits.  It exists because it enhanced the probability that the genetic material that gave rise to it would survive and replicate itself.  Julia, however, lives in a world radically different from the world in which the evolution of morality took place.  She is an extreme example of what can happen when environmental changes outpace the ability of natural selection to keep up.  She suffers from an assortment of morality inversions.  It’s as if she had decided to use her hands to cut her throat, or her legs to jump off a cliff.  In short, she is a pathological do-gooder.

    Several examples are mentioned in the article.  In general, she believes that it is “good” to hand over money and other valuable resources that might have enhanced her own chances of genetic survival to genetically unrelated individuals, even though the chances that they will ever return the favor to her or her children are vanishingly small.  She very nearly decides it would be “immoral” to have children because, according to the article,

    Children would be the most expensive nonessential thing she could possibly possess, so by having children of her own she would be in effect killing other people’s children.

    However, she manages to dodge this bullet by reasoning that she and her husband will be able to indoctrinate their child with their own pathological “values.”  The decision to have a child becomes “good” as long as the parents are confident that they can control its environment sufficiently well to ensure that it will grow up as emotionally crippled as they are.  Of course, such therapeutic generational brainwashing is unlikely to be a “good” long-term strategy for survival.  MacFarquhar concludes her article with the question,

    What would the world be like if everyone thought like a do-gooder? What if everyone believed that his family was no more important or valuable than anyone else’s? What if everyone decided that spontaneity or self-expression or certain kinds of beauty or certain kinds of freedom were less vital, or less urgent, than relieving other people’s pain?

    Assuming the environment remains more or less the same, the answer is simple enough.  The Julias of the world would die out.  In the end, that’s really the only answer that matters.  Is Julia therefore “wrong,” or even “immoral” for clinging to her pathologically altruistic lifestyle?  Of course not, because the question implies the objective existence of things – Good and Evil – that are actually imaginary.  One cannot logically claim that either using your hands to cut your throat, or using your legs to jump off a cliff, is objectively immoral.  One must be content with the observation that such actions seem a bit counter-intuitive.

  • The Alternate Reality Fallacy

    Posted on September 18th, 2015 Helian 1 comment

    The alternate reality fallacy is ubiquitous.  Typically, it involves the existence of a deity, and goes something like this:  “God must exist because otherwise there would be no absolute good, no absolute evil, no unquestionable rights, life would have no purpose, life would have no meaning,” and so on and so forth.  In other words, one must only demonstrate that a God is necessary.  If so, he will automatically pop into existence.  The video of a talk by Christian apologist Ravi Zacharias included below is provided as an illustrative data point for the reader.

    The talk, entitled, “The End of Reason:  A Response to the New Atheists,” was Zacharias’ contribution to the 2012 Contending with Christianity’s Critics Conference in Dallas.  I ran across it at Jerry Coyne’s Why Evolution is True website in the context of a discussion of rights.  We find out where Zacharias is coming from at minute 4:15 in the talk when he informs us that the ideas,

    …that steadied this part of the world, rooted in the notion of the ineradicable difference between good and evil, facts on which we built our legal system, our notions of justice, the very value of human life, how intrinsic worth was given to every human being,

    all have a Biblical mooring.  Elaborating on this theme, he quotes Chesterton to the effect that “we are standing with our feet firmly planted in mid-air.”  We have,

    …no grounding anymore to define so many essential values which we assumed for many years.

    Here Zacharias is actually stating a simple truth that has eluded many atheists.  Christianity and other religions do, indeed, provide some grounding for such things as objective rights, objective good, and objective evil.  After all, it’s not hard to accept the reality of these things if the alternative is to burn in hell forever.  The problem is that the “grounding” is an illusion.  The legions of atheists who believe in these things, however, actually are “standing with their feet firmly planted in mid-air.”  They have dispensed even with the illusion, sawing off the limb they were sitting on, and yet they counterintuitively persist in lecturing others about the nature of these chimeras as they float about in the vacuum, to the point of becoming quite furious if anyone dares to disagree with them.  Zacharias’ problem, on the other hand, isn’t that he doesn’t bother to provide a grounding.  His problem is his apparent belief in the non sequitur that, if he can supply a grounding, then that grounding must necessarily be real.

    Touching on the disconcerting tendency of many atheists to hurl down anathemas on those they consider morally impure, despite the fact that they lack any coherent justification for the novel values they concoct on the fly, Zacharias remarks at 5:45 in the video,

    The sacred meaning of marriage (and others) have been desacralized, and the only one who’s considered obnoxious is the one who wants to posit the sacredness of these issues.

    Here, again, I must agree with him.  Assuming he’s alluding to the issue of gay marriage, it makes no sense to simply dismiss anyone who objects to it as a bigot and a “hater.”  That claim is based on the obviously false assumption that no one actually takes their religious beliefs seriously.  Unfortunately, they do, and there is ample justification in the Bible, not to mention the Quran, for the conclusion that gay marriage is immoral.  Marriage has a legal definition, but it is also a religious sacrament.  There is no rational basis for the claim that anyone who objects to gay marriage is objectively immoral.  Support for gay marriage represents, not a championing of objective good, but the statement of a cultural preference.  The problem with the faithful isn’t that they are all haters and bigots.  The problem is that they construct their categories of moral good and evil based on an illusion.

    Beginning at about 6:45 in his talk, Zacharias continues with the claim that we are passing through a cultural revolution, which he defines as a,

    decisive break with the shared meanings of the past, particularly those which relate  to the deepest questions of the nature and purpose of life.

    noting that culture is,

    an effort to provide a coherent set of answers to the existential questions that confront all human beings in the passage of their lives.

    In his opinion, it can be defined in three different ways. First, there are theonomous cultures.  As he puts it,

    These are based on the belief that God has put his law into our hearts, so that we act intuitively from that kind of reasoning.  Divine imperatives are implanted in the heart of every human being.

    Christianity is, according to Zacharias, a theonomous belief.  Next, there are heteronomous cultures, which derive their laws from some external source.  In such cultures, we are “dictated to from the outside.”  He cites Marxism as a heteronomous world view.  More to the point, he claims that Islam also belongs in that category.  Apparently we are to believe that this “cultural” difference supplies us with a sharp distinction between the two religions.  Here we discover that Zacharias’ zeal for his new faith (he was raised a Hindu) has outstripped his theological expertise.  Fully theonomous versions of Christianity really only came into their own among Christian divines of the 18th century.  The notion, supported by the likes of Francis Hutcheson and the Earl of Shaftesbury, that “God has put his law into our hearts,” was furiously denounced by other theologians as not only wrong, but incompatible with Christianity.  John Locke was one of the more prominent Christian thinkers among the many who denied that “divine imperatives are implanted in the heart of every human being.”

    But I digress.  According to Zacharias, the final element of the triad is autonomous culture, or “self law”, in which everyone is a law unto him or herself.  He notes that America is commonly supposed to be such a culture.  However, at about the 11:00 mark he observes that,

    …if I assert sacred values, suddenly a heteronomous culture takes over, and tells me I have no right to believe that.  This amounts to a “bait and switch.”  That’s the new world view under which the word “tolerance” really operates.

    This regrettable state of affairs is the result of yet another triad, in the form of the three philosophical evils which Zacharias identifies as secularization, pluralism, and privatization.  They are the defining characteristics of the modern cultural revolution.  The first supposedly results in an ideology without shame, the second in one without reason, and the third in one without meaning.  Together, they result in an existence without purpose.

    One might, of course, quibble with some of the underlying assumptions of Zacharias’ world view.  One might argue, for example, that the results of Christian belief have not been entirely benign, or that the secular societies of Europe have not collapsed into a state of moral anarchy.  That, however, is really beside the point.  Let us assume, for the sake of argument, that everything Zacharias says about the baleful effects of the absence of Christian belief is true.  It still leaves the question, “So what?”

    Baleful effects do not spawn alternate realities.  If the doctrines of Christianity are false, then the illusion that they supply meaning, or purpose, or a grounding for morality will not transmute them into the truth.  I personally consider the probability that they are true to be vanishingly small.  I do not propose to believe in lies, whether their influence is portrayed as benign or not.  The illusion of meaning and purpose based on a belief in nonsense is a paltry substitute for the real thing.  Delusional beliefs will not magically become true, even if those beliefs result in an earthly paradise.  As noted above, the idea that they will is what I refer to in my title as the alternate reality fallacy.

    In the final part of his talk, Zacharias describes his own conversion to Christianity, noting that it supplied what was missing in his life.  In his words, “Without God, reason is dead, hope is dead, morality is dead, and meaning is gone, but in Christ we recover all these.”  To this I can but reply that the man suffers from a serious lack of imagination.  We are wildly improbable creatures sitting at the end of an unbroken chain of life that has existed for upwards of three billion years.  We live in a spectacular universe that cannot but fill one with wonder.  Under the circumstances, is it really impossible to relish life, and to discover a reason for cherishing and preserving it, without resort to imaginary super beings?  Instead of embracing the awe-inspiring reality of the world as it is, does it really make sense to supply the illusion of “meaning” and “purpose” by embracing the shabby unreality of religious dogmas?  My personal and admittedly emotional reaction to such a choice is that it is sadly paltry and abject.  The fact that so many of my fellow humans have made that choice strikes me, not as cause for rejoicing, but for shame.

  • Notes on “A Clergyman’s Daughter” – George Orwell’s Search for the Meaning of Life

    Posted on September 2nd, 2015 Helian No comments

    A synopsis of George Orwell’s A Clergyman’s Daughter may be found in the Wiki entry on the same.  In short, it relates the experiences of Dorothy Hare, only daughter of the Reverend Charles Hare, a “gentleman” clergyman with a chronic habit of living beyond his means.  Dorothy’s life is consumed by a frantic struggle to maintain respectability in spite of a mountain of debt owed to the local tradesmen, a dwindling congregation, and a church gradually decaying to ruin for lack of maintenance.  There’s also a problem so repressed in Dorothy’s mind that she’s hardly conscious of it; she is losing her Christian faith.

    Eventually the pressure becomes unbearable.  At the end of Chapter 1 we leave Dorothy exhausted, working herself beyond endurance late at night to prepare costumes for a children’s play.  At the start of Chapter 2 we find her teleported to the Old Kent Road, south of London, where she wakes up with a bad case of amnesia and only half a crown in her pocket.  A good German might describe this rather remarkable turn of events as an den Haaren herbeigezogen (dragged in by the hair).  In other words, it’s far-fetched, but we can forgive it because Orwell refrains from boring us with explanatory psychobabble, it’s in one of his earliest books, and he needs some such device in order to dish up a fictional version of the autobiographical events described in his Down and Out in Paris and London, published a couple of years earlier.

    Eventually Dorothy is rescued from starvation and squalor by a much older cousin, who sets her up as a school teacher at Ringwood House, which Orwell describes as a fourth-rate private school with only 21 female inmates.  At this point the astute reader will discover something that might come as a revelation to those who are only familiar with Animal Farm and 1984.  Orwell was a convinced socialist when he wrote the book, and remained one until the end of his life.  Mrs. Creevy, the woman who runs the school, is a grasping capitalist, interested only in squeezing as much profit out of the enterprise as possible.  The girls’ “education” consists mainly of a mind-numbing routine of rote memorization and handwriting drills.  Dorothy’s attempts at education reform are nipped in the bud, and she is eventually sacked.  In Mrs. Creevy’s words,

    It’s the fees I’m after, not developing the children’s minds.  It’s not to be supposed as anyone’s to go to all the trouble of keeping a school and having the house turned upside down by a pack of brats, if it wasn’t that there’s a bit of money to be made out of it.  The fee comes first, and everything else comes afterwards.

    Orwell later elaborates,

    There are, by the way, vast numbers of private schools in England.  Second-rate, third-rate, and fourth-rate (Ringwood House was a specimen of the fourth-rate school), they exist by the dozen and the score in every London suburb and every provincial town.  At any given moment there are somewhere in the neighborhood of ten thousand of them, of which less than a thousand are subject to Government inspection.  And though some of them are better than others, and a certain number, probably, are better than the council schools with which they compete, there is the same fundamental evil in all of them; that is, that they have ultimately no purpose except to make money.

    So long as schools are run primarily for money, things like this will happen.  The expensive private schools to which the rich send their children are not, on the surface, so bad as the others, because they can afford a proper staff, and the Public School examination system keeps them up to the mark; but they have the same essential taint.

    Recall that the book was published in 1935.  The Spanish Civil War, in which Orwell fought with a socialist unit not affiliated with the Communists, began in 1936.  In that conflict he had his nose rubbed in the reality of totalitarianism, socialism that had dropped the democratic mask.  The experience is described in his Homage to Catalonia, which is essential reading for anyone interested in learning what inspired his later work.  There he tells how the Communist legions attacked and destroyed his own division, despite the fact that it was fighting on the same side.  Totalitarianism has never recognized more than two sides; the side that it controls, and the side that it doesn’t.  He saw that its real reason for existence was nothing like a workers’ paradise, or any other version of “human flourishing,” but absolute, unconditional power.  The nature of the system and the power it aimed at was what he described in 1984.  When A Clergyman’s Daughter was published, that revelation still lay in the future.  It may be that in 1935 Orwell still thought of the socialists as one big, happy, if occasionally quarrelsome, family.

    Be that as it may, the real interest of the book, at least as far as I’m concerned, lies at the end.  There, more explicitly than in any other of his novels or essays, Orwell takes up the question of the Meaning of Life.  While down and out, Dorothy had lost her faith once and for all.  In spite of that, after Mrs. Creevy sacks her, she finds her way back to the family parsonage, and takes up again where she left off.  She suffers from no illusions.  As Orwell puts it,

    It was not that she was in any doubt about the external facts of her future.  She could see it all quite clearly before her… Whatever happened, at the very best, she had got to face the destiny that is common to all lonely and penniless women.  “The Old Maids of Old England,” as somebody called them.  She was twenty-eight – just old enough to enter their ranks.

    She was not the same woman as before.  She had lost her faith, and yet, she meditated,

    Faith vanishes, but the need for faith remains the same as before.  And given only faith, how can anything else matter?  How can anything dismay you if only there is some purpose in the world which you can serve, and which, while serving it, you can understand?  Your whole life is illumined by that sense of purpose.

    Life, if the grave really ends it, is monstrous and dreadful.  No use trying to argue it away.  Think of life as it really is, think of the details of life; and then think that there is no meaning in it, no purpose, no goal except the grave.  Surely only fools or self-deceivers, or those whose lives are exceptionally fortunate, can face that thought without flinching?

    Her mind struggled with the problem, while perceiving that there was no solution.  There was, she saw clearly, no possible substitute for faith; no pagan acceptance of life as sufficient unto itself, no pantheistic cheer-up stuff, no pseudo-religion of “progress” with visions of glittering Utopias and ant-heaps of steel and concrete.  It is all or nothing.  Either life on earth is a preparation for something greater and more lasting, or it is meaningless, dark and dreadful.

    Here we see that, even in 1935, Orwell wasn’t quite convinced that the Soviet version of a Brave New World really represented “progress.”  And while democratic socialism may have later given him something of a sense of purpose, it wasn’t yet filling the void.  Dorothy considers,

    Where had she got to?  She had been saying that if death ends all, then there is no hope and no meaning in anything.  Well, what then?

    At this point, the true believers chime in.  They know the answer.  Bring back faith, and, voila, the void is filled!  So many of them honestly seem to believe that, because they feel a need, the thing needed will automatically pop into existence.  They need absolute moral standards.  Therefore their faith must be true.  They need a purpose in life.  Therefore their faith must be true.  They need human existence to have meaning.  Therefore their faith must be true.  They must have unquestionable rights.  Therefore their faith must be true.  And so on, and so on.  Orwell is having none of it.  Dorothy muses on,

    And how cowardly, after all, to regret a superstition that you had got rid of – to want to believe something that you knew in your bones to be untrue.

    Orwell provides us with no magic solution to this thorny problem.  Indeed, in the end his answer is singularly unsatisfying.  He suggests that we just get on with it and leave it at that.  As Dorothy glues together strips of paper, forming the boots, armor, and other accoutrements required for the next church play, she has stumbled into the solution without realizing it:

    The smell of glue was the answer to her prayer.  She did not know this.  She did not reflect, consciously, that the solution to her difficulty lay in accepting the fact that there was no solution; that if one gets on with the job that lies to hand, the ultimate purpose of the job fades into insignificance; that faith and no faith are very much the same provided that one is doing what is customary, useful and acceptable.  She could not formulate these thoughts as yet, she could only live them.  Much later, perhaps, she would formulate them and draw comfort from them.

    and, finally,

    Dorothy sliced two more sheets of brown paper into strips, and took up the breastplate to give it its final coating.  The problem of faith and no faith had vanished utterly from her mind.  It was beginning to get dark, but, too busy to stop and light the lamp, she worked on, pasting strip after strip of paper into place, with absorbed, with pious concentration, in the penetrating smell of the gluepot.

    Orwell didn’t want A Clergyman’s Daughter to be republished, unless, perhaps, in a cheap version to scare up a few pounds for his heirs.  No doubt he considered it too immature.  We can be grateful that his literary executors thought otherwise, else we might never have known of his struggles with the Meaning of Life problem so early in his career.  He didn’t spill much ink over the problem later on, but we must assume that he had found some more inspiring purpose to strive for than just “getting on with it.”  Weak and in pain, he fought to complete 1984 on his death bed with incredible tenacity and dedication.  It was a gift to all of us, one that didn’t follow him to the grave but lived on long after he was gone as the single most effective literary weapon against a threat that had materialized as Communism in his own day, but will likely always lurk among us in one form or another.

    And what of the Meaning of Life?  That’s a question we must all provide an answer for on our own.  None of the imaginary super-beings we have dreamed up over the years is likely to materialize to trivialize the search.  And just as Orwell wrote, whether we care to deal with the problem or not, there is no objective solution.  It must be subjective and individual.  It need not be any less compelling for all that.


  • On the Red Meat Morality Inversion

    Posted on August 18th, 2015 Helian 4 comments

    Dwight Furrow recently posted an article at 3 Quarks Daily entitled “In Defense of Eating Meat.”  His first paragraph reads,

    There are many sound arguments for drastically cutting back on our consumption of meat—excessive meat consumption wastes resources, contributes to climate change, and has negative consequences for health. But there is no sound argument based on the rights of animals for avoiding meat entirely.

    As far as the first sentence is concerned, I have no problem with rationally discussing the pros and cons of meat consumption as long as the emotional whims behind the reasons are laid on the table.  I certainly agree with the second sentence, for the same reason cited by Westermarck more than a century ago; there is no such thing as objective morality, and it is therefore not subject to truth claims.  Furrow “kind of” sees it that way, but not quite.  Indeed, the core of his argument is very revealing.  It exposes all the ambivalence of the modern moral philosopher who understands the evolutionary origins of morality, but can’t bear to accept the consequences of that truth.  It reads as follows:

    Singer’s argument is based on the idea that animals have moral status because they suffer. As a utilitarian he may not be comfortable using “rights” talk but it surely fits here. He thinks animals have a right to equal consideration. But animals cannot have moral rights, simply because the treatment of animals falls outside the scope of our core understanding of morality. Morality is not a set of principles written in the stars. Morality arises, because as human beings, we need to cooperate with each other in order to thrive, and such cooperation requires trust.  The institution of morality is a set of considerations that helps to secure the requisite level of trust to enable that cooperation. That is why morality is a stable evolutionary development. It enhances the kind of flourishing characteristic of human beings. Rights, then, are entitlements that determine what a right-holder may demand of others that we decide to honor in order to maintain the requisite level of social trust.

    We are not similarly dependent on the trustworthiness of animals. (Pets are a special case which is why we don’t eat them). Our flourishing does not depend on getting cows, tigers, or shrimp to trust us or we them, and thus we have no reciprocal moral relations with them. From the standpoint of human flourishing there simply is no reason to confer moral rights on animals.

    Lovers of boneless ribeye steaks may well wish to simply accept this as it stands.  Any port in a storm, right?  Unfortunately, I’m a bit more fastidious than that.  Before plunging ahead, however, a bit of background on the debate might be useful.  Perhaps the best-known crusader against the consumption of red meat is Peter Singer.  His Animal Liberation: A New Ethics for Our Treatment of Animals, published in 1975, has been, as Wiki puts it, “a formative influence on leaders of the modern animal liberation movement.”  His arguments are based on his conclusion that the particular flavor of utilitarianism he favored at the time constituted an objective guide for establishing the legitimacy of truth claims about the rights of animals.  As Furrow points out, the basic claim of the Utilitarians is that “only overall consequences matter in assessing the moral quality of an action.”  The most coherent statement of this philosophy was probably John Stuart Mill’s Utilitarianism, published in 1863, too early for the moral consequences of Darwin’s Origin, published in 1859, to have sunk in.  I seriously doubt that Mill himself would have been a Utilitarian if he had lived a century later.  He was too smart for that.  Mill explicitly denied any belief in objective morality, noting that mankind had been struggling to find such an objective standard since the time of Socrates.  In his words,

    To inquire how far the bad effects of this deficiency have been mitigated in practice, or to what extent the moral beliefs of mankind have been vitiated or made uncertain by the absence of any distinct recognition of an ultimate standard, would imply a complete survey and criticism of past and present ethical doctrine. It would, however, be easy to show that whatever steadiness or consistency these moral beliefs have attained, has been mainly due to the tacit influence of a standard not recognized.

    I think Mill would have grasped where the “standard not recognized” really came from if there had been time for the consequences of Darwin’s great theory to really dawn on him.  Not so Singer, who apparently either never read or never appreciated Mill’s own reservations about his moral philosophy when he wrote his book, and treated utilitarianism as some kind of a moral gold standard.

    Which brings us back to Furrow’s counter-arguments.  Note in the above quote that he recognizes that morality is both subjective and an “evolutionary development.”  From that point, however, he wanders off into an intellectual swamp.  If morality is an evolutionary development, then it is quite out of the question that it arose, “…because as human beings, we need to cooperate with each other in order to thrive, and such cooperation requires trust.”  Evolution is not driven by needs, nor does it serve any purpose.  Robert Ardrey put it very succinctly in his bon mot, “Birds do not fly because they have wings.  They have wings because they fly.”  According to Furrow, “The institution of morality is a set of considerations that helps to secure the requisite level of trust to enable that cooperation.”  No, evolution didn’t somehow create an “institution of morality” consisting of “a set of considerations.”  Rather, it resulted in a set of behavioral responses in the form of emotions and feelings.  In other words, it produced the “moral sense” whose existence was demonstrated by Francis Hutcheson a century and a half before Darwin.  These emotions and feelings have their analogs in other animals.  We can only “consider” what they might mean after we have experienced them.  Had we not experienced them to begin with there would be nothing to consider, and therefore no morality.  Morality is a fundamentally emotional behavioral phenomenon, and not some cognitively distilled laundry list of legalistic prescriptions for developing trust so we can cooperate with each other.

    Furrow goes on to claim that animals cannot have rights because our “flourishing” does not depend on trusting them.  However, that can only be true if it is also true that the “purpose” of morality and therefore the “goal” of evolution was to promote “human flourishing,” which is nonsense.  “Rights” are subjective emotional constructs that we commonly delude ourselves into perceiving as real things.  It follows that any metric of their objective legitimacy when applied to animals is entirely equivalent to their objective legitimacy when applied to humans; zero.

    My own opinion on the eating of red meat is not based on any claim that I understand the “purpose” of moral emotions better than Singer.  Rather, it is based on the observation that morality exists because it has made our genetic survival more probable.  It therefore seems to me that interpreting our moral emotions in a way that makes our survival less likely is a characteristic of a dysfunctional biological unit.  In other words, it is what I call a morality inversion.  Establishing artificial moral taboos against the eating of red meat or any other food that might increase our chances of survival in the event that there’s not enough food to go around strikes me as just such a morality inversion.  It is based on the wildly improbable assumption that there will always be enough food to go around, in spite of the continuing increase of the human population, and in spite of the fact that such a state of affairs has often been more the exception than the rule throughout human history.  In other words, it amounts to turning morality against itself.

    There is nothing objectively wrong about morality inversions.  It’s just that an aversion to them happens to be one of my personal whims.  I like the idea of my own continued genetic survival and the continued survival of the human race because it seems to me to be in harmony with the reasons we happen to exist to begin with.  As a result, I have a negative emotional response to moral systems that accomplish the opposite.  In other words, according to my cognitive interpretation of my own subjective moral emotions, eating red meat is “good,” and morally induced vegetarianism is “evil.”  As I said, it’s just a whim, but I see no reason why my whims should take a back seat to anyone else’s, and that’s all Singer’s infinitesimally elaborated version of utilitarianism really amounts to.  Indeed, I’m encouraged by the hope that there are others who also place a certain value on survival, and therefore share my whims.  I note in passing that they by no means coincide with the notion of “human flourishing” that currently prevails in the academy.


  • “Ethics” in the 21st Century

    Posted on August 2nd, 2015 Helian 5 comments

    According to the banner on its cover, Ethics is currently “celebrating 125 years.”  It describes itself as “an international journal of social, political, and legal philosophy.”  Its contributors consist mainly of a gaggle of earnest academics, all chasing about with metaphysical butterfly nets seeking to capture that most elusive quarry, the “Good.”  None of them seems to have ever heard of a man named Westermarck, who demonstrated shortly after the journal first appeared that their prey was as imaginary as unicorns, or even of Darwin, who was well aware of the fact, but was not indelicate enough to spell it out so blatantly.

    The latest issue includes an entry on the “Transmission Principle,” defined in its abstract as follows:

    If you ought to perform a certain act, and some other action is a necessary means for you to perform that act, then you ought to perform that other action as well.

    As usual, the author never explains how you get to the original “ought” to begin with.  In another article entitled “What If I Cannot Make a Difference (and Know It),” the author begins with a cultural artifact that will surely be of interest to future historians:

    We often collectively bring about bad outcomes.  For example, by continuing to buy cheap supermarket meat, many people together sustain factory farming, and the greenhouse gas emissions of millions of individuals together bring about anthropogenic climate change.

    and goes on to note that,

    Intuitively, these bad outcomes are not just a matter of bad luck, but the result of some sort of moral shortcoming.  Yet in many of these situations, none of the individual agents could have made any difference for the better.

    He then demonstrates that, because a equals b, and b equals c, we are still entirely justified in peering down our morally righteous noses at purchasers of cheap meat and emitters of greenhouse gases.  His conclusion in academic-speak:

    I have shown how Act Consequentialists can find fault with some agent in all cases where multiple agents who have modally robust knowledge of all the relevant facts gratuitously bring about collectively suboptimal outcomes, even if the agents individually cannot make any difference for the better due to the uncooperativeness of others.

    The author does not explain the process by which emotions that evolved in a world without cheap supermarket meat have lately acquired the power to prescribe whether buying it is righteous or not.

    It has been suggested by some that trading, the exchange of goods and services, is a defining feature of our species.  In an article entitled “Markets without Symbolic Limits,” the authors conclude that,

    In many cases, we are morally obligated to revise our semiotics in order to allow for greater commodification.  We ought to revise our interpretive schemas whenever the costs of holding that schema are significant, without counterweight benefits.  It is itself morally objectionable to maintain a meaning system that imbues a practice with negative meanings when that practice would save or improve lives, reduce or alleviate suffering, and so on.

    No doubt that very thought occurred to our hunter-gatherer ancestors, enhancing their overall fitness.  The happy result was the preservation of the emotional baggage that gave rise to it to later inform the pages of Ethics magazine.

    In short, “moral progress,” as reflected in the pages of Ethics, depends on studiously ignoring Darwin, averting our eyes from the profane scribblings of Westermarck, pretending that the recent flood of books and articles on the evolutionary origins of morality and the existence of analogs of human morality in many animals are irrelevant, and gratuitously assuming that there really is some “thing” out there for the butterfly nets to catch.  In other words, our “moral progress” has been a progress away from self-understanding.  It saddens me, because I’ve always considered self-understanding a “good.”  Just another one of my whims.

  • Scientific Morality and the Illusion of Progress

    Posted on July 11th, 2015 Helian 4 comments

    British philosophers demonstrated the existence of a “moral sense” early in the 18th century.  We have now crawled through the rubble left in the wake of the Blank Slate debacle and finally arrived once again at a point they had reached more than two centuries ago.  Of course, men like Shaftesbury and Hutcheson thought this “moral sense” had been planted in our consciousness by God.  When Hume arrived on the scene a bit later it became possible to discuss the subject in secular terms.  Along came Darwin to suggest that this “moral sense” might have developed in the same way as the physical characteristics of our species: via evolution by natural selection.  Finally, a bit less than half a century later, Westermarck put two and two together, pointing out that morality was a subjective emotional phenomenon and, as such, not subject to truth claims.  His great work, The Origin and Development of the Moral Ideas, appeared in 1906.  Then the darkness fell.

    Now, more than a century later, we can once again at least discuss evolved morality without fear of excommunication by the guardians of ideological purity.  However, the guardians are still there, defending a form of secular Puritanism that yields nothing in intolerant piety to the religious Puritans of old.  We must not push the envelope too far, lest we suffer the same fate as Tim Hunt, with his impious “jokes,” or Matt Taylor, with his impious shirt.  We cannot just blurt out, like Westermarck, that good and evil are merely subjective artifacts of human moral emotions, so powerful that they appear as objective things.  We must at least pretend that these “objects” still exist.  In a word, we are in a holding pattern.

    One can actually pin down fairly accurately the extent to which we have recovered since our emergence from the dark age.  We are, give or take, about 15 years pre-Westermarck.  As evidence of this I invite the reader’s attention to a fascinating “textbook” for teachers of secular morality that appeared in 1891.  Entitled Elements of Ethical Science: A Manual for Teaching Secular Morality, by John Ogden, it taught the subject with all the most up-to-date Darwinian bells and whistles.  In an introduction worthy of Sam Harris the author asks the rhetorical question,

    Can pure morality be taught without inculcating religious doctrines, as these are usually interpreted and understood?

    and answers with a firm “Yes!”  He then proceeds to identify the basis for any “pure morality:”

    Man has inherently a moral nature, an innate moral sense or capacity.  This is necessary to moral culture, since, without the nature or capacity, its cultivation were impossible… This moral nature or capacity is what we call Moral Sense.  It is the basis of conscience.  It exists in man inherently, and, when enlightened, cultivated, and improved, it becomes the active conscience itself.  Conscience, therefore, is moral sense plus intelligence.

    The author recognizes the essential role of this Moral Sense as the universal basis of all the many manifestations of human morality, and one without which they could not exist.  It is to the moral sentiments what the sense of touch is to the other senses:

    (The Moral Sense) furnishes the basis or the elements of the moral sentiments and conscience, much in the same manner in which the cognitive faculties furnish the data or elements for thought and reasoning.  It is not a sixth sense, but it is to the moral sentiments what touch is to the other senses, a base on which they are all built or founded; a soil into which they are planted, and from which they grow… All the moral sentiments are, therefore, but the concrete modifications of the moral sense, or the applications of it, in a developed form, to the ordinary duties of life, as a sense of justice, of right and wrong, of obligation, duty, gratitude, love, etc., just as seeing, hearing, tasting and smelling are but modified forms of feeling or touch, the basis of all sense.

    And here, in a manner entirely similar to so many modern proponents of innate morality, Ogden goes off the tracks.  Like them, he cannot let go of the illusion of objective morality.  Just as the other senses inform us of the existence of physical things, the moral sense must inform us of the existence of another kind of “thing,” a disembodied, ghostly something that floats about independently of the “sense” that “detects” it, in the form of a pure, absolute truth.  There are numerous paths whereby one may, more or less closely, approach this truth, but they all converge on the same, universal thing-in-itself:

    …it must be conceded that, while we have a body of incontestable truth, constituting the basis of all morality, still the opinions of men upon minor points are so diverse as to make a uniform belief in dogmatical principles impossible.  The author maintains that moral truths and moral conduct may be reached from different routes or sources; all converging, it is true, to the same point:  and that it savors somewhat of illiberality to insist upon a uniform belief in the means or doctrines whereby we are to arrive at a perfect knowledge of the truth, in a human sense.

    The means by which this “absolute truth” acquires the normative power to dictate “oughts” to all and sundry is described in terms just as fuzzy as those used by the moral pontificators of our own day, as if it were ungenerous to even ask the question:

    When man’s ideas of right and wrong are duly formulated, recognized and accepted, they constitute what we denominate MORAL LAW.  The moral law now becomes a standard by which to determine the quality of human actions, and a moral obligation demanding obedience to its mandates.  The truth of this proposition needs no further confirmation.

    As they say in the academy to supply missing steps in otherwise elegant proofs, it’s “intuitively obvious to the casual observer.”  In those more enlightened times, only fifteen years elapsed before Westermarck demolished Ogden’s ephemeral thing-in-itself, pointing out that it couldn’t be confirmed because it didn’t exist, and was therefore not subject to truth claims.  I doubt that we’ll be able to recover the same lost ground so quickly in our own day.  Secular piety reigns in the academy, in some cases to a degree that would make the Puritans of old look like abandoned debauchees, and is hardly absent elsewhere.  Savage punishment is meted out to those who deviate from moral purity, whether flippant Nobel Prize winners or overly principled owners of small town bakeries.  Absent objective morality, the advocates of such treatment would lose their odor of sanctity and become recognizable as mere absurd bullies.  Without a satisfying sense of moral rectitude, bullying wouldn’t be nearly as much fun.  It follows that the illusion will probably persist a great deal longer than a decade and a half this time around.

    Be that as it may, Westermarck still had it right.  The “moral sense” exists because it evolved.  Failing this basis, morality as we know it could not exist.  It follows that there is no such thing as moral truth, or any way in which the moral emotions of one individual can gain a legitimate power to dictate rules of behavior to some other individual.  Until we find our way back to that rather elementary level of self-understanding, it will be impossible for us to deal rationally with our own moral behavior.  We’ll simply have to leave it on automatic pilot, and indulge ourselves in the counter-intuitive hope that it will serve our species just as well now as it did in the vastly different environment in which it evolved.

  • Of Tim Hunt and Elementary Morality

    Posted on June 21st, 2015 Helian 33 comments

    If we are evolved animals, then it is plausible that we have evolved behavioral traits, and that among those traits is a “moral sense.”  So much was immediately obvious to Darwin himself.  To judge by the number of books that have been published about evolved morality in the last couple of decades, it makes sense to a lot of other people, too.  The reason such a sense might have evolved is obvious, especially among highly social creatures such as ourselves.  The tendency to act in some ways and not in others enhanced the probability that the genes responsible for those tendencies would survive and reproduce.  It is not implausible that this moral sense should be strong, and that it should give rise to such powerful impressions that some things are “really good,” and others are “really evil,” as to produce a sense that “good” and “evil” exist independently as objective things.  Such a moral sense is demonstrably very effective at modifying our behavior.  It hardly follows that good and evil really are independent, objective things.

    If an evolved moral sense really is the “root cause” for the existence of all the various and gaudy manifestations of human morality, is it plausible to believe that this moral sense has somehow tracked an “objective morality” that floats around out there independent of any subjective human consciousness?  No.  If it really is the root cause, is there some objective mechanism whereby the moral impressions of one human being can leap out of that individual’s skull and gain the normative power to dictate to another human being what is “really good” and “really evil?”  No.  Can there be any objective justification for outrage?  No.  Can there be any objective basis for virtuous indignation?  No.  So much is obvious.  Under the circumstances it’s amazing, even given the limitations of human reason, that so many of the most intelligent among us just don’t get it.  One can only attribute it to the tremendous power of the moral emotions, the great pleasure we get from indulging them, and the dominant role they play in regulating all human interactions.

    These facts were recently demonstrated by the interesting behavior of some of the more prominent intellectuals among us in reaction to some comments at a scientific conference.  In case you haven’t been following the story, the commenter in question was Tim Hunt, a biochemist who won a Nobel Prize in 2001 with Paul Nurse and Leland H. Hartwell for discoveries of protein molecules that control the division (duplication) of cells.  At a luncheon during the World Conference of Science Journalists in Seoul, South Korea, he averred that women are a problem in labs because “You fall in love with them, they fall in love with you, and when you criticize them, they cry.”

    Hunt’s comment evoked furious moral emotions, not least among atheist intellectuals.  According to PZ Myers, proprietor of Pharyngula, Hunt’s comments revealed that he is “bad.”  Some of his posts on the subject may be found here, here, and here.  For example, according to Myers,

    Oh, no! There might be a “chilling effect” on the ability of coddled, privileged Nobel prize winners to say stupid, demeaning things about half the population of the planet! What will we do without the ability of Tim Hunt to freely accuse women of being emotional hysterics, or without James Watson’s proud ability to call all black people mentally retarded?

    I thought Hunt’s plaintive whines were a big bowl of bollocks.

    All I can say is…fuck off, dinosaur. We’re better off without you in any position of authority.

    We can glean additional data in the comments to these posts that demonstrate the human version of “othering.”  Members of outgroups, or “others,” are not only “bad,” but also typically impure and disgusting.  For example,

    Glad I wasn’t the only–or even the first!–to mention that long-enough-to-macramé nose hair. I think I know what’s been going on: The female scientists in his lab are always trying hard to not stare at the bales of hay peeking out of his nostrils and he’s been mistaking their uncomfortable, demure behaviour as ‘falling in love with him’.

    However, in creatures with brains large enough to cogitate about what their emotions are trying to tell them, the same suite of moral predispositions can easily give rise to stark differences in moral judgments.  Sure enough, others concluded that Myers and those who agreed with him were “bad.”  Prominent among them was Richard Dawkins, who wrote in an open letter to the London Times,

    Along with many others, I didn’t like Sir Tim Hunt’s joke, but ‘disproportionate’ would be a huge underestimate of the baying witch-hunt that it unleashed among our academic thought police: nothing less than a feeding frenzy of mob-rule self-righteousness.

    The moral emotions of other Nobel laureates informed them that Dawkins was right.  For example, according to the Telegraph,

    Sir Andre Geim, of the University of Manchester, who shared the Nobel prize for physics in 2010, said that Sir Tim had been “crucified” by ideological fanatics, and castigated UCL for “ousting” him.

    Avram Hershko, an Israeli scientist who won the 2004 Nobel prize in chemistry, said he thought Sir Tim was “very unfairly treated.”  He told the Times: “Maybe he wanted to be funny and was jet lagged, but then the criticism in the social media and in the press was very much out of proportion. So was his prompt dismissal — or resignation — from his post at UCL.”

    All these reactions have one thing in common.  They are completely irrational unless one assumes the existence of “good” and “bad” as objective things rather than subjective impressions.  Or would you have me believe, dear reader, that statements like, “fuck off, dinosaur,” and allusions to crucifixion by “ideological fanatics” engaged in a “baying witch-hunt,” are mere cool, carefully reasoned suggestions about how best to advance the officially certified “good” of promoting greater female participation in the sciences?  Nonsense!  These people aren’t playing a game of charades, either.  Their behavior reveals that they genuinely believe, not only in the existence of “good” and “bad” as objective things, but in their own ability to tell the difference better than those who disagree with them.  If they don’t believe it, they certainly act like they do.  And yet these are some of the most intelligent representatives of our species.  One can but despair, and hope that aliens from another planet don’t turn up anytime soon to witness such ludicrous spectacles.

    Clearly, we can’t simply dispense with morality.  We’re much too stupid to get along without it.  Under the circumstances, it would be nice if we could all agree on what we will consider “good” and what “bad,” within the limits imposed by the innate bedrock of morality in human nature.  Unfortunately, human societies are now a great deal different than the ones that existed when the predispositions that are responsible for the existence of morality evolved, and they tend to change very rapidly.  It stands to reason that it will occasionally be necessary to “adjust” the types of behavior we consider “good” and “bad” to keep up as best we can.  I personally doubt that the current practice of climbing up on rickety soap boxes and shouting down anathemas on anyone who disagrees with us, and then making the “adjustment” according to who shouts the loudest, is really the most effective way to accomplish that end.  Among other things, it results in too much collateral damage in the form of shattered careers and ideological polarization.  I can’t suggest a perfect alternative at the moment, but a little self-knowledge might help in the search for one.  Shedding the illusion of objective morality would be a good start.


  • Indulge Yourself – Believe in Free Will

    Posted on May 24th, 2015 Helian 1 comment

    Philosophers have been masticating the question of free will for many centuries.  The net result of their efforts has been a dizzying array of different “flavors” of free will or the lack thereof.  I invite anyone with the patience to attempt disentangling the various permutations and combinations to start with the Wiki page, and take it from there.  For the purpose of this post I will simply define free will as the ability to make choices that are not predetermined before we make them.  This implies that our conscious minds are not entirely subject to deterministic physical laws, and have the power to alter physical reality.  Lack of free will means the absence of this power; in that case we cannot alter physical reality in any way.  I personally have no idea whether we have free will or not.  In my opinion, we currently lack the knowledge to answer the question.  However, I believe that debating the matter is useless.  Instead, we should assume that there is free will as the “default” position, and get on with our lives.

    Of course, if there is no free will, my advice is useless.  I am simply an automaton among automatons, adding to the chorus of sound and fury that signifies nothing.  In that case the debate over free will is merely another amusing case of pre-programmed robots arguing over what they “should” believe, and what they “ought” to do as a consequence, in a world in which the words “should” and “ought” are completely meaningless.  These words imply an ability to choose between two alternatives, but no such choice can exist if there is no free will.  “Ought” we to alter the criminal justice system because we have decided there is no such thing as free will?  If we have no free will, the question is meaningless.  We cannot possibly alter the predetermined outcome of the debate, or the predetermined evolution of the criminal justice system, or even our opinion on whether it “ought” to be changed or not.  Under the circumstances it can hardly hurt to assume that we do have free will.  If there is in fact no free will, that assumption must itself have been foreordained, and no conscious agency exists that could have altered the fact.  It is also absurd, if inevitable, to blame me or even take issue with me for advocating that we act as if we have free will.  After all, in that case I couldn’t have acted or thought any differently, assuming my mind is an artifact of the physical world, and not a “ghost in the machine.”  If we believe in free will but there is no free will, debate about the matter may or may not be inevitable, but it is certainly futile, because the outcome of the debate has been predetermined.

    On the other hand, if we decide that there is no free will, but there actually is, it can potentially “hurt” a great deal.  In that case, we will be basing our actions and our conclusions about what “ought” or “ought not” to be done on a false assumption.  Whatever our idiosyncratic goals happen to be, it is more probable that we will attain them if we base our strategy for achieving them on truth rather than falsehood.  If we have free will, the outcome of the debate matters.  Suppose, for example, that the anti-free will side has much better debaters and convinces those watching the debate that they have no free will even if they do.  Plausible results include despair, a sense of purposelessness, fatalism, a lethargic and indifferent attitude towards life, a feeling that nothing matters, etc.  No doubt there are legions of philosophers out there who can prove that, because a = b and b = c, none of these reactions are reasonable.  They will, however, occur whether they are reasonable or not.
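
    The structure of the argument is a simple two-by-two wager, and it may be easiest to grasp laid out mechanically.  Here is a minimal sketch in Python; the outcome labels are merely my own shorthand for the two paragraphs above, not a proof of anything.

    # The two-by-two wager behind the argument above: enumerate whether
    # free will exists against whether we assume it does.  A minimal
    # sketch; the outcome labels are shorthand, not anything rigorous.

    cases = [
        # (free will exists, we assume it does) -> outcome
        ((True, True),   "we act on a true belief; choices and debate matter"),
        ((True, False),  "we act on a falsehood; potential real harm"),
        ((False, True),  "the assumption was foreordained anyway; no harm done"),
        ((False, False), "the conclusion was foreordained anyway; nothing could differ"),
    ]

    for (exists, assumed), outcome in cases:
        print(f"free will: {exists!s:5}  assumed: {assumed!s:5}  ->  {outcome}")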

    I doubt that my proposed default position will be difficult to implement.  Even the most diehard free will denialists seldom succeed in completely accepting the implications of their own theories.  Look through their writings, and before long you’ll find a “should.”  Read a bit further and you’re likely to stumble over an “ought” as well.  However, as noted above, speaking of “should” and “ought” in the absence of free will is absurd.  They imply the possibility of a choice between two alternatives that will lead to different outcomes.  If there is no free will, there can be no choice.  Individuals will do what they “ought” to do or “ought not” to do just as the arrangement of matter and energy in the universe happens to dictate.  It is absurd to blame them for doing something they could not avoid.  However, the question of whether they actually will be blamed or not is also predetermined.  It is just as absurd to blame the blamers.

    In short, I propose we all stop arguing and accept the default.  If there is no free will, then obviously I am proposing it because of my programming.  I can’t do otherwise even if I “ought” to.  It’s possible my proposal may change things, but, if so, the change was inevitable.  However, if there is free will, then believing in it is simply believing in the truth, and a truth that, at least from my point of view, happens to be a great deal more palatable than the alternative.

  • …and Speaking of the New Atheists

    Posted on May 17th, 2015 Helian 6 comments

    New Atheist bashing is all the rage these days.  The gloating tone at Salon over New Atheist Sam Harris’ humiliation by Noam Chomsky in their recent exchange, referred to in my last post, over the correct understanding of something that doesn’t exist, is but one of many examples.  In fact, New Atheists aren’t really new, and neither is New Atheist bashing.  Thumb through some of the more highbrow magazines of the 1920’s, for example, and chances are you’ll run across an article describing the then-current crop of atheists as aggressive, ignorant, clannish, self-righteous and, in short, prone to all the familiar maladies that supposedly also afflict the New Atheists of today.  And the more “laid back” atheists were gleefully piling on then, just as they are now.  They included H. L. Mencken, probably the most famous atheist of the time, who deplored aggressive atheism in his recently republished autobiographical trilogy.  Unfortunately he’s no longer around to explain the difference between “aggressive” atheism and his own practice of heaping scorn and ridicule on the more backward believers.  Perhaps it had something to do with the fact that Mencken was by nature a conservative.  He abhorred any manifestation of the “Uplift,” a term which in those days meant more or less the same thing as “progressive” today.

    I think the difference between these two species of atheists has something to do with the degree to which they resent belonging to an outgroup.  Distinguishing between ingroups and outgroups comes naturally to our species.  This particular predisposition is arguably not as beneficial now as it was during the period over which it evolved.  A host of pejorative terms have been invented to describe its more destructive manifestations, such as racism, anti-Semitism, xenophobia, etc., all of which really describe the same phenomenon.  Those among us who harbor no irrational hatreds of this sort must be rare indeed.  One often finds the phenomenon present in its more virulent forms in precisely those individuals who consider themselves immune to it.  Atheists are different, and that’s really all it takes to become identified as an outgroup.

    Apparently some atheists don’t feel themselves particularly inconvenienced by this form of “othering,” especially in societies that have benefited to some extent from the European Enlightenment.  Others take it more seriously, and fight back using the same tactics that have been directed against them.  They “other” their enemies and seek to aggressively exploit human moral emotions to gain the upper hand.  That is exactly what has been done quite successfully at one time or another by many outgroups, including women, blacks, and quite spectacularly lately, gays.  New Atheists are merely those who embrace such tactics in the atheist community.

    I can’t really blame my fellow atheists for this form of activism.  One doesn’t choose to be an atheist.  If one doesn’t believe in God, then other than in George Orwell’s nightmare world of “1984,” one can’t be “cured” into becoming a Christian or a Moslem, any more than a gay can be “cured” into becoming heterosexual, or a black “cured” into becoming white.  However, for reasons having to do with the ideological climate in the world today that are much too complicated to address in a short blog post, New Atheists are facing a great deal more resistance than members of some of society’s other outgroups.  This resistance is coming, not just from religious believers, but from their “natural” allies on the ideological left.

    Noam Chomsky’s scornful treatment of Sam Harris, accompanied by the sneers of the leftist editors of Salon, is a typical example of this phenomenon.  Such leaders as Harris, Richard Dawkins, and the late Christopher Hitchens are the public “face” of the New Atheist movement, and as a consequence are often singled out in this way.  Of course they have their faults, and I’ve criticized the first two myself on this blog and elsewhere.  However, many of the recent attacks, especially from the ideological left, are neither well-reasoned nor, at least in terms of my own subjective moral emotions, even fair.  Often they conform to hackneyed formulas:  the New Atheists are unsophisticated, they don’t understand what they’re talking about, they are bigoted, they are harming people who depend on religious beliefs to give “meaning” to their lives, etc.

    A typical example, which was also apparently inspired by the Harris/Chomsky exchange, recently turned up at Massimo Pigliucci’s Scientia Salon.  Entitled “Reflections on the skeptic and atheist movements,” it was ostensibly Pigliucci’s announcement that, after being a longtime member and supporter, he now wishes to “disengage” from the club.  As one might expect, he came down squarely in favor of Chomsky, who is apparently one of his heroes.  That came as no surprise to me, as fawning appraisals of Blank Slate kingpins Richard Lewontin and Stephen Jay Gould have also appeared at the site.  It had me wondering who will be rehabilitated next.  Charles Manson?  Jack the Ripper?  Pigliucci piques himself on his superior intellect which, we are often reminded, is informed by both science and a deep reading of philosophy.  In spite of that, he seems completely innocent of any knowledge that the Blank Slate debacle ever happened, or of Lewontin’s and Gould’s highly effective role in propping it up for so many years, using such “scientific” methods as bullying, vilification and mobbing of anyone who disagreed with them, including, among others, Robert Trivers, W. D. Hamilton, Konrad Lorenz, and Richard Dawkins.  Evidence of such applications of “science” is easily accessible to anyone who makes even a minimal effort to check the source material, such as Lewontin’s Not in Our Genes.

    No matter, Pigliucci apparently imagines that the Blank Slate was just a figment of Steven Pinker’s fevered imagination.  With such qualifications as a detector of “fools,” he sagely nods his head as he informs us that Chomsky “doesn’t suffer fools (like Harris) gladly.”  With a sigh of ennui, he goes on, “And let’s not go (again) into the exceedingly naive approach to religious criticism that has made Dawkins one of the ‘four horsemen’ of the New Atheism.”  The rest of the New Atheist worthies come in for similar treatment.  By all means, read the article.  You’ll notice that, like virtually every other New Atheist basher, whether on the left or the right of the ideological spectrum, Pigliucci never gets around to mentioning what these “naïve” criticisms of religion actually are, far less to responding to or discussing them.

    It’s not hard to find Dawkins’ “naïve” criticisms of religion.  They’re easily available to anyone who takes the trouble to look through the first few chapters of his The God Delusion.  In fact, most of them have been around at least since Jean Meslier wrote them down in his Testament almost 300 years ago.  Religious believers have been notably unsuccessful in answering them in the ensuing centuries.  No doubt they might seem naïve if you happen to believe in the ephemeral and hazy versions of God concocted by the likes of David Bentley Hart and Karen Armstrong.  They’ve put that non-objective, non-subjective, insubstantial God so high up on the shelf that it can’t be touched by atheists or anyone else.  The problem is that that’s not the God that most people believe in.  Dawkins can hardly be faulted for directing his criticisms at the God they do believe in.  If his arguments against that God are really so naïve, what can possibly be the harm in actually answering them?

    As noted above, New Atheist bashing is probably inevitable given the current ideological fashions.  However, I suggest that those happy few who are still capable of thinking for themselves think twice before jumping on the bandwagon.  In the first place, it is not irrational for atheists to feel aggrieved at being “othered,” any more than it is for any other ostracized minority.  Perhaps more importantly, the question of whether religious beliefs are true or not matters.  Today one actually hears so-called “progressive” atheists arguing that religious beliefs should not be questioned, because it risks robbing the “little people” of a sense of meaning and purpose in their lives.  Apparently the goal is to cultivate delusions that will get them from cradle to grave with as little existential Angst as possible.  It would be too shocking for them to know the truth.  Beyond the obvious arrogance of such an attitude, I fail to see how it is doing anyone a favor.  People supply their own “meaning of life,” depending on their perceptions of reality.  Blocking the path to truth and promoting potentially pathological delusions in place of reality seems more a betrayal than a “service” to me.  To the extent that anyone cares to take my own subjective moral emotions seriously, I can only say that I find substituting bland religious truisms for a chance to experience the stunning wonder, beauty and improbability of human existence less a “benefit” than an exquisite form of cruelty.

  • Whither Morality?

    Posted on April 19th, 2015 Helian 4 comments

    The evolutionary origins of morality and the reasons for its existence have been obvious for over a century.  They were no secret to Edvard Westermarck when he published The Origin and Development of the Moral Ideas in 1906, and many others had written books and papers on the subject before his book appeared.  However, our species has a prodigious talent for ignoring inconvenient truths, and we have been studiously ignoring that particular truth ever since.

    Why is it inconvenient?  Let me count the ways!  To begin, the philosophers who have taken it upon themselves to “educate” us about the difference between good and evil would be unemployed if they were forced to admit that those categories are purely subjective, and have no independent existence of their own.  All of their carefully cultivated jargon on the subject would be exposed as gibberish.  Social Justice Warriors and activists the world over, those whom H. L. Mencken referred to collectively as the “Uplift,” would be exposed as so many charlatans.  We would begin to realize that the legions of pious prigs we live with are not only an inconvenience, but absurd as well.  Gaining traction would be a great deal more difficult for political and religious cults that derive their raison d’être from the fabrication and bottling of novel moralities.  And so on, and so on.

    In Westermarck’s time, just as today, those who experienced these “inconveniences” in one form or another pointed to the supposed dangers of acknowledging that reality.  For example, from his book,

    Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism.  If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it.  This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong.  To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief.  The unfortunate circumstance that there do exist dangerous things in the world, proves that something may be dangerous and yet true.  Another question is whether any scientific truth really is mischievous on the whole, although it may cause much discomfort to certain people.  I venture to believe that this, at any rate, is not the case with that form of ethical subjectivism which I am here advocating.

    I venture to believe it as well.  In the first place, when we accept the truth about morality we make life a great deal more difficult for people of the type described above.  Their exploitation of our ignorance about morality has always been an irritant, but has often been a great deal more damaging than that.  In the 20th century alone, for example, the Communist and Nazi movements, whose followers imagined themselves at the forefront of great moral awakenings that would lead to the triumph of Good over Evil, resulted in the needless death of tens of millions of people.  The victims were drawn disproportionately from among the most intelligent and productive members of society.

    Still, just as Westermarck predicted more than a century ago, the bugaboo of “moral relativism” continues to be “repeated as an argument” in our own day.  Apparently we are to believe that if the philosophers and theologians all step out from behind the curtain after all these years and reveal that everything they’ve taught us about morality is so much bunk, civilized society will suddenly dissolve in an orgy of rape and plunder.

    Such notions are best left behind with the rest of the impedimenta of the Blank Slate.  Nothing could be more absurd than the notion that unbridled license and amorality are our “default” state.  One can quickly disabuse oneself of that fear by simply reading the comment thread of any popular news website.  There one will typically find a gaudy exhibition of moralistic posing and pious one-upmanship.  I encourage those who shudder at the thought of such an unpleasant reading assignment to instead have a look at Jonathan Haidt’s The Righteous Mind.  As he puts it in the introduction to his book,

    I could have titled this book The Moral Mind to convey the sense that the human mind is designed to “do” morality, just as it’s designed to do language, sexuality, music, and many other things described in popular books reporting the latest scientific findings.  But I chose the title The Righteous Mind to convey the sense that human nature is not just intrinsically moral, it’s also intrinsically moralistic, critical and judgmental… I want to show you that an obsession with righteousness (leading inevitably to self-righteousness) is the normal human condition.  It is a feature of our evolutionary design, not a bug or error that crept into minds that would otherwise be objective and rational.

    Haidt also alludes to a potential reason that some of the people already mentioned above continue to evoke the scary mirage of moral relativism:

    Webster’s Third New International Dictionary defines delusion as “a false conception and persistent belief in something that has no existence in fact.”  As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history:  the rationalist delusion.  It’s the idea that reasoning is our most noble attribute, one that makes us like the gods (for Plato) or that brings us beyond the “delusion” of believing in gods (for the New Atheists).  The rationalist delusion is not just a claim about human nature.  It’s also a claim that the rational caste (philosophers or scientists) should have more power, and it usually comes along with a utopian program for raising more rational children.

    Human beings are not by nature moral relativists, and they are in no danger of becoming moral relativists merely by virtue of the fact that they have finally grasped what morality actually is.  It is their nature to perceive Good and Evil as real things, independent of the subjective minds that give rise to them, and they will continue to do so even if their reason informs them that what they perceive is a mirage.  They will always tend to behave as if these categories were absolute, rather than relative, even if all the theologians and philosophers among them shout at the top of their lungs that they are not being “rational.”

    That does not mean that we should leave reason completely in the dust.  Far from it!  Now that we can finally understand what morality is, and account for the evolutionary origins of the behavioral predispositions that are its root cause, it is within our power to avoid some of the most destructive manifestations of moral behavior.  Our moral behavior is anything but infinitely malleable, but we know from the many variations in the way it is manifested in different human societies and cultures, as well as its continuous and gradual change in any single society, that within limits it can be shaped to best suit our needs.  Unfortunately, the only way we will be able to come up with an “optimum” morality is by leaning on the weak reed of our ability to reason.

    My personal preferences are obvious enough, even if they aren’t set in stone.  I would prefer to limit the scope of morality to those spheres in which it is indispensable for lack of a viable alternative.  I would prefer a system that reacts to the “Uplift” and unbridled priggishness and self-righteousness with scorn and contempt.  I would prefer an educational system that teaches the young the truth about what morality actually is, and why, in spite of its humble origins, we can’t get along without it if we really want our societies to “flourish.”  I know; the legions of those whose whole “purpose of life” depends on cultivating the illusion that their own versions of Good and Evil are the “real” ones stand in the way of the realization of these whims of mine.  Still, one can dream.