The world as I see it
  • On Steven Pinker’s Second Fairy Tale: The “Hydraulic Theory” of Konrad Lorenz

    Posted on August 4th, 2018 Helian 2 comments

    You have to hand it to Steven Pinker.  At least his book about the Blank Slate drew attention to the fact that it happened at all.  It would have been nice if he’d gotten the history right as well.  Unfortunately, his description of the affair airbrushes completely out of the picture the two men most responsible for ending it.  I refer to Robert Ardrey and Konrad Lorenz.  Ardrey played by far the most significant role of any individual in smashing the Blank Slate orthodoxy.  He was an outsider, a former playwright, whose highly popular and influential books insisting on the existence and significance of human nature made a mockery of the Blank Slate among intelligent lay people.  The academic and professional tribe of “scientists” in the behavioral disciplines never forgave him.  The humiliation they suffered during their slow, post-Ardrey return to reality following their long debauch with ideologically motivated myths tarted up as “science” rankles to this day.  One can still find occasional artifacts of their hatred in the popular media, as I noted in an earlier post.  That probably explains why Pinker dropped Ardrey down the memory hole.  It can be understood, at least in part, as a belated defense of his academic ingroup.  The result was a ludicrous “history” of the Blank Slate affair that studiously avoided mentioning the individual who played the single most important role in ending it.

    Pinker’s rationalization for ignoring Ardrey and Lorenz was certainly crude enough.  He managed it in a single paragraph in Chapter 7 of The Blank Slate.  The first part of the paragraph reads as follows:

    The Noble Savage, too, is a cherished doctrine among critics of the sciences of human nature.  In Sociobiology, Wilson mentioned that tribal warfare was common in human prehistory.  The against-sociobiologists declared that this had been “strongly rebutted both on the basis of historical and anthropological studies.” I looked up these “studies,” which were collected in Ashley Montagu’s Man and Aggression.  In fact they were just hostile reviews of books by the ethologist Konrad Lorenz, the playwright Robert Ardrey, and the novelist William Golding (author of Lord of the Flies).  Some of the criticisms were, to be sure, deserved.  Ardrey and Lorenz believed in archaic theories such as that aggression was like the discharge of a hydraulic pressure and that evolution acted for the good of the species.  But far stronger criticisms of Ardrey and Lorenz had been made by the sociobiologists themselves.  (On the second page of The Selfish Gene, for example, Dawkins wrote, “The trouble with these books is that the authors got it totally and utterly wrong.”)  In any case, the reviews contained virtually no data about tribal warfare.

    That’s for sure!  Man and Aggression, published in 1968, was a collection of essays by some of the most prominent anthropologists and psychologists of the day.  It’s quite true that it had little to do with tribal warfare, because it was intended mainly as an attempt to refute Ardrey and Lorenz’ insistence on the existence and importance of human nature.  As such, it is one of the most important pieces of historical source material relevant to the Blank Slate.  Among other things, it demonstrates that Pinker’s portrayal of E. O. Wilson as the knight in shining armor who slew the Blank Slate dragon in Chapter 6 of his book is nonsense.  The battle had been joined long before the appearance of Wilson’s Sociobiology in 1975, and the two chapters in that book that had even mentioned human nature were essentially just restatements of what Ardrey, Lorenz, and several other authors of note, such as Robin Fox, Paul Leyhausen, Desmond Morris, Anthony Storr, and Lionel Tiger, had already written, in part, more than a decade earlier.

    As can be seen in the paragraph from Pinker’s book, he cites two main reasons for airbrushing Ardrey and Lorenz out of existence.  The first is Dawkins’ comment in The Selfish Gene that, “The trouble with these books is that the authors got it totally and utterly wrong.”  If you actually read what Dawkins was talking about, you’ll see this comment had nothing to do with human nature, the Blank Slate, or sociobiology.  Indeed, it had nothing to do with the theme of Pinker’s book, or with any fundamental theme in the work of either Ardrey or Lorenz, for that matter.  It turns out Dawkins was referring solely to their favorable comments about group selection! In one of the more amusing ironies of scientific history, E. O. Wilson, Pinker’s heroic debunker of the Blank Slate, later outed himself as a far more devoted advocate of group selection than anything Ardrey or Lorenz ever dreamed of!  If they were “totally and utterly wrong,” Wilson must be doubly “totally and utterly wrong,” and himself a candidate for the memory hole.  I’ve written at length about this dubious rationale for dismissing Ardrey and Lorenz elsewhere.

    However, group selection wasn’t Pinker’s only excuse for creating his fairy tale version of the Blank Slate.  His other one (or, more correctly, two) is contained in the sentence, “Ardrey and Lorenz believed in archaic theories such as that aggression was like the discharge of a hydraulic pressure and that evolution acted for the good of the species.”  In fact, Lorenz often does discuss whether particular adaptations are for the good of the species or not.  He does so mainly to illustrate his point that, while the innate behavioral traits that can result in aggression in human beings were “good for the species” at the time they evolved, in the sense that they promoted the survival of our species as a whole, the same traits may now be “not for the good of the species” in the radically different environment we find ourselves in today.  One could say in the same sense that our hands, feet and eyes are “for the good of the species,” because we are better off with them than without them.  I can only surmise that Pinker falsely imagined that Lorenz was trying to claim that selection operated at the level of the species.  In fact, he never claimed anything of the sort.  In the few instances in which he actually spoke of selection in his book, On Aggression, he was careful to point out that it took place at the level of individuals, or perhaps a few individuals.

    It turns out that the history behind Pinker’s comment that “Ardrey and Lorenz believed in archaic theories such as that aggression was like the discharge of a hydraulic pressure” is a great deal more interesting.  I seriously doubt that Pinker even knew what he was talking about here.  His knowledge of the “hydraulic theory” was probably second or third hand.  In the first place, Lorenz never had a “hydraulic theory.”  He did have a “hydraulic model,” and referred to it often.  An animated version of the model, which he first presented at a conference in 1949, may be found here.  Lorenz never referred to it as other than an admittedly crude model, but one which illustrated what he actually saw in the behavior of many different species.  Anyone who is capable of raising fish in an aquarium or ducks and geese in their backyard can read Lorenz and see for themselves that, whether Pinker thinks the model is “archaic” or not, it does nicely illustrate aspects of how these species actually behave.
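
    For readers who find a few lines of code clearer than plumbing diagrams, the gist of the model is easy to capture in a toy simulation.  The sketch below is purely my own illustration, not anything Lorenz ever wrote.  The numbers (inflow rate, reservoir capacity, threshold) are arbitrary assumptions, and the only point it makes is the same one his diagram makes: “action-specific energy” accumulates over time, the threshold for release falls as it accumulates, and a releasing stimulus (or, at the extreme, an overflowing reservoir, Lorenz’ “vacuum activity”) discharges it.

        # Toy version of Lorenz's hydraulic model.  All constants are arbitrary
        # illustrative assumptions, not values taken from Lorenz or anyone else.

        def simulate(steps, stimuli, inflow=1.0, capacity=20.0, base_threshold=10.0):
            """Return a list of (time, energy, discharged) tuples.

            stimuli -- the set of time steps at which a releasing stimulus occurs
            """
            energy = 0.0
            history = []
            for t in range(steps):
                # the reservoir of "action-specific energy" fills at a steady rate
                energy = min(energy + inflow, capacity)
                # the fuller the reservoir, the weaker the stimulus needed to open the sluice
                threshold = max(base_threshold - 0.4 * energy, 0.0)
                overflow = energy >= capacity  # discharge with no stimulus at all ("vacuum activity")
                discharged = (t in stimuli and energy >= threshold) or overflow
                if discharged:
                    energy = 0.0  # performing the behavior empties the reservoir
                history.append((t, energy, discharged))
            return history

        if __name__ == "__main__":
            for t, e, d in simulate(steps=30, stimuli={5, 8, 25}):
                print(f"t={t:2d}  energy={e:5.1f}  {'DISCHARGE' if d else ''}")

    Run it and you will see the sort of thing Lorenz described: an early stimulus arrives before enough “energy” has built up and nothing happens, a later identical stimulus triggers the response, and a long enough wait would trigger it with no stimulus at all.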

    This raises the question of how this simple and accurate model became transmogrified into a “theory.”  It turns out that the “authority” the Blank Slaters of old most often used to “refute” Lorenz’ “hydraulic theory” was one Daniel Lehrman, a professor at Rutgers and a purveyor of behaviorist flim flam of the first water.  His A Critique of Konrad Lorenz’s Theory of Instinctive Behavior appeared in The Quarterly Review of Biology back in 1953. By all means, have a look at it.  To read it is to marvel at how delusional the Blank Slaters had become by the early 50’s.  Lehrman denied the existence of instincts, not only in the great apes and human beings, as Ashley Montagu did in the 60’s, but in rats and geese, no less!  For example, according to Lehrman, the innate egg retrieving behavior of geese described by Lorenz was not innate, but was a result of “conditioning” while the goose was still in the egg!  He cited studies according to which the neck movements used by the goose to retrieve the egg actually began developing a few days after the egg was laid when the “head is stimulated tactually by the yolk sac.”  Apparently it never occurred to Lehrman that he was merely kicking the can down the road.  Why would the fetal goose move its head one way rather than another in response to this “conditioning?”  Indeed, why would it move its head at all?  As Lorenz put it, there must have been an innate “schoolmarm” to teach the goose these things.  Lehrman gives several other examples, explaining innate developmental feedback mechanisms in terms of behaviorist “conditioning.”  The following is another example of his “devastating” arguments against Lorenz:

    Now, what exactly is meant by the statement that a behavior pattern is “inherited” or “genetically controlled?”  Lorenz undoubtedly does not think that the zygote contains the instinctive act in miniature, or that the gene is the equivalent of an entelechy which purposefully and continuously tries to push the organism’s development in a particular direction.  Yet one or both of these preformistic assumptions, or their equivalents, must underlie the notion that some behavior patterns are “inherited” as such.

    Quick!  Someone run and tell the computer programmers!  Everything they’ve done to date is clearly impossible.  Are they trying to claim that their video games actually exist in miniature in the software they’re trying to peddle?  Lehrman next gives a perfect illustration of what George Orwell was talking about when he spoke of “Newspeak” in his novel 1984.  Newspeak was a version of the language that would make it impossible to even conceptualize “Crimethink.”  As Lehrman puts it,

    To lump them (behavioral traits) together under the rubric of “inherited” or “innate” characteristics serves to block the investigation of their origin just at the point where it should leap forward in meaningfulness.

    Elsewhere Lehrman makes a similar case for actually expunging the words “innate” and “instinct” from the behavioral science dictionary.  To borrow Orwell’s terminology, he considered them “doubleplus ungood.”  In retrospect, I think we can see perfectly well at this point what kinds of “investigation” really were blocked for upwards of half a century by the high priests of the Blank Slate, and it certainly wasn’t the kind that was dear to the heart of Prof. Lehrman.  But what of the “hydraulic theory?”  Here’s what Lehrman has to say about it:

    Lorenz (1950) describes in some detail a hydraulic model, or analogy, of the instinct mechanism, including a reservoir of excitation and devices for keeping it dammed up (innate releasing mechanism) until appropriate keys unlock the sluices.  Hydraulic analogies have reappeared so regularly in Lorenz’s papers since 1937 as to justify the impression that they are not really analogies – they are actual representations of Lorenz’s conception and channeling of “instinctive energy.”

    Got that?  You’d better not hum the tune to the Rolling Stones’ “She’s Like a Rainbow” too often, or you’ll find yourself accused of proposing a “theory” of the transformation of women into rainbows.  The same goes for “Like the Dawn,” by the Oh Hellos.  Heaven forefend that you ever describe a cloud as like a camel, or a whale, or a unicorn, or you might find yourself accused of proposing a “theory” of the transubstantiation of clouds.  That, my friends, was the magical process by which Lorenz’ simple model was transmuted into Pinker’s mythical “archaic hydraulic theory.”

    So much for Pinker’s “fake but true” history of the Blank Slate.  To my knowledge he has never yet shown the slightest remorse for the violence he has done to the history of what is probably the greatest scientific debacle of all time, not to mention to the legacy of the two men most responsible for restoring some semblance of sanity to the behavioral sciences.  Those who expect that he ever will would be well advised not to hold their breath.  As for Lehrman, he became a member of any number of prestigious learned societies, and received any number of prestigious awards and decorations for his brilliant contributions to the advancement of “science.”  It would seem that, just as no good deed goes unpunished, no bad deed goes unrewarded.

  • Why the Blank Slate? Let Max Eastman Explain

    Posted on July 29th, 2018 Helian 1 comment

    In my opinion, science, broadly construed, is the best “way of knowing” we have.  However, it is not infallible, is never “settled,” cannot “say” anything, and can be perverted and corrupted for any number of reasons.  The Blank Slate affair was probably the worst instance of the latter in history.  It involved the complete disruption of the behavioral sciences for a period of more than half a century in order to prop up the absurd lie that there is no such thing as human nature.  Its grip on the behavioral sciences hasn’t been completely broken to this day.  It’s stunning when you think about it.  Whole branches of the sciences were derailed to support a claim that must seem ludicrous to any reasonably intelligent child.  Why?  How could such a thing have happened?  At least part of the answer was supplied by Max Eastman in an article that appeared in the June 1941 issue of The Reader’s Digest.  It was entitled, Socialism Doesn’t Jibe with Human Nature.

    Who was Max Eastman?  Well, he was quite a notable socialist himself in his younger days.  He edited a radical magazine called The Masses from 1913 until it was suppressed in 1918 for its antiwar content.  In 1922 he traveled to the Soviet Union, and stayed to witness the reality of Communism for nearly two years, becoming friends with a number of Bolshevik worthies, including Trotsky.  Evidently he saw some things that weren’t quite as ideal as he had imagined.  He became increasingly critical of the Stalin regime, and eventually of socialism itself.  In 1941 he became a roving editor for the anti-Communist Reader’s Digest, and the above article appeared shortly thereafter.

    In it, Eastman reviewed the history of socialism from its modest beginnings in Robert Owen’s utopian village of New Harmony through a host of similar abortive experiments to the teachings of Karl Marx, and finally to the realization of Marx’s dream in the greatest experiment of them all: the Bolshevik state in Russia.  He noted that all the earlier experiments had failed miserably but that, in his words, “The results were not better than Robert Owen’s but a million times worse.”  The outcome of Lenin’s great experiment was,

    Officialdom gone mad, officialdom erected into a new and merciless exploiting class which literally wages war on its own people; the “slavery, horrors, savagery, absurdities and infamies of capitalist exploitation” so far outdone that men look back to them as to a picnic on a holiday; bureaucrats everywhere, and behind the bureaucrats the GPU; death for those who dare protest; death for theft – even of a piece of candy; and this sadistic penalty extended by a special law to children twelve years old!  People who still insist that this is a New Harmony are for the most part dolts or mental cowards.  To honest men with courage to face facts it is clear that Lenin’s experiment, like Robert Owen’s, failed.

    It would seem the world produced a great many dolts and mental cowards in the years leading up to 1941.  In the 30’s Communism was all the rage among intellectuals, not only in the United States but worldwide.  As Malcolm Muggeridge put it in his book, The Thirties, at the beginning of the decade it was rare to find a university professor who was a Marxist, but at the end of the decade it was rare to find one who wasn’t.  If you won’t take Muggeridge’s word for it, just look at the articles in U.S. intellectual journals such as The Nation, The New Republic, and the American Mercury during, say, the year 1934.  Many of them may be found online.  These were all very influential magazines in the 30’s, and at times during the decade they all took the line that capitalism was dead, and it was now merely a question of finding a suitable flavor of socialism to replace it.  If you prefer reality portrayed in fiction, read the guileless accounts of the pervasiveness of Communism among the intellectual elites of the 1930’s in the superb novels of Mary McCarthy, herself a leftist radical.

    Eastman was too intelligent to swallow the “common sense” socialist remedies of the newsstand journals.  He had witnessed the reality of Communism firsthand, and had followed its descent into the hellish bloodbath of the Stalinist purges and mass murder by torture and starvation in the Gulag system.  He knew that socialism had failed everywhere else it had been tried as well.  He also knew the reason why.  Allow me to quote him at length:

    Why did the monumental efforts of these three great men (Owen, Marx and Lenin, ed.) and tens of millions of their followers, consecrated to the cause of human happiness – why did they so miserably fail? They failed because they had no science of human nature, and no place in their science for the common sense knowledge of it.

    In October 1917, after the news came that Kerensky’s government had fallen, Lenin, who had been in hiding, appeared at a meeting of the Workers and Soldiers’ Soviet of Petrograd.  He mounted the rostrum and, when the long wild happy shouts of greeting had died down, remarked: “We will now proceed to the construction of a socialist society.” He said this as simply as though he were proposing to put up a new cowbarn.  But in all his life he had never asked himself the equally simple question: “How is this newfangled contraption going to fit in with the instinctive tendencies of the animals it was made for?”

    Lenin actually knew less about the science of man, after a hundred years, than Robert Owen did.  Owen had described human nature, fairly well for an amateur, as “a compound of animal propensities, intellectual faculties and moral qualities.”  He had written into the preamble of the constitution of New Harmony that “man’s character… is the result of his formation, his location, and of the circumstances within which he exists.”

    It seems incredible, but Karl Marx, with all his talk about making socialism “scientific,” took a step back from this elementary notion. He dropped out the factor of man’s hereditary nature altogether.  He dropped out man altogether, so far as he might present an obstacle to social change.  “The individual,” he said, “has no real existence outside the milieu in which he lives.” By which he meant: Change the milieu, change the social relations, and man will change as much as you like.  That is all Marx ever said on the primary question.  And Lenin said nothing.

    That is why they failed.  They were amateurs – and worse than amateurs, mystics – in the subject most essential to their success.

    To begin with, man is the most plastic and adaptable of animals.  He truly can be changed by his environment, and even by himself, to a unique degree, and that makes extreme ideas of progress reasonable.  On the other hand, he inherits a set of emotional impulses or instincts which, although they can be trained in various ways in the individual, cannot be eradicated from the race.  And no matter how much they may be repressed or redirected by training, they reappear in the original form – as sure as a hedgehog puts out spines – in every baby that is born.

    Amazing, considering these words were written in 1941.  Eastman had a naïve faith that science would remedy the situation, and that, as our knowledge of human behavior advanced, mankind would see the truth.  In fact, by 1941, those who didn’t want to hear the inconvenient truth that the various versions of paradise on earth they were busily concocting for the rest of us were foredoomed to failure already had the behavioral sciences well in hand.  They made sure that “science said” what they wanted it to say.  The result was the Blank Slate, a scientific debacle that brought humanity’s efforts to gain self-understanding to a screeching halt for more than half a century, and one that continues to haunt us even now.  Their agenda was simple – if human nature stood in the way of heaven on earth, abolish human nature!  And that’s precisely what they did.  It wasn’t the first time that ideological myths have trumped the truth, and it certainly won’t be the last, but the Blank Slate may well go down in history as the deadliest myth of all.

    I note in passing that the Blank Slate was the child of the “progressive Left,” the same people who today preen themselves on their great respect for “science.”  In fact, all the flat earthers, space alien conspiracy nuts, and anti-Darwin religious fanatics combined have never pulled off anything as damaging to the advance of scientific knowledge as the Blank Slate debacle.  It’s worth keeping in mind the next time someone tries to regale you with fairy tales about what “science says.”

  • No, All Things are Not Permissible, and All Things are Not Not Permissible

    Posted on July 9th, 2018 Helian 1 comment

    IMHO it is a fact that good and evil do not exist as independent, objective things.  If they do not exist, then the moral properties that depend on them, such as “permissible,” have no objective existence, either.  It follows that it is not even rational to ask the question whether something is permissible or not as an independent fact.  In other words, if there is no such thing as objective morality, then it does not follow that “everything is permissible.”  It also does not follow that “everything is not permissible.”  As far as the universe is concerned, the term “permissible” does not exist.  In other words, there is no objective reason to obey a given set of moral rules, nor is there an objective reason not to obey those rules.

    I note in passing that if the above were not true, and the conclusion that good and evil do not exist as objective things actually did imply that “everything is permissible,” as some insist, it would not alter the facts one bit.  The universe would shrug its shoulders and ask, “So what?”  If the absence of good and evil as objective things leads to conclusions that some find unpleasant, will that alter reality and magically cause them to pop into existence?  That hasn’t worked with a God, and it won’t work with objective good and evil, either.

    I just read a paper by Matt McManus on the Quillette website that nicely, if unintentionally, demonstrates what kind of an intellectual morass one wades into if one insists that good and evil are real, objective things.  It’s entitled Why Should We Be Good?  The first two paragraphs include the following:

    Today we are witnessing an irrepressible and admirable pushback against the specters of ‘cultural relativism’ and moral ‘nihilism.’ …Indeed, relativism and the moral nihilism with which it is often affiliated, seems to be in retreat everywhere.  For many observers and critics, this is a wholly positive development since both have the corrosive effect of undermining ethical certainty.

    The author goes on to cite what he considers two motivations for the above, one “negative,” and one “positive.”  As he puts it,

    The negative motivation arises from moral dogmatism.  There are those who wish to dogmatically assert their own values without worrying that they may not be as universal as one might suppose… Ethical dogmatists do not want to be confronted with the possibility that it is possible to challenge their values because they often cannot provide good reasons to back them up.

    He adds that,

    The positive motivation was best expressed by Allan Bloom in his 1987 classic The Closing of the American Mind.

    Well, I wouldn’t exactly describe Bloom’s book as “positive.”  It struck me as a curmudgeonly rant about how “today’s youth” didn’t measure up to how he thought they “ought” to be.  Be that as it may, the author finally gets to the point:

    The issue I wish to explore is this:  even if we know which values are universal, why should we feel compelled to adhere to them?

    To this I would reply that there are no universal values, and since they don’t exist, they can’t be known.  This reduces the question of why we should feel compelled to adhere to them to nonsense.  In fact, what the author is doing here is outing himself as a dogmatist.  He just thinks he’s better than other dogmatists because he imagines he can “provide good reasons to back up” his personal dogmas.  It turns out his “good reasons” amount to an appeal to authority, as follows:

    Kant argued, very powerfully, that a human being’s innate practical reason begets a universal set of “moral laws” which any rational person knows they must follow.

    Good dogma, no?  After all, who can argue with Kant?  “Obscurely” would probably be a better word than “powerfully.”   Some of his sentences ran on for a page and a half, larded with turgid German philosophical jargon from start to finish.  Philosophers pique themselves on “understanding” him, but seldom manage to get much further than the categorical imperative in practice.  I suspect they’re wasting their time.  McManus assures us that Kant read Hume.  If so, he must not have comprehended what he was reading in passages such as,

    We speak not strictly and philosophically when we talk of the combat of passion and of reason.  Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

    If morality had naturally no influence on human passions and actions, ’twere in vain to take such pains to inculcate it: and nothing wou’d be more fruitless than that multitude of rules and precepts, with which all moralists abound.

    Since morals, therefore, have an influence on the actions and affections, it follows, that they cannot be deriv’d from reason; and that because reason alone, as we have already prov’d, can never have any such influence.  Morals excite passions, and produce or prevent actions.  Reason of itself is utterly impotent in this particular.  The rules of morality, therefore, are not conclusions of our reason…

    What Hume wrote above isn’t just the expression of some personal ideological idiosyncrasy, but the logical conclusion of the thought of a long line of British and Scottish philosophers.  I find his thought on morality “very powerful,” and have seen no evidence that Kant ever seriously addressed his arguments.  We learned where the emotions Hume referred to actually came from in 1859 with the publication of The Origin of Species, more than half a century after Kant’s death.  It’s beyond me how Kant could have “argued powerfully” about a “universal set of moral laws” in spite of his ignorance of the real manner in which they are “begotten.”  No matter, McManus apparently still believes, “because Kant,” that we can “know” some “universal moral law.”  He continues,

    While we might know that these “moral laws” apply universally, why should we feel compelled to obey them?

    According to McManus, the 19th century philosopher Henry Sidgwick made some “profound contributions” to answering this question, which he considered “the profoundest problem in ethics.” Not everyone thought Sidgwick was all that profound.  Westermarck dealt rather harshly with his “profound” thoughts in his The Origin and Development of the Moral Ideas.  In the rest of his article, McManus reviews the thought of several other philosophers on the subject, and finds none of them entirely to his liking.  He finally peters out with nary an answer to the question, “Why should we be good?”  In fact there is no objective answer to the question, because there is no objective good.  McManus’ “dogma with good reasons” is just as imaginary as all the “dogmas without good reasons” at which he turns up his nose.

    The philosophers are in no hurry to wade back out of this intellectual morass.  Indeed, their jobs depend on expanding it.  For those of us who prefer staying out of swamps, however, the solution to McManus’ enigma is simple enough.  Stop believing in the ghosts of objective good and evil.  Accept the fact that what we call morality exists because the innate mental traits that give rise to it themselves exist by virtue of evolution by natural selection.  Then follow that fundamental fact to its logical conclusions.  One of those conclusions is that there is nothing whatsoever objective about morality.  It is a purely subjective phenomenon.  That is simply a fact of nature.  As such, it is quite incapable of rendering “everything permissible,” or “everything not permissible.”  Furthermore, realization of that fact will not change how the questions of what is permissible and what is not permissible are answered.  Those questions will continue to be answered just as they always have been, in the subjective minds of individuals.

    Acceptance of these truths about morality will not result in “moral nihilism,” or “cultural relativity,” or the hegemony of postmodernism.  All of these things can result from our attempts to reason about what our emotions are trying to tell us, but so can moral absolutism.  On the other hand, acceptance of the truth may enable us to avoid some of the real dangers posed by our current “system” of blindly responding to moral emotions, and just as blindly imagining that the result will be “moral progress.”  For example, if morality is a manifestation of evolved behavioral traits, those traits must have been selected in times that were very different from the present.  It is highly unlikely that blindly following where our emotions seem to be leading us will have the same effect now as it did then.  In fact, those emotions might just as well be leading us over the edge of a cliff.

    If morality is a manifestation of evolved behavioral traits, then arbitrarily isolating moral behavior from the rest of our innate behavioral repertoire, sometimes referred to as human nature, can also be misleading.  For example, we have a powerful innate tendency to distinguish others in terms of ingroup and outgroup, applying different versions of morality to each.  This can delude us into seriously believing that vast numbers of the people we live with are “bad.”  In the past, we have often imagined that we must “resist” and “fight back” against these “bad” people, resulting in mayhem that has caused the death of countless millions, and misery for countless millions more.  From my own subjective point of view, it would be better to understand the innate emotional sources of such subjective fantasies, and at least attempt to find a way to avoid the danger they pose.  Perhaps one day enough people will agree with me to make a difference.  The universe doesn’t care one way or the other.

    Nihilism and chaos will not result from acceptance of the truth.  When it comes to morality, nihilism and chaos are what we have now.  I happen to be among those who would prefer some form of “moral absolutism,” even though I realize that its legitimacy must be based on the subjective desires of individuals rather than on some mirage of “objective truth.”  I would prefer living under a simple moral code, in harmony with human nature, designed to enable us to live together with a minimum of friction and a maximum of personal liberty.  No rule would be accepted without examining its innate emotional basis, what the emotions in question accomplished at the time they evolved, and whether they would still accomplish the same thing in the different environment we live in now.  Generalities about “moral progress” and “human flourishing” would be studiously ignored.

    I see no reason why the subjective nature of morality would prevent us from adopting such an “absolute morality.”  There would, of course, be no objective reason why we “should be good” according to the rules of such a system.  The reasons would be the same subjective ones that have always been the real basis for all the versions of morality our species has ever come up with.  In the first place, if the system really was in harmony with human nature, then for many of us, our “conscience” would prompt us to “do good.”  Those with a “weak conscience” who ignored the moral law, free riders if you will, would be dealt with much the same way they have always been dealt with.  They would be shamed, punished, and, if necessary, isolated from the rest of society.

    I know, we are very far from realizing this utopia, or even from accepting the simplest truths about morality and what they imply.  I’ve always been one for daydreaming, though.

  • Please, Leave Me Out of Your Philosophical Pigeonholes

    Posted on June 27th, 2018 Helian 2 comments

    Yes, I know it is human nature to categorize virtually everything. As I noted in my last post, it reduces complexity to manageable levels. When it comes to worldviews and philosophies, we categorize them into schools of thought. I hope my readers will resist the tendency to stuff me into one of these pigeonholes. For better or worse, it seems to me I don’t belong in any of them.

    The fundamental truth I defend is the non-existence of objective morality. That does not mean, however, that I belong in the postmodernist category. Postmodernists may claim that moral truths are social constructs, but that doesn’t prevent them from furiously defending their own preferred version as their “truth,” or defending the alternative preferred versions of certain fashionable identity groups as “true” for those groups. I am not a postmodernist because I reject claims by any individual or group whatsoever that they have a legitimate right to apply their moral rules to me, whether they are socially constructed or not. Postmodernists act as if they had this right to dictate to others, regardless of what they say about “moral relativity.”

    Neither does the fact that I deny the existence of objective morality mean I am a “moral nihilist.” In fact, we actually live in a state of moral nihilism and chaos today for the very reason that we insist on believing the illusion that there are objective moral truths. Human beings have an overwhelming innate tendency to believe that their idiosyncratic versions of “good” and “evil” represent “truths.” For the most part, they will continue to believe that regardless of what anyone happens to write on the subject. My personal preference would be to live in a world where such an “absolute” morality prevails. However, this “absolute” system would be constructed in full knowledge of the fact that it represented a necessary and useful expedient, and most decidedly not that it reflected objective moral truths. It would be possible to alter and amend this “absolute” system when necessary, but by a means more rational than the current method of allowing those bullies who throw the most flamboyant moralistic temper tantrums to set it up as they please. I propose such a system not because I think we “ought” to do it as a matter of objective fact, but merely because I would personally find it expedient as a means of pursuing the goals I happen to have in life, and believe that others may agree it would be expedient as far as they’re concerned as well.

    Finally, the fact that I deny the existence of objective morality most decidedly does not mean that I belong in the “error theory” category with the likes of J. L. Mackie. Mackie claimed he denied the objective existence of moral properties. However, he also claimed that we “ought” to do some things, and had a “duty” to do others. I consider this nonsense, and a complete contradiction of his claims about the non-existence of objective good and evil. I recently ran across a paper that illustrates very nicely why I would prefer to stay out of this particular pigeonhole. The paper in question was written by Prof. Bart Streumer of the University of Groningen in the Netherlands, and is entitled The Unbelievable Truth about Morality. The opening paragraph of the paper reads as follows:

    Have you ever suspected that even though we call some actions right and other actions wrong, nothing is really right or wrong? If so, there is a philosophical theory that agrees with you: the error theory. According to the error theory, moral judgments are beliefs that ascribe moral properties to actions or to people, but these properties do not exist. The error theory therefore entails that all moral judgments are false. Just as atheism says that God does not exist and that all religious beliefs are false, the error theory says that moral properties do not exist and that all moral judgments are false.

    That may seem to be a concise statement of my own beliefs regarding objective moral claims, but hold onto your hat. In what follows the author comes up with a number of highly dubious conclusions about the supposed implications of “error theory.” In the end he runs completely off the track into the same swamp we were in before, and something indistinguishable from objective morality still prevails. In closing, he triumphantly informs us of his amazing discovery that “error theory” doesn’t “undermine morality!”

    I’m not going to review the entire paper in detail. Interested readers are welcome to do that on their own. Instead I will focus on some of the things the author imagines follow from error theory. These include the notion that a “part” of error theory is “cognitivism.” A “cognitivist” is one who claims that moral judgments are “beliefs.” According to the author, there is a whole “school” of “cognitivists,” countered by another whole “school” of “non-cognitivists.” In his words,

    Opponents of cognitivism, who are known as non-cognitivists, deny that these judgments are beliefs. They instead take moral judgments to be non-cognitive attitudes, such as feelings of approval or disapproval.

    Really? Have philosophers now become that ignorant of philosophy? Whatever happened to the likes of Shaftesbury, Hutcheson, and Hume? They claimed that moral beliefs and moral “feelings of approval or disapproval” were inextricably bound together, that the former were the result of reasoning about the latter, and that moral beliefs are, in fact, impossible without these “feelings.” The very idea that human beings are capable of blindly responding to emotions without forming beliefs about what they imply is referred to by behavioral scientists as “genetic determinism,” and the term “genetic determinist” itself is used merely as a pejorative to describe someone who believes in an impossible fantasy. If we are to credit the author, such specimens actually exist somewhere in the dank halls of academia.

    It would seem, then, that I can’t be an “error theorist,” because I find this false dichotomy between “cognitivism” and “non-cognitivism” absurd, regardless of the author’s claims about how fashionable it is among the philosophers. Not only does the author fail to mention the work of important philosophers who would have deemed this dichotomy nonsense, but he fails to mention any connection between morality and evolution by natural selection. Is he ignorant of a discipline known as evolutionary psychology? Is he completely oblivious to what the neuroscientists have been telling us lately? If “error theory” rejects the objective existence of moral properties, shouldn’t a paper on the subject at least discuss in passing what reasons there might be for the nearly universal belief in such imaginary objects?  Natural selection is certainly among the more plausible explanations.

    In what follows, we finally discover the connection between this remarkable dichotomy and the “unbelievable truth” mentioned in the article’s title. According to the paper, an objection to error theory is as follows:

    If the error theory is true, all moral judgments are false.
    It is wrong to torture babies for fun.
    So the judgment that it is wrong to torture babies for fun is true.
    So at least one moral judgment is true.
    So the error theory is false.

    The author allows that this is a tough one for error theorists. In his words,

    …this objection is hard to answer for error theorists. It is overwhelmingly plausible that it is wrong to torture babies for fun. Error theorists could deny that this entails that the judgment that it is wrong to torture babies for fun is true. But they can only deny this if they endorse non-cognitivism about this judgment, and non-cognitivism conflicts with the error theory. It therefore seems that error theorists must answer this objection by denying that it is wrong to torture babies for fun. But then we should ask what is more plausible: that the error theory is true, or that it is wrong to torture babies for fun. This objection therefore seems to show that we should reject the error theory.

    Now do you see where the false dichotomy comes in? Why on earth should it be “overwhelmingly plausible” that it is wrong to torture babies for fun, regardless of what any individual happens to think about the matter, but as a matter of objective fact? Where is the basis for this “fact?” How did that basis acquire an independent and legitimate authority to dictate to human beings what they ought and ought not to do? How did it come into existence to begin with? Unless one can answer these questions, there is no reason to believe in the existence of objective moral truths, and therefore no rational explanation for the conclusion that any moral claim whatsoever is “overwhelmingly plausible.” It makes as much sense as the claim that there must be unicorns because one really, really believes deep down that it is “overwhelmingly plausible” that there are unicorns. It is only “overwhelmingly plausible” that it is wrong to torture babies because most of us have a very powerful “feeling” that it is wrong. But (aha, oho!) “error theorists” are prohibited from referencing that feeling in denying this “truth” because that would be “non-cognitivism” and they can’t be “non-cognitivists!”

    The rest of the paper goes something like this: Error theory is true. However, if error theory is true, then the claim that it is wrong to torture babies is false, and that is unbelievable. Therefore, error theory is both true and unbelievable. The conclusion:  “Our inability to believe this general error theory therefore prevents it from undermining morality.”  Whatever. One thing that the paper very definitely shows is that I am not an “error theorist.”

    What the “tortured babies” argument really amounts to is the claim that truth can be manufactured out of the vacuum by effective manipulation of moral emotions. It’s just another version of the similar arguments Sam Harris uses to prop up his equally bogus claim that there are objective moral truths. I note in passing the author’s claim that J. L. Mackie was the first philosopher to defend the error theory. That may be true as far as the description of error theory presented in the paper is concerned. However, a far more coherent argument to the effect that objective moral properties do not exist was published by Edvard Westermarck more than 70 years earlier. Perhaps it would be helpful if philosophers would at least reference his work in future discussions of error theory and related topics instead of continuing to ignore him.

    But to return to the moral of the story, not only am I not a postmodernist, a moral nihilist, or a moral relativist, I am not an “error theorist” either. I certainly believe that there are facts about the universe, and that they will stubbornly remain facts regardless of whether any conscious being chooses to believe they are facts or not. I simply don’t believe that these facts include objective moral truths. Apparently, at the risk of overdramatizing myself, I must conclude that I represent a church of one. I hope not but, in any case, when it comes to pigeonholing, please don’t round me up as one of the “usual suspects.”

  • On the “Immorality” of Einstein

    Posted on June 24th, 2018 Helian 2 comments

    Morality exists because of “human nature.”  In other words, it is a manifestation of innate behavioral traits that themselves exist by virtue of evolution by natural selection.  It follows that morality has no goal, no purpose, and no function, because in order to have those qualities it must necessarily have been created by some entity capable of intending a goal, purpose, or function for it.  There was no such entity.  In human beings, the traits in question spawn the whimsical illusion that purely imaginary things that exist only in the subjective minds of individuals, such as good, evil, rights, values, etc., actually exist as independent objects.  The belief in these mirages is extremely powerful.  Occasionally a philosopher here and there will assert a belief in “moral relativity,” but in the end one always finds them, to quote a pithy Biblical phrase, returning like dogs to their own vomit.  After all their fine phrases, one finds them picking sides, sagely informing us that some individual or group is “good,” and some other ones “evil,” and that we “ought” to do one thing and “ought not” to do another.

    What does all this have to do with Einstein?  Well, recently he was accused of expressing impure thoughts in some correspondence he imagined would be private.  The nutshell version can be found in a recent article in the Guardian entitled, Einstein’s travel diaries reveal ‘shocking’ xenophobia.  Among other things, Einstein wrote that the Chinese he saw were “industrious, filthy, obtuse people,” and “even the children are spiritless and look obtuse.”  He, “…noticed how little difference there is between men and women,” adding, “I don’t understand what kind of fatal attraction Chinese women possess which enthralls the corresponding men to such an extent that they are incapable of defending themselves against the formidable blessing of offspring.”  He was more approving of the Japanese, noting that they were “unostentatious, decent, altogether very appealing,” and that he found “Pure souls as nowhere else among people.  One has to love and admire this country.”

    It goes without saying that only a moron could seriously find such comments “shocking” in the context of their time.  In the first place, Einstein was categorizing people into groups, as all human beings do, because we lack the mental capacity to store individual vignettes of all the billions of people on the planet.  He then pointed out certain things about these groups that he honestly imagined to be true.  He nowhere expressed hatred of any of the people he described, nor did he anywhere claim that the traits he described were “innate” or had a “biological origin,” as falsely claimed by the author of the article.  He associated them with the Chinese “race,” but might just as easily have been describing cultural characteristics at a given time as anything innate.  Furthermore, “race” at the time that Einstein wrote could be understood quite differently from the way it is now.  In the 19th century, for example, the British and Afrikaners in South Africa were commonly described as different “races.”  Today we have learned some hard lessons about the potential harm of broadly associating negative qualities with entire populations, but in the context of the time they were written, ideas similar to the ones expressed by Einstein were entirely commonplace.

    In light of the above, consider the public response to the recent revelations about the content of Einstein’s private papers.  It is a testimony to the gross absurdity of human moral behavior in the context of an environment radically different from the one in which it evolved.  Einstein is actually accused by some of being a “racist,” a “xenophobe,” a “misogynist,” or, in short, a “bad” man.  Admirers of Einstein have responded by citing all the good-sounding reasons for the claim that Einstein was actually a “good” man.  These responses are equivalent to debating whether Einstein was “really a green unicorn,” or “really a blue unicorn.”  The problem with that is, of course, that there are no unicorns to begin with.  The same is true of objective morality.  It doesn’t exist.  Einstein wasn’t “good,” nor was he “bad,” because these categories do not exist as independent objects.  They are subjective, and exist only in our imaginations.  They are imagined to be real because there was a selective advantage to imagining them to be real in a given environment.  That environment no longer exists.  These are simple statements of fact.

    As so often happens in such cases, one side accuses the other of “moral relativity.”  In his response to this story at the Instapundit website, for example, Ed Driscoll wrote, “A century later, is the age of moral relativity about to devour the legacy of the man who invented the real theory of relativity?”  The problem here is most definitely not moral relativity.  In fact, it is the opposite – the illusion of objective morality.  The people attacking Einstein are moral absolutists.  If that were not true, what could possibly be the point of attacking him?  A genuine moral relativist would simply conclude that Einstein’s personal version of morality was different from theirs, and leave it at that.  That is not what is happening here.  Instead, Einstein is accused of violating “moral laws,” the most fashionable and up-to-date versions of which were concocted long after he was in his grave.  In spite of that, these “moral laws” are treated as if they represented objective facts.  Einstein was “bad” for violating them even though he had no way of knowing that these “moral laws” would exist nearly a century after he wrote his journals.  Is it not obvious that judging Einstein in this way would be utterly irrational unless these newly minted “moral laws” were deemed to be absolute, with a magical existence of their own, independent of what goes on in the subjective minds of individuals?

    Consider what is actually meant by this accusation of “racism.”  Normally a racist is defined as one who considers others to be innately evil or inferior by virtue of their race, and who hates and despises them by virtue of that fact.  It is simply one manifestation of the universal human tendency to perceive others in terms of ingroups and outgroups.  When this type of behavior evolved, there was no ambiguity about the identity of the outgroup.  It was simply the next tribe over.  The perception of “outgroup” could therefore be determined by very subtle differences without affecting the selective value of the behavior.  Now, however, with our vastly increased ability to travel long distances and communicate with others all over the world, we are quite capable of identifying others as “outgroup” whom we never would have heard of or come in contact with in our hunter-gatherer days.  As a result, the behavior has become “dysfunctional.”  It no longer accomplishes the same thing it did when it evolved.  Racism is merely one of the many manifestations of this now dysfunctional trait that has been determined by hard experience to be harmful in the environment we live in today.  As a result, it has been deemed “bad.”  Without understanding the underlying innate traits that give rise to the behavior, however, this attempt to patch up human moral behavior is of very limited value.

    The above becomes obvious if we examine the behavior of those who are in the habit of accusing others of racism.  They are hardly immune to similar manifestations of bigotry.  They simply define their outgroups based on criteria other than race.  The outgroup is always there, and it is hated and despised just the same.  Indeed, they may hate and despise their outgroups a great deal more violently and irrationally than those they accuse of racism ever did, but are oblivious to the possibility that their behavior may be similarly “bad” merely because they perceive their outgroup in terms of ideology, for example, rather than race.  Extreme examples of hatred of outgroups defined by ideology are easy to find on the Internet.  For example,

    • Actor Jim Carrey is quoted as saying, “40 percent of the U.S. doesn’t care if Trump deports people and kidnaps their babies as political hostages.”
    • Actor Peter Fonda suggested to his followers on Twitter that they should “rip Barron Trump from his mother’s arms and put him in a cage with pedophiles.”  The brother of Jane Fonda also called for violence against Secretary of Homeland Security Kirstjen Nielsen and called White House Press Secretary Sarah Sanders a “c**t.”
    • An unidentified FBI agent is quoted as saying in a government report that, “Trump’s supporters are all poor to middle class, uneducated, lazy POS.”
    • According to New York Times editorialist Roxane Gay, “Having a major character on a prominent television show as a Trump supporter normalizes racism and misogyny and xenophobia.”

    Such alternative forms of bigotry are often more harmful than garden variety racism itself merely by virtue of the fact that they have not yet been included in one of the forms of outgroup identification that has already been generally recognized as “bad.”  The underlying behavior responsible for the extreme hatred typified by the above statements won’t change, and if we whack the racism mole, or the anti-Semitism mole, or the homophobia mole, other moles will pop up to take their places.  The Carreys and Fondas and Roxane Gays of the world will continue to hate their ideological outgroup as furiously as ever, until it occurs to someone to assign an “ism” to their idiosyncratic version of outgroup hatred, and people finally realize that they are no less bigoted than the “racists” they delight in hating.  Then a new “mole” will pop up with a new, improved version of outgroup hatred.  We will never control the underlying behavior and minimize the harm it does until we understand the innate reasons it exists to begin with.  In other words, it won’t go away until we learn to understand ourselves.

    And what of Einstein, not to mention the likes of Columbus, Washington, Madison, and Jefferson?  True, these men did more for the welfare of all mankind than any combination of their Social Justice Warrior accusers you could come up with, but for the time being, admiring them is forbidden.  After all, these men were “bad.”

  • How a “Study” Repaired History and the Evolutionary Psychologists Lived Happily Ever After

    Posted on June 12th, 2018 Helian No comments

    It’s a bit of a stretch to claim that those who have asserted the existence and importance of human nature have never experienced ideological bias. If that claim is true, then the Blank Slate debacle could never have happened. However, we know that it happened, based not only on the testimony of those who saw it for the ideologically motivated debasement of science that it was, such as Steven Pinker and Carl Degler, but also on that of the ideological zealots responsible for it themselves, such as Hamilton Cravens, who portrayed it as The Triumph of Evolution. The idea that the Blank Slaters were “unbiased” is absurd on the face of it, and can be immediately debunked by simply counting the number of times they accused their opponents of being “racists,” “fascists,” etc., in books such as Richard Lewontin’s Not in Our Genes, and Ashley Montagu’s Man and Aggression. More recently, the discipline of evolutionary psychology has experienced many similar attacks, as detailed, for example, by Robert Kurzban in an article entitled, Alas poor evolutionary psychology.

    The reasons for this bias have never been a mystery, either to the Blank Slaters and their latter day leftist descendants, or to evolutionary psychologists and other proponents of the importance of human nature. Leftist ideology requires not only that human beings be equal before the law, but that the menagerie of human identity groups they have become obsessed with over the years actually be equal, in intelligence, creativity, degree of “civilization,” and every other conceivable measure of human achievement. On top of that, they must be “malleable,” and “plastic,” and therefore perfectly adaptable to whatever revolutionary rearrangement in society happened to be in fashion. The existence and importance of human nature has always been perceived as a threat to all these romantic mirages, as indeed it is. Hence the obvious and seemingly indisputable bias.

    Enter Jeffrey Winking of the Department of Anthropology at Texas A&M, who assures us that it’s all a big mistake, and there’s really no bias at all! Not only that, but he “proves” it with a “study” in a paper entitled, Exploring the Great Schism in the Social Sciences, that recently appeared in the journal Evolutionary Psychology. We must assume that, in spite of his background in anthropology, Winking has never heard of a man named Napoleon Chagnon, or run across an article entitled Darkness’s Descent on the American Anthropological Association, by Alice Dreger.

    Winking begins his article by noting that “The nature-nurture debate is one that biologists often dismiss as a false dichotomy,” but adds, “However, such dismissiveness belies the long-standing debate that is unmistakable throughout the biological and social sciences concerning the role of biological influences in the development of psychological and behavioral traits in humans.” I agree entirely. One can’t simply hand-wave away the Blank Slate affair and a century of bitter ideological debate by turning up one’s nose and asserting the term isn’t helpful from a purely scientific point of view.

    We also find that Winking isn’t completely oblivious to examples of bias on the “nature” side of the debate. He cites the Harvard study group which “evaluated the merits of sociobiology, and which included intellectual giants like Stephen J. Gould and Richard Lewontin.” I am content to let history judge whether Gould and Lewontin were really “intellectual giants.” Regardless, if Winking actually read these “evaluations,” he cannot have failed to notice that they contained vicious ad hominem attacks on E. O. Wilson and others that it is extremely difficult to construe as anything but biased. Winking goes on to note similar instances of bias by other authors in various disciplines, such as,

    Many researchers use [evolutionary approaches to the study of international relations] to justify the status quo in the guise of science.

    The totality [of sociobiology and evolutionary psychology] is a myth of origin that is compelling precisely because it resonates strongly with Euro American presuppositions about the nature of the world.

    …in the social sciences (with the exception of primatology and psychology) sociobiology appeals most to right-wing social scientists.

    These are certainly compelling examples of bias. Now, however, Winking attempts to demonstrate that those who point out the bias, and correctly interpret the reasons for it, are just as biased themselves. As he puts it,

    Conversely, those who favor biological approaches have argued that those on the other side are rendered incapable of objective assessment by their ideological promotion of equality. They are alleged to erroneously reject evidence of biological influences because such evidence suggests that social outcomes are partially explained by biology, and this might inhibit the realization of equality. Their critiques of biological approaches are therefore often blithely dismissed as examples of the moralistic/naturalistic fallacy. This line of reason is exemplified in the quote by biologist Jerry Coyne

    If you can read the [major Evolutionary Psychology review paper] and still dismiss the entire field as worthless, or as a mere attempt to justify scientists’ social prejudices, then I’d suggest your opinions are based more on ideology than judicious scientific inquiry.

    I can’t imagine what Winking finds “blithe” about that statement!  Is it really “blithe” to so much as suggest that people who dismiss entire fields of science as worthless may be ideologically motivated?  I note in passing that Coyne must have thought long and hard about that statement, because his Ph.D. advisor was none other than Richard Lewontin, whom he still honors and admires!  Add to that the fact that Coyne is about as far as you can imagine from “right wing,” as anyone can see by simply visiting his Why Evolution is True website, and the notion that he is being “blithe” here is ludicrous.  Winking’s other examples of “blitheness” are similarly dubious, including,

    For critics, the heart of the intellectual problem remains an ideological adherence to the increasingly implausible view that human behavior is strictly determined by socialization… Should [social] hierarchies result strictly from culture, then the possibilities for an egalitarian future were seen to be as open and boundless as our ever-malleable brains might imagine.

    Like the Church, a number of contemporary thinkers have also grounded their moral and political views in scientific assumptions about… human nature, specifically that there isn’t one.

    Unlike the “comparable” statements by the Blank Slaters, these statements neither accuse those who deny the existence of human nature of being Nazis, nor do they lack evidence to back them up.  On the contrary, one could cite a mountain of evidence in their support, supplied by the Blank Slaters themselves.  Winking soon supplies us with the reason for this strained attempt to establish “moral equivalence” between “nature” and “nurture.”  It appears in his “hypothesis,” as follows:

    It is entirely possible that confirmation bias plays no role in driving disagreement and that the overarching debate in academia is driven by sincere disagreements concerning the inferential value of the research designs informing the debate.

    Wait a minute!  Don’t roll your eyes like that!  Winking has a “study” to back up this hypothesis.  Let me explain it to you.  He invented some “mock results” of studies which purported to establish, for example, the increased prevalence of an allele associated with “appetitive aggression” in populations with African ancestry.  Subtle, no?  Then he used Mechanical Turk and social media to come up with a sample of 365 people with master’s degrees or Ph.D.s for a survey on what they thought of the “inferential power” of the fake data.  Another sample of 71 was scraped together for another survey on “research design.”  In the larger sample, 307 described themselves as only “somewhat” on either the “nature” or the “nurture” side.  Only 57 claimed they leaned strongly one way or the other.  The triumphant results of the study included, for example, that,

    Participants perceptions of inferential value did not vary by the degree to which results supported a particular ideology, suggesting that ideological confirmation bias is not affecting participant perceptions of inferential value.

    Seriously?  Even the author admits that the statistical power of his “study” is low because of the small sample sizes.  Moreover, even that modest power assumes the samples actually represent the populations of interest, meaning, in this case, participants who are unequivocally on the “nature” or the “nurture” side.  That is hardly the case.  Mechanical Turk samples, for example, are biased towards a younger and more liberal demographic.  Most of the participants were on the fence between nature and nurture.  In other words, there’s no telling what their true opinions were even if they were honest about them.  Even the most extreme Blank Slaters admitted that nature plays a significant role in such bodily functions as urinating, defecating, and breathing, so participants of that persuasion could easily have described themselves as “somewhat bioist.”  Perhaps most importantly, any high school student could have easily seen what this “study” was about.  There is no doubt whatsoever that holders of master’s and doctoral degrees in related disciplines a) had no trouble inferring what the study was about, and b) had an interest in making sure that the results demonstrated that they were “unbiased.”  In other words, we’re not exactly talking “double blind” here.
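    To put some rough numbers on the sample-size problem, here is a minimal back-of-the-envelope sketch of a power calculation.  It is my own illustration, not anything taken from Winking’s paper: it assumes, purely for the sake of argument, that one were comparing two groups’ mean ratings with a t-test, and the effect size, significance level, and group sizes (the 57 committed “leaners” versus the full sample of 365, each split roughly in half) are all assumptions of mine.

        # Hypothetical power calculation (Python); the effect size, alpha, and
        # group sizes are my own illustrative assumptions, not figures from
        # Winking's study.
        from statsmodels.stats.power import TTestIndPower

        effect_size = 0.3   # assumed small-to-moderate standardized difference
        alpha = 0.05        # conventional significance threshold

        analysis = TTestIndPower()
        for n_per_group in (28, 182):   # ~57 strong "leaners" vs. all 365 participants, split in two
            power = analysis.solve_power(effect_size=effect_size, nobs1=n_per_group,
                                         alpha=alpha, ratio=1.0)
            print(f"n per group = {n_per_group:3d}  ->  power ~ {power:.2f}")

    Under those assumed numbers, the committed “leaners” yield power on the order of 20 percent, i.e., roughly a four-in-five chance of missing a real effect of that size.  That is simply the small-sample objection above put in slightly more concrete form.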

    I think the author was well aware that most readers would have no trouble detecting the blatant shortcomings of his “study.”  Apparently, to ward off ridicule, he wrote,

    Regardless of one’s position, it is important to remind scholars that if they believe a group of intelligent and informed academics could be so unknowingly blinded by ideology that they wholeheartedly subscribe to an unquestionably erroneous interpretation of an entire body of research, then they must acknowledge they themselves are equally as capable of being so misguided.

    Kind of reminds you of the curse over King Tut’s tomb, doesn’t it?  “May those who question my study be damned to dwell among the misguided forever!”  Sorry, my dear Winking, but “a group of intelligent and informed academics” not only could be, but actually were “so unknowingly blinded by ideology that they wholeheartedly subscribed to an unquestionably erroneous interpretation of an entire body of research.”  It was called the Blank Slate, and it derailed the behavioral sciences for more than half a century.  That’s what Pinker’s book was about.  That’s what Degler’s book was about, and yes, that’s even what Cravens’ book was about.  They all did an excellent job of documenting the debacle.  I suggest you read them.

    Or not.  You could decide to believe your study instead.  I have to admit, it would have its advantages.  History would be “fixed,” the lions would lie down with the lambs, and the evolutionary psychologists would live happily ever after.

  • On the Gleichschaltung of Evolutionary Psychology

    Posted on June 11th, 2018 Helian No comments

    When Robert Ardrey began his debunking of the ideologically motivated dogmas that passed for the “science” of human behavior in 1961 with the publication of his first book, African Genesis, he knew perfectly well what was at stake.  By that time what we now know as the Blank Slate orthodoxy had derailed any serious attempt by our species to achieve self-understanding for upwards of three decades.  This debacle in the behavioral sciences paralyzed any serious attempt to understand the roots of human warfare and aggression, the sources of racism, anti-Semitism, religious bigotry, and the myriad other manifestations of our innate tendency to perceive others in terms of ingroups and outgroups, the nature of human territorialism and status-seeking behavior, and the wellsprings of human morality itself.  A bit later, E. O. Wilson summed up our predicament as follows:

    Humanity today is like a waking dreamer, caught between the fantasies of sleep and the chaos of the real world.  The mind seeks but cannot find the precise place and hour.  We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology.  We thrash about.  We are terribly confused about the mere fact of our existence, and a danger to ourselves and the rest of life.

    In the end, the Blank Slate collapsed under the weight of its own absurdity, in spite of the now-familiar attempts to silence its opponents by vilification rather than logical argument.  The science of evolutionary psychology emerged based explicitly on acceptance of the reality and importance of innate human behavioral traits.  However, the ideological trends that resulted in the Blank Slate disaster to begin with haven’t disappeared.  On the contrary, they have achieved nearly unchallenged control of the social means of communication, including the entertainment industry, the “mainstream” news media, Internet monopolies such as Facebook, Google and Twitter, and, perhaps most importantly, academia.  There an ingroup defined by ideology has emerged that has always viewed the new science with a jaundiced eye.  By its very nature it challenges their assumptions of moral superiority, their cherished myths about the nature of human beings, and the viability of the various utopias they have always enjoyed concocting for the rest of us.  As Marx might have put it, this clash of thesis and antithesis has led to a synthesis in evolutionary psychology that might be described as creeping Gleichschaltung.  In other words, it is undergoing a slow process of getting “in step” with the controlling ideology.  It no longer seriously challenges the dogmas of that ideology, and the “studies” emerging from the field are increasingly, if not yet exclusively, limited to subjects that are deemed ideologically “benign.”  As a result, when it comes to addressing issues that are of real importance in terms of the survival and welfare of our species, the science of evolutionary psychology has become largely irrelevant.

    Consider, for example, the sort of articles that one typically finds in the relevant journals.  In the last four issues of Evolutionary Behavioral Sciences they have addressed such subjects as “Committed romantic relationships,” “Long-term romantic relationships,” “The effect of predictable early childhood environments on sociosexuality in early adulthood,” “Daily relationship quality in same-sex couples,” “Modern-day female preferences for resources and provisioning by long-term mates,” “Behavioral reactions to emotional and sexual infidelity: mate abandonment versus mate retention,” and “An evolutionary perspective on orgasm.”  Peering through the last four issues of the journal Evolutionary Psychology, we find, “Mating goals moderate power’s effect on conspicuous consumption among women,” “In-law preferences in China: What parents look for in the parents of their children’s mates,” “Endorsement of social and personal values predicts the desirability of men and women as long-term partners,” “Adaptive memory: remembering potential mates,” “Passion, relational mobility, and proof of commitment,” “Do men produce high quality ejaculates when primed with thoughts of partner infidelity?” and “Displaying red and black on a first date: A field study using the ‘First Dates’ television series.”

    All very interesting stuff, I’m sure, but the last time I checked humanity wasn’t faced with an existential threat due to cluelessness about the mechanics of reproduction.  Articles that might actually bear on our chances of avoiding self-destruction, on the other hand, are few and far between.  In short, evolutionary psychology has been effectively neutered.  Ostensibly, its only remaining purpose is to pad the curricula vitae of the professoriat in the publish-or-perish world of academia.

    Does it really matter?  Probably not much.  The claims of any branch of psychology to be a genuine science have always been rather tenuous, and must remain so as long as our knowledge of how the mind works and how consciousness can exist remains so limited.  Real knowledge of how the brain gives rise to innate behavioral predispositions, and how they are perceived and interpreted by our “rational” consciousness, is far more likely to be forthcoming from fields like neuroscience, genetics, and evolutionary biology than evolutionary psychology.  Meanwhile, we are free of the Blank Slate straitjacket, at least temporarily.  We no longer have to endure the sight of the court jesters of the Blank Slate striking heroic poses as paragons of “science,” and uttering cringeworthy imbecilities that are taken perfectly seriously by a fawning mass media.  Consider, for example, the following gems from clown-in-chief Ashley Montagu:

    All the field observers agree that these creatures (chimpanzees and other great apes) are amiable and quite unaggressive, and there is not the least reason to suppose that man’s pre-human primate ancestors were in any way different.

    The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.

    …man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings.

    In fact, I also think it very doubtful that any of the great apes have any instincts.  On the contrary, it seems that as social animals they must learn from others everything they come to know and do.  Their capacities for learning are simply more limited than those of Homo sapiens.

    In his heyday Montagu could rave on like that nonstop, and be taken perfectly seriously, not only by the media, but by the vast majority of the “scientists” in the behavioral disciplines.  Anyone who begged to differ was shouted down as a racist and a fascist.  We can take heart in the fact that we’ve made at least some progress since then.  Today one finds articles about human “instincts” in the popular media, and even academic journals, as if the subject had never been the least bit controversial.  True, the same “progressives” who brought us the Blank Slate now have evolutionary psychology firmly in hand, and are keeping it on a very short leash.  For all that, one can now at least study the subject of innate human behavior without fear that undue interest in the subject is likely to bring one’s career to an abrupt end.  Who knows?  With concurrent advances in our knowledge of the actual physics of the mind and consciousness, we may eventually begin to understand ourselves.

  • Morality and the Floundering Philosophers

    Posted on May 26th, 2018 Helian No comments

    In my last post I noted the similarities between belief in objective morality, or the existence of “moral truths,” and traditional religious beliefs. Both posit the existence of things without evidence, with no account of what these things are made of (assuming that they are not things that are made of nothing), and with no plausible explanation of how these things themselves came into existence or why their existence is necessary. In both cases one can cite many reasons why the believers in these nonexistent things want to believe in them. In both cases, for example, the livelihood of myriads of “experts” depends on maintaining the charade. Philosophers are no different from priests and theologians in this respect, but their problem is even bigger. If Darwin gave the theologians a cold, he gave the philosophers pneumonia. Not long after he published his great theory it became clear, not only to him, but to thousands of others, that morality exists because the behavioral traits which give rise to it evolved. The Finnish philosopher Edvard Westermarck formalized these rather obvious conclusions in his The Origin and Development of the Moral Ideas (1906) and Ethical Relativity (1932). At that point, belief in the imaginary entities known as “moral truths” became entirely superfluous. Philosophers have been floundering behind their curtains ever since, trying desperately to maintain the illusion.

    An excellent example of the futility of their efforts may be found online in the Stanford Encyclopedia of Philosophy in an entry entitled Morality and Evolutionary Biology. The most recent version was published in 2014.  It’s rather long, but to better understand what follows it would be best if you endured the pain of wading through it.  However, in a nutshell, it seeks to demonstrate that, even if there is some connection between evolution and morality, it’s no challenge to the existence of “moral truths,” which we are to believe can be detected by well-trained philosophers via “reason” and “intuition.”  Quaintly enough, the earliest source given for a biological explanation of morality is E. O. Wilson.  Apparently the Blank Slate catastrophe is as much a bugaboo for philosophers as for scientists.  Evidently it’s too indelicate for either of them to mention that the behavioral sciences were completely derailed for upwards of 50 years by an ideologically driven orthodoxy.  In fact, a great many highly intelligent scientists and philosophers wrote a great deal more than Wilson about the connection between biology and morality before they were silenced by the high priests of the Blank Slate.  Even during the Blank Slate men like Sir Arthur Keith had important things to say about the biological roots of morality.  Robert Ardrey, by far the single most influential individual in smashing the Blank Slate hegemony, addressed the subject at length long before Wilson, as did thinkers like Konrad Lorenz and Niko Tinbergen.  Perhaps if its authors expect to be taken seriously, this “Encyclopedia” should at least set the historical record straight.

    It’s already evident in the Overview section that the author will be running with some dubious assumptions.  For example, he speaks of “morality understood as a set of empirical phenomena to be explained,” and the “very different sets of questions and projects pursued by philosophers when they inquire into the nature and source of morality,” as if they were examples of the non-overlapping magisteria once invoked by Stephen Jay Gould. In fact, if one “understands the empirical phenomena” of morality, then the problem of the “nature and source of morality” is hardly “non-overlapping.”  Indeed, it solves itself.  The suggestion that they are non-overlapping depends on the assumption that “moral truth” exists in a realm of its own.  A bit later the author confirms he is making that assumption as follows:

    Moral philosophers tend to focus on questions about the justification of moral claims, the existence and grounds of moral truths, and what morality requires of us.  These are very different from the empirical questions pursued by the sciences, but how we answer each set of questions may have implications for how we should answer the other.

    He allows that philosophy and the sciences must inform each other on these “distinct” issues.  In fact, neither philosophy nor the sciences can have anything useful to say about these questions, other than to point out that they relate to imaginary things.  “Objects” in the guise of “justification of moral claims,” “grounds of moral truths,” and the “requirements of morality” exist only in fantasy.  The whole burden of the article is to maintain that fantasy, and insist that the mirage is real.  We are supposed to be able to detect that the mirages are real by thinking really hard until we “grasp moral truths,” and “gain moral knowledge.”  It is never explained what kind of a reasoning process leads to “truths” and “knowledge” about things that don’t exist.  Consider, for example, the following from the article:

    …a significant amount of moral judgment and behavior may be the result of gaining moral knowledge, rather than just reflecting the causal conditioning of evolution.  This might apply even to universally held moral beliefs or distinctions, which are often cited as evidence of an evolved “universal moral grammar.”  For example, people everywhere and from a very young age distinguish between violations of merely conventional norms and violations of norms involving harm, and they are strongly disposed to respond to suffering with concern.  But even if this partly reflects evolved psychological mechanisms or “modules” governing social sentiments and responses, much of it may also be the result of human intelligence grasping (under varying cultural conditions) genuine morally relevant distinctions or facts – such as the difference between the normative force that attends harm and that which attends mere violations of convention.

    It’s amusing to occasionally substitute “the flying spaghetti monster” or “the great green grasshopper god” for the author’s “moral truths.”  The “proofs” of their existence work just as well.  In the above, he is simply assuming the existence of “morally relevant distinctions,” and further assuming that they can be grasped and understood logically.  Such assumptions fly in the face of the work of many philosophers who demonstrated that moral judgments are always grounded in emotions, sometimes referred to by earlier authors as “sentiments,” or “passions,” and it is therefore impossible to arrive at moral truths through reason alone.  Assuming some undergraduate didn’t write the article, one must suppose the author had at least a passing familiarity with some of these people.  The Earl of Shaftesbury, for example, demonstrated the decisive role of “natural affections” as the origins of moral judgment in his Inquiry Concerning Virtue or Merit (1699), even noting in that early work the similarities between humans and the higher animals in that regard.  Francis Hutcheson very convincingly demonstrated the impotence of reason alone in detecting moral truths, and the essential role of “instincts and affections” as the origin of all moral judgment in his An Essay on the Nature and Conduct of the Passions and Affections (1728).  Hutcheson thought that God was the source of these passions and affections.  It remained for David Hume to present similar arguments on a secular basis in his A Treatise of Human Nature (1740).

    The author prefers to ignore these earlier philosophers, focusing instead on the work of Jonathan Haidt, who has also insisted on the role of emotions in shaping moral judgment.  Here I must impose on the reader’s patience with a long quote to demonstrate the type of “logic” we’re dealing with.  According to the author,

    There are also important philosophical worries about the methodologies by which Haidt comes to his deflationary conclusions about the role played by reasoning in ordinary people’s moral judgments.

    To take just one example, Haidt cites a study where people made negative moral judgments in response to “actions that were offensive yet harmless, such as…cleaning one’s toilet with the national flag.” People had negative emotional reactions to these things and judged them to be wrong, despite the fact that they did not cause any harms to anyone; that is, “affective reactions were good predictors of judgment, whereas perceptions of harmfulness were not” (Haidt 2001, 817). He takes this to support the conclusion that people’s moral judgments in these cases are based on gut feelings and merely rationalized, since the actions, being harmless, don’t actually warrant such negative moral judgments. But such a conclusion would be supported only if all the subjects in the experiment were consequentialists, specifically believing that only harmful consequences are relevant to moral wrongness. If they are not, and believe—perhaps quite rightly (though it doesn’t matter for the present point what the truth is here)—that there are other factors that can make an action wrong, then their judgments may be perfectly appropriate despite the lack of harmful consequences.

    This is in fact entirely plausible in the cases studied: most people think that it is inherently disrespectful, and hence wrong, to clean a toilet with their nation’s flag, quite apart from the fact that it doesn’t hurt anyone; so the fact that their moral judgment lines up with their emotions but not with a belief that there will be harmful consequences does not show (or even suggest) that the moral judgment is merely caused by emotions or gut reactions. Nor is it surprising that people have trouble articulating their reasons when they find an action intrinsically inappropriate, as by being disrespectful (as opposed to being instrumentally bad, which is much easier to explain).

    Here one can but roll one’s eyes.  It doesn’t matter a bit whether the subjects are consequentialists or not.  Haidt’s point is that logical arguments will always break down at some point, whether they are based on harm or not, because moral judgments are grounded in emotions.  Harm plays a purely ancillary role.  One could just as easily ask why the action in question is considered disrespectful, and the chain of logical reasons would break down just as surely.  Whoever wrote the article must know what Haidt is really saying, because Haidt refers explicitly to the ideas of Hume in the same book.  Unless the author simply doesn’t know what he’s talking about, we must conclude that he is deliberately misrepresenting what Haidt was trying to say.

    One of the author’s favorite conceits is that one can apply “autonomous applications of human intelligence,” meaning applications free of emotional bias, to the discovery of “moral truths” in the same way those logical faculties are applied in such fields as algebraic topology, quantum field theory, population biology, etc.  In his words,

    We assume in general that people are capable of significant autonomy in their thinking, in the following sense:

    Autonomy Assumption: people have, to greater or lesser degrees, a capacity for reasoning that follows autonomous standards appropriate to the subjects in question, rather than in slavish service to evolutionarily given instincts merely filtered through cultural forms or applied in novel environments. Such reflection, reasoning, judgment and resulting behavior seem to be autonomous in the sense that they involve exercises of thought that are not themselves significantly shaped by specific evolutionarily given tendencies, but instead follow independent norms appropriate to the pursuits in question (Nagel 1979).

    This assumption seems hard to deny in the face of such abstract pursuits as algebraic topology, quantum field theory, population biology, modal metaphysics, or twelve-tone musical composition, all of which seem transparently to involve precisely such autonomous applications of human intelligence.

    This, of course, leads up to the argument that one can apply this “autonomy assumption” to moral judgment as well.  The problem is that, in the other fields mentioned, one actually has something to reason about.  In mathematics, for example, one starts with a collection of axioms that are simply accepted as true, without worrying about whether they are “really” true or not.  In physics, there are observables that one can measure and record as a check on whether one’s “autonomous application of intelligence” was warranted or not.  In other words, one has physical evidence.  The same goes for the other subjects mentioned.  In each case, one is reasoning about something that actually exists.  In the case of morality, however, “autonomous intelligence” is being applied to a phantom.  Again, the same arguments are just as strong if one applies them to grasshopper gods.  “Autonomous intelligence” is useless if it is “applied” to something that doesn’t exist.  You can “reflect” all you want about the grasshopper god, but he will still stubbornly refuse to pop into existence.  The exact nature of the recondite logical gymnastics one must perform to successfully apply “autonomous intelligence” in this way is never explained.  Perhaps a Ph.D. in philosophy at Stanford is a prerequisite before one can even dare to venture forth on such a daunting logical quest.  Perhaps then, in addition to the sheepskin, they fork over a philosopher’s stone that enables one to transmute lead into gold, create the elixir of life, and extract “moral truths” right out of the vacuum.

    In short, the philosophers continue to flounder.  Their logical demonstrations of nonexistent “moral truths” are similar in kind to logical demonstrations of the existence of imaginary super-beings, and just as threadbare.  Why does it matter?  I can’t supply you with any objective “oughts” here, but at least I can tell you my personal prejudices on the matter, and my reasons for them.  We are living in a time of moral chaos, and will continue to do so until we accept the truth about the evolutionary origin of human morality and the implications of that truth.  There are no objective moral truths, and it will be extremely dangerous for us to continue to ignore that fact.  Competing morally loaded ideologies are already demonstrably disrupting our political systems.  It is hardly unlikely that we will once again experience what happens when fanatics stuff their “moral truths” down our throats as they did in the last century with the morally loaded ideologies of Communism and Nazism.  Do you dislike being bullied by Social Justice Warriors?  I’m sorry to inform you that the bullying will continue unabated until we explode the myth that they are bearers of “moral truths” that they are justified, according to “autonomous logic,” in imposing on the rest of us.  I could go on and on, but do I really need to?  Isn’t it obvious that a world full of fanatical zealots, all utterly convinced that they have a monopoly on “moral truth,” and a perfect right to impose these “truths” on everyone else, isn’t exactly a utopia?  Allow me to suggest that, instead, it might be preferable to live according to a simple and mutually acceptable “absolute” morality, in which “moral relativism” is excluded, and which doesn’t change from day to day in willy-nilly fashion according to the whims of those who happen to control the social means of communication.  As counter-intuitive as it seems, the only practicable way to such an outcome is acceptance of the fact that morality is a manifestation of evolved human nature, and of the truth that there are no such things as “moral truths.”

     

  • Morality and the Spiritualism of the Atheists

    Posted on May 11th, 2018 Helian No comments

    I’m an atheist.  I concluded there was no God when I was 12 years old, and never looked back.  Apparently many others have come to the same conclusion in western democratic societies where there is access to diverse opinions on the subject, and where social sanctions and threats of force against atheists are no longer as intimidating as they once were.  Belief in traditional religions is gradually diminishing in such societies.  However, they have hardly been replaced by “pure reason.”  They have merely been replaced by a new form of “spiritualism.”  Indeed, I would maintain that most atheists today have as strong a belief in imaginary things as the religious believers they so often despise.  They believe in the “ghosts” of good and evil.

    Most atheists today may be found on the left of the ideological spectrum.  A characteristic trait of leftists today is the assumption that they occupy the moral high ground. That assumption can only be maintained by belief in a delusion, a form of spiritualism, if you will – that there actually is a moral high ground.  Ironically, while atheists are typically blind to the fact that they are delusional in this way, it is often perfectly obvious to religious believers.  Indeed, this insight has led some of them to draw conclusions about the current moral state of society similar to my own.  Perhaps the most obvious conclusion is that atheists have no objective basis for claiming that one thing is “good” and another thing is “evil.”  For example, as noted by Tom Trinko at American Thinker in an article entitled “Imagine a World with No Religion,”

    Take the Golden Rule, for example. It says, “Do onto others what you’d have them do onto you.” Faithless people often point out that one doesn’t need to believe in God to believe in that rule. That’s true. The problem is that without God, there can’t be any objective moral code.

    My reply would be, that’s quite true, and since there is no God, there isn’t any objective moral code, either.  However, most atheists, far from being “moral relativists,” are highly moralistic.  As a consequence, they are dumbfounded by anything like Trinko’s remark.  It pulls the moral rug right out from under their feet.  Typically, they try to get around the problem by appealing to moral emotions.  For example, they might say something like, “What?  Don’t you think it’s really bad to torture puppies to death?”, or, “What?  Don’t you believe that Hitler was really evil?”  I certainly have a powerful emotional response to Hitler and tortured puppies.  However, no matter how powerful those emotions are, I realize that they can’t magically conjure objects into being that exist independently of my subjective mind.  Most leftists, and hence, most so-called atheists, actually do believe in the existence of such objects, which they call “good” and “evil,” whether they admit it explicitly or not.  Regardless, they speak and act as if the objects were real.

    The kinds of speech and actions I’m talking about are ubiquitous and obvious.  For example, many of these “atheists” assume a dictatorial right to demand that others conform to novel versions of “good” and “evil” they may have concocted yesterday or the day before.  If those others refuse to conform, they exhibit all the now familiar symptoms of outrage and virtuous indignation.  Do rational people imagine that they are gods with the right to demand that others obey whatever their latest whims happen to be?  Do they assume that their subjective, emotional whims somehow immediately endow them with a legitimate authority to demand that others behave in certain ways and not in others?  I certainly hope that no rational person would act that way.  However, that is exactly the way that many so-called atheists act.  To the extent that we may consider them rational at all, then, we must assume that they actually believe that whatever versions of “good” or “evil” they happen to favor at the moment are “things” that somehow exist on their own, independently of their subjective minds.  In other words, they believe in ghosts.

    Does this make any difference?  I suggest that it makes a huge difference.  I personally don’t enjoy being constantly subjected to moralistic bullying.  I doubt that many people enjoy jumping through hoops to conform to the whims of others.  I submit that it may behoove those of us who don’t like being bullied to finally call out this type of irrational, quasi-religious behavior for what it really is.

    It also makes a huge difference because this form of belief in imaginary objects has led us directly into the moral chaos we find ourselves in today.  New versions of “absolute morality” are now popping up on an almost daily basis.  Obviously, we can’t conform to all of them at once, and must therefore put up with the inconvenience of either keeping our mouths shut or risking being furiously condemned as “evil” by whatever faction we happen to offend.  Again, traditional theists are a great deal more clear-sighted than “atheists” about this sort of thing.  For example, in an article entitled, “Moral relativism can lead to ethical anarchy,” Christian believer Phil Schurrer, a professor at Bowling Green State University, writes,

    …the lack of a uniform standard of what constitutes right and wrong based on Natural Law leads to the moral anarchy we see today.

    Prof. Schurrer is right about the fact that we live in a world of moral anarchy.  I also happen to agree with him that most of us would find it useful and beneficial if we could come up with a “uniform standard of what constitutes right and wrong.”  Where I differ with him is on the rationality of attempting to base that standard on “Natural Law,” because there is no such thing.  For religious believers, “Natural Law” is law passed down by God, and since there is no God, there can be no “Natural Law,” either.  How, then, can we come up with such a uniform moral code?

    I certainly can’t suggest a standard based on what is “really good” or “really bad” because I don’t believe in the existence of such objects.  I can only tell you what I would personally consider expedient.  It would be a standard that takes into account what I consider to be some essential facts.  These are as follows.

    • What we refer to as morality is an artifact of “human nature,” or, in other words, innate predispositions that affect our behavior.
    • These predispositions exist because they evolved by natural selection.
    • They evolved by natural selection because they happened to improve the odds that the genes responsible for their existence would survive and reproduce at the time and in the environment in which they evolved.
    • We are now living at a different time, and in a different environment, and it cannot be assumed that blindly responding to the predispositions in question will have the same outcome now as it did when those predispositions evolved.  Indeed, it has been repeatedly demonstrated that such behavior can be extremely dangerous.
    • Outcomes of these predispositions include a tendency to judge the behavior of others as “good” or “evil.”  These categories are typically deemed to be absolute, and to exist independently of the conscious minds that imagine them.
    • Human morality is dual in nature.  Others are perceived in terms of ingroups and outgroups, with different standards applying to what is deemed “good” or “evil” behavior towards those others depending on the category to which they are imagined to belong.

    I could certainly expand on this list, but the above are some of the most salient and essential facts about human morality.  If they are true, then it is possible to make at least some preliminary suggestions about how a “uniform standard” might look.  It would be as simple as possible.  It would be derived to minimize the dangers referred to above, with particular attention to the dangers arising from ingroup/outgroup behavior.  It would be limited in scope to interactions between individuals and small groups in cases where the rational analysis of alternatives is impractical due to time constraints, etc.  It would be in harmony with innate human behavioral traits, or “human nature.”  It is our nature to perceive good and evil as real objective things, even though they are not.  This implies there would be no “moral relativism.”  Once in place, the moral code would be treated as an absolute standard, in conformity with the way in which moral standards are usually perceived.  One might think of it as a “moral constitution.”  As with political constitutions, there would have to be some means of amending it.  However, it would not be open to arbitrary innovations spawned by the emotional whims of noisy minorities.

    How would such a system be implemented?  It’s certainly unlikely that any state will attempt it any time in the foreseeable future.  Perhaps it might happen gradually, just as changes to the “moral landscape” have usually happened in the past.  For that to happen, however, it would be necessary for significant numbers of people to finally understand what morality is, and why it exists.  And that is where, as an atheist, I must part company with Mr. Trinko, Prof. Schurrer, and the rest of the religious right.  Progress towards a uniform morality that most of us would find a great deal more useful and beneficial than the versions currently on tap, regardless of what goals or purposes we happen to be pursuing in life, cannot be based on the illusion that a “natural law” exists that has been handed down by an imaginary God, any more than it can be based on the emotional whims of leftist bullies.  It must be based on a realistic understanding of what kind of animals we are, and how we came to be.  However, such self knowledge will remain inaccessible until we shed the shackles of religion.  Perhaps, as they witness many of the traditional churches increasingly becoming leftist political clubs before their eyes, people on the right of the political spectrum will begin to find it less difficult to free themselves from those shackles.  I hope so.  I think that an Ansatz based on simple, traditional moral rules, such as the Ten Commandments, is more likely to lead to a rational morality than one based on furious rants over who should be allowed to use what bathrooms.  In other words, I am more optimistic that a useful reform of morality will come from the right rather than the left of the ideological spectrum, as it now stands.  Most leftists today are much too heavily invested in indulging their moral emotions to escape from the world of illusion they live in.  To all appearances they seriously believe that blindly responding to these emotions will somehow magically result in “moral progress” and “human flourishing.”  Conservatives, on the other hand, are unlikely to accomplish anything useful in terms of a rational morality until they free themselves of the “God delusion.”  It would seem, then, that for such a moral “revolution” to happen, it will be necessary for those on both the left and the right to shed their belief in “spirits.”

     

  • On the Illusion of Moral Relativism

    Posted on April 8th, 2018 Helian No comments

    As recently as 2009 the eminent historian Paul Johnson informed his readers that he made “…the triumph of moral relativism the central theme of my history of the 20th century, Modern Times, first published in 1983.”  More recently, however, obituaries of moral relativism have turned up here and there.  For example, one appeared in The American Spectator back in 2012, fittingly entitled Moral Relativism, R.I.P.  It was echoed a few years later by a piece in The Atlantic that announced The Death of Moral Relativism.  There’s just one problem with these hopeful announcements.  Genuine moral relativists are as rare as unicorns.

    True, many have proclaimed their moral relativism.  To that I can only reply, watch their behavior.  You will soon find each and every one of these “relativists” making morally loaded pronouncements about this or that social evil, wrong-headed political faction, or less than virtuous individual.  In other words, their “moral relativism” is of a rather odd variety that occasionally makes it hard to distinguish their behavior from that of the more zealous moral bigots.  Scratch the surface of any so-called “moral relativist,” and you will often find a moralistic bully.  We are not moral relativists because it is not in the nature of our species to be moral relativists.  The wellsprings of human morality are innate.  One cannot arbitrarily turn them on or off by embracing this or that philosophy, or reading this or that book.

    I am, perhaps, the closest thing to a moral relativist you will ever find, but when my moral emotions kick in, I’m not much different from anyone else.  Just ask my dog.  When she’s riding with me she’ll often glance my way with a concerned look as I curse the lax morals of other drivers.  No doubt she’s often wondered whether the canine’s symbiotic relationship with our species was such a good idea after all.  I know perfectly well the kind of people Paul Johnson was thinking of when he spoke of “moral relativists.”  However, I’ve watched the behavior of the same types my whole life.  If there’s one thing they all have in common, it’s a pronounced tendency to dictate morality to everyone else.  They are anything but “amoral,” or “moral relativists.”  The difference between them and Johnson is mainly a difference in their choice of outgroups.

    Edvard Westermarck may have chosen the title Ethical Relativity for his brilliant analysis of human morality, yet he was well aware of the human tendency to perceive good and evil as real, independent things.  The title of his book did not imply that moral (or ethical) relativism was practical for our species.  Rather, he pointed out that morality is a manifestation of our package of innate behavioral predispositions, and that it follows that objective moral truths do not exist.  In doing so he was pointing out a fundamental truth.  Recognition of that truth will not result in an orgy of amoral behavior.  On the contrary, it is the only way out of the extremely dangerous moral chaos we find ourselves in today.

    The moral conundrum we find ourselves in is a result of the inability of natural selection to keep up with the rapidly increasing complexity and size of human societies.  For example, a key aspect of human moral behavior is its dual nature – our tendency to perceive others in terms of ingroups and outgroups.  We commonly associate “good” traits with our ingroup, and “evil” ones with our outgroup.  That aspect of our behavior enhanced the odds that we would survive and reproduce at a time when there was no ambiguity about who belonged in these categories.  The ingroup was our own tribe, and the outgroup was the next tribe over.  Our mutual antagonism tended to make us spread out and avoid starvation due to over-exploitation of a small territory.  We became adept at detecting subtle differences between “us” and “them” at a time when it was unlikely that neighboring tribes differed by anything as pronounced as race or even language.  Today we have given bad names to all sorts of destructive manifestations of outgroup identification without ever grasping the fundamental truth that the relevant behavior is innate, and no one is immune to it.  Racism, anti-Semitism, bigotry, you name it.  They’re all fundamentally the same.  Those who condemn others for one manifestation of the behavior will almost invariably be found doing the same thing themselves, the only difference being who they have identified as the outgroup.

    Not unexpectedly, behavior that accomplished one thing in the Pleistocene does not necessarily accomplish the same thing today.  The disconnect is often absurd, leading in some cases to what I’ve referred to as morality inversions – moral behavior that promotes suicide rather than survival.  That has not prevented those who are happily tripping down the path to their own extinction from proclaiming their moral superiority and raining down pious anathemas on anyone who doesn’t agree.  Meanwhile, new versions of morality are concocted on an almost daily basis, each one pretending to objective validity, complete with a built in right to dictate “goods” and “bads” that never occurred to anyone just a few years ago.

    There don’t appear to be any easy solutions to the moral mess we find ourselves in.  It would certainly help if more of us could accept the fact that morality is an artifact of natural selection, and that, as a consequence, objective good and evil are figments of our imaginations.  Perhaps then we could come up with some version of “absolute” morality that would be in tune with our moral emotions and at the same time allow us to interact in a manner that minimizes both the harm we do to each other and our exposure to the tiresome innovations of moralistic bullies.  That doesn’t appear likely to happen anytime soon, though.  The careers of too many moral pontificators and “experts on ethics” depend on maintaining the illusion.  Meanwhile, we find evolutionary biologists, evolutionary psychologists, and neuroscientists who should know better openly proclaiming the innate sources of moral behavior in one breath, and extolling some idiosyncratic version of “moral progress” and “human flourishing” in the next.  As one of Evelyn Waugh’s “bright young things” might have said, it’s just too shy-making.

    There is a silver lining to the picture, though.  At least you don’t have to worry about “moral relativism” anymore.