Helian Unbound

The world as I see it
  • No, All Things are Not Permissible, and All Things are Not Not Permissible

    Posted on July 9th, 2018 Helian No comments

    IMHO it is a fact that good and evil do not exist as independent, objective things.  If they do not exist, then the moral properties that depend on them, such as “permissible,” have no objective existence, either.  It follows that it is not even rational to ask the question whether something is permissible or not as an independent fact.  In other words, if there is no such thing as objective morality, then it does not follow that “everything is permissible.”  It also does not follow that “everything is not permissible.”  As far as the universe is concerned, the term “permissible” does not exist.  In other words, there is no objective reason to obey a given set of moral rules, nor is there an objective reason not to obey those rules.

    I note in passing that if the above were not true, and the conclusion that good and evil do not exist as objective things actually did imply that “everything is permissible,” as some insist, it would not alter the facts one bit.  The universe would shrug its shoulders and ask, “So what?”  If the absence of good and evil as objective things leads to conclusions that some find unpleasant, will that alter reality and magically cause them to pop into existence?  That hasn’t worked with a God, and it won’t work with objective good and evil, either.

    I just read a paper by Matt McManus on the Quillette website that nicely, if unintentionally, demonstrates what kind of an intellectual morass one wades into if one insists that good and evil are real, objective things.  It’s entitled Why Should We Be Good?  The first two paragraphs include the following:

    Today we are witnessing an irrepressible and admirable pushback against the specters of ‘cultural relativism’ and moral ‘nihilism.’ …Indeed, relativism and the moral nihilism with which it is often affiliated, seems to be in retreat everywhere.  For many observers and critics, this is a wholly positive development since both have the corrosive effect of undermining ethical certainty.

    The author goes on to cite what he considers two motivations for the above, one “negative,” and one “positive.”  As he puts it,

    The negative motivation arises from moral dogmatism.  There are those who wish to dogmatically assert their own values without worrying that they may not be as universal as one might suppose… Ethical dogmatists do not want to be confronted with the possibility that it is possible to challenge their values because they often cannot provide good reasons to back them up.

    He adds that,

    The positive motivation was best expressed by Allan Bloom in his 1987 classic The Closing of the American Mind.

    Well, I wouldn’t exactly describe Bloom’s book as “positive.”  It struck me as a curmudgeonly rant about how “today’s youth” didn’t measure up to how he thought they “ought” to be.  Be that as it may, the author finally gets to the point:

    The issue I wish to explore is this:  even if we know which values are universal, why should we feel compelled to adhere to them?

    To this I would reply that there are no universal values, and since they don’t exist, they can’t be known.  This reduces the question of why we should feel compelled to adhere to them to nonsense.  In fact, what the author is doing here is outing himself as a dogmatist.  He just thinks he’s better than other dogmatists because he imagines he can “provide good reasons to back up” his personal dogmas.  It turns out his “good reasons” amount to an appeal to authority, as follows:

    Kant argued, very powerfully, that a human being’s innate practical reason begets a universal set of “moral laws” which any rational person knows they must follow.

    Good dogma, no?  After all, who can argue with Kant?  “Obscurely” would probably be a better word than “powerfully.”   Some of his sentences ran on for a page and a half, larded with turgid German philosophical jargon from start to finish.  Philosophers pique themselves on “understanding” him, but seldom manage to get much further than the categorical imperative in practice.  I suspect they’re wasting their time.  McManus assures us that Kant read Hume.  If so, he must not have comprehended what he was reading in passages such as,

    We speak not strictly and philosophically when we talk of the combat of passion and of reason.  Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

    If morality had naturally no influence on human passions and actions, ’twere in vain to take such pains to inculcate it: and nothing wou’d be more fruitless than that multitude of rules and precepts, with which all moralists abound.

    Since morals, therefore, have an influence on the actions and affections, it follows, that they cannot be deriv’d from reason; and that because reason alone, as we have already prov’d, can never have any such influence.  Morals excite passions, and produce or prevent actions.  Reason of itself is utterly impotent in this particular.  The rules of morality, therefore, are not conclusions of our reason…

    What Hume wrote above isn’t just the expression of some personal ideological idiosyncrasy, but the logical conclusion of the thought of a long line of British and Scottish philosophers.  I find his thought on morality “very powerful,” and have seen no evidence that Kant ever seriously addressed his arguments.  We learned where the emotions Hume referred to actually came from in 1859 with the publication of The Origin of Species, more than half a century after Kant’s death.  It’s beyond me how Kant could have “argued powerfully” about a “universal set of moral laws” in spite of his ignorance of the real manner in which they are “begotten.”  No matter, McManus apparently still believes, “because Kant,” that we can “know” some “universal moral law.”  He continues,

    While we might know that these “moral laws” apply universally, why should we feel compelled to obey them?

    According to McManus, the 19th century philosopher Henry Sidgwick made some “profound contributions” to answering this question, which Sidgwick himself considered “the profoundest problem in ethics.” Not everyone thought Sidgwick was all that profound.  Westermarck dealt rather harshly with his “profound” thoughts in his The Origin and Development of the Moral Ideas.  In the rest of his article, McManus reviews the thought of several other philosophers on the subject, and finds none of them entirely to his liking.  He finally peters out with nary an answer to the question, “Why should we be good?”  In fact there is no objective answer to the question, because there is no objective good.  McManus’ “dogma with good reasons” is just as imaginary as all the “dogmas without good reasons” at which he turns up his nose.

    The philosophers are in no hurry to wade back out of this intellectual morass.  Indeed, their jobs depend on expanding it.  For those of us who prefer staying out of swamps, however, the solution to McManus’ enigma is simple enough.  Stop believing in the ghosts of objective good and evil.  Accept the fact that what we call morality exists because the innate mental traits that give rise to it themselves exist by virtue of evolution by natural selection.  Then follow that fundamental fact to its logical conclusions.  One of those conclusions is that there is nothing whatsoever objective about morality.  It is a purely subjective phenomenon.  That is simply a fact of nature.  As such, it is quite incapable of rendering “everything permissible,” or “everything not permissible.”  Furthermore, realization of that fact will not change how the questions of what is permissible and what is not permissible are answered.  Those questions will continue to be answered just as they always have been, in the subjective minds of individuals.

    Acceptance of these truths about morality will not result in “moral nihilism,” or “cultural relativity,” or the hegemony of postmodernism.  All of these things can result from our attempts to reason about what our emotions are trying to tell us, but so can moral absolutism.  On the other hand, acceptance of the truth may enable us to avoid some of the real dangers posed by our current “system” of blindly responding to moral emotions, and just as blindly imagining that the result will be “moral progress.”  For example, if morality is a manifestation of evolved behavioral traits, those traits must have been selected in times that were very different from the present.  It is highly unlikely that blindly following where our emotions seem to be leading us will have the same effect now as it did then.  In fact, those emotions might just as well be leading us over the edge of a cliff.

    If morality is a manifestation of evolved behavioral traits, then arbitrarily isolating moral behavior from the rest of our innate behavioral repertoire, sometimes referred to as human nature, can also be misleading.  For example, we have a powerful innate tendency to distinguish others in terms of ingroup and outgroup, applying different versions of morality to each.  This can delude us into seriously believing that vast numbers of the people we live with are “bad.”  In the past, we have often imagined that we must “resist” and “fight back” against these “bad” people, resulting in mayhem that has caused the death of countless millions, and misery for countless millions more.  From my own subjective point of view, it would be better to understand the innate emotional sources of such subjective fantasies, and at least attempt to find a way to avoid the danger they pose.  Perhaps one day enough people will agree with me to make a difference.  The universe doesn’t care one way or the other.

    Nihilism and chaos will not result from acceptance of the truth.  When it comes to morality, nihilism and chaos are what we have now.  I happen to be among those who would prefer some form of “moral absolutism,” even though I realize that its legitimacy must be based on the subjective desires of individuals rather than on some mirage of “objective truth.”  I would prefer living under a simple moral code, in harmony with human nature, designed to enable us to live together with a minimum of friction and a maximum of personal liberty.  No rule would be accepted without examining its innate emotional basis, what the emotions in question accomplished at the time they evolved, and whether they would still accomplish the same thing in the different environment we live in now.  Generalities about “moral progress” and “human flourishing” would be studiously ignored.

    I see no reason why the subjective nature of morality would prevent us from adopting such an “absolute morality.”  There would, of course, be no objective reason why we “should be good” according to the rules of such a system.  The reasons would be the same subjective ones that have always been the real basis for all the versions of morality our species has ever come up with.  In the first place, if the system really was in harmony with human nature, then for many of us, our “conscience” would prompt us to “do good.”  Those with a “weak conscience” who ignored the moral law, free riders if you will, would be dealt with much the same way they have always been dealt with.  They would be shamed, punished, and, if necessary, isolated from the rest of society.

    I know, we are very far from realizing this utopia, or even from accepting the simplest truths about morality and what they imply.  I’ve always been one for daydreaming, though.

  • Please, Leave Me Out of Your Philosophical Pigeonholes

    Posted on June 27th, 2018 Helian 2 comments

    Yes, I know it is human nature to categorize virtually everything. As I noted in my last post, it reduces complexity to manageable levels. When it comes to worldviews and philosophies, we categorize them into schools of thought. I hope my readers will resist the tendency to stuff me into one of these pigeonholes. For better or worse, it seems to me I don’t belong in any of them.

    The fundamental truth I defend is the non-existence of objective morality. That does not mean, however, that I belong in the postmodernist category. Postmodernists may claim that moral truths are social constructs, but that doesn’t prevent them from furiously defending their own preferred version as their “truth,” or defending the alternative preferred versions of certain fashionable identity groups as “true” for those groups. I am not a postmodernist because I reject claims by any individual or group whatsoever that they have a legitimate right to apply their moral rules to me, whether they are socially constructed or not. Postmodernists act as if they had this right to dictate to others, regardless of what they say about “moral relativity.”

    Neither does the fact that I deny the existence of objective morality mean I am a “moral nihilist.” In fact, we actually live in a state of moral nihilism and chaos today for the very reason that we insist on believing the illusion that there are objective moral truths. Human beings have an overwhelming innate tendency to believe that their idiosyncratic versions of “good” and “evil” represent “truths.” For the most part, they will continue to believe that regardless of what anyone happens to write on the subject. My personal preference would be to live in a world where such an “absolute” morality prevails. However, this “absolute” system would be constructed in full knowledge of the fact that it represented a necessary and useful expedient, and most decidedly not that it reflected objective moral truths. It would be possible to alter and amend this “absolute” system when necessary, but by a means more rational than the current method of allowing those bullies who throw the most flamboyant moralistic temper tantrums to set it up as they please. I propose such a system not because I think we “ought” to do it as a matter of objective fact, but merely because I would personally find it expedient as a means of pursuing the goals I happen to have in life, and believe that others may agree it would be expedient as far as they’re concerned as well.

    Finally, the fact that I deny the existence of objective morality most decidedly does not mean that I belong in the “error theory” category with the likes of J. L. Mackie. Mackie claimed he denied the objective existence of moral properties. However, he also claimed that we “ought” to do some things, and had a “duty” to do others. I consider this nonsense, and a complete contradiction of his claims about the non-existence of objective good and evil. I recently ran across a paper that illustrates very nicely why I would prefer to stay out of this particular pigeonhole. The paper in question was written by Prof. Bart Streumer of the University of Groningen in the Netherlands, and is entitled The Unbelievable Truth about Morality. The opening paragraph of the paper reads as follows:

    Have you ever suspected that even though we call some actions right and other actions wrong, nothing is really right or wrong? If so, there is a philosophical theory that agrees with you: the error theory. According to the error theory, moral judgments are beliefs that ascribe moral properties to actions or to people, but these properties do not exist. The error theory therefore entails that all moral judgments are false. Just as atheism says that God does not exist and that all religious beliefs are false, the error theory says that moral properties do not exist and that all moral judgments are false.

    That may seem to be a concise statement of my own beliefs regarding objective moral claims, but hold onto your hat. In what follows the author comes up with a number of highly dubious conclusions about the supposed implications of “error theory.” In the end he runs completely off the track into the same swamp we were in before, and something indistinguishable from objective morality still prevails. In closing, he triumphantly informs us of his amazing discovery that “error theory” doesn’t “undermine morality!”

    I’m not going to review the entire paper in detail. Interested readers are welcome to do that on their own. Instead I will focus on some of the things the author imagines follow from error theory. These include the notion that a “part” of error theory is “cognitivism.” A “cognitivist” is one who claims that moral judgments are “beliefs.” According to the author, there is a whole “school” of “cognitivists,” countered by another whole “school” of “non-cognitivists.” In his words,

    Opponents of cognitivism, who are known as non-cognitivists, deny that these judgments are beliefs. They instead take moral judgments to be non-cognitive attitudes, such as feelings of approval or disapproval.

    Really? Have philosophers now become that ignorant of philosophy? Whatever happened to the likes of Shaftesbury, Hutcheson, and Hume? They claimed that moral beliefs and moral “feelings of approval or disapproval” were inextricably bound together, that the former were the result of reasoning about the latter, and that moral beliefs are, in fact, impossible without these “feelings.” The very idea that human beings are capable of blindly responding to emotions without forming beliefs about what they imply is referred to by behavioral scientists as “genetic determinism,” and the term “genetic determinist” itself is used merely as a pejorative to describe someone who believes in an impossible fantasy. If we are to credit the author, such specimens actually exist somewhere in the dank halls of academia.

    It would seem, then, that I can’t be an “error theorist,” because I find this false dichotomy between “cognitivism” and “non-cognitivism” absurd, regardless of the author’s claims about how fashionable it is among the philosophers. Not only does the author fail to mention the work of important philosophers who would have deemed this dichotomy nonsense, but he fails to mention any connection between morality and evolution by natural selection. Is he ignorant of a discipline known as evolutionary psychology? Is he completely oblivious to what the neuroscientists have been telling us lately? If “error theory” rejects the objective existence of moral properties, shouldn’t a paper on the subject at least discuss in passing what reasons there might be for the nearly universal belief in such imaginary objects?  Natural selection is certainly among the more plausible explanations.

    In what follows, we finally discover the connection between this remarkable dichotomy and the “unbelievable truth” mentioned in the article’s title. According to the paper, an objection to error theory is as follows:

    If the error theory is true, all moral judgments are false.
    It is wrong to torture babies for fun.
    So the judgment that it is wrong to torture babies for fun is true.
    So at least one moral judgment is true.
    So the error theory is false.

    The author allows that this is a tough one for error theorists. In his words,

    …this objection is hard to answer for error theorists. It is overwhelmingly plausible that it is wrong to torture babies for fun. Error theorists could deny that this entails that the judgment that it is wrong to torture babies for fun is true. But they can only deny this if they endorse non-cognitivism about this judgment, and non-cognitivism conflicts with the error theory. It therefore seems that error theorist must answer this objection by denying that it is wrong to torture babies for fun. But then we should ask what is more plausible: that the error theory is true, or that it is wrong to torture babies for fun. This objection therefore seems to show that we should reject the error theory.

    Now do you see where the false dichotomy comes in? Why on earth should it be “overwhelmingly plausible” that it is wrong to torture babies for fun, not merely in the opinion of this or that individual, but as a matter of objective fact? Where is the basis for this “fact?” How did that basis acquire an independent and legitimate authority to dictate to human beings what they ought and ought not to do? How did it come into existence to begin with? Unless one can answer these questions, there is no reason to believe in the existence of objective moral truths, and therefore no rational explanation for the conclusion that any moral claim whatsoever is “overwhelmingly plausible.” It makes as much sense as the claim that there must be unicorns because one really, really believes deep down that it is “overwhelmingly plausible” that there are unicorns. It is only “overwhelmingly plausible” that it is wrong to torture babies because most of us have a very powerful “feeling” that it is wrong. But (aha, oho!) “error theorists” are prohibited from referencing that feeling in denying this “truth” because that would be “non-cognitivism” and they can’t be “non-cognitivists!”

    The rest of the paper goes something like this: Error theory is true. However, if error theory is true, then the claim that it is wrong to torture babies is false, and that is unbelievable. Therefore, error theory is both true and unbelievable. The conclusion:  “Our inability to believe this general error theory therefore prevents it from undermining morality.”  Whatever. One thing that the paper very definitely shows is that I am not an “error theorist.”

    What the “tortured babies” argument really amounts to is the claim that truth can be manufactured out of the vacuum by effective manipulation of moral emotions. It’s just another version of the similar arguments Sam Harris uses to prop up his equally bogus claim that there are objective moral truths. I note in passing the author’s claim that J. L. Mackie was the first philosopher to defend the error theory. That may be true as far as the description of error theory presented in the paper is concerned. However, a far more coherent argument to the effect that objective moral properties do not exist was published by Edvard Westermarck more than 70 years earlier. Perhaps it would be helpful if philosophers would at least reference his work in future discussions of error theory and related topics instead of continuing to ignore him.

    But to return to the moral of the story, not only am I not a postmodernist, a moral nihilist, or a moral relativist, I am not an “error theorist” either. I certainly believe that there are facts about the universe, and that they will stubbornly remain facts regardless of whether any conscious being chooses to believe they are facts or not. I simply don’t believe that these facts include objective moral truths. Apparently, at the risk of overdramatizing myself, I must conclude that I represent a church of one. I hope not but, in any case, when it comes to pigeonholing, please don’t round me up as one of the “usual suspects.”

  • On the “Immorality” of Einstein

    Posted on June 24th, 2018 Helian 2 comments

    Morality exists because of “human nature.”  In other words, it is a manifestation of innate behavioral traits that themselves exist by virtue of evolution by natural selection.  It follows that morality has no goal, no purpose, and no function, because in order to have those qualities it must necessarily have been created by some entity capable of intending a goal, purpose, or function for it.  There was no such entity.  In human beings, the traits in question spawn the whimsical illusion that purely imaginary things that exist only in the subjective minds of individuals, such as good, evil, rights, values, etc., actually exist as independent objects.  The belief in these mirages is extremely powerful.  Occasionally a philosopher here and there will assert a belief in “moral relativity,” but in the end one always finds them, to quote a pithy Biblical phrase, returning like dogs to their own vomit.  After all their fine phrases, one finds them picking sides, sagely informing us that some individual or group is “good,” and some other ones “evil,” and that we “ought” to do one thing and “ought not” to do another.

    What does all this have to do with Einstein?  Well, recently he was accused of expressing impure thoughts in some correspondence he imagined would be private.  The nutshell version can be found in a recent article in the Guardian entitled, Einstein’s travel diaries reveal ‘shocking’ xenophobia.  Among other things, Einstein wrote that the Chinese he saw were “industrious, filthy, obtuse people,” and “even the children are spiritless and look obtuse.”  He “…noticed how little difference there is between men and women,” adding, “I don’t understand what kind of fatal attraction Chinese women possess which enthralls the corresponding men to such an extent that they are incapable of defending themselves against the formidable blessing of offspring.”  He was more approving of the Japanese, noting that they were “unostentatious, decent, altogether very appealing,” and that he found “Pure souls as nowhere else among people.  One has to love and admire this country.”

    It goes without saying that only a moron could seriously find such comments “shocking” in the context of their time.  In the first place, Einstein was categorizing people into groups, as all human beings do, because we lack the mental capacity to store individual vignettes of all the billions of people on the planet.  He then pointed out certain things about these groups that he honestly imagined to be true.  He nowhere expressed hatred of any of the people he described, nor did he anywhere claim that the traits he described were “innate” or had a “biological origin,” as falsely claimed by the author of the article.  He associated them with the Chinese “race,” but might just as easily have been describing cultural characteristics at a given time as anything innate.  Furthermore, “race” at the time that Einstein wrote could be understood quite differently from the way it is now.  In the 19th century, for example, the British and Afrikaners in South Africa were commonly described as different “races.”  Today we have learned some hard lessons about the potential harm of broadly attributing negative qualities to entire populations, but in the context of the time they were written, ideas similar to the ones expressed by Einstein were entirely commonplace.

    In light of the above, consider the public response to the recent revelations about the content of Einstein’s private papers.  It is a testimony to the gross absurdity of human moral behavior in the context of an environment radically different from the one in which it evolved.  Einstein is actually accused by some of being a “racist,” a “xenophobe,” a “misogynist,” or, in short, a “bad” man.  Admirers of Einstein have responded by citing all the good-sounding reasons for the claim that Einstein was actually a “good” man.  These responses are equivalent to debating whether Einstein was “really a green unicorn,” or “really a blue unicorn.”  The problem with that is, of course, that there are no unicorns to begin with.  The same is true of objective morality.  It doesn’t exist.  Einstein wasn’t “good,” nor was he “bad,” because these categories do not exist as independent objects.  They are subjective, and exist only in our imaginations.  They are imagined to be real because there was a selective advantage to imagining them to be real in a given environment.  That environment no longer exists.  These are simple statements of fact.

    As so often happens in such cases, one side accuses the other of “moral relativity.”  In his response to this story at the Instapundit website, for example, Ed Driscoll wrote, “A century later, is the age of moral relativity about to devour the legacy of the man who invented the real theory of relativity?”  The problem here is most definitely not moral relativity.  In fact, it is the opposite – the illusion of objective morality.  The people attacking Einstein are moral absolutists.  If that were not true, what could possibly be the point of attacking him?  A genuine moral relativist would simply conclude that Einstein’s personal version of morality was different from theirs, and leave it at that.  That is not what is happening here.  Instead, Einstein is accused of violating “moral laws,” the most fashionable and up-to-date versions of which were concocted long after he was in his grave.  In spite of that, these “moral laws” are treated as if they represented objective facts.  Einstein was “bad” for violating them even though he had no way of knowing that these “moral laws” would exist nearly a century after he wrote his journals.  Is it not obvious that judging Einstein in this way would be utterly irrational unless these newly minted “moral laws” were deemed to be absolute, with a magical existence of their own, independent of what goes on in the subjective minds of individuals?

    Consider what is actually meant by this accusation of “racism.”  Normally a racist is defined as one who considers others to be innately evil or inferior by virtue of their race, and who hates and despises them by virtue of that fact.  It is simply one manifestation of the universal human tendency to perceive others in terms of ingroups and outgroups.  When this type of behavior evolved, there was no ambiguity about the identity of the outgroup.  It was simply the next tribe over.  The perception of “outgroup” could therefore be determined by very subtle differences without affecting the selective value of the behavior.  Now, however, with our vastly increased ability to travel long distances and communicate with others all over the world, we are quite capable of identifying others as “outgroup” whom we never would have heard of or come in contact with in our hunter-gatherer days.  As a result, the behavior has become “dysfunctional.”  It no longer accomplishes the same thing it did when it evolved.  Racism is merely one of the many manifestations of this now dysfunctional trait that has been determined by hard experience to be harmful in the environment we live in today.  As a result, it has been deemed “bad.”  Without understanding the underlying innate traits that give rise to the behavior, however, this attempt to patch up human moral behavior is of very limited value.

    The above becomes obvious if we examine the behavior of those who are in the habit of accusing others of racism.  They are hardly immune to similar manifestations of bigotry.  They simply define their outgroups based on criteria other than race.  The outgroup is always there, and it is hated and despised just the same.  Indeed, they may hate and despise their outgroups a great deal more violently and irrationally than those they accuse of racism ever did, but are oblivious to the possibility that their behavior may be similarly “bad” merely because they perceive their outgroup in terms of ideology, for example, rather than race.  Extreme examples of hatred of outgroups defined by ideology are easy to find on the Internet.  For example,

    • Actor Jim Carrey is quoted as saying, “40 percent of the U.S. doesn’t care if Trump deports people and kidnaps their babies as political hostages.”
    • Actor Peter Fonda suggested to his followers on Twitter that they should “rip Barron Trump from his mother’s arms and put him in a cage with pedophiles.”  The brother of Jane Fonda also called for violence against Secretary of Homeland Security Kirstjen Nielsen and called White House Press Secretary Sarah Sanders a “c**t.”
    • An unidentified FBI agent is quoted as saying in a government report that, “Trump’s supporters are all poor to middle class, uneducated, lazy POS.”
    • According to New York Times editorialist Roxanne Gay, “Having a major character on a prominent television show as a Trump supporter normalizes racism and misogyny and xenophobia.”

    Such alternative forms of bigotry are often more harmful than garden-variety racism itself merely by virtue of the fact that they have not yet been included among the forms of outgroup identification that have already been generally recognized as “bad.”  The underlying behavior responsible for the extreme hatred typified by the above statements won’t change, and if we whack the racism mole, or the anti-Semitism mole, or the homophobia mole, other moles will pop up to take their places.  The Carreys and Fondas and Roxane Gays of the world will continue to hate their ideological outgroup as furiously as ever, until it occurs to someone to assign an “ism” to their idiosyncratic version of outgroup hatred, and people finally realize that they are no less bigoted than the “racists” they delight in hating.  Then a new “mole” will pop up with a new, improved version of outgroup hatred.  We will never control the underlying behavior and minimize the harm it does until we understand the innate reasons it exists to begin with.  In other words, it won’t go away until we learn to understand ourselves.

    And what of Einstein, not to mention the likes of Columbus, Washington, Madison, and Jefferson?  True, these men did more for the welfare of all mankind than any combination of their Social Justice Warrior accusers you could come up with, but for the time being, admiring them is forbidden.  After all, these men were “bad.”

  • How a “Study” Repaired History and the Evolutionary Psychologists Lived Happily Ever After

    Posted on June 12th, 2018 Helian No comments

    It’s a bit of a stretch to claim that those who have asserted the existence and importance of human nature have never experienced ideological bias. If that claim were true, then the Blank Slate debacle could never have happened. However, we know that it happened, based not only on the testimony of those who saw it for the ideologically motivated debasement of science that it was, such as Steven Pinker and Carl Degler, but also on that of the ideological zealots responsible for it, such as Hamilton Cravens, who portrayed it as The Triumph of Evolution. The idea that the Blank Slaters were “unbiased” is absurd on the face of it, and can be immediately debunked by simply counting the number of times they accused their opponents of being “racists,” “fascists,” etc., in books such as Richard Lewontin’s Not in Our Genes, and Ashley Montagu’s Man and Aggression. More recently, the discipline of evolutionary psychology has experienced many similar attacks, as detailed, for example, by Robert Kurzban in an article entitled, Alas poor evolutionary psychology.

    The reasons for this bias have never been a mystery, either to the Blank Slaters and their latter-day leftist descendants, or to evolutionary psychologists and other proponents of the importance of human nature. Leftist ideology requires not only that human beings be equal before the law, but that the menagerie of human identity groups they have become obsessed with over the years actually be equal, in intelligence, creativity, degree of “civilization,” and every other conceivable measure of human achievement. On top of that, human beings must be “malleable” and “plastic,” and therefore perfectly adaptable to whatever revolutionary rearrangement of society happens to be in fashion. The existence and importance of human nature has always been perceived as a threat to all these romantic mirages, as indeed it is. Hence the obvious and seemingly indisputable bias.

    Enter Jeffrey Winking of the Department of Anthropology at Texas A&M, who assures us that it’s all a big mistake, and there’s really no bias at all! Not only that, but he “proves” it with a “study” in a paper entitled Exploring the Great Schism in the Social Sciences, which recently appeared in the journal Evolutionary Psychology. We must assume that, in spite of his background in anthropology, Winking has never heard of a man named Napoleon Chagnon, or run across an article entitled Darkness’s Descent on the American Anthropological Association, by Alice Dreger.

    Winking begins his article by noting that “The nature-nurture debate is one that biologists often dismiss as a false dichotomy,” but adds, “However, such dismissiveness belies the long-standing debate that is unmistakable throughout the biological and social sciences concerning the role of biological influences in the development of psychological and behavioral traits in humans.” I agree entirely. One can’t simply hand-wave away the Blank Slate affair and a century of bitter ideological debate by turning up one’s nose and asserting the term isn’t helpful from a purely scientific point of view.

    We also find that Winking isn’t completely oblivious to examples of bias on the “nature” side of the debate. He cites the Harvard study group which “evaluated the merits of sociobiology, and which included intellectual giants like Stephen J. Gould and Richard Lewontin.” I am content to let history judge whether Gould and Lewontin were really “intellectual giants.” Regardless, if Winking actually read these “evaluations,” he cannot have failed to notice that they contained vicious ad hominem attacks on E. O. Wilson and others that it is extremely difficult to construe as anything but biased. Winking goes on to note similar instances of bias by other authors in various disciplines, such as,

    Many researchers use [evolutionary approaches to the study of international relations] to justify the status quo in the guise of science.

    The totality [of sociobiology and evolutionary psychology] is a myth of origin that is compelling precisely because it resonates strongly with Euro American presuppositions about the nature of the world.

    …in the social sciences (with the exception of primatology and psychology) sociobiology appeals most to right-wing social scientists.

    These are certainly compelling examples of bias. Now, however, Winking attempts to demonstrate that those who point out the bias, and correctly interpret the reasons for it, are just as biased themselves. As he puts it,

    Conversely, those who favor biological approaches have argued that those on the other side are rendered incapable of objective assessment by their ideological promotion of equality. They are alleged to erroneously reject evidence of biological influences because such evidence suggests that social outcomes are partially explained by biology, and this might inhibit the realization of equality. Their critiques of biological approaches are therefore often blithely dismissed as examples of the moralistic/naturalistic fallacy. This line of reason is exemplified in the quote by biologist Jerry Coyne

    If you can read the [major Evolutionary Psychology review paper] and still dismiss the entire field as worthless, or as a mere attempt to justify scientists’ social prejudices, then I’d suggest your opinions are based more on ideology than judicious scientific inquiry.

    I can’t imagine what Winking finds “blithe” about that statement!  Is it really “blithe” to so much as suggest that people who dismiss entire fields of science as worthless may be ideologically motivated?  I note in passing that Coyne must have thought long and hard about that statement, because his Ph.D. advisor was none other than Richard Lewontin, whom he still honors and admires!  Add to that the fact that Coyne is about as far as you can imagine from “right wing,” as anyone can see by simply visiting his Why Evolution is True website, and the notion that he is being “blithe” here is ludicrous.  Winking’s other examples of “blitheness” are similarly dubious, including,

    For critics, the heart of the intellectual problem remains an ideological adherence to the increasingly implausible view that human behavior is strictly determined by socialization… Should [social] hierarchies result strictly from culture, then the possibilities for an egalitarian future were seen to be as open and boundless as our ever-malleable brains might imagine.

    Like the Church, a number of contemporary thinkers have also grounded their moral and political views in scientific assumptions about… human nature, specifically that there isn’t one.

    Unlike the “comparable” statements by the Blank Slaters, these statements neither accuse those who deny the existence of human nature of being Nazis, nor do they lack evidence to back them up.  On the contrary, one could cite a mountain of supporting evidence supplied by the Blank Slaters themselves.  Winking soon supplies us with the reason for this strained attempt to establish “moral equivalence” between “nature” and “nurture.”  It appears in his “hypothesis,” as follows:

    It is entirely possible that confirmation bias plays no role in driving disagreement and that the overarching debate in academia is driven by sincere disagreements concerning the inferential value of the research designs informing the debate.

    Wait a minute!  Don’t roll your eyes like that!  Winking has a “study” to back up this hypothesis.  Let me explain it to you.  He invented some “mock results” of studies which purported to establish, for example, the increased prevalence of an allele associated with “appetitive aggression” in populations with African ancestry.  Subtle, no?  Then he used Mechanical Turk and social media to come up with a sample of 365 people with master’s degrees or Ph.D.s for a survey on what they thought of the “inferential power” of the fake data.  Another sample of 71 was scraped together for a second survey on “research design.”  In the larger sample, 307 described themselves as either only “somewhat” on the “nature” side, or “somewhat” on the “nurture” side.  Only 57 claimed they leaned strongly one way or the other.  The triumphant results of the study included, for example, that,

    Participants’ perceptions of inferential value did not vary by the degree to which results supported a particular ideology, suggesting that ideological confirmation bias is not affecting participant perceptions of inferential value.

    Seriously?  Even the author admits that the statistical power of his “study” is low because of the small sample sizes.  However, statistical power only applies where the samples are truly random, meaning, in this case, where the participants are unequivocally on either the “nature” or the “nurture” side.  That is hardly the case.  Mechanical Turk samples, for example, are biased toward a younger and more liberal demographic.  Most of the participants were on the fence between nature and nurture.  In other words, there’s no telling what their true opinions were, even if they were honest about them.  Even the most extreme Blank Slaters admitted that nature plays a significant role in such bodily functions as urinating, defecating, and breathing, and so could easily have described themselves as “somewhat bioist.”  Perhaps most importantly, any high school student could easily have seen what this “study” was about.  There is no doubt whatsoever that holders of master’s and doctoral degrees in related disciplines had no trouble a) inferring what the study was about, and b) recognizing their interest in making sure that the results demonstrated that they were “unbiased.”  In other words, we’re not exactly talking “double blind” here.

    I think the author was well aware that most readers would have no trouble detecting the blatant shortcomings of his “study.”  Apparently to ward off ridicule he wrote,

    Regardless of one’s position, it is important to remind scholars that if they believe a group of intelligent and informed academics could be so unknowingly blinded by ideology that they wholeheartedly subscribe to an unquestionably erroneous interpretation of an entire body of research, then they must acknowledge they themselves are equally as capable of being so misguided.

    Kind of reminds you of the curse over King Tut’s tomb, doesn’t it?  “May those who question my study be damned to dwell among the misguided forever!”  Sorry, my dear Winking, but “a group of intelligent and informed academics” not only could, but were “so unknowingly blinded by ideology that they wholeheartedly subscribed to an unquestionably erroneous interpretation of an entire body of research.”  It was called the Blank Slate, and it derailed the behavioral sciences for more than half a century.  That’s what Pinker’s book was about.  That’s what Degler’s book was about, and yes, that’s even what Cravens’ book was about.  They all did an excellent job of documenting the debacle.  I suggest you read them.

    Or not.  You could decide to believe your study instead.  I have to admit, it would have its advantages.  History would be “fixed,” the lions would lie down with the lambs, and the evolutionary psychologists would live happily ever after.

  • On the Gleichschaltung of Evolutionary Psychology

    Posted on June 11th, 2018 Helian No comments

    When Robert Ardrey began his debunking of the ideologically motivated dogmas that passed for the “science” of human behavior in 1961 with the publication of his first book, African Genesis, he knew perfectly well what was at stake.  By that time what we now know as the Blank Slate orthodoxy had derailed any serious attempt by our species to achieve self-understanding for upwards of three decades.  This debacle in the behavioral sciences paralyzed any serious attempt to understand the roots of human warfare and aggression, the sources of racism, anti-Semitism, religious bigotry, and the myriad other manifestations of our innate tendency to perceive others in terms of ingroups and outgroups, the nature of human territorialism and status-seeking behavior, and the wellsprings of human morality itself.  A bit later, E. O. Wilson summed up our predicament as follows:

    Humanity today is like a waking dreamer, caught between the fantasies of sleep and the chaos of the real world.  The mind seeks but cannot find the precise place and hour.  We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology.  We thrash about.  We are terribly confused about the mere fact of our existence, and a danger to ourselves and the rest of life.

    In the end, the Blank Slate collapsed under the weight of its own absurdity, in spite of the now-familiar attempts to silence its opponents by vilification rather than logical argument.  The science of evolutionary psychology emerged based explicitly on acceptance of the reality and importance of innate human behavioral traits.  However, the ideological trends that resulted in the Blank Slate disaster to begin with haven’t disappeared.  On the contrary, they have achieved nearly unchallenged control of the social means of communication, including the entertainment industry, the “mainstream” news media, Internet monopolies such as Facebook, Google and Twitter, and, perhaps most importantly, academia.  There an ingroup defined by ideology has emerged that has always viewed the new science with a jaundiced eye.  By its very nature it challenges their assumptions of moral superiority, their cherished myths about the nature of human beings, and the viability of the various utopias they have always enjoyed concocting for the rest of us.  As Marx might have put it, this clash of thesis and antithesis has led to a synthesis in evolutionary psychology that might be described as creeping Gleichschaltung.  In other words, it is undergoing a slow process of getting “in step” with the controlling ideology.  It no longer seriously challenges the dogmas of that ideology, and the “studies” emerging from the field are increasingly, if not yet exclusively, limited to subjects that are deemed ideologically “benign.”  As a result, when it comes to addressing issues that are of real importance in terms of the survival and welfare of our species, the science of evolutionary psychology has become largely irrelevant.

    Consider, for example, the sort of articles that one typically finds in the relevant journals.  In the last four issues of Evolutionary Behavioral Sciences they have addressed such subjects as “Committed romantic relationships,” “Long-term romantic relationships,” “The effect of predictable early childhood environments on sociosexuality in early adulthood,” “Daily relationship quality in same-sex couples,” “Modern-day female preferences for resources and provisioning by long-term mates,” “Behavioral reactions to emotional and sexual infidelity: mate abandonment versus mate retention,” and “An evolutionary perspective on orgasm.”  Peering through the last four issues of Evolutionary Psychology Journal we find, “Mating goals moderate power’s effect on conspicuous consumption among women,” “In-law preferences in China: What parents look for in the parents of their children’s mates,” “Endorsement of social and personal values predicts the desirability of men and women as long-term partners,” “Adaptive memory: remembering potential mates,” “Passion, relational mobility, and proof of commitment,” “Do men produce high quality ejaculates when primed with thoughts of partner infidelity?” and “Displaying red and black on a first date: A field study using the ‘First Dates’ television series.”

    All very interesting stuff, I’m sure, but the last time I checked humanity wasn’t faced with an existential threat due to cluelessness about the mechanics of reproduction.  Articles that might actually bear on our chances of avoiding self-destruction, on the other hand, are few and far between.  In short, evolutionary psychology has been effectively neutered.  Ostensibly, its only remaining purpose is to pad the curricula vitae of the professoriat in the publish-or-perish world of academia.

    Does it really matter?  Probably not much.  The claims of any branch of psychology to be a genuine science have always been rather tenuous, and must remain so as long as our knowledge of how the mind works and how consciousness can exist remains so limited.  Real knowledge of how the brain gives rise to innate behavioral predispositions, and how they are perceived and interpreted by our “rational” consciousness is far more likely to be forthcoming from fields like neuroscience, genetics, and evolutionary biology than evolutionary psychology.  Meanwhile, we are free of the Blank Slate straitjacket, at least temporarily.  We must no longer endure the sight of the court jesters of the Blank Slate striking heroic poses as paragons of “science,” and uttering cringeworthy imbecilities that are taken perfectly seriously by a fawning mass media.  Consider, for example, the following gems from clown-in-chief Ashley Montagu:

    All the field observers agree that these creatures (chimpanzees and other great apes) are amiable and quite unaggressive, and there is not the least reason to suppose that man’s pre-human primate ancestors were in any way different.

    The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.

    …man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings.

    In fact, I also think it very doubtful that any of the great apes have any instincts.  On the contrary, it seems that as social animals they must learn from others everything they come to know and do.  Their capacities for learning are simply more limited than those of Homo sapiens.

    In his heyday Montagu could rave on like that nonstop, and be taken perfectly seriously, not only by the media, but by the vast majority of the “scientists” in the behavioral disciplines.  Anyone who begged to differ was shouted down as a racist and a fascist.  We can take heart in the fact that we’ve made at least some progress since then.  Today one finds articles about human “instincts” in the popular media, and even academic journals, as if the subject had never been the least bit controversial.  True, the same “progressives” who brought us the Blank Slate now have evolutionary psychology firmly in hand, and are keeping it on a very short leash.  For all that, one can now at least study the subject of innate human behavior without fear that undue interest in the subject is likely to bring one’s career to an abrupt end.  Who knows?  With concurrent advances in our knowledge of the actual physics of the mind and consciousness, we may eventually begin to understand ourselves.

  • Morality and the Floundering Philosophers

    Posted on May 26th, 2018 Helian No comments

    In my last post I noted the similarities between belief in objective morality, or the existence of “moral truths,” and traditional religious beliefs. Both posit the existence of things without evidence, with no account of what these things are made of (assuming that they are not things that are made of nothing), and with no plausible explanation of how these things themselves came into existence or why their existence is necessary. In both cases one can cite many reasons why the believers in these nonexistent things want to believe in them. In both cases, for example, the livelihood of myriads of “experts” depends on maintaining the charade. Philosophers are no different from priests and theologians in this respect, but their problem is even bigger. If Darwin gave the theologians a cold, he gave the philosophers pneumonia. Not long after he published his great theory it became clear, not only to him, but to thousands of others, that morality exists because the behavioral traits which give rise to it evolved. The Finnish philosopher Edvard Westermarck formalized these rather obvious conclusions in his The Origin and Development of the Moral Ideas (1906) and Ethical Relativity (1932). At that point, belief in the imaginary entities known as “moral truths” became entirely superfluous. Philosophers have been floundering behind their curtains ever since, trying desperately to maintain the illusion.

    An excellent example of the futility of their efforts may be found online in the Stanford Encyclopedia of Philosophy in an entry entitled Morality and Evolutionary Biology. The most recent version was published in 2014.  It’s rather long, but to better understand what follows it would be best if you endured the pain of wading through it.  However, in a nutshell, it seeks to demonstrate that, even if there is some connection between evolution and morality, it’s no challenge to the existence of “moral truths,” which we are to believe can be detected by well-trained philosophers via “reason” and “intuition.”  Quaintly enough, the earliest source given for a biological explanation of morality is E. O. Wilson.  Apparently the Blank Slate catastrophe is as much a bugaboo for philosophers as for scientists.  Evidently it’s too indelicate for either of them to mention that the behavioral sciences were completely derailed for upwards of 50 years by an ideologically driven orthodoxy.  In fact, a great many highly intelligent scientists and philosophers wrote a great deal more than Wilson about the connection between biology and morality before they were silenced by the high priests of the Blank Slate.  Even during the Blank Slate men like Sir Arthur Keith had important things to say about the biological roots of morality.  Robert Ardrey, by far the single most influential individual in smashing the Blank Slate hegemony, addressed the subject at length long before Wilson, as did thinkers like Konrad Lorenz and Niko Tinbergen.  Perhaps if its authors expect to be taken seriously, this “Encyclopedia” should at least set the historical record straight.

    It’s already evident in the Overview section that the author will be running with some dubious assumptions.  For example, he speaks of “morality understood as a set of empirical phenomena to be explained,” and the “very different sets of questions and projects pursued by philosophers when they inquire into the nature and source of morality,” as if they were examples of the non-overlapping magisteria once invoked by Stephen Jay Gould.  In fact, if one “understands the empirical phenomena” of morality, then the problem of the “nature and source of morality” is hardly “non-overlapping.”  It solves itself.  The suggestion that they are non-overlapping depends on the assumption that “moral truth” exists in a realm of its own.  A bit later the author confirms he is making that assumption as follows:

    Moral philosophers tend to focus on questions about the justification of moral claims, the existence and grounds of moral truths, and what morality requires of us.  These are very different from the empirical questions pursued by the sciences, but how we answer each set of questions may have implications for how we should answer the other.

    He allows that philosophy and the sciences must inform each other on these “distinct” issues.  In fact, neither philosophy nor the sciences can have anything useful to say about these questions, other than to point out that they relate to imaginary things.  “Objects” in the guise of “justification of moral claims,” “grounds of moral truths,” and the “requirements of morality” exist only in fantasy.  The whole burden of the article is to maintain that fantasy, and insist that the mirage is real.  We are supposed to be able to detect that the mirages are real by thinking really hard until we “grasp moral truths,” and “gain moral knowledge.”  It is never explained what kind of a reasoning process leads to “truths” and “knowledge” about things that don’t exist.  Consider, for example, the following from the article:

    …a significant amount of moral judgment and behavior may be the result of gaining moral knowledge, rather than just reflecting the causal conditioning of evolution.  This might apply even to universally held moral beliefs or distinctions, which are often cited as evidence of an evolved “universal moral grammar.”  For example, people everywhere and from a very young age distinguish between violations of merely conventional norms and violations of norms involving harm, and they are strongly disposed to respond to suffering with concern.  But even if this partly reflects evolved psychological mechanisms or “modules” governing social sentiments and responses, much of it may also be the result of human intelligence grasping (under varying cultural conditions) genuine morally relevant distinctions or facts – such as the difference between the normative force that attends harm and that which attends mere violations of convention.

    It’s amusing to occasionally substitute “the flying spaghetti monster” or “the great green grasshopper god” for the author’s “moral truths.”  The “proofs” of their existence work just as well.  In the above, he is simply assuming the existence of “morally relevant distinctions,” and further assuming that they can be grasped and understood logically.  Such assumptions fly in the face of the work of many philosophers who demonstrated that moral judgments are always grounded in emotions, sometimes referred to by earlier authors as “sentiments,” or “passions,” and that it is therefore impossible to arrive at moral truths through reason alone.  Unless some undergraduate wrote the article, one must assume the author had at least a passing familiarity with some of these thinkers.  The Earl of Shaftesbury, for example, demonstrated the decisive role of “natural affections” as the origins of moral judgment in his Inquiry Concerning Virtue or Merit (1699), even noting in that early work the similarities between humans and the higher animals in that regard.  Francis Hutcheson very convincingly demonstrated the impotence of reason alone in detecting moral truths, and the essential role of “instincts and affections” as the origin of all moral judgment, in his An Essay on the Nature and Conduct of the Passions and Affections (1728).  Hutcheson thought that God was the source of these passions and affections.  It remained for David Hume to present similar arguments on a secular basis in his A Treatise of Human Nature (1740).

    The author prefers to ignore these earlier philosophers, focusing instead on the work of Jonathan Haidt, who has also insisted on the role of emotions in shaping moral judgment.  Here I must impose on the reader’s patience with a long quote to demonstrate the type of “logic” we’re dealing with.  According to the author,

    There are also important philosophical worries about the methodologies by which Haidt comes to his deflationary conclusions about the role played by reasoning in ordinary people’s moral judgments.

    To take just one example, Haidt cites a study where people made negative moral judgments in response to “actions that were offensive yet harmless, such as…cleaning one’s toilet with the national flag.” People had negative emotional reactions to these things and judged them to be wrong, despite the fact that they did not cause any harms to anyone; that is, “affective reactions were good predictors of judgment, whereas perceptions of harmfulness were not” (Haidt 2001, 817). He takes this to support the conclusion that people’s moral judgments in these cases are based on gut feelings and merely rationalized, since the actions, being harmless, don’t actually warrant such negative moral judgments. But such a conclusion would be supported only if all the subjects in the experiment were consequentialists, specifically believing that only harmful consequences are relevant to moral wrongness. If they are not, and believe—perhaps quite rightly (though it doesn’t matter for the present point what the truth is here)—that there are other factors that can make an action wrong, then their judgments may be perfectly appropriate despite the lack of harmful consequences.

    This is in fact entirely plausible in the cases studied: most people think that it is inherently disrespectful, and hence wrong, to clean a toilet with their nation’s flag, quite apart from the fact that it doesn’t hurt anyone; so the fact that their moral judgment lines up with their emotions but not with a belief that there will be harmful consequences does not show (or even suggest) that the moral judgment is merely caused by emotions or gut reactions. Nor is it surprising that people have trouble articulating their reasons when they find an action intrinsically inappropriate, as by being disrespectful (as opposed to being instrumentally bad, which is much easier to explain).

    Here one can but roll one’s eyes.  It doesn’t matter a bit whether the subjects are consequentialists or not.  Haidt’s point is that logical arguments will always break down at some point, whether they are based on harm or not, because moral judgments are grounded in emotions.  Harm plays a purely ancillary role.  One could just as easily ask why the action in question is considered disrespectful, and the chain of logical reasons would break down just as surely.  Whoever wrote the article must know what Haidt is really saying, because he refers explicitly to the ideas of Hume in the same book.  Absent the alternative that the author simply doesn’t know what he’s talking about, we must conclude that he is deliberately misrepresenting what Haidt was trying to say.

    One of the author’s favorite conceits is that one can apply “autonomous applications of human intelligence,” meaning applications free of emotional bias, to the discovery of “moral truths” in the same way those logical faculties are applied in such fields as algebraic topology, quantum field theory, population biology, etc.  In his words,

    We assume in general that people are capable of significant autonomy in their thinking, in the following sense:

    Autonomy Assumption: people have, to greater or lesser degrees, a capacity for reasoning that follows autonomous standards appropriate to the subjects in question, rather than in slavish service to evolutionarily given instincts merely filtered through cultural forms or applied in novel environments. Such reflection, reasoning, judgment and resulting behavior seem to be autonomous in the sense that they involve exercises of thought that are not themselves significantly shaped by specific evolutionarily given tendencies, but instead follow independent norms appropriate to the pursuits in question (Nagel 1979).

    This assumption seems hard to deny in the face of such abstract pursuits as algebraic topology, quantum field theory, population biology, modal metaphysics, or twelve-tone musical composition, all of which seem transparently to involve precisely such autonomous applications of human intelligence.

    This, of course, leads up to the argument that one can apply this "autonomy assumption" to moral judgment as well.  The problem is that, in the other fields mentioned, one actually has something to reason about.  In mathematics, for example, one starts with a collection of axioms that are simply accepted as true, without worrying about whether they are "really" true or not.  In physics, there are observables that one can measure and record as a check on whether one's "autonomous application of intelligence" was warranted or not.  In other words, one has physical evidence.  The same goes for the other subjects mentioned.  In each case, one is reasoning about something that actually exists.  In the case of morality, however, "autonomous intelligence" is being applied to a phantom.  Again, the same arguments are just as strong if one applies them to grasshopper gods.  "Autonomous intelligence" is useless if it is "applied" to something that doesn't exist.  You can "reflect" all you want about the grasshopper god, but he will still stubbornly refuse to pop into existence.  The exact nature of the recondite logical gymnastics one must perform to successfully apply "autonomous intelligence" in this way is never explained.  Perhaps a Ph.D. in philosophy from Stanford is a prerequisite before one can even dare to venture forth on such a daunting logical quest.  Perhaps then, in addition to the sheepskin, they fork over a philosopher's stone that enables one to transmute lead into gold, create the elixir of life, and extract "moral truths" right out of the vacuum.

    In short, the philosophers continue to flounder.  Their logical demonstrations of nonexistent "moral truths" are similar in kind to logical demonstrations of the existence of imaginary super-beings, and just as threadbare.  Why does it matter?  I can't supply you with any objective "oughts" here, but at least I can tell you my personal prejudices on the matter, and my reasons for them.  We are living in a time of moral chaos, and will continue to do so until we accept the truth about the evolutionary origin of human morality and the implications of that truth.  There are no objective moral truths, and it will be extremely dangerous for us to continue to ignore that fact.  Competing morally loaded ideologies are already demonstrably disrupting our political systems.  It is not at all unlikely that we will once again experience what happens when fanatics stuff their "moral truths" down our throats, as they did in the last century with the morally loaded ideologies of Communism and Nazism.  Do you dislike being bullied by Social Justice Warriors?  I'm sorry to inform you that the bullying will continue unabated until we explode the myth that they are bearers of "moral truths" that they are justified, according to "autonomous logic," in imposing on the rest of us.  I could go on and on, but do I really need to?  Isn't it obvious that a world full of fanatical zealots, all utterly convinced that they have a monopoly on "moral truth" and a perfect right to impose these "truths" on everyone else, isn't exactly a utopia?  Allow me to suggest that, instead, it might be preferable to live according to a simple and mutually acceptable "absolute" morality, in which "moral relativism" is excluded, and which doesn't change from day to day in willy-nilly fashion according to the whims of those who happen to control the social means of communication.  As counter-intuitive as it seems, the only practicable way to such an outcome is acceptance of the fact that morality is a manifestation of evolved human nature, and of the truth that there are no such things as "moral truths."


  • Morality and the Spiritualism of the Atheists

    Posted on May 11th, 2018 Helian No comments

    I’m an atheist.  I concluded there was no God when I was 12 years old, and never looked back.  Apparently many others have come to the same conclusion in western democratic societies where there is access to diverse opinions on the subject, and where social sanctions and threats of force against atheists are no longer as intimidating as they once were.  Belief in traditional religions is gradually diminishing in such societies.  However, they have hardly been replaced by “pure reason.”  They have merely been replaced by a new form of “spiritualism.”  Indeed, I would maintain that most atheists today have as strong a belief in imaginary things as the religious believers they so often despise.  They believe in the “ghosts” of good and evil.

    Most atheists today may be found on the left of the ideological spectrum.  A characteristic trait of leftists today is the assumption that they occupy the moral high ground. That assumption can only be maintained by belief in a delusion, a form of spiritualism, if you will – that there actually is a moral high ground.  Ironically, while atheists are typically blind to the fact that they are delusional in this way, it is often perfectly obvious to religious believers.  Indeed, this insight has led some of them to draw conclusions about the current moral state of society similar to my own.  Perhaps the most obvious conclusion is that atheists have no objective basis for claiming that one thing is “good” and another thing is “evil.”  For example, as noted by Tom Trinko at American Thinker in an article entitled “Imagine a World with No Religion,”

    Take the Golden Rule, for example. It says, “Do onto others what you’d have them do onto you.” Faithless people often point out that one doesn’t need to believe in God to believe in that rule. That’s true. The problem is that without God, there can’t be any objective moral code.

    My reply would be, that’s quite true, and since there is no God, there isn’t any objective moral code, either.  However, most atheists, far from being “moral relativists,” are highly moralistic.  As a consequence, they are dumbfounded by anything like Trinko’s remark.  It pulls the moral rug right out from under their feet.  Typically, they try to get around the problem by appealing to moral emotions.  For example, they might say something like, “What?  Don’t you think it’s really bad to torture puppies to death?”, or, “What?  Don’t you believe that Hitler was really evil?”  I certainly have a powerful emotional response to Hitler and tortured puppies.  However, no matter how powerful those emotions are, I realize that they can’t magically conjure objects into being that exist independently of my subjective mind.  Most leftists, and hence, most so-called atheists, actually do believe in the existence of such objects, which they call “good” and “evil,” whether they admit it explicitly or not.  Regardless, they speak and act as if the objects were real.

    The kinds of speech and actions I’m talking about are ubiquitous and obvious.  For example, many of these “atheists” assume a dictatorial right to demand that others conform to novel versions of “good” and “evil” they may have concocted yesterday or the day before.  If those others refuse to conform, they exhibit all the now familiar symptoms of outrage and virtuous indignation.  Do rational people imagine that they are gods with the right to demand that others obey whatever their latest whims happen to be?  Do they assume that their subjective, emotional whims somehow immediately endow them with a legitimate authority to demand that others behave in certain ways and not in others?  I certainly hope that no rational person would act that way.  However, that is exactly the way that many so-called atheists act.  To the extent that we may consider them rational at all, then, we must assume that they actually believe that whatever versions of “good” or “evil” they happen to favor at the moment are “things” that somehow exist on their own, independently of their subjective minds.  In other words, they believe in ghosts.

    Does this make any difference?  I suggest that it makes a huge difference.  I personally don’t enjoy being constantly subjected to moralistic bullying.  I doubt that many people enjoy jumping through hoops to conform to the whims of others.  I submit that it may behoove those of us who don’t like being bullied to finally call out this type of irrational, quasi-religious behavior for what it really is.

    It also makes a huge difference because this form of belief in imaginary objects has led us directly into the moral chaos we find ourselves in today.  New versions of “absolute morality” are now popping up on an almost daily basis.  Obviously, we can’t conform to all of them at once, and must therefore put up with the inconvenience of either keeping our mouths shut or risk being furiously condemned as “evil” by whatever faction we happen to offend.  Again, traditional theists are a great deal more clear-sighted than “atheists” about this sort of thing.  For example, in an article entitled, “Moral relativism can lead to ethical anarchy,” Christian believer Phil Schurrer, a professor at Bowling Green State University, writes,

    …the lack of a uniform standard of what constitutes right and wrong based on Natural Law leads to the moral anarchy we see today.

    Prof. Schurrer is right about the fact that we live in a world of moral anarchy.  I also happen to agree with him that most of us would find it useful and beneficial if we could come up with a “uniform standard of what constitutes right and wrong.”  Where I differ with him is on the rationality of attempting to base that standard on “Natural Law,” because there is no such thing.  For religious believers, “Natural Law” is law passed down by God, and since there is no God, there can be no “Natural Law,” either.  How, then, can we come up with such a uniform moral code?

    I certainly can’t suggest a standard based on what is “really good” or “really bad” because I don’t believe in the existence of such objects.  I can only tell you what I would personally consider expedient.  It would be a standard that takes into account what I consider to be some essential facts.  These are as follows.

    • What we refer to as morality is an artifact of “human nature,” or, in other words, innate predispositions that affect our behavior.
    • These predispositions exist because they evolved by natural selection.
    • They evolved by natural selection because they happened to improve the odds that the genes responsible for their existence would survive and reproduce at the time and in the environment in which they evolved.
    • We are now living at a different time, and in a different environment, and it cannot be assumed that blindly responding to the predispositions in question will have the same outcome now as it did when those predispositions evolved.  Indeed, it has been repeatedly demonstrated that such behavior can be extremely dangerous.
    • Outcomes of these predispositions include a tendency to judge the behavior of others as “good” or “evil.”  These categories are typically deemed to be absolute, and to exist independently of the conscious minds that imagine them.
    • Human morality is dual in nature.  Others are perceived in terms of ingroups and outgroups, with different standards applying to what is deemed “good” or “evil” behavior towards those others depending on the category to which they are imagined to belong.

    I could certainly expand on this list, but the above are certainly some of the most salient and essential facts about human morality.  If they are true, then it is possible to make at least some preliminary suggestions about how a “uniform standard” might look.  It would be as simple as possible.  It would be derived to minimize the dangers referred to above, with particular attention to the dangers arising from ingroup/outgroup behavior.  It would be limited in scope to interactions between individuals and small groups in cases where the rational analysis of alternatives is impractical due to time constraints, etc.  It would be in harmony with innate human behavioral traits, or “human nature.”  It is our nature to perceive good and evil as real objective things, even though they are not.  This implies there would be no “moral relativism.”  Once in place, the moral code would be treated as an absolute standard, in conformity with the way in which moral standards are usually perceived.  One might think of it as a “moral constitution.”  As with political constitutions, there would necessarily be some means of amending it if necessary.  However, it would not be open to arbitrary innovations spawned by the emotional whims of noisy minorities.

    How would such a system be implemented?  It's certainly unlikely that any state will attempt it any time in the foreseeable future.  Perhaps it might happen gradually, just as changes to the "moral landscape" have usually happened in the past.  For that to happen, however, it would be necessary for significant numbers of people to finally understand what morality is, and why it exists.  And that is where, as an atheist, I must part company with Mr. Trinko, Prof. Schurrer, and the rest of the religious right.  Progress towards a uniform morality that most of us would find a great deal more useful and beneficial than the versions currently on tap, regardless of what goals or purposes we happen to be pursuing in life, cannot be based on the illusion that a "natural law" exists that has been handed down by an imaginary God, any more than it can be based on the emotional whims of leftist bullies.  It must be based on a realistic understanding of what kind of animals we are, and how we came to be.  However, such self-knowledge will remain inaccessible until we shed the shackles of religion.  Perhaps, as they witness many of the traditional churches increasingly becoming leftist political clubs before their eyes, people on the right of the political spectrum will begin to find it less difficult to free themselves from those shackles.  I hope so.  I think that an Ansatz based on simple, traditional moral rules, such as the Ten Commandments, is more likely to lead to a rational morality than one based on furious rants over who should be allowed to use what bathrooms.  In other words, I am more optimistic that a useful reform of morality will come from the right rather than the left of the ideological spectrum, as it now stands.  Most leftists today are much too heavily invested in indulging their moral emotions to escape from the world of illusion they live in.  To all appearances they seriously believe that blindly responding to these emotions will somehow magically result in "moral progress" and "human flourishing."  Conservatives, on the other hand, are unlikely to accomplish anything useful in terms of a rational morality until they free themselves of the "God delusion."  It would seem, then, that for such a moral "revolution" to happen, it will be necessary for those on both the left and the right to shed their belief in "spirits."


  • Fisking a Fusion Fata Morgana

    Posted on April 10th, 2018 Helian 2 comments

    Why is it that popular science articles about fusion energy are always so cringe-worthy? Is scientific illiteracy a prerequisite for writing them? Take the latest one to hit the streets, for example. Entitled Lockheed Martin Now Has a Patent For Its Potentially World Changing Fusion Reactor, it had all the familiar “unlimited energy is just around the corner” hubris we’ve come to expect in articles about fusion. When I finished reading it I wondered whether the author imagined all that nonsense on his own, or some devilish plasma physicist put him up to it as a practical joke. The fun starts in the first paragraph, where we are assured that,

    If this project has been progressing on schedule, the company could debut a prototype system that size of shipping container, but capable of powering a Nimitz-class aircraft carrier or 80,000 homes, sometime in the next year or so.

    Trust me, dear reader, barring divine intervention no such prototype system, capable of both generating electric energy and fitting within a volume anywhere near that of a shipping container, will debut in the next year, or the next five years, or the next ten years.  Reading on, we learn that,

    Unlike in nuclear fission, where atoms hit each other release energy, a fusion reaction involves heating up a gaseous fuel to the point where its atomic structure gets disrupted from the pressure and some of the particles fuse into a heavier nucleus.

    Well, not really.  Fission is caused by free neutrons, not by “atoms hitting each other.”  It would actually be more accurate to say that fusion takes place when “atoms hit each other,” although it’s really the atomic nuclei that “hit” each other.  Fusion doesn’t involve “atomic structure getting disrupted from pressure.” Rather, it happens when atoms acquire enough energy to overcome the Coulomb repulsion between two positively charged atomic nuclei (remember, like charges repel), and come within a sufficiently short distance of each other for the much greater strong nuclear force of attraction to take over. According to the author,
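    To put rough numbers on that Coulomb repulsion, here is a back-of-the-envelope sketch in Python.  The ~3 femtometer "contact" radius is an assumption chosen purely for illustration, roughly the distance at which the strong nuclear force takes over for light nuclei:

```python
# Back-of-the-envelope estimate of the Coulomb barrier between two hydrogen
# nuclei (Z = 1 each).  The ~3 femtometer "contact" radius is an illustrative
# assumption -- roughly where the strong nuclear force takes over.

E_CHARGE = 1.602e-19   # elementary charge, coulombs
K_COULOMB = 8.988e9    # Coulomb constant, N*m^2/C^2
RADIUS = 3.0e-15       # assumed nuclear contact distance, meters

# Electrostatic potential energy of two unit charges at that separation,
# converted from joules to kilo-electron-volts.
barrier_joules = K_COULOMB * E_CHARGE ** 2 / RADIUS
barrier_kev = barrier_joules / E_CHARGE / 1000.0

print(f"Coulomb barrier: roughly {barrier_kev:.0f} keV")
```

    Under that assumed radius the barrier comes out to several hundred keV, while fusion plasmas run at temperatures of only tens of keV, which is why fusion in a thermal plasma depends on quantum tunneling and the high-energy tail of the velocity distribution.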

    But to do this you need to be able to hold the gas, which is eventually in a highly energized plasma state, for a protracted period of time at a temperature of hundreds of millions of degrees Fahrenheit.

    This is like claiming that a solid can be in a liquid state. A plasma is not a gas. It is a fourth state of matter quite unlike the three (solid, liquid, gas) that most of us are familiar with. Shortly thereafter we are assured that,

    Running on approximately 25 pounds of fuel – a mixture of hydrogen isotopes deuterium and tritium – Lockheed Martin estimated the notional reactor would be able to run for an entire year without stopping. The device would be able to generate a constant 100 megawatts of power during that period.

    25 pounds of fuel would include about 15 pounds of tritium, a radioactive isotope of hydrogen with a half-life of just over 12 years. In other words, its atoms decay about 2000 times faster than those of the plutonium 239 found in nuclear weapons.  It’s true that the beta particle (electron) emitted in tritium decay is quite low energy by nuclear standards but, as noted in Wiki, “Tritium is an isotope of hydrogen, which allows it to readily bind to hydroxyl radicals, forming tritiated water (HTO), and to carbon atoms. Since tritium is a low energy beta emitter, it is not dangerous externally (its beta particles are unable to penetrate the skin), but it can be a radiation hazard when inhaled, ingested via food or water, or absorbed through the skin.”  Obviously, water and many carbon compounds can be easily inhaled or ingested. Tritium is anything but benign if released into the environment. Here we will charitably assume that the author didn’t mean to say that 25 pounds of fuel would be available all at once, but would be bred gradually and then consumed as fuel in the reactor during operation.  The amount present at any given time would more appropriately be measured in grams than in pounds.  The article continues with rosy scenarios that might have been lifted from a “Back to the Future” movie:
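    The factor of 2000 is easy to check from the two half-lives (tritium about 12.3 years, plutonium 239 about 24,100 years), since the per-atom decay rate scales as the inverse of the half-life.  A quick sketch:

```python
# The activity (decays per second) of a fixed number of atoms scales as
# ln(2) / half-life, so the ratio of per-atom decay rates is simply the
# inverse ratio of the two half-lives.

T_HALF_TRITIUM = 12.32    # years
T_HALF_PU239 = 24_110.0   # years

rate_ratio = T_HALF_PU239 / T_HALF_TRITIUM
print(f"A tritium atom decays about {rate_ratio:.0f} times faster than a Pu-239 atom")
```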

    Those same benefits could apply to vehicles on land, ships at sea, or craft in space, providing nearly unlimited power in compact form allowing for operations across large areas, effectively eliminating the tyranny of distance in many cases. Again, for military applications, unmanned ground vehicles or ships could patrol indefinitely far removed from traditional logistics chains and satellites could conduct long-term, resource intensive activities without the need for large and potentially dangerous fission reactors.

    Great shades of “Dr. Fusion!” Let’s just say that “vehicles on land” is a bit of a stretch. I can only hope that no Lockheed engineer was mean-spirited enough to feed the author such nonsense. Moving right along, we read,

    Therein lies perhaps the biggest potential benefits of nuclear fusion over fission. It’s produces no emissions dangerous to the ozone layer and if the system fails it doesn’t pose nearly the same threat of a large scale radiological incident. Both deuterium and tritium are commonly found in a number of regular commercial applications and are relatively harmless in low doses.

    I have no idea what “emission” of the fission process the author thinks is “dangerous to the ozone layer.” Again, as noted above, tritium is anything but “relatively harmless” if ingested. Next we find perhaps the worst piece of disinformation of all:

    And since a fusion reactor doesn’t need refined fissile material, its much harder for it to serve as a starting place for a nuclear weapons program.

    Good grief, the highly energetic neutrons produced in a fusion reactor are not only capable of breeding tritium, but of breeding plutonium 239 and uranium 233 from naturally occurring uranium and thorium, respectively.  Both are superb explosive fuels for nuclear weapons.  And tritium?  It is used in a process known as "boosting" to improve the performance of nuclear weapons.  Finally, we run into what might be called the Achilles heel of all tritium-based fusion reactor designs:

    Fuel would also be abundant and relatively easy to source, since sea water provides a nearly unlimited source of deuterium, while there are ready sources of lithium to provide the starting place for scientists to “breed” tritium.

    I think not. Breeding tritium will be anything but a piece of cake.  The process will involve capturing the neutrons produced by the fusion reactions in a lithium blanket surrounding the reactor, doing so efficiently enough to generate more tritium from the resulting reactions than the reactor consumes as fuel, and then extracting the tritium and recycling it into the reactor without releasing any of the slippery stuff into the environment.  Do you think the same caliber of engineers who brought us Chernobyl, Fukushima, and Three Mile Island will be able to pull that rabbit out of their hats without a hitch?  If so, you’re more optimistic than I am.
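    The scale of the breeding problem is easy to quantify.  Each D-T reaction releases about 17.6 MeV and consumes one triton, so one can estimate the burn rate of a hypothetical reactor producing a steady 100 megawatts, under the generous assumption that all 100 MW is fusion power:

```python
# Tritium consumption of a hypothetical reactor producing a steady 100 MW of
# D-T fusion power.  Each reaction (D + T -> He-4 + n) releases about 17.6 MeV
# and consumes one triton (mass ~3.016 atomic mass units).

MEV_TO_JOULES = 1.602e-13
ENERGY_PER_REACTION = 17.6 * MEV_TO_JOULES   # joules per D-T reaction
TRITON_MASS = 3.016 * 1.661e-27              # kg per tritium nucleus
FUSION_POWER = 100e6                         # watts (assumed all from fusion)
SECONDS_PER_DAY = 86_400

reactions_per_second = FUSION_POWER / ENERGY_PER_REACTION
tritium_grams_per_day = reactions_per_second * TRITON_MASS * SECONDS_PER_DAY * 1000.0

print(f"Tritium burned: about {tritium_grams_per_day:.0f} grams per day")
```

    Every one of those roughly 15 grams per day would have to be bred back from lithium, extracted, and re-injected with essentially no losses.  A self-sustaining tritium cycle, not the raw abundance of lithium, is the hard part.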

    Hey, I like to be as optimistic about fusion as it's reasonable to be. I think it's certainly possible that some startup company with a bright idea will find the magic bullet that makes fusion reactors feasible, preferably involving fusion reactions that don't involve tritium. It's also quite possible that the guys at Lockheed will achieve breakeven, although getting a high enough gain of energy out versus energy in to enable efficient generation of electric power is another matter.  There's a difference between optimism and scientifically illiterate hubris, though.  Is it too much to ask that people who write articles about fusion at least run them by somebody who actually knows something about the subject to see if they pass the "ho, ho" test before publishing?  What's that you say?  What about me?  Please read the story about the Little Red Hen.

  • Mary McCarthy and McCarthyism: A Review of “The Group”

    Posted on April 9th, 2018 Helian No comments

    Not many people remember Mary McCarthy anymore, but she was a household name among the literati back in the 50’s and 60’s, as both a novelist and a political activist.  I’d never read any of her work, but noticed in an old review of her novel The Group that she was a Vassar grad.  I used to date a Vassar girl, as my alma mater was West Point, about 30 miles down the Hudson from Poughkeepsie, and the novel was about Vassar girls, so for no more substantial reason than that, I decided to have a look.  It was a good decision.  Given what I look for in novels, The Group was one of the best I’ve ever read.

    When it comes to literature, I agree with my favorite author, Stendhal.  He said that novels were artifacts of the time in which they were written, and were meant to appeal to the tastes of people who lived in those times.  I also agree with George Orwell, who held that novels are a way of expressing truths that the limitations of language make it difficult to express in any other way.  From both points of view, I found The Group superb.  It is full of the impressions left in the mind of a very intelligent woman by the life going on around her, in this case, in the 30’s, following her graduation from Vassar in 1933, told from the point of view of a “group” of her fellow graduates.  It is a perfect time capsule.

    What’s in the capsule?  Well, to begin, I found an artifact of the contemporary “progressives'” embrace of eugenics before Hitler ruined everything, as exemplified by the father of Kay Strong, one of “the group.”

    Dad, like all modern doctors, believed in birth control and was for sterilizing criminals and the unfit.

    How about a morality inversion?  Kay’s dad had sent her a check on the occasion of her marriage to a playwright by the name of Harald Peterson and she agreed with him that,

    It was a declaration of faith… And she and Harald did not intend to betray that faith by breeding children(!, ed.), when Harald had his name to make in the theatre.

    Which, of course, raises the question of why it is that anyone is predisposed to "make a name" for himself.  My readers should know the answer to that question.  I note in passing that McCarthy's first husband was also named Harald.  Another thing documented in the novel is the fact that, at least for some, the sexual revolution happened a long time before the pill was ever heard of.  There are detailed descriptions of the prophylactic techniques of the day, including the diaphragm, sort of a trap door in the way of hopeful sperm that was carefully fitted to cover the opening of the cervix by a gynecologist.  This was used in tandem with the douche bag, containing a spermicidal concoction to finish off the more recalcitrant searchers for the holy grail.  According to the novel, women who were open to sexual adventures would announce the fact by hanging these on the back of their bathroom door.

    Perhaps the most useful insight one can glean from The Group is the prevalence and matter-of-fact acceptance of Communists in the 30’s.  Many magazines, some of which are still around today, were open advocates of Communism in those days.  Kay’s classmate, Libby MacAusland, an aspiring book reviewer, noticed this in the case of two titles still familiar today.  As she put it:

    At the Nation and the New Republic they said too that you had to run a gauntlet of Communists before getting in to see the book editor – all sorts of strange characters, tattooed sailors right off the docks and longshoremen and tramps and bearded cranks from the Village cafeterias, none of them having had a bath for weeks.

    Most of the action takes place in New York, and playwrights there noticed the same phenomenon.  For example, Kay’s playwright husband, Harald,

    …had been directing a play for a left-wing group downtown.  It was one of those profit-sharing things, co-operatives, but run really by Communists behind the scenes, as Harald found out in due course.  The play was about labor, and the audiences were mostly theatre parties got up by the trade unions.

    Another of the Vassar classmates, Polly Andrews, became the lover of Gus LeRoy, a book reviewer for one of the big New York publishers.  He is described as a humdrum man whose embrace of Communism was entirely commonplace and unremarkable:

    His liking for name brands was what had sold him on Communism years ago, when he graduated from Brown spank into the depression.  (George Bernard) Shaw had already converted him to socialism, but if you were going to be a socialist, his roommate argued, you ought to give your business to the biggest and best firm producing socialism, i.e., the Soviet Union.  So Gus switched to Communism, but only after he had gone to see for himself.  He and his roommate made a tour of the Soviet Union the summer after college and they were impressed by the dams and power plants and the collective farms and the Intourist girl guide.  After that, Norman Thomas (longtime leader of the Socialist Party in the U.S., ed.) seemed pretty ineffectual.

    Polly’s father, who comes to live with her after divorcing his wife, preferred another flavor of Communism:

    And unlike the village cure in France, who had required him to take instruction before being “received,” the Trotskyites, apparently, had accepted him as he was.  He never understood the “dialectic” and was lax in attendance at meetings, but he made up for this by the zeal with which, wearing a red necktie and an ancient pair of spats, he sold the Socialist Appeal on the street outside Stalinist rallies.

    Polly’s dad has some choice words for the New York Times’ prize, Pulitzer Prize winning journalist in Moscow.  Chagrined at the refusal of his daughter’s Aunt Julia to include a small behest to the Trotskyists in her will he remarks:

    But Julia has been convinced by what she reads in the papers that we Trotskyites are counter-revolutionary agents bent on destroying the Soviet Union.  Walter Duranty and those fellows, you know, have made her believe in the trials (the Great Purge Trials of the old Bolsheviks, ed.).  If what they write wasn’t true, she says, it wouldn’t be in the New York Times, would it?

    In short, what the novel is documenting here is the fact that, among the “woke” elements in the population back in the 30’s, Communism was a commonplace.  Look at the entertainment and literary magazines of the day, and you’ll see that it was just as prevalent in Hollywood as it was in New York.  Which brings us back to the title of this post.  I refer, of course, to McCarthyism.

    McCarthyism lays fair claim to being the next to the biggest media scam of the 20th century, taking second place only to the Watergate coup d'état.  The news media were nearly as firmly in the grip of the ideological Left in Joe McCarthy's day as they are now, and those who controlled the message were perfectly well aware that many of their friends and ideological soulmates had been party members or fellow travelers in the 30's.  Once it became obvious that Walter Duranty and his pals had been purveying some of the most egregious "fake news" ever heard of, and the Communists and their collaborators actually had the blood of tens of millions on their hands, all these would-be saviors of the proletariat were in a precarious position.  Then tail gunner Joe began seriously rocking the boat, "kicking ass and taking names," as we used to say in the Army.  Something had to be done.  The result was the media-contrived charade we now know as McCarthyism.  Instead of feeling sympathy for the tens of millions of voiceless victims of Communism lying in mass graves, starved and tortured to death or with bullet holes in their skulls, the American people were successfully bamboozled into wringing their hands over the blighted careers of those who had gleefully collaborated in their murder.  McCarthy was cast in the role of one of the media's greatest villains, an evil witch hunter.  The fact that the witches were actually there, and in great abundance, didn't seem to matter.

    If you think Mary McCarthy was some right wing zealot who was trying to exonerate tail gunner Joe when The Group was published in 1963, guess again.  Indeed, as Alex might have said in A Clockwork Orange, "now comes the weepy part of the story, oh my brothers (and sisters)."  Mary McCarthy was actually a lesser, albeit smarter, version of Jane Fonda.  That's right.  She, too, traveled to North Vietnam as the war was raging in the south and openly collaborated with the enemy.  She was a leftist activist of the first water.

    What can I say?  I still loved the book.  As it happens, not everyone agreed with me.  Stanley Kauffmann, a noted critic back in the day, wrote a scathing review of The Group when it was republished in 1964.  Kauffmann, too, was a leftist, and complained that McCarthy had been insufficiently zealous in portraying the oppression and victimization of his pet identity groups.  Beyond that, however, he criticized the disconnected story line and McCarthy’s lack of “style.”  To tell the truth, I really don’t know what the critics mean when they speak of “style,” and I couldn’t care less about it.  It appears my favorite Stendhal was also lacking in “style.”  It’s a matter of complete indifference to me.  What I look for in novels are such things as an accurate portrayal of the times in which they were written, insight into human nature, and bits that teach me a little something about my own quirks and follies.  I like Stendhal, Sinclair Lewis, Somerset Maugham, and Kafka (because he’s so good at amplifying my worst nightmares).  I don’t like Dickens, I don’t like Joyce, and I don’t like Proust.  That’s not to say they aren’t great authors.  I don’t doubt that they are, because people whose opinions I respect have found much to like in them.  I just didn’t find what I like.  I did find it in The Group.  Have a look and see if you find it, too.  Don’t miss the bits about “advanced” methods of child rearing back in the 30’s.  I suspect they would make any modern pediatrician’s hair stand on end.  Meanwhile, I’ll be checking out some of McCarthy’s other stuff.

  • On the Illusion of Moral Relativism

    Posted on April 8th, 2018 Helian No comments

    As recently as 2009 the eminent historian Paul Johnson informed his readers that he made “…the triumph of moral relativism the central theme of my history of the 20th century, Modern Times, first published in 1983.”  More recently, however, obituaries of moral relativism have turned up here and there.  For example, one appeared in The American Spectator back in 2012, fittingly entitled Moral Relativism, R.I.P.  It was echoed a few years later by a piece in The Atlantic that announced The Death of Moral Relativism.  There’s just one problem with these hopeful announcements.  Genuine moral relativists are as rare as unicorns.

    True, many have proclaimed their moral relativism.  To that I can only reply, watch their behavior.  You will soon find each and every one of these “relativists” making morally loaded pronouncements about this or that social evil, wrong-headed political faction, or less than virtuous individual.  In other words, their “moral relativism” is of a rather odd variety that occasionally makes it hard to distinguish their behavior from that of the more zealous moral bigots.  Scratch the surface of any so-called “moral relativist,” and you will often find a moralistic bully.  We are not moral relativists because it is not in the nature of our species to be moral relativists.  The wellsprings of human morality are innate.  One cannot arbitrarily turn them on or off by embracing this or that philosophy, or reading this or that book.

    I am, perhaps, the closest thing to a moral relativist you will ever find, but when my moral emotions kick in, I’m not much different from anyone else.  Just ask my dog.  When she’s riding with me she’ll often glance my way with a concerned look as I curse the lax morals of other drivers.  No doubt she’s often wondered whether her species’ symbiotic relationship with ours was such a good idea after all.  I know perfectly well the kind of people Paul Johnson was thinking of when he spoke of “moral relativists.”  However, I’ve watched the behavior of the same types my whole life.  If there’s one thing they all have in common, it’s a pronounced tendency to dictate morality to everyone else.  They are anything but “amoral,” or “moral relativists.”  The difference between them and Johnson is mainly a difference in their choice of outgroups.

    Edvard Westermarck may have chosen the title Ethical Relativity for his brilliant analysis of human morality, yet he was well aware of the human tendency to perceive good and evil as real, independent things.  The title of his book did not imply that moral (or ethical) relativism was practical for our species.  Rather, he pointed out that morality is a manifestation of our package of innate behavioral predispositions, and that it follows that objective moral truths do not exist.  In doing so he was pointing out a fundamental truth.  Recognition of that truth will not result in an orgy of amoral behavior.  On the contrary, it is the only way out of the extremely dangerous moral chaos we find ourselves in today.

    The moral conundrum we find ourselves in is a result of the inability of natural selection to keep up with the rapidly increasing size and complexity of human societies.  For example, a key aspect of human moral behavior is its dual nature – our tendency to perceive others in terms of ingroups and outgroups.  We commonly associate “good” traits with our ingroup, and “evil” ones with our outgroup.  That aspect of our behavior enhanced the odds that we would survive and reproduce at a time when there was no ambiguity about who belonged in these categories.  The ingroup was our own tribe, and the outgroup was the next tribe over.  Our mutual antagonism tended to make us spread out and avoid starvation due to over-exploitation of a small territory.  We became adept at detecting subtle differences between “us” and “them” at a time when it was unlikely that neighboring tribes differed by anything as pronounced as race or even language.  Today we have given bad names to all sorts of destructive manifestations of outgroup identification without ever grasping the fundamental truth that the relevant behavior is innate, and that no one is immune to it.  Racism, anti-Semitism, bigotry, you name it.  They’re all fundamentally the same.  Those who condemn others for one manifestation of the behavior will almost invariably be found doing the same thing themselves, the only difference being whom they have identified as the outgroup.

    Not unexpectedly, behavior that accomplished one thing in the Pleistocene does not necessarily accomplish the same thing today.  The disconnect is often absurd, leading in some cases to what I’ve referred to as morality inversions – moral behavior that promotes suicide rather than survival.  That has not prevented those who are happily tripping down the path to their own extinction from proclaiming their moral superiority and raining down pious anathemas on anyone who doesn’t agree.  Meanwhile, new versions of morality are concocted on an almost daily basis, each one pretending to objective validity, complete with a built-in right to dictate “goods” and “bads” that never occurred to anyone just a few years ago.

    There don’t appear to be any easy solutions to the moral mess we find ourselves in.  It would certainly help if more of us could accept the fact that morality is an artifact of natural selection, and that, as a consequence, objective good and evil are figments of our imaginations.  Perhaps then we could come up with some version of “absolute” morality that would be in tune with our moral emotions and at the same time allow us to interact in a manner that minimizes both the harm we do to each other and our exposure to the tiresome innovations of moralistic bullies.  That doesn’t appear likely to happen anytime soon, though.  The careers of too many moral pontificators and “experts on ethics” depend on maintaining the illusion.  Meanwhile, we find evolutionary biologists, evolutionary psychologists, and neuroscientists who should know better openly proclaiming the innate sources of moral behavior in one breath, and extolling some idiosyncratic version of “moral progress” and “human flourishing” in the next.  As one of Evelyn Waugh’s “bright young things” might have said, it’s just too shy-making.

    There is a silver lining to the picture, though.  At least you don’t have to worry about “moral relativism” anymore.