The world as I see it
  • Why the Blank Slate? Let Max Eastman Explain

    Posted on July 29th, 2018 Helian 1 comment

    In my opinion, science, broadly construed, is the best “way of knowing” we have.  However, it is not infallible, is never “settled,” cannot “say” anything, and can be perverted and corrupted for any number of reasons.  The Blank Slate affair was probably the worst instance of such corruption in history.  It involved the complete disruption of the behavioral sciences for a period of more than half a century in order to prop up the absurd lie that there is no such thing as human nature.  Its grip on the behavioral sciences hasn’t been completely broken to this day.  It’s stunning when you think about it.  Whole branches of the sciences were derailed to support a claim that must seem ludicrous to any reasonably intelligent child.  Why?  How could such a thing have happened?  At least part of the answer was supplied by Max Eastman in an article that appeared in the June 1941 issue of The Reader’s Digest.  It was entitled, Socialism Doesn’t Jibe with Human Nature.

    Who was Max Eastman?  Well, he was quite a notable socialist himself in his younger days.  He edited a radical magazine called The Masses from 1912 until it was suppressed in 1917 for its antiwar content.  In 1922 he traveled to the Soviet Union, and stayed to witness the reality of Communism for nearly two years, becoming friends with a number of Bolshevik worthies, including Trotsky.  Evidently he saw some things that weren’t quite as ideal as he had imagined.  He became increasingly critical of the Stalin regime, and eventually of socialism itself.  In 1941 he became a roving editor for the anti-Communist Reader’s Digest, and the above article appeared shortly thereafter.

    In it, Eastman reviewed the history of socialism from its modest beginnings in Robert Owen’s utopian village of New Harmony through a host of similar abortive experiments to the teachings of Karl Marx, and finally to the realization of Marx’s dream in the greatest experiment of them all: the Bolshevik state in Russia.  He noted that all the earlier experiments had failed miserably, but, in his words, “The results were not better than Robert Owen’s but a million times worse.”  The outcome of Lenin’s great experiment was,

    Officialdom gone mad, officialdom erected into a new and merciless exploiting class which literally wages war on its own people; the “slavery, horrors, savagery, absurdities and infamies of capitalist exploitation” so far outdone that men look back to them as to a picnic on a holiday; bureaucrats everywhere, and behind the bureaucrats the GPU; death for those who dare protest; death for theft – even of a piece of candy; and this sadistic penalty extended by a special law to children twelve years old!  People who still insist that this is a New Harmony are for the most part dolts or mental cowards.  To honest men with courage to face facts it is clear that Lenin’s experiment, like Robert Owen’s, failed.

    It would seem the world produced a great many dolts and mental cowards in the years leading up to 1941.  In the 30’s Communism was all the rage among intellectuals, not only in the United States but worldwide.  As Malcolm Muggeridge put it in his book, The Thirties, at the beginning of the decade it was rare to find a university professor who was a Marxist, but at the end of the decade it was rare to find one who wasn’t.  If you won’t take Muggeridge’s word for it, just look at the articles in U.S. intellectual journals such as The Nation, The New Republic, and the American Mercury during, say, the year 1934.  Many of them may be found online.  These were all very influential magazines in the 30’s, and at times during the decade they all took the line that capitalism was dead, and it was now merely a question of finding a suitable flavor of socialism to replace it.  If you prefer reality portrayed in fiction, read the guileless accounts of the pervasiveness of Communism among the intellectual elites of the 1930’s in the superb novels of Mary McCarthy, herself a leftist radical.

    Eastman was too intelligent to swallow the “common sense” socialist remedies of the newsstand journals.  He had witnessed the reality of Communism firsthand, and had followed its descent into the hellish bloodbath of the Stalinist purges and mass murder by torture and starvation in the Gulag system.  He knew that socialism had failed everywhere else it had been tried as well.  He also knew the reason why.  Allow me to quote him at length:

    Why did the monumental efforts of these three great men (Owen, Marx and Lenin, ed.) and tens of millions of their followers, consecrated to the cause of human happiness – why did they so miserably fail? They failed because they had no science of human nature, and no place in their science for the common sense knowledge of it.

    In October 1917, after the news came that Kerensky’s government had fallen, Lenin, who had been in hiding, appeared at a meeting of the Workers and Soldiers’ Soviet of Petrograd.  He mounted the rostrum and, when the long wild happy shouts of greeting had died down, remarked: “We will now proceed to the construction of a socialist society.” He said this as simply as though he were proposing to put up a new cowbarn.  But in all his life he had never asked himself the equally simple question: “How is this newfangled contraption going to fit in with the instinctive tendencies of the animals it was made for?”

    Lenin actually knew less about the science of man, after a hundred years, than Robert Owen did.  Owen had described human nature, fairly well for an amateur, as “a compound of animal propensities, intellectual faculties and moral qualities.”  He had written into the preamble of the constitution of New Harmony that “man’s character… is the result of his formation, his location, and of the circumstances within which he exists.”

    It seems incredible, but Karl Marx, with all his talk about making socialism “scientific,” took a step back from this elementary notion. He dropped out the factor of man’s hereditary nature altogether.  He dropped out man altogether, so far as he might present an obstacle to social change.  “The individual,” he said, “has no real existence outside the milieu in which he lives.” By which he meant: Change the milieu, change the social relations, and man will change as much as you like.  That is all Marx ever said on the primary question.  And Lenin said nothing.

    That is why they failed.  They were amateurs – and worse than amateurs, mystics – in the subject most essential to their success.

    To begin with, man is the most plastic and adaptable of animals.  He truly can be changed by his environment, and even by himself, to a unique degree, and that makes extreme ideas of progress reasonable.  On the other hand, he inherits a set of emotional impulses or instincts which, although they can be trained in various ways in the individual, cannot be eradicated from the race.  And no matter how much they may be repressed or redirected by training, they reappear in the original form – as sure as a hedgehog puts out spines – in every baby that is born.

    Amazing, considering these words were written in 1941.  Eastman had a naïve faith that science would remedy the situation, and that, as our knowledge of human behavior advanced, mankind would see the truth.  In fact, by 1941, those who didn’t want to hear the inconvenient truth that the various versions of paradise on earth they were busily concocting for the rest of us were foredoomed to failure already had the behavioral sciences well in hand.  They made sure that “science said” what they wanted it to say.  The result was the Blank Slate, a scientific debacle that brought humanity’s efforts to gain self-understanding to a screeching halt for more than half a century, and one that continues to haunt us even now.  Their agenda was simple – if human nature stood in the way of heaven on earth, abolish human nature!  And that’s precisely what they did.  It wasn’t the first time that ideological myths have trumped the truth, and it certainly won’t be the last, but the Blank Slate may well go down in history as the deadliest myth of all.

    I note in passing that the Blank Slate was the child of the “progressive Left,” the same people who today preen themselves on their great respect for “science.”  In fact, all the flat earthers, space alien conspiracy nuts, and anti-Darwin religious fanatics combined have never pulled off anything as damaging to the advance of scientific knowledge as the Blank Slate debacle.  It’s worth keeping in mind the next time someone tries to regale you with fairy tales about what “science says.”

  • Morality and the Floundering Philosophers

    Posted on May 26th, 2018 Helian No comments

    In my last post I noted the similarities between belief in objective morality, or the existence of “moral truths,” and traditional religious beliefs. Both posit the existence of things without evidence, with no account of what these things are made of (assuming they are not made of nothing), and with no plausible explanation of how these things came into existence or why their existence is necessary. In both cases one can cite many reasons why the believers in these nonexistent things want to believe in them. In both cases, for example, the livelihood of myriads of “experts” depends on maintaining the charade. Philosophers are no different from priests and theologians in this respect, but their problem is even bigger. If Darwin gave the theologians a cold, he gave the philosophers pneumonia. Not long after he published his great theory it became clear, not only to him, but to thousands of others, that morality exists because the behavioral traits which give rise to it evolved. The Finnish philosopher Edvard Westermarck formalized these rather obvious conclusions in The Origin and Development of the Moral Ideas (1906) and Ethical Relativity (1932). At that point, belief in the imaginary entities known as “moral truths” became entirely superfluous. Philosophers have been floundering behind their curtains ever since, trying desperately to maintain the illusion.

    An excellent example of the futility of their efforts may be found online in the Stanford Encyclopedia of Philosophy in an entry entitled Morality and Evolutionary Biology. The most recent version was published in 2014.  It’s rather long, but to better understand what follows it would be best if you endured the pain of wading through it.  However, in a nutshell, it seeks to demonstrate that, even if there is some connection between evolution and morality, it’s no challenge to the existence of “moral truths,” which we are to believe can be detected by well-trained philosophers via “reason” and “intuition.”  Quaintly enough, the earliest source given for a biological explanation of morality is E. O. Wilson.  Apparently the Blank Slate catastrophe is as much a bugaboo for philosophers as for scientists.  Evidently it’s too indelicate for either of them to mention that the behavioral sciences were completely derailed for upwards of 50 years by an ideologically driven orthodoxy.  In fact, a great many highly intelligent scientists and philosophers wrote a great deal more than Wilson about the connection between biology and morality before they were silenced by the high priests of the Blank Slate.  Even during the Blank Slate men like Sir Arthur Keith had important things to say about the biological roots of morality.  Robert Ardrey, by far the single most influential individual in smashing the Blank Slate hegemony, addressed the subject at length long before Wilson, as did thinkers like Konrad Lorenz and Niko Tinbergen.  Perhaps if its authors expect to be taken seriously, this “Encyclopedia” should at least set the historical record straight.

    It’s already evident in the Overview section that the author will be running with some dubious assumptions.  For example, he speaks of “morality understood as a set of empirical phenomena to be explained,” and the “very different sets of questions and projects pursued by philosophers when they inquire into the nature and source of morality,” as if they were examples of the non-overlapping magisteria once invoked by Stephen Jay Gould. In fact, if one “understands the empirical phenomena” of morality, then the problem of the “nature and source of morality” is hardly “non-overlapping”; it solves itself.  The suggestion that they are non-overlapping depends on the assumption that “moral truth” exists in a realm of its own.  A bit later the author confirms he is making that assumption as follows:

    Moral philosophers tend to focus on questions about the justification of moral claims, the existence and grounds of moral truths, and what morality requires of us.  These are very different from the empirical questions pursued by the sciences, but how we answer each set of questions may have implications for how we should answer the other.

    He allows that philosophy and the sciences must inform each other on these “distinct” issues.  In fact, neither philosophy nor the sciences can have anything useful to say about these questions, other than to point out that they relate to imaginary things.  “Objects” in the guise of “justification of moral claims,” “grounds of moral truths,” and the “requirements of morality” exist only in fantasy.  The whole burden of the article is to maintain that fantasy, and insist that the mirage is real.  We are supposed to be able to detect that the mirage is real by thinking really hard until we “grasp moral truths” and “gain moral knowledge.”  It is never explained what kind of reasoning process leads to “truths” and “knowledge” about things that don’t exist.  Consider, for example, the following from the article:

    …a significant amount of moral judgment and behavior may be the result of gaining moral knowledge, rather than just reflecting the causal conditioning of evolution.  This might apply even to universally held moral beliefs or distinctions, which are often cited as evidence of an evolved “universal moral grammar.”  For example, people everywhere and from a very young age distinguish between violations of merely conventional norms and violations of norms involving harm, and they are strongly disposed to respond to suffering with concern.  But even if this partly reflects evolved psychological mechanisms or “modules” governing social sentiments and responses, much of it may also be the result of human intelligence grasping (under varying cultural conditions) genuine morally relevant distinctions or facts – such as the difference between the normative force that attends harm and that which attends mere violations of convention.

    It’s amusing to occasionally substitute “the flying spaghetti monster” or “the great green grasshopper god” for the author’s “moral truths.”  The “proofs” of their existence work just as well.  In the above, he is simply assuming the existence of “morally relevant distinctions,” and further assuming that they can be grasped and understood logically.  Such assumptions fly in the face of the work of many philosophers who demonstrated that moral judgments are always grounded in emotions, sometimes referred to by earlier authors as “sentiments” or “passions,” and that it is therefore impossible to arrive at moral truths through reason alone.  Assuming some undergraduate didn’t write the article, the author must have had at least a passing familiarity with some of these people.  The Earl of Shaftesbury, for example, demonstrated the decisive role of “natural affections” as the origin of moral judgment in his Inquiry Concerning Virtue or Merit (1699), even noting in that early work the similarities between humans and the higher animals in that regard.  Francis Hutcheson very convincingly demonstrated the impotence of reason alone in detecting moral truths, and the essential role of “instincts and affections” as the origin of all moral judgment, in his An Essay on the Nature and Conduct of the Passions and Affections (1728).  Hutcheson thought that God was the source of these passions and affections.  It remained for David Hume to present similar arguments on a secular basis in his A Treatise of Human Nature (1740).

    The author prefers to ignore these earlier philosophers, focusing instead on the work of Jonathan Haidt, who has also insisted on the role of emotions in shaping moral judgment.  Here I must impose on the reader’s patience with a long quote to demonstrate the type of “logic” we’re dealing with.  According to the author,

    There are also important philosophical worries about the methodologies by which Haidt comes to his deflationary conclusions about the role played by reasoning in ordinary people’s moral judgments.

    To take just one example, Haidt cites a study where people made negative moral judgments in response to “actions that were offensive yet harmless, such as…cleaning one’s toilet with the national flag.” People had negative emotional reactions to these things and judged them to be wrong, despite the fact that they did not cause any harms to anyone; that is, “affective reactions were good predictors of judgment, whereas perceptions of harmfulness were not” (Haidt 2001, 817). He takes this to support the conclusion that people’s moral judgments in these cases are based on gut feelings and merely rationalized, since the actions, being harmless, don’t actually warrant such negative moral judgments. But such a conclusion would be supported only if all the subjects in the experiment were consequentialists, specifically believing that only harmful consequences are relevant to moral wrongness. If they are not, and believe—perhaps quite rightly (though it doesn’t matter for the present point what the truth is here)—that there are other factors that can make an action wrong, then their judgments may be perfectly appropriate despite the lack of harmful consequences.

    This is in fact entirely plausible in the cases studied: most people think that it is inherently disrespectful, and hence wrong, to clean a toilet with their nation’s flag, quite apart from the fact that it doesn’t hurt anyone; so the fact that their moral judgment lines up with their emotions but not with a belief that there will be harmful consequences does not show (or even suggest) that the moral judgment is merely caused by emotions or gut reactions. Nor is it surprising that people have trouble articulating their reasons when they find an action intrinsically inappropriate, as by being disrespectful (as opposed to being instrumentally bad, which is much easier to explain).

    Here one can but roll one’s eyes.  It doesn’t matter a bit whether the subjects are consequentialists or not.  Haidt’s point is that logical arguments will always break down at some point, whether they are based on harm or not, because moral judgments are grounded in emotions.  Harm plays a purely ancillary role.  One could just as easily ask why the action in question is considered disrespectful, and the chain of logical reasons would break down just as surely.  Whoever wrote the article must know what Haidt is really saying, because Haidt refers explicitly to the ideas of Hume in the same book.  Absent the alternative that the author simply doesn’t know what he’s talking about, we must conclude that he is deliberately misrepresenting what Haidt was trying to say.

    One of the author’s favorite conceits is that one can apply “autonomous applications of human intelligence,” meaning applications free of emotional bias, to the discovery of “moral truths” in the same way those logical faculties are applied in such fields as algebraic topology, quantum field theory, population biology, etc.  In his words,

    We assume in general that people are capable of significant autonomy in their thinking, in the following sense:

    Autonomy Assumption: people have, to greater or lesser degrees, a capacity for reasoning that follows autonomous standards appropriate to the subjects in question, rather than in slavish service to evolutionarily given instincts merely filtered through cultural forms or applied in novel environments. Such reflection, reasoning, judgment and resulting behavior seem to be autonomous in the sense that they involve exercises of thought that are not themselves significantly shaped by specific evolutionarily given tendencies, but instead follow independent norms appropriate to the pursuits in question (Nagel 1979).

    This assumption seems hard to deny in the face of such abstract pursuits as algebraic topology, quantum field theory, population biology, modal metaphysics, or twelve-tone musical composition, all of which seem transparently to involve precisely such autonomous applications of human intelligence.

    This, of course, leads up to the argument that one can apply this “autonomy assumption” to moral judgment as well.  The problem is that, in the other fields mentioned, one actually has something to reason about.  In mathematics, for example, one starts with a collection of axioms that are simply accepted as true, without worrying about whether they are “really” true or not.  In physics, there are observables that one can measure and record as a check on whether one’s “autonomous application of intelligence” was warranted or not.  In other words, one has physical evidence.  The same goes for the other subjects mentioned.  In each case, one is reasoning about something that actually exists.  In the case of morality, however, “autonomous intelligence” is being applied to a phantom.  Again, the same arguments are just as strong if one applies them to grasshopper gods.  “Autonomous intelligence” is useless if it is “applied” to something that doesn’t exist.  You can “reflect” all you want about the grasshopper god, but he will still stubbornly refuse to pop into existence.  The exact nature of the recondite logical gymnastics one must perform to successfully apply “autonomous intelligence” in this way is never explained.  Perhaps a Ph.D. in philosophy from Stanford is a prerequisite before one can even dare to venture forth on such a daunting logical quest.  Perhaps then, in addition to the sheepskin, they fork over a philosopher’s stone that enables one to transmute lead into gold, create the elixir of life, and extract “moral truths” right out of the vacuum.

    In short, the philosophers continue to flounder.  Their logical demonstrations of nonexistent “moral truths” are similar in kind to logical demonstrations of the existence of imaginary super-beings, and just as threadbare.  Why does it matter?  I can’t supply you with any objective “oughts” here, but at least I can tell you my personal prejudices on the matter, and my reasons for them.  We are living in a time of moral chaos, and will continue to do so until we accept the truth about the evolutionary origin of human morality and the implications of that truth.  There are no objective moral truths, and it will be extremely dangerous for us to continue to ignore that fact.  Competing morally loaded ideologies are already demonstrably disrupting our political systems.  It is all too likely that we will once again experience what happens when fanatics stuff their “moral truths” down our throats, as they did in the last century with the morally loaded ideologies of Communism and Nazism.  Do you dislike being bullied by Social Justice Warriors?  I’m sorry to inform you that the bullying will continue unabated until we explode the myth that they are bearers of “moral truths” that they are justified, according to “autonomous logic,” in imposing on the rest of us.  I could go on and on, but do I really need to?  Isn’t it obvious that a world full of fanatical zealots, all utterly convinced that they have a monopoly on “moral truth,” and a perfect right to impose these “truths” on everyone else, isn’t exactly a utopia?  Allow me to suggest that, instead, it might be preferable to live according to a simple and mutually acceptable “absolute” morality, in which “moral relativism” is excluded, and which doesn’t change from day to day in willy-nilly fashion according to the whims of those who happen to control the social means of communication.  As counter-intuitive as it seems, the only practicable way to such an outcome is acceptance of the fact that morality is a manifestation of evolved human nature, and of the truth that there are no such things as “moral truths.”


  • Morality and the Spiritualism of the Atheists

    Posted on May 11th, 2018 Helian No comments

    I’m an atheist.  I concluded there was no God when I was 12 years old, and never looked back.  Apparently many others have come to the same conclusion in western democratic societies where there is access to diverse opinions on the subject, and where social sanctions and threats of force against atheists are no longer as intimidating as they once were.  Belief in traditional religions is gradually diminishing in such societies.  However, those beliefs have hardly been replaced by “pure reason.”  They have merely been replaced by a new form of “spiritualism.”  Indeed, I would maintain that most atheists today have as strong a belief in imaginary things as the religious believers they so often despise.  They believe in the “ghosts” of good and evil.

    Most atheists today may be found on the left of the ideological spectrum.  A characteristic trait of leftists today is the assumption that they occupy the moral high ground. That assumption can only be maintained by belief in a delusion, a form of spiritualism, if you will – that there actually is a moral high ground.  Ironically, while atheists are typically blind to the fact that they are delusional in this way, it is often perfectly obvious to religious believers.  Indeed, this insight has led some of them to draw conclusions about the current moral state of society similar to my own.  Perhaps the most obvious conclusion is that atheists have no objective basis for claiming that one thing is “good” and another thing is “evil.”  For example, as noted by Tom Trinko at American Thinker in an article entitled “Imagine a World with No Religion,”

    Take the Golden Rule, for example. It says, “Do onto others what you’d have them do onto you.” Faithless people often point out that one doesn’t need to believe in God to believe in that rule. That’s true. The problem is that without God, there can’t be any objective moral code.

    My reply would be, that’s quite true, and since there is no God, there isn’t any objective moral code, either.  However, most atheists, far from being “moral relativists,” are highly moralistic.  As a consequence, they are dumbfounded by anything like Trinko’s remark.  It pulls the moral rug right out from under their feet.  Typically, they try to get around the problem by appealing to moral emotions.  For example, they might say something like, “What?  Don’t you think it’s really bad to torture puppies to death?”, or, “What?  Don’t you believe that Hitler was really evil?”  I certainly have a powerful emotional response to Hitler and tortured puppies.  However, no matter how powerful those emotions are, I realize that they can’t magically conjure objects into being that exist independently of my subjective mind.  Most leftists, and hence, most so-called atheists, actually do believe in the existence of such objects, which they call “good” and “evil,” whether they admit it explicitly or not.  Regardless, they speak and act as if the objects were real.

    The kinds of speech and actions I’m talking about are ubiquitous and obvious.  For example, many of these “atheists” assume a dictatorial right to demand that others conform to novel versions of “good” and “evil” they may have concocted yesterday or the day before.  If those others refuse to conform, they exhibit all the now familiar symptoms of outrage and virtuous indignation.  Do rational people imagine that they are gods with the right to demand that others obey whatever their latest whims happen to be?  Do they assume that their subjective, emotional whims somehow immediately endow them with a legitimate authority to demand that others behave in certain ways and not in others?  I certainly hope that no rational person would act that way.  However, that is exactly the way that many so-called atheists act.  To the extent that we may consider them rational at all, then, we must assume that they actually believe that whatever versions of “good” or “evil” they happen to favor at the moment are “things” that somehow exist on their own, independently of their subjective minds.  In other words, they believe in ghosts.

    Does this make any difference?  I suggest that it makes a huge difference.  I personally don’t enjoy being constantly subjected to moralistic bullying.  I doubt that many people enjoy jumping through hoops to conform to the whims of others.  I submit that it may behoove those of us who don’t like being bullied to finally call out this type of irrational, quasi-religious behavior for what it really is.

    It also makes a huge difference because this form of belief in imaginary objects has led us directly into the moral chaos we find ourselves in today.  New versions of “absolute morality” are now popping up on an almost daily basis.  Obviously, we can’t conform to all of them at once, and must therefore put up with the inconvenience of either keeping our mouths shut or risking being furiously condemned as “evil” by whatever faction we happen to offend.  Again, traditional theists are a great deal more clear-sighted than “atheists” about this sort of thing.  For example, in an article entitled, “Moral relativism can lead to ethical anarchy,” Christian believer Phil Schurrer, a professor at Bowling Green State University, writes,

    …the lack of a uniform standard of what constitutes right and wrong based on Natural Law leads to the moral anarchy we see today.

    Prof. Schurrer is right about the fact that we live in a world of moral anarchy.  I also happen to agree with him that most of us would find it useful and beneficial if we could come up with a “uniform standard of what constitutes right and wrong.”  Where I differ with him is on the rationality of attempting to base that standard on “Natural Law,” because there is no such thing.  For religious believers, “Natural Law” is law passed down by God, and since there is no God, there can be no “Natural Law,” either.  How, then, can we come up with such a uniform moral code?

    I certainly can’t suggest a standard based on what is “really good” or “really bad” because I don’t believe in the existence of such objects.  I can only tell you what I would personally consider expedient.  It would be a standard that takes into account what I consider to be some essential facts.  These are as follows.

    • What we refer to as morality is an artifact of “human nature,” or, in other words, innate predispositions that affect our behavior.
    • These predispositions exist because they evolved by natural selection.
    • They evolved by natural selection because they happened to improve the odds that the genes responsible for their existence would survive and reproduce at the time and in the environment in which they evolved.
    • We are now living at a different time, and in a different environment, and it cannot be assumed that blindly responding to the predispositions in question will have the same outcome now as it did when those predispositions evolved.  Indeed, it has been repeatedly demonstrated that such behavior can be extremely dangerous.
    • Outcomes of these predispositions include a tendency to judge the behavior of others as “good” or “evil.”  These categories are typically deemed to be absolute, and to exist independently of the conscious minds that imagine them.
    • Human morality is dual in nature.  Others are perceived in terms of ingroups and outgroups, with different standards applying to what is deemed “good” or “evil” behavior towards those others depending on the category to which they are imagined to belong.

    I could certainly expand on this list, but the above are some of the most salient and essential facts about human morality.  If they are true, then it is possible to make at least some preliminary suggestions about how a “uniform standard” might look.  It would be as simple as possible.  It would be derived to minimize the dangers referred to above, with particular attention to the dangers arising from ingroup/outgroup behavior.  It would be limited in scope to interactions between individuals and small groups in cases where the rational analysis of alternatives is impractical due to time constraints, etc.  It would be in harmony with innate human behavioral traits, or “human nature.”  It is our nature to perceive good and evil as real objective things, even though they are not.  This implies there would be no “moral relativism.”  Once in place, the moral code would be treated as an absolute standard, in conformity with the way in which moral standards are usually perceived.  One might think of it as a “moral constitution.”  As with political constitutions, there would necessarily be some means of amending it when needed.  However, it would not be open to arbitrary innovations spawned by the emotional whims of noisy minorities.

    How would such a system be implemented?  It’s certainly unlikely that any state will attempt it any time in the foreseeable future.  Perhaps it might happen gradually, just as changes to the “moral landscape” have usually happened in the past.  For that to happen, however, it would be necessary for significant numbers of people to finally understand what morality is, and why it exists.  And that is where, as an atheist, I must part company with Mr. Trinko, Prof. Schurrer, and the rest of the religious right.  Progress towards a uniform morality that most of us would find a great deal more useful and beneficial than the versions currently on tap, regardless of what goals or purposes we happen to be pursuing in life, cannot be based on the illusion that a “natural law” exists that has been handed down by an imaginary God, any more than it can be based on the emotional whims of leftist bullies.  It must be based on a realistic understanding of what kind of animals we are, and how we came to be.  However, such self-knowledge will remain inaccessible until we shed the shackles of religion.  Perhaps, as they witness many of the traditional churches increasingly becoming leftist political clubs before their eyes, people on the right of the political spectrum will begin to find it less difficult to free themselves from those shackles.  I hope so.  I think that an Ansatz based on simple, traditional moral rules, such as the Ten Commandments, is more likely to lead to a rational morality than one based on furious rants over who should be allowed to use what bathrooms.  In other words, I am more optimistic that a useful reform of morality will come from the right rather than the left of the ideological spectrum, as it now stands.  Most leftists today are much too heavily invested in indulging their moral emotions to escape from the world of illusion they live in.  To all appearances they seriously believe that blindly responding to these emotions will somehow magically result in “moral progress” and “human flourishing.”  Conservatives, on the other hand, are unlikely to accomplish anything useful in terms of a rational morality until they free themselves of the “God delusion.”  It would seem, then, that for such a moral “revolution” to happen, it will be necessary for those on both the left and the right to shed their belief in “spirits.”


  • On Legitimizing Moral Laws: “Purpose” as a God Substitute

    Posted on January 14th, 2018 Helian 4 comments

    The mental traits responsible for moral behavior did not evolve because they happened to correspond to “universal moral truths.”  They evolved because they increased the odds that the responsible genes would survive and reproduce.  The evolutionary origins of morality explain why we imagine the existence of “universal moral truths” to begin with.  We imagine that “moral truths” exist as objective things, independent of the minds that imagine them, because there was a selective advantage to perceiving them in that way.  Philosophers have long busied themselves with the futile task of “proving” that these figments of their imaginations really do exist just as they imagine them – as independent things.  Of course, even though they’ve been trying for thousands of years, they’ve never succeeded, for the very good reason that the things whose existence they’ve been trying to prove don’t exist.  No matter how powerfully our imaginations portray these illusions to us as real things, they remain illusions.

    God has always served as a convenient prop for objective morality.  It has always seemed plausible to many that, if God says something is morally good, it really is good.  Plato exposed the logical flaws of this claim in his Euthyphro.  However, such quibbles may be conveniently ignored by those who believe that the penalty for meddling with the logical basis of divine law is an eternity in hell.  They dispose of Plato by simply accepting without question the axiom that God is good.  If God is good, then his purposes must be good.  If, as claimed by the 18th century Scottish philosopher Francis Hutcheson, he endowed us with an innate moral sense, which serves as the fundamental source of morality, then he must have done it for a purpose.  Since that purpose is Godly, and therefore good in itself, moral rules that are true expressions of our God-given moral sense must be good in themselves as well. QED

    Unfortunately, there is no God, a fact that has become increasingly obvious over the years as the naturalistic explanations of the universe supplied by the advance of science have supplanted supernatural ones at an accelerating rate.  As a result, atheists already make up a large proportion of the population in many countries where threats of violence and ostracism are no longer effective props for the old religions.  However, most of these atheists haven’t yet succeeded in divorcing themselves from the spirit world.  They still believe that disembodied Goods and Evils hover about us in ghostly form, endowed with a magical power to dictate “right” behavior, not only to themselves, but to everyone else as well.

    The challenge these latter-day moralists face, of course, is to supply an explanation of just how it is that the moral rules supplied by their vivid imaginations acquire the right to dictate behavior to the rest of us.  This is no easy task, given that anyone who really believes in an objective morality, independent of the subjective minds of individuals, must also account for the “moral law’s” recent disconcerting habit of undergoing drastic changes on an almost daily basis.

    In fact, it is an impossible task, since the “objective” ghosts of Good and Evil exist no more in reality than does God.  However, there are powerful incentives to believe in these ghosts, just as there are powerful incentives to believe in God.  As a result, there has been no lack of trying.  One gambit in this direction, entitled Could Morality Have a Transcendent Evolved Purpose?, recently turned up at From Darwin to Eternity, one of the blogs hosted by Psychology Today.  According to the author, Michael Price, the “standard naturalistic conclusion” is that,

    It is hard to see how morality could ultimately serve any larger kind of purpose.  Conventional religions sidestep this problem, of course, by positing a supernatural purpose provider.  But that’s an unsatisfactory solution, if you wish to maintain a naturalistic worldview.

    Here it is important to notice an implied assumption that becomes increasingly obvious as we read further in the article.  The assumption is that, if we can successfully identify a “larger kind of purpose,” then the imagined good is somehow transformed into objective Good, and imagined evil into objective Evil.  There is no basis whatsoever for this assumption, regardless of where the “larger kind of purpose” comes from.  It is important to notice this disconnect, because Price apparently believes that, if morality can be shown to serve a “transcendent naturalistic purpose,” then it must thereby gain objective legitimacy and independent normative power.  He doesn’t say so explicitly, but if he doesn’t believe it, his article is pointless.  He goes on to claim that, according to the “conventional interpretation,” of those who accept the fact of evolution by natural selection,

    There can be no transcendent purpose, because no widely-understood natural process can generate such purpose. Transcendent purpose is a subject for religion, and maybe for philosophy, but not for science. That’s the standard naturalistic conclusion.

    I note in passing that, while this may be “the standard naturalistic conclusion,” it certainly hasn’t stopped the vast majority of its proponents from thinking and acting just as if they believed in objective morality.  I know of not a single exception among contemporary scientists or philosophers of any note.  One can find artifacts in the writings or sayings of all of them that make no sense unless they believe in objective morality, regardless of what their philosophical theories on the subject happen to be.  Typically these artifacts take the form of assertions that some individual or group of individuals is morally good or evil, without any suggestion that the assertion is merely an opinion.  Such statements make no sense absent a belief in some objective Good, generally applicable to others besides themselves, and not merely an artifact of their subjective whims.  The innate illusion of objective Good has been too powerful for any of them to entirely free themselves of the fantasy.  Be that as it may, Price tells us that there is also an “unconventional interpretation.”  He poses the rhetorical question,

    Could morality be “universal” in the sense that there is some transcendent moral purpose to human existence itself?… This is a tricky question because natural selection is the only process known to science that can ultimately engineer “purpose” (moral or otherwise). It does so by generating “function,” which is essentially synonymous with “purpose”: the function/purpose of an eye, for example, is to see.

    Notice the quotation marks around “purpose” and “function” when they’re first used in this quote.  That’s as it should be, as the terms are only used in this context as a convenient form of shorthand.  They refer to the reasons that the characteristics in question happened to enhance the odds that the responsible genes would survive and reproduce.  However, these shorthand terms should never be confused with a real function or purpose.  In the case of “purpose,” for example, consider the actual definition found in the Merriam-Webster Dictionary:

    Purpose: 1: something set up as an object or end to be attained 2 : a subject under discussion or an action in course of execution

    Clearly, someone must be there to set up the object or end, or to discuss the subject.  In the case of evolution, no “someone” is there.  In other words, there is no purpose to evolution or its outcomes in the proper sense of the term.  However, if you look at the final sentence in the Price quote above, you’ll notice something odd has happened.  The quote marks have disappeared.  “Function/purpose” has suddenly become function/purpose!  One might charitably assume that Price is still using the terms in the same sense, and has simply neglected the quote marks.  If so, one would be assuming wrong.  A bit further on, the “purpose” that we saw change to purpose metastasizes again.  It is now not just a purpose, but a “transcendent naturalistic purpose!”  In Price’s words,

    I think the standard naturalistic conclusion is premature, however. There is one way in which transcendent naturalistic purpose could in fact exist.

    In the very next sentence, “transcendent naturalistic purpose” has completed the transformation from egg to butterfly, and becomes “transcendent moral purpose!” Again quoting Price,

    If selection is the only natural source of purpose, then transcendent moral purpose could exist if selection were operating at some level more fundamental than the biological.  Specifically, transcendent purpose would require a process of cosmological natural selection, with universes being selected from a multiverse based on their reproductive ability, and intelligence emerging (as a subroutine of cosmological evolution) as a higher-level adaptation for universe reproduction.  From this perspective, intelligent life (including its moral systems) would have a transcendent purpose: to eventually develop the sociopolitical and technical expertise that would enable it to cooperatively create new universes…  These ideas are highly speculative and may seem strange, especially if you haven’t heard them before.

    That’s for sure! In his conclusion Price gets a bit slippery about whether he personally buys into this extravagant word game. As he puts it,

    At any rate, my goal here is not to argue that these ideas are likely to be true, nor that they are likely to be false. I simply want to point out that if they’re false, then it seems like it must also be false – from a naturalistic perspective, at least – that morality could have any transcendent purpose.

    This implies that Price accepts the idea that, if “these ideas are likely to be true,” then morality actually could have a “transcendent purpose.”  Apparently we are to assume that moral rules could somehow acquire objective legitimacy by virtue of having a “transcendent purpose.”  The “proof” goes something like this:

    1. Morality evolved because it serves a “purpose.”
    2. Miracle a happens.
    3. Therefore, morality evolved because it serves a purpose.
    4. Miracle b happens.
    5. Therefore, morality evolved to serve an independent naturalistic purpose.
    6. Miracle c happens.
    7. Therefore, morality evolved to serve a transcendental moral purpose.
    8. Miracle d happens.
    9. If a transcendental moral purpose exists, then it automatically becomes our duty to obey moral rules that serve that purpose.  The rules acquire objective legitimacy.

    So much for a rigorous demonstration that a new God in the form of “transcendental moral purpose” exists to replace the old God.  I doubt much has been gained here.  At least the “proofs” of the old God’s existence didn’t require such a high level of “mental flexibility.”  Would it be impertinent to ask how the emotional responses we normally associate with morality could become completely divorced from the “transcendental moral purpose” they supposedly exist to serve?  Has anyone told the genes responsible for the predispositions that are the ultimate cause of our moral behavior about this “transcendental moral purpose”?

    In short, it’s clear that while belief in God is falling out of fashion, at least in some countries, belief in an equally imaginary “objective morality” most decidedly is not.  We have just reviewed an example of the ludicrous lengths to which our philosophers and “experts on morality” are willing to go to prop up their faith in this particular mirage.  It has been much easier for them to give up the God fantasy than the fantasy of their own moral righteousness.  Indeed, legions of these “experts on morality” would quickly find themselves unemployed if it were generally realized that what they claim to be “expert” about is a mere fantasy.  So goes life in the asylum.

  • The Red Centennial

    Posted on November 7th, 2017 Helian 4 comments

    Today marks the 100th anniversary of the Bolshevik Revolution.  If there’s anything to celebrate, it’s that Communism was tried, it failed, and as a result it is no longer viable as a global secular religion.  Unfortunately, the cost of the experiment in human lives was far greater than that of any comparable revolutionary ideology before or since.  It’s not as if we weren’t warned.  As I noted in an earlier post, Herbert Spencer was probably the most accurate prophet of all.  In his A Plea for Liberty he wrote,

    Already on the continent, where governmental organizations are more elaborate and coercive than here, there are chronic complaints of the tyranny of bureaucracies – the hauteur and brutality of their members. What will these become when not only the more public actions of citizens are controlled, but there is added this far more extensive control of all their respective daily duties? What will happen when the various divisions of this vast army of officials, united by interests common to officialism – the interest of the regulators versus those of the regulated – have at their command whatever force is needful to suppress insubordination and act as ‘saviors of society’? Where will be the actual diggers and miners and smelters and weavers, when those who order and superintend, everywhere arranged class above class, have come, after some generations, to intermarry with those of kindred grades, under feelings such as are operative under existing classes; and when there have been so produced a series of castes rising in superiority; and when all these, having everything in their own power, have arranged modes of living for their own advantage: eventually forming a new aristocracy far more elaborate and better organized than the old?

    What will result from their (the bureaucracy’s) operation when they are relieved from all restraints?…The fanatical adherents of a social theory are capable of taking any measures, no matter how extreme, for carrying out their views: holding, like the merciless priesthoods of past times, that the end justifies the means. And when a general socialistic organization has been established, the vast, ramified, and consolidated body of those who direct its activities, using without check whatever coercion seems to them needful in the interests of the system (which will practically become their own interests) will have no hesitation in imposing their rigorous rule over the entire lives of the actual workers; until eventually, there is developed an official oligarchy, with its various grades, exercising a tyranny more gigantic and more terrible than any which the world has seen.

    Spencer’s prophecy was eloquently confirmed by former Communist Milovan Djilas in his The New Class, where he wrote,

    The transformation of the Party apparatus into a privileged monopoly (new class, nomenklatura) existed in embryonic form in Lenin’s prerevolutionary book Professional Revolutionaries, and in his time was already well under way. It is just this which has been the major reason for the decay of communism… Thus he, Stalin, the greatest Communist – for so everyone thought him save the dogmatic purists and naive “quintessentialists” – the incarnation of the real essence, the real possibilities, of the ideal – this greatest of all Communists, killed off more Communists than did all the opponents of Communism taken together, worldwide… Ideology exterminates its true believers.

    The biggest danger we face in the aftermath of Communism is that the lesson will be forgotten.  It was spawned on the left of the ideological spectrum, and today’s leftists would prefer that the monster they created be forgotten.  Since they control the present, in the form of the schools, they also control the past, according to the dictum set forth by George Orwell in his 1984.  As a result, today’s students hear virtually nothing about the horrors of Communism.  Instead, they are fed a bowdlerized “history,” according to which nothing of any significance has ever happened in the United States except the oppression and victimization of assorted racial and other minority groups.  No matter that, by any rational standard, the rise of the United States has been the greatest boon to “human flourishing” in the last 500 years.  No matter that Communism would almost certainly have spread its grip a great deal further and lasted a great deal longer if the US had never existed.  The Left must be spared embarrassment.  Therefore, the US is portrayed as the “villain,” and Communism has been dropped down the memory hole.

    Indeed, if Bernie Sanders’ recent bid for the Presidency, sadly sabotaged by the Clinton machine via the DNC, is any indication, socialism, if not Communism, is still alive and well.  Of course, anyone with even a passing knowledge of history knows that socialism has been tried in a virtually infinite array of guises, from the “hard” versions that resulted in the decapitation of Cambodia and the Soviet Union to the “soft” version foisted on the United Kingdom after World War II.  It has invariably failed.  No matter.  According to its proponents, that’s only because “it hasn’t been done right.”  These people are nothing if not remarkably slow learners.

    Consider the implications.  According to Marx, the proletarian revolution to come could not possibly result in the slaughter and oppression characteristic of past revolutions because, instead of the dictatorship of a minority over a majority, it would result in the dictatorship of the proletarian majority over a bourgeois minority.  However, the Bolshevik Revolution did result in oppression and mass slaughter on an unprecedented scale.  How to rescue Marx?  We could say that the revolution wasn’t really a proletarian revolution.  That would certainly have come as a shock to Lenin and his cronies.  If not a proletarian revolution, what kind was it?  There aren’t really many choices.  Was it a bourgeois revolution?  Then how is it that all the “owners of the social means of production” who were unlucky enough to remain in the country had their throats slit?  Who among the major players was an “owner of the social means of production”?  Lenin?  Trotsky?  Stalin?  I doubt it.  If not a bourgeois revolution, could it have been a feudal revolution?  Not likely in view of the fact that virtually the entire surviving Russian nobility could be found a few years later waiting tables in French restaurants.  If we take Marx at his word, it must, in fact, have been a proletarian revolution, and Marx, in fact, must have been dead wrong.  In one of the last things he wrote, Trotsky, probably the best and the brightest of all the old Bolsheviks, admitted as much.  He had hoped until the end that Stalinism was merely a form of “bureaucratic parasitism,” and the proletariat would soon shrug it off and take charge as they should have from the start.  However, just before he was murdered by one of Stalin’s assassins, he wrote,

    If, however, it is conceded that the present war (World War II) will provoke not revolution but a decline of the proletariat, then there remains another alternative; the further decay of monopoly capitalism, its further fusion with the state and the replacement of democracy wherever it still remained by a totalitarian regime. The inability of the proletariat to take into its hands the leadership of society could actually lead under these conditions to the growth of a new exploiting class from the Bonapartist fascist bureaucracy. This would be, according to all indications, a regime of decline, signaling the eclipse of civilization… Then it would be necessary in retrospect to establish that in its fundamental traits the present USSR was the precursor of a new exploiting regime on an international scale… If (this) prognosis proves to be correct, then, of course, the bureaucracy will become a new exploiting class. However onerous the second perspective may be, if the world proletariat should actually prove incapable of fulfilling the mission placed upon it by the course of development, nothing else would remain except only to recognize that the socialist program, based on the internal contradictions of capitalist society, ended as a Utopia.

    And so it did.  Trotsky, convinced socialist that he was, saw the handwriting on the wall at last.  However, Trotsky was a very smart man.  Obviously, our latter-day socialists aren’t quite as smart.  It follows that we drop the history of Communism down Orwell’s “memory hole” at our peril.  If we refuse to learn anything from the Communist experiment, we may well find them foisting another one on us before long.  Those who do want to learn something about it would do well to be wary of latter-day “interpretations.”  With Communism, as with anything else, it’s necessary to consult the source literature yourself if you want to uncover anything resembling the truth.  There is a vast amount of great material out there.  Allow me to mention a few of my personal favorites.

    There were actually two Russian Revolutions in 1917.  In the first, which occurred in March (new style), the tsar was deposed and a provisional government established in place of the old monarchy.  Among other things it issued decrees that resulted in a fatal relaxation of discipline in the Russian armies facing the Germans and Austro-Hungarians, paving the way for the Bolshevik coup that took place later that year.  Perhaps the best account of the disintegration of the armies that followed was written by a simple British nurse named Florence Farmborough in her With the Armies of the Tsar: A Nurse at the Russian Front, 1914-18.  The Communists themselves certainly learned from this experience, executing thousands of their own soldiers during World War II at the least hint of insubordination.  My favorite firsthand account of the revolution itself is The Russian Revolution 1917: An Eyewitness Account, by N. N. Sukhanov, a Russian socialist who played a prominent role in the Provisional Government.  He described Stalin at the time as a “grey blur.”  Sukhanov made the mistake of returning to the Soviet Union.  He was arrested in 1937 and executed in 1940.  Another good firsthand account is Political Memoirs, 1905-1917, by Pavel Miliukov.  An outstanding account of the aftermath of the revolution is Cursed Days, by novelist Ivan Bunin.  Good accounts by diplomats include An Ambassador’s Memoirs by French ambassador to the court of the tsar Maurice Paleologue, and British Agent by Bruce Lockhart.

    When it comes to the almost incredible brutality of Communism, it’s hard to beat Solzhenitsyn’s classic The Gulag Archipelago.  Other good accounts include Journey into the Whirlwind by Yevgenia Ginzburg and Back in Time by Nadezhda Joffe.  Ginzburg was the wife of a high Communist official, and Joffe was the daughter of Adolph Joffe, one of the most prominent early Bolsheviks.  Both were swept up in the Great Purge of the late 1930’s, and both were very lucky to survive life in the Gulag camps.  Ginzburg had been “convicted” of belonging to a “counterrevolutionary Trotskyist terrorist organization,” and almost miraculously escaped being shot outright.  She spent the first years of her sentence in solitary confinement.  In one chapter of her book she describes what happened to an Italian Communist who dared to resist her jailers:

    I heard the sound of several feet, muffled cries, and a shuffling noise as though a body were being pulled along the stone floor.  Then there was a shrill cry of despair; it continued for a long while on the same note, and stopped abruptly.

    It was clear that someone was being dragged into a punishment cell and was offering resistance… The cry rang out again and stopped suddenly, as though the victim had been gagged… But it continued – a penetrating, scarcely human cry which seemed to come from the victim’s very entrails, to be viscous and tangible as it reverberated in the narrow space.  Compared with it, the cries of a woman in labor were sweet music.  They, after all, express hope as well as anguish, but here there was only a vast despair.

    I felt such terror as I had not experienced since the beginning of my wanderings through this inferno.  I felt that at any moment I should start screaming like my unknown neighbor, and from that it could only be a step to madness.

    At that moment I heard clearly, in the midst of the wailing, the words “Communista Italiana, Communista Italiana!”  So that was it!  No doubt she had fled from Mussolini just as Klara, my cellmate at Butyrki, had fled from Hitler.

    I heard the Italian’s door opened, and a kind of slithering sound which I could not identify.  Why did it remind me of flower beds?  Good God, it was a hose!  So Vevers (one of her jailers) had not been joking when he had said to me:  “We’ll hose you down with freezing water and then shove you in a punishment cell.”

    The wails became shorter as the victim gasped for breath.  Soon it was a tiny shrill sound, like a gnat’s.  The hose played again; then I heard blows being struck, and the iron door was slammed to.  Dead silence.

    That was just a minute part of the reality of the “worker’s paradise.”  Multiply it millions of times and you will begin to get some inkling of the reality of Communism under Stalin.  Many of the people who wrote such accounts began as convinced Communists and remained so until the end of their days.  They simply couldn’t accept the reality that the dream they had dedicated their lives to was really a nightmare.  Victor Serge was another prominent Bolshevik and “Trotskyist” who left an account of his own struggle to make sense of what he saw happening all around him in his Memoirs of a Revolutionary:

    Nobody was willing to see evil in the proportions it had reached.  As for the idea that the bureaucratic counterrevolution had attained power, and that a new despotic State had emerged from our own hands to crush us, and reduce the country to absolute silence – nobody, nobody in our ranks was willing to admit it.  From the depths of his exile in Alma-Ata Trotsky affirmed that this system was still ours, still proletarian, still Socialist, even though sick; the Party that was excommunicating, imprisoning, and beginning to murder us remained our Party, and we still owed everything to it:  we must live only for it, since only through it could we serve the Revolution.  We were defeated by Party patriotism:  It both provoked us to rebel and turned us against ourselves.

    Serge was lucky.  He was imprisoned years before the Great Purge began in earnest, and was merely sentenced to internal exile in Siberia.  The secret police even supplied him and a fellow exile with a bread ration.  After a few years, thanks to pressure from foreign socialists, he was allowed to leave the Soviet Union.  Conditions for the normal citizens of Orenburg, where he spent his exile, were, if anything, worse than his, even though more than a decade had elapsed since the advent of the “worker’s paradise.”  In the following he describes what happened when they received their bread ration:

    I heard shouting from the street, and then a shower of vigorous knocks on the door.  “Quick, Victor Lvovich, open up!”  Bobrov was coming back from the bakery, with two huge four-kilo loaves of black bread on his shoulders.  He was surrounded by a swarm of hungry children, hopping after the bread like sparrows, clinging on his clothes, beseeching:  “A little bit, uncle, just a little bit!”  They were almost naked.  We threw them some morsels, over which a pitched battle promptly began.  The next moment, our barefooted maidservant brought boiling water, unasked, for us to make tea.  When she was alone with me for a moment, she said to me, her eyes smiling, “Give me a pound of bread and I’ll give you the signal in a minute… And mark my words, citizen, I can assure you that I don’t have the syphilis, no, not me…”  Bobrov and I decided to go out only by turns, so as to keep an eye on the bread.

    So much for the look of real oppression, as opposed to the somewhat less drastic versions that occupy the florid imaginations of today’s Social Justice Warriors.  Speaking of SJWs, especially of the type whose tastes run to messianic revolutionary ideologies, the demise of Communism has had an interesting effect.  It has pulled the rug out from under their feet, leaving them floating in what one might describe as an ideological vacuum.  Somehow writing furious diatribes against Trump on Facebook just doesn’t scratch the same itch as Communism did in its day.  When it comes to fanatical worldviews, oddly enough, radical Islam is the only game in town.  The SJWs can’t really fall for it hook, line and sinker the way they once did for Communism.  After all, its ideology is diametrically opposed to what they’ve claimed to believe in lo these many years.  The result has been the weird love affair between the radical Left and Islam that’s been such an obvious aspect of the ideological scene lately, complete with bold flirtations and coy, steamy glances from afar.  Strange bedfellows indeed!

    In terms of the innate, ingroup/outgroup behavior of human beings I’ve often discussed on this blog, the outgroup of the Communist ingroup was, of course, the “bourgeoisie.”  If even the most tenuous connection could be made between some individual and the “bourgeoisie,” it became perfectly OK to murder and torture that individual, after the fashion of our species since time immemorial.  We saw nearly identical behavior directed against the “aristocrats” after the French Revolution, and against the Jews under the Nazis.  If our species learns nothing else from its experiment with Communism, it is to be hoped that we at least learn the extreme danger of continuing to uncritically indulge this aspect of our behavioral repertoire.  I realize that it is very likely to be a vain hope.  If anything, ingroup/outgroup identification according to ideology is intensifying and becoming increasingly dangerous.  The future results are unpredictable, but are very unlikely to be benign.  Let us at least hope that, under the circumstances, no new messianic secular religion appears on the scene to fill the vacuum left by Communism.  We can afford to wait a few more centuries for that.

  • Life Among the Mormons

    Posted on February 18th, 2017 Helian 8 comments

    A few years ago I moved into an almost entirely Mormon neighborhood.  It turns out that Mormons are a great deal more tolerant than the average atheist Social Justice Warrior.  As a result I was able to learn some things about them that certainly won’t be news to other Mormons, but may interest the readers of this blog.

    One day, shortly after my arrival, I was chatting with my next door neighbor, and she mentioned that some of the neighbors in our age group were in the habit of getting together socially every other week, and wondered if I would like to tag along.  I said, “Sure.”  She suggested I ride along with her and her husband, as the group rotated from house to house, and they knew the neighborhood.  Well, when we were underway, she casually slipped me a large Bible.  It turns out that the “social gathering” was what the Mormons call Family Home Evening, or FHE.  The host is responsible for coming up with a program that relates to the church in some way.  This time around it involved each guest reading passages from the Bible with a common theme, which the group would then discuss.  At other times the Book of Mormon or other Mormon religious books might be substituted for the Bible.  Once we were to act out different parables, and the others would try to guess what they were.  On another occasion there was a presentation about the Mormon system of indexing genealogical records, and how volunteers might help with the process.  I wasn’t particularly uncomfortable with any of this, as I attended Sunday School regularly and went to church camps as a child, and still know my Bible fairly well.

    After the first meeting I e-mailed my neighbor to thank her for taking me to FHE, but told her that I had no intention of changing my religion.  I quoted my favorite Bible passage, Ephesians 2:8-9, in self-defense.  It goes like this:

    For by grace are ye saved through faith; and that not of yourselves:  It is the gift of God:  Not of works, lest any man should boast.

    I strongly recommend it to my fellow atheists.  It’s great for warding off pesky proselytizers.  After all, if you’ve read the Bible and have an open mind, then nothing more can be done for you by human agency.  The rest depends on God, “lest any man should boast.”  It usually works, but not this time.  It turns out my neighbor was something of an activist in the Mormon community, and was bound and determined to make sure that when “grace” came, I would be standing close enough to the source to notice it.  She said that I’d made a very favorable impression on the other neighbors, and they would be very disappointed if I stopped coming to FHE.  They knew I wasn’t a Mormon, but it didn’t matter.

    Well, my curiosity got the best of me, and I agreed to keep coming.  I must admit with a certain degree of shame that I never flat out said I was an atheist.  I mentioned that an ancestor had been a Baptist preacher, and I think they took me for some kind of a hard core Protestant, probably with a distinct Calvinist bent.  As an extenuating circumstance I might mention that I’m not much of a cook, and delicious snacks were served at the end of each meeting.  I’m not talking potato chips.  I’m not sure if “my” FHE was typical, but these people were real gourmets.  They laid out some goodies that gladdened my heart, and were a welcome relief from the hamburgers and bologna sandwiches that were my usual fare.  It’s possible my FHE was an outlier in things other than food as well.  My boss was a Mormon, and seemed surprised when he heard that I attended.  He said I’d better watch out.  I was getting pretty close to the fire!

    In the meetings that followed I always felt accepted by the group, and never “othered” for not being a Mormon.  None of them ever came to my door to engage in spiritual arm twisting (that was limited to the local Jehovah’s Witnesses), nor was I ever subjected to any heavy-handed attempts at conversion.  They did let me know on occasion that, if I had any questions about the church, they would be glad to answer them.  They also encouraged me to come to church to see what it was like, and always invited me to other Mormon social affairs.  These included a barn dance, “Trick or Trunk,” a convenient substitute for trick or treating on Halloween at which candy is passed out from the trunks of cars parked side by side, Christmas dinner at the church, a Christmas pageant, etc.  The atmosphere at these affairs always reminded me of the church I grew up in during the 50’s and 60’s.  Now it is a typical mainstream Protestant church, attended mainly by people who appear to be well over 70, but in those days it was a great deal more vibrant, with a big congregation that included many children.  So it was in the Mormon church.  There were members of all ages, and there must have been 50 boys and girls in the children’s choir.  In a word, you didn’t get the feeling that the church was dying.

    I did attend church on one occasion, and it was quite different from a typical Protestant service.  To begin with, there are no regular pastors.  Everything is done by lay people.  The church services last about three hours.  Ours was divided into a general service, a lesson delivered by one of the lay people, and a period in which the men and women were divided into separate groups.  Of course, there’s also Sunday school for the children.

    Each church is attended by one or more “wards,” and there are several wards in a “stake.”  Each ward has a lay “Bishop,” who is appointed for a period of five years, give or take.  The stake is headed by a lay “President,” also appointed for a limited time.  These part-time clergymen aren’t paid, don’t get to wear any gorgeous vestments, and certainly nothing like the Pope’s Gucci slippers, but they still have all the counseling, visiting, and other duties of more conventional clergy.  I was familiar with both my ward Bishop and stake President.  Both were intelligent and capable professional men.  They were respected by the rest of the congregation, but the ones I knew weren’t patronizing or in any way “stuck up.”  They were just members of the congregation at the service I attended, but perhaps they occasionally play a more active role.

    Hard core Mormons give ten percent of their gross income to the church.  I’m not sure what percentage is “hard core,” and I’m also not sure what the church does with all the money.  That question has probably been asked ever since the days of Joseph Smith.  I suspect the IRS is reasonably well informed, but otherwise they keep financial matters pretty close to the vest.  In any case, only members who tithe are allowed to attend services at or be married in a Mormon Temple.

    Mormons are a great deal more “moral” when it comes to reproduction than the average atheist.  In other words, their behavior in such matters is consistent with what the relevant predispositions accomplished at the time they evolved.  For example, the lady who tossed the Bible in my lap had 11 children and 37 grandchildren.  Large families were the rule in our neighborhood.  I can’t really understand the objections of the “anti-breeders” to such behavior in a country where the population would be declining if it weren’t for massive illegal immigration.  In any case, all those grandchildren and great-grandchildren will have inherited the earth long after the mouths of those who criticized their ancestors have been stopped with dust.

    The people in my ward included some who were brought up in the Mormon faith, and some, including my zealous neighbor lady, who had been converted later in life.  Among the former there were some older people who still had a lively memory of the days when polygamy was a great deal more common than it is now.  They recall that there were federal “revenuers” who were on the lookout for such arrangements just as their more familiar peers were snooping after moonshine stills.  A neighbor, aged about 80, recounted a story of one such family she had heard as a child.  A baby had been born to a man with several wives, but died soon after birth.  The “revenuers” were aware of the fact.  Soon, however, the stork arrived again, and this time delivered a healthy baby.  Shortly thereafter the man was sitting at the dinner table holding the new arrival when he was warned that inspectors were on the way to pay him a visit.  He took it on the lam out the back door, and hid in the family cemetery where the first child was buried.  When the inspectors arrived, they asked the wife who happened to be in the house where they could find her husband.  With a downcast look she replied, “He’s up in the cemetery with the baby.”  That statement was, of course, perfectly true.  The embarrassed “revenuers” muttered their condolences and left!

    I must say I had to clench my teeth occasionally on listening to some of the passages from the Book of Mormon.  On the other hand, there’s really nothing there that’s any more fantastic than the similar stories you can read in the Bible, or the lives of the saints.  In any case, what they believe strikes me as a great deal less dangerous than the equally fantastic belief held by the “men of science” for half a century that there is no such thing as human nature, not to mention “scientific” Marxism-Leninism.  According to some atheists, indoctrinating children with stories from the Bible and the Book of Mormon constitutes “child abuse.”  I have my doubts given the fact that they seem to accomplish those most “moral” of all goals, survival and reproduction, a great deal better than most of my fellow infidels.  Many of my fellow atheists have managed to convince themselves that they’ve swallowed the “red pill,” but in reality they’re just as delusional as the Mormons, and their delusions are arguably more destructive.  I personally would rather see my children become Mormons than dour, barren, intolerant, and ultra-Puritanical Social Justice Warriors, striding down the path to genetic suicide with a self-righteous scowl.  I would also much rather live among spiritual Mormons than secular Communists.

    As one might expect, there were many non-Mormons in the local community who “othered” the Mormons, and vice versa.  Nothing is more natural for our species than to relegate those who are in any way different to the outgroup.  For example, Mormons were supposed to stick together and favor each other in business dealings, government appointments, etc.  Unfortunately, there has never been a population of humans who consider themselves members of the same group that has not done precisely the same, at least to some extent.  Mormon religious beliefs were considered “crazy,” as opposed, apparently, to such “perfectly sane” stories as Noah’s ark, the loaves and the fishes, the magical conversion of bread and wine to flesh and blood, etc.  Mormons were supposed to imagine that they wore “magic clothes.”  In reality the Mormons don’t consider such garments any more “magical” than a nun’s habit or a Jew’s yarmulke.

    In general, I would prefer that people believe the truth.  I am an atheist, and don’t believe in the existence of any God or gods.  I’m not an “accommodationist,” and I don’t buy Stephen Jay Gould’s notion of “Non-Overlapping Magisteria.”  On the other hand, when people treat me with kindness and generosity, as I was treated in the Mormon community, I’m not in the habit of responding with stones and brickbats, either.  The hard core Hobbesians out there will claim that all that kindness sprang from selfish motives, but hard core Hobbesians must also perforce admit that neither they nor anyone else acts any differently.

    If you want to get a fictional “taste” of what Mormons are like, I recommend the film “Once I was a Beehive.”  You can rent it at Amazon.  It’s about a teenage girl whose mom remarries, this time to a Mormon.  The flavor of the Mormon community pictured in the film reflects my own impressions pretty accurately.  The Mormon Bishop, in particular, is very typical and true to life.

    As for me, in the fullness of time I left the land of the Mormons and now live among the heathen once again.  None of them has seen fit to follow me and pull me back from the fiery furnace by the scruff of my neck.  It may be that they finally realized I was a hopeless case, doomed to sizzle over the coals in the hereafter for the edification of the elect.  I’m afraid they’re right about that.  If they do come after me they’ll find me armed with my copy of Ephesians, as stubborn as ever.

  • The God Myth and the “Humanity Can’t Handle The Truth” Gambit

    Posted on May 12th, 2016 Helian 5 comments

    Hardly a day goes by without some pundit bemoaning the decline in religious faith.  We are told that great evils will inevitably befall mankind unless we all believe in imaginary super-beings.  Of course, these pundits always assume a priori that the particular flavor of religion they happen to favor is true.  Absent that assumption, their hand wringing boils down to the argument that we must all somehow force ourselves to believe in God whether that belief seems rational to us or not.  Otherwise, we won’t be happy, and humanity won’t flourish.

    An example penned by Dennis Prager entitled Secular Conservatives Think America Can Survive the Death of God that appeared recently at National Review Online is typical of the genre.  Noting that even conservative intellectuals are becoming increasingly secular, he writes that,

    They don’t seem to understand that the only solution to many, perhaps most, of the social problems ailing America and the West is some expression of Judeo-Christian religion.

    In another article entitled If God is Dead…, Pat Buchanan echoes Prager, noting, in a rather selective interpretation of history, that,

    When, after the fall of the Roman Empire, the West embraced Christianity as a faith superior to all others, as its founder was the Son of God, the West went on to create modern civilization, and then went out and conquered most of the known world.

    The truths America has taught the world, of an inherent human dignity and worth, and inviolable human rights, are traceable to a Christianity that teaches that every person is a child of God.

    Today, however, with Christianity virtually dead in Europe and slowly dying in America, Western culture grows debased and decadent, and Western civilization is in visible decline.

    Both pundits draw attention to a consequence of the decline of traditional religions that is less a figment of their imaginations: the rise of secular religions to fill the ensuing vacuum.  The examples typically cited include Nazism and Communism.  There does seem to be some innate feature of human behavior that predisposes us to adopt such myths, whether of the spiritual or secular type.  It is most unlikely that it comes in the form of a “belief in God” or “religion” gene.  It would be very difficult to explain how anything of the sort could pop into existence via natural selection.  It seems reasonable, however, that less specialized and more plausible behavioral traits could account for the same phenomenon.  Which raises the question, “So what?”

    Pundits like Prager and Buchanan are putting the cart before the horse.  Before one touts the advantages of one brand of religion or another, isn’t it first expedient to consider the question of whether it is true?  If not, then what is being suggested is that mankind can’t handle the truth.  We must be encouraged to believe in a pack of lies for our own good.  And whatever version of “Judeo-Christian religion” one happens to be peddling, it is, in fact, a pack of lies.  The fact that it is a pack of lies, and obviously a pack of lies, explains, among other things, the increasingly secular tone of conservative pundits so deplored by Buchanan and Prager.

    It is hard to understand how anyone who uses his brain as something other than a convenient stuffing for his skull can still take traditional religions seriously.  The response of the remaining true believers to the so-called New Atheists is telling in itself.  Generally, they don’t even attempt to refute their arguments.  Instead, they resort to ad hominem attacks.  The New Atheists are too aggressive, they have bad manners, they’re just fanatics themselves, etc.  They are not arguing against the “real God,” who, we are told, is not an object, a subject, or a thing ever imagined by sane human beings, but some kind of an entity perched so high up on a shelf that profane atheists can never reach Him.  All this spares the faithful from making fools of themselves with ludicrous mental flip flops to explain the numerous contradictions in their holy books, tortured explanations of why it’s reasonable to assume the “intelligent design” of something less complicated by simply assuming the existence of something vastly more complicated, and implausible yarns about how an infinitely powerful super-being can be both terribly offended by the paltry sins committed by creatures far more inferior to Him than microbes are to us, and at the same time incapable of just stepping out of the clouds for once and giving us all a straightforward explanation of what, exactly, he wants from us.

    In short, Prager and Buchanan would have us somehow force ourselves, perhaps with the aid of brainwashing and judicious use of mind-altering drugs, to believe implausible nonsense, in order to avoid “bad” consequences.  One can’t dismiss this suggestion out of hand.  Our species is a great deal less intelligent than many of us seem to think.  We use our vaunted reason to satisfy whims we take for noble causes, without ever bothering to consider why those whims exist, or what “function” they serve.  Some of them apparently predispose us to embrace ideological constructs that correspond to spiritual or secular religions.  If we use human life as a metric, P&B would be right to claim that traditional spiritual religions have been less “bad” than modern secular ones, costing only tens of millions of lives via religious wars, massacres of infidels, etc., whereas the modern secular religion of Communism cost, in round numbers, 100 million lives, and in a relatively short time, all by itself.  Communism was also “bad” to the extent that we value human intelligence, tending to selectively annihilate the brightest portions of the population in those countries where it prevailed.  There can be little doubt that this “bad” tendency substantially reduced the average IQ in nations like Cambodia and the Soviet Union, resulting in what one might call their self-decapitation.  Based on such metrics, Prager and Buchanan may have a point when they suggest that traditional religions are “better,” to the extent that one realizes that one is merely comparing one disaster to another.

    Can we completely avoid the bad consequences of believing the bogus “truths” of religions, whether spiritual or secular?  There seems to be little reason for optimism on that score.  The demise of traditional religions has not led to much in the way of rational self-understanding.  Instead, as noted above, secular religions have arisen to fill the void.  Their ideological myths have often trumped reason in cases where there has been a serious confrontation between the two, occasionally resulting in the bowdlerization of whole branches of the sciences.  The Blank Slate debacle was the most spectacular example, but there have been others.  As belief in traditional religions has faded, we have gained little in the way of self-knowledge in their wake.  On the contrary, our species seems bitterly determined to avoid that knowledge.  Perhaps our best course really would be to start looking for a path back inside the “Matrix,” as Prager and Buchanan suggest.

    All I can say is that, speaking as an individual, I don’t plan to take that path myself.  It has always seemed self-evident to me that, whatever our goals and aspirations happen to be, we are more likely to reach them if we base our actions on an accurate understanding of reality rather than myths, on truth rather than falsehood.  A rather fundamental class of truths are those that concern, among other things, where those goals and aspirations came from to begin with.  These are the truths about human behavior; why we want what we want, why we act the way we do, why we are moral beings, why we pursue what we imagine to be noble causes.  I believe that the source of all these truths, the “root cause” of all these behaviors, is to be found in our evolutionary history.  The “root cause” we seek is natural selection.  That fact may seem inglorious or demeaning to those who lack imagination, but it remains a fact for all that.  Perhaps, after we sacrifice a few more tens of millions in the process of chasing paradise, we will finally start to appreciate its implications.  I think we will all be better off if we do.

  • More Fun with Moral Realism

    Posted on January 16th, 2016 Helian No comments

    What is moral realism?  Edward Westermarck provided a good definition in the first paragraph of his Ethical Relativity:

    Ethics is generally looked upon as a “normative” science, the object of which is to find and formulate moral principles and rules possessing objective validity.  The supposed objectivity of moral values, as understood in this treatise, implies that they have a real existence apart from any reference to a human mind, that what is said to be good or bad, right or wrong, cannot be reduced merely to what people think to be good or bad, right or wrong.  It makes morality a matter of truth and falsity, and to say that a judgment is true obviously means something different from the statement that it is thought to be true.  The objectivity of moral judgments does not presuppose the infallibility of the individual who pronounces such a judgment, nor even the accuracy of a general consensus of opinion; but if a certain course of conduct is objectively right, it must be thought to be right by all rational beings who judge truly of the matter and cannot, without error, be judged to be wrong.

    Westermarck dismissed moral realism as a chimera.  So do I.  Indeed, in view of what we now know about the evolutionary origins of moral emotions, the idea strikes me as ludicrous.  It is, however, treated as matter-of-factly as if it were an unquestionable truth, and not only in the general public.  Philosophers merrily discuss all kinds of moral conundrums and paradoxes in academic journals, apparently in the belief that they have finally uncovered the “truth” about such matters, to all appearances with no more fear of being ridiculed than the creators of the latest Paris fashions.  The fact is all the more disconcerting if one takes the trouble to excavate the reasons supplied for this stubborn belief that subjective emotional constructs in the minds of individuals actually relate to independent things.  Typically, they are threadbare almost beyond belief.

    Recently I discussed the case of G. E. Moore, who, after dismissing the arguments of virtually everyone who had attempted a “proof” of moral realism before him as fatally flawed by the naturalistic fallacy, supplied a “proof” of his own.  It turned out that the “objective good” consisted of those things that were most likely to please an English country gentleman.  The summum bonum was described as something like sitting in a cozy house with a nice glass of wine while listening to Beethoven.  The only “proof” supplied for the independent existence of this “objective good” was Moore’s assurance that he was an expert in such matters, and that it was obvious to him that he was right.

    I recently uncovered another such “proof,” this time concocted in the fertile imagination of the Swedish philosopher Torbjörn Tännsjö. It turned up in an interview on the website of 3:AM Magazine under the title, The Hedonistic Utilitarian.  In response to interviewer Richard Marshall’s question,

    Why are you a moral realist and what difference does this make to how you go about investigating morals from, for example, a non-realist?

    Tännsjö replies,

    I am indeed a moral realist.  In particular, I believe that one basic question, what we ought to do, period (the moral question), is a genuine one.  There exists a true answer to it, which is independent of our thought and conceptualization.  My main argument in defense of the position is this.  It is true (independently of our conceptualization) that it is wrong to inflict pain on a sentient creature for no reason (she doesn’t deserve it, I haven’t promised to do it, it is not helpful to this creature or to anyone else if I do it, and so forth).  But if this is a truth, existing independently of our conceptualization, then at least one moral fact (this one) exists and moral realism is true.  We have to accept this, I submit, unless we can find strong reasons to think otherwise.

    In reading this, I was reminded of PFC Littlejohn, who happened to serve in my unit when I was a young lieutenant in the Army.  Whenever I happened to pull his leg more egregiously than even he could bear, he would typically respond, “You must be trying to bullshit me, sir!”  Apparently Tännsjö doesn’t consider Darwin’s theory, or Darwin’s own opinion regarding the origin of the moral emotions, or the flood of books and papers on the evolutionary origins of moral behavior, or the convincing arguments for the selective advantage of just such an emotional response as he describes, or the utter lack of evidence for the physical existence of “moral truths” independent of our “thought and conceptualization,” as sufficiently strong reasons “to think otherwise.”  Tännsjö continues,

    Moral nihilism comes with a price we can now see.  It implies that it is not wrong (independently of our conceptualization) to do what I describe above; this does not mean that it is all right to do it either, of course, but yet, for all this, I find this implication from nihilism hard to digest.  It is not difficult to accept for moral reasons.  If it is false both that it is wrong to perform this action and that it is right to perform it, then we need to engage in difficult issues in deontic logic as well.

    Yes, in the same sense that deontic logic is necessary to determine whether it is true or false that there are fairies in Richard Dawkins’ garden.  No deontic logic is necessary here – just the realization that Tännsjö is trying to make truth claims about something that is not subject to truth claims.  The claim that it is objectively “not wrong” to do what he describes is as much a truth claim, and therefore just as irrational, as the claim that it is wrong.  As for his equally irrational worries about “moral nihilism,” his argument is similar to those of the religious true believers who think that, because they find a world without a God unpalatable, one must therefore perforce pop into existence.  Westermarck accurately described the nature of Tännsjö’s “proof” in his The Origin and Development of the Moral Ideas, where he wrote,

    As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivise the moral estimate to which it gives rise, in other words, to assign to it universal validity.  The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments.  The intensity of his emotions makes him the victim of an illusion.

    The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood.  The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.

    Today, Westermarck is nearly forgotten, while G. E. Moore is a household name among moral philosophers.  The Gods and angels of traditional religions seem to be in eclipse in Europe and North America, but “the substance of things hoped for,” and “the evidence of things not seen” are still with us, transmogrified into the ghosts and goblins of moral realism.  We find atheist social justice warriors hurling down their anathemas and interdicts more furiously than anything ever dreamed of by the Puritans and Pharisees of old, supremely confident in their “objective” moral purity.

    And what of moral nihilism?  Dream on!  Anyone who seriously believes that anything like moral nihilism can result from the scribblings of philosophers has either been living under a rock, or is constitutionally incapable of observing the behavior of his own species.  Human beings will always behave morally.  The question is, what kind of a morality can we craft for ourselves that is in harmony with our moral emotions, that does the least harm, and that most of us can live with.  I personally would prefer one that is based on an accurate understanding of what morality is and where it comes from.

    Do I think that anything of the sort is on the horizon in the foreseeable future?  No.  When it comes to belief in religion and/or moral realism, one must simply get used to living in Bedlam.

  • James Burnham and the Anthropology of Liberalism

    Posted on October 16th, 2015 Helian 2 comments

    James Burnham was an interesting anthropological data point in his own right.  A left-wing activist in the 30’s, he eventually became a Trotskyite.  By the 50’s, however, he had completed an ideological double back flip to conservatism, and became a Roman Catholic convert on his deathbed.  He was an extremely well-read intellectual, and a keen observer of political behavior.  His best-known book is The Managerial Revolution, published in 1941.  Among others, it strongly influenced George Orwell, who had something of a love/hate relationship with Burnham.  For example, in an essay in Tribune magazine in January 1944 he wrote,

    Recently, turning up a back number of Horizon, I came upon a long article on James Burnham’s Managerial Revolution, in which Burnham’s main thesis was accepted almost without examination.  It represented, many people would have claimed, the most intelligent forecast of our time.  And yet – founded as it was on a belief in the invincibility of the German army – events have already blown it to pieces.

    A bit over a year later, in February 1945, however, we find Burnham had made more of an impression on Orwell than the first quote implies.  In another essay in the Tribune he wrote,

    …by the way the world is actually shaping, it may be that war will become permanent.  Already, quite visibly and more or less with the acquiescence of all of us, the world is splitting up into the two or three huge super-states forecast in James Burnham’s Managerial Revolution.  One cannot draw their exact boundaries as yet, but one can see more or less what areas they will comprise.  And if the world does settle down into this pattern, it is likely that these vast states will be permanently at war with one another, although it will not necessarily be a very intensive or bloody kind of war.

    Of course, these super-states later made their appearance in Orwell’s most famous novel, 1984.  However, he was right about Burnham the first time.  He had an unfortunate penchant for making wrong predictions, often based on the assumption that transitory events must represent a trend that would continue into the indefinite future.  For example, impressed by the massive industrial might brought to bear by the United States during World War II, and its monopoly of atomic weapons, he suggested in The Struggle for the World, published in 1947, that we immediately proceed to force the Soviet Union to its knees, and establish a Pax Americana.  A bit later, in 1949, impressed by a hardening of the U.S. attitude towards the Soviet Union after the war, he announced The Coming Defeat of Communism in a book of that name.  He probably should have left it at that, but reversed his prognosis in Suicide of the West, which appeared in 1964.  By that time it seemed to Burnham that the United States had become so soft on Communism that the defeat of Western civilization was almost inevitable.  The policy of containment could only delay, but not stop the spread of Communism, and in 1964 it seemed that once a state had fallen behind the Iron Curtain it could never throw off the yoke.

    Burnham didn’t realize that, in the struggle with Communism, time was actually on our side.  A more far-sighted prophet, a Scotsman by the name of Sir James Mackintosh, had predicted in the early 19th century that the nascent versions of Communism then already making their appearance would eventually collapse.  He saw that the Achilles’ heel of what he recognized as a secular religion was its ill-advised proclamation of a coming paradise on earth, where it could be fact-checked, instead of in the spiritual realms of the traditional religions, where it couldn’t.  In the end, he was right.  After they had broken 100 million eggs, people finally noticed that the Communists hadn’t produced an omelet after all, and the whole, seemingly impregnable edifice collapsed.

    One thing Burnham did see very clearly, however, was the source of the West’s weakness – liberalism.  He was well aware of its demoralizing influence, and its tendency to collaborate with the forces that sought to destroy the civilization that had given birth to it.  Inspired by what he saw as an existential threat, he carefully studied and analyzed the type of the western liberal, and its evolution away from the earlier “liberalism” of the 19th century.  Therein lies the real value of his Suicide of the West.  It still stands as one of the greatest analyses of modern liberalism ever written.  The basic characteristics of the type he described are as familiar more than half a century later as they were in 1964.  And this time his predictions regarding the “adjustments” in liberal ideology that would take place as its power expanded were spot on.

    In Chapters III-V of the book, Burnham developed the “more or less systematic set of ideas, theories and beliefs about society” characteristic of the liberal syndrome, and then listed nineteen of them, along with possible contrary beliefs, in Chapter VII.  Some of them have changed very little since Burnham’s day, such as,

    It is society – through its bad institutions and its failure to eliminate ignorance – that is responsible for social evils.  Our attitude toward those who embody these evils – of crime, delinquency, war, hunger, unemployment, communism, urban blight – should not be retributive but rather the permissive, rehabilitating, education approach of social service; and our main concern should be the elimination of the social conditions that are the source of the evils.

    Since there are no differences among human beings considered in their political capacity as the foundation of legitimate, that is democratic, government, the ideal state will include all human beings, and the ideal government is world government.

    The goal of political and social life is secular:  to increase the material and functional well-being of humanity.

    Some of the nineteen have begun to change quite noticeably since the publication of Suicide of the West, in just the ways Burnham suggested.  For example, items 9 and 10 on the list reflect a classic version of the ideology that would have been familiar to and embraced by “old school” liberals like John Stuart Mill:

    Education must be thought of as a universal dialogue in which all teachers and students above elementary levels may express their opinions with complete academic freedom.

    Politics must be thought of as a universal dialogue in which all persons may express their opinions, whatever they may be, with complete freedom.

    Burnham had already noticed signs of erosion in these particular shibboleths in his own day, as liberals gained increasing control of academia and the media.  As he put it,

    In both Britain and the United States, liberals began in 1962 to develop the doctrine that words which are “inherently offensive,” as far-Right but not communist words seem to be, do not come under the free speech mantle.

    In our own day of academic safe spaces and trigger warnings, there is certainly no longer anything subtle about this ideological shift.  Calls for suppression of “offensive” speech have now become so brazen that they have spawned divisions within the liberal camp itself.  One finds old school liberals of the Berkeley “Free Speech Movement” days resisting Gleichschaltung with the new regime, looking on with dismay as speaker after speaker is barred from university campuses for suspected thought crime.

    As noted above, Communism imploded before it could overwhelm the Western democracies, but the process of decay goes on.  Nothing about the helplessness of Europe in the face of the current inundation by third world refugees would have surprised Burnham in the least.  He predicted it as an inevitable expression of another fundamental characteristic of the ideology – liberal guilt.  Burnham devoted Chapter 10 of his book to the subject, and noted therein,

    Along one perspective, liberalism’s reformist, egalitarian, anti-discrimination, peace-seeking principles are, or at any rate can be interpreted as, the verbally elaborated projections of the liberal sense of guilt.

    and

    The guilt of the liberal causes him to feel obligated to try to do something about any and every social problem, to cure every social evil.  This feeling, too, is non-rational:  the liberal must try to cure the evil even if he has no knowledge of the suitable medicine or, for that matter, of the nature of the disease; he must do something about the social problem even when there is no objective reason to believe that what he does can solve the problem – when, in fact, it may well aggravate the problem instead of solving it.

    I suspect Burnham himself would have been surprised at the degree to which such “social problems” have multiplied in the last half a century, and the pressure to do something about them has only increased in the meantime.  As for the European refugees, consider the following corollaries of liberal guilt as developed in Suicide of the West:

    (The liberal) will not feel uneasy, certainly not indignant, when, sitting in conference or conversation with citizens of countries other than his own – writers or scientists or aspiring politicians, perhaps – they rake his country and his civilization fore and aft with bitter words; he is as likely to join with them in the criticism as to protest it.

    It follows that,

    …the ideology of modern liberalism – its theory of human nature, its rationalism, its doctrines of free speech, democracy and equality – leads to a weakening of attachment to groups less inclusive than Mankind.

    All modern liberals agree that government has a positive duty to make sure that the citizens have jobs, food, clothing, housing, education, medical care, security against sickness, unemployment and old age; and that these should be ever more abundantly provided.  In fact, a government’s duty in these respects, if sufficient resources are at its disposition, is not only to its own citizens but to all humanity.

    …under modern circumstances there is a multiplicity of interests besides those of our own nation and culture that must be taken into account, but an active internationalism in feeling as well as thought, for which “fellow citizens” tend to merge into “humanity,” sovereignty is judged an outmoded conception, my religion or no-religion appears as a parochial variant of the “universal ideas common to mankind,” and the “survival of mankind” becomes more crucial than the survival of my country and my civilization.

    For Western civilization in the present condition of the world, the most important practical consequence of the guilt encysted in the liberal ideology and psyche is this:  that the liberal, and the group, nation or civilization infected by liberal doctrine and values, are morally disarmed before those whom the liberal regards as less well off than himself.

    The inevitable implication of the above is that the borders of the United States and Europe must become meaningless in an age of liberal hegemony, as, indeed, they have.  In 1964 Burnham was not without hope that the disease was curable.  Otherwise, of course, he would never have written Suicide of the West.  He concluded,

    But of course the final collapse of the West is not yet inevitable; the report of its death would be premature.  If a decisive change comes, if the contraction of the past fifty years should cease and be reversed, then the ideology of liberalism, deprived of its primary function, will fade away, like those feverish dreams of the ill man who, passing the crisis of his disease, finds he is not dying after all.  There are a few small signs, here and there, that liberalism may already have started fading.  Perhaps this book is one of them.

    No, liberalism hasn’t faded.  The infection has only become more acute.  At best one might say that there are now a few more people in the West who are aware of the disease.  I am not optimistic about the future of Western civilization, but I am not foolhardy enough to predict historical outcomes.  Perhaps the fever will break, and we will recover, and perhaps not.  Perhaps there will be a violent crisis tomorrow, or perhaps the process of dissolution will drag itself out for centuries.  Objectively speaking, there is no “good” outcome and no “bad” outcome.  However, in the same vein, there is no objective reason why we must refrain from fighting for the survival of our civilization, our culture, or even the ethnic group to which we belong.

    As for the liberals, perhaps they should consider why all the fine moral emotions they are so proud to wear on their sleeves exist to begin with.  I doubt that the reason has anything to do with suicide.

    By all means, read the book.

  • Scientific Morality and the Illusion of Progress

    Posted on July 11th, 2015 Helian 4 comments

    British philosophers demonstrated the existence of a “moral sense” early in the 18th century.  We have now crawled through the rubble left in the wake of the Blank Slate debacle and finally arrived once again at a point they had reached more than two centuries ago.  Of course, men like Shaftesbury and Hutcheson thought this “moral sense” had been planted in our consciousness by God.  When Hume arrived on the scene a bit later it became possible to discuss the subject in secular terms.  Along came Darwin to suggest that this “moral sense” might have developed in the same way as the physical characteristics of our species: via evolution by natural selection.  Finally, a bit less than half a century later, Westermarck put two and two together, pointing out that morality was a subjective emotional phenomenon and, as such, not subject to truth claims.  His great work, The Origin and Development of the Moral Ideas, appeared in 1906.  Then the darkness fell.

    Now, more than a century later, we can once again at least discuss evolved morality without fear of excommunication by the guardians of ideological purity.  However, the guardians are still there, defending a form of secular Puritanism that yields nothing in intolerant piety to the religious Puritans of old.  We must not push the envelope too far, lest we suffer the same fate as Tim Hunt, with his impious “jokes,” or Matt Taylor, with his impious shirt.  We cannot just blurt out, like Westermarck, that good and evil are merely subjective artifacts of human moral emotions, so powerful that they appear as objective things.  We must at least pretend that these “objects” still exist.  In a word, we are in a holding pattern.

    One can actually pin down fairly accurately the extent to which we have recovered since our emergence from the dark age.  We are, give or take, about 15 years pre-Westermarck.  As evidence of this I invite the reader’s attention to a fascinating “textbook” for teachers of secular morality that appeared in 1891.  Entitled Elements of Ethical Science: A Manual for Teaching Secular Morality, by John Ogden, it taught the subject with all the most up-to-date Darwinian bells and whistles.  In an introduction worthy of Sam Harris the author asks the rhetorical question,

    Can pure morality be taught without inculcating religious doctrines, as these are usually interpreted and understood?

    and answers with a firm “Yes!”  He then proceeds to identify the basis for any “pure morality:”

    Man has inherently a moral nature, an innate moral sense or capacity.  This is necessary to moral culture, since, without the nature or capacity, its cultivation were impossible… This moral nature or capacity is what we call Moral Sense.  It is the basis of conscience.  It exists in man inherently, and, when enlightened, cultivated, and improved, it becomes the active conscience itself.  Conscience, therefore, is moral sense plus intelligence.

    The author recognizes the essential role of this Moral Sense as the universal basis of all the many manifestations of human morality, and one without which they could not exist.  It is to the moral sentiments what the sense of touch is to the other senses:

    (The Moral Sense) furnishes the basis or the elements of the moral sentiments and conscience, much in the same manner in which the cognitive faculties furnish the data or elements for thought and reasoning.  It is not a sixth sense, but it is to the moral sentiments what touch is to the other senses, a base on which they are all built or founded; a soil into which they are planted, and from which they grow… All the moral sentiments are, therefore, but the concrete modifications of the moral sense, or the applications of it, in a developed form, to the ordinary duties of life, as a sense of justice, of right and wrong, of obligation, duty, gratitude, love, etc., just as seeing, hearing, tasting and smelling are but modified forms of feeling or touch, the basis of all sense.

    And here, in a manner entirely similar to so many modern proponents of innate morality, Ogden goes off the tracks.  Like them, he cannot let go of the illusion of objective morality.  Just as the other senses inform us of the existence of physical things, the moral sense must inform us of the existence of another kind of “thing,” a disembodied, ghostly something that floats about independently of the “sense” that “detects” it, in the form of a pure, absolute truth.  There are numerous paths whereby one may, more or less closely, approach this truth, but they all converge on the same, universal thing-in-itself:

    …it must be conceded that, while we have a body of incontestable truth, constituting the basis of all morality, still the opinions of men upon minor points are so diverse as to make a uniform belief in dogmatical principles impossible.  The author maintains that moral truths and moral conduct may be reached from different routes or sources; all converging, it is true, to the same point:  and that it savors somewhat of illiberality to insist upon a uniform belief in the means or doctrines whereby we are to arrive at a perfect knowledge of the truth, in a human sense.

    The means by which this “absolute truth” acquires the normative power to dictate “oughts” to all and sundry is described in terms just as fuzzy as those used by the moral pontificators of our own day, as if it were ungenerous to even ask the question:

    When man’s ideas of right and wrong are duly formulated, recognized and accepted, they constitute what we denominate MORAL LAW.  The moral law now becomes a standard by which to determine the quality of human actions, and a moral obligation demanding obedience to its mandates.  The truth of this proposition needs no further confirmation.

    As they say in the academy to supply missing steps in otherwise elegant proofs, it’s “intuitively obvious to the casual observer.”  In those more enlightened times, only fifteen years elapsed before Westermarck demolished Ogden’s ephemeral thing-in-itself, pointing out that it couldn’t be confirmed because it didn’t exist, and was therefore not subject to truth claims.  I doubt that we’ll be able to recover the same lost ground so quickly in our own day.  Secular piety reigns in the academy, in some cases to a degree that would make the Puritans of old look like abandoned debauchees, and is hardly absent elsewhere.  Savage punishment is meted out to those who deviate from moral purity, whether flippant Nobel Prize winners or overly principled owners of small town bakeries.  Absent objective morality, the advocates of such treatment would lose their odor of sanctity and become recognizable as mere absurd bullies.  Without a satisfying sense of moral rectitude, bullying wouldn’t be nearly as much fun.  It follows that the illusion will probably persist a great deal longer than a decade and a half this time around.

    Be that as it may, Westermarck still had it right.  The “moral sense” exists because it evolved.  Failing this basis, morality as we know it could not exist.  It follows that there is no such thing as moral truth, or any way in which the moral emotions of one individual can gain a legitimate power to dictate rules of behavior to some other individual.  Until we find our way back to that rather elementary level of self-understanding, it will be impossible for us to deal rationally with our own moral behavior.  We’ll simply have to leave it on automatic pilot, and indulge ourselves in the counter-intuitive hope that it will serve our species just as well now as it did in the vastly different environment in which it evolved.