The world as I see it
  • Morality and the Ideophobes

    Posted on February 12th, 2017 Helian

    In our last episode I pointed out that, while some of the most noteworthy public intellectuals of the day occasionally pay lip service to the connection between morality and evolution by natural selection, they act and speak as if they believed the opposite.  If morality is an expression of evolved traits, it is necessarily subjective.  The individuals mentioned speak as if it were objective, and probably believe that it is.  What do I mean by that?  As the Finnish philosopher Edvard Westermarck put it,

    The supposed objectivity of moral values, as understood in this treatise (his Ethical Relativity, ed.) implies that they have a real existence apart from any reference to a human mind, that what is said to be good or bad, right or wrong, cannot be reduced merely to what people think to be good or bad, right or wrong.  It makes morality a matter of truth and falsity, and to say that a judgment is true obviously means something different from the statement that it is thought to be true.

    All of the individuals mentioned in my last post are aware that there is a connection between morality and its evolutionary roots.  If pressed, some of them will even admit the obvious consequence of this fact: that morality must be subjective.  However, neither they nor any other public intellectual that I am aware of actually behaves or speaks as if that consequence meant anything or, indeed, as if it were even true.  One can find abundant evidence of this simply by reading their own statements, some of which I quoted.  For example, according to Daniel Dennett, Trump supporters are “guilty.”  Richard Dawkins speaks of the man in pejorative terms that imply a moral judgment rather than a rational analysis of his actions.  Sam Harris claims that Trump is “unethical,” and Jonathan Haidt says that he is “morally wrong,” without any qualification to the effect that they are just making subjective judgments, and that the subjective judgments of others may be different and, for that matter, just as “legitimate” as theirs.

    A commenter suggested that I was merely quoting tweets, and that the statements may have been taken out of context, or would have reflected the above qualifications if more space had been allowed.  Unfortunately, I have never seen an instance in which one of the quoted individuals made a similar statement and then qualified it as suggested.  They invariably speak as if they were stating objective facts when making such moral judgments, with the implied assumption that individuals who don’t agree with them are “bad.”

    A quick check of the Internet will reveal that there are legions of writers out there commenting on the subjective nature of morality.  Not a single one I am aware of seems to realize that, if morality is subjective, their moral judgments lack any objective normative power or legitimacy whatsoever when applied to others.  Indeed, one commonly finds them claiming that morality is subjective, and as a consequence one is “morally obligated” to do one thing, and “morally obligated” not to do another, in the very same article, apparently oblivious to the fact that they are stating a glaring non sequitur.

    None of this should be too surprising.  We are not a particularly rational species.  We give ourselves far more credit for being “wise” than is really due.  Most of us simply react to atavistic urges, and seek to satisfy them.  Our imaginations portray Good and Evil to us as real, objective things, and so we thoughtlessly assume that they are.  It is in our nature to be judgmental, and we take great joy in applying these imagined standards to others.  Unfortunately, this willy-nilly assigning of others to the above imaginary categories is very unlikely to accomplish the same thing today as it did when the responsible behavioral predispositions evolved.  I would go further.  I would claim that this kind of behavior is not only no longer “adaptive”; it has become extremely dangerous.

    The source of the danger is what I call “ideophobia.”  So far, at least, it hasn’t had a commonly recognized name, but it is by far the most dangerous form of all the different flavors of “bigotry” that afflict us today.  By “bigotry” I really mean outgroup identification.  We all do it, without exception.  Some of the most dangerous manifestations of it exist in just those individuals who imagine they are immune to it.  All of us hate, despise, and are disgusted by the individuals in whatever outgroup happens to suit our fancy.  The outgroup may be defined by race, religion, ethnic group, nationality, and even sex.  I suspect, however, that by far the most common form of outgroup (and ingroup) identification today is by ideology.

    Members of ideologically defined ingroups have certain ideas and beliefs in common.  Taken together, they form the intellectual shack the ingroup in question lives in.  The outgroup consists of those who disagree with these core beliefs, and especially those who define their own ingroup by opposing beliefs.  Ideophobes hate and despise such individuals.  They indulge in a form of bigotry that is all the more dangerous because it has gone so long without a name.  Occasionally they will imagine that they advocate universal human brotherhood, and “human flourishing.”  In reality, “brotherhood” is the last thing ideophobes want when it comes to “thought crime.”  They do not disagree rationally and calmly.  They hate the “other,” to the point of reacting with satisfaction and even glee if the “other” suffers physical harm.  They often imagine themselves to be great advocates of diversity, and yet are blithely unaware of the utter lack of it in the educational, media, entertainment, and other institutions they control when it comes to diversity of opinion.  As for the ideological memes of the ingroup, they expect rigid uniformity.  What Dennett, Dawkins, Harris and Haidt thought they were doing was upholding virtue.  What they were really doing was better called “virtue signaling.”  They were assuring the other members of their ingroup that they “think right” about some of its defining “correct thoughts,” and registering the appropriate allergic reaction to the outgroup.

    I cannot claim that ideophobia is objectively immoral.  I do believe, however, that it is extremely dangerous, not only to me, but to everyone else on the planet.  I propose that it’s high time that we recognized the phenomenon as a manifestation of human nature that has long outlived its usefulness.  We need to recognize that ideophobia is essentially the same thing as racism, sexism, anti-Semitism, homophobia, xenophobia, or what have you.  The only difference is in the identifying characteristics of the outgroup.  The kind of behavior described is a part of what we are, and will remain a part of what we are.  That does not mean that it can’t be controlled.

    What evidence do I have that this type of behavior is dangerous?  There were two outstanding examples in the 20th century.  The Communists murdered 100 million people, give or take, weighted in the direction of the most intelligent and capable members of society, because they belonged to their outgroup, commonly referred to as the “bourgeoisie.”  The Nazis murdered tens of millions of Jews, Slavs, gypsies, and members of any other ethnicity that they didn’t recognize as belonging to their own “Aryan” ingroup.  There are countless examples of similar mayhem, going back to the beginnings of recorded history, and ample evidence that the same thing was going on much earlier.  As many of the Communists and Nazis discovered, what goes around comes around.  Millions of them became victims of their own irrational hatred.

    No doubt Dennett, Dawkins, Harris, Haidt and legions of others like them see themselves as paragons of morality and rationality.  I have my doubts.  With the exception of Haidt, they have made no attempt to determine why those they consider “deplorables” think the way they do, or to calmly analyze what might be their desires and goals, and to search for common ground and understanding.  As for Haidt, his declaration that the goals of his outgroup are “morally wrong” flies in the face of all the fine theories he recently discussed in his The Righteous Mind.  I would be very interested to learn how he thinks he can square this circle.  Neither he nor any of the others have given much thought to whether the predispositions that inspire their own desires and goals will accomplish the same thing now as when they evolved, and appear unconcerned about the real chance that they will accomplish the opposite.  They have not bothered to consider whether it even matters, and why, or whether the members of their outgroup may be acting a great deal more consistently in that respect than they do.  Instead, they have relegated those who disagree with them to the outgroup, slamming shut the door on rational discussion.

    In short, they have chosen ideophobia.  It is a dangerous choice, and may turn out to be a disastrous one, assuming we value survival.  I personally would prefer that we all learn to understand and seek to control the worst manifestations of our dual system of morality: our tendency to recognize ingroups and outgroups and apply different standards of good and evil to individuals depending on the category to which they belong.  I doubt that anything of the sort will happen any time soon, though.  Meanwhile, we are already witnessing the first violent manifestations of this latest version of outgroup identification.  It’s hard to say how extreme it will become before the intellectual fashions change again.  Perhaps the best we can do is sit back and collect the data.

  • Moral Nihilism, Moral Chaos, and Moral Truth

    Posted on October 5th, 2016 Helian

    The truth about morality is both simple and obvious.  It exists as a result of evolution by natural selection.  From that it follows that it cannot possibly have a purpose or goal, and from that it follows that one cannot make “progress” towards fulfilling that nonexistent purpose or reaching that nonexistent goal.  Simple and obvious as it is, no truth has been harder for mankind to accept.

    The reason for this has to do with the nature of moral emotions themselves.  They portray Good and Evil to us as real things that exist independent of human consciousness, when in fact they are subjective artifacts of our imaginations.  That truth has always been hard for us to accept.  It is particularly hard when self-esteem is based on the illusion of moral superiority.  That illusion is obviously alive and well at a time when a large fraction of the population is capable of believing that another large fraction is “deplorable.”  The fact that the result of indulging such illusions in the past has not infrequently been mass murder suggests that, as a matter of public safety, it may be useful to stop indulging them.

    The “experts on ethics” delight in concocting chilling accounts of what will happen if we do stop indulging them.  We are told that a world without objective moral truths will be a world of moral nihilism and moral chaos.  The most obvious answer to such fantasies is, “So what?”  Is the truth really irrelevant?  Are we really expected to force ourselves to believe in lies because the truth is just too scary for us to face?  Come to think of it, what, exactly, do we have now if not moral nihilism and moral chaos?

    We live in a world in which every two-bit social justice warrior can invent some new “objective evil,” whether “cultural appropriation,” failure to memorize the 57 different flavors of gender, or some arcane “micro-aggression,” and work himself into a fine fit of virtuous indignation if no one takes him seriously.  The very illusion that Good and Evil are objective things is regularly exploited to justify the crude bullying that is now used to enforce new “moral laws” that have suddenly been concocted out of the ethical vacuum.  The unsuspecting owners of mom and pop bakeries wake up one morning to learn that they are now “deplorable,” and so “evil” that their business must be destroyed with a huge fine.

    We live in a world in which hundreds of millions believe that other hundreds of millions who associate the word “begotten” with the “son of God,” or believe in the Trinity, are so evil that they will certainly burn in hell forever.  These other hundreds of millions believe that heavenly bliss will be denied to anyone who doesn’t believe in a God with these attributes.

    We live in a world in which the regime in charge of the most powerful country in the world believes it has such a monopoly on the “objective Good” that it can ignore international law, send its troops to occupy parts of another sovereign state, and dictate to the internationally recognized government of that state which parts of its territory it is allowed to control, and which not.  It persists in this dubious method of defending the “Good” even though it risks launching a nuclear war in the process.  The citizens in that country who happen to support one candidate for President don’t merely consider the citizens who support the opposing candidate wrong.  They consider them objectively evil according to moral “laws” that apparently float about as insubstantial spirits, elevating themselves by their own bootstraps.

    We live in a world in which evolutionary biologists, geneticists, and neuroscientists who are perfectly well aware of the evolutionary roots of morality nevertheless persist in cobbling together new moral systems that lack even so much as the threadbare semblance of a legitimate basis.  The faux legitimacy that the old religions at least had the common decency to supply in the form of imaginary gods is thrown to the winds without a thought.  In spite of that these same scientists expect the rest of us to take them seriously when they announce that, at long last, they’ve discovered the philosopher’s stone of objective Good and Evil, whether in the form of some whimsical notion of “human flourishing,” or perhaps a slightly retouched version of utilitarianism.  In almost the same breath, they affirm the evolutionary basis of morality, and then proceed to denounce anyone who doesn’t conform to their newly minted moral “laws.”  When it comes to morality, it is hard to imagine a more nihilistic and chaotic world.

    I find it hard to believe that a world in which the subjective nature and rather humble evolutionary roots of all our exalted moral systems were commonly recognized, along with the obvious implications of these fundamental truths, could possibly be even more nihilistic and chaotic than the one we already live in.  I doubt that “moral relativity” would prevail in such a world, for the simple reason that it is not in our nature to be moral relativists.  We might even be able to come up with a set of “absolute” moral rules that would be obeyed, not because humanity had deluded itself into believing they were objectively true, but because of a common determination to punish free riders and cheaters.  We might even be able to come up with some rational process for changing and adjusting the rules when necessary by common consent, rather than by the current “enlightened” process of successful bullying.

    We would all be aware that even the most “exalted” and “noble” moral emotions, even those accompanied by stimulating music and rousing speeches, have a common origin: their tendency to improve the odds that the genes responsible for them would survive in a Pleistocene environment.  Under the circumstances, it would be reasonable to doubt, not only their ability to detect “objective Good” and “objective Evil,” but the wisdom of paying any attention to them at all.  Instead of swallowing the novel moral concoctions of pious charlatans without a murmur, we would begin to habitually greet them with the query, “Exactly what innate whim are you trying to satisfy?”  We would certainly be very familiar with the tendency of every one of us, described so eloquently by Jonathan Haidt in his The Righteous Mind, to begin rationalizing our moral emotions as soon as we experience them, whether in response to “social injustice” or a rude driver who happened to cut us off on the way to work.  We would realize that that very tendency also exists by virtue of evolution by natural selection, not because it is actually capable of unmasking social injustice, or distinguishing “evil” from “good” drivers, but merely because it improved our chances of survival when there were no cars, and no one had ever heard of such a thing as social justice.

    I know, I’m starting to ramble.  I’m imagining a utopia, but one can always dream.

  • G. E. Moore Contra Edvard Westermarck

    Posted on November 10th, 2015 Helian

    Many pre-Darwinian philosophers realized that the source of human morality was to be found in innate “sentiments,” or “passions,” often speculating that they had been put there by God.  Hume put the theory on a more secular basis.  Darwin realized that the “sentiments” were there because of natural selection, and that human morality was the result of their expression in creatures with large brains.  Edvard Westermarck, perhaps at the same time the greatest and the most unrecognized moral philosopher of them all, put it all together in a coherent theory of human morality, supported by copious evidence, in his The Origin and Development of the Moral Ideas.

    Westermarck is all but forgotten today, probably because his insights were so unpalatable to the various academic and professional tribes of “experts on ethics.”  They realized that, if Westermarck were right, and morality really is just the expression of evolved behavioral predispositions, they would all be out of a job.  Under the circumstances, it’s interesting that his name keeps surfacing in modern works about evolved morality, innate behavior, and evolutionary psychology.  For example, I ran across a mention of him in famous primatologist Frans de Waal’s latest book, The Bonobo and the Atheist.  People like de Waal who know something about the evolved roots of behavior are usually quick to recognize the significance of Westermarck’s work.

    Be that as it may, G. E. Moore, the subject of my last post, holds a far more respected place in the pantheon of moral philosophers.  That’s to be expected, of course.  He never suggested anything as disconcerting as the claim that all the mountains of books and papers they had composed over the centuries might as well have been written about the nature of unicorns.  True, he did insist that everyone who had written about the subject of morality before him was delusional, having fallen for the naturalistic fallacy, but at least he didn’t claim that the subject they were writing about was a chimera.

    Most of what I wrote about in my last post came from the pages of Moore’s Principia Ethica.  That work was published in 1903.  Nine years later he published another little book, entitled Ethics.  As it happens, Westermarck’s Origin appeared between those two dates, in 1906.  In all likelihood, Moore read Westermarck, because parts of Ethics appear to be direct responses to his book.  Moore had only a vague understanding of Darwin, and the implications of his work on the subject of human behavior.  He did, however, understand Westermarck when he wrote in the Origin,

    If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.  It has been said by Bentham and others that moral principles cannot be proved because they are first principles which are used to prove everything else.  But the real reason for their being inaccessible to demonstration is that, owing to their very nature, they can never be true.  If the word “Ethics,” then, is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.

    Now that got Moore’s attention.  Responding to Westermarck’s theory, or something very like it, he wrote:

    Even apart from the fact that they lead to the conclusion that one and the same action is often both right and wrong, it is, I think, very important that we should realize, to begin with, that these views are false; because, if they were true, it would follow that we must take an entirely different view as to the whole nature of Ethics, so far as it is concerned with right and wrong, from what has commonly been taken by a majority of writers.  If these views were true, the whole business of Ethics, in this department, would merely consist in discovering what feelings and opinions men have actually had about different actions, and why they have had them.  A good many writers seem actually to have treated the subject as if this were all that it had to investigate.  And of course questions of this sort are not without interest, and are subjects of legitimate curiosity.  But such questions only form one special branch of Psychology or Anthropology; and most writers have certainly proceeded on the assumption that the special business of Ethics, and the questions which it has to try to answer, are something quite different from this.

    Indeed they have.  The question is whether they’ve actually been doing anything worthwhile in the process.  Note the claim that Westermarck’s views were “false.”  This claim was based on what Moore called a “proof” that it couldn’t be true that appeared in the preceding pages.  Unfortunately, this “proof” is transparently flimsy to anyone who isn’t inclined to swallow it because it defends the relevance of their “expertise.”  Quoting directly from his Ethics, it goes something like this:

    1.  It is absolutely impossible that any one single, absolutely particular action can ever be both right and wrong, either at the same time or at different times.
    2. If the whole of what we mean to assert, when we say that an action is right, is merely that we have a particular feeling towards it, then plainly, provided only we really have this feeling, the action must really be right.
    3. For if this is so, and if, when a man asserts an action to be right or wrong, he is always merely asserting that he himself has some particular feeling towards it, then it absolutely follows that one and the same action has sometimes been both right and wrong – right at one time and wrong at another, or both simultaneously.
    4. But if this is so, then the theory we are considering certainly is not true.  (QED)

    Note that this “proof” requires the positive assertion that it is possible to claim that an action can be right or wrong, in this case because of “feelings.”  A second, similar proof, also offered in Chapter III of Ethics, “proves” that an action can’t possibly be right merely because one “thinks” it right, either.  With that, Moore claims that he has “proved” that Westermarck, or someone with identical views, must be wrong.  The only problem with the “proof” is that Westermarck specifically pointed out in the passage quoted above that it is impossible to make truth claims about “moral principles.”  Therefore, it is out of the question that he could ever be claiming that any action “is right,” or “is wrong,” because of “feelings” or for any other reason.  In other words, Moore’s “proof” is nonsense.

    The fact that Moore was responding specifically to evolutionary claims about morality is also evident in the same Chapter of Ethics.  Allow me to quote him at length.

    …it is supposed that there was a time, if we go far enough back, when our ancestors did have different feelings towards different actions, being, for instance, pleased with some and displeased with others, but when they did not, as yet, judge any actions to be right or wrong; and that it was only because they transmitted these feelings, more or less modified, to their descendants, that those descendants at some later stage, began to make judgments of right and wrong; so that, in a sense, our moral judgments were developed out of mere feelings.  And I can see no objection to the supposition that this was so.  But, then, it seems also to be supposed that, if our moral judgments were developed out of feelings – if this was their origin – they must still at this moment be somehow concerned with feelings; that the developed product must resemble the germ out of which it was developed in this particular respect.  And this is an assumption for which there is, surely, no shadow of ground.

    In fact, there was a “shadow of ground” when Moore wrote those words, and the “shadow” has grown a great deal longer in our own day.  Moore continues,

    Thus, even those who hold that our moral judgments are merely judgments about feelings must admit that, at some point in the history of the human race, men, or their ancestors, began not merely to have feelings but to judge that they had them:  and this alone means an enormous change.

    Why was this such an “enormous change?”  Why, of course, because as soon as our ancestors judged that they had feelings, then, suddenly those feelings could no longer be a basis for morality, because of the “proof” given above.  Moore concludes triumphantly,

    And hence, the theory that moral judgments originated in feelings does not, in fact, lend any support at all to the theory that now, as developed, they can only be judgments about feelings.

    If Moore’s reputation among them is any guide, such “ironclad logic” is still taken seriously by today’s crop of “experts on ethics.”  Perhaps it’s time they started paying more attention to Westermarck.

  • The Moral Philosophy of G. E. Moore, or Why You Don’t Need to Bother with Aristotle, Hegel, and Kant

    Posted on November 7th, 2015 Helian

    G. E. Moore isn’t exactly a household name these days, except perhaps among philosophers.  You may have heard of his most famous concoction, though – the “naturalistic fallacy.”  If we are to believe Moore, not only Aristotle, Hegel and Kant, but virtually every other philosopher you’ve ever heard of got morality all wrong because of it.  He was the first one who ever got it right.  On top of that, his books are quite thin, and he writes in the vernacular.  When you think about it, he did us all a huge favor.  Assuming he’s right, you won’t have to struggle with Kant, whose sentences can run on for a page and a half before you finally get to the verb at the end, and who is comprehensible, even to Germans, only in English translation.  You won’t have to agonize over the correct interpretation of Hegel’s dialectic.  Moore has done all that for you.  Buy his books, which are little more than pamphlets, and you’ll be able to toss out all those thick tomes and learn all the moral philosophy you will ever need in a week or two.

    Or at least you will if Moore got it right.  It all hinges on his notion of the “Good-in-itself.”  He claims it’s something like what philosophers call qualia.  Qualia are the content of our subjective experiences, like colors, smells, pain, etc.  They can’t really be defined, but only experienced.  Consider, for example, the difficulty of explaining “red” to a blind person.  Moore’s description of the Good is even more vague.  As he puts it in his rather pretentiously named Principia Ethica,

    Let us, then, consider this position.  My point is that ‘good’ is a simple notion, just as ‘yellow’ is a simple notion; that, just as you cannot, by any manner of means, explain to any one who does not already know it, what yellow is, so you cannot explain what good is.

    In other words, you can’t even define good.  If that isn’t slippery enough for you, try this:

    They (metaphysicians) have always been much occupied, not only with that other class of natural objects which consists in mental facts, but also with the class of objects or properties of objects, which certainly do not exist in time, are not therefore parts of Nature, and which, in fact, do not exist at all.  To this class, as I have said, belongs what we mean by the adjective “good.” …What is meant by good?  This first question I have already attempted to answer.  The peculiar predicate, by reference to which the sphere of Ethics must be defined, is simple, unanalyzable, indefinable.

    Or, as he puts it elsewhere, the Good doesn’t exist.  It just is.  Which brings us to the naturalistic fallacy.  If, as Moore claims, Good doesn’t exist as a natural, or even a metaphysical, object, it can’t be defined with reference to such an object.  Attempts to so define it are what he refers to as the naturalistic fallacy.  That, in his opinion, is why every other moral philosopher in history, or at least all those whose names happen to turn up in his books, has been wrong except him.  The fallacy is defined at Wiki and elsewhere on the web, but the best way to grasp what he means is to read his books.  For example,

    The naturalistic fallacy always implies that when we think “This is good,” what we are thinking is that the thing in question bears a definite relation to some one other thing.

    That fallacy, I explained, consists in the contention that good means nothing but some simple or complex notion, that can be defined in terms of natural qualities.

    To hold that from any proposition asserting “Reality is of this nature” we can infer, or obtain confirmation for, any proposition asserting “This is good in itself” is to commit the naturalistic fallacy.

    In short, all the head scratching of all the philosophers over thousands of years about the question of what is Good has been so much wasted effort.  Certainly, the average layman had no chance at all of understanding the subject, or at least he didn’t until the fortuitous appearance of Moore on the scene.  He didn’t show up a moment too soon, either, because, as he explains in his books, we all have “duties.”  It turns out that not only did the intuition “Good” pop up in his consciousness, more or less after the fashion of “yellow,” or the smell of a rose.  He also “intuited” that it came fully equipped with the power to dictate to other individuals what they ought and ought not to do.  Again, I’ll allow the philosopher to explain.

    Our “duty,” therefore, can only be defined as that action, which will cause more good to exist in the Universe than any possible alternative… When, therefore, Ethics presumes to assert that certain ways of acting are “duties” it presumes to assert that to act in those ways will always produce the greatest possible sum of good.

    But how on earth can we ever even begin to do our duty if we have no clue what Good is?  Well, Moore is actually quite coy about explaining it to us, and rightly so, as it turns out.  When he finally takes a stab at it in Chapter VI of Principia, it turns out to be paltry enough.  Basically, it’s the same “pleasure,” or “happiness” that many other philosophers have suggested, only it’s not described in such simple terms.  It must be part of what Moore describes as an “organic whole,” consisting not only of pleasure itself, for example, but also a consciousness capable of experiencing the pleasure, the requisite level of taste to really appreciate it, the emotional equipment necessary to react with the appropriate level of awe, etc.  Silly old philosophers!  They rashly assumed that, if the Good were defined as “pleasure,” it would occur to their readers that they would have to be conscious in order to experience it without them spelling it out.  Little did they suspect the coming of G. E. Moore and his naturalistic fallacy.

    When he finally gets around to explaining it to us, we gather that Moore’s Good is more or less what you’d expect the intuition of Good to be in a well-bred English gentleman endowed with “good taste” around the turn of the 20th century.  His Good turns out to include nice scenery, pleasant music, and chats with other “good” people.  Or, as he put it somewhat more expansively,

    We can imagine the case of a single person, enjoying throughout eternity the contemplation of scenery as beautiful, and intercourse with persons as admirable, as can be imagined.

    and

    By far the most valuable things which we know or can imagine, are certain states of consciousness, which may be roughly described as the pleasures of human intercourse and the enjoyment of beautiful objects.  No one, probably, who has asked himself the question, has ever doubted that personal affection and the appreciation of what is beautiful in Art or Nature, are good in themselves.

    Really?  No one?  One can only surmise that Moore’s circle of acquaintance must have been quite limited.  Unsurprisingly, Beethoven’s Fifth is in the mix, but only, of course, as part of an “organic whole.”  As Moore puts it,

    What value should we attribute to the proper emotion excited by hearing Beethoven’s Fifth Symphony, if that emotion were entirely unaccompanied by any consciousness, either of the notes, or of the melodic and harmonic relations between them?

    It would seem, then, that even if you’re such a coarse person that you can’t appreciate Beethoven’s Fifth yourself, it is still your “duty” to make sure that it’s right there on everyone else’s smart phone.

    Imagine, if you will, Mother Nature sitting down with Moore, holding his hand, looking directly into his eyes, and revealing to him in all its majesty the evolution of life on this planet, starting from the simplest one-celled creatures more than four billion years ago and proceeding through ever more complex forms to the almost incredible emergence of a highly intelligent and highly social species known as Homo sapiens.  It all happened, she explains to him with a look of triumph on her face, because, over all those four billion years, the creatures that made up the links of the chain of life survived and reproduced, leaving the chain unbroken.  Then, with a serious expression on her face, she asks him, “Now do you understand the reason for the existence of moral emotions?”  “Of course,” answers Moore, “they’re there so I can enjoy nice landscapes and pretty music.”  (Loud forehead slap)  Mother Nature stands up and walks away shaking her head, consoling herself with the thought that some more advanced species might “get it” after another million years or so of natural selection.

    And what of Aristotle, Hegel and Kant?  Throw out your philosophy books and forget about them.  Imagine being so dense as to commit the naturalistic fallacy!


  • …And One More Thing about James Burnham: On Human Nature

    Posted on October 17th, 2015 Helian 4 comments

    There’s another thing about James Burnham’s Suicide of the West that’s quite fascinating: his take on human nature.  In fact, Chapter III is entitled “Human Nature and the Good Society.”  Here are a few excerpts from that chapter:

    However varied may be the combination of beliefs that it is psychologically possible for an individual liberal to hold, it remains true that liberalism is logically committed to a doctrine along the lines that I have sketched:  viewing human nature as not fixed but plastic and changing; with no pre-set limit to potential development; with no innate obstacle to the realization of a society of peace, freedom, justice and well-being.  Unless these things are true of human nature, the liberal doctrine and program for government, education, reform and so on are an absurdity.

    But in the face of what man has done and does, it is only an ideologue obsessed with his own abstractions who can continue to cling to the vision of an innately uncorrupt, rational and benignly plastic human nature possessed of an unlimited potential for realizing the good society.

    Quite true, which makes it all the more remarkable that virtually all the “scientists” in the behavioral “sciences” at the time Burnham wrote these lines were “clinging to that vision,” at least in the United States.  See, for example, The Triumph of Evolution, in which author Hamilton Cravens documents the fact.  Burnham continues,

    No, we must repeat:  if human nature is scored by innate defects, if the optimistic account of man is unjustified, then is all the liberal faith in vain.

    Here we get a glimpse of the reason the Blank Slaters insisted so fanatically, for so many years and in defiance of all reason, that there is no such thing as human nature, at least as commonly understood, even though any 10-year-old could have told them their anthropological theories were ludicrous.  The truth stood in the way of their ideology.  Therefore, the truth had to yield.

    All this raises the question of how, as early as 1964, Burnham came up with such a “modern” understanding of the Blank Slate.  Reading on in the chapter, we find some passages that are even more intriguing.  Have a look at this:

    It is not merely the record of history that speaks in unmistakable refutation of the liberal doctrine of man.  Ironically enough – ironically, because it is liberalism that has maintained so exaggerated a faith in science – almost all modern scientific studies of man’s nature unite in giving evidence against the liberal view of man as a creature motivated, once ignorance is dispelled, by the rational search for peace, freedom and plenty.  Every modern school of biology and psychology and most schools of sociology and anthropology conclude that men are driven chiefly by profound non-rational, often anti-rational, sentiments and impulses, whose character and very existence are not ordinarily understood by conscious reason.  Many of these drives are aggressive, disruptive, and injurious to others and to society.

    !!!

    The bolding and italics are mine.  How on earth did Burnham come up with such ideas?  By all means, dear reader, head for your local university library, fish out the ancient microfiche, and search through the scientific and professional journals of the time yourself.  Almost without exception, the Blank Slate called the tune.  Clearly, Burnham didn’t get the notion that “almost all modern scientific studies of man’s nature” contradicted the Blank Slate from actually reading the literature himself.  Where, then, did he get it?  Only Burnham and the wild goose know, and Burnham’s dead, but my money is on Robert Ardrey.  True, Konrad Lorenz’s On Aggression was published in German in 1963, but it didn’t appear in English until 1966.  The only other really influential popular science book published before Suicide of the West that suggested anything like what Burnham wrote in the above passage was Ardrey’s African Genesis, published in 1961.

    What’s that you say?  I’m dreaming?  No one of any significance ever challenged the Blank Slate orthodoxy until E. O. Wilson’s stunning and amazing publication of Sociobiology in 1975?  I know, it must be true, because it’s all right there in Wikipedia.  As George Orwell put it, “Who controls the present controls the past.”

  • James Burnham and the Anthropology of Liberalism

    Posted on October 16th, 2015 Helian 2 comments

    James Burnham was an interesting anthropological data point in his own right.  A left-wing activist in the ’30s, he eventually became a Trotskyite.  By the ’50s, however, he had completed an ideological double back flip to conservatism, and he became a Roman Catholic convert on his deathbed.  He was an extremely well-read intellectual and a keen observer of political behavior.  His best-known book is The Managerial Revolution, published in 1941.  Among others, it strongly influenced George Orwell, who had something of a love/hate relationship with Burnham.  For example, in an essay in Tribune magazine in January 1944 he wrote,

    Recently, turning up a back number of Horizon, I came upon a long article on James Burnham’s Managerial Revolution, in which Burnham’s main thesis was accepted almost without examination.  It represented, many people would have claimed, the most intelligent forecast of our time.  And yet – founded as it was on a belief in the invincibility of the German army – events have already blown it to pieces.

    A bit over a year later, in February 1945, however, we find Burnham had made more of an impression on Orwell than the first quote implies.  In another essay in the Tribune he wrote,

    …by the way the world is actually shaping, it may be that war will become permanent.  Already, quite visibly and more or less with the acquiescence of all of us, the world is splitting up into the two or three huge super-states forecast in James Burnham’s Managerial Revolution.  One cannot draw their exact boundaries as yet, but one can see more or less what areas they will comprise.  And if the world does settle down into this pattern, it is likely that these vast states will be permanently at war with one another, although it will not necessarily be a very intensive or bloody kind of war.

    Of course, these super-states later made their appearance in Orwell’s most famous novel, 1984.  However, he was right about Burnham the first time.  He had an unfortunate penchant for making wrong predictions, often based on the assumption that transitory events must represent a trend that would continue into the indefinite future.  For example, impressed by the massive industrial might brought to bear by the United States during World War II, and its monopoly of atomic weapons, he suggested in The Struggle for the World, published in 1947, that we immediately proceed to force the Soviet Union to its knees, and establish a Pax Americana.  A bit later, in 1949, impressed by a hardening of the U.S. attitude towards the Soviet Union after the war, he announced The Coming Defeat of Communism in a book of that name.  He probably should have left it at that, but reversed his prognosis in Suicide of the West, which appeared in 1964.  By that time it seemed to Burnham that the United States had become so soft on Communism that the defeat of Western civilization was almost inevitable.  The policy of containment could only delay, but not stop the spread of Communism, and in 1964 it seemed that once a state had fallen behind the Iron Curtain it could never throw off the yoke.

    Burnham didn’t realize that, in the struggle with Communism, time was actually on our side.  A more far-sighted prophet, a Scotsman by the name of Sir James Mackintosh, had predicted in the early 19th century that the nascent versions of Communism then already making their appearance would eventually collapse.  He saw that the Achilles heel of what he recognized was really a secular religion was its ill-advised proclamation of a coming paradise on earth, where it could be fact-checked, instead of in the spiritual realms of the traditional religions, where it couldn’t.  In the end, he was right.  After they had broken 100 million eggs, people finally noticed that the Communists hadn’t produced an omelet after all, and the whole, seemingly impregnable edifice collapsed.

    One thing Burnham did see very clearly, however, was the source of the West’s weakness – liberalism.  He was well aware of its demoralizing influence, and its tendency to collaborate with the forces that sought to destroy the civilization that had given birth to it.  Inspired by what he saw as an existential threat, he carefully studied and analyzed the type of the western liberal, and its evolution away from the earlier “liberalism” of the 19th century.  Therein lies the real value of his Suicide of the West.  It still stands as one of the greatest analyses of modern liberalism ever written.  The basic characteristics of the type he described are as familiar more than half a century later as they were in 1964.  And this time his predictions regarding the “adjustments” in liberal ideology that would take place as its power expanded were spot on.

    Burnham developed nineteen points of a “more or less systematic set of ideas, theories and beliefs about society” characteristic of the liberal syndrome in Chapters III-V of the book, and then listed them, along with possible contrary beliefs, in Chapter VII.  Some of them have changed very little since Burnham’s day, such as,

    It is society – through its bad institutions and its failure to eliminate ignorance – that is responsible for social evils.  Our attitude toward those who embody these evils – of crime, delinquency, war, hunger, unemployment, communism, urban blight – should not be retributive but rather the permissive, rehabilitating, education approach of social service; and our main concern should be the elimination of the social conditions that are the source of the evils.

    Since there are no differences among human beings considered in their political capacity as the foundation of legitimate, that is democratic, government, the ideal state will include all human beings, and the ideal government is world government.

    The goal of political and social life is secular:  to increase the material and functional well-being of humanity.

    Some of the 19 have begun to change quite noticeably since the publication of Suicide of the West in just the ways Burnham suggested.  For example, items 9 and 10 on the list reflect a classic version of the ideology that would have been familiar to and embraced by “old school” liberals like John Stuart Mill:

    Education must be thought of as a universal dialogue in which all teachers and students above elementary levels may express their opinions with complete academic freedom.

    Politics must be thought of as a universal dialogue in which all persons may express their opinions, whatever they may be, with complete freedom.

    Burnham had already noticed signs of erosion in these particular shibboleths in his own day, as liberals gained increasing control of academia and the media.  As he put it,

    In both Britain and the United States, liberals began in 1962 to develop the doctrine that words which are “inherently offensive,” as far-Right but not communist words seem to be, do not come under the free speech mantle.

    In our own day of academic safe spaces and trigger warnings, there is certainly no longer anything subtle about this ideological shift.  Calls for suppression of “offensive” speech have now become so brazen that they have spawned divisions within the liberal camp itself.  One finds old school liberals of the Berkeley “Free Speech Movement” days resisting Gleichschaltung with the new regime, looking on with dismay as speaker after speaker is barred from university campuses for suspected thought crime.

    As noted above, Communism imploded before it could overwhelm the Western democracies, but the process of decay goes on.  Nothing about the helplessness of Europe in the face of the current inundation by third world refugees would have surprised Burnham in the least.  He predicted it as an inevitable expression of another fundamental characteristic of the ideology – liberal guilt.  Burnham devoted Chapter 10 of his book to the subject, and noted therein,

    Along one perspective, liberalism’s reformist, egalitarian, anti-discrimination, peace-seeking principles are, or at any rate can be interpreted as, the verbally elaborated projections of the liberal sense of guilt.

    and

    The guilt of the liberal causes him to feel obligated to try to do something about any and every social problem, to cure every social evil.  This feeling, too, is non-rational:  the liberal must try to cure the evil even if he has no knowledge of the suitable medicine or, for that matter, of the nature of the disease; he must do something about the social problem even when there is no objective reason to believe that what he does can solve the problem – when, in fact, it may well aggravate the problem instead of solving it.

    I suspect Burnham himself would have been surprised at the degree to which such “social problems” have multiplied in the last half a century, and the pressure to do something about them has only increased in the meantime.  As for the European refugees, consider the following corollaries of liberal guilt as developed in Suicide of the West:

    (The liberal) will not feel uneasy, certainly not indignant, when, sitting in conference or conversation with citizens of countries other than his own – writers or scientists or aspiring politicians, perhaps – they rake his country and his civilization fore and aft with bitter words; he is as likely to join with them in the criticism as to protest it.

    It follows that,

    …the ideology of modern liberalism – its theory of human nature, its rationalism, its doctrines of free speech, democracy and equality – leads to a weakening of attachment to groups less inclusive than Mankind.

    All modern liberals agree that government has a positive duty to make sure that the citizens have jobs, food, clothing, housing, education, medical care, security against sickness, unemployment and old age; and that these should be ever more abundantly provided.  In fact, a government’s duty in these respects, if sufficient resources are at its disposition, is not only to its own citizens but to all humanity.

    …under modern circumstances there is a multiplicity of interests besides those of our own nation and culture that must be taken into account, but an active internationalism in feeling as well as thought, for which “fellow citizens” tend to merge into “humanity,” sovereignty is judged an outmoded conception, my religion or no-religion appears as a parochial variant of the “universal ideas common to mankind,” and the “survival of mankind” becomes more crucial than the survival of my country and my civilization.

    For Western civilization in the present condition of the world, the most important practical consequence of the guilt encysted in the liberal ideology and psyche is this:  that the liberal, and the group, nation or civilization infected by liberal doctrine and values, are morally disarmed before those whom the liberal regards as less well off than himself.

    The inevitable implication of the above is that the borders of the United States and Europe must become meaningless in an age of liberal hegemony, as, indeed, they have.  In 1964 Burnham was not without hope that the disease was curable.  Otherwise, of course, he would never have written Suicide of the West.  He concluded,

    But of course the final collapse of the West is not yet inevitable; the report of its death would be premature.  If a decisive change comes, if the contraction of the past fifty years should cease and be reversed, then the ideology of liberalism, deprived of its primary function, will fade away, like those feverish dreams of the ill man who, passing the crisis of his disease, finds he is not dying after all.  There are a few small signs, here and there, that liberalism may already have started fading.  Perhaps this book is one of them.

    No, liberalism hasn’t faded.  The infection has only become more acute.  At best one might say that there are now a few more people in the West who are aware of the disease.  I am not optimistic about the future of Western civilization, but I am not foolhardy enough to predict historical outcomes.  Perhaps the fever will break, and we will recover, and perhaps not.  Perhaps there will be a violent crisis tomorrow, or perhaps the process of dissolution will drag itself out for centuries.  Objectively speaking, there is no “good” outcome and no “bad” outcome.  However, in the same vein, there is no objective reason why we must refrain from fighting for the survival of our civilization, our culture, or even the ethnic group to which we belong.

    As for the liberals, perhaps they should consider why all the fine moral emotions they are so proud to wear on their sleeves exist to begin with.  I doubt that the reason has anything to do with suicide.

    By all means, read the book.

  • Relics of the Blank Slate, as Excavated at “Ethics” Magazine

    Posted on October 4th, 2015 Helian No comments

    There’s a reason that the Blank Slaters clung so bitterly to their absurd orthodoxy for so many years.  If there is such a thing as human nature, then all the grandiose utopias they concocted for us over the years, from Communism on down, would vanish like so many mirages.  That orthodoxy collapsed when a man named Robert Ardrey made a laughing stock of the “men of science.”  In this enlightened age, one seldom finds an old school, hard core Blank Slater outside of the darkest, most obscure rat holes of academia.  Even PBS and Scientific American have thrown in the towel.  Still, one occasionally runs across “makeovers” of the old orthodoxy, in the guise of what one might call Blank Slate Lite.

    I recently discovered just such an artifact in the pages of Ethics magazine, which functions after a fashion as an asylum for “experts in ethics” who still cling to the illusion that they have anything relevant to say.  Entitled The Limits of Evolutionary Explanations of Morality and Their Implications for Moral Progress, it was written by Prof. Allen Buchanan of Duke and King’s College London, and Asst. Prof. Russell Powell of Boston University.  Unfortunately, it’s behind a pay wall, and is quite long, but if you’re the adventurous type you might be able to access it at a local university library.  In any case, the short version of the paper might be summarized as follows:

    Conservatives have traditionally claimed that “human nature places severe limitations on social and moral reform,” but have “offered little in the way of scientific evidence to support this claim.”  Now, however, a later breed of conservatives, known as “evoconservatives,” have “attempted to fill this empirical gap in the conservative argument by appealing to the prevailing evolutionary explanation of morality to show that it is unrealistic to think that cosmopolitan and other ‘inclusivist’ moral ideals can meaningfully be realized.”  However, while evolved psychology can’t be discounted in moral theory, and there is such a thing as human nature, it is so plastic and malleable that it doesn’t stand in the way of moral progress.

    This, at least, is the argument until one gets to the “Conclusion” section at the end.  Then, as if frightened by their own hubris, the authors make noises in a quite contradictory direction, writing, for example,

    …we acknowledge that evolved psychological capacities, interacting with particular social and institutional environments, can pose serious obstacles to using our rationality in ways that result in more inclusive moralities. For example, environments that mirror conditions of the EEA (environment of evolutionary adaptation, i.e., the environment in which moral behavioral predispositions presumably evolved, ed.)—such as those characterized by great physical insecurity, high parasite threat, severe intergroup competition for resources, and a lack of institutions for peaceful, mutually beneficial cooperation—will tend to be very unfriendly to the development of inclusivist morality.

    However, they conclude triumphantly with the following:

    At the same time, however, we have offered compelling reasons, both theoretical and empirical, to believe that human morality is only weakly constrained by human evolutionary history, leaving the potential for substantial moral progress wide open. Our point is not that human beings have slipped the “leash” of evolution, but rather that the leash is far longer than evoconservatives and even many evolutionary psychologists have acknowledged—and no one is in a position at present to know just how elastic it will turn out to be.

    Students of the Blank Slate orthodoxy will see that all the main shibboleths are still there, if in somewhat attenuated form.  The Blank Slate itself is replaced by a “long leash.”  The “genetic determinist” strawman of the Blank Slaters is replaced by “evoconservatives.”  These evoconservatives are no longer “fascists and racists,” but merely a nuisance standing in the way of “moral progress.”  The overriding goal is no longer anything like the Marxist paradise on earth, but the somewhat less inspiring continued “development of inclusivist morality.”

    Readers of this blog should immediately notice the unwarranted assumption that there actually is such a thing as “moral progress.”  In that case, there must be a goal towards which morality is progressing.  Natural selection occurs without any such goal or purpose.  It follows that the authors believe that there must be some “mysterious, transcendental” origin other than natural evolution to account for this progress.  However, they insist they don’t believe in such a “mysterious, transcendental” source.  How, then, do they account for the existence of this “thing” they refer to as “moral progress?”  What the authors are really referring to when they refer to this “moral progress” is “the way we and other good liberals want things.”

    By “inclusivist” moralities, the authors mean versions that can be expanded to include very large subsets of the human population that are neither kin to the bearers of that morality nor members of any identifiable group that is likely to reciprocate their good deeds.  Presumably the ultimate goal is to expand these subsets to “include” all mankind.  The “evoconservatives” we are told, deny the possibility of such “inclusivism” in spite of the fact that one can cite many obvious examples to the contrary.  At this point, one begins to wonder who these obtuse evoconservatives really are.  The authors are quite coy about identifying them.  The footnote following their first mention merely points to a blurb about what the authors will discuss later in the text.  No names are named.  Much later in the text Jonathan Haidt is finally identified as one of the evoconservatives.  As the authors put it,

    Leading psychologist Jonathan Haidt, who has stressed the moral psychological significance of in-group loyalty, expresses a related view: ‘It would be nice to believe that we humans were designed to love everyone unconditionally. Nice, but rather unlikely from an evolutionary perspective. Parochial love—love within groups—amplified by similarity, a sense of shared fate, and the suppression of free riders, may be the most we can accomplish.’

    In fact, as anyone who has actually read Haidt is aware, he neither believes that “inclusivist” moralities as defined by the authors are impossible, nor does this quote imply anything of the sort.  A genuine conservative would doubtless classify Haidt as a liberal, but he has defended, or at least tried to explain, conservative moralities.  Apparently that is sufficient to cast him into the outer darkness as an “evoconservative.”

    The authors also point the finger at Larry Arnhart.  Arnhart is neither a geneticist, nor an evolutionary biologist, nor an evolutionary psychologist, but a political scientist who apparently subscribes to some version of the naturalistic fallacy.  Nowhere is it demonstrated that he actually believes that the inclusivist versions of morality favored by the authors are impossible.  In a word, the few slim references to individuals who are supposed to fit the description of the evoconservative strawman concocted by the authors actually do nothing of the sort.  Yet in spite of the fact that the authors can’t actually name anyone who explicitly embraces their version of evoconservatism, they describe the existence of “inclusivist morality” as a “major flaw in evoconservative arguments.”

    A bit later, the authors appear to drop their evoconservative strawman, and expand their field of fire to include anyone who claims that “inclusivist morality” could have resulted from natural selection.  For example, quoting from the article:

    The key point is that none of these inclusivist features of contemporary morality are plausibly explained in standard selectionist terms, that is, as adaptations or predictable expressions of adaptive features that arose in the environment of evolutionary adaptation (EEA).

    Here, “evoconservatives” have been replaced by “standard selectionists.”  Invariably, the authors walk back such seemingly undistilled statements of Blank Slate ideology with assurances that no one believes more firmly than they in the evolutionary roots of morality.  That, of course, raises the question of how “these inclusivist features,” if they are not explainable in “standard selectionist terms,” are plausibly explained in “non-standard selectionist terms,” and who these “non-standard selectionists” actually are.  Apparently the only alternative is that the “inclusivist features” have a “transcendental” explanation, not further elaborated by the authors.  This conclusion is not as far fetched as it seems.  Interestingly enough, the authors’ work is partially funded by the Templeton Foundation, an accommodationist outfit with the ostensible goal of proving that religion and science are not mutually exclusive.

    In fact, I know of not a single scientist whose specialty is germane to the subject of human morality who would dispute the existence of inclusive moralities.  The authors limit themselves to statements to the effect that the work of such and such a person “suggests” that they don’t believe in inclusive moralities, or that the work of some other person “implies” that they don’t believe such moralities are stable.  Wouldn’t it be more reasonable to simply go and ask these people what they actually believe regarding these matters, instead of putting words in their mouths?

    Left out of all these glowing descriptions of inclusive moralities is the fact that not a single one of them exists without an outgroup.  That fact is demonstrated by the authors themselves, whose outgroup obviously includes those they identify as “evoconservatives.”  One might also point out that those who have “inclusive” ingroups commonly have “inclusive” outgroups as well, and liberals are commonly found among the most violent outgroup haters on the planet.  To confirm this, one need only look at the comments at the websites of Daily Kos, or Talking Points Memo, or the Nation, or any other familiar liberal watering hole.

    While I’m somewhat dubious about all the authors’ loose talk about “moral progress,” I think we can at least identify some real progress towards getting at the truth in their version of Blank Slate Lite.  After all, it’s a far cry from the old school version.  Throughout the article the authors question the ability of natural selection, in the environment in which moral behavior presumably evolved in early humans, to account for this or that feature of their observed “inclusive morality.”  As noted above, however, as often as they do so, they are effusive in assuring the reader that by no means do they wish to imply that they find any fault whatsoever with innate theories of human morality.  In the end, what more can one ask than the ability to continue seeking the truth about human moral behavior in every relevant area of science without fear of being denounced and intimidated as guilty of one type of villainy or another?  That ability seems more assured if the existence of innate behavior is at least admitted, and is therefore unlikely to be criminalized as it was in the heyday of the Blank Slate.  In that respect, Blank Slate Lite really does represent progress.

    Of course, there remains the question of why so many of us still take seriously the authors’ fantasies about “moral progress” more than a century after Westermarck pointed out the absurdity of truth claims about morality.  I suspect the answer lies in the fact that ending the charade would reduce all the pontifications of all the “experts in morality” catered to by learned journals like Ethics to gibberish.  Experts don’t like to be confronted with the truth that their painstakingly acquired expertise is irrelevant.  Admitting it would make it a great deal harder to secure grants from the Templeton Foundation.

    UPDATE:  I failed to mention another intriguing paragraph in the paper that reads as follows:

    The human capacity to reflect on and revise our conceptions of duty and moral standing can give us reasons here and now to expand our capacities for moral behavior by developing institutions that economize on sympathy and enhance our ability to take the interests of strangers into account. This same capacity may also give us reasons, in the not-too-distant future, to modify our evolved psychology through the employment of biomedical interventions that enable us to implement new norms that we develop as a result of the process of reflection. In both cases, the limits of our evolved motivational capacities do not translate into a comparable constraint on our capacity for moral action. The fact that we are not currently motivationally capable of acting on the considered moral norms we have come to endorse is not a reason to trim back those norms; it is a reason to enhance our motivational capacity, either through institutional or biomedical means, so that it matches the demands of our considered morality.

Note the wording about modifying “our evolved psychology through the employment of biomedical interventions.”  I’m not sure what to make of it, dear reader, but it appears that, one way or another, the authors intend to “get our minds right.”

  • Panksepp, Animal Rights, and the Blank Slate

    Posted on August 23rd, 2015 Helian 4 comments

    So who is Jaak Panksepp?  Have a look at his YouTube talk on emotions at the bottom of this post, for starters.  A commenter recommended him, and I discovered the advice was well worth taking.  Panksepp’s The Archaeology of Mind, which he co-authored with Lucy Biven, was a revelation to me.  The book describes a set of basic emotional systems that exist in all, or virtually all, mammals, including humans.  In the words of the authors:

    …the ancient subcortical regions of mammalian brains contain at least seven emotional, or affective, systems:  SEEKING (expectancy), FEAR (anxiety), RAGE (anger), LUST (sexual excitement), CARE (nurturance), PANIC/GRIEF (sadness), and PLAY (social joy).  Each of these systems controls distinct but specific types of behaviors associated with many overlapping physiological changes.

    This is not just another laundry list of “instincts” of the type often proposed by psychologists at the end of the 19th and the beginning of the 20th centuries.  Panksepp is a neuroscientist, and has verified experimentally the unique signatures of these emotional systems in the ancient regions of the brain shared by humans and other mammals.  Again quoting from the book,

    As far as we know right now, primal emotional systems are made up of neuroanatomies and neurochemistries that are remarkably similar across all mammalian species.  This suggests that these systems evolved a very long time ago and that at a basic emotional and motivational level, all mammals are more similar than they are different.  Deep in the ancient affective recesses of our brains, we remain evolutionarily kin.

    If you are an astute student of the Blank Slate phenomenon, dear reader, no doubt you are already aware of the heretical nature of this passage.  That’s right!  The Blank Slaters were prone to instantly condemn any suggestion that there were similarities between humans and other animals as “anthropomorphism.”  In fact, if you read the book you will find that their reaction to Panksepp and others doing similar research has been every bit as allergic as their reaction to anyone suggesting the existence of human nature.  However, in the field of animal behavior, they are anything but a quaint artifact of the past.  Diehard disciples of the behaviorist John B. Watson and his latter day follower B. F. Skinner, Blank Slaters of the first water, still haunt the halls of academia in significant numbers, and still control the message in any number of “scientific” journals.  There they have been following their usual “scholarly” pursuit of ignoring and/or vilifying anyone who dares to disagree with them ever since the heyday of Ashley Montagu and Richard Lewontin.  In the process they have managed to suppress or distort a great deal of valuable research bearing directly on the wellsprings of human behavior.

We learn from the book that the Blank Slate orthodoxy has been as damaging for other animals as it has been for us.  Among other things, it has served as the justification for indifference to or denial of the feelings and consciousness of animals.  The possibility that this attitude has contributed to some rather gross instances of animal abuse has been drawing increasing attention from those who are concerned about their welfare.  See, for example, the website of Panksepp admirer Temple Grandin.  According to Panksepp & Biven,

    Another of Descartes’ big errors was the idea that animals are without consciousness, without experiences, because they lack the subtle nonmaterial stuff from which the human mind is made.  This notion lingers on today in the belief that animals do not think about nor even feel their emotional responses.

    Many emotion researchers as well as neuroscience colleagues make a sharp distinction between affect and emotion, seeing emotion as purely behavioral and physiological responses that are devoid of affective experience.  They see emotional arousal as merely a set of physiological responses that include emotion-associated behaviors and a variety of visceral (hormonal/autonomic) responses, without actually experiencing anything – many researchers believe that other animals may not feel their emotional arousals.  We disagree.

Some justify this rather counter-intuitive belief by suggesting that it is impossible to really experience or be conscious of emotions (affects) without language.  Panksepp & Biven’s response:

    Words cannot describe the experience of seeing the color red to someone who is blind.  Words do not describe affects either.  One cannot explain what it feels like to be angry, frightened, lustful, tender, lonely, playful, or excited, except indirectly in metaphors.  Words are only labels for affective experiences that we have all had – primary affective experiences that we universally recognize.  But because they are hidden in our minds, arising from ancient prelinguistic capacities of our brains, we have found no way to talk about them coherently.

With such excuses, and the fact that they could not “see” feelings and emotions in their experiments with “reinforcement” and “conditioning,” the behaviorists concluded that the feelings of the animals they were using in their experiments didn’t matter.  Such questions were outside the realm of “science.”  Again from the book,

Much as we admire the scientific finesses of these conditioning experiments, we part company with (Joseph) LeDoux and many of the others who conduct this kind of work when it comes to understanding what emotional feelings really are.  This is because they studiously ignore the feelings of their animals, and they often claim that the existence or nonexistence of the animals’ feelings is a nonscientific issue (although there are some signs of changing sentiments on these momentous issues).  In any event…, LeDoux has specifically endorsed the read-out theory – to the effect that affects are created by neocortical working-memory functions, uniquely expanded in human brains.  In other words, he sees affects as a higher-order cognitive construct (perhaps only elaborated in humans), and thereby he envisions the striking FEAR responses of his animals to be purely physiological effects with no experiential consequences.

    …And when we analyze the punishing properties of electrical stimulation here in animals, we get the strongest aversive responses imaginable at the lowest levels of brain stimulation, and humans experience the most fearful states of mind imaginable.  Such issues of affective experience should haunt fear-conditioners much more than they apparently do.

    The evidence strongly indicates that there are primary-process emotional networks in the brain that help generate phenomenal affective experiences in all mammals, and perhaps in many other vertebrates and invertebrates.

It’s stunning, really.  Anyone who has ever owned a dog is aware of how similar their emotional responses can often be to those of humans, and how well they remember them.  Like humans, they are mammals.  Like humans, their brains include a cortex.  It would hardly be “parsimonious” to simply assume that humans represent some kind of a radical departure when it comes to the ability to experience and remember emotions, and that other animals lack this ability, in defiance of centuries of “common sense” observations suggesting that they can.  All this mass of evidence apparently isn’t “scientific,” and therefore doesn’t count, because these latter day Blank Slaters can’t observe in their mazes and shock boxes what appears obvious to everyone else in the world.  “Anthropomorphism!”  From such profound reasoning we are apparently to conclude that pain in animals doesn’t matter.

    Why the Blank Slate’s furious opposition to “anthropomorphism?”  In a sense, it’s actually an anachronism.  Recall that the fundamental dogma of the Blank Slate was the denial of human nature.  Obviously other mammals have a “nature.”  Clearly, the claim that dogs and cats must “learn” all their behavior from their “culture” was never going to fly.  Not so human beings.  Once upon a time the Blank Slaters claimed that everything in the human behavioral repertoire, with the possible exception of breathing, urinating, and defecating, was learned.  They even went so far as to include sex.  Even orgasms had to be “learned.”  It follows that the gulf between humans and animals had to be made as wide as possible.

    Fast forward to about the year 2000.  As far as their denial of human nature was concerned, the Blank Slaters had lost control of the popular media.  To an increasing extent, they were also losing control of the message in academia.  Books and articles about innate human behavior began pouring from the presses, and people began speaking of human nature as a given.  The Blank Slaters had lost that battle.  The main reason for their “anthropomorphism” phobia had disappeared.  In the more sequestered field of “animal nature,” however, they could carry on as if nothing had happened without making laughing stocks of themselves.  No one was paying any attention except a few animal rights activists.  And carry on they did, with the same “scientific” methods they had used in the past.  Allow me to quote from Panksepp & Biven again to give you a taste of what I’m talking about:

    It is noteworthy that Walter Hess, who first discovered the RAGE system in the cat brain in the mid-1930s (he won a Nobel Prize for his work in 1949), using localized stimulation of the hypothalamus, was among the first to suggest that the behavior was “sham rage.”  He confessed, however, in writings published after his retirement (as noted in Chapter 2:  e.g., The Biology of Mind [1964]), that he had always believed that the animals actually experienced true anger.  He admitted to having shared sentiments he did not himself believe.  Why?  He simply did not want to have his work marginalized by the then-dominant behaviorists who had no tolerance for talk about emotional experiences.  As a result, we still do not know much about how the RAGE system interacts with other cognitive and affective systems of the brain.

    In an earlier chapter on The Evolution of Affective Consciousness they added,

    In his retirement he admitted regrets about having been too timid, not true to his convictions, to claim that his animals had indeed felt real anger.  He confessed that he did this because he feared that such talk would lead to attacks by the powerful American behaviorists, who might thereby also marginalize his more concrete scientific discoveries.  To a modest extent, he tried to rectify his “mistake” in his last book, The Biology of Mind, but this work had little influence.

    So much for the “self-correcting” nature of science.  It is anything but that when poisoned by ideological dogmas.  Panksepp and Biven conclude,

    But now, thankfully, in our enlightened age, the ban has been lifted.  Or has it?  In fact, after the cognitive revolution of the early 1970s, the behaviorist bias has largely been retained but more implicitly by most, and it is still the prevailing view among many who study animal behavior.  It seems the educated public is not aware of that fact.  We hope the present book will change that and expose this residue of behaviorist fundamentalism for what it is:  an anachronism that only makes sense to people who have been schooled within a particular tradition, not something that makes any intrinsic sense in itself!  It is currently still blocking a rich discourse concerning the psychological, especially the affective, functions of animal brains and human minds.

    This passage is particularly interesting because it demonstrates, as can be seen from the passage about “the cognitive revolution of the early 1970s,” that the authors were perfectly well aware of the larger battle with the Blank Slate orthodoxy over human nature.  However, that rather opaque allusion is about as close as they came to referring to it in the book.  One can hardly blame them for deciding to fight one battle at a time.  There is one interesting connection that I will point out for the cognoscenti.  In Chapter 6, Beyond Instincts, they write,

    The genetically ingrained emotional systems of the brain reflect ancestral memories – adaptive affective functions of such universal importance for survival that they were built into the brain, rather than having to be learned afresh by each generation of individuals.  These genetically ingrained memories (instincts) serve as a solid platform for further developments in the emergence of both learning and higher-order reflective consciousness.

    Compare this with a passage from the work of the brilliant South African naturalist Eugene Marais, which appeared in his The Soul of the Ape, written well before his death in 1936, but only published in 1969:

    …it would be convenient to speak of instinct as phyletic memory.  There are many analogies between memory and instinct, and although these may not extend to fundamentals, they are still of such a nature that the term phyletic memory will always convey a clear understanding of the most characteristic attributes of instinct.

    As it happens, the very charming and insightful introduction to The Soul of the Ape when it was finally published in 1969 was written by none other than Robert Ardrey!  He had an uncanny ability to find and appreciate the significance of the work of brilliant but little-known researchers like Marais.

As for Panksepp, I can only apologize for taking so long to discover him.  If nothing else, his work and teachings reveal that this is no time for complacency.  True, the Blank Slaters have been staggered, but they haven’t been defeated quite yet.  They’ve merely abandoned the battlefield and retreated to what would seem to be their last citadel: the field of animal behavior.  Unfortunately there is no Robert Ardrey around to pitch them headlong out of that last refuge, but they face a different challenge now.  They can no longer pretend to hold the moral high ground.  Their denial that animals can experience and remember their emotions in the same way as humans leaves the door wide open for the abuse of animals, both inside and outside the laboratory.  It is to be hoped that more animal rights activists like Temple Grandin will start paying attention.  I may not agree with them about eating red meat, but the maltreatment of animals, justified by reference to a bogus ideological dogma, is something that can definitely excite my own RAGE emotions.  I will have no problem standing shoulder to shoulder with them in this fight.

  • Indulge Yourself – Believe in Free Will

    Posted on May 24th, 2015 Helian 1 comment

    Philosophers have been masticating the question of free will for many centuries.  The net result of their efforts has been a dizzying array of different “flavors” of free will or the lack thereof.  I invite anyone with the patience to attempt disentangling the various permutations and combinations thereof to start with the Wiki page, and take it from there.   For the purpose of this post I will simply define free will as the ability to make choices that are not predetermined before we make the choice.  This implies that our conscious minds are not entirely subject to deterministic physical laws, and have the power to alter physical reality.  Lack of free will means the absence of this power, and implies that we lack the power to alter physical reality in any way.  I personally have no idea whether we have free will or not.  In my opinion, we currently lack the knowledge to answer the question.  However, I believe that debating the matter is useless.  Instead, we should assume that there is free will as the “default” position, and get on with our lives.

Of course, if there is no free will, my advice is useless.  I am simply an automaton among automatons, adding to the chorus of sound and fury that signifies nothing.  In that case the debate over free will is merely another amusing case of pre-programmed robots arguing over what they “should” believe, and what they “ought” to do as a consequence, in a world in which the words “should” and “ought” are completely meaningless.  These words imply an ability to choose between two alternatives, but no such choice can exist if there is no free will.  “Ought” we to alter the criminal justice system because we have decided there is no such thing as free will?  If we have no free will, the question is meaningless.  We cannot possibly alter the predetermined outcome of the debate, or the predetermined evolution of the criminal justice system, or even our opinion on whether it “ought” to be changed or not.  Under the circumstances it can hardly hurt to assume that we do have free will.  If there is no free will, that assumption must have been foreordained, and no conscious agency exists that could have altered the fact.  If we don’t have free will, it is also absurd, if inevitable, to blame me or even take issue with me for advocating that we act as if we have free will.  After all, in that case I couldn’t have acted or thought any differently, assuming my mind is an artifact of the physical world, and not a “ghost in the machine.”  If we believe in free will but there is no free will, debate about the matter may or may not be inevitable, but it is certainly futile, because the outcome of the debate has been predetermined.

    On the other hand, if we decide that there is no free will, but there actually is, it can potentially “hurt” a great deal.  In that case, we will be basing our actions and our conclusions about what “ought” or “ought not” to be done on a false assumption.  Whatever our idiosyncratic goals happen to be, it is more probable that we will attain them if we base our strategy for achieving them on truth rather than falsehood.  If we have free will, the outcome of the debate matters.  Suppose, for example, that the anti-free will side has much better debaters and convinces those watching the debate that they have no free will even if they do.  Plausible results include despair, a sense of purposelessness, fatalism, a lethargic and indifferent attitude towards life, a feeling that nothing matters, etc.  No doubt there are legions of philosophers out there who can prove that, because a = b and b = c, none of these reactions are reasonable.  They will, however, occur whether they are reasonable or not.

    I doubt that my proposed default position will be difficult to implement.  Even the most diehard free will denialists seldom succeed in completely accepting the implications of their own theories.  Look through their writings, and before long you’ll find a “should.”  Read a bit further and you’re likely to stumble over an “ought” as well.  However, as noted above, speaking of “should” and “ought” in the absence of free will is absurd.  They imply the possibility of a choice between two alternatives that will lead to different outcomes.  If there is no free will, there can be no choice.  Individuals will do what they “ought” to do or “ought not” to do just as the arrangement of matter and energy in the universe happens to dictate.  It is absurd to blame them for doing something they could not avoid.  However, the question of whether they actually will be blamed or not is also predetermined.  It is just as absurd to blame the blamers.

    In short, I propose we all stop arguing and accept the default.  If there is no free will, then obviously I am proposing it because of my programming.  I can’t do otherwise even if I “ought” to.  It’s possible my proposal may change things, but, if so, the change was inevitable.  However, if there is free will, then believing in it is simply believing in the truth, and a truth that, at least from my point of view, happens to be a great deal more palatable than the alternative.

  • Whither Morality?

    Posted on April 19th, 2015 Helian 4 comments

    The evolutionary origins of morality and the reasons for its existence have been obvious for over a century.  They were no secret to Edvard Westermarck when he published The Origin and Development of the Moral Ideas in 1906, and many others had written books and papers on the subject before his book appeared.  However, our species has a prodigious talent for ignoring inconvenient truths, and we have been studiously ignoring that particular truth ever since.

    Why is it inconvenient?  Let me count the ways!  To begin, the philosophers who have taken it upon themselves to “educate” us about the difference between good and evil would be unemployed if they were forced to admit that those categories are purely subjective, and have no independent existence of their own.  All of their carefully cultivated jargon on the subject would be exposed as gibberish.  Social Justice Warriors and activists the world over, those whom H. L. Mencken referred to collectively as the “Uplift,” would be exposed as so many charlatans.  We would begin to realize that the legions of pious prigs we live with are not only an inconvenience, but absurd as well.  Gaining traction would be a great deal more difficult for political and religious cults that derive their raison d’être from the fabrication and bottling of novel moralities.  And so on, and so on.

    Just as they do today, those who experienced these “inconveniences” in one form or another pointed to the drawbacks of reality in Westermarck’s time.  For example, from his book,

Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism.  If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it.  This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong.  To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief.  The unfortunate circumstance that there do exist dangerous things in the world, proves that something may be dangerous and yet true.  Another question is whether any scientific truth really is mischievous on the whole, although it may cause much discomfort to certain people.  I venture to believe that this, at any rate, is not the case with that form of ethical subjectivism which I am here advocating.

    I venture to believe it as well.  In the first place, when we accept the truth about morality we make life a great deal more difficult for people of the type described above.  Their exploitation of our ignorance about morality has always been an irritant, but has often been a great deal more damaging than that.  In the 20th century alone, for example, the Communist and Nazi movements, whose followers imagined themselves at the forefront of great moral awakenings that would lead to the triumph of Good over Evil, resulted in the needless death of tens of millions of people.  The victims were drawn disproportionately from among the most intelligent and productive members of society.

    Still, just as Westermarck predicted more than a century ago, the bugaboo of “moral relativism” continues to be “repeated as an argument” in our own day.  Apparently we are to believe that if the philosophers and theologians all step out from behind the curtain after all these years and reveal that everything they’ve taught us about morality is so much bunk, civilized society will suddenly dissolve in an orgy of rape and plunder.

Such notions are best left behind with the rest of the impedimenta of the Blank Slate.  Nothing could be more absurd than the notion that unbridled license and amorality are our “default” state.  One can quickly disabuse oneself of that fear by simply reading the comment thread of any popular news website.  There one will typically find a gaudy exhibition of moralistic posing and pious one-upmanship.  I encourage those who shudder at the thought of such an unpleasant reading assignment to instead have a look at Jonathan Haidt’s The Righteous Mind.  As he puts it in the introduction to his book,

    I could have titled this book The Moral Mind to convey the sense that the human mind is designed to “do” morality, just as it’s designed to do language, sexuality, music, and many other things described in popular books reporting the latest scientific findings.  But I chose the title The Righteous Mind to convey the sense that human nature is not just intrinsically moral, it’s also intrinsically moralistic, critical and judgmental… I want to show you that an obsession with righteousness (leading inevitably to self-righteousness) is the normal human condition.  It is a feature of our evolutionary design, not a bug or error that crept into minds that would otherwise be objective and rational.

Haidt also alludes to a potential reason that some of the people already mentioned above continue to invoke the scary mirage of moral relativism:

Webster’s Third New International Dictionary defines delusion as “a false conception and persistent belief in something that has no existence in fact.”  As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history:  the rationalist delusion.  It’s the idea that reasoning is our most noble attribute, one that makes us like the gods (for Plato) or that brings us beyond the “delusion” of believing in gods (for the New Atheists).  The rationalist delusion is not just a claim about human nature.  It’s also a claim that the rational caste (philosophers or scientists) should have more power, and it usually comes along with a utopian program for raising more rational children.

Human beings are not by nature moral relativists, and they are in no danger of becoming moral relativists merely by virtue of the fact that they have finally grasped what morality actually is.  It is their nature to perceive Good and Evil as real things, independent of the subjective minds that give rise to them, and they will continue to do so even if their reason informs them that what they perceive is a mirage.  They will always tend to behave as if these categories were absolute, rather than relative, even if all the theologians and philosophers among them shout at the top of their lungs that they are not being “rational.”

    That does not mean that we should leave reason completely in the dust.  Far from it!  Now that we can finally understand what morality is, and account for the evolutionary origins of the behavioral predispositions that are its root cause, it is within our power to avoid some of the most destructive manifestations of moral behavior.  Our moral behavior is anything but infinitely malleable, but we know from the many variations in the way it is manifested in different human societies and cultures, as well as its continuous and gradual change in any single society, that within limits it can be shaped to best suit our needs.  Unfortunately, the only way we will be able to come up with an “optimum” morality is by leaning on the weak reed of our ability to reason.

My personal preferences are obvious enough, even if they aren’t set in stone.  I would prefer to limit the scope of morality to those spheres in which it is indispensable for lack of a viable alternative.  I would prefer a system that reacts to the “Uplift” and unbridled priggishness and self-righteousness with scorn and contempt.  I would prefer an educational system that teaches the young the truth about what morality actually is, and why, in spite of its humble origins, we can’t get along without it if we really want our societies to “flourish.”  I know: the legions of those whose whole “purpose of life” is dependent on cultivating the illusion that their own versions of Good and Evil are the “real” ones stand in the way of the realization of these whims of mine.  Still, one can dream.