G. E. Moore Contra Edvard Westermarck

Many pre-Darwinian philosophers realized that the source of human morality was to be found in innate “sentiments,” or “passions,” often speculating that they had been put there by God.  Hume put the theory on a more secular basis.  Darwin realized that the “sentiments” were there because of natural selection, and that human morality was the result of their expression in creatures with large brains.  Edvard Westermarck, perhaps at the same time the greatest and the most unrecognized moral philosopher of them all, put it all together in a coherent theory of human morality, supported by copious evidence, in his The Origin and Development of the Moral Ideas.

Westermarck is all but forgotten today, probably because his insights were so unpalatable to the various academic and professional tribes of “experts on ethics.”  They realized that, if Westermarck were right, and morality really is just the expression of evolved behavioral predispositions, they would all be out of a job.  Under the circumstances, it’s interesting that his name keeps surfacing in modern works about evolved morality, innate behavior, and evolutionary psychology.  For example, I ran across a mention of him in famous primatologist Frans de Waal’s latest book, The Bonobo and the Atheist.  People like de Waal who know something about the evolved roots of behavior are usually quick to recognize the significance of Westermarck’s work.

Be that as it may, G. E. Moore, the subject of my last post, holds a far more respected place in the pantheon of moral philosophers.  That’s to be expected, of course.  He never suggested anything as disconcerting as the claim that all the mountains of books and papers they had composed over the centuries might as well have been written about the nature of unicorns.  True, he did insist that everyone who had written about the subject of morality before him was delusional, having fallen for the naturalistic fallacy, but at least he didn’t claim that the subject they were writing about was a chimera.

Most of what I wrote about in my last post came from the pages of Moore’s Principia Ethica.  That work was published in 1903.  Nine years later he published another little book, entitled Ethics.  As it happens, Westermarck’s Origin appeared between those two dates, in 1906.  In all likelihood, Moore read Westermarck, because parts of Ethics appear to be direct responses to his book.  Moore had only a vague understanding of Darwin and the implications of his work for human behavior.  He did, however, understand what Westermarck meant when he wrote in the Origin,

If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.  It has been said by Bentham and others that moral principles cannot be proved because they are first principles which are used to prove everything else.  But the real reason for their being inaccessible to demonstration is that, owing to their very nature, they can never be true.  If the word “Ethics,” then, is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.

Now that got Moore’s attention.  Responding to Westermarck’s theory, or something very like it, he wrote:

Even apart from the fact that they lead to the conclusion that one and the same action is often both right and wrong, it is, I think, very important that we should realize, to begin with, that these views are false; because, if they were true, it would follow that we must take an entirely different view as to the whole nature of Ethics, so far as it is concerned with right and wrong, from what has commonly been taken by a majority of writers.  If these views were true, the whole business of Ethics, in this department, would merely consist in discovering what feelings and opinions men have actually had about different actions, and why they have had them.  A good many writers seem actually to have treated the subject as if this were all that it had to investigate.  And of course questions of this sort are not without interest, and are subjects of legitimate curiosity.  But such questions only form one special branch of Psychology or Anthropology; and most writers have certainly proceeded on the assumption that the special business of Ethics, and the questions which it has to try to answer, are something quite different from this.

Indeed they have.  The question is whether they’ve actually been doing anything worthwhile in the process.  Note the claim that Westermarck’s views were “false.”  This claim was based on a “proof,” appearing in the preceding pages, that they couldn’t be true.  Unfortunately, this “proof” is transparently flimsy to anyone who isn’t inclined to swallow it because it defends the relevance of their “expertise.”  Quoting directly from his Ethics, it goes something like this:

  1. It is absolutely impossible that any one single, absolutely particular action can ever be both right and wrong, either at the same time or at different times.
  2. If the whole of what we mean to assert, when we say that an action is right, is merely that we have a particular feeling towards it, then plainly, provided only we really have this feeling, the action must really be right.
  3. For if this is so, and if, when a man asserts an action to be right or wrong, he is always merely asserting that he himself has some particular feeling towards it, then it absolutely follows that one and the same action has sometimes been both right and wrong – right at one time and wrong at another, or both simultaneously.
  4. But if this is so, then the theory we are considering certainly is not true.  (QED)

Note that this “proof” requires the positive assertion that it is possible to claim that an action can be right or wrong, in this case because of “feelings.”  A second, similar proof, also offered in Chapter III of Ethics, “proves” that an action can’t possibly be right merely because one “thinks” it right, either.  With that, Moore claims that he has “proved” that Westermarck, or someone with identical views, must be wrong.  The only problem with the “proof” is that Westermarck specifically pointed out in the passage quoted above that it is impossible to make truth claims about “moral principles.”  Therefore, it is out of the question that he could ever be claiming that any action “is right,” or “is wrong,” because of “feelings” or for any other reason.  In other words, Moore’s “proof” is nonsense.

The fact that Moore was responding specifically to evolutionary claims about morality is also evident in the same Chapter of Ethics.  Allow me to quote him at length.

…it is supposed that there was a time, if we go far enough back, when our ancestors did have different feelings towards different actions, being, for instance, pleased with some and displeased with others, but when they did not, as yet, judge any actions to be right or wrong; and that it was only because they transmitted these feelings, more or less modified, to their descendants, that those descendants at some later stage, began to make judgments of right and wrong; so that, in a sense, our moral judgments were developed out of mere feelings.  And I can see no objection to the supposition that this was so.  But, then, it seems also to be supposed that, if our moral judgments were developed out of feelings – if this was their origin – they must still at this moment be somehow concerned with feelings; that the developed product must resemble the germ out of which it was developed in this particular respect.  And this is an assumption for which there is, surely, no shadow of ground.

In fact, there was a “shadow of ground” when Moore wrote those words, and the “shadow” has grown a great deal longer in our own day.  Moore continues,

Thus, even those who hold that our moral judgments are merely judgments about feelings must admit that, at some point in the history of the human race, men, or their ancestors, began not merely to have feelings but to judge that they had them:  and this alone means an enormous change.

Why was this such an “enormous change?”  Why, of course, because as soon as our ancestors judged that they had feelings, those feelings could suddenly no longer be a basis for morality, because of the “proof” given above.  Moore concludes triumphantly,

And hence, the theory that moral judgments originated in feelings does not, in fact, lend any support at all to the theory that now, as developed, they can only be judgments about feelings.

If Moore’s reputation among them is any guide, such “ironclad logic” is still taken seriously by today’s crop of “experts on ethics.”  Perhaps it’s time they started paying more attention to Westermarck.

The Moral Philosophy of G. E. Moore, or Why You Don’t Need to Bother with Aristotle, Hegel, and Kant

G. E. Moore isn’t exactly a household name these days, except perhaps among philosophers.  You may have heard of his most famous concoction, though – the “naturalistic fallacy.”  If we are to believe Moore, not only Aristotle, Hegel and Kant, but virtually every other philosopher you’ve ever heard of got morality all wrong because of it.  He was the first one who ever got it right.  On top of that, his books are quite thin, and he writes in the vernacular.  When you think about it, he did us all a huge favor.  Assuming he’s right, you won’t have to struggle with Kant, whose sentences can run on for a page and a half before you finally get to the verb at the end, and who is comprehensible, even to Germans, only in English translation.  You won’t have to agonize over the correct interpretation of Hegel’s dialectic.  Moore has done all that for you.  Buy his books, which are little more than pamphlets, and you’ll be able to toss out all those thick tomes and learn all the moral philosophy you will ever need in a week or two.

Or at least you will if Moore got it right.  It all hinges on his notion of the “Good-in-itself.”  He claims it’s something like what philosophers call qualia.  Qualia are the content of our subjective experiences, like colors, smells, pain, etc.  They can’t really be defined, but only experienced.  Consider, for example, the difficulty of explaining “red” to a blind person.  Moore’s description of the Good is even more vague.  As he puts it in his rather pretentiously named Principia Ethica,

Let us, then, consider this position.  My point is that ‘good’ is a simple notion, just as ‘yellow’ is a simple notion; that, just as you cannot, by any manner of means, explain to any one who does not already know it, what yellow is, so you cannot explain what good is.

In other words, you can’t even define good.  If that isn’t slippery enough for you, try this:

They (metaphysicians) have always been much occupied, not only with that other class of natural objects which consists in mental facts, but also with the class of objects or properties of objects, which certainly do not exist in time, are not therefore parts of Nature, and which, in fact, do not exist at all.  To this class, as I have said, belongs what we mean by the adjective “good.” …What is meant by good?  This first question I have already attempted to answer.  The peculiar predicate, by reference to which the sphere of Ethics must be defined, is simple, unanalyzable, indefinable.

Or, as he puts it elsewhere, the Good doesn’t exist.  It just is.  Which brings us to the naturalistic fallacy.  If, as Moore claims, Good doesn’t exist as a natural, or even a metaphysical, object, it can’t be defined with reference to such an object.  Attempts to so define it are what he refers to as the naturalistic fallacy.  That, in his opinion, is why every other moral philosopher in history, or at least every one whose name happens to turn up in his books, has been wrong except him.  The fallacy is defined at Wiki and elsewhere on the web, but the best way to grasp what he means is to read his books.  For example,

The naturalistic fallacy always implies that when we think “This is good,” what we are thinking is that the thing in question bears a definite relation to some one other thing.

That fallacy, I explained, consists in the contention that good means nothing but some simple or complex notion, that can be defined in terms of natural qualities.

To hold that from any proposition asserting “Reality is of this nature” we can infer, or obtain confirmation for, any proposition asserting “This is good in itself” is to commit the naturalistic fallacy.

In short, all the head scratching of all the philosophers over thousands of years about the question of what is Good has been so much wasted effort.  Certainly, the average layman had no chance at all of understanding the subject, or at least he didn’t until the fortuitous appearance of Moore on the scene.  He didn’t show up a moment too soon, either, because, as he explains in his books, we all have “duties.”  It turns out that not only did the intuition “Good” pop up in his consciousness, more or less after the fashion of “yellow” or the smell of a rose; he also “intuited” that it came fully equipped with the power to dictate to other individuals what they ought and ought not to do.  Again, I’ll allow the philosopher to explain.

Our “duty,” therefore, can only be defined as that action, which will cause more good to exist in the Universe than any possible alternative… When, therefore, Ethics presumes to assert that certain ways of acting are “duties” it presumes to assert that to act in those ways will always produce the greatest possible sum of good.

But how on earth can we ever even begin to do our duty if we have no clue what Good is?  Well, Moore is actually quite coy about explaining it to us, and rightly so, as it turns out.  When he finally takes a stab at it in Chapter VI of Principia, it turns out to be paltry enough.  Basically, it’s the same “pleasure,” or “happiness” that many other philosophers have suggested, only it’s not described in such simple terms.  It must be part of what Moore describes as an “organic whole,” consisting not only of pleasure itself, for example, but also a consciousness capable of experiencing the pleasure, the requisite level of taste to really appreciate it, the emotional equipment necessary to react with the appropriate level of awe, etc.  Silly old philosophers!  They rashly assumed that, if the Good were defined as “pleasure,” it would occur to their readers that they would have to be conscious in order to experience it without them spelling it out.  Little did they suspect the coming of G. E. Moore and his naturalistic fallacy.

When he finally gets around to explaining it to us, we gather that Moore’s Good is more or less what you’d expect the intuition of Good to be in a well-bred English gentleman endowed with “good taste” around the turn of the 20th century.  His Good turns out to include nice scenery, pleasant music, and chats with other “good” people.  Or, as he put it somewhat more expansively,

We can imagine the case of a single person, enjoying throughout eternity the contemplation of scenery as beautiful, and intercourse with persons as admirable, as can be imagined.

and

By far the most valuable things which we know or can imagine, are certain states of consciousness, which may be roughly described as the pleasures of human intercourse and the enjoyment of beautiful objects.  No one, probably, who has asked himself the question, has ever doubted that personal affection and the appreciation of what is beautiful in Art or Nature, are good in themselves.

Really?  No one?  One can only surmise that Moore’s circle of acquaintance must have been quite limited.  Unsurprisingly, Beethoven’s Fifth is in the mix, but only, of course, as part of an “organic whole.”  As Moore puts it,

What value should we attribute to the proper emotion excited by hearing Beethoven’s Fifth Symphony, if that emotion were entirely unaccompanied by any consciousness, either of the notes, or of the melodic and harmonic relations between them?

It would seem, then, that even if you’re such a coarse person that you can’t appreciate Beethoven’s Fifth yourself, it is still your “duty” to make sure that it’s right there on everyone else’s smart phone.

Imagine, if you will, Mother Nature sitting down with Moore, holding his hand, looking directly into his eyes, and revealing to him in all its majesty the evolution of life on this planet, starting from the simplest, one-celled creatures more than four billion years ago, and proceeding through ever more complex forms to the almost incredible emergence of a highly intelligent and highly social species known as Homo sapiens.  It all happened, she explains to him with a look of triumph on her face, because, over all those four billion years, the chain of life remained unbroken: the creatures that made up its links survived and reproduced.  Then, with a serious expression on her face, she asks him, “Now do you understand the reason for the existence of moral emotions?”  “Of course,” answers Moore, “they’re there so I can enjoy nice landscapes and pretty music.”  (Loud forehead slap)  Mother Nature stands up and walks away shaking her head, consoling herself with the thought that some more advanced species might “get it” after another million years or so of natural selection.

And what of Aristotle, Hegel and Kant?  Throw out your philosophy books and forget about them.  Imagine being so dense as to commit the naturalistic fallacy!


…And One More Thing about James Burnham: On Human Nature

There’s another thing about James Burnham’s Suicide of the West that’s quite fascinating: his take on human nature.  In fact, Chapter III is entitled “Human Nature and the Good Society.”  Here are a few excerpts from that chapter:

However varied may be the combination of beliefs that it is psychologically possible for an individual liberal to hold, it remains true that liberalism is logically committed to a doctrine along the lines that I have sketched:  viewing human nature as not fixed but plastic and changing; with no pre-set limit to potential development; with no innate obstacle to the realization of a society of peace, freedom, justice and well-being.  Unless these things are true of human nature, the liberal doctrine and program for government, education, reform and so on are an absurdity.

But in the face of what man has done and does, it is only an ideologue obsessed with his own abstractions who can continue to cling to the vision of an innately uncorrupt, rational and benignly plastic human nature possessed of an unlimited potential for realizing the good society.

Quite true, which makes it all the more remarkable that virtually all the “scientists” in the behavioral “sciences” at the time Burnham wrote these lines were “clinging to that vision,” at least in the United States.  See, for example, The Triumph of Evolution, in which historian Hamilton Cravens documents the fact.  Burnham continues,

No, we must repeat:  if human nature is scored by innate defects, if the optimistic account of man is unjustified, then is all the liberal faith in vain.

Here we get a glimpse of the reason that the Blank Slaters insisted so fanatically that there is no such thing as human nature, at least as commonly understood, for so many years, in defiance of all reason, and despite the fact that any 10-year-old could have told them their anthropological theories were ludicrous.  The truth stood in the way of their ideology.  Therefore, the truth had to yield.

All this raises the question of how, as early as 1964, Burnham came up with such a “modern” understanding of the Blank Slate.  Reading on in the chapter, we find some passages that are even more intriguing.  Have a look at this:

It is not merely the record of history that speaks in unmistakable refutation of the liberal doctrine of man.  Ironically enough – ironically, because it is liberalism that has maintained so exaggerated a faith in science – almost all modern scientific studies of man’s nature unite in giving evidence against the liberal view of man as a creature motivated, once ignorance is dispelled, by the rational search for peace, freedom and plenty.  Every modern school of biology and psychology and most schools of sociology and anthropology conclude that men are driven chiefly by profound non-rational, often anti-rational, sentiments and impulses, whose character and very existence are not ordinarily understood by conscious reason.  Many of these drives are aggressive, disruptive, and injurious to others and to society.

!!!

The bolding and italics are mine.  How on earth did Burnham come up with such ideas?  By all means, dear reader, head for your local university library, fish out the ancient microfiche, and search through the scientific and professional journals of the time yourself.  Almost without exception, the Blank Slate called the tune.  Clearly, Burnham didn’t get the notion that “almost all modern scientific studies of man’s nature” contradicted the Blank Slate from actually reading the literature himself.  Where, then, did he get it?  Only Burnham and the wild goose know, and Burnham’s dead, but my money is on Robert Ardrey.  True, Konrad Lorenz’ On Aggression was published in Germany in 1963, but it didn’t appear in English until 1966.  The only other really influential popular science book published before Suicide of the West that suggested anything like what Burnham wrote in the above passage was Ardrey’s African Genesis, published in 1961.

What’s that you say?  I’m dreaming?  No one of any significance ever challenged the Blank Slate orthodoxy until E. O. Wilson’s stunning and amazing publication of Sociobiology in 1975?  I know, it must be true, because it’s all right there in Wikipedia.  As George Orwell once said, “He who controls the present controls the past.”

James Burnham and the Anthropology of Liberalism

James Burnham was an interesting anthropological data point in his own right.  A left-wing activist in the ’30s, he eventually became a Trotskyite.  By the ’50s, however, he had completed an ideological double back flip to conservatism, and he became a Roman Catholic convert on his deathbed.  He was an extremely well-read intellectual, and a keen observer of political behavior.  His most familiar book is The Managerial Revolution, published in 1941.  Among others, it strongly influenced George Orwell, who had something of a love/hate relationship with Burnham.  For example, in an essay in Tribune magazine in January 1944 he wrote,

Recently, turning up a back number of Horizon, I came upon a long article on James Burnham’s Managerial Revolution, in which Burnham’s main thesis was accepted almost without examination.  It represented, many people would have claimed, the most intelligent forecast of our time.  And yet – founded as it was on a belief in the invincibility of the German army – events have already blown it to pieces.

A bit over a year later, in February 1945, however, we find Burnham had made more of an impression on Orwell than the first quote implies.  In another essay in the Tribune he wrote,

…by the way the world is actually shaping, it may be that war will become permanent.  Already, quite visibly and more or less with the acquiescence of all of us, the world is splitting up into the two or three huge super-states forecast in James Burnham’s Managerial Revolution.  One cannot draw their exact boundaries as yet, but one can see more or less what areas they will comprise.  And if the world does settle down into this pattern, it is likely that these vast states will be permanently at war with one another, although it will not necessarily be a very intensive or bloody kind of war.

Of course, these super-states later made their appearance in Orwell’s most famous novel, 1984.  However, Orwell was right about Burnham the first time.  Burnham had an unfortunate penchant for making wrong predictions, often based on the assumption that transitory events must represent a trend that would continue into the indefinite future.  For example, impressed by the massive industrial might brought to bear by the United States during World War II, and its monopoly of atomic weapons, he suggested in The Struggle for the World, published in 1947, that we immediately proceed to force the Soviet Union to its knees, and establish a Pax Americana.  A bit later, in 1949, impressed by a hardening of the U.S. attitude towards the Soviet Union after the war, he announced The Coming Defeat of Communism in a book of that name.  He probably should have left it at that, but reversed his prognosis in Suicide of the West, which appeared in 1964.  By that time it seemed to Burnham that the United States had become so soft on Communism that the defeat of Western civilization was almost inevitable.  The policy of containment could only delay, but not stop the spread of Communism, and in 1964 it seemed that once a state had fallen behind the Iron Curtain it could never throw off the yoke.

Burnham didn’t realize that, in the struggle with Communism, time was actually on our side.  A more far-sighted prophet, a Scotsman by the name of Sir James Mackintosh, had predicted in the early 19th century that the nascent versions of Communism then already making their appearance would eventually collapse.  He saw that the Achilles heel of what he recognized was really a secular religion was its ill-advised proclamation of a coming paradise on earth, where it could be fact-checked, instead of in the spiritual realms of the traditional religions, where it couldn’t.  In the end, he was right.  After they had broken 100 million eggs, people finally noticed that the Communists hadn’t produced an omelet after all, and the whole, seemingly impregnable edifice collapsed.

One thing Burnham did see very clearly, however, was the source of the West’s weakness – liberalism.  He was well aware of its demoralizing influence, and its tendency to collaborate with the forces that sought to destroy the civilization that had given birth to it.  Inspired by what he saw as an existential threat, he carefully studied and analyzed the type of the western liberal, and its evolution away from the earlier “liberalism” of the 19th century.  Therein lies the real value of his Suicide of the West.  It still stands as one of the greatest analyses of modern liberalism ever written.  The basic characteristics of the type he described are as familiar more than half a century later as they were in 1964.  And this time his predictions regarding the “adjustments” in liberal ideology that would take place as its power expanded were spot on.

Burnham developed a “more or less systematic set of ideas, theories and beliefs about society” characteristic of the liberal syndrome in Chapters III-V of the book, and then listed nineteen of them, along with possible contrary beliefs, in Chapter VII.  Some of them have changed very little since Burnham’s day, such as,

It is society – through its bad institutions and its failure to eliminate ignorance – that is responsible for social evils.  Our attitude toward those who embody these evils – of crime, delinquency, war, hunger, unemployment, communism, urban blight – should not be retributive but rather the permissive, rehabilitating, education approach of social service; and our main concern should be the elimination of the social conditions that are the source of the evils.

Since there are no differences among human beings considered in their political capacity as the foundation of legitimate, that is democratic, government, the ideal state will include all human beings, and the ideal government is world government.

The goal of political and social life is secular:  to increase the material and functional well-being of humanity.

Some of the 19 have begun to change quite noticeably since the publication of Suicide of the West in just the ways Burnham suggested.  For example, items 9 and 10 on the list reflect a classic version of the ideology that would have been familiar to and embraced by “old school” liberals like John Stuart Mill:

Education must be thought of as a universal dialogue in which all teachers and students above elementary levels may express their opinions with complete academic freedom.

Politics must be thought of as a universal dialogue in which all persons may express their opinions, whatever they may be, with complete freedom.

Burnham had already noticed signs of erosion in these particular shibboleths in his own day, as liberals gained increasing control of academia and the media.  As he put it,

In both Britain and the United States, liberals began in 1962 to develop the doctrine that words which are “inherently offensive,” as far-Right but not communist words seem to be, do not come under the free speech mantle.

In our own day of academic safe spaces and trigger warnings, there is certainly no longer anything subtle about this ideological shift.  Calls for suppression of “offensive” speech have now become so brazen that they have spawned divisions within the liberal camp itself.  One finds old school liberals of the Berkeley “Free Speech Movement” days resisting Gleichschaltung with the new regime, looking on with dismay as speaker after speaker is barred from university campuses for suspected thought crime.

As noted above, Communism imploded before it could overwhelm the Western democracies, but the process of decay goes on.  Nothing about the helplessness of Europe in the face of the current inundation by third world refugees would have surprised Burnham in the least.  He predicted it as an inevitable expression of another fundamental characteristic of the ideology – liberal guilt.  Burnham devoted Chapter 10 of his book to the subject, and noted therein,

Along one perspective, liberalism’s reformist, egalitarian, anti-discrimination, peace-seeking principles are, or at any rate can be interpreted as, the verbally elaborated projections of the liberal sense of guilt.

and

The guilt of the liberal causes him to feel obligated to try to do something about any and every social problem, to cure every social evil.  This feeling, too, is non-rational:  the liberal must try to cure the evil even if he has no knowledge of the suitable medicine or, for that matter, of the nature of the disease; he must do something about the social problem even when there is no objective reason to believe that what he does can solve the problem – when, in fact, it may well aggravate the problem instead of solving it.

I suspect Burnham himself would have been surprised at the degree to which such “social problems” have multiplied in the last half a century, and the pressure to do something about them has only increased in the meantime.  As for the European refugees, consider the following corollaries of liberal guilt as developed in Suicide of the West:

(The liberal) will not feel uneasy, certainly not indignant, when, sitting in conference or conversation with citizens of countries other than his own – writers or scientists or aspiring politicians, perhaps – they rake his country and his civilization fore and aft with bitter words; he is as likely to join with them in the criticism as to protest it.

It follows that,

…the ideology of modern liberalism – its theory of human nature, its rationalism, its doctrines of free speech, democracy and equality – leads to a weakening of attachment to groups less inclusive than Mankind.

All modern liberals agree that government has a positive duty to make sure that the citizens have jobs, food, clothing, housing, education, medical care, security against sickness, unemployment and old age; and that these should be ever more abundantly provided.  In fact, a government’s duty in these respects, if sufficient resources are at its disposition, is not only to its own citizens but to all humanity.

…under modern circumstances there is a multiplicity of interests besides those of our own nation and culture that must be taken into account, but an active internationalism in feeling as well as thought, for which “fellow citizens” tend to merge into “humanity,” sovereignty is judged an outmoded conception, my religion or no-religion appears as a parochial variant of the “universal ideas common to mankind,” and the “survival of mankind” becomes more crucial than the survival of my country and my civilization.

For Western civilization in the present condition of the world, the most important practical consequence of the guilt encysted in the liberal ideology and psyche is this:  that the liberal, and the group, nation or civilization infected by liberal doctrine and values, are morally disarmed before those whom the liberal regards as less well off than himself.

The inevitable implication of the above is that the borders of the United States and Europe must become meaningless in an age of liberal hegemony, as, indeed, they have.  In 1964 Burnham was not without hope that the disease was curable.  Otherwise, of course, he would never have written Suicide of the West.  He concluded,

But of course the final collapse of the West is not yet inevitable; the report of its death would be premature.  If a decisive change comes, if the contraction of the past fifty years should cease and be reversed, then the ideology of liberalism, deprived of its primary function, will fade away, like those feverish dreams of the ill man who, passing the crisis of his disease, finds he is not dying after all.  There are a few small signs, here and there, that liberalism may already have started fading.  Perhaps this book is one of them.

No, liberalism hasn’t faded.  The infection has only become more acute.  At best one might say that there are now a few more people in the West who are aware of the disease.  I am not optimistic about the future of Western civilization, but I am not foolhardy enough to predict historical outcomes.  Perhaps the fever will break, and we will recover, and perhaps not.  Perhaps there will be a violent crisis tomorrow, or perhaps the process of dissolution will drag itself out for centuries.  Objectively speaking, there is no “good” outcome and no “bad” outcome.  However, in the same vein, there is no objective reason why we must refrain from fighting for the survival of our civilization, our culture, or even the ethnic group to which we belong.

As for the liberals, perhaps they should consider why all the fine moral emotions they are so proud to wear on their sleeves exist to begin with.  I doubt that the reason has anything to do with suicide.

By all means, read the book.

Relics of the Blank Slate, as Excavated at “Ethics” Magazine

There’s a reason that the Blank Slaters clung so bitterly to their absurd orthodoxy for so many years.  If there is such a thing as human nature, then all the grandiose utopias they concocted for us over the years, from Communism on down, would vanish like so many mirages.  That orthodoxy collapsed when a man named Robert Ardrey made a laughing stock of the “men of science.”  In this enlightened age, one seldom finds an old school, hard core Blank Slater outside of the darkest, most obscure rat holes of academia.  Even PBS and Scientific American have thrown in the towel.  Still, one occasionally runs across “makeovers” of the old orthodoxy, in the guise of what one might call Blank Slate Lite.

I recently discovered just such an artifact in the pages of Ethics magazine, which functions after a fashion as an asylum for “experts in ethics” who still cling to the illusion that they have anything relevant to say.  Entitled The Limits of Evolutionary Explanations of Morality and Their Implications for Moral Progress, it was written by Prof. Allen Buchanan of Duke and King’s College London, and Asst. Prof. Russell Powell of Boston University.  Unfortunately, it’s behind a paywall, and is quite long, but if you’re the adventurous type you might be able to access it at a local university library.  In any case, the short version of the paper might be summarized as follows:

Conservatives have traditionally claimed that “human nature places severe limitations on social and moral reform,” but have “offered little in the way of scientific evidence to support this claim.”  Now, however, a newer breed of conservatives, known as “evoconservatives,” has “attempted to fill this empirical gap in the conservative argument by appealing to the prevailing evolutionary explanation of morality to show that it is unrealistic to think that cosmopolitan and other ‘inclusivist’ moral ideals can meaningfully be realized.”  Still, while evolved psychology can’t be discounted in moral theory, and there is such a thing as human nature, it is so plastic and malleable that it doesn’t stand in the way of moral progress.

This, at least, is the argument until one gets to the “Conclusion” section at the end.  Then, as if frightened by their own hubris, the authors make noises in a quite contradictory direction, writing, for example,

…we acknowledge that evolved psychological capacities, interacting with particular social and institutional environments, can pose serious obstacles to using our rationality in ways that result in more inclusive moralities. For example, environments that mirror conditions of the EEA (environment of evolutionary adaptation, i.e., the environment in which moral behavioral predispositions presumably evolved, ed.)—such as those characterized by great physical insecurity, high parasite threat, severe intergroup competition for resources, and a lack of institutions for peaceful, mutually beneficial cooperation—will tend to be very unfriendly to the development of inclusivist morality.

However, they conclude triumphantly with the following:

At the same time, however, we have offered compelling reasons, both theoretical and empirical, to believe that human morality is only weakly constrained by human evolutionary history, leaving the potential for substantial moral progress wide open. Our point is not that human beings have slipped the “leash” of evolution, but rather that the leash is far longer than evoconservatives and even many evolutionary psychologists have acknowledged—and no one is in a position at present to know just how elastic it will turn out to be.

Students of the Blank Slate orthodoxy will see that all the main shibboleths are still there, if in somewhat attenuated form.  The Blank Slate itself is replaced by a “long leash.”  The “genetic determinist” strawman of the Blank Slaters is replaced by “evoconservatives.”  These evoconservatives are no longer “fascists and racists,” but merely a nuisance standing in the way of “moral progress.”  The overriding goal is no longer anything like the Marxist paradise on earth, but the somewhat less inspiring continued “development of inclusivist morality.”

Readers of this blog should immediately notice the unwarranted assumption that there actually is such a thing as “moral progress.”  If so, there must be a goal towards which morality is progressing.  Natural selection occurs without any such goal or purpose.  It follows that the authors must believe in some “mysterious, transcendental” origin other than natural evolution to account for this progress.  However, they insist they don’t believe in such a “mysterious, transcendental” source.  How, then, do they account for the existence of this “thing” they call “moral progress?”  What the authors really mean by “moral progress,” of course, is “the way we and other good liberals want things.”

By “inclusivist” moralities, the authors mean versions that can be expanded to include very large subsets of the human population that are neither kin to the bearers of that morality nor members of any identifiable group that is likely to reciprocate their good deeds.  Presumably the ultimate goal is to expand these subsets to “include” all mankind.  The “evoconservatives,” we are told, deny the possibility of such “inclusivism” in spite of the fact that one can cite many obvious examples to the contrary.  At this point, one begins to wonder who these obtuse evoconservatives really are.  The authors are quite coy about identifying them.  The footnote following their first mention merely points to a blurb about what the authors will discuss later in the text.  No names are named.  Much later in the text Jonathan Haidt is finally identified as one of the evoconservatives.  As the authors put it,

Leading psychologist Jonathan Haidt, who has stressed the moral psychological significance of in-group loyalty, expresses a related view: ‘It would be nice to believe that we humans were designed to love everyone unconditionally. Nice, but rather unlikely from an evolutionary perspective. Parochial love—love within groups—amplified by similarity, a sense of shared fate, and the suppression of free riders, may be the most we can accomplish.’

In fact, as anyone who has actually read Haidt is aware, he neither believes that “inclusivist” moralities as defined by the authors are impossible, nor does this quote imply anything of the sort.  A genuine conservative would doubtless classify Haidt as a liberal, but he has defended, or at least tried to explain, conservative moralities.  Apparently that is sufficient to cast him into the outer darkness as an “evoconservative.”

The authors also point the finger at Larry Arnhart.  Arnhart is neither a geneticist, nor an evolutionary biologist, nor an evolutionary psychologist, but a political scientist who apparently subscribes to some version of the naturalistic fallacy.  Nowhere is it demonstrated that he actually believes that the inclusivist versions of morality favored by the authors are impossible.  In a word, the few slim references to individuals who are supposed to fit the description of the evoconservative strawman concocted by the authors actually do nothing of the sort.  Yet in spite of the fact that the authors can’t actually name anyone who explicitly embraces their version of evoconservatism, they describe the existence of “inclusivist morality” as a “major flaw in evoconservative arguments.”

A bit later, the authors appear to drop their evoconservative strawman, and expand their field of fire to include anyone who claims that “inclusivist morality” could have resulted from natural selection.  For example, quoting from the article:

The key point is that none of these inclusivist features of contemporary morality are plausibly explained in standard selectionist terms, that is, as adaptations or predictable expressions of adaptive features that arose in the environment of evolutionary adaptation (EEA).

Here, “evoconservatives” have been replaced by “standard selectionists.”  Invariably, the authors walk back such seemingly undiluted statements of Blank Slate ideology with assurances that no one believes more firmly than they in the evolutionary roots of morality.  That, of course, begs the question of how “these inclusivist features,” if they are not explainable in “standard selectionist terms,” are plausibly explained in “non-standard selectionist terms,” and who these “non-standard selectionists” actually are.  Apparently the only alternative is that the “inclusivist features” have a “transcendental” explanation, not further elaborated by the authors.  This conclusion is not as far-fetched as it seems.  Interestingly enough, the authors’ work is partially funded by the Templeton Foundation, an accommodationist outfit with the ostensible goal of proving that religion and science are not mutually exclusive.

In fact, I know of not a single scientist whose specialty is germane to the subject of human morality who would dispute the existence of inclusive moralities.  The authors limit themselves to statements to the effect that the work of such and such a person “suggests” that they don’t believe in inclusive moralities, or that the work of some other person “implies” that they don’t believe such moralities are stable.  Wouldn’t it be more reasonable to simply go and ask these people what they actually believe regarding these matters, instead of putting words in their mouths?

Left out of all these glowing descriptions of inclusive moralities is the fact that not a single one of them exists without an outgroup.  That fact is demonstrated by the authors themselves, whose outgroup obviously includes those they identify as “evoconservatives.”  One might also point out that those who have “inclusive” ingroups commonly have “inclusive” outgroups as well, and liberals are commonly found among the most violent outgroup haters on the planet.  To confirm this, one need only look at the comments at the websites of Daily Kos, or Talking Points Memo, or the Nation, or any other familiar liberal watering hole.

While I’m somewhat dubious about all the authors’ loose talk of “moral progress,” I think we can at least identify some real progress towards getting at the truth in their version of Blank Slate Lite.  After all, it’s a far cry from the old school version.  Throughout the article the authors question the ability of natural selection, in the environment in which moral behavior presumably evolved in early humans, to account for this or that feature of their observed “inclusive morality.”  As noted above, however, as often as they do so, they are effusive in assuring the reader that by no means do they wish to imply that they find any fault whatsoever with innate theories of human morality.  In the end, what more can one ask than the ability to continue seeking the truth about human moral behavior in every relevant area of science without fear of being denounced and intimidated as guilty of one type of villainy or another?  That ability seems more assured if the existence of innate behavior is at least admitted, and is therefore unlikely to be criminalized as it was in the heyday of the Blank Slate.  In that respect, Blank Slate Lite really does represent progress.

Of course, there remains the question of why so many of us still take seriously the authors’ fantasies about “moral progress” more than a century after Westermarck pointed out the absurdity of truth claims about morality.  I suspect the answer lies in the fact that ending the charade would reduce all the pontifications of all the “experts in morality” catered to by learned journals like Ethics to gibberish.  Experts don’t like to be confronted with the truth that their painstakingly acquired expertise is irrelevant.  Admitting it would make it a great deal harder to secure grants from the Templeton Foundation.

UPDATE:  I failed to mention another intriguing paragraph in the paper that reads as follows:

The human capacity to reflect on and revise our conceptions of duty and moral standing can give us reasons here and now to expand our capacities for moral behavior by developing institutions that economize on sympathy and enhance our ability to take the interests of strangers into account. This same capacity may also give us reasons, in the not-too-distant future, to modify our evolved psychology through the employment of biomedical interventions that enable us to implement new norms that we develop as a result of the process of reflection. In both cases, the limits of our evolved motivational capacities do not translate into a comparable constraint on our capacity for moral action. The fact that we are not currently motivationally capable of acting on the considered moral norms we have come to endorse is not a reason to trim back those norms; it is a reason to enhance our motivational capacity, either through institutional or biomedical means, so that it matches the demands of our considered morality.

Note the wording about “biomedical interventions.”  I’m not sure what to make of it, dear reader, but it appears that, one way or another, the authors intend to “get our minds right.”

Panksepp, Animal Rights, and the Blank Slate

So who is Jaak Panksepp?  Have a look at his YouTube talk on emotions at the bottom of this post, for starters.  A commenter recommended him, and I discovered the advice was well worth taking.  Panksepp’s The Archaeology of Mind, which he co-authored with Lucy Biven, was a revelation to me.  The book describes a set of basic emotional systems that exist in all, or virtually all, mammals, including humans.  In the words of the authors:

…the ancient subcortical regions of mammalian brains contain at least seven emotional, or affective, systems:  SEEKING (expectancy), FEAR (anxiety), RAGE (anger), LUST (sexual excitement), CARE (nurturance), PANIC/GRIEF (sadness), and PLAY (social joy).  Each of these systems controls distinct but specific types of behaviors associated with many overlapping physiological changes.

This is not just another laundry list of “instincts” of the type often proposed by psychologists at the end of the 19th and the beginning of the 20th centuries.  Panksepp is a neuroscientist, and has verified experimentally the unique signatures of these emotional systems in the ancient regions of the brain shared by humans and other mammals.  Again quoting from the book,

As far as we know right now, primal emotional systems are made up of neuroanatomies and neurochemistries that are remarkably similar across all mammalian species.  This suggests that these systems evolved a very long time ago and that at a basic emotional and motivational level, all mammals are more similar than they are different.  Deep in the ancient affective recesses of our brains, we remain evolutionarily kin.

If you are an astute student of the Blank Slate phenomenon, dear reader, no doubt you are already aware of the heretical nature of this passage.  That’s right!  The Blank Slaters were prone to instantly condemn any suggestion that there were similarities between humans and other animals as “anthropomorphism.”  In fact, if you read the book you will find that their reaction to Panksepp and others doing similar research has been every bit as allergic as their reaction to anyone suggesting the existence of human nature.  However, in the field of animal behavior, they are anything but a quaint artifact of the past.  Diehard disciples of the behaviorist John B. Watson and his latter day follower B. F. Skinner, Blank Slaters of the first water, still haunt the halls of academia in significant numbers, and still control the message in any number of “scientific” journals.  There they have been following their usual “scholarly” pursuit of ignoring and/or vilifying anyone who dares to disagree with them ever since the heyday of Ashley Montagu and Richard Lewontin.  In the process they have managed to suppress or distort a great deal of valuable research bearing directly on the wellsprings of human behavior.

We learn from the book that the Blank Slate orthodoxy has been as damaging for other animals as it has been for us.  Among other things, it has served as the justification for indifference to or denial of the feelings and consciousness of animals.  The possibility that this attitude has contributed to some rather gross instances of animal abuse has been drawing increasing attention from those who are concerned about their welfare.  See, for example, the website of Panksepp admirer Temple Grandin.  According to Panksepp & Biven,

Another of Descartes’ big errors was the idea that animals are without consciousness, without experiences, because they lack the subtle nonmaterial stuff from which the human mind is made.  This notion lingers on today in the belief that animals do not think about nor even feel their emotional responses.

Many emotion researchers as well as neuroscience colleagues make a sharp distinction between affect and emotion, seeing emotion as purely behavioral and physiological responses that are devoid of affective experience.  They see emotional arousal as merely a set of physiological responses that include emotion-associated behaviors and a variety of visceral (hormonal/autonomic) responses, without actually experiencing anything – many researchers believe that other animals may not feel their emotional arousals.  We disagree.

Some justify this rather counter-intuitive belief by suggesting that it is impossible to really experience or be conscious of emotions (affects) without language.  Panksepp & Biven’s response:

Words cannot describe the experience of seeing the color red to someone who is blind.  Words do not describe affects either.  One cannot explain what it feels like to be angry, frightened, lustful, tender, lonely, playful, or excited, except indirectly in metaphors.  Words are only labels for affective experiences that we have all had – primary affective experiences that we universally recognize.  But because they are hidden in our minds, arising from ancient prelinguistic capacities of our brains, we have found no way to talk about them coherently.

With such excuses, and the fact that they could not “see” feelings and emotions in their experiments with “reinforcement” and “conditioning,” the behaviorists concluded that the feelings of their experimental animals didn’t matter.  They were outside the realm of “science.”  Again from the book,

Much as we admire the scientific finesse of these conditioning experiments, we part company with (Joseph) LeDoux and many of the others who conduct this kind of work when it comes to understanding what emotional feelings really are.  This is because they studiously ignore the feelings of their animals, and they often claim that the existence or nonexistence of the animals’ feelings is a nonscientific issue (although there are some signs of changing sentiments on these momentous issues).  In any event…, LeDoux has specifically endorsed the read-out theory – to the effect that affects are created by neocortical working-memory functions, uniquely expanded in human brains.  In other words, he sees affects as a higher-order cognitive construct (perhaps only elaborated in humans), and thereby he envisions the striking FEAR responses of his animals to be purely physiological effects with no experiential consequences.

…And when we analyze the punishing properties of electrical stimulation here in animals, we get the strongest aversive responses imaginable at the lowest levels of brain stimulation, and humans experience the most fearful states of mind imaginable.  Such issues of affective experience should haunt fear-conditioners much more than they apparently do.

The evidence strongly indicates that there are primary-process emotional networks in the brain that help generate phenomenal affective experiences in all mammals, and perhaps in many other vertebrates and invertebrates.

It’s stunning, really.  Anyone who has ever owned a dog is aware of how similar their emotional responses can often be to those of humans, and how well they remember them.  Like humans, they are mammals.  Like humans, their brains include a cortex.  It would hardly be “parsimonious” to simply assume that humans represent some kind of radical departure when it comes to the ability to experience and remember emotions, and that other animals lack this ability, in defiance of centuries of “common sense” observations to the contrary.  All this mass of evidence apparently isn’t “scientific,” and therefore doesn’t count, because these latter day Blank Slaters can’t observe in their mazes and shock boxes what appears obvious to everyone else in the world.  “Anthropomorphism!”  From such profound reasoning we are apparently to conclude that pain in animals doesn’t matter.

Why the Blank Slate’s furious opposition to “anthropomorphism?”  In a sense, it’s actually an anachronism.  Recall that the fundamental dogma of the Blank Slate was the denial of human nature.  Obviously other mammals have a “nature.”  Clearly, the claim that dogs and cats must “learn” all their behavior from their “culture” was never going to fly.  Not so human beings.  Once upon a time the Blank Slaters claimed that everything in the human behavioral repertoire, with the possible exception of breathing, urinating, and defecating, was learned.  They even went so far as to include sex.  Even orgasms had to be “learned.”  It follows that the gulf between humans and animals had to be made as wide as possible.

Fast forward to about the year 2000.  As far as their denial of human nature was concerned, the Blank Slaters had lost control of the popular media.  To an increasing extent, they were also losing control of the message in academia.  Books and articles about innate human behavior began pouring from the presses, and people began speaking of human nature as a given.  The Blank Slaters had lost that battle.  The main reason for their “anthropomorphism” phobia had disappeared.  In the more sequestered field of “animal nature,” however, they could carry on as if nothing had happened without making laughing stocks of themselves.  No one was paying any attention except a few animal rights activists.  And carry on they did, with the same “scientific” methods they had used in the past.  Allow me to quote from Panksepp & Biven again to give you a taste of what I’m talking about:

It is noteworthy that Walter Hess, who first discovered the RAGE system in the cat brain in the mid-1930s (he won a Nobel Prize for his work in 1949), using localized stimulation of the hypothalamus, was among the first to suggest that the behavior was “sham rage.”  He confessed, however, in writings published after his retirement (as noted in Chapter 2:  e.g., The Biology of Mind [1964]), that he had always believed that the animals actually experienced true anger.  He admitted to having shared sentiments he did not himself believe.  Why?  He simply did not want to have his work marginalized by the then-dominant behaviorists who had no tolerance for talk about emotional experiences.  As a result, we still do not know much about how the RAGE system interacts with other cognitive and affective systems of the brain.

In an earlier chapter on The Evolution of Affective Consciousness they added,

In his retirement he admitted regrets about having been too timid, not true to his convictions, to claim that his animals had indeed felt real anger.  He confessed that he did this because he feared that such talk would lead to attacks by the powerful American behaviorists, who might thereby also marginalize his more concrete scientific discoveries.  To a modest extent, he tried to rectify his “mistake” in his last book, The Biology of Mind, but this work had little influence.

So much for the “self-correcting” nature of science.  It is anything but that when poisoned by ideological dogmas.  Panksepp and Biven conclude,

But now, thankfully, in our enlightened age, the ban has been lifted.  Or has it?  In fact, after the cognitive revolution of the early 1970s, the behaviorist bias has largely been retained but more implicitly by most, and it is still the prevailing view among many who study animal behavior.  It seems the educated public is not aware of that fact.  We hope the present book will change that and expose this residue of behaviorist fundamentalism for what it is:  an anachronism that only makes sense to people who have been schooled within a particular tradition, not something that makes any intrinsic sense in itself!  It is currently still blocking a rich discourse concerning the psychological, especially the affective, functions of animal brains and human minds.

This passage is particularly interesting because it demonstrates, as can be seen from the passage about “the cognitive revolution of the early 1970s,” that the authors were perfectly well aware of the larger battle with the Blank Slate orthodoxy over human nature.  However, that rather opaque allusion is about as close as they came to referring to it in the book.  One can hardly blame them for deciding to fight one battle at a time.  There is one interesting connection that I will point out for the cognoscenti.  In Chapter 6, Beyond Instincts, they write,

The genetically ingrained emotional systems of the brain reflect ancestral memories – adaptive affective functions of such universal importance for survival that they were built into the brain, rather than having to be learned afresh by each generation of individuals.  These genetically ingrained memories (instincts) serve as a solid platform for further developments in the emergence of both learning and higher-order reflective consciousness.

Compare this with a passage from the work of the brilliant South African naturalist Eugene Marais, which appeared in his The Soul of the Ape, written well before his death in 1936, but only published in 1969:

…it would be convenient to speak of instinct as phyletic memory.  There are many analogies between memory and instinct, and although these may not extend to fundamentals, they are still of such a nature that the term phyletic memory will always convey a clear understanding of the most characteristic attributes of instinct.

As it happens, the very charming and insightful introduction to The Soul of the Ape when it was finally published in 1969 was written by none other than Robert Ardrey!  He had an uncanny ability to find and appreciate the significance of the work of brilliant but little-known researchers like Marais.

As for Panksepp, I can only apologize for taking so long to discover him.  If nothing else, his work and teachings reveal that this is no time for complacency.  True, the Blank Slaters have been staggered, but they haven’t been defeated quite yet.  They’ve merely abandoned the battlefield and retreated to what would seem to be their last citadel:  the field of animal behavior.  Unfortunately there is no Robert Ardrey around to pitch them headlong out of that last refuge, but they face a different challenge now.  They can no longer pretend to hold the moral high ground.  Their denial that animals can experience and remember their emotions in the same way as humans leaves the door wide open for the abuse of animals, both inside and outside the laboratory.  It is to be hoped that more animal rights activists like Temple Grandin will start paying attention.  I may not agree with them about eating red meat, but the maltreatment of animals, justified by reference to a bogus ideological dogma, is something that can definitely excite my own RAGE emotions.  I will have no problem standing shoulder to shoulder with them in this fight.

Indulge Yourself – Believe in Free Will

Philosophers have been masticating the question of free will for many centuries.  The net result of their efforts has been a dizzying array of different “flavors” of free will, or the lack thereof.  I invite anyone with the patience to attempt disentangling the various permutations and combinations to start with the Wiki page, and take it from there.  For the purpose of this post I will simply define free will as the ability to make choices that are not predetermined before we make them.  This implies that our conscious minds are not entirely subject to deterministic physical laws, and have the power to alter physical reality.  Lack of free will means the absence of this power; it implies that we cannot alter physical reality in any way.  I personally have no idea whether we have free will or not.  In my opinion, we currently lack the knowledge to answer the question.  However, I believe that debating the matter is useless.  Instead, we should assume that there is free will as the “default” position, and get on with our lives.

Of course, if there is no free will, my advice is useless.  I am simply an automaton among automatons, adding to the chorus of sound and fury that signifies nothing.  In that case the debate over free will is merely another amusing case of pre-programmed robots arguing over what they “should” believe, and what they “ought” to do as a consequence, in a world in which the words “should” and “ought” are completely meaningless.  These words imply an ability to choose between two alternatives, but no such choice can exist if there is no free will.  “Ought” we to alter the criminal justice system because we have decided there is no such thing as free will?  If we have no free will, the question is meaningless.  We cannot possibly alter the predetermined outcome of the debate, or the predetermined evolution of the criminal justice system, or even our opinion on whether it “ought” to be changed or not.  Under the circumstances it can hardly hurt to assume that we do have free will.  If so, the assumption must have been foreordained, and no conscious agency exists that could have altered the fact.  If we don’t have free will, it is also absurd, if inevitable, to blame me or even take issue with me for advocating that we act as if we have free will.  After all, in that case I couldn’t have acted or thought any differently, assuming my mind is an artifact of the physical world, and not a “ghost in the machine.”  If we believe in free will but there is no free will, debate about the matter may or may not be inevitable, but it is certainly futile, because the outcome of the debate has been predetermined.

On the other hand, if we decide that there is no free will, but there actually is, it can potentially “hurt” a great deal.  In that case, we will be basing our actions and our conclusions about what “ought” or “ought not” to be done on a false assumption.  Whatever our idiosyncratic goals happen to be, it is more probable that we will attain them if we base our strategy for achieving them on truth rather than falsehood.  If we have free will, the outcome of the debate matters.  Suppose, for example, that the anti-free will side has much better debaters and convinces those watching the debate that they have no free will even if they do.  Plausible results include despair, a sense of purposelessness, fatalism, a lethargic and indifferent attitude towards life, a feeling that nothing matters, etc.  No doubt there are legions of philosophers out there who can prove that, because a = b and b = c, none of these reactions are reasonable.  They will, however, occur whether they are reasonable or not.

I doubt that my proposed default position will be difficult to implement.  Even the most diehard free will denialists seldom succeed in completely accepting the implications of their own theories.  Look through their writings, and before long you’ll find a “should.”  Read a bit further and you’re likely to stumble over an “ought” as well.  However, as noted above, speaking of “should” and “ought” in the absence of free will is absurd.  They imply the possibility of a choice between two alternatives that will lead to different outcomes.  If there is no free will, there can be no choice.  Individuals will do what they “ought” to do or “ought not” to do just as the arrangement of matter and energy in the universe happens to dictate.  It is absurd to blame them for doing something they could not avoid.  However, the question of whether they actually will be blamed or not is also predetermined.  It is just as absurd to blame the blamers.

In short, I propose we all stop arguing and accept the default.  If there is no free will, then obviously I am proposing it because of my programming.  I can’t do otherwise even if I “ought” to.  It’s possible my proposal may change things, but, if so, the change was inevitable.  However, if there is free will, then believing in it is simply believing in the truth, and a truth that, at least from my point of view, happens to be a great deal more palatable than the alternative.

Whither Morality?

The evolutionary origins of morality and the reasons for its existence have been obvious for over a century.  They were no secret to Edvard Westermarck when he published The Origin and Development of the Moral Ideas in 1906, and many others had written books and papers on the subject before his book appeared.  However, our species has a prodigious talent for ignoring inconvenient truths, and we have been studiously ignoring that particular truth ever since.

Why is it inconvenient?  Let me count the ways!  To begin, the philosophers who have taken it upon themselves to “educate” us about the difference between good and evil would be unemployed if they were forced to admit that those categories are purely subjective, and have no independent existence of their own.  All of their carefully cultivated jargon on the subject would be exposed as gibberish.  Social Justice Warriors and activists the world over, those whom H. L. Mencken referred to collectively as the “Uplift,” would be exposed as so many charlatans.  We would begin to realize that the legions of pious prigs we live with are not only an inconvenience, but absurd as well.  Gaining traction would be a great deal more difficult for political and religious cults that derive their raison d’être from the fabrication and bottling of novel moralities.  And so on, and so on.

In Westermarck’s time, just as today, those who experienced these “inconveniences” in one form or another pointed to the supposed dangers of facing reality.  For example, from his book,

Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism.  If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it.  This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong.  To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief.  The unfortunate circumstance that there do exist dangerous things in the world, proves that something may be dangerous and yet true.  Another question is whether any scientific truth really is mischievous on the whole, although it may cause much discomfort to certain people.  I venture to believe that this, at any rate, is not the case with that form of ethical subjectivism which I am here advocating.

I venture to believe it as well.  In the first place, when we accept the truth about morality we make life a great deal more difficult for people of the type described above.  Their exploitation of our ignorance about morality has always been an irritant, but has often been a great deal more damaging than that.  In the 20th century alone, for example, the Communist and Nazi movements, whose followers imagined themselves at the forefront of great moral awakenings that would lead to the triumph of Good over Evil, resulted in the needless death of tens of millions of people.  The victims were drawn disproportionately from among the most intelligent and productive members of society.

Still, just as Westermarck predicted more than a century ago, the bugaboo of “moral relativism” continues to be “repeated as an argument” in our own day.  Apparently we are to believe that if the philosophers and theologians all step out from behind the curtain after all these years and reveal that everything they’ve taught us about morality is so much bunk, civilized society will suddenly dissolve in an orgy of rape and plunder.

Such notions are best left behind with the rest of the impedimenta of the Blank Slate.  Nothing could be more absurd than the notion that unbridled license and amorality are our “default” state.  One can quickly disabuse oneself of that fear by simply reading the comment thread of any popular news website.  There one will typically find a gaudy exhibition of moralistic posing and pious one-upmanship.  I encourage those who shudder at the thought of such an unpleasant reading assignment to instead have a look at Jonathan Haidt’s The Righteous Mind.  As he puts it in the introduction to his book,

I could have titled this book The Moral Mind to convey the sense that the human mind is designed to “do” morality, just as it’s designed to do language, sexuality, music, and many other things described in popular books reporting the latest scientific findings.  But I chose the title The Righteous Mind to convey the sense that human nature is not just intrinsically moral, it’s also intrinsically moralistic, critical and judgmental… I want to show you that an obsession with righteousness (leading inevitably to self-righteousness) is the normal human condition.  It is a feature of our evolutionary design, not a bug or error that crept into minds that would otherwise be objective and rational.

Haidt also alludes to a potential reason that some of the people already mentioned above continue to evoke the scary mirage of moral relativism:

Webster’s Third New World Dictionary defines delusion as “a false conception and persistent belief in something that has no existence in fact.”  As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history:  the rationalist delusion.  It’s the idea that reasoning is our most noble attribute, one that makes us like the gods (for Plato) or that brings us beyond the “delusion” of believing in gods (for the New Atheists).  The rationalist delusion is not just a claim about human nature.  It’s also a claim that the rational caste (philosophers or scientists) should have more power, and it usually comes along with a utopian program for raising more rational children.

Human beings are not by nature moral relativists, and they are in no danger of becoming moral relativists merely by virtue of the fact that they have finally grasped what morality actually is.  It is their nature to perceive Good and Evil as real things, independent of the subjective minds that give rise to them, and they will continue to do so even if their reason informs them that what they perceive is a mirage.  They will always tend to behave as if these categories were absolute, rather than relative, even if all the theologians and philosophers among them shout at the top of their lungs that they are not being “rational.”

That does not mean that we should leave reason completely in the dust.  Far from it!  Now that we can finally understand what morality is, and account for the evolutionary origins of the behavioral predispositions that are its root cause, it is within our power to avoid some of the most destructive manifestations of moral behavior.  Our moral behavior is anything but infinitely malleable, but we know from the many variations in the way it is manifested in different human societies and cultures, as well as its continuous and gradual change in any single society, that within limits it can be shaped to best suit our needs.  Unfortunately, the only way we will be able to come up with an “optimum” morality is by leaning on the weak reed of our ability to reason.

My personal preferences are obvious enough, even if they aren’t set in stone.  I would prefer to limit the scope of morality to those spheres in which it is indispensable for lack of a viable alternative.  I would prefer a system that reacts to the “Uplift” and unbridled priggishness and self-righteousness with scorn and contempt.  I would prefer an educational system that teaches the young the truth about what morality actually is, and why, in spite of its humble origins, we can’t get along without it if we really want our societies to “flourish.”  I know; the legions of those whose whole “purpose of life” is dependent on cultivating the illusion that their own versions of Good and Evil are the “real” ones stand in the way of the realization of these whims of mine.  Still, one can dream.

On the Malleability and Plasticity of the History of the Blank Slate

Let me put my own cards on the table.  I consider the Blank Slate affair the greatest debacle in the history of science.  Perhaps you haven’t heard of it.  I wouldn’t be surprised.  Those who are the most capable of writing its history are often also those who are most motivated to sweep the whole thing under the rug.  In any case, in the context of this post the Blank Slate refers to a dogma that prevailed in the behavioral sciences for much of the 20th century according to which there is, for all practical purposes, no such thing as human nature.  I consider it the greatest scientific debacle of all time because, for more than half a century, it blocked the path of our species to self-knowledge.  As we gradually approach the technological ability to commit collective suicide, self-knowledge may well be critical to our survival.

Such histories of the affair as do exist are often carefully and minutely researched by historians familiar with the scientific issues involved.  In general, they’ve personally lived through at least some phase of it, and they’ve often been personally acquainted with some of the most important players.  In spite of that, their accounts have a disconcerting tendency to wildly contradict each other.  Occasionally one finds different versions of the facts themselves, but more often it’s a question of the careful winnowing of the facts to select and record only those that support a preferred narrative.

Obviously, I can’t cover all the relevant literature in a single blog post.  Instead, to illustrate my point, I will focus on a single work whose author, Hamilton Cravens, devotes most of his attention to events in the first half of the 20th century, describing the sea change in the behavioral sciences that signaled the onset of the Blank Slate.  As it happens, that’s not quite what he intended.  What we see today as the darkness descending was for him the light of science bursting forth.  Indeed, his book is entitled, somewhat optimistically in retrospect, The Triumph of Evolution:  The Heredity-Environment Controversy, 1900-1941.  It first appeared in 1978, more or less still in the heyday of the Blank Slate, although murmurings against it could already be detected among academic and professional experts in the behavioral sciences after the appearance of a series of devastating critiques in the popular literature in the 1960s by Robert Ardrey, Konrad Lorenz, and others, topped off by E. O. Wilson’s Sociobiology in 1975.

Ostensibly, the “triumph” Cravens’ title refers to is the demise of what he calls the “extreme hereditarian” interpretations of human behavior that prevailed in the late 19th and early 20th century in favor of a more “balanced” approach that recognized the importance of culture, as revealed by a systematic application of the scientific method.  One certainly can’t fault him for being superficial.  He introduces us to most of the key movers and shakers in the behavioral sciences in the period in question.  There are minutiae about the contents of papers in old scientific journals, comments gleaned from personal correspondence, who said what at long forgotten scientific conferences, which colleges and universities had strong programs in psychology, sociology and anthropology more than 100 years ago, and who supported them, etc., etc.  He guides us into his narrative so gently that we hardly realize we’re being led by the nose.  Gradually, however, the picture comes into focus.

It goes something like this.  In bygone days before the “triumph of evolution,” the existence of human “instincts” was taken for granted.  Their importance seemed even more obvious in light of the rediscovery of Mendel’s work.  As Cravens put it,

While it would be inaccurate to say that most American experimentalists concluded as  the result of the general acceptance of Mendelism by 1910 or so that heredity was all powerful and environment of no consequence, it was nevertheless true that heredity occupied a much more prominent place than environment in their writings.

This sort of “subtlety” is characteristic of Cravens’ writing.  Here, he doesn’t accuse the scientists he’s referring to of being outright genetic determinists.  They just have an “undue” tendency to overemphasize heredity.  It is only gradually, and by dint of occasional reading between the lines that we learn the “true” nature of these believers in human “instinct.”  Without ever seeing anything as blatant as a mention of Marxism, we learn that their “science” was really just a reflection of their “class.”  For example,

But there were other reasons why so many American psychologists emphasized heredity over environment.  They shared the same general ethnocultural and class background as did the biologists.  Like the biologists, they grew up in middle class, white Anglo-Saxon Protestant homes, in a subculture where the individual was the focal point of social explanation and comment.

As we read on, we find Cravens is obsessed with white Anglo-Saxon Protestants, or WASPs, noting that the “wrong” kind of scientists belong to that “class” scores of times.  Among other things, they dominate the eugenics movement, and are innocently referred to as Social Darwinists, as if these terms had never been used in a pejorative sense.  In general they are supposed to oppose immigration from other than “Nordic” countries, and tend to support “neo-Lamarckian” doctrines, and believe blindly that intelligence test results are independent of “social circumstances and milieu.”  As we read further into Section I of the book, we are introduced to a whole swarm of these instinct-believing WASPs.

In Section II, however, we begin to see the first glimmerings of a new, critical and truly scientific approach to the question of human instincts.  Men like Franz Boas, Robert Lowie, and Alfred Kroeber, began to insist on the importance of culture.  Furthermore, they believed that their “culture idea” could be studied in isolation in such disciplines as sociology and anthropology, insisting on sharp, “territorial” boundaries that would protect their favored disciplines from the defiling influence of instincts.  As one might expect,

The Boasians were separated from WASP culture; several were immigrants, of Jewish background, or both.

A bit later they were joined by John Watson and his behaviorists who, after performing some experiments on animals and human infants, apparently experienced an epiphany.  As Cravens puts it,

To his amazement, Watson concluded that the James-McDougall human instinct theory had no demonstrable experimental basis.  He found the instinct theorists had greatly overestimated the number of original emotional reactions in infants.  For all practical purposes, he realized that there were no human instincts determining the behavior of adults or even of children.

Perhaps more amazing is the fact that Cravens detected not a hint of a tendency to replace science with dogma in all this.  As Leibniz might have put it, everything was for the best, in this, the best of all possible worlds.  Everything pointed to the “triumph of evolution.”  According to Cravens, the “triumph” came with astonishing speed:

By the early 1920s the controversy was over.  Subsequently, psychologists and sociologists joined hands to work out a new interdisciplinary model of the sources of human conduct and emotion stressing the interaction of heredity and environment, of innate and acquired characters – in short, the balance of man’s nature and his culture.

Alas, my dear Cravens, the controversy was just beginning.  In what follows, he allows us a glimpse at just what kind of “balance” he’s referring to.  As we read on into Section III of the book, he finally gets around to setting the hook:

Within two years of the Nazi collapse in Europe Science published an article symptomatic of a profound theoretical reorientation in the American natural and social sciences.  In that article Theodosius Dobzhansky, a geneticist, and M. F. Ashley-Montagu, an anthropologist, summarized and synthesized what the last quarter century’s work in their respective fields implied for extreme hereditarian explanations of human nature and conduct.  Their overarching thesis was that man was the product of biological and social evolution.  Even though man in his biological aspects was as subject to natural processes as any other species, in certain critical respects he was unique in nature, for the specific system of genes that created an identifiably human mentality also permitted man to experience cultural evolution… Dobzhansky and Ashley-Montagu continued, “Instead of having his responses genetically fixed as in other animal species, man is a species that invents its own responses, and it is out of this unique ability to invent…  his responses that his cultures are born.”

and, finally, in the conclusions, after assuring us that,

By the early 1940s the nature-nurture controversy had run its course.

Cravens leaves us with some closing sentences that epitomize his “triumph of evolution:”

The long-range, historical function of the new evolutionary science was to resolve the basic questions about human nature in a secular and scientific way, and thus provide the possibilities for social order and control in an entirely new kind of society.  Apparently this was a most successful and enduring campaign in American culture.

At this point, one doesn’t know whether to laugh or cry.  Apparently Cravens, who had just supplied us with arcane details about who said what at obscure scientific conferences half a century and more before he published his book, was completely unaware of exactly what Ashley Montagu, his herald of the new world order, meant when he referred to “extreme hereditarian explanations,” in spite of the fact that he spelled it out ten years earlier in an invaluable little pocket guide for the followers of the “new science” entitled Man and Aggression.  There Montagu describes the sort of “balance of man’s nature and his culture” he intended as follows:

Man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings.

and,

There is, in fact, not the slightest evidence or ground for assuming that the alleged “phylogenetically adapted instinctive” behavior of other animals is in any way relevant to the discussion of the motive-forces of human behavior.  The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.

So much for Cravens’ “balance.”  He spills a great deal of ink in his book assuring us that the Blank Slate orthodoxy he defends was the product of “science,” little influenced by any political or ideological bias.  Apparently he also didn’t notice that, not only in Man and Aggression, but ubiquitously in the Blank Slate literature, the “new science” is defended over and over and over again with the “argument” that anyone who opposes it is a racist and a fascist, not to mention far right wing.

As it turns out, Cravens didn’t completely lapse into a coma following the publication of Ashley Montagu’s 1947 pronunciamiento in Science.  In his “Conclusion” we discover that, after all, he had a vague presentiment of the avalanche that would soon make a shambles of his “new evolutionary science.”  In his words,

Of course in recent years something approximating at least a minor revival of the old nature-nurture controversy seems to have arisen in American science and politics.  It is certainly quite possible that this will lead to a full scale nature-nurture controversy in time, not simply because of the potential for a new model of nature that would permit a new debate, but also, as one historian has pointed out, because our own time, like the 1920s, has been a period of racial and ethnic polarization.  Obviously any further comment would be premature.

Obviously, my dear Cravens.  What’s the moral of the story, dear reader?   Well, among other things, that if you really want to learn something about the Blank Slate, you’d better not be shy of wading through the source literature yourself.  It’s still out there, waiting to be discovered.  One particularly rich source of historical nuggets is H. L. Mencken’s American Mercury, which Ron Unz has been so kind as to post online.  Mencken took a personal interest in the “nature vs. nurture” controversy, and took care to publish articles by heavy hitters on both sides.  For a rather different take than Cravens on the motivations of the early Blank Slaters, see for example, Heredity and the Uplift, by H. M. Parshley.  Parshley was an interesting character who took on no less an opponent than Clarence Darrow in a debate over eugenics, and later translated Simone de Beauvoir’s feminist manifesto The Second Sex into English.


Stephen Hawking Chimes in “On Aggression”

Tell me, dear reader, have you ever heard the term, “On Aggression” before?  As it happens, that was actually the title of a book by Konrad Lorenz published in 1966, at the height of the Blank Slate debacle.  In it Lorenz suggested that the origins of both animal and human aggression could be traced to evolved behavioral predispositions, or, in the vernacular, human nature.  He was duly denounced at the time by the Blank Slate priesthood as a fascist and a racist, with dark allusions to possible connections to the John Birch Society itself!  See, for example, “Man and Aggression,” edited by Ashley Montagu, or “Not in Our Genes,” by Richard Lewontin.  In those days the Blank Slaters had the popular media in their hip pocket.  In fact, they continued to have it in their hip pocket pretty much until the end of the 20th century.  For example, no less a celebrity than Jane Goodall was furiously vilified, in the Sunday Times, no less, for daring to suggest that chimpanzees could occasionally be aggressive.

Times have changed!  Fast forward to 2015.  Adaeze Uyanwah, a 24-year-old from California, just won the “Guest of Honor” contest from VisitLondon.com. The prize package included a tour of London’s Science Museum with celebrity physicist Stephen Hawking.  During the tour, Uyanwah asked Hawking which human shortcoming he would most like to change.  He replied as follows:

The human failing I would most like to correct is aggression.  It may have had survival advantage in caveman days, to get more food, territory or a partner with whom to reproduce, but now it threatens to destroy us all.

Hello!!  Hawking just matter-of-factly referred to aggression as an innate human trait!  Were there shrieks of rage from the august practitioners of the behavioral sciences?  No.  Did it occur to anyone to denounce Hawking as a fascist?  No.  Did so much as a single journalistic crusader for social justice swallow his gum?  No!  See for yourself!  You can check the response in the reliably liberal Huffington Post, Washington Post, or even the British Independent, and you won’t find so much as a mildly raised eyebrow.  By all means, read on and check the comments!  No one noticed a thing!  If you’re still not sufficiently stunned, check out this interview of famous physicist Michio Kaku apropos Hawking’s comment on MSNBC’s Ed Show.  As anyone who hasn’t been asleep for the last 20 years is aware, MSNBC’s political line is rather to the left of Fox News.  Nothing that either (Ed) Schultz or Kaku says suggests that they find anything the least bit controversial about Hawking’s statement.  Indeed, they accept it as obvious, and continue with a discussion of whether it would behoove us to protect ourselves from this unfortunate aspect of our “human nature” by escaping to outer space!

In a word, while the Blank Slate may simmer on in the more obscurantist corners of academia, I think we can safely conclude that it has lost the popular media.  Is hubris in order?  Having watched all the old Christopher Lee movies, I rather doubt it.  Vampires have a way of rising from the grave.