Stephen Hawking Chimes in “On Aggression”

Tell me, dear reader, have you ever heard the phrase “On Aggression” before?  As it happens, that was the title of a book by Konrad Lorenz published in 1966, at the height of the Blank Slate debacle.  In it Lorenz suggested that the origins of both animal and human aggression could be traced to evolved behavioral predispositions, or, in the vernacular, human nature.  He was duly denounced at the time by the Blank Slate priesthood as a fascist and a racist, with dark allusions to possible connections to the John Birch Society itself!  See, for example, “Man and Aggression,” edited by Ashley Montagu, or “Not in Our Genes,” by Richard Lewontin, Steven Rose, and Leon Kamin.  In those days the Blank Slaters had the popular media in their hip pocket.  In fact, they kept it there pretty much until the end of the 20th century.  For example, no less a celebrity than Jane Goodall was furiously vilified in the Sunday Times for daring to suggest that chimpanzees could occasionally be aggressive.

Times have changed!  Fast forward to 2015.  Adaeze Uyanwah, a 24-year-old from California, had just won the “Guest of Honor” contest from VisitLondon.com.  The prize package included a tour of London’s Science Museum with celebrity physicist Stephen Hawking.  During the tour, Uyanwah asked Hawking which human shortcoming he would most like to change.  He replied as follows:

The human failing I would most like to correct is aggression.  It may have had survival advantage in caveman days, to get more food, territory or a partner with whom to reproduce, but now it threatens to destroy us all.

Hello!!  Hawking just matter-of-factly referred to aggression as an innate human trait!  Were there shrieks of rage from the august practitioners of the behavioral sciences?  No.  Did it occur to anyone to denounce Hawking as a fascist?  No.  Did so much as a single journalistic crusader for social justice swallow his gum?  No!  See for yourself!  You can check the response in the reliably liberal Huffington Post, Washington Post, or even the British Independent, and you won’t find so much as a mildly raised eyebrow.  By all means, read on and check the comments!  No one noticed a thing!  If you’re still not sufficiently stunned, check out the interview with famous physicist Michio Kaku apropos Hawking’s comment on MSNBC’s Ed Show.  As anyone who hasn’t been asleep for the last 20 years is aware, MSNBC’s political line is rather to the left of Fox News.  Nothing that either (Ed) Schultz or Kaku says suggests that they find anything the least bit controversial about Hawking’s statement.  Indeed, they accept it as obvious, and continue with a discussion of whether it would behoove us to protect ourselves from this unfortunate aspect of our “human nature” by escaping to outer space!

In a word, while the Blank Slate may simmer on in the more obscurantist corners of academia, I think we can safely conclude that it has lost the popular media.  Is hubris in order?  Having watched all the old Christopher Lee movies, I rather doubt it.  Vampires have a way of rising from the grave.

Edvard Westermarck on Morality: The Light Before the Darkness Fell

The nature of morality became obvious to anyone who cared to think about it after Darwin published his great theory, including Darwin himself.  In short, it became clear that the “root causes” of morality were to be found in “human nature,” our species’ collection of evolved behavioral predispositions.  As the expression of evolved traits, morality has no purpose, unless one cares to use that term as shorthand for the apparent biological function it serves.  It exists because it enhanced the probability that the creatures with the genetic endowment that gave rise to it would survive and reproduce in the conditions that existed when those genes appeared.  As a result, there are no moral “truths.”  Rather, morality is a subjective phenomenon with emotional rather than logical origins.

So much became obvious to many during the decades that followed the publication of On the Origin of Species in 1859.  One man spelled out the truth more explicitly, clearly, and convincingly than any other.  That man was Edvard Westermarck.

Westermarck was a Finnish philosopher and sociologist who published his seminal work on morality, The Origin and Development of the Moral Ideas, in 1906.  As we now know in retrospect, the truths in that great book were too much for mankind to bear.  The voices repeating those truths became fewer, and were finally silenced.  The darkness returned, and more than a century later we are still struggling to find our way out of the fog.  It should probably come as no surprise.  It goes without saying that the truth was unpalatable to believers in imaginary super beings.  Beyond that, the truth relegated the work of most of the great moral philosophers of the past to the status of historical curiosities.  Those who interpreted their thought for the rest of us felt the ground slipping from beneath their feet.  Experts in ethics and morality became the equivalent of experts in astrology, and a step below the level of doctors of chiropractic.  Zealots of Marxism and the other emerging secular versions of religion rejected a truth that exposed the absurdity of attempts to impose new versions of morality from on high.  As for the average individuals of the species Homo sapiens, they rejected the notion that the “Good” and “Evil” objects that their emotions portrayed so realistically, and that moved them so profoundly, were mere fantasies.

The result was more or less predictable.  Westermarck and the rest were shouted down.  The Blank Slate debacle turned the behavioral sciences into so many strongholds of an obscurantist orthodoxy.  The blind exploitation of moral emotions in the name of such newly concocted “Goods” as Nazism and Communism resulted in the deaths of tens of millions, and misery on a vast scale.  The Academy became the spawning ground of a modern, secular version of Puritanism, more intolerant and bigoted than the last.  In the case of Westermarck, the result has, at least, been more amusing.  He has been hidden in plain sight.  On his Wiki page, for example, he is described as one who “studied exogamy and incest taboo.”  To the extent that his name is mentioned at all, it is usually in connection with the Westermarck Effect, according to which individuals in close proximity in the early years of life become sexually desensitized to each other.  So much for the legacy of the man who has a good claim to be the most profound thinker on the subject of morality to appear since the days of Hume.

Let us cut to the chase and consider what Westermarck actually said.  In the first place, he stressed a point often completely overlooked by modern researchers in the behavioral sciences: the complex emotions we now associate with morality did not suddenly appear fully formed like Athena from the forehead of Zeus.  Rather, they represent the results of a continuous process of evolution from simpler emotional responses that Westermarck grouped into the categories of “resentment” and “approval.”  These had existed in many animal species long before hominids appeared on the scene.  They were there as a result of natural selection.  As Westermarck put it:

As to their origin, the evolutionist can hardly entertain a doubt. Resentment, like protective reflex action, out of which it has gradually developed, is a means of protection for the animal. Its intrinsic object is to remove a cause of pain, or, what is the same, a cause of danger. Two different attitudes may be taken by an animal towards another which has made it feel pain: it may either shun or attack its enemy. In the former case its action is prompted by fear, in the latter by anger, and it depends on the circumstances which of these emotions is the actual determinant. Both of them are of supreme importance for the preservation of the species, and may consequently be regarded as elements in the animal’s mental constitution which have been acquired by means of natural selection in the struggle for existence.

From what has been said above it is obvious that moral resentment is of extreme antiquity in the human race, nay that the germ of it is found even in the lower animal world among social animals capable of feeling sympathetic resentment.  The origin of custom as a moral rule no doubt lies in a very remote period of human history.

This is followed by another remarkable passage, which showcases another aspect of Westermarck’s genius that appears repeatedly in his books: his almost incredible erudition.  His knowledge of the intellectual and historical antecedents of his own ideas is not limited to a narrow field, but is all-encompassing, and highly useful to anyone who cares to study the relevant source material on his own:

This view is not new. More than one hundred and fifty years before Darwin, Shaftesbury wrote of resentment in these words:  “Notwithstanding its immediate aim be indeed the ill or punishment of another, yet it is plainly of the sort of those [affections] which tend to the advantage and interest of the self-system, the animal himself; and is withal in other respects contributing to the good and interest of the species.”  A similar opinion is expressed by Butler, according to whom the reason and end for which man was made liable to anger is, that he might be better qualified to prevent and resist violence and opposition, while deliberate resentment “is to be considered as a weapon, put into our hands by nature, against injury, injustice, and cruelty.”  Adam Smith, also, believes that resentment has “been given us by nature for defence, and for defence only,” as being “the safeguard of justice and the security of innocence.”  Exactly the same view is taken by several modern evolutionists as regards the “end” of resentment, though they, of course, do not rest contented with saying that this feeling has been given us by nature, but try to explain in what way it has developed. “Among members of the same species,” says Mr. Herbert Spencer, “those individuals which have not, in any considerable degree, resented aggressions, must have ever tended to disappear, and to have left behind those which have with some effect made counter-aggressions.”

All these references are accompanied in the footnotes by citations of the works in which they appear.  Westermarck then went on to derive conclusions from the evolutionary origins of morality that are both simple and obvious, but which modern behavioral scientists and philosophers have a daunting capacity to ignore.  He concluded that morality is subjective.  It may be reasoned about, but is the product of emotion, not reason.  It follows that there are no such things as moral “truths,” and that the powerful moral emotions that we so cling to, and that cause the chimeras of “Good” and “Evil” to hover in our consciousness as palpable, independent objects, are, in fact, illusions.  In Westermarck’s own words:

As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity.  The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments.  The intensity of his emotions makes him the victim of an illusion.

The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood.  The ultimate reason for this is that the moral concepts are based upon emotions and that the contents of an emotion fall entirely outside the category of truth.

Consider the significance of these passages, almost incredible when one looks back at them through the Puritanical mist of the 21st century.  In one of the blurbs I ran across while searching the name “Westermarck,” his work was referred to as “outdated.”  I suppose that, in a sense, that conclusion is quite true, but not in the way intended.  I know of not a single modern thinker, scientist, or philosopher who has even come close to Westermarck in the simplicity and clarity with which he presents these conclusions, so obvious to anyone who has read and understood Darwin.  Here are some more passages that reinforce that conclusion:

If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.  It has been said by Bentham and others that moral principles cannot be proved because they are first principles which are used to prove everything else.  But the real reason for their being inaccessible to demonstration is that, owing to their very nature, they can never be true.  If the word, “Ethics,” then, is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.

To put it more bluntly, and to reveal some of my own purely subjective moral emotions in the process, the flamboyant peacocks currently strutting about among us peddling their idiosyncratic flavors of virtuous indignation and moral outrage based on a supposed monopoly on moral “truths” are, in reality, so many charlatans and buffoons.  To take them seriously is to embrace a lie, and one that, as has been clearly and repeatedly demonstrated in the past, and will almost certainly be abundantly demonstrated again in the future, is not only irritating, but extremely dangerous.  The above, by the way, appears in the context of a shattering rebuttal of utilitarianism in Chapter 1 that is as applicable to the modern versions being concocted for our edification by the likes of Sam Harris and Joshua Greene as it is to the earlier theories of John Stuart Mill and others.  In reading Westermarck’s book, one is constantly taken aback by insights that are stunning in view of the time at which they were written.  Consider, for example, the following in light of recent research on mirror neurons:

That a certain act causes pleasure or pain to the bystander is partly due to the close association which exists between these feelings and their outward expressions.  The sight of a happy face tends to produce some degree of pleasure in him who sees it.  The sight of the bodily signs of suffering tends to produce a feeling of pain.  In either case the feeling of the spectator is the result of a process of reproduction, the perception of the physical manifestation of the feeling recalling the feeling itself on account of the established association between them.

I fear we will have a very long wait before our species grasps the significance of Westermarck’s ideas and adjusts its perceptions of the nature and significance of morality accordingly.  As Jonathan Haidt pointed out in his The Righteous Mind, we are far too fond of the delightful joys of self-righteousness to admit the less than exalted truths about its origins without a struggle.  There are some grounds for optimism in the fact that a “Happy Few” are still around who understand that the significance of Westermarck completely transcends anything he had to say about sexual attraction and marriage.  As it happens, Frans de Waal, whose latest book is the subject of one of my recent posts, is one of them.  I personally became aware of Westermarck thanks to a reference to his book in Nietzsche’s “Human, All Too Human.”  I don’t think Nietzsche ever quite grasped what Westermarck was saying.  He had too much the soul of an artist and a poet rather than a scientist for that.  Yet, somehow, he had a sixth sense for separating the wheat from the chaff in human thought.  Incidentally, I began reading Stendhal, my favorite novelist, thanks to a reference in Nietzsche as well.  I may not exactly be on board as far as his ramblings about morality are concerned, but at least I owe him a tip of the hat for that.  As for Westermarck, I can but hope that many more will read and grasp the significance of his theories.  His book is available free online at Google Books for anyone who cares to look at it.

UPDATE:  Apparently I became too “dizzy with success” at discovering Westermarck to notice a “minor” temporal anomaly in the above post.  A commenter just pointed it out to me.  Westermarck wrote his book in 1906, and Nietzsche died in 1900!  He was actually referring to a book by Paul Ree entitled, “The Origin of the Moral Sensations,” which appeared in 1877.  Check Ree’s Wiki page, and you’ll see he’s the guy standing in front of a cart with Nietzsche in the famous picture with Lou Andreas-Salome sitting in the cart holding a whip.  Of course, it’s a spoof on Nietzsche’s famous dictum, “You go to women? Do not forget the whip!”  I was reading the German version of his “Human, all too Human.”  The quote referred to appears in Section 37, as follows:

Welches ist doch der Hauptsatz, zu dem einer der kühnsten und kältesten Denker, der Verfasser des Buches “Über den Ursprung der moralischen Empfindungen” vermöge seiner ein- und durchschneidenden Analysen des menschlichen Handelns gelangt?

In my English version of the book, the quote above is translated as,

Which principle did one of the keenest and coolest thinkers, the author of the book On the Origin of the Moral Feelings, arrive at through his incisive and piercing analysis of human actions?

I translated the title on the fly as “On the Origin of the Moral Emotions,” and when you search that title on Bing, the first link that comes up points to Westermarck’s book.  In a word, my discovery of Westermarck was due to serendipity or bungling, take your pick.  The shade of Nietzsche must be chuckling somewhere.  Now I feel obligated to have a look at Ree’s book as well.  I’ll let you know what I think of him in a later post, and I promise not to claim I discovered him thanks to a reference in Aristotle’s “Ethics.”


Was There a Time Before the Blank Slate?

Yes, dear reader, there was.  It’s quite true that, for half a century and more, the “Men of Science” imposed on the credulity of mankind by insisting that something perfectly obvious and long familiar to the rest of us didn’t exist.  I refer, of course, to human nature.  It was a herculean effort in self-deception that confirmed yet again George Orwell’s observation that, “There are some ideas so absurd that only an intellectual could believe them.”  In the heyday of the Blank Slate orthodoxy, such “Men of Science” as Ashley Montagu could say things such as,

…man is man because he has no instincts, because everything he is and has become he has learned, acquired from his culture, from the man-made part of the environment, from other human beings.

and

The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.

and do it with a perfectly straight face.  It was an episode in our history that must never be forgotten, and one that should be recalled whenever we hear someone claim that “science says” this or that, or that “the science is settled.”  The scientific method is the best butterfly net our species has come up with so far to occasionally capture a fluttering bit of truth.  However, it can never be separated from the ideological context in which it functions.  As the Blank Slate episode demonstrated, that context is quite capable of subverting and adulterating the truth when the truth stands in the way of ideological imperatives.

In the case of the Blank Slate, as it happens, those imperatives did not derail our search for truth for some time after Darwin first grasped the behavioral implications of his revolutionary theory.  And just as those implications were obvious to Darwin, they were obvious to many others.  The existence and selective significance of human nature were immediately apparent to anyone with an open mind and rudimentary powers of self-observation.  Indeed, they were treated almost as commonplaces in the behavioral sciences for decades after Darwin until they finally succumbed to the ideological fog.

For example, at about the same time that J. B. Watson and Franz Boas began fabricating the first serious “scientific” rationalizations of the Blank Slate, there was no evidence in the popular media of the rigid ideological orthodoxy that became such a remarkable feature of its coverage of anything dealing with human behavior in the 60’s and 70’s.  The later vilification of heretics as “racists” and “fascists” was nowhere to be seen.  Indeed, one Dr. Grace Adams, who held a Ph.D. in psychology from Cornell, was actually guileless enough to contribute an article entitled Human Instincts to H. L. Mencken’s The American Mercury as late as 1928!  Apparently without the faintest inkling of the hijacking of the behavioral sciences that was then already in the works, she wrote,

The recognition of the full scope and function of the human instincts will appear to those who come after us as the most important advance made by psychology in our time. (!)

How ironic those words seem now!  The very term “instinct” became toxic during the ascendancy of the Blank Slate, when the high priests of the prevailing orthodoxy insisted on their own rigid definition of the term, and then proceeded to exploit it as a handy tool for “smarter than thou” posturing and scientific one-upmanship.  Adams’ article includes some interesting remarks on the origin of the word “instinct” in the biological sciences and the later, gradual redefinitions that occurred when it was taken up by the psychologists.  In particular, she notes that, while the biologists of the time still used the term to describe behaviors that were unaffected by either “experience or volition,” and were “purely mechanical processes lying completely outside the province of consciousness,” psychologists preferred a much more flexible definition.  Referring to the great American ur-psychologist William James, Adams wrote,

So it was obvious, to him at least, “that every instinctive act in an animal with memory must cease to be ‘blind’ after being once repeated.”  In this way, according to James, an instinct could become not only conscious but capable of modification and conscious direction and change.

Or, as we would say today, the expression of “instincts” could be modified by “culture.”  Adams notes that, as early as 1890,

James was able to state complacently that there was agreement among his contemporaries that the human instincts were: sucking, biting, chewing, grinding the teeth, licking, making grimaces, spitting, clasping, grasping, pointing, making sounds of expressive desire, carrying to the mouth, the function of alimentation, crying, smiling, protrusion of the lips, turning the head aside, holding the head erect, sitting up, standing, locomotion, vocalization, imitation, emulation or rivalry, pugnacity, anger, resentment, sympathy, the hunting instinct, fear, appropriation or acquisitiveness, constructiveness, play, curiosity, sociability and shyness, secretiveness, cleanliness, modesty and shame, love, the anti-sexual instincts, jealousy, and parental love.  (Italics are mine)

Turn the page to the 20th century, and we already find two of the prominent psychologists of the day, James Angell and Edward Thorndike, squabbling over the definition of “instinct.”  According to Adams,

Angell, accepting James’ argument that instincts once yielded to are thereafter felt in connection with the foresight of their ends, expands this idea into the statement that “instincts, in the higher animals, at all events, appear always to involve consciousness.” And he makes consciousness the essential element of instincts. Thorndike, on the other hand, remembers James’ admission that instincts are originally blind and maintains that “all original tendencies are aimless in the sense that foresight of the consequences does not affect the response.”  For him the only necessary components of an instinct are “the ability to be sensitive to a certain situation, the ability to make a certain response, and the existence of a bond or connection whereby that response is made to that situation.” While the ideas of neither Angell nor Thorndike are actually inconsistent with James’ two-fold definition of an instinct, they lead to very different lists of instincts.

To cut to the chase, here are the lists of Angell,

Angell, by making consciousness the mark that distinguishes an instinct from a reflex, has to narrow the number of instincts to fear, anger, shyness, curiosity, sympathy, modesty (?), affection, sexual love, jealousy and envy, rivalry, sociability, play, imitation, constructiveness, secretiveness and acquisitiveness.

…and Thorndike,

But Thorndike admits no gap between reflexes and instincts, so he must both expand and subdivide James’ list. He does this in a two hundred page inventory (!) which he regrets is incomplete. He adds such activities as teasing, tormenting, bullying, sulkiness, grieving, the horse-play of youths, the cooing and gurgling of infants and their satisfaction at being held, cuddled and carried, attention-getting, responses to approving behavior, responses to scornful behavior, responses by approving behavior, responses by scornful behavior, the instinct of multiform physical activity, and the instinct of multiform mental activity. The “so-called instinct of fear” he analyzes into the instinct of escape from restraint, the instinct of overcoming a moving obstacle, the instinct of counterattack, the instinct of irrational response to pain, the instinct to combat in rivalry, and the threatening or attacking movements with which the human male tends to react to the mere presence of a male of the same species during acts of courtship.

In a word, the psychologists of the 20’s were still quite uninhibited when it came to compiling lists of instincts.  It is noteworthy that Thorndike’s The Elements of Psychology, which originally included extensive discussions of human “instincts” in Chapters 12 and 13, continued in use as a textbook for many years.  Indeed, Thorndike was one of the many psychologists of his day who seem surprisingly “modern” in the context of the early 21st century.  For example, again quoting Adams,

And Thorndike points out that a complete inventory of man’s original nature is needed not only as a basis of education but for economic, political, ethical and religious theories.

And, in a passage that, in light of recent developments in the field of evolutionary psychology, can only be described as stunning, Adams continues,

For Colvin and Bagley the chief essential of instincts is that “they are directed toward some end that is useful.” But they do not mean useful in a selfish or materialistic sense, for they are able to describe an altruistic instinct which is as real to them as the predatory instinct. And Kirkpatrick conceives of man being by native endowment even more noble. Indeed he credits to the human being a regulative instinct “which exists in the moral tendency to conform to law and to act for the good of others as well as self, and in the religious tendency to regard a Higher Power.”

Writing in the June and August, 1928 editions of the Mercury, H. M. Parshley elaborates on the connection, noticed decades earlier by Darwin himself, between “instincts” and morality:

Ethics certainly involves the consideration of motives, values, and ideals; and a scientific ethics requires genuine knowledge about these elusive matters.

As if anticipating Stephen Jay Gould’s delusional theory of “non-overlapping magisteria,” he continues,

…in my opinion, the chief support of obscurantism at this moment is the notion that motives, values, and ideals, unlike material things, are beyond the range of scientific study, and thus afford a free and exclusive field in which religion and philosophy may disport themselves authoritatively without challenge.

Parshley continues with a comment that we now recognize was sadly mistaken:

The biological needs are clear enough to see and we know a great deal about them – quite sufficient to establish the futility of asceticism and give rise to a complete distrust of any ethics that involves us in serious conflict with them.  Science has done this, and, I think, it will never be undone.

Parshley’s naïve faith in the integrity and disinterestedness of science was to be shattered all too soon.  Indeed, without recognizing the danger, Adams was already quite familiar with its source:

For many years the iconoclastic Watson strove to explain instincts in suitably behavioristic terms. But neither his definition nor his classification need concern us now, for in 1924 Watson repudiated everything he had previously said about them by declaring that “there are no instincts,” and furthermore, that “there is no such thing as an inheritance of capacity, talent, temperament, mental constitution and characteristics.” With these two statements Watson cast aside the biological as well as the psychological notion of mental inheritance.

For Adams, the behaviorist creed of Watson and Boas was just a curiosity.  She didn’t realize they were already riding on the crest of an ideological wave that would submerge the behavioral sciences in a sea of obscurantism for decades to come.  Marxism was hardly the only dogma that required their theories to be “true.”  The same could be said of many other pet utopias that could generally be included in the scope of E. O. Wilson’s epigram, “Great theory, wrong species.”  The ideological imperative was described in a nutshell by psychologist Geoffrey Gorer in an essay entitled The Remaking of Man, published in 1956:

One of the most urgent problems – perhaps the most urgent problem – facing the world today is how to change the character and behavior of adult human beings within a single generation.  This problem of rapid transformation has underlaid every revolution (as opposed to coups d’etat) at least from the time of the English Revolution in the seventeenth century, which sought to establish the Rule of the Saints by some modifications in the governing institutions and the laws they promulgated; and from this point of view every revolution has failed… the character of the mass of the population, their attitudes and expectations, change apparently very little.

Up till the present century revolutions were typically concerned with the internal arrangements of one political unit, one country; but the nearly simultaneous development of world-wide communications and world-wide ideologies – democracy, socialism, communism – has posed the problem not merely of how to transform ourselves – whoever ‘ourselves’ may be – but how to transform others.

This imperative shattered the naïve faith of Adams and Parshley in the inevitability of scientific progress with astonishing rapidity.  Later, during the heyday of the Blank Slate, Margaret Mead described the triumph of the “new ideas,” just a few short years after their articles appeared in the Mercury:

In the central concept of culture as it was developed by Boas and his students, human beings were viewed as dependent neither on instinct nor on genetically transmitted specific capabilities but on learned ways of life that accumulated slowly through endless borrowing, readaptation, and innovation… The vast panorama which Boas sketched out in 1932 in his discussion of the aims of anthropological research is still the heritage of American anthropology.

And so the darkness fell, and remained for more than half a century.  The victory of the Blank Slate was, perhaps, the greatest debacle in the history of scientific thought.  Even today the “men of science” are incapable of discussing that history without abundant obfuscation and revision.  Still, the salient facts aren’t that hard to ferret out for anyone curious enough to dig for them a little.  It would behoove anyone with an exaggerated tendency to believe in the “integrity of science” to grab a shovel.


The “Worry” that we don’t have Free Will

In the last couple of posts I’ve been looking at some of the more interesting responses to the “annual question” at Edge.org.  This year’s question was, “What *Should* We Be Worried About?” and answers were submitted by a select group of 155 public intellectuals, scientists, philosophers, etc.  An answer that is interesting if only because it is counterintuitive was submitted by Robert Sapolsky, a professor of biological science and neurology at Stanford.  In his response, entitled, “The Danger Of Inadvertently Praising Zygomatic Arches,” we find that Sapolsky is worried that we will make wrong choices because we don’t have free will.  In his words,

I don’t think that there is Free will. The conclusion first hit me in some sort of primordial ooze of insight when I was about 13-years old, and that conclusion has only become stronger since then. What worries me is that despite the fact that I think this without hesitation, there are times that it is simply too hard to feel as if there is no free will, to believe that, to act accordingly. What really worries me is that it is so hard for virtually anyone to truly act as if there is no free will. And that this can have some pretty bad consequences.

and,

But it is so difficult to really believe that there is no free will, when so many of the threads of causality are not yet known, or are as intellectually inaccessible as having to automatically think about the behavioral consequences of everything from the selective pressures of hominid evolution to what someone had for breakfast. This difficulty is something that we should all worry about.

To this, I can only answer, “Why?”  Why be worried about things you can do absolutely nothing about?  Why be worried that people won’t “truly act as if there is no free will” when it is perfectly obvious that, lacking free will, they can have no choice in the matter?  Why be worried about how difficult it is to “really believe that there is no free will” if we have not the faintest control over what we believe?  This is supposed to be a difficulty we all “should” worry about?  Surely it must be obvious that “should” is a completely meaningless term in a world without free will.  “Should” implies the freedom to choose between alternatives.  Remove free will, and that freedom is removed with it.  Remove free will and worry becomes absurd.  Why worry about something you can do nothing about?  It makes no more sense than poisoning your whole life by constantly worrying about the inevitability of death.

I by no means mean to imply that I am taking sides one way or the other on the question of whether we have free will.  I am simply pointing out that the very suggestion that we worry about it implies that we do.  If we have no free will then the question of whether we will worry about it or not is completely out of our control.  In that case it turns out I am in that happy category of people who are not worried about it.  If we do have free will, then the rationale for worrying about the lack of it is removed.  In either case, I am happy to report, I have no worries.

Neither do I imply any disrespect of Prof. Sapolsky, a brilliant man whose work I admire regardless of whether I have any choice in the matter or not.  See, for example, his work on the Toxo parasite (Toxoplasma gondii), which strongly suggests that we must throw manipulation by other species into the mix along with genes and culture if we are ever to gain a complete understanding of human behavior.  Work of this kind, by the way, is so critical to our understanding of the human condition that it cries out for replication.  There are only a few groups in the world doing similar work, and one must hope that they are not so intent on charging ahead with their own research that they neglect the scientific imperative of checking the work of their peers.

On the lighter side, readers of Prof. Sapolsky’s response will note that he throws in the disclaimer, “… lack of free will doesn’t remotely equal anything about genetic determinism.”  The Blank Slaters must have gotten to him!  In fact, to the best of my knowledge, there is not nor has there ever been such a beast as a “genetic determinist.”  They are as rare as unicorns.  The term was invented by cultural determinists to use in ad hominem attacks on anyone who dared to suggest that our behavior might actually be influenced by something other than environment and learning.  Their ideology requires them to blindly insist that “there is no evidence whatsoever” that anything but culture influences our behavior, just as the fundamentalist Christian must blindly insist that “there is not one iota of evidence for Darwinian evolution,” and the right wing ideologue must blindly insist that “there is not the faintest scrap of evidence for global warming.”  Of course, Prof. Sapolsky has just supplied even more compelling evidence that they are wrong.

In closing, I will include a poetic statement of Prof. Sapolsky’s philosophy by Edward Fitzgerald, who cloaked his own world view in his whimsical “translation” of Omar Khayyam’s Rubaiyat:

With earth’s first clay they did the last man knead,
And there of the last harvest sow’d the seed,
And the first morning of creation wrote
What the last dawn of reckoning shall read.

Piltdown Man and the Delusions of Grafton Elliot Smith

According to the frontispiece of his The Evolution of Man, published in 1924, Grafton Elliot Smith held the titles of M.A., M.D., Litt. D., D.Sc., F.R.C.P., F.R.S., and Professor of Anatomy at the University of London.  If titles and academic honors are any guide, he must have been a very intelligent man.  He was well aware of the limitations of human intelligence, and wary of the influence of the emotions on judgments of fact.  For example, in the book referred to above, from which all the following quotes are taken as well, he wrote,

The range of true judgment is in fact extremely limited in the vast majority of human beings.  Emotions and the unconscious influence of the environment in which an individual has grown up play an enormous part in all his decisions, even though he may give a rational explanation of the motives for many of his actions without realizing that they were inspired by causes utterly alien to those which he has given – and given without any intention of dishonesty – in explanation of them.  It is the exception rather than the rule for men to accept new theories on evidence that appeals to reason alone.  The emotional factor usually expresses itself in an egotistical form.  The ‘will to believe’ can often be induced by persuading a man that he discovered the new theory of his own initiative.

No one could have written a better post mortem for Smith’s career.  When it came to questions that really mattered about the evolution of man, he had a positive penchant for getting it wrong.  Regarding the issue of whether erect posture or a large brain came first in the transition from ape to man, he noted in passing,

The case for the erect attitude was ably put by Dr. Munro (Neil Gordon Munro, better known for his studies of the Japanese Ainu, ed.) in 1893.  He argued that the liberation of the hands and the cultivation of their skill lay at the root of Man’s mental supremacy.

Smith would have done well to listen to Munro, not to mention Charles Darwin and Ernst Haeckel, both of whom proposed similar, “bipedalism before large brain” theories.  However, he would have none of it, writing,

It was not the adoption of the erect attitude that made Man from an Ape, but the gradual perfecting of the brain and the slow upbuilding of the mental structure, of which erectness of carriage is one of the incidental manifestations.

Noting that the above quote was included in the substance of an address to the British Association delivered in the autumn of 1912, he rejoiced in a later chapter that his conjecture had been followed almost immediately by a “dramatic confirmation”:

Within the month after its delivery a dramatic confirmation was provided of the argument that in the evolution of Man the brain led the way.  For the late Mr. Charles Dawson (in association with Dr. – now Sir Arthur – Smith Woodward) brought to light in Sussex the remains of a hitherto unknown type of Primate with a brain that, so far as size is concerned, came within the range of human variation, being more than 200 c.cm. larger than that of the more ancient and primitive member of the Human Family (Pithecanthropus), in association with a jaw so like that of a Chimpanzee that many of the leading palaeontologists believed it to be actually the remains of that Ape.

This, of course, was the famous Piltdown Man, probably the most damaging scientific forgery of all time, proved in 1953 to be a composite of a medieval human skull and the jaw of an orangutan.  It was probably fabricated by Dawson himself, who had a knack for making similar “sensational” finds, and whose antiquarian collection was found to include at least 38 specimens that were “clear fakes” after his death.  Ironically, its discovery induced just such a “will to believe” in Smith as he had warned his readers about earlier in the book.  He rationalized the “genuineness” of Piltdown Man with arguments that were formidably “scientific” and astoundingly intricate.  For example,

When the skull is restored in this way (according to an intricate reconstruction process described earlier, ed.) its conformation is quite distinctive, and differs profoundly from all other human skulls, recent or fossil.  The parietal bone exhibits a peculiar depression between the diverging temporal lines, and the lower margin of the bone, below the depression, is everted.  This creates a peculiarity in the form of the cranium that is found in the Gorilla and Chimpanzee.  But the simian resemblances are revealed most strikingly in a transverse section of the reconstructed Piltdown Skull, when compared with corresponding sections of those of a Chimpanzee, a Gorilla, and a modern European.  It will then be realized how much more nearly the Piltdown skull approaches the simian type.  The general form of the cranium in transverse section is greatly expanded like that of an Ape.  This applies particularly to the contour of the parietal bones.  But the construction of the temporal bone is even more strikingly Ape-like in character.

…and so on.  One can but feel a painful and vicarious sense of shame for the worthy professor, who had so thoroughly succeeded in hoodwinking himself.  Unfortunately, his weighty testimony hoodwinked many others as well, eventually including even Sir Arthur Keith, who had immediately smelled a rat and publicly cast doubt on the discovery, only to later accept the forgery as real against his better judgment with the help of Smith’s “coaching.”

Piltdown Man wasn’t the only sensational discovery of the day.  Raymond Dart had also discovered the first specimen of Australopithecus africanus in the same year Smith’s book was published.  Dart had immediately noticed evidence of the creature’s upright posture, but Smith would have none of it:

But there is no evidence to suggest that its posture differed from that of the Chimpanzee.  The peculiarity in the position of the foramen magnum – which Professor Dart assumed to afford further corroboration of its human affinity – is merely an infantile trait that is found equally in other young Anthropoids.

Poor old Dart.  He was always being “debunked” for being right.  He was similarly “set straight” by his peers for suggesting that early man engaged in anything so unsavory and politically incorrect as hunting live game.  Next it was the turn of Neanderthal Man.  To add insult to the injury of his recent extinction, Smith’s unflattering description spawned myriad museum displays of a stooped, bestial creature, seemingly unattractive as a sex partner except to the most desperate:

His short, thick-set, and coarsely built body was carried in a half-stooping slouch upon short, powerful, and half-flexed legs of peculiarly ungraceful form.  His thick neck sloped forward from the broad shoulders to support the massive flattened head, which protruded forward, so as to form an unbroken curve of neck and back, in place of the alternation of curves which is one of the graces of the truly erect Homo sapiens.

In a word, Professor Smith left us with a wealth of misinformation that it took decades of careful research to correct.  His example should teach us humility.  His book and a few others like it should be required reading for nascent Ph.D.’s.  Many of them will find little time for such ephemera later on in their struggles to stay up to speed with all the latest in the collection of learned journals that pertain to their specialty.  Still, they might find it amusing and even informative to occasionally step back from the information maelstrom, dust off some of the old books and journals in forgotten stacks, and recall the foibles as well as the triumphs of their compatriots gone before.  In ambling through the old source material, they’re likely to discover that the history they find on the Internet isn’t always served straight up.  As is regrettably the case with Prof. Smith, it often happens that some of the more egregious warts and blemishes have been charitably removed.  They are likely to find the unexpurgated versions more helpful, especially if they happen to specialize in fields that are long on unfalsifiable theories and short on repeatable experiments.


Milovan Djilas and the Genesis of a Communist Ingroup

Milovan Djilas was a man of genius.  He was also, for much of his life, a Communist, and a very effective one who contributed mightily to the victory of Tito’s Partisans in World War II.  After the war he was one of the four most powerful men in Yugoslavia, but became disillusioned with the reality of Communism.  After publishing a series of 18 articles critical of the regime that appeared in the Communist organ Borba between October 1953 and January 1954, he was expelled from the party’s Central Committee.  He was arrested in 1956 and imprisoned for “hostile propaganda” following interviews that appeared in The New York Times and Agence France-Presse, and spent much of the next ten years in jail.  His famous exposé of Communism, The New Class, appeared in 1957 after the manuscript was smuggled out of prison.  His later autobiographical works, such as Land Without Justice, Memoir of a Revolutionary, and Wartime, are treasure troves, not only for historians, but for sociologists and psychologists as well.  They are also full of invaluable insights into the birth and evolution of ideological ingroups.

In this case, of course, the ingroup in question is Communism, which was, along with Nazism, one of the two great secular faiths of the 20th century.  However, the phenomena described by Djilas are also evident among the ingroups spawned by the earlier religious faiths.  Indeed, it might be said that one of these, a latter-day version of Islam, “rushed in to fill the vacuum” left by the collapse of Communism.  At the moment, pending the rise of the next great secular faith, it is, in a sense, the only game in town for those with a penchant for saving the world.  Hence the occasionally comical love affair of the stalwarts of the extreme left with fundamentalist religious ideologues of the extreme right.

This phenomenon is hardly without historical precedent.  For example, the Nazis found a fertile recruiting ground for their storm troopers among former Communists.  Both of these ideological ingroups were strongly attractive to the same psychological type.  Both promised to save the world, albeit in radically different ways.  However, the strength of the attraction does not depend on the minutiae of theory, but on the degree to which an ideology appeals to the innate wellsprings of human moral behavior; what Jonathan Haidt has referred to as Moral Foundations in The Righteous Mind.  If the appeal is there, theoretical details are almost a matter of indifference.  Communist intellectuals were occasionally puzzled by the appeal of Nazism because of what they considered its theoretical incoherence.  Their mistake was in believing that the appeal of either Nazism or Communism depended on theory.  Communists became Communists, not because of the intellectual elegance of Marxism, but because it happened to be around.  They had an emotional itch, and Communism was a convenient tool for scratching it.  As Djilas put it in Memoir of a Revolutionary,

We called it Communism.  It was not Communism, but, rather, a deep dissatisfaction with existing conditions and an irrepressible desire to change life, not to accept a hopeless monotony.

Here, too, in a nutshell, he describes the susceptible “psychological type.”  Not surprisingly, the greatest susceptibility is found among the young.  In Djilas’ words,

Youthful rebellion first assumed a moral form:  the negation of traditional views and relationships.  The common man suffered the dictatorship and the other hardships as elementary evils which had rendered him helpless.  His concentration was on his family life.  He was petit bourgeois.  But he did not have any choice if he was not willing to go to prison.  Opposition to this kind of life, resistance to it and the bourgeois existence, was the most frequent form rebellion took among young people, particularly among intellectuals.

Initial attempts to scratch the “itch” took familiar forms:

In the course of my two years as a student (1929 to 1931), young people sought relief in a special form of bohemian existence, in which alcohol was perhaps not the chief solace.

They did not immediately turn to Communism, in part because of the lack of an organized Communist movement in Yugoslavia at the time.  King Alexander had abolished the constitution and established a personal dictatorship in 1929.

With the advent of the dictatorship, political organizations at the University were either broken up or they disintegrated.  There wasn’t a trace left of the Communist organization.  There were a few Communists, older students, but they were either so passive or so secretive that one didn’t know who they were.  I knew one of them, Milos Tujo Cetkovic, but only because he was a Montenegrin, from my region, and a relative of my Aunt Draguna.  However, he never said anything to encourage me in my rebellion, so involved was he in himself and in the mechanics of his conspiracy.

In keeping with ideological tradition, Djilas’ turn to Communism was catalyzed by admiration of a “heroic martyr.”  In his case, it was Bracan Bracanovic, a former member of the Yugoslav Communist Party’s Central Committee.

They say that he was dark and young and wild, and that he had enormous physical strength.  Several times he broke the chains on his wrists and it took as many as ten agents to subdue him.  He shouted big angry words at the policemen, spitting at them in spite of horrible physical tortures.  Uncompromising and unyielding, proud and strong, covered with blood and wounds, he died one night of a bullet in the nape of his neck, in a ditch near Belgrade.  No grave and no stone.  In my mind Bracanovic was identified with the heroes of our legendary past, the struggle against the Turks which I had sucked with my mother’s milk.  The death of such a hero was a crime a hundred times greater than any other, which inspired hatred and thoughts of revenge in any young fiery spirit.

Djilas’ time at the University also coincided with the worst years of the Great Depression, which did not spare Yugoslavia.  Economic misery and political repression promoted extremism:

My rebellious tendencies thrived in the Belgrade of this time:  Belgrade with its wild night life, its crisscross of influences from the whole country and abroad, its restricted social and political life… All the forces that yearned for a breath of fresh air were packed into underground cellars.  Belgrade was lively, colorful, and full of contrasts – an ostentatious display of newly acquired wealth on the one hand, and misery, hunger, and unemployment on the other.  It was a setting that gave form and encouragement to the conscious organized rebellion of the young… The dictatorship’s major undoing was that it took over in Yugoslavia just prior to the Great Depression of 1929.  The man in the street, who knows nothing about world economic laws, could not be convinced by elaborate but valid explanations in the press that the government was not wholly responsible for the economic downturn.  Poverty was spreading every step of the way, exposing gruesome crimes and perversities.

As individuals in the face of all this misery, Djilas and his friends felt a stifling impotence:

I found my own impotence in this situation insufferable, my own and that of so many people who opposed this power as personified by the King, the tyrant.  I felt that this night marked a final break between me, a citizen, and the King, the representative of state power.  As it turned out, I was not alone in this reaction:  we finally understood it was the King who was responsible for all that evil.

At first, Djilas joined a fellow student from a “bourgeois” party in distributing illegal political leaflets calling for a boycott of mock “elections” planned by the regime.  However, this first experience with organized resistance failed to scratch the itch:

For many years I was ashamed of having distributed those leaflets and for having urged other people to join me.  For a whole year my friends kept reproaching me, and their reproach, coupled with my own feelings of guilt, fortified my opposition to the bourgeois parties and their leaders.  We were not yet Communists, but we had begun to compete with each other in degrees of hostility toward the bourgeoisie.  Later this game assumed the character of deep “class” hatred.

The group of similarly disaffected left-wing students that had begun to gather around Djilas decided to take their opposition a step further:

We agreed that demonstrations should be held at the Law School at noon the day before the elections… That was the first public demonstration against the dictatorship.  This is not the time to talk of its impact on the development of the opposition and the Communist movement among the students.  But those who joined the demonstration felt that they were initiating something new and dangerous, that they were treading into the unknown.  Of that there can be no doubt.

The police smashed the demonstration, but only succeeded in fanning the flames.  The result was evident at a meeting of the students the following day.

Several people made speeches, including me, critical of our weak showing.  It was apparent that an organized minority was taking shape and imposing its will on the group.  There were a few moderate speakers, but they were quickly silenced.  Our skill in public speech-making – passion, invocation of patriotism, responsibility to the people, the duties of the young generation – had a tremendous impact.  Certain speakers were able to do anything they wanted with the crowd.

The emotional buttons were being pushed.  The moderate parties were pushed aside:

None of us leftists understood the full significance of the demonstrations.  However, the results were soon in evidence.  The bourgeois parties had lost control.  In the demonstrations they were moderate, and in action they were nowhere to be seen… But the most surprising thing of all was that the bourgeois parties had lost all influence on the masses, the raw and unformed masses, rebellious, politically undecided, strongly leftist in outlook.  A new generation was growing up under the dictatorship, ready to pounce.  The dictatorship had given birth to its own gravedigger.

For the Party, it was now merely a question of collecting the ripe fruit.  In Djilas’ case, it took the form of a message from the Communist Regional Committee that “the ‘comrades’ wished to see us.”  The “comrade” who did most of the talking was one Blazo Raicevic.  It turned out his Communist bona fides were somewhat dubious.  According to Djilas,

In the post-1937 internal struggles, he was included in the purge as an “unhealthy,” “factional,” “antiparty” element.

It didn’t matter.  Djilas continues,

…we were young Communists, not organized yet, but for that very reason most useful.  He was not bothered by our ideological immaturity – he was not a very well-formed Marxist himself… For us Montenegrin leftists, he was the first contact with the party organization, even if we overestimated him as a Communist and the strength of the existing Communist Party.

Raicevic encouraged the young Communists, but he did not organize them.  He didn’t need to.  They had found a unifying ideological outlet for their discontent.  From that point, the organization of the ingroup was almost spontaneous.  Djilas had left Belgrade for several months to avoid the police, who were already watching him.  The process of self-organization was already well underway when he returned:

In the three months that I had been away from Belgrade, the situation at the University had changed.  The unstable leftist groups had grown stronger and better organized, and had been formed into Marxist circles.  The official Communist party could in no way be credited with this development, even though the party did have its representatives in Belgrade, very respectable people at that… (I) found my colleagues organized in groups, absorbing ideology from Marxist pamphlets.  They were now sober, coldly analytical, and unsparing in their criticism of “bourgeois remnants.” … I felt ashamed I had “fled” from the police and stayed away so long.  I made up my mind to join one of the circles at once.

The process was complete.  The young students with a “deep dissatisfaction with existing conditions and an irrepressible desire to change life” now belonged to the Communist ingroup.  In the words of philosopher Eric Hoffer, they were now “True Believers.”  The particular ideological shibboleths of the faith in question, Communism, were almost incidental.  It was adopted, not because of its rational beauty, but because it happened to be the most effective nostrum for “scratching the itch” available at the time.  Religious enthusiasms have served just as well at different times and places.  Nazism, which appealed, in part, to a different set of moral foundations, proved to be even more effective in what amounted to a head-to-head competition.  However, for obvious reasons, an ideology based on the German Master Race didn’t play well in Yugoslavia.  Communism had international appeal.

And what of Milovan Djilas?  By all means, if you are suffering information overload about the results of the recent Presidential election, and are inclined to read something useful for a change, head to eBay or Amazon and pick up a couple of his books.  I recommend his autobiographical works for starters, beginning with Land Without Justice.  Save The New Class for later.  It’s best read once you’ve gained some familiarity with the man who wrote it.


…and One More Thing about Mencken

Occasionally Mencken’s American Mercury would include a section called the “Soapbox,” where lucky readers might find their letters to the editor.  One of them, signed by “A Reformed Psychologist” from Utica, NY, read as follows:

The great problem of psychology, during the next fifty years, will be to account for the fact that presumably rational beings once believed in some of the psychologies prevailing today.

The same could have been said fifty years later, when presumably rational beings believed,

The genetic contribution to man’s nervous system is virtually complete at birth.  Almost everything that happens thereafter is learned.  It is this consideration which inspires the modern anthropologist to declare that man has virtually no instincts, and that virtually everything he knows he has learned from his environment.  (Kenneth Boulding, Man and Aggression, p. 87)

The field studies of Schaller on the gorilla, of Goodall on the chimpanzee, of Harrisson on the orang-utan, as well as those of others, show these creatures to be anything but irascible.  All the field observers agree that these creatures are amiable and quite unaggressive.  (Ashley Montagu, Ibid., p. 12)

…human nature is what man learns to become as a human being.  As we trace the details of man’s evolutionary history we see that it is with the development of culture that man’s brain began to grow and develop in a simultaneous feedback interaction with culture as an organ of learning, retrieval, and intelligence.  Under the selection pressures exerted by the necessity to function in the dimension of culture, instinctive behavior would have been worse than useless, and hence would have been negatively selected, assuming that any remnant of it remained in man’s progenitors.  In fact, I also think it very doubtful that any of the great apes have any instincts.  On the contrary, it seems that as social animals they must learn from others everything they come to know and do.  Their capacities for learning are simply more limited than those of Homo sapiens.  (Ashley Montagu, Ibid., p. 15)

If the Reformed Psychologist of 1933 were resurrected in our own time, he would likely be very disappointed.  No apology has been forthcoming from the psychologists, not to mention the anthropologists or sociologists, either for the silliness retailed as “science” cited above, or for the earlier silliness of the Reformed Psychologist’s own time.  Nor has there been so much as a serious attempt by behavioral scientists to study and understand their own behavior.  That’s why I have to smile whenever I hear them refer to themselves as “men of science.”  If they were truly “men of science,” surely it would occur to them that they owe the rest of us a convincing explanation of how they could have been so wrong about so many things for so long.  But their failure to explain why they foisted palpably false nonsense on the rest of us as “science” for so long is not the worst of it.  The worst of it is that they vilified and shouted down anyone who disagreed with them as fascists, racists, Nazis, right-wing reactionaries, John Birchers, and any number of other unsavory epithets, as documented in that invaluable little piece of historical source material, Man and Aggression, and in many other easily accessible books and documents.  Are we to understand that this, too, was “good science”?  Under the circumstances, a certain degree of skepticism regarding theories coming from those quarters would seem justified.


The Implosion of Jonah Lehrer

A couple of years ago Harvard announced an ongoing investigation of evolutionary biologist and anthropologist Marc Hauser for “scientific misconduct” involving the integrity of experimental data.  Hauser wrote books such as Moral Minds for a lay audience as well as numerous papers in academic journals co-authored with the likes of Noam Chomsky, Peter Singer, and Richard Wrangham.  He resigned his professorship about a year later.

Now another public scientist and intellectual who also specialized in the behavioral sciences has fallen.  Jonah Lehrer was fired from his position at “The New Yorker” after admitting he fabricated quotes attributed to Bob Dylan.  I can only agree with his editor, David Remnick, that “This is a terrifically sad situation.”  Why someone as ostensibly successful and highly regarded as Lehrer would do such a thing is beyond my comprehension.

One must hope we’re not seeing the start of a trend.  I can think of few things more important than the credibility and integrity of the behavioral sciences, so lately emerged from the debacle of the Blank Slate.  It turns out Lehrer didn’t even need to invent the quotes in question.  According to Randy Lewis writing for the LA Times, Dylan actually did say substantially the same thing in an interview with pop music critic Robert Hilburn in 2004.  Quoting from Lewis’ article,

At one point, he told Hilburn something very close to what Lehrer seemed to have been after: “I’m not good at defining things,” Dylan said in 2004. “Even if I could tell you what the song was about I wouldn’t. It’s up to the listener to figure out what it means to him.”

But he also did open up remarkably about how he viewed the art and craft of songwriting.

“I don’t think in lateral terms as a writer. That’s a fault of a lot of the old Broadway writers…. They are so lateral. There’s no circular thing, nothing to be learned from the song, nothing to inspire you. I always try to turn a song on its head. Otherwise, I figure I’m wasting the listener’s time.”

Had he been more thorough in researching his book, perhaps Lehrer could have held onto the success he wanted so desperately that he was willing to concoct quotes from the greatest songwriter of the rock era.

It would be nice if the scientists who study our behavior and morality were themselves immune to the human frailties they write about.  Once again, we have seen that they most decidedly are not.  Those who seek Plato’s philosopher kings will have to keep looking.

War and the Fantasy World of the Blank Slate

In the introduction to his book The Origin of War, published in 1995, Johan van der Dennen writes,

When I embarked upon the enterprise of collecting literature on human primitive war some 15 years ago – with the objective to understand the origin of this puzzling and frightening phenomenon of intrahuman, intergroup killing – little did I suspect that some ten years later that subject would be very much alive and kicking in disciplines as diverse as cultural anthropology, ethology, evolutionary biology and sociobiology, and the socio-ecological branch of primatology, generating an abundance of novel and intriguing theories, engendering new waves of empirical (cross-cultural) research, and lots and lots of controversies.

At that time, the question of the origin and evolution (if any) of human warfare was a totally marginal and neglected domain of investigation. Among polemologists (or peace researchers as they are known in the Anglosaxon language area), there seemed to be an unshakable consensus that war was a cultural invention and social institution, which had originated somewhere in Mesopotamia some five thousand years ago (It actually was, and still is, a curious blend of the credos of the Margaret Mead school of anthropology, the simplistic dogmas of behaviorist psychology, and a historicist sociology – all consenting to the tabula rasa model of human behavior, i.e., the assumption of infinite plasticity and sociocultural determinism – inexplicably mixed with assumptions of a static Human Nature derived from the Realist school of political science). Such a conception precluded any evolutionary questions:  war had a history and development, but no evolution in the Darwinian sense.

He’s right, as anyone who was around at the time and happened to take an interest in the behavioral sciences is aware.  It seems almost incredible that whole branches of what were charitably referred to as “sciences” could have listened to the doctrine that “war was a cultural invention and social institution, which had originated somewhere in Mesopotamia some five thousand years ago” without breaking out into peals of laughter, but so it was.  They not only listened to it without cracking a smile, but most of them actually believed it.  One would think the idea that a phenomenon that has been ubiquitous across human cultures on every continent since time immemorial was just a “cultural invention” must seem palpably stupid to any ten-year-old.  It was, nevertheless, swallowed without a murmur by the high priests of the behavioral sciences, just as the dogmas of the Trinity and transubstantiation in the Eucharist are swallowed by the high priests of more traditional religions.

The Blank Slate is now dead, or at least hibernating, and the behavioral sciences have made the startling discovery that there is such a thing as human nature, but there is still a remarkable reticence to talk about warfare.  It’s not surprising, given the political proclivities of the average university professor, but it is dangerous nonetheless.  In a world full of nuclear weapons, a serious investigation into the innate origins of warfare might be a profitable use of their time.  With self-understanding might come insight into how to give ourselves a fighting chance of avoiding the worst.  Instead, the learned doctors feed us bromides about the gradual decline of violence.  A general nuclear exchange is likely to provide them with a data point that will somewhat disarrange their theories.

Perhaps it would be best if they started by taking a good look in the mirror, and then explaining to us how so many so-called experts could have been delusional for so long.  What were the actual mechanisms that allowed secular religious dogmas to hijack the behavioral sciences?  The Blank Slate is not “archaic science.”  It was alive and well less than two decades ago.  Why is it that we are now supposed to trust as “scientists” people who were so wrong for so long, shouting down anyone who disagreed with them with vile ad hominem attacks?  Instead of seeking to understand this past debacle and thereby at least reducing the chances of stumbling into similar debacles in the future, they invent a few self-serving yarns about it, and just keep plodding on as if nothing had happened.

Perhaps we should be counting our blessings.  After all, the idea that war is a mere “cultural invention” is no longer in fashion.  Occasionally, it is actually mentioned as a manifestation of certain unfortunate innate predispositions, usually along with comforting words about the decline in violence noted above, about expanding our “ingroups” to include all mankind, etc.  Given the ferocity with which spokespersons for the “progressive” end of the political spectrum, the end generally favored by professors of the behavioral sciences, attack anyone who disagrees with them, I personally am not particularly sanguine about that possibility.

Well, perhaps if a nuclear war does come, they will finally get serious and come up with some sound advice for avoiding the next one.  Unfortunately, finding publishers to spread the good news might be a problem at that point.

Morality and the John Stuart Mill Syndrome

John Stuart Mill recognized the subjective nature of morality, contrasting his own opinion with those who believed that good and evil were objective “things in themselves.” As he put it,

There is, I am aware, a disposition to believe that a person who sees in moral obligation a transcendental fact, an objective reality belonging to the province of “Things in themselves”, is likely to be more obedient to it than one who believes it to be entirely subjective, having its seat in human consciousness only.

In spite of this, one constantly runs into artifacts of the implicit assumption that morality really does correspond to an object, a real thing. Consider, for example, the following excerpt concerning the basis of right and wrong:

A test of right and wrong must be the means, one would think, of ascertaining what is right or wrong, and not a consequence of having already ascertained it. The difficulty is not avoided by having recourse to the popular theory of a natural faculty, a sense or instinct, informing us of right and wrong. For – besides that the existence of such a moral instinct is itself one of the matters in dispute – those believers in it who have any pretensions to philosophy, have been obliged to abandon the idea that it discerns what is right or wrong in the particular case in hand, as our other senses discern the sight or sound actually present. Our moral faculty, according to those of its interpreters who are entitled to the name of thinkers, supplies us only with the general principles of moral judgments; it is a branch of our reason, not of our sensitive faculty; and must be looked to for the abstract doctrines of morality, not the perception of it in the concrete.

The implication here is, of course, that there actually is something concrete to find.  Weight is added to that impression by the following passage, in which, after noting the failure of philosophers to discover a universal morality in spite of more than 2000 years of effort, Mill suggests that whatever consistency we have finally attained on the subject is due to a “standard not recognized.”

To inquire how far the bad effects of this deficiency have been mitigated in practice, or to what extent the moral beliefs of mankind have been vitiated or made uncertain by the absence of any distinct recognition of an ultimate standard, would imply a complete survey and criticism of past and present ethical doctrine.  It would, however, be easy to show that whatever steadiness or consistency these moral beliefs have attained, has been mainly due to the tacit influence of a standard not recognized.

To the extent that such a standard exists, and is not due to innate human nature, it must be an objective thing in itself.  Mill was a brilliant man.  He had, however, the great misfortune of writing before the theories of Darwin could inform his work.  He was not a “Blank Slater” in the 20th century sense of the term, that is, an ideologue who insisted that he could not be wrong about innate human nature, and that anyone who maintained the contrary was morally or politically suspect.  He was aware he might be wrong about the matter, and admitted as much.

But I digress.  The point of this post is that, in spite of admitting the subjective nature of moral systems, Mill believed that, once the rational basis for the “utility” of his system of utilitarianism, or, as he put it, “The Greatest Happiness Principle,” had been accepted, it would somehow also acquire legitimacy.  In other words, it would become a valid basis for judging the actions, not just of himself and those who agreed with him, but of everyone else as well.  In short, it would become an objective thing.

We have learned a lot since then.  “Innate human nature” is now accepted as if there had never been any dispute about the matter, and, if the works of the likes of Jonathan Haidt, Frans de Waal, and Richard Wrangham are any guide, the ultimate reasons for the existence of morality are to be found in that nature.  As Mill would have agreed, it is entirely subjective.  It seems abundantly obvious that, given its nature and origins, morality cannot possibly acquire anything like universal legitimacy.  That, however, is a truth that our modern experts in ethics have found too hard to bear.  In a sense, it puts them out of business.  What good is their expertise if there is no universal standard to discover?  What becomes of the delicious joy of virtuous indignation and the divine pleasure of moral outrage once the absolute standard those joys depend on evaporates?

For example, consider an essay penned by Michael Price, a professor of psychology, in “From Darwin to Eternity,” a blog he writes for Psychology Today.  Entitled “Morality:  What is it Good for?,” the article makes all the requisite nods to human nature.  For instance,

Human moral systems are ultimately biological:  they are generated by brains, and brains are composed of mechanisms that evolve by standard Darwinian natural selection.  Like all biological adaptations (such as hearts, uteruses, and hands), these mechanisms solve problems related to individual survival and reproduction.  The moral judgments of individuals can generally be regarded as the primary products, or else as the by-products, of these mechanisms.

and, fending off in advance the charge of genetic determinism beloved of the old Blank Slaters,

Some psychological adaptations for morally-relevant behavior solve problems that exist in virtually all human environments (for instance, the problem of avoiding inbreeding).  Others are solutions to problems that are more severe in some environments than others, and this is a major reason why – despite the fact that human nature is fundamentally the same cross-culturally – some aspects of moral systems vary significantly across cultures.  For example, in environments in which access to resources depends especially heavily on success in war – such as among the tribal communities of highland New Guinea, or the fiefdoms of medieval Europe – people are relatively likely to endorse military virtues like fierceness and valor and to disparage cowardice.

Prof. Price concludes with some reflections on what he calls “cultural group selection”:

Historically, groups with relatively empowering moral systems have tended to supplant groups with relatively enfeebling moral systems, and also to be imitated by weaker groups who wish to emulate their success.  Through these processes, winning moral formulas have tended to spread at the expense of losing ones.  From this perspective, the crucible of intergroup competition plays a key role in determining which moral systems flourish and which ones perish.  This view does not necessarily imply anything cynical about morality:  there’s no reason at all from biology that this competition must be violent (and indeed, Pinker argues persuasively in his recent book (The Better Angels of our Nature) that it has become much less violent over time), and nonviolent, productive competition can lead to a rising tide of benefits for humanity in general.

“Benefits for humanity?”  Where have we heard that before?  You guessed it.  In the end it’s not about gaining a rational understanding of human moral emotions and accommodating them as best we can in a rapidly changing world.  It’s about inventing a better mousetrap:

What this view does imply is that morality ought to be less about passionate expressions of outrage, and more about designing a value system that will enable societal success in a constantly changing and eternally competitive world.

And so, after all these assurances about the subjective nature of morality as a consequence of the evolved mental characteristics of a certain biological species with large brains, the Good Object begins to emerge from the shadows once again, hazy but palpable.  From its admittedly humble origins as an odd collection of behavioral traits that happened to contribute to the fitness of ancient groups of hunter gatherers, an infant “value system” emerges, a Thing that, if it survives to adulthood, will seek to acquire legitimacy by “enabling societal success” along the way.  In a word, we’ve come full circle, back to John Stuart Mill.  Undeterred by the dubious success of innovative “value systems” like Nazism and Communism in the 20th century, we merely need to persevere.  With luck, we’ll cobble together an entirely new one that will finally “enable societal success” without the creation of another luckless outgroup like the Jews or the bourgeoisie along the way, and with none of the other traditional unfortunate side effects that have inevitably accompanied mankind’s previous efforts to apply morality to modern societies.  No thanks.  We’ve been down that path before.

I don’t mean to pick on Professor Price.  What public intellectual doesn’t share his penchant for concocting gaudy new moralities that will usher in a Brave New World of “human flourishing”?  We find even the new atheists ostentatiously striking pious poses and raining down indignant anathemas on the morally suspect.  Nothing is harder to shake off and leave behind than the odor of sanctity.  I suspect, however, that we must shake it off if we ever really want to flourish.