Posted on December 4th, 2016
Ask anyone who voted in the recent election why they voted the way they did, and they are sure to have some answer. They will give you some reason why they considered one candidate good, and/or the other candidate bad. Generally, these answers will be understandable in the context of the culture in which they were made, even if you don’t agree with them. The question is, how much sense do they really make when you peel off all the obscuring layers of culture and penetrate to the emotions that are the ultimate source of all these “logical” explanations? There are those who are convinced that their answer to this question is so far superior to that of the average voter that they should have more votes, or even that the average voter should have no vote at all. Coincidentally, the “average voter” is almost always one who doesn’t vote the same way they do.
Claire Lehmann recently wrote an interesting essay on the subject at the Quillette website. Her description of these self-appointed “superior voters” might have been lifted from the pages of Jonathan Haidt’s The Righteous Mind. In that book Haidt uses his parable of the elephant and its rider to describe the process of moral judgment. It begins with a split-second positive or negative moral intuition, which Haidt describes as the “elephant” suddenly leaning to the left or the right. Instead of initiating or guiding this snap judgment, the “rider” uses “reason” to justify it. In other words, he serves as an “inner lawyer,” rationalizing whatever path the elephant happened to take. Here’s how Lehmann describes these “riders”:
This is one reason why charges of wholesale ignorance are so obtuse. “High information” people ignore evidence if it conflicts with their preferred narrative all the time. And while it may be naïve for voters to believe the promises of Trump and the Brexit campaigners — it has also been profoundly naïve for the cosmopolitan classes to believe that years of forced internationalism and forced political correctness were never going to end with a large scale backlash.
In fact, high information people are likely to be much better at coming up with rationalisations as to why their preferred ideology is not only best, but in the national interest. And high information rationalisers are probably more likely to put forward theories about how everyone who disagrees with them is stupid, and is not deserving of the right to vote.
As a representative example of how these people think, she quotes the philosopher Jason Brennan:
And while I no doubt suffer from some degree of confirmation bias and self-serving bias, perhaps I justifiably believe that I — a chaired professor of strategy, economics, ethics, and public policy at an elite research university, with a Ph.D. from the top-ranked political philosophy program in the English-speaking world, and with a strong record of peer-reviewed publications in top journals and academic presses — have superior political judgment on a great many political matters to many of my fellow citizens, including to many large groups of them.
It would seem “some degree of confirmation bias” is something of an understatement. What, exactly, does “superior political judgment” consist of? In the end it must amount to a superior ability to recognize and realize that which is “Good” for society at large. The problem is that this “Good” is a fantasy. All it really describes is the direction in which the elephant is leaning in the minds of individuals.
There can be no rational or legitimate basis for things that don’t exist. It is instructive to consider the response of secular philosophers like Brennan if you ask them to supply this nonexistent basis for the claim that their version of “Good” is really good. The most common one will be familiar to readers of secular moralist Sam Harris’ The Moral Landscape. Whatever political or social nostrum they happen to propose is good because it will lead to human flourishing. Human flourishing is good because it will lead to the end of war. The end of war is good because it will result in the end of pain and suffering. And so on. In other words, the response will consist of circular logic. What they consider good is good because it is good. Question any of the steps in this logical syllogism, and their response will typically be to bury you under a heap of negative moral intuitions, again, exactly as described by Haidt. How can you be so vile as to favor the mass slaughter of innocent civilians? How can you be so ruthless and uncaring as to favor female genital mutilation? How can you be so evil as to oppose the brotherhood of all mankind? Such “logic” hardly demonstrates the existence of the “Good” as an objective thing-in-itself. It merely confirms the eminently predictable fact that, at least within a given culture, most elephants will tend to lean the same way.
Philosophers like Brennan either do not realize or do not grasp the significance of the fact that, in the end, their “superior political judgment” is nothing more sublime than an artifact of evolution by natural selection. They epitomize the truth of the Japanese proverb, “Knowledge without wisdom is a load of books on the back of an ass.” In the end such judgments invariably boil down to the moral intuitions that lie at their source, and it is quite impossible for the moral intuitions of one individual to be superior to those of another in any objective sense. The universe at large doesn’t care in the slightest whether humans “flourish” or not. That hardly means that it is objectively “bad” to act on, passionately care about, or seek to realize one’s individual moral whims. It can be useful, however, to keep the source of those whims in perspective.
One can consider, for example, whether the “rational” manner in which one goes about satisfying a particular whim is consistent with the reasons the whim exists to begin with. The “intuitions” Haidt speaks of exist because they evolved, and they evolved because they happened to increase the odds that the genes responsible for programming them would survive and reproduce. This fundamental fact is ignored by the Brennans of the world. What they call “superior political judgment” really amounts to nothing more than blindly seeking to satisfy these “intuitional” artifacts of evolution. However, the environment in which they are acting is radically different from the one in which the intuitions in question evolved. As a result, their “judgments” often seem less suited to ensuring the survival and reproduction of the responsible genes than to accomplishing precisely the opposite.
For example, the question of whether international borders should exist and be taken seriously or not was fundamental to the decision of many to vote one way or the other in the recent U.S. presidential election. Lehmann quotes Sumantra Maitra on this issue as follows:
[T]his revolutionary anti-elitism one can see, is not against the rich or upper classes per se, it is against the liberal elites, who just “know better” about immigration, about intervention and about social values. What we have seen is a “burn it all down” revenge vote, against sententious, forced internationalism, aided with near incessant smug lecturing from the cocooned pink haired urban bubbles. Whether it’s good or bad, is for time to decide. But it’s a fact and it might as well be acknowledged.
It is quite true that “forced internationalism” has been experienced by the populations of many so-called democracies without the formality of a vote. However, it is hardly an unquestionable fact that this policy will increase the odds that the genes responsible for the moral whims of the populations affected, or any of their other genes, will survive and reproduce. In fact, it seems far more likely that it will accomplish precisely the opposite.
A fundamental reason for the above conclusion is the existence of another artifact of evolution that the Brennans of the world commonly ignore: the universal human tendency to sort others into ingroups and outgroups. I doubt that there are many human individuals on the planet whose mental equipment doesn’t include recognition of an outgroup. Outgroups are typically despised. They are considered disgusting, unclean, immoral, etc. In a word, they are hated. For the Brennans of the world, hatred is “bad.” As a result, they are very reluctant to recognize and confront their own hatreds. However, those hatreds are perfectly obvious to anyone who takes the trouble to look for them. As it happens, they can easily be found in Lehmann’s essay. For example, she quotes the following passage, which appeared in Haaretz:
But there is one overarching factor that everyone knows contributed most of all to the Trump sensation. There is one sine qua non without which none of this would have been possible. There is one standalone reason that, like a big dodo in the room, no one dares mention, ironically, because of political correctness. You know what I’m talking about: Stupidity. Dumbness. Idiocy. Whatever you want to call it: Dufusness Supreme.
In other words, the hatreds of the “superior voters” are quite healthy and robust. The only difference between their outgroup and some of the others to which familiar names have been attached is that, instead of being defined based on race, ethnicity, or religion, it is defined based on ideology. They hate those who disagree with their ideological narrative. Outgroup identification is usually based on easily recognizable differences. Just as ideological differences are easily recognized, so are cultural and ethnic differences. As a result, multi-culturalism does not promote either human brotherhood or human flourishing. It is far more likely to promote social unrest and, eventually, civil war. In fact, it has done just that countless times in the past, as anyone who has at least a superficial knowledge of the history of our species is aware. Civil war is unlikely to promote the survival of the human beings affected, nor of the genes they carry. “Low information voters” appear to be far more capable of appreciating this fundamental fact than the Brennans of the world who despise them. The predictable result of the “superior judgments” of self-appointed “high information voters” is thus likely to be the exact opposite of the outcomes that gave rise to the whims behind those judgments in the first place.
It is useless to argue that human beings “ought” not to hate. They will hate whether they “ought” to or not. We will be incapable of avoiding in the future the disastrous outcomes that have so often been the result of this salient characteristic of our species in the past if we are not even capable of admitting its existence. When Robert Ardrey and Konrad Lorenz insisted half a century ago that the existence of ingroups and outgroups, what Ardrey called the “Amity-Enmity Complex,” is real, and made a few suggestions about what we might do to mitigate the threat this aspect of our behavior now poses to our species in a world full of nuclear weapons, they were shouted down as “fascists.” In the ensuing years the “experts” have finally managed to accept the fundamental theme of their work: the existence and significance of human nature. They have not, however, been capable of looking closely enough in the mirror to recognize their own outgroups. Those who spout slogans like “Love Trumps Hate” are often the biggest, most uncontrolled and most dangerous haters of all, for the simple reason that their ideology renders them incapable of recognizing their own hatreds.
There is nothing objectively good about one version or another of “human flourishing,” and there is nothing objectively bad about social unrest and civil war. However I, for one, would prefer to avoid the latter. Call it a whim if you will, but at least it isn’t 180 degrees out of step with the reason for the whim’s existence. We are often assured that flooding our countries with unassimilable aliens will be “good for the economy.” It seems to me that the “good of the economy” can be taken with a grain of salt when compared with the “bad of civil war.” It is hard to imagine what can be fundamentally “good” about a “good economy” that threatens the genetic survival of the existing population of a country. I would prefer to dispense with the “good of the economy” and avoid rocking the boat. By all means, call the “low information voters” racist, bigoted, misogynistic and xenophobic until you’re blue in the face. The fact that one was “good” rather than “bad” in these matters will make very little difference to the rest of the universe if one fails to survive.
I have no idea what the final outcome of the Trump Presidency will be. However, I think “low information voters” had reasons for voting for him that make a great deal more sense than those given by their “superiors.” One does not necessarily become more rational or more intelligent by virtue of having a Ph.D. or reading a lot of books.
Posted on November 6th, 2016
There are moral emotions. There is no such thing as moral truth.
The above are fundamental facts. We live in a world of moral chaos because of our failure to accept them and grasp their significance.
Eighteenth-century British philosophers demonstrated that emotions are the source of all moral judgments. “Pure reason” is incapable of anything but chasing its own tail. Darwin revealed the origin of the emotions as the result of evolution by natural selection. It was left for the Finnish philosopher Edvard Westermarck to draw the obvious conclusion: that there is no such thing as moral truth.
David Hume is often given the credit for identifying emotions or, as he put it, “passions,” as the source of moral judgments. According to Hume,
Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.
However, when he wrote the above, Hume was really just repeating the earlier work of Francis Hutcheson. It was Hutcheson who demonstrated the emotional origin of moral judgments beyond any serious doubt. I encourage modern readers who are interested in the subject to read his books on the subject. I have quoted him at length in earlier posts, and I will do so again here. Here is what he had to say about the power of “pure reason” to isolate moral truth:
If conformity to truth, or reasonable, denote nothing else but that “an action is the object of a true proposition,” ‘tis plain, that all actions should be approved equally, since as many truths may be made about the worst, as can be made about the best.
There is one sort of conformity to truth which neither determines to the one or the other; viz. that conformity which is between every true proposition and its object. This sort of conformity can never make us choose or approve one action more than its contrary, for it is found in all actions alike: Whatever attribute can be ascribed to a generous kind action, the contrary attribute may as truly be ascribed to a selfish cruel action: Both propositions are equally true.
But as to the ultimate ends, to suppose exciting reasons for them, would infer, that there is no ultimate end, but that we desire one thing for another in an infinite series.
Hutcheson followed up this critique of reason with some comments about the role of “human nature” as the origin and inspiration of all moral judgment that might almost have come from a modern textbook on evolutionary psychology, and that are truly stunning considering that they were written early in the 18th century. Again quoting the Ulster Scots/British philosopher as well as my own comments from an earlier post:
Now we shall find that all exciting reasons presuppose instincts and affections; and the justifying presuppose a moral sense.
If we assume the existence of human nature, the “reasons” fall easily into place:
Let us once suppose affections, instincts or desires previously implanted in our nature: and we shall easily understand the exciting reasons for actions, viz. “These truths which show them to be conducive toward some ultimate end, or toward the greatest end of that kind in our power.” He acts reasonably, who considers the various actions in his power, and forms true opinions of the tendencies; and then chooses to do that which will obtain the highest degree of that, to which the instincts of his nature incline him, with the smallest degree of those things to which the affections in his nature make him averse.
Of course, versions of the Blank Slate have been around since the days of the ancient Greek philosophers, and “updated” versions were current in Hutcheson’s own time. As he points out, they were as irrational then as they are now:
Some elaborate Treatises of great philosophers about innate ideas, or principles practical or speculative, amount to no more than this, “That in the beginning of our existence we have no ideas or judgments;” they might have added too, no sight, taste, smell, hearing, desire, volition. Such dissertations are just as useful for understanding human nature, as it would be in explaining the animal oeconomy, to prove that the faetus is animated before it has teeth, nails, hair, or before it can eat, drink, digest, or breathe: Or in a natural history of vegetables, to prove that trees begin to grow before they have branches, leaves, flower, fruit, or seed: And consequently that all these things were adventitious or the effect of art.
Now we endeavored to show, that “no reason can excite to action previously to some end, and that no end can be proposed without some instinct or affection.” What then can be meant by being excited by reason, as distinct from all motion of instincts or affections? …Then let any man consider whether he ever acts in this manner by mere election, without any previous desire? And again, let him consult his own breast, whether such kind of action gains his approbation. A little reflection will show, that none of these sensations depend upon our choice, but arise from the very frame of our nature, however we may regulate or moderate them.
The fact that Hutcheson believed that God was the origin of the emotions in question in no way detracts from the power of his logic about the essential role of the emotions themselves. No modern philosopher sitting on the shoulders of Darwin has ever spoken more brilliantly or more clearly.
In considering the relevance of the above to the human condition, one must keep in mind the fact that any boundary between moral emotions and other emotions is artificial. Nature created no such boundaries, and they are an artifact of the human tendency to categorize. Of all the emotions not normally included in the category of moral emotions, the most significant may well be our tendency to perceive others of our species in terms of ingroups and outgroups. Our outgroup includes people we consider “deplorable.” They are commonly perceived as evil, and are usually associated with other negative qualities. For example, they may be considered impure, disgusting, contemptible, infidels, etc. Outgroup identification is universal, although the degree to which it is present may vary significantly from one individual to the next, like any other subjective mental predisposition. If one would explore and learn to understand his moral consciousness, he would do well to begin by asking the question, “What is my outgroup?” The “deplorables” will always be there.
Consider the implications of the above. Follow the abstruse reasoning of the “experts on ethics,” to its source, and you will find the whole façade is built on a foundation of emotions that evolved in times utterly unlike the present because they happened to improve the odds that the responsible genes would survive and reproduce. Look a little further, and you’ll find the outgroup.
Follow the arcane logic of theologians touching on the moral implications of this or that excerpt from the holy scriptures, and you will find the whole façade is built on a foundation of emotions that evolved in times utterly unlike the present because they happened to improve the odds that the responsible genes would survive and reproduce. Look a little further, and you’ll find the outgroup.
When bathroom warriors, or anti-culture appropriators, or the unmaskers of inappropriate Halloween costumes rain down their anathemas on anyone who happens to disagree with them, consider what motivates their behavior, and yet again you will find emotions that evolved in times utterly unlike the present because they happened to improve the odds that the responsible genes would survive and reproduce. Look a little further, and you’ll find the outgroup.
Stand in a crowd of Communists as they sing the Internationale, or of Nazis dreaming noble dreams of the liberation of Aryans everywhere from the powers of darkness as they sing the Horst Wessel Song, and you will find that the emotions those songs evoke evolved in times utterly unlike the present because they happened to improve the odds that the responsible genes would survive and reproduce. You won’t have to look very far to find the outgroup, either of Communists or Nazis. Millions of them were murdered in the name of these two manifestations of higher morality.
We live in a time of moral chaos because these truths have been too hard for us to bear. As Jonathan Haidt pointed out in his The Righteous Mind, we tend to invoke our inner moral lawyer whenever we happen to disagree with someone else about what ought to be. We consult our moral emotions, and seek to justify ourselves by evoking similar moral emotions in others. In the process we bamboozle ourselves and others into believing that those emotions relate to real things that we commonly refer to as good and evil, that are imagined to have an independent existence of their own. They don’t. They are merely illusions spawned by emotions that evolved in times utterly unlike the present because they happened to improve the odds that the responsible genes would survive and reproduce.
In a word, what we are doing is blindly following and reacting to emotional whims, even though it is questionable whether doing so will have the same result as it did when those whims evolved. For that matter, we don’t even care. As long as we can satisfy whims that evolved in the Pleistocene, it matters not at all to us that in the 21st century they will accomplish precisely the opposite of what they did then. The result is what I have referred to as a morality inversion. Instead of promoting our survival, the emotions in question promote behavior that accomplishes the opposite in the radically different environment we live in today. It matters not a bit. As long as we “feel in our bones” that the actions in question are “Good,” we cheerfully commit suicide, whether by donning a suicide belt or deciding that it must be “immoral” to have children. We imagine that these actions are “noble” and “morally pure” even though all we have really done is satisfy atavistic whims without the least regard for why those whims exist to begin with, or whether responding to them is likely to accomplish the same thing now as it did millions of years ago.
Again, we live in a world of moral chaos because we have been unable to face the truth, simple and obvious as it is. There is nothing “bad” about that, nor is there anything “good” about it. It is just the way things are. I personally would prefer that we face the truth. Perhaps then it would occur to us that, since we can hardly do without morality, we would be well advised to come up with a simple moral system that maximizes the ability of each of us to pursue whatever whims we happen to find important, with as little fear as possible of being threatened, vilified, or otherwise subjected to the penalties that are typically the lot of outgroups. If we faced the truth about the real subjective origins of what have seemed objective moral certainties to so many of us in the past, perhaps at least some of us would be more reluctant to impose our own versions of morality on those around us. If we faced the truth, perhaps we would realize that our universal tendency to blindly vilify and condemn outgroups represents an existential threat to us all, and that the threat must be recognized and controlled.
These are things that I would like to see. Of course, they represent nothing more significant than my own whims.
Posted on October 17th, 2016
At the moment the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is in a class by itself when it comes to inertial confinement fusion (ICF) facilities. That may change before too long. A paper by a group of Chinese authors describing a novel three-axis cylindrical hohlraum design recently appeared in the prestigious journal Nature. In ICF jargon, a “hohlraum” is a container, typically cylindrical in form. Powerful laser beams are aimed through two or more entrance holes to illuminate the inner wall of the hohlraum, producing a burst of x-rays. These strike a target mounted inside the hohlraum containing fusion fuel, typically consisting of heavy isotopes of hydrogen, causing it to implode. At maximum compression, a series of shocks driven into the target are supposed to converge in the center, heating a small “hot spot” to fusion conditions. Unfortunately, such “indirect drive” experiments haven’t worked so far on the NIF. The 1.8 megajoules delivered by NIF’s 192 laser beams haven’t been enough to achieve fusion with current target designs, even though the beams are very clean and uniform, and the facility itself is working as designed. Perhaps the most interesting thing about the Chinese paper is not the novel three-axis hohlraum design, but the fact that the authors are still interested in ICF at all in spite of the failure of the NIF to achieve ignition to date. To the best of my knowledge, they are still planning to build SG-IV, a 1.5 megajoule facility, with ignition experiments slated for the early 2020s.
Why would the Chinese want to continue building a 1.5 megajoule facility in spite of the fact that U.S. scientists have failed to achieve ignition with the 1.8 megajoule NIF? For the answer, one need only look at who paid for the NIF, and why. The project was paid for by the people at the Department of Energy (DOE) responsible for maintaining the nuclear stockpile. Many of our weapons designers were ambivalent about the value of achieving ignition before the facility was built, and were more interested in the facility’s ability to access physical conditions relevant to those in exploding nuclear weapons for studying key aspects of nuclear weapon physics such as equation of state (EOS) and opacity of materials under extreme conditions. I suspect that’s why the Chinese are pressing ahead as well. Meanwhile, the Russians have also announced a super-laser project of their own that they claim will deliver energies of 2.8 megajoules.
Meanwhile, in the wake of the failed indirect drive experiments on the NIF, scientists in favor of the direct drive approach have been pleading their case. In direct drive experiments the laser beams are shot directly at the fusion target instead of at the inner walls of a hohlraum. The default approach for the NIF has always been indirect drive, but the alternative may be possible using a technique called “polar direct drive.” In recent experiments at the OMEGA laser facility at the University of Rochester’s Laboratory for Laser Energetics, the nation’s premier direct drive facility, scientists claim to have achieved results that, if scaled up to the energies available on the NIF, would produce five times more fusion energy output than has been achieved with indirect drive to date.
Meanwhile, construction continues on ITER, a fusion facility designed purely for energy applications. ITER will rely on magnetic plasma confinement, the other “mainstream” approach to harnessing fusion energy. The project is a white elephant that continues to devour ever increasing amounts of scarce scientific funding in spite of the fact that the chances that magnetic fusion will ever be a viable source of electric power are virtually nil. That fact should be obvious by now, and yet the project staggers forward, seemingly with a life of its own. Watching its progress is something like watching the Titanic’s progress towards the iceberg. Within the last decade the projected cost of ITER has metastasized from the original 6 billion euros to 15 billion euros in 2010, and finally to the latest estimate of 20 billion euros. There are no plans to even fuel the facility for full power fusion until 2035! It boggles the mind.
Magnetic fusion of the type envisioned for ITER will never come close to being an economically competitive source of power. It would already be a stretch if it were merely a question of controlling an unruly plasma and figuring out a viable way to extract the fusion energy. Unfortunately, there’s another problem. Remember all those yarns you’ve been told about how an unlimited supply of fuel is supposed to be on hand in the form of sea water? In fact, reactors like ITER won’t work without a heavy isotope of hydrogen known as tritium. A tritium nucleus contains a proton and two neutrons, and, for all practical purposes, the isotope doesn’t occur in nature, in sea water or anywhere else. It is highly radioactive, with a very short half-life of a bit over 12 years, and the only way to get it is to breed it. We are told that fast neutrons from the fusion reactions will breed sufficient tritium in lithium blankets surrounding the reaction chamber. That may work on paper, but breeding enough of the isotope and then somehow extracting it will be an engineering nightmare. There is virtually no chance that such reactors will ever be economically competitive with renewable power sources combined with baseload power supplied by proven fission breeder reactor technologies. Such reactors can consume most of the long-lived transuranic waste they produce.
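To get a feel for why tritium can’t simply be stockpiled in advance, recall the standard radioactive decay law, N(t) = N₀ · 2^(−t/T½), with T½ ≈ 12.3 years for tritium. The half-life figure is well established; the little sketch below is purely illustrative and not tied to any ITER design document:

```python
# Fraction of an initial tritium inventory remaining after a given time,
# using the standard half-life decay law N(t) = N0 * 2**(-t / T_half).
HALF_LIFE_YEARS = 12.3  # tritium half-life, ~12.3 years

def fraction_remaining(years: float) -> float:
    """Return the fraction of the starting tritium inventory left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# Half of any stockpile is gone in one half-life, three quarters in two:
print(f"After 12.3 years: {fraction_remaining(12.3):.2f}")  # prints 0.50
print(f"After 24.6 years: {fraction_remaining(24.6):.2f}")  # prints 0.25
```

In other words, an inventory bought or bred today loses half its value every twelve years or so, which is why a reactor like ITER must breed its own fuel continuously rather than draw on a reserve.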
In short, ITER should be stopped dead in its tracks and abandoned. It won’t be, because too many reputations and too much money are on the line. It’s too bad. Scientific projects that are far worthier of funding will go begging as a result. At best my descendants will be able to say, “See, my grandpa told you so!”
Posted on October 15th, 2016
A limited number of common themes are always recognizable in human moral behavior. However, just as a limited number of atoms can combine to form a vast number of different molecules, so those themes can combine to form a vast variety of different moral systems. Those systems vary not only from place to place, but in the same place over time. A striking example of the latter may be found in the novels of George Gissing, most of which were published in the last quarter of the 19th century. Gissing was a deep-dyed Victorian conservative of a type that would be virtually unrecognizable to the conservatives of today. George Orwell admired him, and wrote a brief but brilliant essay about him that appears in In Front of Your Nose, the fourth volume of his collected essays, journalism and letters. Orwell described him as one of the greatest British novelists because of the accuracy with which he portrayed the poverty, sordid social conditions, and sharp caste distinctions in late Victorian England. Orwell was generous. Gissing condemned socialism, particularly in his novel Demos, whereas Orwell was a lifelong socialist.
According to the subtitle of the novel, it is “A story of English socialism.” Socialism was becoming increasingly fashionable in those days, but Gissing wasn’t a sympathizer. He wanted to preserve everything just as it had been at some halcyon time in the past. Hubert Eldon, the “hero” of the novel, wouldn’t pass for one in our time. Today he would probably be seen as a rent-seeking parasite. He was apparently unsuited for any kind of useful work, and spent most of his time gazing at pretty pictures in European art galleries when he wasn’t in England. When he was home his favorite pastime was to admire the country scenery near the village of Wanley, where he lived with his mother.
Eldon was expecting to inherit a vast sum of money from his brother's father-in-law, a self-made industrialist named Richard Mutimer. He could then marry the pristine Victorian heroine, Adela Waltham, who also lived in the village. However, to everyone's dismay, the old man dies, apparently intestate, and the lion's share of the money goes to a distant relative, also named Richard Mutimer, who happens to be a socialist workingman. The younger Mutimer uses the money to begin tearing the lovely valley apart in order to build mines and steel mills for a model socialist community. Adela's mother, a firm believer in the ennobling influence of money, insists that she marry Mutimer. Dutiful daughter that she is, she obeys, even though she loves Eldon. In the end, Mutimer is conveniently killed off. The old man's will is miraculously found, and it turns out Eldon inherits the money after all. This "hero" doesn't shrink from dismantling the socialist community that had been started by his rival, even though he knows it will throw the breadwinners of many families out of work. He thinks it too ugly, and wants to return the landscape to its original beauty. Obviously, the author thought he was being perfectly reasonable even though, as he mentions in passing, former workers in a socialist community would likely be blacklisted and unable to find work elsewhere. It goes without saying that the "hero" gets the girl in the end.
One of the reasons Orwell liked Gissing so much was the skill with which he documented the comparatively horrific conditions that prevailed in his time, allowing later readers to appreciate the vast improvement in the material welfare of the average citizen that has taken place in England since. Unfortunately, that improvement could never have taken place without the sacrifice of many pleasant country villages like Wanley. Gissing was nothing if not misanthropic, and probably would have rejected such progress even if he could have imagined it. In fact, old Mutimer was the first one to think of mining the valley, and the author speaks of the idea as follows:
It was of course a deplorable error to think of mining in the beautiful valley which had once been the Eldon’s estate. Richard Mutimer could not perceive that. He was a very old man, and possibly the instincts of his youth revived as his mind grew feebler; he imagined it the greatest kindness to Mrs. Eldon and her son to increase as much as possible the value of the property he would leave at his death. They, of course, could not even hint to him the pain with which they viewed so barbarous a scheme; he did not as much as suspect a possible objection.
Gissing not only accepted the rigid class distinctions of his day, but positively embraced them. In describing the elder Mutimer he writes,
Remaining the sturdiest of Conservatives, he bowed in sincere humility to those very claims which the Radical most angrily disallows: birth, hereditary station, recognised gentility – these things made the strongest demand upon his reverence. Such an attitude was a testimony to his own capacity for culture, since he knew not the meaning of vulgar adulation, and did in truth perceive the beauty of those qualities to which the uneducated Iconoclast is wholly blind.
The author leaves no doubt about his rejection of “progress” and his dim view of the coming 20th century in the following exchange between Eldon and his mother about the socialist Mutimer:
“Shall I tell you how I felt in talking with him? I seemed to be holding a dialogue with the 20th century, and you may think what that means.”
“Ah, it’s a long way off, Hubert.”
“I wish it were farther. The man was openly exultant; He stood for Demos grasping the scepter. I am glad, mother, that you leave Wanley before the air is poisoned.”
“Mr. Mutimer does not see that side of the question?”
“Not he! Do you imagine the twentieth century will leave one green spot on the earth’s surface?”
“My dear, it will always be necessary to grow grass and corn.”
“By no means; depend upon it. Such things will be cultivated by chemical processes. There will not be one inch left to nature; the very oceans will somehow be tamed, the snow mountains will be leveled. And with nature will perish art. What has a hungry Demos to do with the beautiful?”
Mrs. Eldon sighed gently.
“I shall not see it.”
Well, the twentieth century did turn out pretty badly, especially for socialism, but not quite that badly. Of course, one can detect some of the same themes in this exchange that one finds in the ideology of 21st century "Greens." However, I think the most interesting affinity is between the sentiments in Gissing's novels and the moral philosophy of G. E. Moore. I touched on the subject in an earlier post. Moore was the inventor of the "naturalistic fallacy," according to which all moral philosophers preceding him were wrong, because they insisted on defining "the Good" with reference to some natural object. Unfortunately, Moore's own version of "the Good" turned out to be every bit as slippery as any "sophisticated Christian's" version of God. It was neither fish nor fowl, mineral nor vegetable.
When Moore finally got around to giving us at least some hint of exactly what he was talking about in his Principia Ethica, we discovered to our surprise that “the Good” had nothing to do with the heroism of the Light Brigade, or Horatius at the Bridge. It had nothing to do with loyalty or honor. It had nothing to do with social justice or the brotherhood of man. Nor did it have anything to do with honesty, justice, or equality. In fact, Moore’s version of “the Good” turned out to be a real thigh slapper. It consisted of the “nice things” that appealed to English country gentlemen at more or less the same time that Gissing was writing his novels. It included such things as soothing country scenery, enchanting music, amusing conversations with other “good” people, and perhaps a nice cup of tea on the side. As Moore put it,
We can imagine the case of a single person, enjoying throughout eternity the contemplation of scenery as beautiful, and intercourse with persons as admirable, as can be imagined.
By far the most valuable things which we know or can imagine, are certain states of consciousness, which may be roughly described as the pleasures of human intercourse and the enjoyment of beautiful objects. No one, probably, who has asked himself the question, has ever doubted that personal affection and the appreciation of what is beautiful in Art or Nature, are good in themselves.
Well, actually, that’s not quite true. I’ve doubted it. Not only have I doubted it, but I consider the claim absurd. Those words were written in 1903. By that time a great many people were already aware of the connection between morality and evolution by natural selection. That connection was certainly familiar to Darwin himself, and a man named Edvard Westermarck spelled out the seemingly obvious implications of that connection in his The Origin and Development of the Moral Ideas a few years later, in 1906. Among those implications was the fact that the “good in itself” is pure fantasy. “Good” and “evil” are subjective artifacts that are the result of the behavioral predispositions we associate with morality filtered through the minds of creatures with large brains. Nature played the rather ill-natured trick of portraying them to us as real things because that’s the form in which they happened to maximize the odds that the genes responsible for them would survive and reproduce. (That, by the way, is why it is highly unlikely that “moral relativity” will ever be a problem for our species.) The fact that Moore was capable of writing such nonsense more than 40 years after Darwin appeared on the scene suggests that he must have lived a rather sheltered life.
In retrospect, it didn’t matter. Today Moore is revered as a great moral philosopher, and Westermarck is nearly forgotten. It turns out that the truth about morality was very inconvenient for the “experts on ethics.” It exposed them as charlatans who had devoted their careers to splitting hairs over the fine points of things that didn’t actually exist. It popped all their pretensions to superior wisdom and virtue like so many soap bubbles. The result was predictable. They embraced Moore and ignored Westermarck. In the process they didn’t neglect to spawn legions of brand new “experts on ethics” to take their places when they were gone. Thanks to their foresight we find the emperor’s new clothes are gaudier than ever in our own time.
The work of George Gissing is an amusing footnote to the story. We no longer have to scratch our heads wondering where on earth Moore came up with his singular notions about the “Good in itself.” It turns out the same ideas may be found fossilized in the works of a Victorian novelist. The “experts on ethics” have been grasping at a very flimsy straw indeed!
Posted on October 10th, 2016 8 comments
D. C. McAllister just posted an article entitled “America, You Have No Right to Judge Donald Trump” over at PJ Media. Setting aside the question of “rights” for the moment, I have to admit that she makes some good points. Here’s one of the better ones:
Those who are complaining about Trump today have no basis for their moral outrage. That’s because their secular amoral worldview rejects any basis for that moral judgment. Any argument they make against the “immorality” of Trump is stolen, or at least borrowed for expediency, from a religious worldview they have soundly rejected.
Exactly! It’s amazing that the religious apologists the Left despises can see immediately that they “have no basis for their moral outrage,” and yet the “enlightened” people on the Left never seem to get it. You can say what you want about the “religious worldview,” but a God that threatens to burn you in hell for billions and trillions of years unless you do what he says seems to me a pretty convincing “basis” for “acting morally.” The “enlightened” have never come up with anything of the sort. One typically finds them working themselves into high dudgeon of virtuous indignation without realizing that the “basis” for all their furious anathemas is nothing but thin air.
The reason for their latest outburst of pious outrage is threadbare enough. They claim that Trump is “immoral” because he engaged in “locker room talk” about women in what he supposed was a private conversation. Are you kidding me?! These people have just used their usual bullying tactics to impose a novel version of sexual morality on the rest of us that sets the old one recommended by a “religious worldview” on its head. Now, all of a sudden, we are to believe that they’ve all rediscovered their inner prude. Heaven forfend that anyone should dare to think of women as “objects!”
Puh-lease! I’d say the chances that 99 out of 100 of the newly pious MSM journalists who have been flogging this story have never engaged in similar talk or worse are vanishingly small. The other one is probably a eunuch. As for the “objectification of women,” I’m sorry to be a bearer of bad tidings, but that’s what men do. They are sexually attracted to the “object” of a woman’s body because it is their nature to be so attracted. That is how they evolved. It is highly unlikely that any of the current pious critics of “objectification” would be around today to register their complaints if that particular result of natural selection had never happened.
And what of Trump? Well, if nothing else, he’s been a very good educator. He’s shown us what the elites in the Republican Party really stand for. I personally will never look at the McCains, Romneys, and Ryans of the world in quite the same way. At the very least, I’m grateful to him for that.
Posted on October 10th, 2016 No comments
It really seems as if the weapon designers at the nation’s three nuclear weapons laboratories, Los Alamos, Livermore, and Sandia, never really believed that nuclear testing would ever end. If so, they were singularly blind to the consequences. Instead of taking the approach apparently adopted by the Russians of designing and testing robust warheads that could simply be scrapped and replaced with newly manufactured ones at the end of their service life, they decided to depend on a constant process of refurbishing old warheads, eliminating the ability to make new ones in the process. When our weapons got too old, they would be repeatedly patched up in so-called Life Extension Programs, or LEPs. Apparently it began to occur to an increasing number of people in the weapons community that maintaining the safety and reliability of the stockpile indefinitely using that approach might be a bit problematic.
The first “solution” to the problem proposed by the National Nuclear Security Administration (NNSA), the semi-autonomous agency within the Department of Energy (DOE) responsible for maintaining the nuclear stockpile, was the Reliable Replacement Warhead (RRW). It was to be robust, easy to manufacture, and easy to maintain. It was also a new, untested design. As such, it would have violated the spirit, if not the letter, of Article VI of the Non-proliferation Treaty (NPT). If it had been built, it would also very likely have forced violation of the Comprehensive Nuclear Test Ban Treaty (CTBT), which the U.S. has signed, but never ratified. It was claimed that the RRW could be built and certified without testing. This was very probably nonsense. There have always been more or less influential voices within NNSA, the Department of Defense (DoD), and the weapons labs, in favor of a return to nuclear testing. That would not have been a good thing then, and I doubt that it will be a good thing at any foreseeable time in the future. In general, I think we should do our best to keep the nuclear genie bottled up as long as possible. Fortunately, Congress agreed and killed the RRW Program.
That didn’t stop the weaponeers. They just tried a new gambit. It’s called the “3+2 Strategy.” There are currently four types of ballistic missile warheads, two bombs, and a cruise missile warhead in the U.S. arsenal. The basic idea of 3+2 would be to reduce this to three “interoperable” ballistic missile warheads and two air delivered weapons (a bomb and a cruise missile), explaining the “3+2.” In the process, the conventional chemical explosives that drive the implosion of the “atomic bomb” stage of the weapons would be replaced by insensitive high explosives (IHE). The result would supposedly be a safer, more secure stockpile that would be easier to maintain. The price tag, in round numbers, would be $60 billion.
I can only hope Congress will be as quick to deep-six 3+2 as it was with the RRW. The 3+2 will require tinkering not only with the bits surrounding the nuclear explosive package (NEP), but with the NEP itself. In other words, it’s just as much a violation of the spirit of Article VI of the NPT as was the RRW. The predictable result of any such changes will be the “sudden realization” by the weapons labs somewhere down the line that they can’t certify the new designs without a return to nuclear testing. There’s a better and, in the long run, probably cheaper way to maintain the stockpile.
In the first place, we need to stop relying on LEPs, and return to manufacturing replacement weapons. The common argument against this is that we have lost the ability to manufacture critical parts of our weapons since the end of testing, and in some cases the facilities and companies that supplied the parts no longer exist. Nonsense! The idea that a country responsible for a quarter of the entire world’s GDP has lost the ability to reproduce the weapons it was once able to design, build and test in a few years is ridiculous. We are told that subtle changes in materials might somehow severely degrade the performance of remanufactured weapons. I doubt it. Regardless, DOE has always known there was a solution to that problem. It’s called the Advanced Hydrodynamic Facility, or AHF.
Basically, the AHF would be a giant accelerator facility capable of producing beams that would be able to image an imploding nuclear weapon pit in three dimensions and at several times during the implosion. Serious studies of such a facility were done as long ago as the mid-90s, and there is no doubt that it is feasible. In actual experiments, of course, highly enriched uranium and plutonium would be replaced by surrogate materials such as tungsten, but the results would still determine with a high degree of confidence whether a given remanufactured primary would work or not. The primary, or “atomic bomb” part of a weapon, supplies the energy that sets off the secondary, or thermonuclear part. If the primary of a weapon works, then there can be little doubt that the secondary will work as well. The AHF would be expensive, which is probably the reason it still hasn’t been built. Given the $60 billion cost of 3+2, that decision may well prove to be penny-wise and pound-foolish.
The whole point of having a nuclear arsenal is its ability to deter enemies from attacking us. Every time people who are supposed to be the experts about such things question the reliability of our stockpile, they detract from its ability to deter. I think a remanufacturing capability along with the AHF is the best way to shut them up, preventing a very bad decision to resume nuclear testing in the process. I suggest we get on with it.
Posted on October 5th, 2016 3 comments
The truth about morality is both simple and obvious. It exists as a result of evolution by natural selection. From that it follows that it cannot possibly have a purpose or goal, and from that it follows that one cannot make “progress” towards fulfilling that nonexistent purpose or reaching that nonexistent goal. Simple and obvious as it is, no truth has been harder for mankind to accept.
The reason for this has to do with the nature of moral emotions themselves. They portray Good and Evil to us as real things that exist independent of human consciousness, when in fact they are subjective artifacts of our imaginations. That truth has always been hard for us to accept. It is particularly hard when self-esteem is based on the illusion of moral superiority. That illusion is obviously alive and well at a time when a large fraction of the population is capable of believing that another large fraction is “deplorable.” The fact that the result of indulging such illusions in the past has not infrequently been mass murder suggests that, as a matter of public safety, it may be useful to stop indulging them.
The “experts on ethics” delight in concocting chilling accounts of what will happen if we do stop indulging them. We are told that a world without objective moral truths will be a world of moral nihilism and moral chaos. The most obvious answer to such fantasies is, “So what?” Is the truth really irrelevant? Are we really expected to force ourselves to believe in lies because the truth is just too scary for us to face? Come to think of it, what, exactly, do we have now if not moral nihilism and moral chaos?
We live in a world in which every two-bit social justice warrior can invent some new “objective evil,” whether “cultural appropriation,” failure to memorize the 57 different flavors of gender, or some arcane “micro-aggression,” and work himself into a fine fit of virtuous indignation if no one takes him seriously. The very illusion that Good and Evil are objective things is regularly exploited to justify the crude bullying that is now used to enforce new “moral laws” that have suddenly been concocted out of the ethical vacuum. The unsuspecting owners of mom and pop bakeries wake up one morning to learn that they are now “deplorable,” and so “evil” that their business must be destroyed with a huge fine.
We live in a world in which hundreds of millions believe that other hundreds of millions who associate the word “begotten” with the “son of God,” or believe in the Trinity, are so evil that they will certainly burn in hell forever. These other hundreds of millions believe that heavenly bliss will be denied to anyone who doesn’t believe in a God with these attributes.
We live in a world in which the regime in charge of the most powerful country in the world believes it has such a monopoly on the “objective Good” that it can ignore international law, send its troops to occupy parts of another sovereign state, and dictate to the internationally recognized government of that state which parts of its territory it is allowed to control, and which not. It persists in this dubious method of defending the “Good” even though it risks launching a nuclear war in the process. The citizens in that country who happen to support one candidate for President don’t merely consider the citizens who support the opposing candidate wrong. They consider them objectively evil according to moral “laws” that apparently float about as insubstantial spirits, elevating themselves by their own bootstraps.
We live in a world in which evolutionary biologists, geneticists, and neuroscientists who are perfectly well aware of the evolutionary roots of morality nevertheless persist in cobbling together new moral systems that lack even so much as the threadbare semblance of a legitimate basis. The faux legitimacy that the old religions at least had the common decency to supply in the form of imaginary gods is thrown to the winds without a thought. In spite of that these same scientists expect the rest of us to take them seriously when they announce that, at long last, they’ve discovered the philosopher’s stone of objective Good and Evil, whether in the form of some whimsical notion of “human flourishing,” or perhaps a slightly retouched version of utilitarianism. In almost the same breath, they affirm the evolutionary basis of morality, and then proceed to denounce anyone who doesn’t conform to their newly minted moral “laws.” When it comes to morality, it is hard to imagine a more nihilistic and chaotic world.
I find it hard to believe that a world in which the subjective nature and rather humble evolutionary roots of all our exalted moral systems were commonly recognized, along with the obvious implications of these fundamental truths, could possibly be even more nihilistic and chaotic than the one we already live in. I doubt that “moral relativity” would prevail in such a world, for the simple reason that it is not in our nature to be moral relativists. We might even be able to come up with a set of “absolute” moral rules that would be obeyed, not because humanity had deluded itself into believing they were objectively true, but because of a common determination to punish free riders and cheaters. We might even be able to come up with some rational process for changing and adjusting the rules when necessary by common consent, rather than by the current “enlightened” process of successful bullying.
We would all be aware that even the most “exalted” and “noble” moral emotions, even those accompanied by stimulating music and rousing speeches, have a common origin: their tendency to improve the odds that the genes responsible for them would survive in a Pleistocene environment. Under the circumstances, it would be reasonable to doubt, not only their ability to detect “objective Good” and “objective Evil,” but the wisdom of paying any attention to them at all. Instead of swallowing the novel moral concoctions of pious charlatans without a murmur, we would begin to habitually greet them with the query, “Exactly what innate whim are you trying to satisfy?” We would certainly be very familiar with the tendency of every one of us, described so eloquently by Jonathan Haidt in his “The Righteous Mind,” to begin rationalizing our moral emotions as soon as we experience them, whether in response to “social injustice” or a rude driver who happened to cut us off on the way to work. We would realize that that very tendency also exists by virtue of evolution by natural selection, not because it is actually capable of unmasking social injustice, or distinguishing “evil” from “good” drivers, but merely because it improved our chances of survival when there were no cars, and no one had ever heard of such a thing as social justice.
I know, I’m starting to ramble. I’m imagining a utopia, but one can always dream.
Posted on October 3rd, 2016 8 comments
Once upon a time, half a century ago and more, several authors wrote books according to which certain animals, including human beings, are, at least in certain circumstances, predisposed to aggressive behavior. Prominent among them was On Aggression, published in English in 1966 by Konrad Lorenz. Other authors included Desmond Morris (The Naked Ape, 1967), Lionel Tiger (Men in Groups, 1969) and Robin Fox (The Imperial Animal, co-authored with Tiger, 1971). The most prominent and widely read of all was the inimitable Robert Ardrey (African Genesis, 1961, The Territorial Imperative, 1966, The Social Contract, 1970, and The Hunting Hypothesis, 1976). Why were these books important, or even written to begin with? After all, the fact of innate aggression, then as now, was familiar to any child who happened to own a dog. Well, because the “men of science” disagreed. They insisted that there were no innate tendencies to aggression, in man or any of the other higher animals. It was all the fault of unfortunate cultural developments back around the start of the Neolithic era, or of the baneful environmental influence of “frustration.”
Do you think I’m kidding? By all means, read the source literature! For example, according to a book entitled Aggression by “dog expert” John Paul Scott published in 1958 by the University of Chicago Press,
All research findings point to the fact that there is no physiological evidence of any internal need or spontaneous driving force for fighting; that all stimulation for aggression eventually comes from the forces present in the external environment.
A bit later, in 1962 in a book entitled Roots of Behavior he added,
All our present data indicate that fighting behavior among the higher mammals, including man, originates in external stimulation and that there is no evidence of spontaneous internal stimulation.
Ashley Montagu added the following “scientific fact” about apes (including chimpanzees!) in his “Man and Aggression,” published in 1968:
The field studies of Schaller on the gorilla, of Goodall on the chimpanzee, of Harrison on the orang-utan, as well as those of others, show these creatures to be anything but irascible. All the field observers agree that these creatures are amiable and quite unaggressive, and there is not the least reason to suppose that man’s pre-human primate ancestors were in any way different.
When Goodall dared to contradict Montagu and report what she had actually seen, she was furiously denounced in vile attacks by the likes of Brian Deer, who chivalrously recorded in an article published in the Sunday Times in 1997,
…the former waitress had arrived at Gombe, ordered the grass cut and dumped vast quantities of trucked-in bananas, before documenting a fractious pandemonium of the apes. Soon she was writing about vicious hunting parties in which our cheery cousins trapped colobus monkeys and ripped them to bits, just for fun.
This remarkable transformation from Montagu’s expert in the field to Deer’s “former waitress” was typical of the way “science” was done by the Blank Slaters in those days. This type of “science” should be familiar to modern readers, who have witnessed what happens to anyone who dares to challenge the current climate change dogmas.
Fast forward to 2016. A paper entitled The phylogenetic roots of human lethal violence has just been published in the prestigious journal Nature. The first figure in the paper has the provocative title, “Evolution of lethal aggression in non-human mammals.” It not only accepts the fact of “spontaneous internal stimulation” of aggression without a murmur, but actually quantifies it in no less than 1024 species of mammals! According to the abstract,
Here we propose a conceptual approach towards understanding these roots based on the assumption that aggression in mammals, including humans, has a significant phylogenetic component. By compiling sources of mortality from a comprehensive sample of mammals, we assessed the percentage of deaths due to conspecifics and, using phylogenetic comparative tools, predicted this value for humans. The proportion of human deaths phylogenetically predicted to be caused by interpersonal violence stood at 2%.
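For what it's worth, the flavor of the calculation the abstract describes can be illustrated with a toy sketch. Be warned that everything below is invented for illustration: the species list, divergence times, and mortality fractions are hypothetical round numbers, and the crude inverse-distance weighting is a stand-in for the formal Bayesian phylogenetic comparative methods the paper actually applies to its sample of roughly 1,024 mammal species.

```python
# Toy sketch of "phylogenetic prediction": estimate a trait value for one
# species (humans) from the trait values of its relatives, weighting close
# relatives more heavily. All numbers below are hypothetical.

# Hypothetical fraction of deaths caused by conspecifics for a few primates.
conspecific_death_rate = {
    "chimpanzee": 0.045,
    "bonobo": 0.007,
    "gorilla": 0.020,
    "baboon": 0.010,
}

# Hypothetical divergence times from humans, in millions of years.
# Closer relatives get more weight in the prediction.
divergence_from_human = {
    "chimpanzee": 6.0,
    "bonobo": 6.0,
    "gorilla": 9.0,
    "baboon": 30.0,
}

def predict_human_rate(rates, distances):
    """Inverse-distance-weighted average of relatives' trait values --
    a crude stand-in for phylogenetic comparative prediction."""
    weights = {species: 1.0 / distances[species] for species in rates}
    total = sum(weights.values())
    return sum(rates[sp] * w for sp, w in weights.items()) / total

if __name__ == "__main__":
    prediction = predict_human_rate(conspecific_death_rate, divergence_from_human)
    print(f"Predicted human rate of lethal conspecific violence: {prediction:.1%}")
```

With these made-up inputs the sketch lands in the low single digits of percent, which is at least the right neighborhood for the paper's 2% figure; the point is only to show the shape of the inference, in which a species' expected value is borrowed from its phylogenetic neighbors.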
All this and more is set down in the usual scientific deadpan without the least hint that the notion of such a “significant phylogenetic component” was ever seriously challenged. Unfortunately the paper itself is behind Nature’s paywall, but there’s a free review with extracts from the paper by Ed Yong on the website of The Atlantic, and Jerry Coyne also reviewed the paper over at his Why Evolution is True website. Citing the paper Yong notes,
It’s likely that primates are especially violent because we are both territorial and social—two factors that respectively provide motive and opportunity for murder. So it goes for humans. As we moved from small bands to medium-sized tribes to large chiefdoms, our rates of lethal violence increased.
“Territorial and social!?” Whoever wrote such stuff? Oh, now I remember! It was a guy named Robert Ardrey, who happened to be the author of The Territorial Imperative and The Social Contract. Chalk up another one for the “mere playwright.” Yet again, he was right, and almost all the “men of science” were wrong. Do you think he’ll ever get the credit he deserves from our latter day “men of science?” Naw, neither do I. Some things are just too embarrassing to admit.
Posted on August 14th, 2016 7 comments
“Moral progress” is impossible. It is a concept that implies progress towards a goal that doesn’t exist. We exist as a result of evolution by natural selection, a process that has simply happened. Progress implies the existence of an entity sufficiently intelligent to formulate a goal or purpose towards which progress is made. No such entity has directed the process, nor did one even exist over most of the period during which it occurred. The emotional predispositions that are the root cause of what we understand by the term “morality” are as much an outcome of natural selection as our hands or feet. Like our hands and feet, they exist solely because they have enhanced the probability that the genes responsible for their existence would survive and reproduce. There is increasing acceptance, even among the “experts on ethics,” of the fact that morality owes its existence to evolution by natural selection. However, as a rule they have been incapable of grasping the obvious implication of that fact: that the notion of “moral progress” is a chimera. It is a truth that has been too inconvenient for them to bear.
It’s not difficult to understand why. Their social gravitas and often their very livelihood depend on propping up the illusion. This is particularly true of the “experts” in academia, who often lack marketable skills other than their “expertise” in something that doesn’t exist. Their modus operandi consists of hoodwinking the rest of us into believing that satisfying some whim that happens to be fashionable within their tribe represents “moral progress.” Such “progress” has no more intrinsic value than a five year old’s progress towards acquiring a lollipop. Often it can be reasonably expected to lead to outcomes that are the opposite of those that account for the existence of the whim to begin with, resulting in what I have referred to in earlier posts as a morality inversion. Propping up the illusion in spite of recognition of the evolutionary roots of morality in a milieu that long ago dispensed with the luxury of a God with a big club to serve as the final arbiter of what is “really good” and “really evil” is no mean task. Among other things it requires some often amusing intellectual contortions as well as the concoction of an arcane jargon to serve as a smokescreen.
Consider, for example, a paper by Professors Allen Buchanan and Russell Powell entitled Toward a Naturalistic Theory of Moral Progress. It turned up in the journal Ethics, that ever reliable guide to academic fashion touching on the question of “human flourishing.” Far from denying the existence of human nature after the fashion of the Blank Slaters of old, the authors positively embrace it. They cheerfully admit its relevance to morality, noting in particular the existence of a predisposition in our species to perceive others of our species in terms of ingroups and outgroups; what Robert Ardrey used to call the Amity/Enmity Complex. Now, if these things are true, and absent the miraculous discovery of some contributing “root cause” for morality other than evolution by natural selection, whether in this world or the realm of spirits, it follows logically that “progress” is a term that can no more apply to morality than it does to evolution by natural selection itself. It further follows that objective Good and objective Evil are purely imaginary categories. In other words, unless one is merely referring to the scientific investigation of evolved behavioral traits, “experts on ethics” are experts about nothing. Their claim to possess a philosopher’s stone pointing the way to how we should act is a chimera. For the last several thousand years they have been involved in a sterile game of bamboozling the rest of us, and themselves to boot.
Predictably, the embarrassment and loss of gravitas, not to mention the loss of a regular paycheck, implied by such a straightforward admission of the obvious has been more than the “experts” could bear. They’ve simply gone about their business as if nothing had happened, and no one had ever heard of a man named Darwin. It’s actually been quite easy for them in this puritanical and politically correct age, in which the intellectual life and self-esteem of so many depend on maintaining a constant state of virtuous indignation and moral outrage. Virtuous indignation and moral outrage are absurd absent the existence of an objective moral standard. Since nothing of the sort exists, it is simply invented, and everyone stays outraged and happy.
In view of this pressing need to prop up the moral fashions of the day, then, it follows that no great demands are placed on the rigor of modern techniques for concocting real Good and real Evil. Consider, for example, the paper referred to above. The authors go to a great deal of trouble to assure their readers that their theory of “moral progress” really is “naturalistic.” In this enlightened age, they tell us, they will finally be able to steer clear of the flaws that plagued earlier attempts to develop secular moralities. These were all based on false assumptions “based on folk psychology, flawed attempts to develop empirically based psychological theories, a priori speculation, and reflections on history hampered both by a lack of information and inadequate methodology.” “For the first time,” they tell us, “we are beginning to develop genuinely scientific knowledge about human nature, especially through the development of empirical psychological theories that take evolutionary biology seriously.” This begs the question, of course, of how we’ve managed to avoid acquiring “scientific knowledge about human nature” and “taking evolutionary biology seriously” for so long. But I digress. The important question is, how do the authors manage to establish a rational basis for their “naturalistic theory of moral progress” while avoiding the Scylla of “folk psychology” on the one hand and the Charybdis of “a priori speculation” on the other? It turns out that the “basis” in question hardly demands any complex mental gymnastics. It is simply assumed!
Here’s the money passage in the paper:
A general theory of moral progress could take a more or less ambitious form. The more ambitious form would be to ground an account of which sorts of changes are morally progressive in a normative ethical theory that is compatible with a defensible metaethics… In what follows we take the more modest path: we set aside metaethical challenges to the notion of moral progress, we make no attempt to ground the claim that certain moralities are in fact better than others, and we do not defend any particular account of what it is for one morality to be better than another. Instead, we assume that the emergence of certain types of moral inclusivity are significant instances of moral progress and then use these as test cases for exploring the feasibility of a naturalized account of moral progress.
This is indeed a strange approach to being “naturalistic.” After excoriating the legions of thinkers before them for their faulty mode of hunting the philosopher’s stone of “moral progress,” they simply assume it exists. It exists in spite of the elementary chain of logic leading inexorably to the conclusion that it can’t possibly exist if their own claims about the origins of morality in human nature are true. In what must count as a remarkable coincidence, it exists in the form of “inclusivity,” currently in high fashion as one of the shibboleths defining the ideological box within which most of today’s “experts on ethics” happen to dwell. Those who trouble themselves to read the paper will find that, in what follows, it is hardly treated as a mere modest assumption, but as an established, objective fact. “Moral progress” is alluded to over and over again as if, by virtue of this original, “modest assumption,” the real thing somehow magically popped into existence in the guise of “inclusivity.”
Suppose we refrain from questioning the plot, and go along with the charade. If inclusivity is really to count as moral progress, then it must not only be desirable in certain precincts of academia, but actually feasible. However, if, as the authors agree, humans are predisposed to perceive others of their species in terms of ingroups and outgroups, the feasibility of inclusivity is at least in question. As the authors put it,
Attempts to draw connections between contemporary evolutionary theories of morality and the possibility of inclusivist moral progress begin with the standard evolutionary psychological assertion that the main contours of human moral capacities emerged through a process of natural selection on hunter-gatherer groups in the Pleistocene – in the so-called environment of evolutionary adaptation (EEA)… The crucial claim, which leads some thinkers to draw a pessimistic inference about the possibility of inclusivist moral progress, is that selection pressures in the EEA favored exclusivist moralities. These are moralities that feature robust moral commitments among group members but either deny moral standing to outsiders altogether, relegate out-group members to a substantially inferior status, or assign moral standing to outsiders contingent on strategic (self-serving) considerations.
No matter, according to the authors, this flaw in our evolved moral repertoire can be easily fixed. All we have to do is lift ourselves out of the EEA, achieve universal prosperity so great and pervasive that competition becomes unnecessary, and the predispositions in question will simply fade away, more or less like the state under Communism. Invoking that wonderful term “plasticity,” which seems to pop up with every new attempt to finesse human behavioral traits out of existence, they write,
According to an account of exclusivist morality as a conditionally expressed (adaptively plastic) trait, the suite of attitudes and behaviors associated with exclusivist tendencies develop only when cues that were in the past highly correlated with out-group threat are detected.
In other words, it is the fond hope of the authors that, if only we can make the environment in which inconvenient behavioral predispositions evolved disappear, the traits themselves will disappear as well! They go on to claim that this has actually happened, and that,
…exclusivist moral tendencies are attenuated in populations inhabiting environments in which cues of out-group threat are absent.
Clearly we have seen a vast expansion in the number of human beings that can be perceived as ingroup since the Pleistocene, and the inclusion as ingroup of racial and religious categories that once defined outgroups. There is certainly plasticity in how ingroups and outgroups are actually defined and perceived, as one might expect of traits evolved during times of rapid environmental change in the nature of the “others” one happened to be in contact with or aware of at any given time. However, this hardly “proves” that the fundamental tendency to distinguish between ingroups and outgroups itself will disappear or is likely to disappear in response to any environmental change whatever. Perhaps the best way to demonstrate this is to refer to the paper itself.
Clearly the authors imagine themselves to be “inclusive,” but is that really the case? Hardly! It turns out they have a very robust perception of outgroup. They’ve merely fallen victim to the fallacy that it “doesn’t count” because it’s defined in ideological rather than racial or religious terms. Their outgroup may be broadly defined as “conservatives.” These “conservatives” are mentioned over and over again in the paper, always in the guise of the bad guys who are supposed to reject inclusivism and resist “moral progress.” To cite a few examples,
We show that although current evolutionary psychological understandings of human morality do not, contrary to the contentions of some authors, support conservative ethical and political conclusions, they do paint a picture of human morality that challenges traditional liberal accounts of moral progress.
…there is no good reason to believe conservative claims that the shift toward greater inclusiveness has reached its limit or is unsustainable.
These “evoconservatives,” as we have labeled them, infer from evolutionary explanations of morality that inclusivist moralities are not psychologically feasible for human beings.
At the same time, there is strong evidence that the development of exclusivist moral tendencies – or what evolutionary psychologists refer to as “in-group assortative sociality,” which is associated with ethnocentric, xenophobic, authoritarian, and conservative psychological orientations – is sensitive to environmental cues…
and so on, and so on. In a word, although the good professors are fond of pointing with pride to their vastly expanded ingroup, they have rather more difficulty seeing their vastly expanded outgroup, more or less like the difficulty we have seeing the nose at the end of our face. The fact that the conservative outgroup is perceived with as much fury, disgust, and hatred as ever a Grand Dragon of the Ku Klux Klan felt for blacks or Catholics can be confirmed by simply reading through the comment section of any popular website of the ideological Left. Unless professors employed by philosophy departments live under circumstances more reminiscent of the Pleistocene than I had imagined, this bodes ill for their theory of “moral progress” based on “inclusivity.” More evidence that this is the case is easily available to anyone who cares to search the philosophy department of the local university for “diversity” in the form of a professor who could, by any stretch of the imagination, be described as conservative. The search will be in vain.
I note in passing another passage in the paper that demonstrates the fanaticism with which the chimera of “moral progress” is pursued in some circles. Again quoting the authors,
Some moral philosophers, whom we have elsewhere called “evoliberals,” have tacitly affirmed the evo-conservative view in arguing that biomedical interventions that enhance human moral capacities are likely to be crucial for major moral progress due to evolved constraints on human moral nature.
In a word, the delusion of moral progress is not necessarily just a harmless toy for the entertainment of professors of philosophy, at least as far as those who might have some objection to “biomedical interventions” carried out by self-appointed “experts on ethics” are concerned.
What’s the point? The point is that we are unlikely to make progress of any kind without first accepting the truth about our own nature, and the elementary logical implications of that truth. Darwin saw them, Westermarck saw them, and they are far more obvious today than they were then. We continue to ignore them at our peril.
Posted on June 5th, 2016 17 comments
It’s heartening to learn that there is a serious basis for recent speculation to the effect that the science of animal cognition may gradually advance to a level long familiar to any child with a pet dog. Frans de Waal breaks the news in his latest book, Are We Smart Enough to Know How Smart Animals Are? In answer to his own question, de Waal writes,
The short answer is “Yes, but you’d never have guessed.” For most of the last century, science was overly cautious and skeptical about the intelligence of animals. Attributing intentions and emotions to animals was seen as naïve “folk” nonsense. We, the scientists, knew better! We never went in for any of this “my dog is jealous” stuff, or “my cat knows what she wants,” let alone anything more complicated, such as that animals might reflect on the past or feel one another’s pain… The two dominant schools of thought viewed animals as either stimulus-response machines out to obtain rewards and avoid punishment or as robots genetically endowed with useful instincts. While each school fought the other and deemed it too narrow, they shared a fundamentally mechanistic outlook: there was no need to worry about the internal lives of animals, and anyone who did was anthropomorphic, romantic and unscientific.
Did we have to go through this bleak period? In earlier days, the thinking was noticeably more liberal. Charles Darwin wrote extensively about human and animal emotions, and many a scientist in the nineteenth century was eager to find higher intelligence in animals. It remains a mystery why these efforts were temporarily suspended, and why we voluntarily hung a millstone around the neck of biology.
Here I must beg to differ with de Waal. It is by no means a “mystery.” This “mechanization” of animals in the sciences was more or less contemporaneous with the Blank Slate debacle, and was motivated by more or less the same ideological imperatives. I invite readers interested in the subject to consult the first few chapters of Robert Ardrey’s African Genesis, published as far back as 1961. Noting a blurb in Scientific American by Marshall Sahlins, more familiar to later readers as a collaborator in the slander of Napoleon Chagnon, to the effect that,
There is a quantum difference, at points a complete opposition, between even the most rudimentary human society and the most advanced subhuman primate one. The discontinuity implies that the emergence of human society required some suppression, rather than direct expression, of man’s primate nature. Human social life is culturally, not biologically determined.
Ardrey, that greatest of all debunkers of the Blank Slate, continues,
Dr. Sahlins’ conclusion is startling to no one but himself. It is a scientific restatement, 1960-style, of the philosophical conclusion of an eighteenth-century Neapolitan monk (Giambattista Vico, ed.): Society is the work of man. It is just another prop, fashioned in the shop of science’s orthodoxies from the lumber of Zuckerman’s myth, to support the fallacy of human uniqueness.
The Zuckerman Ardrey refers to is anthropologist Solly Zuckerman. I invite anyone who doubts the fanaticism with which “science” once insisted on the notion of human uniqueness alluded to in de Waal’s book to read some of Zuckerman’s papers. For example, in The Social Life of Monkeys and Apes, he writes,
It is now generally recognized that anthropomorphic preoccupations do not help the critical development of knowledge, either in fields of physical or biological inquiry.
He exulted in the great “advances” science had made in correcting the “mistakes” of Darwin:
The Darwinian period, in which animal behavior as a distinct study was born, was one in which anthropomorphic interpretation flourished. Anecdotes were regarded in the most generous light, and it was believed that many animals were highly rational creatures, possessed of exalted ethical codes of social behavior.
According to Zuckerman, “science” had now discovered that the very notion of animal “intelligence” was absurd. As he put it,
Until 1890, the study of the social behavior of mammals developed hand in hand with the study of their “intelligence,” and both subjects were usually treated in the same books.
Such comments, which are ubiquitous in the literature of the Blank Slate era, make it hard to understand how de Waal can still be “mystified” about the motivation for the “scientific” denial of animal intelligence. Be that as it may, he presents a wealth of data derived from recent experiments and field studies debunking all the lingering rationales for claims of human uniqueness one by one, whether it be the ability to experience emotion, a “theory of mind,” social problem solving ability, ability to contemplate the past and future, or even consciousness. In the process he documents the methods “science” used to hermetically seal itself off from reality, such as the invention of pejorative terms like “anthropomorphism” to denounce and dismiss anyone who dared to challenge the human uniqueness orthodoxy, and the rejection of all evidence not supplied by members of the club as mere “anecdotes.” In the process he notes,
Needing a new term to make my point, I invented anthropodenial, which is the a priori rejection of humanlike traits in other animals or animallike traits in us.
It’s hard to imagine that anyone could seriously believe that “science” consists of fanatically rejecting similarities between human and animal behavior that are obvious to everyone but “scientists” as “anthropomorphism” and “anecdotes” and assuming a priori that they’re of no significance until it can be absolutely proven that everyone else was right all along. This does not strike me as a “parsimonious” approach.
Not the least interesting feature of de Waal’s latest is his “rehabilitation” of several important debunkers of the Blank Slate who were unfortunate enough to publish before the appearance of E. O. Wilson’s Sociobiology in 1975. According to the fairy tale that currently passes for the “history” of the Blank Slate, before 1975 “darkness was on the face of the deep.” Only then did Wilson appear on the scene as the heroic slayer of the Blank Slate dragon. A man named Robert Ardrey was never heard of, and anyone mentioned in his books as an opponent of the Blank Slate before the Wilson “singularity” is to be ignored. The most prominent of them all, a man on whom the anathemas of the Blank Slaters often fell, literally in the same breath as Ardrey, was Konrad Lorenz. Sure enough, in Steven Pinker’s fanciful “history” of the Blank Slate, Lorenz is dismissed, in the same paragraph with Ardrey, no less, as “totally and utterly wrong,” and a delusional believer in “archaic theories such as that aggression was like the discharge of a hydraulic pressure.” De Waal’s response must be somewhat discomfiting to the promoters of Pinker’s official “history.” He simply ignores it!
Astoundingly enough, de Waal speaks of Lorenz as one of the great founding fathers of the modern sciences of animal behavior and cognition. In other words, he tells the truth, as if it had never been disputed in any bowdlerized “history.” Already at the end of the prologue we find the matter-of-fact observation that,
…behavior is, as the Austrian ethologist Konrad Lorenz put it, the liveliest aspect of all that lives.
Reading on, we find that this mention of Lorenz wasn’t just an anomaly designed to wake up drowsy readers. In the first chapter we find de Waal referring to the field of phylogeny,
…when we trace traits across the evolutionary tree to determine whether similarities are due to common descent, the way Lorenz had done so beautifully for waterfowl.
A few pages later he writes,
The maestro of observation, Konrad Lorenz, believed that one could not investigate animals effectively without an intuitive understanding grounded in love and respect.
and notes, referring to the behaviorists, that,
The power of conditioning is not in doubt, but the early investigators had totally overlooked a crucial piece of information. They had not, as recommended by Lorenz, considered the whole organism.
And finally, in a passage that seems to scoff at Pinker’s “totally and utterly wrong” nonsense, he writes,
Given that the facial musculature of humans and chimpanzees is nearly identical, the laughing, grinning, and pouting of both species likely goes back to a common ancestor. Recognition of the parallel between anatomy and behavior was a great leap forward, which is nowadays taken for granted. We all now believe in behavioral evolution, which makes us Lorenzians.
Stunning, really, for anyone who’s followed what’s been going on in the behavioral and animal sciences for any length of time. And that’s not all. Other Blank Slate debunkers who published long before Wilson, like Niko Tinbergen and Desmond Morris, are mentioned with a respect that belies the fact that they, too, were once denounced by the Blank Slaters as right wing fascists and racists in the same breath with Lorenz. I have a hard time believing that someone as obviously well read as de Waal has never seen Pinker’s The Blank Slate. I honestly don’t know what to make of the fact that he can so blatantly contradict Pinker, and yet never trouble himself to mention even the bare existence of such a remarkable disconnect. Is he afraid of Pinker? Does he simply want to avoid hurting the feelings of another member of the academic tribe? I must leave it up to the reader to decide.
And what of Ardrey, who brilliantly described both “anthropodenial” and the reasons that it was by no means a “mystery” more than half a century before the appearance of de Waal’s latest book? Will he be rehabilitated, too? Don’t hold your breath. Unlike Lorenz, Tinbergen and Morris, he didn’t belong to the academic tribe. The fact that it took an outsider to smash the Blank Slate and give a few academics the courage to finally stick their noses out of the hole they’d dug for themselves will likely remain deep in the memory hole. It happens to be a fact that is just too humiliating and embarrassing for them to ever admit. It would seem the history of the affair can be adjusted, but it will probably never be corrected.