Posted on October 17th, 2016
At the moment the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is in a class by itself when it comes to inertial confinement fusion (ICF) facilities. That may change before too long. A paper by a group of Chinese authors describing a novel three-axis cylindrical hohlraum design recently appeared in the prestigious journal Nature. In ICF jargon, a “hohlraum” is a container, typically cylindrical in form. Powerful laser beams are aimed through two or more entrance holes to illuminate the inner wall of the hohlraum, producing a burst of x-rays. These strike a target mounted inside the hohlraum containing fusion fuel, typically heavy isotopes of hydrogen, causing it to implode. At maximum compression, the shocks driven into the target are supposed to converge at the center, heating a small “hot spot” to fusion conditions. Unfortunately, such “indirect drive” experiments haven’t worked so far on the NIF. The 1.8 megajoules delivered by NIF’s 192 laser beams haven’t been enough to achieve fusion with current target designs, even though the beams are very clean and uniform, and the facility itself is working as designed. Perhaps the most interesting thing about the Chinese paper is not the novel three-axis hohlraum design itself, but the fact that its authors are still interested in ICF at all in spite of the failure of the NIF to achieve ignition to date. To the best of my knowledge, they are still planning to build SG-IV, a 1.5 megajoule facility, with ignition experiments slated for the early 2020s.
Why would the Chinese want to continue building a 1.5 megajoule facility in spite of the fact that U.S. scientists have failed to achieve ignition with the 1.8 megajoule NIF? For the answer, one need only look at who paid for the NIF, and why. The project was paid for by the people at the Department of Energy (DOE) responsible for maintaining the nuclear stockpile. Many of our weapons designers were ambivalent about the value of achieving ignition before the facility was built, and were more interested in the facility’s ability to access physical conditions relevant to those in exploding nuclear weapons for studying key aspects of nuclear weapon physics such as equation of state (EOS) and opacity of materials under extreme conditions. I suspect that’s why the Chinese are pressing ahead as well. Meanwhile, the Russians have also announced a super-laser project of their own that they claim will deliver energies of 2.8 megajoules.
In the wake of the failed indirect drive experiments on the NIF, scientists who favor the direct drive approach have been pleading their case. In direct drive experiments the laser beams are shot directly at the fusion target instead of at the inner walls of a hohlraum. The default approach for the NIF has always been indirect drive, but direct drive may be feasible there using a technique called “polar direct drive.” In recent experiments at the OMEGA laser facility at the University of Rochester’s Laboratory for Laser Energetics, the nation’s premier direct drive facility, scientists claim to have achieved results that, if scaled up to the energies available on the NIF, would produce five times more fusion energy output than has been achieved with indirect drive to date.
Meanwhile, construction continues on ITER, a fusion facility designed purely for energy applications. ITER will rely on magnetic plasma confinement, the other “mainstream” approach to harnessing fusion energy. The project is a white elephant that continues to devour ever increasing amounts of scarce scientific funding in spite of the fact that the chances that magnetic fusion will ever be a viable source of electric power are virtually nil. That fact should be obvious by now, and yet the project staggers forward, seemingly with a life of its own. Watching its progress is something like watching the Titanic’s progress towards the iceberg. Within the last decade the projected cost of ITER has metastasized from the original 6 billion euros to 15 billion euros in 2010, and finally to the latest estimate of 20 billion euros. There are no plans to even fuel the facility for full power fusion until 2035! It boggles the mind.
Magnetic fusion of the type envisioned for ITER will never come close to being an economically competitive source of power. It would already be a stretch if it were merely a question of controlling an unruly plasma and figuring out a viable way to extract the fusion energy. Unfortunately, there’s another problem. Remember all those yarns you’ve been told about an unlimited supply of fuel on hand in the form of sea water? In fact, reactors like ITER won’t work without a heavy isotope of hydrogen known as tritium. A tritium nucleus contains a proton and two neutrons, and, for all practical purposes, the isotope doesn’t occur in nature, in sea water or anywhere else. It is highly radioactive, with a very short half-life of a bit over 12 years, and the only way to get it is to breed it. We are told that fast neutrons from the fusion reactions will breed sufficient tritium in lithium blankets surrounding the reaction chamber. That may work on paper, but breeding enough of the isotope and then somehow extracting it will be an engineering nightmare. There is virtually no chance that such reactors will ever be economically competitive with renewable power sources combined with baseload power supplied by proven fission breeder reactor technologies. Such breeder reactors can consume most of the long-lived transuranic waste they produce.
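The half-life figure above is worth pausing over, because it is why tritium cannot simply be stockpiled years in advance. A quick back-of-the-envelope sketch makes the point; the only number taken from the text is the roughly 12-year half-life (12.3 years is the commonly quoted value), and the exponential-decay formula is standard:

```python
# Illustrative arithmetic only: how fast a tritium inventory shrinks.
# Tritium's half-life is roughly 12.3 years (the "bit over 12 years"
# mentioned above), so decay follows N(t) = N0 * 0.5**(t / t_half).

T_HALF = 12.3  # years, approximate

def fraction_remaining(years: float, t_half: float = T_HALF) -> float:
    """Fraction of an initial tritium inventory left after `years`."""
    return 0.5 ** (years / t_half)

# Even on human timescales the losses are substantial:
for y in (1, 5, 12.3, 25):
    print(f"after {y:5.1f} years: {fraction_remaining(y):.1%} remains")
```

Roughly 5% of any stockpile disappears every year, which is why a reactor must breed its own tritium continuously rather than draw down a reserve.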
In short, ITER should be stopped dead in its tracks and abandoned. It won’t be, because too many reputations and too much money are on the line. It’s too bad. Scientific projects that are far worthier of funding will go begging as a result. At best my descendants will be able to say, “See, my grandpa told you so!”
Posted on October 15th, 2016
A limited number of common themes are always recognizable in human moral behavior. However, just as a limited number of atoms can combine to form a vast number of different molecules, so those themes can combine to form a vast variety of different moral systems. Those systems vary not only from place to place, but in the same place over time. A striking example of the latter may be found in the novels of George Gissing, most of which were published in the last quarter of the 19th century. Gissing was a deep-dyed Victorian conservative of a type that would be virtually unrecognizable to the conservatives of today. George Orwell admired him, and wrote a brief but brilliant essay about him that appears in In Front of Your Nose, the fourth volume of his collected essays, journalism and letters. Orwell described him as one of the greatest British novelists because of the accuracy with which he portrayed the poverty, sordid social conditions, and sharp caste distinctions in late Victorian England. Orwell was generous. Gissing condemned socialism, particularly in his novel Demos, whereas Orwell was a lifelong socialist.
According to its subtitle, Demos is “A story of English socialism.” Socialism was becoming increasingly fashionable in those days, but Gissing wasn’t a sympathizer. He wanted to preserve everything just as it had been at some halcyon time in the past. Hubert Eldon, the “hero” of the novel, wouldn’t pass for one in our time. Today he would probably be seen as a rent-seeking parasite. He was apparently unsuited for any kind of useful work, and spent most of his time gazing at pretty pictures in European art galleries when he wasn’t in England. When he was home his favorite pastime was to admire the country scenery near the village of Wanley, where he lived with his mother.
Eldon was expecting to inherit a vast sum of money from his brother’s father-in-law, a self-made industrialist named Richard Mutimer. He could then marry the pristine Victorian heroine, Adela Waltham, who also lived in the village. However, to everyone’s dismay, the old man dies intestate, and the lion’s share of the money goes to a distant relative, also named Richard Mutimer, who happens to be a socialist workingman. The younger Mutimer uses the money to begin tearing the lovely valley apart in order to build mines and steel mills for a model socialist community. Adela’s mother, a firm believer in the ennobling influence of money, insists that she marry Mutimer. Dutiful daughter that she is, she obeys, even though she loves Eldon. In the end, Mutimer is conveniently killed off. The old man’s will is miraculously found, and it turns out Eldon inherits the money after all. This “hero” doesn’t shrink from dismantling the socialist community that had been started by his rival, even though he knows it will throw the breadwinners of many families out of work. He thinks it too ugly, and wants to return the landscape to its original beauty. Obviously, the author considered him perfectly reasonable even though, as he mentions in passing, former workers in a socialist community would likely be blacklisted and unable to find work elsewhere. It goes without saying that the “hero” gets the girl in the end.
One of the reasons Orwell liked Gissing so much was the skill with which he documented the comparatively horrific conditions that prevailed in his own day, which in turn reveal the vast improvement in the material welfare of the average English citizen that has taken place since. Unfortunately, that improvement could never have happened without the sacrifice of many pleasant country villages like Wanley. Gissing was nothing if not misanthropic, and probably would have rejected such progress even if he could have imagined it. In fact, old Mutimer was the first one to think of mining the valley, and the author speaks of the idea as follows:
It was of course a deplorable error to think of mining in the beautiful valley which had once been the Eldon’s estate. Richard Mutimer could not perceive that. He was a very old man, and possibly the instincts of his youth revived as his mind grew feebler; he imagined it the greatest kindness to Mrs. Eldon and her son to increase as much as possible the value of the property he would leave at his death. They, of course, could not even hint to him the pain with which they viewed so barbarous a scheme; he did not as much as suspect a possible objection.
Gissing not only accepted the rigid class distinctions of his day, but positively embraced them. In describing the elder Mutimer he writes,
Remaining the sturdiest of Conservatives, he bowed in sincere humility to those very claims which the Radical most angrily disallows: birth, hereditary station, recognised gentility – these things made the strongest demand upon his reverence. Such an attitude was a testimony to his own capacity for culture, since he knew not the meaning of vulgar adulation, and did in truth perceive the beauty of those qualities to which the uneducated Iconoclast is wholly blind.
The author leaves no doubt about his rejection of “progress” and his dim view of the coming 20th century in the following exchange between Eldon and his mother about the socialist Mutimer:
“Shall I tell you how I felt in talking with him? I seemed to be holding a dialogue with the 20th century, and you may think what that means.”
“Ah, it’s a long way off, Hubert.”
“I wish it were farther. The man was openly exultant; he stood for Demos grasping the scepter. I am glad, mother, that you leave Wanley before the air is poisoned.”
“Mr. Mutimer does not see that side of the question?”
“Not he! Do you imagine the twentieth century will leave one green spot on the earth’s surface?”
“My dear, it will always be necessary to grow grass and corn.”
“By no means; depend upon it. Such things will be cultivated by chemical processes. There will not be one inch left to nature; the very oceans will somehow be tamed, the snow mountains will be leveled. And with nature will perish art. What has a hungry Demos to do with the beautiful?”
Mrs. Eldon sighed gently.
“I shall not see it.”
Well, the twentieth century did turn out pretty badly, especially for socialism, but not quite that badly. Of course, one can detect some of the same themes in this exchange that one finds in the ideology of 21st century “Greens.” However, I think the most interesting affinity is between the sentiments in Gissing’s novels and the moral philosophy of G. E. Moore. I touched on the subject in an earlier post. Moore was the inventor of the “naturalistic fallacy,” according to which all moral philosophers preceding him were wrong, because they insisted on defining “the Good” with reference to some natural object. Unfortunately, Moore’s own version of “the Good” turned out to be every bit as slippery as any “sophisticated Christian’s” version of God. It was neither fish nor fowl, mineral nor vegetable.
When Moore finally got around to giving us at least some hint of exactly what he was talking about in his Principia Ethica, we discovered to our surprise that “the Good” had nothing to do with the heroism of the Light Brigade, or Horatius at the Bridge. It had nothing to do with loyalty or honor. It had nothing to do with social justice or the brotherhood of man. Nor did it have anything to do with honesty, justice, or equality. In fact, Moore’s version of “the Good” turned out to be a real thigh slapper. It consisted of the “nice things” that appealed to English country gentlemen at more or less the same time that Gissing was writing his novels. It included such things as soothing country scenery, enchanting music, amusing conversations with other “good” people, and perhaps a nice cup of tea on the side. As Moore put it,
We can imagine the case of a single person, enjoying throughout eternity the contemplation of scenery as beautiful, and intercourse with persons as admirable, as can be imagined.
By far the most valuable things which we know or can imagine, are certain states of consciousness, which may be roughly described as the pleasures of human intercourse and the enjoyment of beautiful objects. No one, probably, who has asked himself the question, has ever doubted that personal affection and the appreciation of what is beautiful in Art or Nature, are good in themselves.
Well, actually, that’s not quite true. I’ve doubted it. Not only have I doubted it, but I consider the claim absurd. Those words were written in 1903. By that time a great many people were already aware of the connection between morality and evolution by natural selection. That connection was certainly familiar to Darwin himself, and a man named Edward Westermarck spelled out the seemingly obvious implications of that connection in his The Origin and Development of the Moral Ideas a few years later, in 1906. Among those implications was the fact that the “good in itself” is pure fantasy. “Good” and “evil” are subjective artifacts, the result of the behavioral predispositions we associate with morality filtered through the minds of creatures with large brains. Nature played the rather ill-natured trick of portraying them to us as real things because that’s the form in which they happened to maximize the odds that the genes responsible for them would survive and reproduce. (That, by the way, is why it is highly unlikely that “moral relativity” will ever be a problem for our species.) The fact that Moore was capable of writing such nonsense more than 40 years after Darwin appeared on the scene suggests that he must have lived a rather sheltered life.
In retrospect, it didn’t matter. Today Moore is revered as a great moral philosopher, and Westermarck is nearly forgotten. It turns out that the truth about morality was very inconvenient for the “experts on ethics.” It exposed them as charlatans who had devoted their careers to splitting hairs over the fine points of things that didn’t actually exist. It popped all their pretensions to superior wisdom and virtue like so many soap bubbles. The result was predictable. They embraced Moore and ignored Westermarck. In the process they didn’t neglect to spawn legions of brand new “experts on ethics” to take their places when they were gone. Thanks to their foresight we find the emperor’s new clothes are gaudier than ever in our own time.
The work of George Gissing is an amusing footnote to the story. We no longer have to scratch our heads wondering where on earth Moore came up with his singular notions about the “Good in itself.” It turns out the same ideas may be found fossilized in the works of a Victorian novelist. The “experts on ethics” have been grasping at a very flimsy straw indeed!
Posted on October 10th, 2016
D. C. McAllister just posted an article entitled “America, You Have No Right to Judge Donald Trump” over at PJ Media. Setting aside the question of “rights” for the moment, I have to admit that she makes some good points. Here’s one of the better ones:
Those who are complaining about Trump today have no basis for their moral outrage. That’s because their secular amoral worldview rejects any basis for that moral judgment. Any argument they make against the “immorality” of Trump is stolen, or at least borrowed for expediency, from a religious worldview they have soundly rejected.
Exactly! It’s amazing that the religious apologists the Left despises can see immediately that they “have no basis for their moral outrage,” and yet the “enlightened” people on the Left never seem to get it. You can say what you want about the “religious worldview,” but a God that threatens to burn you in hell for billions and trillions of years unless you do what he says seems to me a pretty convincing “basis” for “acting morally.” The “enlightened” have never come up with anything of the sort. One typically finds them working themselves into high dudgeon of virtuous indignation without realizing that the “basis” for all their furious anathemas is nothing but thin air.
The reason for their latest outburst of pious outrage is threadbare enough. They claim that Trump is “immoral” because he engaged in “locker room talk” about women in what he supposed was a private conversation. Are you kidding me?! These people have just used their usual bullying tactics to impose a novel version of sexual morality on the rest of us that sets the old one recommended by a “religious worldview” on its head. Now, all of a sudden, we are to believe that they’ve all rediscovered their inner prude. Heaven forfend that anyone should dare to think of women as “objects!”
Puh-lease! I’d say the chances that 99 out of 100 of the newly pious MSM journalists who have been flogging this story have never engaged in similar talk or worse are vanishingly small. The other one is probably a eunuch. As for the “objectification of women,” I’m sorry to be a bearer of bad tidings, but that’s what men do. They are sexually attracted to the “object” of a woman’s body because it is their nature to be so attracted. That is how they evolved. It is highly unlikely that any of the current pious critics of “objectification” would be around today to register their complaints if that particular result of natural selection had never happened.
And what of Trump? Well, if nothing else, he’s been a very good educator. He’s shown us what the elites in the Republican Party really stand for. I personally will never look at the McCains, Romneys, and Ryans of the world in quite the same way. At the very least, I’m grateful to him for that.
Posted on October 10th, 2016
It seems as if the weapon designers at the nation’s three nuclear weapons laboratories, Los Alamos, Livermore, and Sandia, never really believed that nuclear testing would end. If so, they were singularly blind to the consequences. Instead of taking the approach apparently adopted by the Russians of designing and testing robust warheads that could simply be scrapped and replaced with newly manufactured ones at the end of their service life, they decided to depend on a constant process of refurbishing old warheads, eliminating the ability to make new ones in the process. When our weapons got too old, they would be repeatedly patched up in so-called Life Extension Programs, or LEPs. Apparently it began to occur to an increasing number of people in the weapons community that maintaining the safety and reliability of the stockpile indefinitely using that approach might be a bit problematic.
The first “solution” to the problem proposed by the National Nuclear Security Administration (NNSA), the semi-autonomous agency within the Department of Energy (DOE) responsible for maintaining the nuclear stockpile, was the Reliable Replacement Warhead (RRW). It was to be robust, easy to manufacture, and easy to maintain. It was also a new, untested design. As such, it would have violated the spirit, if not the letter, of Article VI of the Non-proliferation Treaty (NPT). If it had been built, it would also very likely have forced violation of the Comprehensive Nuclear Test Ban Treaty (CTBT), which the U.S. has signed, but never ratified. It was claimed that the RRW could be built and certified without testing. This was very probably nonsense. There have always been more or less influential voices within NNSA, the Department of Defense (DoD), and the weapons labs, in favor of a return to nuclear testing. That would not have been a good thing then, and I doubt that it will be a good thing at any foreseeable time in the future. In general, I think we should do our best to keep the nuclear genie bottled up as long as possible. Fortunately, Congress agreed and killed the RRW Program.
That didn’t stop the weaponeers. They just tried a new gambit. It’s called the “3+2 Strategy.” There are currently four types of ballistic missile warheads, two bombs, and a cruise missile warhead in the U.S. arsenal. The basic idea of 3+2 would be to reduce this to three “interoperable” ballistic missile warheads and two air delivered weapons (a bomb and a cruise missile), explaining the “3+2.” In the process, the conventional chemical explosives that drive the implosion of the “atomic bomb” stage of the weapons would be replaced by insensitive high explosives (IHE). The result would supposedly be a safer, more secure stockpile that would be easier to maintain. The price tag, in round numbers, would be $60 billion.
I can only hope Congress will be as quick to deep-six 3+2 as it was with the RRW. The 3+2 strategy will require tinkering not only with the bits surrounding the nuclear explosive package (NEP), but with the NEP itself. In other words, it’s just as much a violation of the spirit of Article VI of the NPT as was the RRW. The predictable result of any such changes will be the “sudden realization” by the weapons labs somewhere down the line that they can’t certify the new designs without a return to nuclear testing. There’s a better and, in the long run, probably cheaper way to maintain the stockpile.
In the first place, we need to stop relying on LEPs, and return to manufacturing replacement weapons. The common argument against this is that we have lost the ability to manufacture critical parts of our weapons since the end of testing, and in some cases the facilities and companies that supplied the parts no longer exist. Nonsense! The idea that a country responsible for a quarter of the entire world’s GDP has lost the ability to reproduce the weapons it was once able to design, build and test in a few years is ridiculous. We are told that subtle changes in materials might somehow severely degrade the performance of remanufactured weapons. I doubt it. Regardless, DOE has always known there was a solution to that problem. It’s called the Advanced Hydrodynamic Facility, or AHF.
Basically, the AHF would be a giant accelerator facility capable of producing beams that could image an imploding nuclear weapon pit in three dimensions and at several times during the implosion. Serious studies of such a facility were done as long ago as the mid-1990s, and there is no doubt that it is feasible. In actual experiments, of course, highly enriched uranium and plutonium would be replaced by surrogate materials such as tungsten, but such experiments would still determine with a high degree of confidence whether a given remanufactured primary would work or not. The primary, or “atomic bomb,” part of a weapon supplies the energy that sets off the secondary, or thermonuclear, part. If the primary of a weapon works, then there can be little doubt that the secondary will work as well. The AHF would be expensive, which is probably the reason it still hasn’t been built. Given the $60 billion cost of 3+2, that decision may well prove to be penny-wise and pound-foolish.
The whole point of having a nuclear arsenal is its ability to deter enemies from attacking us. Every time people who are supposed to be the experts about such things question the reliability of our stockpile, they detract from its ability to deter. I think a remanufacturing capability along with the AHF is the best way to shut them up, preventing a very bad decision to resume nuclear testing in the process. I suggest we get on with it.
Posted on October 5th, 2016
The truth about morality is both simple and obvious. It exists as a result of evolution by natural selection. From that it follows that it cannot possibly have a purpose or goal, and from that it follows that one cannot make “progress” towards fulfilling that nonexistent purpose or reaching that nonexistent goal. Simple and obvious as it is, no truth has been harder for mankind to accept.
The reason for this has to do with the nature of moral emotions themselves. They portray Good and Evil to us as real things that exist independent of human consciousness, when in fact they are subjective artifacts of our imaginations. That truth has always been hard for us to accept. It is particularly hard when self-esteem is based on the illusion of moral superiority. That illusion is obviously alive and well at a time when a large fraction of the population is capable of believing that another large fraction is “deplorable.” The fact that the result of indulging such illusions in the past has not infrequently been mass murder suggests that, as a matter of public safety, it may be useful to stop indulging them.
The “experts on ethics” delight in concocting chilling accounts of what will happen if we do stop indulging them. We are told that a world without objective moral truths will be a world of moral nihilism and moral chaos. The most obvious answer to such fantasies is, “So what?” Is the truth really irrelevant? Are we really expected to force ourselves to believe in lies because the truth is just too scary for us to face? Come to think of it, what, exactly, do we have now if not moral nihilism and moral chaos?
We live in a world in which every two-bit social justice warrior can invent some new “objective evil,” whether “cultural appropriation,” failure to memorize the 57 different flavors of gender, or some arcane “micro-aggression,” and work himself into a fine fit of virtuous indignation if no one takes him seriously. The very illusion that Good and Evil are objective things is regularly exploited to justify the crude bullying that is now used to enforce new “moral laws” that have suddenly been concocted out of the ethical vacuum. The unsuspecting owners of mom and pop bakeries wake up one morning to learn that they are now “deplorable,” and so “evil” that their business must be destroyed with a huge fine.
We live in a world in which hundreds of millions believe that other hundreds of millions who associate the word “begotten” with the “son of God,” or believe in the Trinity, are so evil that they will certainly burn in hell forever. These other hundreds of millions believe that heavenly bliss will be denied to anyone who doesn’t believe in a God with these attributes.
We live in a world in which the regime in charge of the most powerful country in the world believes it has such a monopoly on the “objective Good” that it can ignore international law, send its troops to occupy parts of another sovereign state, and dictate to the internationally recognized government of that state which parts of its territory it is allowed to control, and which not. It persists in this dubious method of defending the “Good” even though it risks launching a nuclear war in the process. The citizens in that country who happen to support one candidate for President don’t merely consider the citizens who support the opposing candidate wrong. They consider them objectively evil according to moral “laws” that apparently float about as insubstantial spirits, elevating themselves by their own bootstraps.
We live in a world in which evolutionary biologists, geneticists, and neuroscientists who are perfectly well aware of the evolutionary roots of morality nevertheless persist in cobbling together new moral systems that lack even so much as the threadbare semblance of a legitimate basis. The faux legitimacy that the old religions at least had the common decency to supply in the form of imaginary gods is thrown to the winds without a thought. In spite of that these same scientists expect the rest of us to take them seriously when they announce that, at long last, they’ve discovered the philosopher’s stone of objective Good and Evil, whether in the form of some whimsical notion of “human flourishing,” or perhaps a slightly retouched version of utilitarianism. In almost the same breath, they affirm the evolutionary basis of morality, and then proceed to denounce anyone who doesn’t conform to their newly minted moral “laws.” When it comes to morality, it is hard to imagine a more nihilistic and chaotic world.
I find it hard to believe that a world in which the subjective nature and rather humble evolutionary roots of all our exalted moral systems were commonly recognized, along with the obvious implications of these fundamental truths, could possibly be even more nihilistic and chaotic than the one we already live in. I doubt that “moral relativity” would prevail in such a world, for the simple reason that it is not in our nature to be moral relativists. We might even be able to come up with a set of “absolute” moral rules that would be obeyed, not because humanity had deluded itself into believing they were objectively true, but because of a common determination to punish free riders and cheaters. We might even be able to come up with some rational process for changing and adjusting the rules when necessary by common consent, rather than by the current “enlightened” process of successful bullying.
We would all be aware that even the most “exalted” and “noble” moral emotions, even those accompanied by stimulating music and rousing speeches, have a common origin: their tendency to improve the odds that the genes responsible for them would survive in a Pleistocene environment. Under the circumstances, it would be reasonable to doubt, not only their ability to detect “objective Good” and “objective Evil,” but the wisdom of paying any attention to them at all. Instead of swallowing the novel moral concoctions of pious charlatans without a murmur, we would begin to habitually greet them with the query, “Exactly what innate whim are you trying to satisfy?” We would certainly be very familiar with the tendency of every one of us, described so eloquently by Jonathan Haidt in The Righteous Mind, to begin rationalizing our moral emotions as soon as we experience them, whether in response to “social injustice” or a rude driver who happened to cut us off on the way to work. We would realize that that very tendency also exists by virtue of evolution by natural selection, not because it is actually capable of unmasking social injustice, or distinguishing “evil” from “good” drivers, but merely because it improved our chances of survival when there were no cars, and no one had ever heard of such a thing as social justice.
I know, I’m starting to ramble. I’m imagining a utopia, but one can always dream.
Posted on October 3rd, 2016 5 comments
Once upon a time, half a century ago and more, several authors wrote books according to which certain animals, including human beings, are, at least in certain circumstances, predisposed to aggressive behavior. Prominent among them was On Aggression, published in English in 1966 by Konrad Lorenz. Other authors included Desmond Morris (The Naked Ape, 1967), Lionel Tiger (Men in Groups, 1969) and Robin Fox (The Imperial Animal, co-authored with Tiger, 1971). The most prominent and widely read of all was the inimitable Robert Ardrey (African Genesis, 1961, The Territorial Imperative, 1966, The Social Contract, 1970, and The Hunting Hypothesis, 1976). Why were these books important, or even written to begin with? After all, the fact of innate aggression, then as now, was familiar to any child who happened to own a dog. Well, because the “men of science” disagreed. They insisted that there were no innate tendencies to aggression, in man or any of the other higher animals. It was all the fault of unfortunate cultural developments back around the start of the Neolithic era, or of the baneful environmental influence of “frustration.”
Do you think I’m kidding? By all means, read the source literature! For example, according to a book entitled Aggression by “dog expert” John Paul Scott, published in 1958 by the University of Chicago Press,
All research findings point to the fact that there is no physiological evidence of any internal need or spontaneous driving force for fighting; that all stimulation for aggression eventually comes from the forces present in the external environment.
A bit later, in a 1962 book entitled Roots of Behavior, he added,
All our present data indicate that fighting behavior among the higher mammals, including man, originates in external stimulation and that there is no evidence of spontaneous internal stimulation.
Ashley Montagu added the following “scientific fact” about apes (including chimpanzees!) in his “Man and Aggression,” published in 1968:
The field studies of Schaller on the gorilla, of Goodall on the chimpanzee, of Harrison on the orang-utan, as well as those of others, show these creatures to be anything but irascible. All the field observers agree that these creatures are amiable and quite unaggressive, and there is not the least reason to suppose that man’s pre-human primate ancestors were in any way different.
When Goodall dared to contradict Montagu and report what she had actually seen, she was furiously denounced in vile attacks by the likes of Brian Deer, who chivalrously recorded in an article published in the Sunday Times in 1997,
…the former waitress had arrived at Gombe, ordered the grass cut and dumped vast quantities of trucked-in bananas, before documenting a fractious pandemonium of the apes. Soon she was writing about vicious hunting parties in which our cheery cousins trapped colobus monkeys and ripped them to bits, just for fun.
This remarkable transformation from Montagu’s expert in the field to Deer’s “former waitress” was typical of the way “science” was done by the Blank Slaters in those days. This type of “science” should be familiar to modern readers, who have witnessed what happens to anyone who dares to challenge the current climate change dogmas.
Fast forward to 2016. A paper entitled The phylogenetic roots of human lethal violence has just been published in the prestigious journal Nature. The first figure in the paper has the provocative title, “Evolution of lethal aggression in non-human mammals.” It not only accepts the fact of “spontaneous internal stimulation” of aggression without a murmur, but actually quantifies it in no fewer than 1,024 species of mammals! According to the abstract,
Here we propose a conceptual approach towards understanding these roots based on the assumption that aggression in mammals, including humans, has a significant phylogenetic component. By compiling sources of mortality from a comprehensive sample of mammals, we assessed the percentage of deaths due to conspecifics and, using phylogenetic comparative tools, predicted this value for humans. The proportion of human deaths phylogenetically predicted to be caused by interpersonal violence stood at 2%.
All this and more is set down in the usual scientific deadpan without the least hint that the notion of such a “significant phylogenetic component” was ever seriously challenged. Unfortunately the paper itself is behind Nature’s paywall, but there’s a free review with extracts from the paper by Ed Yong on the website of The Atlantic, and Jerry Coyne also reviewed the paper over at his Why Evolution is True website. Citing the paper, Yong notes,
It’s likely that primates are especially violent because we are both territorial and social—two factors that respectively provide motive and opportunity for murder. So it goes for humans. As we moved from small bands to medium-sized tribes to large chiefdoms, our rates of lethal violence increased.
“Territorial and social!?” Whoever wrote such stuff? Oh, now I remember! It was a guy named Robert Ardrey, who happened to be the author of The Territorial Imperative and The Social Contract. Chalk up another one for the “mere playwright.” Yet again, he was right, and almost all the “men of science” were wrong. Do you ever think he’ll get the credit he deserves from our latter day “men of science?” Naw, neither do I. Some things are just too embarrassing to admit.
Posted on August 14th, 2016 7 comments
“Moral progress” is impossible. It is a concept that implies progress towards a goal that doesn’t exist. We exist as a result of evolution by natural selection, a process that has simply happened. Progress implies the existence of an entity sufficiently intelligent to formulate a goal or purpose towards which progress is made. No such entity has directed the process, nor did one even exist over most of the period during which it occurred. The emotional predispositions that are the root cause of what we understand by the term “morality” are as much an outcome of natural selection as our hands or feet. Like our hands and feet, they exist solely because they have enhanced the probability that the genes responsible for their existence would survive and reproduce. There is increasing acceptance among the “experts on ethics” in our midst of the fact that morality owes its existence to evolution by natural selection. However, as a rule they have been incapable of grasping the obvious implication of that fact: that the notion of “moral progress” is a chimera. It is a truth that has been too inconvenient for them to bear.
It’s not difficult to understand why. Their social gravitas and often their very livelihood depend on propping up the illusion. This is particularly true of the “experts” in academia, who often lack marketable skills other than their “expertise” in something that doesn’t exist. Their modus operandi consists of hoodwinking the rest of us into believing that satisfying some whim that happens to be fashionable within their tribe represents “moral progress.” Such “progress” has no more intrinsic value than a five-year-old’s progress towards acquiring a lollipop. Often it can be reasonably expected to lead to outcomes that are the opposite of those that account for the existence of the whim to begin with, resulting in what I have referred to in earlier posts as a morality inversion. Propping up the illusion in spite of recognition of the evolutionary roots of morality, in a milieu that long ago dispensed with the luxury of a God with a big club to serve as the final arbiter of what is “really good” and “really evil,” is no mean task. Among other things it requires some often amusing intellectual contortions, as well as the concoction of an arcane jargon to serve as a smokescreen.
Consider, for example, a paper by Professors Allen Buchanan and Russell Powell entitled Toward a Naturalistic Theory of Moral Progress. It turned up in the journal Ethics, that ever-reliable guide to academic fashion touching on the question of “human flourishing.” Far from denying the existence of human nature after the fashion of the Blank Slaters of old, the authors positively embrace it. They cheerfully admit its relevance to morality, noting in particular the existence of a predisposition in our species to perceive others of our species in terms of ingroups and outgroups: what Robert Ardrey used to call the Amity/Enmity Complex. Now, if these things are true, and absent the miraculous discovery of any contributing “root cause” for morality other than evolution by natural selection, whether in this world or the realm of spirits, it follows logically that “progress” is a term that can no more apply to morality than it does to evolution by natural selection itself. It further follows that objective Good and objective Evil are purely imaginary categories. In other words, unless one is merely referring to the scientific investigation of evolved behavioral traits, “experts on ethics” are experts about nothing. Their claim to possess a philosopher’s stone pointing the way to how we should act is a chimera. For the last several thousand years they have been involved in a sterile game of bamboozling the rest of us, and themselves to boot.
Predictably, the embarrassment and loss of gravitas, not to mention the loss of a regular paycheck, implied by such a straightforward admission of the obvious has been more than the “experts” could bear. They’ve simply gone about their business as if nothing had happened, and no one had ever heard of a man named Darwin. It’s actually been quite easy for them in this puritanical and politically correct age, in which the intellectual life and self-esteem of so many depends on maintaining a constant state of virtuous indignation and moral outrage. Virtuous indignation and moral outrage are absurd absent the existence of an objective moral standard. Since nothing of the sort exists, it is simply invented, and everyone stays outraged and happy.
In view of this pressing need to prop up the moral fashions of the day, then, it follows that no great demands are placed on the rigor of modern techniques for concocting real Good and real Evil. Consider, for example, the paper referred to above. The authors go to a great deal of trouble to assure their readers that their theory of “moral progress” really is “naturalistic.” In this enlightened age, they tell us, they will finally be able to steer clear of the flaws that plagued earlier attempts to develop secular moralities. These were all based on false assumptions “based on folk psychology, flawed attempts to develop empirically based psychological theories, a priori speculation, and reflections on history hampered both by a lack of information and inadequate methodology.” “For the first time,” they tell us, “we are beginning to develop genuinely scientific knowledge about human nature, especially through the development of empirical psychological theories that take evolutionary biology seriously.” This begs the question, of course, of how we’ve managed to avoid acquiring “scientific knowledge about human nature” and “taking evolutionary biology seriously” for so long. But I digress. The important question is, how do the authors manage to establish a rational basis for their “naturalistic theory of moral progress” while avoiding the Scylla of “folk psychology” on the one hand and the Charybdis of “a priori speculation” on the other? It turns out that the “basis” in question hardly demands any complex mental gymnastics. It is simply assumed!
Here’s the money passage in the paper:
A general theory of moral progress could take a more or less ambitious form. The more ambitious form would be to ground an account of which sorts of changes are morally progressive in a normative ethical theory that is compatible with a defensible metaethics… In what follows we take the more modest path: we set aside metaethical challenges to the notion of moral progress, we make no attempt to ground the claim that certain moralities are in fact better than others, and we do not defend any particular account of what it is for one morality to be better than another. Instead, we assume that the emergence of certain types of moral inclusivity are significant instances of moral progress and then use these as test cases for exploring the feasibility of a naturalized account of moral progress.
This is indeed a strange approach to being “naturalistic.” After excoriating the legions of thinkers before them for their faulty mode of hunting the philosopher’s stone of “moral progress,” they simply assume it exists. It exists in spite of the elementary chain of logic leading inexorably to the conclusion that it can’t possibly exist if their own claims about the origins of morality in human nature are true. In what must count as a remarkable coincidence, it exists in the form of “inclusivity,” currently in high fashion as one of the shibboleths defining the ideological box within which most of today’s “experts on ethics” happen to dwell. Those who trouble themselves to read the paper will find that, in what follows, it is hardly treated as a mere modest assumption, but as an established, objective fact. “Moral progress” is alluded to over and over again as if, by virtue of this original, “modest assumption,” the real thing somehow magically popped into existence in the guise of “inclusivity.”
Suppose we refrain from questioning the plot, and go along with the charade. If inclusivity is really to count as moral progress, then it must not only be desirable in certain precincts of academia, but actually feasible. However, if, as the authors agree, humans are predisposed to perceive others of their species in terms of ingroups and outgroups, the feasibility of inclusivity is at least in question. As the authors put it,
Attempts to draw connections between contemporary evolutionary theories of morality and the possibility of inclusivist moral progress begin with the standard evolutionary psychological assertion that the main contours of human moral capacities emerged through a process of natural selection on hunter-gatherer groups in the Pleistocene – in the so-called environment of evolutionary adaptation (EEA)… The crucial claim, which leads some thinkers to draw a pessimistic inference about the possibility of inclusivist moral progress, is that selection pressures in the EEA favored exclusivist moralities. These are moralities that feature robust moral commitments among group members but either deny moral standing to outsiders altogether, relegate out-group members to a substantially inferior status, or assign moral standing to outsiders contingent on strategic (self-serving) considerations.
No matter, according to the authors, this flaw in our evolved moral repertoire can be easily fixed. All we have to do is lift ourselves out of the EEA, achieve universal prosperity so great and pervasive that competition becomes unnecessary, and the predispositions in question will simply fade away, more or less like the state under Communism. Invoking that wonderful term “plasticity,” which seems to pop up with every new attempt to finesse human behavioral traits out of existence, they write,
According to an account of exclusivist morality as a conditionally expressed (adaptively plastic) trait, the suite of attitudes and behaviors associated with exclusivist tendencies develop only when cues that were in the past highly correlated with out-group threat are detected.
In other words, it is the fond hope of the authors that, if only we can make the environment in which inconvenient behavioral predispositions evolved disappear, the traits themselves will disappear as well! They go on to claim that this has actually happened, and that,
…exclusivist moral tendencies are attenuated in populations inhabiting environments in which cues of out-group threat are absent.
Clearly we have seen a vast expansion in the number of human beings that can be perceived as ingroup since the Pleistocene, and the inclusion as ingroup of racial and religious categories that once defined outgroups. There is certainly plasticity in how ingroups and outgroups are actually defined and perceived, as one might expect of traits evolved during times of rapid environmental change in the nature of the “others” one happened to be in contact with or aware of at any given time. However, this hardly “proves” that the fundamental tendency to distinguish between ingroups and outgroups itself will disappear or is likely to disappear in response to any environmental change whatever. Perhaps the best way to demonstrate this is to refer to the paper itself.
Clearly the authors imagine themselves to be “inclusive,” but is that really the case? Hardly! It turns out they have a very robust perception of outgroup. They’ve merely fallen victim to the fallacy that it “doesn’t count” because it’s defined in ideological rather than racial or religious terms. Their outgroup may be broadly defined as “conservatives.” These “conservatives” are mentioned over and over again in the paper, always in the guise of the bad guys who are supposed to reject inclusivism and resist “moral progress.” To cite a few examples,
We show that although current evolutionary psychological understandings of human morality do not, contrary to the contentions of some authors, support conservative ethical and political conclusions, they do paint a picture of human morality that challenges traditional liberal accounts of moral progress.
…there is no good reason to believe conservative claims that the shift toward greater inclusiveness has reached its limit or is unsustainable.
These “evoconservatives,” as we have labeled them, infer from evolutionary explanations of morality that inclusivist moralities are not psychologically feasible for human beings.
At the same time, there is strong evidence that the development of exclusivist moral tendencies – or what evolutionary psychologists refer to as “in-group assortative sociality,” which is associated with ethnocentric, xenophobic, authoritarian, and conservative psychological orientations – is sensitive to environmental cues…
and so on, and so on. In a word, although the good professors are fond of pointing with pride to their vastly expanded ingroup, they have rather more difficulty seeing their equally expanded outgroup, rather like the difficulty we have seeing the nose on our own face. The fact that the conservative outgroup is perceived with as much fury, disgust, and hatred as ever a Grand Dragon of the Ku Klux Klan felt for blacks or Catholics can be confirmed by simply reading through the comment section of any popular website of the ideological Left. Unless professors employed by philosophy departments live under circumstances more reminiscent of the Pleistocene than I had imagined, this bodes ill for their theory of “moral progress” based on “inclusivity.” More evidence that this is the case is easily available to anyone who cares to search the philosophy department of the local university for “diversity” in the form of a single professor who could be described as conservative by any stretch of the imagination.
I note in passing another passage in the paper that demonstrates the fanaticism with which the chimera of “moral progress” is pursued in some circles. Again quoting the authors,
Some moral philosophers whom we have elsewhere called “evoliberals,” have tacitly affirmed the evo-conservative view in arguing that biomedical interventions that enhance human moral capacities are likely to be crucial for major moral progress due to evolved constraints on human moral nature.
In a word, the delusion of moral progress is not necessarily just a harmless toy for the entertainment of professors of philosophy, at least as far as those who might have some objection to “biomedical interventions” carried out by self-appointed “experts on ethics” are concerned.
What’s the point? The point is that we are unlikely to make progress of any kind without first accepting the truth about our own nature, and the elementary logical implications of that truth. Darwin saw them, Westermarck saw them, and they are far more obvious today than they were then. We continue to ignore them at our peril.
Posted on June 5th, 2016 17 comments
It’s heartening to learn that there is a serious basis for recent speculation to the effect that the science of animal cognition may gradually advance to a level long familiar to any child with a pet dog. Frans de Waal breaks the news in his latest book, Are We Smart Enough to Know How Smart Animals Are? In answer to his own question, de Waal writes,
The short answer is “Yes, but you’d never have guessed.” For most of the last century, science was overly cautious and skeptical about the intelligence of animals. Attributing intentions and emotions to animals was seen as naïve “folk” nonsense. We, the scientists, knew better! We never went in for any of this “my dog is jealous” stuff, or “my cat knows what she wants,” let alone anything more complicated, such as that animals might reflect on the past or feel one another’s pain… The two dominant schools of thought viewed animals as either stimulus-response machines out to obtain rewards and avoid punishment or as robots genetically endowed with useful instincts. While each school fought the other and deemed it too narrow, they shared a fundamentally mechanistic outlook: there was no need to worry about the internal lives of animals, and anyone who did was anthropomorphic, romantic and unscientific.
Did we have to go through this bleak period? In earlier days, the thinking was noticeably more liberal. Charles Darwin wrote extensively about human and animal emotions, and many a scientist in the nineteenth century was eager to find higher intelligence in animals. It remains a mystery why these efforts were temporarily suspended, and why we voluntarily hung a millstone around the neck of biology.
Here I must beg to differ with de Waal. It is by no means a “mystery.” This “mechanization” of animals in the sciences was more or less contemporaneous with the Blank Slate debacle, and was motivated by more or less the same ideological imperatives. I invite readers interested in the subject to consult the first few chapters of Robert Ardrey’s African Genesis, published as far back as 1961. Noting a blurb in Scientific American by Marshall Sahlins, more familiar to later readers as a collaborator in the slander of Napoleon Chagnon, to the effect that,
There is a quantum difference, at points a complete opposition, between even the most rudimentary human society and the most advanced subhuman primate one. The discontinuity implies that the emergence of human society required some suppression, rather than direct expression, of man’s primate nature. Human social life is culturally, not biologically determined.
Ardrey, that greatest of all debunkers of the Blank Slate, continues,
Dr. Sahlins’ conclusion is startling to no one but himself. It is a scientific restatement, 1960-style, of the philosophical conclusion of an eighteenth-century Neapolitan monk (Giambattista Vico, ed.): Society is the work of man. It is just another prop, fashioned in the shop of science’s orthodoxies from the lumber of Zuckerman’s myth, to support the fallacy of human uniqueness.
The Zuckerman Ardrey refers to is anthropologist Solly Zuckerman. I invite anyone who doubts the fanaticism with which “science” once insisted on the notion of human uniqueness alluded to in de Waal’s book to read some of Zuckerman’s papers. For example, in The Social Life of Monkeys and Apes, he writes,
It is now generally recognized that anthropomorphic preoccupations do not help the critical development of knowledge, either in fields of physical or biological inquiry.
He exulted in the great “advances” science had made in correcting the “mistakes” of Darwin:
The Darwinian period, in which animal behavior as a distinct study was born, was one in which anthropomorphic interpretation flourished. Anecdotes were regarded in the most generous light, and it was believed that many animals were highly rational creatures, possessed of exalted ethical codes of social behavior.
According to Zuckerman, “science” had now discovered that the very notion of animal “intelligence” was absurd. As he put it,
Until 1890, the study of the social behavior of mammals developed hand in hand with the study of their “intelligence,” and both subjects were usually treated in the same books.
Such comments, which are ubiquitous in the literature of the Blank Slate era, make it hard to understand how de Waal can still be “mystified” about the motivation for the “scientific” denial of animal intelligence. Be that as it may, he presents a wealth of data derived from recent experiments and field studies debunking all the lingering rationales for claims of human uniqueness one by one, whether it be the ability to experience emotion, a “theory of mind,” social problem-solving ability, the ability to contemplate the past and future, or even consciousness. In the process he documents the methods “science” used to hermetically seal itself off from reality, such as the invention of pejorative terms like “anthropomorphism” to denounce and dismiss anyone who dared to challenge the human uniqueness orthodoxy, and the rejection of all evidence not supplied by members of the club as mere “anecdotes.” Along the way he notes,
Needing a new term to make my point, I invented anthropodenial, which is the a priori rejection of humanlike traits in other animals or animallike traits in us.
It’s hard to imagine that anyone could seriously believe that “science” consists of fanatically rejecting similarities between human and animal behavior that are obvious to everyone but “scientists” as “anthropomorphism” and “anecdotes” and assuming a priori that they’re of no significance until it can be absolutely proven that everyone else was right all along. This does not strike me as a “parsimonious” approach.
Not the least interesting feature of de Waal’s latest is his “rehabilitation” of several important debunkers of the Blank Slate who were unfortunate enough to publish before the appearance of E. O. Wilson’s Sociobiology in 1975. According to the fairy tale that currently passes for the “history” of the Blank Slate, before 1975 “darkness was on the face of the deep.” Only then did Wilson appear on the scene as the heroic slayer of the Blank Slate dragon. A man named Robert Ardrey was never heard of, and anyone mentioned in his books as an opponent of the Blank Slate before the Wilson “singularity” is to be ignored. The most prominent of them all, a man on whom the anathemas of the Blank Slaters often fell, literally in the same breath as Ardrey, was Konrad Lorenz. Sure enough, in Steven Pinker’s fanciful “history” of the Blank Slate, Lorenz is dismissed, in the same paragraph with Ardrey, no less, as “totally and utterly wrong,” and a delusional believer in “archaic theories such as that aggression was like the discharge of a hydraulic pressure.” De Waal’s response must be somewhat discomfiting to the promoters of Pinker’s official “history.” He simply ignores it!
Astoundingly enough, de Waal speaks of Lorenz as one of the great founding fathers of the modern sciences of animal behavior and cognition. In other words, he tells the truth, as if it had never been disputed in any bowdlerized “history.” Already at the end of the prologue we find the matter-of-fact observation that,
…behavior is, as the Austrian ethologist Konrad Lorenz put it, the liveliest aspect of all that lives.
Reading on, we find that this mention of Lorenz wasn’t just an anomaly designed to wake up drowsy readers. In the first chapter we find de Waal referring to the field of phylogeny,
…when we trace traits across the evolutionary tree to determine whether similarities are due to common descent, the way Lorenz had done so beautifully for waterfowl.
A few pages later he writes,
The maestro of observation, Konrad Lorenz, believed that one could not investigate animals effectively without an intuitive understanding grounded in love and respect.
and notes, referring to the behaviorists, that,
The power of conditioning is not in doubt, but the early investigators had totally overlooked a crucial piece of information. They had not, as recommended by Lorenz, considered the whole organism.
And finally, in a passage that seems to scoff at Pinker’s “totally and utterly wrong” nonsense, he writes,
Given that the facial musculature of humans and chimpanzees is nearly identical, the laughing, grinning, and pouting of both species likely goes back to a common ancestor. Recognition of the parallel between anatomy and behavior was a great leap forward, which is nowadays taken for granted. We all now believe in behavioral evolution, which makes us Lorenzians.
Stunning, really, for anyone who’s followed what’s been going on in the behavioral and animal sciences for any length of time. And that’s not all. Other Blank Slate debunkers who published long before Wilson, like Niko Tinbergen and Desmond Morris, are mentioned with a respect that belies the fact that they, too, were once denounced by the Blank Slaters as right-wing fascists and racists in the same breath with Lorenz. I have a hard time believing that someone as obviously well-read as de Waal has never seen Pinker’s The Blank Slate. I honestly don’t know what to make of the fact that he can so blatantly contradict Pinker, and yet never trouble himself to mention even the bare existence of such a remarkable disconnect. Is he afraid of Pinker? Does he simply want to avoid hurting the feelings of another member of the academic tribe? I must leave it up to the reader to decide.
And what of Ardrey, who brilliantly described both “anthropodenial” and the reasons that it was by no means a “mystery” more than half a century before the appearance of de Waal’s latest book? Will he be rehabilitated, too? Don’t hold your breath. Unlike Lorenz, Tinbergen and Morris, he didn’t belong to the academic tribe. The fact that it took an outsider to smash the Blank Slate and give a few academics the courage to finally stick their noses out of the hole they’d dug for themselves will likely remain deep in the memory hole. It happens to be a fact that is just too humiliating and embarrassing for them to ever admit. It would seem the history of the affair can be adjusted, but it will probably never be corrected.
Posted on May 12th, 2016 5 comments
Hardly a day goes by without some pundit bemoaning the decline in religious faith. We are told that great evils will inevitably befall mankind unless we all believe in imaginary super-beings. Of course, these pundits always assume a priori that the particular flavor of religion they happen to favor is true. Absent that assumption, their hand wringing boils down to the argument that we must all somehow force ourselves to believe in God whether that belief seems rational to us or not. Otherwise, we won’t be happy, and humanity won’t flourish.
An example penned by Dennis Prager entitled Secular Conservatives Think America Can Survive the Death of God that appeared recently at National Review Online is typical of the genre. Noting that even conservative intellectuals are becoming increasingly secular, he writes that,
They don’t seem to understand that the only solution to many, perhaps most, of the social problems ailing America and the West is some expression of Judeo-Christian religion.
When, after the fall of the Roman Empire, the West embraced Christianity as a faith superior to all others, as its founder was the Son of God, the West went on to create modern civilization, and then went out and conquered most of the known world.
The truths America has taught the world, of an inherent human dignity and worth, and inviolable human rights, are traceable to a Christianity that teaches that every person is a child of God.
Today, however, with Christianity virtually dead in Europe and slowly dying in America, Western culture grows debased and decadent, and Western civilization is in visible decline.
Prager and Buchanan both draw attention to a consequence of the decline of traditional religions that is less a figment of their imaginations: the rise of secular religions to fill the ensuing vacuum. The examples typically cited include Nazism and Communism. There does seem to be some innate feature of human behavior that predisposes us to adopt such myths, whether of the spiritual or the secular type. It is most unlikely that it comes in the form of a “belief in God” or “religion” gene; it would be very difficult to explain how anything of the sort could pop into existence via natural selection. It seems reasonable, however, that less specialized and more plausible behavioral traits could account for the same phenomenon. Which raises the question, “So what?”
Pundits like Prager and Buchanan are putting the cart before the horse. Before one touts the advantages of one brand of religion or another, isn’t it first expedient to consider the question of whether it is true? If not, then what is being suggested is that mankind can’t handle the truth. We must be encouraged to believe in a pack of lies for our own good. And whatever version of “Judeo-Christian religion” one happens to be peddling, it is, in fact, a pack of lies. The fact that it is a pack of lies, and obviously a pack of lies, explains, among other things, the increasingly secular tone of conservative pundits so deplored by Buchanan and Prager.
It is hard to understand how anyone who uses his brain as something other than a convenient stuffing for his skull can still take traditional religions seriously. The response of the remaining true believers to the so-called New Atheists is telling in itself. Generally, they don’t even attempt to refute their arguments. Instead, they resort to ad hominem attacks. The New Atheists are too aggressive, they have bad manners, they’re just fanatics themselves, etc. They are not arguing against the “real God,” who, we are told, is not an object, a subject, or a thing ever imagined by sane human beings, but some kind of an entity perched so high up on a shelf that profane atheists can never reach Him. All this spares the faithful from making fools of themselves with ludicrous mental flip-flops to explain the numerous contradictions in their holy books, and with tortured explanations of why it’s reasonable to account for the “intelligent design” of something less complicated by simply assuming the existence of something vastly more complicated. It also spares them from spinning implausible yarns about how an infinitely powerful super-being can be terribly offended by the paltry sins committed by creatures more inferior to Him than microbes are to us, and yet at the same time be incapable of just stepping out of the clouds for once and giving us all a straightforward explanation of what, exactly, He wants from us.
In short, Prager and Buchanan would have us somehow force ourselves, perhaps with the aid of brainwashing and judicious use of mind-altering drugs, to believe implausible nonsense, in order to avoid “bad” consequences. One can’t dismiss this suggestion out of hand. Our species is a great deal less intelligent than many of us seem to think. We use our vaunted reason to satisfy whims we take for noble causes, without ever bothering to consider why those whims exist, or what “function” they serve. Some of them apparently predispose us to embrace ideological constructs that correspond to spiritual or secular religions. If we use human life as a metric, Prager and Buchanan would be right to claim that traditional spiritual religions have been less “bad” than modern secular ones, costing only tens of millions of lives via religious wars, massacres of infidels, etc., whereas the modern secular religion of Communism cost, in round numbers, 100 million lives, and in a relatively short time, all by itself. Communism was also “bad” to the extent that we value human intelligence, tending to selectively annihilate the brightest portions of the population in those countries where it prevailed. There can be little doubt that this “bad” tendency substantially reduced the average IQ in nations like Cambodia and the Soviet Union, resulting in what one might call their self-decapitation. Based on such metrics, Prager and Buchanan may have a point when they suggest that traditional religions are “better,” to the extent that one realizes that one is merely comparing one disaster to another.
Can we completely avoid the bad consequences of believing the bogus “truths” of religions, whether spiritual or secular? There seems to be little reason for optimism on that score. The demise of traditional religions has not led to much in the way of rational self-understanding. Instead, as noted above, secular religions have arisen to fill the void. Their ideological myths have often trumped reason in cases where there has been a serious confrontation between the two, occasionally resulting in the bowdlerization of whole branches of the sciences. The Blank Slate debacle was the most spectacular example, but there have been others. As belief in traditional religions has faded, we have gained little in the way of self-knowledge in their wake. On the contrary, our species seems bitterly determined to avoid that knowledge. Perhaps our best course really would be to start looking for a path back inside the “Matrix,” as Prager and Buchanan suggest.
All I can say is that, speaking as an individual, I don’t plan to take that path myself. It has always seemed self-evident to me that, whatever our goals and aspirations happen to be, we are more likely to reach them if we base our actions on an accurate understanding of reality rather than myths, on truth rather than falsehood. A rather fundamental class of truths concerns, among other things, where those goals and aspirations came from to begin with. These are the truths about human behavior: why we want what we want, why we act the way we do, why we are moral beings, why we pursue what we imagine to be noble causes. I believe that the source of all these truths, the “root cause” of all these behaviors, is to be found in our evolutionary history. The “root cause” we seek is natural selection. That fact may seem inglorious or demeaning to those who lack imagination, but it remains a fact for all that. Perhaps, after we sacrifice a few more tens of millions in the process of chasing paradise, we will finally start to appreciate its implications. I think we will all be better off if we do.
Posted on April 24th, 2016 2 comments
When the keepers of the official dogmas in the Academy encounter an inconvenient truth, they refute it by calling it bad names. For example, the fact of human biodiversity is “racist,” and the fact of human nature was “fascist” back in the heyday of the Blank Slate. I encountered another example in the journal Ethics, in one of the articles I discussed in a recent post: “Only All Naturalists Should Worry About Only One Evolutionary Debunking Argument,” by Tomas Bogardus. It was discreetly positioned in a footnote to the following sentence:
Do these evolutionary considerations generate an epistemic challenge to moral realism, that is, the view that evaluative properties are mind-independent features of reality and we sometimes have knowledge of them?
The footnote reads as follows:
As opposed to nihilism – on which there are no moral truths – and subjectivist constructivism or expressivism, on which moral truths are functions of our evaluative attitudes themselves.
This “scientific” use of the pejorative term “nihilism” to “refute” the conclusion that there are no moral truths fits the usual pattern. According to its Wiki blurb, the term “nihilism” was used in a similar manner when it was first coined by Friedrich Jacobi to “refute” disbelief in the transcendence of God. Wiki gives a whole genealogy of the various uses of the term. However, the most common image the term evokes is probably one of wild-eyed, bomb hurling 19th century Russian radicals. No matter. If something is true, it will remain true regardless of how often it is denounced as racist, fascist, or nihilist.
At this point in time, the truth about morality is sufficiently obvious to anyone who cares to think about it. It is a manifestation of behavioral predispositions that evolved at times very different from the present. It has no purpose. It exists because the genes responsible for its existence happened to improve the odds that the package of genes to which they belonged would survive and reproduce. That truth is very inconvenient. It reduces the “expertise” of the “experts on ethics,” an “expertise” that is the basis of their respect and authority in society, and not infrequently of their gainful employment as well, to an expertise about nothing. It also exposes that which the vast majority of human beings “know in their bones” to be true as an illusion. For all that, it remains true.
To the extent that the term “nihilist” has any meaning in the context of morality at all, it suggests that the world will dissolve in moral chaos unless some basis for objective morality can be extracted from the vacuum. Rape, murder and mayhem will prevail when we all realize we’ve been hoodwinked by the philosophers all these years, and there really is no such basis. The truth is rather more prosaic. Human beings will behave morally regardless of the intellectual fashions prevailing among the philosophers because it is their nature to act morally.
Moral chaos will not result from mankind finally learning the “nihilist” truth about morality. Indeed, it’s hard to imagine a state of moral chaos worse than the one we’re already in. Chaos doesn’t exist because of a gradually spreading understanding of the subjective roots of morality. Rather, it exists as a byproduct of continued attempts to prop up the façade of moral realism. The current “bathroom wars” are an instructive if somewhat ludicrous example. They demonstrate both the strong connection between custom and morality, and the typical post hoc rationalization of moral “truths” described by Jonathan Haidt in his paper, The Emotional Dog and its Rational Tail.
Westermarck described that connection between custom and morality as follows:
Customs are not merely public habits – the habits of a certain circle of men, a racial or national community, a rank or class of society – but they are at the same time rules of conduct. As Cicero observes, the customs of a people “are precepts in themselves.” We say that “custom commands,” or “custom demands,” and even when custom simply allows the commission of a certain class of actions, it implicitly lays down the rule that such actions are not to be interfered with. And the rule of custom is conceived of as a moral rule, which decides what is right and wrong.
However, the rule of custom can be challenged. Westermarck noted that, as societies became more complex,
Individuals arose who found fault with the moral ideas prevalent in the community to which they belonged, criticizing them on the basis of their own individual feelings… In the course of progressive civilization the moral consciousness has tended towards a greater equalization of rights, towards an expansion of the circle within which the same moral rules are held applicable. And this process has been largely due to the example of influential individuals and their efforts to raise public opinion to their own standard of right.
As Westermarck points out, in both cases the individuals involved are responding to subjective moral emotions, yet in both cases they suffer from the illusion that their emotions somehow correspond to objective facts about good and evil. In the case of the bathroom wars, the defenders of custom rationalize their disapproval after the fact by evoking lurid pictures of perverts molesting little girls. The problem is that, at least to the best of my knowledge, there is no data indicating that anything of the sort involving a transgender person has ever happened. On the other side, the LGBT community points to this disconnect without realizing that they are just as deluded in their belief that their preferred bathroom rules are distilled straight out of objective Good and Evil. In fact, they are nothing but personal preferences, with no more legitimate normative authority than the different rules preferred by others. It seems to me that the term “nihilism” is better applied to this absurd state of affairs than to a correct understanding of what morality is and why it exists.
Suppose that in some future utopia the chimera of “moral realism” were finally exchanged for such a correct understanding, at least by most of us. It would change very little. Our moral emotions would still be there, and we would respond to them as we always have. “Moral relativism” would be no more prevalent than it is today, because it is not our nature to be moral relativists. However, we might have a fighting chance of coming up with a set of moral “customs” that most of us could accept, along with a similarly accepted way to change them if necessary. I would certainly prefer such a utopia to the moral obscurantism that prevails today. If nothing else it would tend to limit the moral exhibitionism and virtuous grandstanding that led directly to the ideological disasters of the 20th century, and yet still pass as the “enlightened” way to alter the moral rules that apply in bathrooms and elsewhere. Perhaps in such a utopia “nihilism” would be rejected even more firmly than it is today, because people would finally realize that, in spite of the subjective, emotional source of all moral rules, human societies can’t exist without them.