Posted on November 29th, 2015
There are few better demonstrations of the fact that the term Homo sapiens is an oxymoron than the results of our species’ attempts to “interpret” the innate emotional responses that are the source of all the gaudy manifestations of human morality. Moral emotions exist. Evolution by natural selection is the reason for their existence. If they did not exist, there would be no morality as we know it. In other words, the only reason for the illusion that Good and Evil are objects, things-in-themselves that don’t depend on any mind, human or otherwise, for their existence, is the fact that, over some period of time, that illusion made it more likely that the genes responsible for spawning it would survive and reproduce. Recently it has been amply demonstrated that, over a different period of time, under different conditions, the very same emotions spawned by the very same genes can accomplish precisely the opposite. In other words, they can promote their own destruction. Mother Nature, it would seem, has a fondness for playing practical jokes.
The elevation of colonialism in some circles to the status of Mother of all Evils is a case in point. It has long been the “root cause” of choice for all sorts of ills. Prominent among them lately has been Islamic terrorism, as may be seen here, here, here and here. Even prominent politicians have jumped on the bandwagon, and we find them engaged in the ludicrous pursuit of explaining to Islamic terrorists, who have been educated in madrassas and know the Quran by heart, that they are not “real Moslems.” It must actually be quite frustrating for the terrorists, who have insisted all along that they are acting on behalf of and according to the dictates of their religion. It also raises the question of how, if Islam is a “religion of peace,” all of north Africa, much of the Middle East outside of Arabia, Turkey, significant parts of Europe, Iran, etc., formerly parts of the Christian Roman Empire or the Zoroastrian Persian Empire, ever became Moslem. Of course, it was accomplished by military force, and the ensuing colonization of these countries resulted in the destruction of the “indigenous” cultures and traditions that were overrun. Interestingly, we seldom find this Moslem version of colonialism treated as a form of immorality. Apparently we are to assume that there is a statute of limitations on the application of the relevant moral principles.
Be that as it may, in bygone days colonialism was often also invoked as the “root cause” for the promiscuous massacres of the Communists, and is the “root cause” of choice for the ills, real or imagined, of all sorts of minorities as well. I have long maintained that Good and Evil have no objective existence. However, whether one agrees with that assertion or not, it seems only reasonable that the terms at least be defined in a way that is consistent with their evolutionary roots. In that case, the notion that colonialism was evil becomes absurd. It is yet another example of a morality inversion, characterized by the whimsical tendency of human moral emotions to stand on their heads in response to sufficiently drastic changes to the external environment.
What were the actual results of colonialism? We will limit our examination to white colonialism, as colonialism by other ethnic groups, although of frequent occurrence in the past, is not generally held to be such an “evil.” Rather, colonialism as practiced by other than whites is deemed a mere expression of “culture.” It would therefore be “racist” to consider it evil. In the first place, then, white colonialism has led to a vast expansion in the area of the planet inhabited primarily by whites. They are now the dominant ethnic groups on whole continents that they never knew existed a little over half a millennium ago. This must certainly be considered good if we are to define the Good consistently with the “root causes” of morality itself. Interestingly, colonialism was also good in this way for other ethnic groups. Sub-Saharan blacks, for example, now have a prominent presence over wide territories that they never would have seen in the absence of the white practice of carrying slaves to their colonies. It is unlikely that, if faced with the choice, blacks would trade a world that never experienced white colonialism for the more “evil” world we actually inhabit.
Even if one chooses to divorce morality entirely from its evolutionary roots, and assume that Good and Evil are independent entities floating about in the luminiferous aether with no biological strings attached whatsoever, it is not entirely obvious that white colonialism was an unmitigated evil. Indeed, if we are to accept the modern secular humanist take on objective morality, as outlined, for example, in Sam Harris’ The Moral Landscape, it would seem that the opposite is the case. According to this version of morality, “human flourishing” is the summum bonum. I would maintain that a vastly greater number of humans are flourishing today because of white colonialism than would otherwise be the case. Thanks to white colonialism, the continents on which its impact was greatest now support much larger populations of healthier people who live for longer times on average, and are less likely to die violent deaths than if it had not occurred. This, of course, is not necessarily true of every race involved. The aborigines of Tasmania, for example, were entirely wiped out, and there has probably been a significant decline in the population of the pre-Columbian inhabitants of North America. However, the opposite has been the case in Africa and India. In any case, if we are to believe the ideological shibboleths that often emanate from the same ideological precincts that gave rise to the latest versions of morality based on “human flourishing,” all these distinctions by race don’t matter, because race is a mere social construct.
I often wonder what makes modern secular Puritans imagine that they will be judged any differently by future generations than they are in the habit of judging the generations of the past. After all, the vast majority of the inhabitants of Great Britain, France, and the other major colonialist countries did not imagine that they were being deliberately immoral during the heyday of colonialism. On what basis is it justified to judge others out of the context of their time? No one has ever come up with a rational answer to that question, for the very good reason that no such basis is possible.
The proponents of colonialism left behind a great many books on the subject. Typically, they perceived colonialism as a benign pursuit that benefited the colonial peoples as much as the colonizers. There is an interesting chapter on the subject in Volume XII (The Latest Age) of the Cambridge Modern History (Chapter XX, The European Colonies), first published in 1910. In reading it, one finds no hint of evidence that the author of the chapter, a university professor who no doubt considered himself enlightened according to the standards of the time, perceived colonialism as other than a benign force, and an expression of the energy and economic growth of the colonizing countries. Some typical passages include,
The few years under present consideration form a brief period in this long process (of European colonization since the 15th century). Yet they have seen an awakened interest in colonization and an extension of the field of enterprise which give them a unique significance. The comparative tranquility of domestic and foreign affairs in most countries of Europe has favoured a great outburst of colonizing energy, for which the growth of population and industry has provided the principal motive. The growth of population has swollen the stream of emigration; the expansion of industry has increased the desire to control sources of supply for raw materials and markets for finished products. A rapid improvement in means of communication and transport has facilitated intercourse between distant parts of the world. A vast store of accumulated wealth in old countries has been available for investment in the new.
In other words, colonization was considered a manifestation of social progress. The rights of indigenous peoples were not simply ignored as is so often claimed today. It was commonly believed, and not without reason, that they, too, benefited from colonization. Epidemic diseases were controlled, pervasive intertribal warfare and the slave trade were ended, and the brutal mistreatment of women was discouraged. On the other hand, the abuse of native populations was also recognized. Quoting again from a section of the book dealing with the Belgian Congo, the author writes,
Its history would be a fine tale of European energy applied to the development of a tropical country, had not the work been marred by a cruel spirit of exploitation gaining the upper hand. The first ten years of its existence were a period of great activity, during which a marvelous change came over the land. Splendid pioneering work was done. Experienced missionaries and travelers explored the great streams. The drink traffic, the slave trade, and cannibalism, were much diminished. The ancient Arab dominion in Central Africa was overthrown after a hard and costly struggle (1890-3). Routes of communication were opened, and railway building commenced…
But it was by its treatment of the native peoples that the Congo State attained that evil eminence which accumulating proof shows it to have well deserved. The system of administration lent itself to abuses. Large powers were devolved upon men not always adequately paid or capable of bearing their responsibilities. The supervision of their activities in the interior was impossible from places so distant as Boma and Brussels. The native was wronged by the disregard of his system of land ownership and of the tribal rights to hunt and gather produce in certain areas, as well as by a system of compulsory labor in the collection of produce on behalf of the State, enforced by barbarous punishments and responsible for continual and devastating warfare… Finally, the Belgian Parliament taking up the question, the Congo State was in 1908 transferred to Belgium, and its rulers have thus become responsible to the public opinion of a nation.
Except, perhaps, during the most active periods of European competition for colonies during the last half of the 19th century, eventual independence was recognized not merely as an ideal but as practically inevitable. In the last paragraph of the chapter the author writes,
(Great Britain’s) colonial policy has been inspired by an understanding and a wise recognition of facts. Settlers in new countries form societies; such societies, as their strength grows, desire the control of their own life; common interests draw contiguous societies together, and union creates and fosters the sense of nationality. Perceiving the course of this development, the mother country has continually readjusted the ties that bound her to her colonies, so that they might be appropriate to the stage of growth which each colony had reached. Wherever possible, she has conceded to them the full control of their own affairs; and she has encouraged contiguous colonies to unite, so that in dimensions, resources, population, and economic strength, the indispensable material foundations of a self-governing state could be formed.
The author closes with sentiments that are likely to shock modern university professors out of their wits:
Slowly the British empire is shaping itself into a league of Anglo-Saxon peoples, holding under its sway vast tropical dependencies as well as many small communities of mixed race. Strong bonds of common loyalty, race, and history, as well as the need of cooperation for defense, unite the white peoples. But the course of progress has carried the empire to an unfamiliar point in political development. Loose and elastic in its structure, it may well take a new shape under the influence of external pressure, political and economic.
In other words, the author did not share the modern penchant among the “Anglo-Saxons” for committing ethnic suicide. In our own day, of course, while it is still perfectly acceptable for every other ethnic group on the planet to speak in a similar fashion, it has become a great sin for whites to do so. Far be it from me to challenge this development on moral grounds, for the simple reason that there are no moral grounds one way or the other. Similarly, this post is in no way intended to morally condone or serve as a form of moral apologetics for colonialism. There exists no objective basis for morally judging colonialism, or anything else, for that matter. I merely point out that the moral standards relating to colonialism have evolved over time. Beyond that, one might add that colonialism accomplished ends in harmony with the reasons that led to the evolution of moral emotions to begin with, whereas the manipulation of those emotions to condemn colonialism on illusory moral grounds accomplishes precisely the opposite. That is not at all the same thing as claiming that colonialism was Good, and anti-colonialism is evil. It is merely stating a fact.
One can certainly choose to oppose, and even actively fight against, colonialism, or anything else to which one happens to have an aversion. I merely suggest that, before one does so, one have a reasonably accurate understanding of the emotions that are the cause of the aversion, and why they exist. Moral emotions seem to point to objective things, Good and Evil, that are perceived as real, but aren’t. I don’t wish to imply that no one should ever act. I merely suggest that, before they do, they should understand the illusion.
Posted on November 26th, 2015
The history of the rise and fall of the Blank Slate is fascinating, and not only as an example of the pathological derailment of whole branches of science in favor of ideological dogmas. The continuing foibles of the “men of science” as they attempt to “readjust” that history are nearly as interesting in their own right. Their efforts at post-debacle damage control are a superb example of an aspect of human nature at work – tribalism. There is much at stake for the scientific “tribe,” not least of which is the myth of the self-correcting nature of science itself. What might be called the latest episode in the sometimes shameless, sometimes hilarious bowdlerization of history just appeared in the form of another PBS special: E. O. Wilson – Of Ants and Men. You can watch it online by clicking on the link.
Before examining the latest twists in this continuously evolving plot, it would be useful to recap what has happened to date. There is copious source material documenting not only the rise of the Blank Slate orthodoxy to hegemony in the behavioral sciences, but also the events that led to its collapse, not to mention the scientific apologetics that followed its demise. In its modern form, the Blank Slate manifested itself as a sweeping denial that innate behavioral traits, or “human nature,” had anything to do with human behavior beyond such basic functions as breathing and the elimination of waste. It was insisted that virtually everything about our behavior was learned, and a reflection of “culture.” By the early 1950s its control of the behavioral sciences was such that any scientist who dared to publish anything in direct opposition to it was literally risking his career. Many scientists have written of the prevailing atmosphere of fear and intimidation, and through the 1950s, ‘60s, and early ‘70s there was little in the way of “self-correction” emanating from within the scientific professions themselves.
The “correction,” when it came, was supplied by an outsider – a playwright by the name of Robert Ardrey who had taken an interest in anthropology. Beginning with African Genesis in 1961, he published a series of four highly popular books that documented the copious evidence for the existence of human nature, and alerted a wondering public to the absurd extent to which its denial had been pursued in the sciences. It wasn’t a hard sell, as that absurdity was obvious enough to any reasonably intelligent child. Following Ardrey’s lead, a few scientists began to break ranks, particularly in Europe where the Blank Slate had never achieved a level of control comparable to that prevailing in the United States. They included the likes of Konrad Lorenz (On Aggression, first published in German in 1963), Desmond Morris (The Naked Ape, 1967), Lionel Tiger (Men in Groups, 1969), and Robin Fox (The Imperial Animal, 1971, with Lionel Tiger). The Blank Slate reaction to these works, not to mention the copious coverage of Ardrey and the rest that began appearing in the popular media, was furious. Man and Aggression, a collection of Blank Slater rants directed mainly at Ardrey and Lorenz, with novelist William Golding thrown in for good measure, is an outstanding piece of historical source material documenting that reaction. Edited by Ashley Montagu and published in 1968, it typifies the usual Blank Slate MO – attacks on straw men combined with accusations of racism and fascism. That, of course, remains the MO of the “progressive” Left to this day.
The Blank Slaters could intimidate the scientific community, but not so the public at large. Thanks to Ardrey and the rest, by the mid-70s the behavioral sciences were in danger of becoming a laughing stock. Finally, in 1975, E. O. Wilson broke ranks and published Sociobiology, a book that was later to gain a notoriety in the manufactured “history” of the Blank Slate out of all proportion to its real significance. Of the 27 chapters, 25 dealt with animal behavior. Only the first and last chapters focused on human behavior. As far as human nature is concerned, nothing in those two chapters, nor in Wilson’s On Human Nature, published in 1978, could reasonably be described as other than an afterthought to the works of Ardrey and others that had appeared much earlier. Its real novelty wasn’t its content, but the fact that it was the first popular science book asserting the existence and importance of human nature by a scientist in the United States that reached a significant audience. This fact was well known to Wilson, not to mention his many Blank Slate detractors. In their diatribe Against Sociobiology, which appeared in the New York Review of Books in 1975, they wrote, “From Herbert Spencer, who coined the phrase ‘survival of the fittest,’ to Konrad Lorenz, Robert Ardrey, and now E. O. Wilson, we have seen proclaimed the primacy of natural selection in determining most important characteristics of human behavior.”
As we know in retrospect, the Blank Slaters were facing a long, losing battle against recognition of the obvious. By the end of the 1990s, even the editors at PBS began scurrying off the sinking ship. Finally, in the scientific shambles left in the aftermath of the collapse of the Blank Slate orthodoxy, Steven Pinker published his The Blank Slate. It was the first major attempt at historical revisionism by a scientist, and it contained most of the fairytales about the affair that are now widely accepted as fact. I had begun reading the works of Ardrey, Lorenz and the rest in the early 70s, and had followed the subsequent unraveling of the Blank Slate with interest. When I began reading The Blank Slate, I assumed I would find a vindication of the seminal role they had played in the 1960s in bringing about its demise. I was stunned to find that, instead, as far as Pinker was concerned, the 60s never happened! Ardrey was mentioned only a single time, and then only with the assertion that “the sociobiologists themselves” had declared him and Lorenz “totally and utterly” wrong! The “sociobiologist” given as the source for this amazing assertion was none other than Richard Dawkins! Leaving aside the fact that Dawkins was never a “sociobiologist,” and especially not in 1976 when he published The Selfish Gene, the book from which the “totally and utterly wrong” quote was lifted, he actually praised Ardrey in other parts of the book. He never claimed that Ardrey and the rest were “totally and utterly wrong” because they defended the importance of innate human nature, in Ardrey’s case the overriding theme of all his work. Rather, Dawkins limited that claim to their support of group selection, a fact that Pinker never gets around to mentioning in The Blank Slate. Dropping Ardrey, Lorenz and the rest down the memory hole, Pinker went on to assert that none other than Wilson had been the real knight in shining armor who had brought down the Blank Slate.
As readers who have followed this blog for a while are aware, the kicker came in 2012, in the form of E. O. Wilson’s The Social Conquest of Earth. In the crowning (and amusing) irony of this whole shabby affair, Wilson outed himself as more “totally and utterly wrong” than Ardrey and Lorenz by a long shot. He wholeheartedly embraced – group selection!
Which finally brings me to the latest episode in the readjustment of Blank Slate history. It turned up recently in the form of a PBS special entitled, E. O. Wilson – Of Ants and Men. It’s a testament to the fact that Pinker’s deification of Wilson has succeeded beyond his wildest dreams. The only problem is that now it appears Pinker himself is in danger of being tossed on the garbage heap of history. You see, the editors at the impeccably politically correct PBS picked up on the fact that, at least according to Wilson, group selection is responsible for the innate wellsprings of selflessness, love of others, at least in the ingroup, altruism, and all the other endearing characteristics that make the hearts of the stalwart leftists who call the tune at PBS go pitter-pat. Pinker, on the other hand, for reasons that should be obvious by now, must continue to reject group selection, lest his freely concocted “history” become a laughing stock. To see how all this plays out circa 2015, let’s take a closer look at the video itself.
Before I begin, I wish to assure the reader that I have the highest respect for Wilson himself. He is a great scientist, and his publication of Sociobiology was an act of courage regardless of its subsequent exploitation by historical revisionists. As we shall see, he has condoned the portrayal of himself as the “knight in shining armor” invented by Pinker, but that is a forgivable lapse by an aging scientist who is no doubt flattered by the “legacy” manufactured for him.
With that, on to the video. It doesn’t take long for us to run into the first artifact of the Wilson legend. At the 3:45 minute mark, none other than Pinker himself appears, informing us that Wilson, “changed the intellectual landscape by challenging the taboo against discussing human nature.” He did no such thing. Ardrey had very effectively “challenged the taboo” in 1961 with his publication of African Genesis, and many others had challenged it in the subsequent years before publication of Sociobiology. Pinker’s statement isn’t even accurate in terms of U.S. scientists, as several of them, in peripheral fields such as political science, had insisted on the existence and importance of human nature long before 1975, and others, like Tiger and Fox, although foreign born, had worked at U.S. universities. At the 4:10 mark Gregory Carr chimes in with the remarkable assertion that,
If someone develops a theory about human nature or biodiversity, and in common living rooms across the world, it seems like common sense, but in fact, a generation ago, we didn’t understand it, it tells you that that person, in this case Ed Wilson, has changed the way all of us view the world.
One can but shake one’s head at such egregious nonsense. In the first place, Wilson didn’t “develop a theory about human nature.” He simply repeated hypotheses that Darwin himself and many others since him had developed. There is nothing of any significance about human nature in any of his books that cannot also be found in the works of Ardrey. People “in common living rooms” a generation ago understood and accepted the concept of human nature perfectly well. The only ones who were still delusional about it at the time were the so-called “experts” in the behavioral sciences. Many of them were also just as aware as Wilson of the absurdity of the Blank Slate dogmas, but were too intimidated to challenge them.
My readers should be familiar by now with such attempts to inflate Wilson’s historical role, and the reasons for them. The tribe of behavioral scientists has never been able to bear the thought that their “science” was not “self-correcting,” and they would probably still be peddling the Blank Slate dogmas to this day if it weren’t for the “mere playwright,” Ardrey. All their attempts at historical obfuscation won’t alter that fact, and source material is there in abundance to prove it to anyone who has the patience to search it out and look at it. We first get an inkling of the real novelty in this particular PBS offering at around minute 53:15, when Wilson, referring to eusociality in ant colonies, remarks,
This capacity of an insect colony to act like a single super-organism became very important to me when I began to reconsider evolutionary theory later in my career. It made me wonder if natural selection could operate not only on individuals and their genes, but on the colony as a whole. That idea would create quite a stir when I published it, but that was much later.
Which brings us to the most amusing plot twist in this whole, sorry farce; PBS’ wholehearted embrace of group selection. Recall that Pinker’s whole rationalization for ignoring Ardrey was based on some good things Ardrey had to say about group selection in his third book, The Social Contract. The subject hardly ever came up in his interviews, and was certainly not the central theme of all his books, which, as noted above, was the existence and significance of human nature. Having used group selection to declare Ardrey an unperson, Pinker then elevated Wilson to the role of the “revolutionary” who was the “real destroyer” of the Blank Slate in his place. Wilson, in turn, in what must have seemed to Pinker a supreme act of ingratitude, embraced group selection more decisively than Ardrey ever thought of doing, making it a central and indispensable pillar of his theory regarding the evolution of eusociality. Here’s how the theme plays out in the video.
Wilson at 1:09:50
Humans don’t have to be taught to cooperate. We do it instinctively. Evolution has hardwired us for cooperation. That’s the key to eusociality.
Wilson at 1:13:40
Thinking on this remarkable fact (the evolution of eusociality) has made me reconsider in recent years the theory of natural selection and how it works in complex social animals.
Pinker at 1:18:50
Starting in the 1960s, a number of biologists realized that if you think rigorously about what natural selection does, it operates on replicators. Natural selection, Darwin’s theory, is the theory of what happens when you have an entity that can make a copy of itself, and so it’s very clear that the obvious target of selection in Darwin’s theory is the gene. That became close to a consensus among evolutionary biologists, but I think it’s fair to say that Ed Wilson was always ambivalent about that turn in evolutionary theory.
Wilson:
I never doubted that natural selection works on individual genes or that kin selection is a reality, but I could never accept that that is the whole story. Our group instincts, and those of other eusocial species, go far beyond the urge to protect our immediate kin. After a lifetime studying ant societies, it seemed to me that the group must also have an important role in evolution, whether or not its members are related to each other.
1:20:15 Jonathan Haidt:
So there’ve been a few revolutions in evolutionary thinking. One of them happened in the 1960s and ‘70s, and it was really captured in Dawkins famous book ‘The Selfish Gene,’ where if you just take the gene’s eye view, you have the simplest elements, and then you sort of build up from there, and that works great for most animals, but Ed was studying ants, and of course you can make the gene’s eye view work for ants, but when you’re studying ants, you don’t see the ant as the individual, you don’t see the ant as the organism, you see the colony or the hive as the entity that really matters.
At 1:20:55 Wilson finally spells it out:
Once you see a social insect colony as a superorganism, the idea that selection must work on the group as well as on the individual follows very naturally. This realization transformed my perspective on humanity, too. So I proposed an idea that goes all the way back to Darwin. It’s called group selection.
“Ed was able to see group selection in action. It’s just so clear in the ants, the bees, the wasps, the termites and the humans.”

Wilson: “The fact of group selection gives rise to what I call multilevel evolution, in which natural selection is operating both at the level of the individual and the level of the group…”

“And that got Ed into one of the biggest debates of his career, over multilevel selection, or group selection.”
Ed Wilson did not give up the idea that selection acted on groups, while most of his fellow biologists did. Then, several decades later, he revived that notion in a full-throated manifesto, which, I think it would be an understatement to say, did not convince his fellow biologists.
At this point, a picture of Wilson’s The Social Conquest of Earth appears on the screen, shortly followed by stills of a scowling Richard Dawkins. Then we see an image of the cover of his The Selfish Gene. The film describes Dawkins’ furious attack on Wilson for daring to promote group selection.
The brouhaha over group selection has brought me into conflict with defenders of the old faith, like Richard Dawkins and many others who believe that ultimately the only thing that counts in the evolution of complex behavior, is the gene, the selfish gene. They believe the gene’s eye view of social evolution can explain all of our groupish behavior. I do not.
And finally, at 1:25, after Wilson notes that Pinker is one of his opponents, Pinker reappears to deny the existence of group selection:
Most people would say that, if there’s a burning building, and your child is in one room and another child is in another room, then you are entitled to rescue your child first, right? There is a special bond between, say, parents and children. This is exactly what an evolutionary biologist would predict because any gene that would make you favor your child will have a copy of itself sitting in the body of that child. By rescuing your child the gene for rescuing children, so to speak, will be helping a copy of itself, and so those genes would proliferate in the population. Not just the extreme case of saving your child from a burning building but for being generous and loyal to your siblings, your very close cousins. The basis of tribalism, kinship, family feelings, have a perfectly sensible evolutionary basis. (i.e., kin selection)
At this point one can imagine Pinker gazing sadly at the tattered remains of his whole, manufactured “history” of the Blank Slate lying about like a collapsed house of cards, faced with the bitter realization that he had created a monster. Wilson’s group selection schtick was just too good for PBS to pass up. I seriously doubt whether any of their editors really understand the subject well enough to come up with a reasoned opinion about it one way or the other. However, how can you turn your nose up at group selection if, as Wilson claims, it is responsible for altruism and all the other “good” aspects of our nature, whereas the types of selection favored by Pinker, not to mention Dawkins, are responsible for selfishness and all the other “bad” parts of our nature?
And what of Ardrey, whose good words about group selection no longer seem quite as “totally and utterly wrong” as Pinker suggested when he swept him under the historical rug? Have the editors at PBS ever even heard of him? We know very well that they have, and that they are also perfectly well aware of his historical significance, because they went to the trouble of devoting a significant amount of time to him in another recent special covering the discovery of Homo naledi. It took the form of a bitter denunciation of Ardrey for supporting the “Killer Ape Theory,” a term invented by the Blank Slaters of yore to ridicule the notion that pre-human apes hunted and killed during the evolutionary transition from ape to man. This revealing lapse demonstrated the continuing strength of the obsession with the “unperson” Ardrey, the man who was “totally and utterly wrong.” That obsession continues, not only among ancient, unrepentant Blank Slaters, but among behavioral scientists in general who happen to be old enough to know the truth about what happened in the 15 years before Wilson published Sociobiology, in spite of Pinker’s earnest attempt to turn that era into an historical “Blank Slate.”
Dragging in Ardrey was revealing because, in the first place, it was irrelevant in the context of a special about Homo naledi. As far as I know, no one has published any theories about the hunting behavior of that species one way or the other. It was revealing in the second place because of the absurdity of bringing up the “Killer Ape Theory” at all. That straw man was invented back in the 60s, when it was universally believed, even by Ardrey himself, that chimpanzees were, as Ashley Montagu put it, “non-aggressive vegetarians.” That notion, however, was demolished by Jane Goodall, who observed chimpanzees both hunting and killing, not to mention their capacity for extremely aggressive behavior. Today, few people like to mention the vicious, ad hominem attacks she was subjected to at the time for publishing those discoveries, although those attacks, too, are amply documented for anyone who cares to look for them. In the ensuing years, even the impeccably PC Scientific American has admitted the reality of hunting behavior in early man. In other words, the “Killer Ape Theory” debate has long been over, and Ardrey, who spelled out his ideas on the subject in his last book, The Hunting Hypothesis, won it hands down.
Why does all this matter? It seems to me the integrity of historical truth is worth defending in its own right. Beyond that, there is much to learn from the Blank Slate affair and its aftermath regarding the integrity of science itself. It is not invariably self-correcting. It can become derailed, and occasionally outsiders must play an indispensable role in putting it back on the tracks. Ideology can trump reason and common sense, and it did in the behavioral sciences for a period of more than half a century. Science is not infallible. In spite of that, it is still the best way of ferreting out the truth our species has managed to come up with so far. We can’t just turn our back on it, because, at least in my opinion, all of the alternatives are even worse. As we do science, however, it would behoove us to maintain a skeptical attitude and watch for signs of ideology leaking through the cracks.
I note in passing that excellent readings of all of Ardrey’s books are now available at Audible.com.
Posted on November 10th, 2015 2 comments
Many pre-Darwinian philosophers realized that the source of human morality was to be found in innate “sentiments,” or “passions,” often speculating that they had been put there by God. Hume put the theory on a more secular basis. Darwin realized that the “sentiments” were there because of natural selection, and that human morality was the result of their expression in creatures with large brains. Edvard Westermarck, perhaps at the same time the greatest and the most unrecognized moral philosopher of them all, put it all together in a coherent theory of human morality, supported by copious evidence, in his The Origin and Development of the Moral Ideas.
Westermarck is all but forgotten today, probably because his insights were so unpalatable to the various academic and professional tribes of “experts on ethics.” They realized that, if Westermarck were right, and morality really is just the expression of evolved behavioral predispositions, they would all be out of a job. Under the circumstances, it’s interesting that his name keeps surfacing in modern works about evolved morality, innate behavior, and evolutionary psychology. For example, I ran across a mention of him in famous primatologist Frans de Waal’s latest book, The Bonobo and the Atheist. People like de Waal who know something about the evolved roots of behavior are usually quick to recognize the significance of Westermarck’s work.
Be that as it may, G. E. Moore, the subject of my last post, holds a far more respected place in the pantheon of moral philosophers. That’s to be expected, of course. He never suggested anything as disconcerting as the claim that all the mountains of books and papers they had composed over the centuries might as well have been written about the nature of unicorns. True, he did insist that everyone who had written about the subject of morality before him was delusional, having fallen for the naturalistic fallacy, but at least he didn’t claim that the subject they were writing about was a chimera.
Most of what I wrote about in my last post came from the pages of Moore’s Principia Ethica. That work was published in 1903. Nine years later he published another little book, entitled Ethics. As it happens, Westermarck’s Origin appeared between those two dates, in 1906. In all likelihood, Moore read Westermarck, because parts of Ethics appear to be direct responses to his book. Moore had only a vague understanding of Darwin and the implications of his work on the subject of human behavior. He did, however, understand what Westermarck meant when the latter wrote in the Origin,
If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth. It has been said by Bentham and others that moral principles cannot be proved because they are first principles which are used to prove everything else. But the real reason for their being inaccessible to demonstration is that, owing to their very nature, they can never be true. If the word “Ethics,” then, is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.
Now that got Moore’s attention. Responding to Westermarck’s theory, or something very like it, he wrote:
Even apart from the fact that they lead to the conclusion that one and the same action is often both right and wrong, it is, I think, very important that we should realize, to begin with, that these views are false; because, if they were true, it would follow that we must take an entirely different view as to the whole nature of Ethics, so far as it is concerned with right and wrong, from what has commonly been taken by a majority of writers. If these views were true, the whole business of Ethics, in this department, would merely consist in discovering what feelings and opinions men have actually had about different actions, and why they have had them. A good many writers seem actually to have treated the subject as if this were all that it had to investigate. And of course questions of this sort are not without interest, and are subjects of legitimate curiosity. But such questions only form one special branch of Psychology or Anthropology; and most writers have certainly proceeded on the assumption that the special business of Ethics, and the questions which it has to try to answer, are something quite different from this.
Indeed they have. The question is whether they’ve actually been doing anything worthwhile in the process. Note the claim that Westermarck’s views were “false.” This claim was based on what Moore called a “proof” that it couldn’t be true that appeared in the preceding pages. Unfortunately, this “proof” is transparently flimsy to anyone who isn’t inclined to swallow it because it defends the relevance of their “expertise.” Quoting directly from his Ethics, it goes something like this:
- It is absolutely impossible that any one single, absolutely particular action can ever be both right and wrong, either at the same time or at different times.
- If the whole of what we mean to assert, when we say that an action is right, is merely that we have a particular feeling towards it, then plainly, provided only we really have this feeling, the action must really be right.
- For if this is so, and if, when a man asserts an action to be right or wrong, he is always merely asserting that he himself has some particular feeling towards it, then it absolutely follows that one and the same action has sometimes been both right and wrong – right at one time and wrong at another, or both simultaneously.
- But if this is so, then the theory we are considering certainly is not true. (QED)
Note that this “proof” requires the positive assertion that it is possible to claim that an action can be right or wrong, in this case because of “feelings.” A second, similar proof, also offered in Chapter III of Ethics, “proves” that an action can’t possibly be right merely because one “thinks” it right, either. With that, Moore claims that he has “proved” that Westermarck, or someone with identical views, must be wrong. The only problem with the “proof” is that Westermarck specifically pointed out in the passage quoted above that it is impossible to make truth claims about “moral principles.” Therefore, it is out of the question that he could ever be claiming that any action “is right,” or “is wrong,” because of “feelings” or for any other reason. In other words, Moore’s “proof” is nonsense.
The fact that Moore was responding specifically to evolutionary claims about morality is also evident in the same Chapter of Ethics. Allow me to quote him at length.
…it is supposed that there was a time, if we go far enough back, when our ancestors did have different feelings towards different actions, being, for instance, pleased with some and displeased with others, but when they did not, as yet, judge any actions to be right or wrong; and that it was only because they transmitted these feelings, more or less modified, to their descendants, that those descendants at some later stage, began to make judgments of right and wrong; so that, in a sense, our moral judgments were developed out of mere feelings. And I can see no objection to the supposition that this was so. But, then, it seems also to be supposed that, if our moral judgments were developed out of feelings – if this was their origin – they must still at this moment be somehow concerned with feelings; that the developed product must resemble the germ out of which it was developed in this particular respect. And this is an assumption for which there is, surely, no shadow of ground.
In fact, there was a “shadow of ground” when Moore wrote those words, and the “shadow” has grown a great deal longer in our own day. Moore continues,
Thus, even those who hold that our moral judgments are merely judgments about feelings must admit that, at some point in the history of the human race, men, or their ancestors, began not merely to have feelings but to judge that they had them: and this alone means an enormous change.
Why was this such an “enormous change?” Why, of course, because as soon as our ancestors judged that they had feelings, then, suddenly those feelings could no longer be a basis for morality, because of the “proof” given above. Moore concludes triumphantly,
And hence, the theory that moral judgments originated in feelings does not, in fact, lend any support at all to the theory that now, as developed, they can only be judgments about feelings.
If Moore’s reputation among them is any guide, such “ironclad logic” is still taken seriously by today’s crop of “experts on ethics.” Perhaps it’s time they started paying more attention to Westermarck.
The Moral Philosophy of G. E. Moore, or Why You Don’t Need to Bother with Aristotle, Hegel, and Kant
Posted on November 7th, 2015 No comments
G. E. Moore isn’t exactly a household name these days, except perhaps among philosophers. You may have heard of his most famous concoction, though – the “naturalistic fallacy.” If we are to believe Moore, not only Aristotle, Hegel and Kant, but virtually every other philosopher you’ve ever heard of got morality all wrong because of it. He was the first one who ever got it right. On top of that, his books are quite thin, and he writes in the vernacular. When you think about it, he did us all a huge favor. Assuming he’s right, you won’t have to struggle with Kant, whose sentences can run on for a page and a half before you finally get to the verb at the end, and who is comprehensible, even to Germans, only in English translation. You won’t have to agonize over the correct interpretation of Hegel’s dialectic. Moore has done all that for you. Buy his books, which are little more than pamphlets, and you’ll be able to toss out all those thick tomes and learn all the moral philosophy you will ever need in a week or two.
Or at least you will if Moore got it right. It all hinges on his notion of the “Good-in-itself.” He claims it’s something like what philosophers call qualia. Qualia are the content of our subjective experiences, like colors, smells, pain, etc. They can’t really be defined, but only experienced. Consider, for example, the difficulty of explaining “red” to a blind person. Moore’s description of the Good is even more vague. As he puts it in his rather pretentiously named Principia Ethica,
Let us, then, consider this position. My point is that ‘good’ is a simple notion, just as ‘yellow’ is a simple notion; that, just as you cannot, by any manner of means, explain to any one who does not already know it, what yellow is, so you cannot explain what good is.
In other words, you can’t even define good. If that isn’t slippery enough for you, try this:
They (metaphysicians) have always been much occupied, not only with that other class of natural objects which consists in mental facts, but also with the class of objects or properties of objects, which certainly do not exist in time, are not therefore parts of Nature, and which, in fact, do not exist at all. To this class, as I have said, belongs what we mean by the adjective “good.” …What is meant by good? This first question I have already attempted to answer. The peculiar predicate, by reference to which the sphere of Ethics must be defined, is simple, unanalyzable, indefinable.
Or, as he puts it elsewhere, the Good doesn’t exist. It just is. Which brings us to the naturalistic fallacy. If, as Moore claims, Good doesn’t exist as a natural, or even a metaphysical, object, it can’t be defined with reference to such an object. Attempts to so define it are what he refers to as the naturalistic fallacy. That, in his opinion, is why every other moral philosopher in history, or at least all the ones whose names happen to turn up in his books, have been wrong except him. The fallacy is defined at Wiki and elsewhere on the web, but the best way to grasp what he means is to read his books. For example,
The naturalistic fallacy always implies that when we think “This is good,” what we are thinking is that the thing in question bears a definite relation to some one other thing.
That fallacy, I explained, consists in the contention that good means nothing but some simple or complex notion, that can be defined in terms of natural qualities.
To hold that from any proposition asserting “Reality is of this nature” we can infer, or obtain confirmation for, any proposition asserting “This is good in itself” is to commit the naturalistic fallacy.
In short, all the head scratching of all the philosophers over thousands of years about the question of what is Good has been so much wasted effort. Certainly, the average layman had no chance at all of understanding the subject, or at least he didn’t until the fortuitous appearance of Moore on the scene. He didn’t show up a moment too soon, either, because, as he explains in his books, we all have “duties.” It turns out that not only did the intuition “Good” pop up in his consciousness, more or less after the fashion of “yellow,” or the smell of a rose. He also “intuited” that it came fully equipped with the power to dictate to other individuals what they ought and ought not to do. Again, I’ll allow the philosopher to explain.
Our “duty,” therefore, can only be defined as that action, which will cause more good to exist in the Universe than any possible alternative… When, therefore, Ethics presumes to assert that certain ways of acting are “duties” it presumes to assert that to act in those ways will always produce the greatest possible sum of good.
But how on earth can we ever even begin to do our duty if we have no clue what Good is? Well, Moore is actually quite coy about explaining it to us, and rightly so, as it turns out. When he finally takes a stab at it in Chapter VI of Principia, it turns out to be paltry enough. Basically, it’s the same “pleasure,” or “happiness” that many other philosophers have suggested, only it’s not described in such simple terms. It must be part of what Moore describes as an “organic whole,” consisting not only of pleasure itself, for example, but also a consciousness capable of experiencing the pleasure, the requisite level of taste to really appreciate it, the emotional equipment necessary to react with the appropriate level of awe, etc. Silly old philosophers! They rashly assumed that, if the Good were defined as “pleasure,” it would occur to their readers that they would have to be conscious in order to experience it without them spelling it out. Little did they suspect the coming of G. E. Moore and his naturalistic fallacy.
When he finally gets around to explaining it to us, we gather that Moore’s Good is more or less what you’d expect the intuition of Good to be in a well-bred English gentleman endowed with “good taste” around the turn of the 20th century. His Good turns out to include nice scenery, pleasant music, and chats with other “good” people. Or, as he put it somewhat more expansively,
We can imagine the case of a single person, enjoying throughout eternity the contemplation of scenery as beautiful, and intercourse with persons as admirable, as can be imagined.
By far the most valuable things which we know or can imagine, are certain states of consciousness, which may be roughly described as the pleasures of human intercourse and the enjoyment of beautiful objects. No one, probably, who has asked himself the question, has ever doubted that personal affection and the appreciation of what is beautiful in Art or Nature, are good in themselves.
Really? No one? One can only surmise that Moore’s circle of acquaintance must have been quite limited. Unsurprisingly, Beethoven’s Fifth is in the mix, but only, of course, as part of an “organic whole.” As Moore puts it,
What value should we attribute to the proper emotion excited by hearing Beethoven’s Fifth Symphony, if that emotion were entirely unaccompanied by any consciousness, either of the notes, or of the melodic and harmonic relations between them?
It would seem, then, that even if you’re such a coarse person that you can’t appreciate Beethoven’s Fifth yourself, it is still your “duty” to make sure that it’s right there on everyone else’s smart phone.
Imagine, if you will, Mother Nature sitting down with Moore, holding his hand, looking directly into his eyes, and revealing to him in all its majesty the evolution of life on this planet, starting from the simplest, one-celled creatures more than four billion years ago, and proceeding through ever more complex forms to the almost incredible emergence of a highly intelligent and highly social species known as Homo sapiens. It all happened, she explains to him with a look of triumph on her face, because, over all those four billion years, the chain of life remained unbroken because the creatures that made up the links of that chain survived and reproduced. Then, with a serious expression on her face, she asks him, “Now do you understand the reason for the existence of moral emotions?” “Of course,” answers Moore, “they’re there so I can enjoy nice landscapes and pretty music.” (Loud forehead slap) Mother Nature stands up and walks away shaking her head, consoling herself with the thought that some more advanced species might “get it” after another million years or so of natural selection.
And what of Aristotle, Hegel and Kant? Throw out your philosophy books and forget about them. Imagine being so dense as to commit the naturalistic fallacy!
Posted on October 31st, 2015 1 comment
Reading the “news” can be a painful experience in our time. Most of it consists of a blend of sensationalism, human interest stories, accounts of the lives of various vapid celebrities, and attempts to inspire virtuous indignation based on a half-baked knowledge of some ideologically loaded issue or other. One finds very little that could be accurately described as useful knowledge about things that are likely to have a major impact on our lives. I generally find Fox News less painful to read than what is commonly described as the Mainstream Media because I happen to be emotionally conservative. However, I must admit that Fox can occasionally be more ham-handed than the competition when it comes to dishing out propaganda.
A story that recently turned up on the Fox website is a case in point. It happened to be about the Ivanpah solar generating system that was recently completed in California’s Mojave Desert. The word “solar” should enable most readers to predict the ideological slant on the story one is likely to find at Fox. Sure enough, the title of the story is, “Taxpayer-backed solar plant actually a carbon polluter.” In the article itself we learn that the plant,
…is producing carbon emissions at nearly twice the amount that compels power plants and companies to participate in the state’s cap-and-trade program.
In fact, the plant does emit CO2 because it burns natural gas to avoid damage to equipment and to serve as a baseline source of power to meet electricity needs at night or during cloudy days. A bit further on, we learn from a “research fellow at the Heartland Institute” named H. Sterling Burnett that,
…designers also erred in placing Ivanpah between the tallest mountains in the Mojave where there is significant cloud cover and dust which would interfere with the sunlight.
He adds that,
…They say it is green, but that assumes that there is a power source without any environmental impact.
I don’t find anything as egregious as actual lies in the article. Rather, Fox limits itself to “creative” use of the truth. For example, it may be quite true that the plant, “…is producing carbon emissions at nearly twice the amount that compels power plants and companies to participate in the state’s cap-and-trade program,” but it’s also true that it produces far less carbon per unit of electricity delivered than a purely fossil fuel fired plant, a fact that is left unsaid in spite of its much greater relevance to the underlying issue of climate change. A researcher at the Heartland Institute is quoted without mentioning that the institute is funded by the fossil fuel industry, and is considered a source of blatant disinformation by environmentalists. That charge may be unfair, but one can hardly claim that it is irrelevant and should be ignored. As for his claim that, “designers also erred in placing Ivanpah between the tallest mountains in the Mojave,” etc., I invite interested readers who may happen to visit Las Vegas to drive out and have a look at the plant. It’s actually quite a spectacular sight. It certainly doesn’t appear to be sitting in the shadow of towering mountains, and the cloud cover is generally minimal, as one can confirm by Googling nearby locations. As for the dust, one surmises that it would have been worse if the plant had been built on the Los Angeles side of the mountains. As for Burnett’s last remark, as far as I am aware not even the most wild-eyed and fanatical environmentalist has ever claimed that the description of a power source as “green” implies the assumption that it has no environmental impact at all.
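The “per unit of electricity delivered” point is easy to make concrete with a back-of-the-envelope calculation. The numbers below are rough, illustrative assumptions of my own, not measured data for Ivanpah or any particular plant; the conclusion holds for any plausible values in their vicinity:

```python
# Back-of-the-envelope emissions-intensity comparison (kg CO2 per MWh).
# All figures are rough, illustrative assumptions, not measured data.

aux_gas_co2_tonnes = 50_000   # assumed annual CO2 from the plant's auxiliary gas burners
net_generation_mwh = 700_000  # assumed annual net electricity delivered

# Emissions intensity of the solar plant, counting its gas-assist CO2.
solar_intensity = aux_gas_co2_tonnes * 1000 / net_generation_mwh  # kg CO2 / MWh

gas_plant_intensity = 450     # typical purely gas-fired plant, kg CO2 / MWh
coal_plant_intensity = 1000   # typical coal-fired plant, kg CO2 / MWh

print(f"solar with gas assist: {solar_intensity:.0f} kg CO2/MWh")
print(f"gas-fired plant:       {gas_plant_intensity} kg CO2/MWh")
print(f"coal-fired plant:      {coal_plant_intensity} kg CO2/MWh")
```

Even with a generous allowance for the auxiliary gas burned, the per-MWh figure comes out at a small fraction of a purely fossil-fired plant’s, which is precisely the comparison the article leaves unsaid.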
The reality is that the plant is reasonably sited given the location of the major consumers of the power it produces. Given the current limitations in our ability to store and distribute the excess power produced by renewable energy sources like wind and solar, some form of baseline power is always necessary to ensure a steady supply of electricity when the wind isn’t blowing or the sun isn’t shining. My own choice for that purpose would be nuclear, but given the regulatory hurdles in the way, that would probably have been impractical for Ivanpah. Natural gas produces significantly less CO2 than, for example, coal, and was probably the best choice.
In short, the article is an example of what I have referred to above as “attempts to inspire virtuous indignation based on a half-baked knowledge of some ideologically loaded issue or other.” If the goal at Fox had been to inform rather than propagandize, they would have provided the reader with “fair and balanced” information about the cost of electricity produced at Ivanpah compared to alternative sources, the amount actually produced in comparison with predictions, the amount of CO2 it produces per unit of electricity in comparison to coal or oil fired plants, the relative advantages of solar and nuclear in limiting greenhouse gas emissions, etc. None of what I write here should be taken to imply a belief that solar should be preferred to any alternative. In fact, my own choice would be to reduce the regulatory burden to rational levels and build next generation nuclear plants instead. However, regardless of the technology involved, I would prefer to see it judged on a level playing field.
I know, I know, the MSM is hardly innocent of slanting the news. Indeed, its hysterical response after the announcement that Sarah Palin would be John McCain’s running mate puts anything I have ever seen at Fox News completely in the shade. Generally, however, it tends to be at least marginally more subtle. For example, instead of attempting to slant important news stories that don’t fit its narrative, it will often simply ignore them. If the story is too big to ignore, it will vilify the messenger instead. Of course, such techniques reflect a greater maturity and experience in handling agitprop than is available to the team at Fox News. However, that doesn’t prevent them from learning by example. Given that we will be subjected to propaganda no matter which “news” source we choose to follow, we should at least be able to demand that it not be crudely done.
Posted on October 30th, 2015 2 comments
One cannot make truth claims about morality because moral perceptions are subjective manifestations of evolved behavioral traits. That fact should have been obvious to any rational human being shortly after the publication of The Origin of Species in 1859. It was certainly obvious enough to Darwin himself. Edvard Westermarck spelled it out for anyone who still didn’t get it in his The Origin and Development of Moral Ideas, published in 1906. More than a century later one might think it should be obvious to any reasonably intelligent child. Alas, most of us still haven’t caught on. We still take our occasional fits of virtuous indignation seriously, and expect everyone else to take them seriously, too. As for the “experts” who have assumed the responsibility of explaining to the rest of us when our fits are “really” justified, and when not, well, it seems they’ve never heard of a man named Darwin. Or at least it does to anyone who takes the trouble to thumb through the pages of the journal Ethics.
You might describe Ethics as a playground for academic practitioners of moral philosophy. They use it to regale each other with articles full of rarefied hair splitting and arcane jargon describing the flavor of morality they happen to prefer at the moment. Of course, it also serves as a venue for accumulating the publications upon which academic survival depends. Look through the articles in any given issue, and you’ll find statements like the following:
The reasons why actions are right or wrong sometimes are relatively straightforward, and then explicit moral understanding may be quite easy to achieve.
Since almost all civilians are innocent in war, and since killing innocent civilians is worse than killing soldiers, killing civilians is worse than killing soldiers.
We are constrained, it seems, not only not to treat others in certain ways, but to do so because they have the moral standing to demand that we do so, and to hold us accountable for wronging them if we fail.
Some deontologists claim that harm-enabling is a species of harm-allowing. Others claim that while harm-enabling is properly classified as a species of harm-doing, it is nonetheless morally equivalent, all else equal, to harm-allowing.
Do you notice the common thread here? That’s right! All these statements are dependent on the tacit assumption that there actually is such a thing as moral truth. In the first that assumption comes in the form of a statement that implies that what we call “good” and “evil” actually exist as objective things. In the second it comes in the form of an assumption that there is an objective way to determine guilt or innocence. In the third it manifests itself as a belief that the moral emotions can jump out of the skull of one individual and acquire “standing,” so that they apply to other individuals as well. In the fourth, it turns up in the form of a standard by which it can be determined whether acts are “morally equivalent” or not. Westermarck cut through the fog obfuscating the basis of such claims in the first chapter of his book. As he put it,
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion. The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.
In other words, all the learned articles on the merits of this or that moral system in the pages of Ethics and similar journals are more or less the equivalent of a similar number of articles on the care and feeding of unicorns, or the number of persons, natures and wills of imaginary super-beings. Why don’t these people face the obvious? Well, perhaps first and foremost, because it would put them out of a job. Beyond that, all their laboriously acquired “expertise,” would become as futile as the expertise of physicians in the 18th century on the proper technique for bleeding patients suffering from smallpox. For that matter, most of them probably believe their own cant. As Julius Caesar, among many others, pointed out long ago, human beings tend to believe what they want to believe.
Morality is what it is, and won’t become something different even if the articles in learned journals on the subject multiply until the stack reaches the moon. What would happen if the whole world suddenly accepted the fact? Very little, I suspect. We don’t behave morally the way we do because of the scribblings of this or that philosopher. We behave the way we do because that is our nature. Accepting the truth about morality wouldn’t result in a chaos of moral relativism, or an astronomical increase in crime, or even a sudden jolt of the body politic to the right or the left of the political spectrum. With luck, a few people might start considering the implications of the truth, and point out that all the virtue posturing and outbursts of pious wrath that are such a pervasive feature of the age we live in are more or less equivalent to the tantrums of children. The result might be a world that is marginally less annoying to live in. I personally wouldn’t mind living in a world in which the posturing of moral buffoons had become more a source of amusement than annoyance.
Posted on October 27th, 2015 2 comments
According to an article that just appeared in Science magazine, scientists in Germany have completed building a stellarator by the name of Wendelstein 7-X (W7-X), and are seeking regulatory permission to turn the facility on in November. If you can’t get past the Science paywall, here’s an article in the popular media with some links. Like the much bigger ITER facility now under construction at Cadarache in France, W7-X is a magnetic fusion device. In other words, its goal is to confine a plasma of heavy hydrogen isotopes at temperatures much hotter than the center of the sun with powerful magnetic fields in order to get them to fuse, releasing energy in the process. There are significant differences between stellarators and the tokamak design used for ITER, but in both approaches the idea is to hold the plasma in place long enough to get significantly more fusion energy out than was necessary to confine and heat the plasma. Both approaches are probably scientifically feasible. Both are also white elephants, and a waste of scarce research dollars.
The problem is that both designs have an Achilles heel. Its name is tritium. Tritium is a heavy isotope of hydrogen with a nucleus containing a proton and two neutrons instead of the usual lone proton. Fusion reactions between tritium and deuterium, another heavy isotope of hydrogen with a single neutron in addition to the usual proton, begin to occur fast enough to be attractive as an energy source at plasma temperatures and densities much less than would be necessary for any alternative reaction. The deuterium-tritium, or DT, reaction will remain the only feasible one for both stellarator and tokamak fusion reactors for the foreseeable future. Unfortunately, tritium occurs in nature in only tiny trace amounts.
The question is, then, where do you get the tritium fuel to keep the fusion reactions going? Well, in addition to a helium nucleus, the DT fusion reaction produces a fast neutron. These can react with lithium to produce tritium. If a lithium-containing blanket could be built surrounding the reaction chamber in such a way as to avoid interfering with the magnetic fields, and yet thick enough and close enough to capture enough of the neutrons, then it should be possible to generate enough tritium to replace that burned up in the fusion process. It sounds complicated but, again, it appears to be at least scientifically feasible. However, it is by no means as certain that it is economically feasible.
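To get a feel for the scale of the breeding problem described above, a back-of-the-envelope calculation helps. The sketch below uses the standard figure of about 17.6 MeV released per DT fusion; the 3 GW fusion power is an illustrative assumption, not a number taken from any particular reactor design.

```python
# Back-of-the-envelope tritium consumption for a DT fusion plant.
# Physical constants are standard; the 3 GW figure is an illustrative
# assumption, not drawn from any specific reactor design.
AVOGADRO = 6.022e23      # atoms per mole
MEV_TO_J = 1.602e-13     # joules per MeV
E_DT_MEV = 17.6          # energy released per D-T fusion reaction (MeV)
T_MOLAR_MASS = 3.016     # grams per mole of tritium

def tritium_burn_rate(fusion_power_w):
    """Grams of tritium consumed per day at a given fusion power (watts)."""
    reactions_per_s = fusion_power_w / (E_DT_MEV * MEV_TO_J)
    atoms_per_day = reactions_per_s * 86400
    return atoms_per_day * T_MOLAR_MASS / AVOGADRO

# A hypothetical 3 GW (thermal) fusion core burns on the order of
# half a kilogram of tritium every day:
print(f"~{tritium_burn_rate(3e9):.0f} g of tritium per day")
```

Since world inventories of tritium are measured in tens of kilograms, a power plant on this scale would exhaust them in a matter of weeks unless the lithium blanket breeds slightly more than one new tritium atom for every one burned.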
Consider what we’re dealing with here. Tritium is an extremely slippery material that can pass right through walls of some types of metal. It is also highly radioactive, with a half-life of about 12.3 years. It will be necessary to find some way to efficiently extract it from the lithium blanket, allowing none of it to leak into the surrounding environment. If any of it gets away, it will be easily detectable. The neighbors are sure to complain and, probably, lawyer up. Again, all this might be doable. The problem is that it will never be doable at a low enough cost to make fusion reactor designs based on these approaches even remotely economically competitive with the non-fossil alternative sources of energy that will be available for, at the very least, the next several centuries.
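The 12.3-year half-life mentioned above also means that tritium cannot simply be stockpiled against future need. A minimal sketch of the decay arithmetic:

```python
import math

# Tritium decays with a half-life of about 12.3 years, so any stored
# inventory shrinks noticeably on the timescale of a plant's fuel cycle.
HALF_LIFE_YEARS = 12.3

def fraction_remaining(years):
    """Fraction of a tritium inventory left after the given time."""
    return math.exp(-math.log(2) * years / HALF_LIFE_YEARS)

# Roughly 5.5% of a stockpile decays away every year:
loss_per_year = 1 - fraction_remaining(1)
print(f"~{100 * loss_per_year:.1f}% lost per year")
```

In other words, the breeding blanket has to replace not only the tritium burned in the plasma but also the steady losses to radioactive decay and to whatever escapes the extraction loop.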
What’s that? Reactor design studies by large and prestigious universities and corporations have all come to the conclusion that these magnetic fusion beasts will be able to produce electricity at least as cheaply as the competition? I don’t think so. I’ve participated in just such a government-funded study, conducted by a major corporation as prime contractor, with several other prominent universities and corporations participating as subcontractors. I’m familiar with the methodology used in several others. In general, it’s possible to make the cost of electricity come out at whatever figure you choose, within reason, using the most approved methods and the most sound project management and financial software. If the government is funding the work, it can be safely assumed that they don’t want to hear something like, “Fuggedaboudit, this thing will be way too expensive to build and run.” That would make the office that funded the work look silly, and the fusion researchers involved in the design look like welfare queens in white coats. The “right” cost numbers will always come out of these studies in the end.
I submit that a better way to come up with a cost estimate is to use a little common sense. Do you really think that a commercial power company will be able to master the intricacies of tritium production and extraction from the vicinity of a highly radioactive reaction chamber at anywhere near the cost of, say, wind and solar combined with next generation nuclear reactors for baseload power? If you do, you’re a great deal more optimistic than me. W7-X cost a billion euros. ITER is slated to cost 13 billion, and will likely come in at well over that. With research money hard to come by in Europe for much worthier projects, throwing amounts like that down a rat hole doesn’t seem like a good plan.
All this may come as a disappointment to fusion enthusiasts. On the other hand, you may want to consider the fact that, if fusion had been easy, we would probably have managed to blow ourselves up with pure fusion weapons by now. Beyond that, you never know when some obscure genius might succeed in pulling a rabbit out of their hat in the form of some novel confinement scheme. Several companies claim they have sure-fire approaches that are so good they will be able to dispense with tritium entirely in favor of more plentiful, naturally occurring isotopes. See, for example, here, here, and here, and the summary at the Next Big Future website. I’m not optimistic about any of them, either, but you never know.
Posted on October 24th, 2015 No comments
Europe is an amazing sight these days. The leftists are doing what leftists do – fighting to eliminate any semblance of recognizable borders or national sovereignty, encouraging hordes of culturally alien immigrants to pour into the continent in the process. All this is being done in the name of “morality,” more or less in the same sense as one might jump off a cliff in the name of “getting exercise.” Leftists, whether they nominally belong to the “conservative” or “liberal” parties, control the media, the schools, the churches, and state power. But in spite of an unprecedented barrage of propaganda from all those sources, the populations of the countries concerned are starting to demonstrate a slight uneasiness, or, if you will, common sense. They know that, historically, allowing the numbers of unassimilable aliens in one’s country to increase beyond a certain point has invariably resulted in violent social unrest, and occasionally civil war. They would prefer to avoid those outcomes. Denied any democratic means of expressing their opinions via, for example, plebiscites, some of them have taken to the streets in protest. The response of their masters has been remarkable. Aware that they lack any semblance of a democratic mandate for the profound and likely irreversible changes they have been making to Europe’s demographic and cultural landscape, and also aware that the people are not with them, they have reacted with what one might describe as a form of hysteria.
Germany, of course, has been taking a leading role in destabilizing the continent in keeping with time-honored tradition. As I have German relatives and a German wife, I pay particular attention to happenings there. As examples of what I have described above as the hysteria of the string pullers in that country, one might consider the following:
- Some of the first German citizens to take to the streets were loosely organized under the rubric of “PEGIDA” (Patriotische Europäer gegen die Islamisierung des Abendlandes, or Patriotic Europeans against the Islamization of the Occident). Several similar groups have emerged since then. For the most part, they consist of citizens who simply gather in the streets and occasionally conduct peaceful marches. In other words, they are people who “peaceably assemble and petition the government for redress of grievances.” According to the Frankfurter Allgemeine Zeitung (FAZ), the response has been to intimidate them with threats of surveillance by the local equivalent of the FBI, rationalized by the claim that they are all merely puppets in the hands of right wing fringe elements.
- The “respectable” and “conservative” FAZ also “informed” its readers about PEGIDA by publishing an interview with one Hajo Funke, described as an “extremism researcher.” Readers are left trembling over accounts of supposed “connections” between PEGIDA and the knife attack on Cologne’s newly elected mayor by a right wing extremist. U.S. readers should be familiar with this rather hackneyed tactic of claiming that whatever heinous crimes can be exploited for the purpose were “inspired” by whatever group one wants to smear.
- Media giant ARD ran a story about a “chain of lights,” made up of citizens holding candles and torches to welcome immigrants. Some of the supposed images of the event turned out to be fake, and were actually taken at a different event back in 2003.
- Not to be outdone, Der Spiegel, Germany’s largest news magazine, cites supposed incidents of “hatemongering” against immigrants on YouTube, and “inciting the populace” on Facebook, with allusions to ongoing government investigations of the “extremists.” Focus magazine chimes in that one of these Facebook extremists has just been sentenced to more than two years in prison for “agitating against immigrants.” That should “get his mind right.”
Anyone who suggests that the government might want to assume some elementary level of control over the borders and pause in implementing its radical policies until the citizens have been allowed to weigh in on the matter is commonly described in the German media as a “hater.” This is particularly true of any mention of the subject in Der Spiegel. That’s a bit rich considering that Der Spiegel takes the cake among German hatemongers in this century, and would have gotten at least an honorable mention in the last.
Der Spiegel was in the very vanguard of the lucrative game of peddling hate against the United States during the latest European orgasm of anti-Americanism, which reached its peak about a decade ago. Many of the most egregious examples were documented on Davids Medienkritik, now mothballed but still an excellent archive of historical material. I encourage readers to visit the site and page back to the posts prior to, say, 2008. Among other things, Medienkritik put together a collage of Spiegel covers that pretty much says it all when it comes to hatred.
Have a look and you’ll see examples of some of Spiegel’s favorite quasi-racist anti-American stereotypes. Of course, the gun nut and religious fanatic are there, as well as such favorite themes as Americans exploiting German workers, torturing prisoners, trading “blood for oil,” etc. Such relentlessly negative coverage of the United States occasionally reached levels that can only be described as fanatical, crowding out virtually all other news on Spiegel’s website.
At the crest of the anti-American wave in Germany, one found similar “news” stories in virtually every German publication worth mentioning, from the left wing Der Spiegel to the “conservative” FAZ to the neo-Nazi Deutsche National-Zeitung. Standing bravely in opposition to this wave of xenophobic hate, calling for some modicum of rational and fair treatment of the United States, were a few little bloggers. These people had nothing to gain from resisting the hatemongers, went almost completely unnoticed in the United States, and were subjected to vilification and hacking attacks in their own country. They certainly deserve our gratitude. As it happens, one of the most active of these little blogs went by the name of Politically Incorrect. Its editor was a reliable voice against the pervasive peddling of hate at Spiegel and elsewhere. The blog still exists. It should come as no surprise that it is now taking a stand against the suicidal policies of the German regime.
Of course, according to the editors of Der Spiegel, Politically Incorrect’s resistance to the uncontrolled deluge of “asylum seekers” lands it among the “inciters of the German Volk,” the “promoters of murder,” the “right wing extremists,” the “neo-Nazis,” and, in a word, the “haters.” In fact, the real haters in Germany are to be found elsewhere. Readers should find a clue about where to look for them if they take a close look at Medienkritik’s collage of magazine covers.
I noted in my recent posts on James Burnham how well he exposed the sources of the current push to eliminate borders and allow the free movement of human populations across the globe in liberal fantasies of universal human brotherhood. I can think of no better demonstration of the delusional nature of this goal than the spectacle of the bitter and fanatical hatreds of the very people who are foremost in attempting to force it down the throats of their fellow citizens. Their hate hasn’t gone anywhere. They’ve merely found a different outgroup to hate, in the form of anyone who dares to oppose their ideological shibboleths. And in the end, that’s why their current experiment in destabilizing their own countries is most unlikely to end well. As the rage of these “anti-haters” against anyone who stands in their way becomes ever more hysterical, they expose themselves as the most virulent haters of all.
Posted on October 17th, 2015 4 comments
There’s another thing about James Burnham’s Suicide of the West that’s quite fascinating: his take on human nature. In fact, Chapter III is entitled “Human Nature and the Good Society.” Here are a few excerpts from that chapter:
However varied may be the combination of beliefs that it is psychologically possible for an individual liberal to hold, it remains true that liberalism is logically committed to a doctrine along the lines that I have sketched: viewing human nature as not fixed but plastic and changing; with no pre-set limit to potential development; with no innate obstacle to the realization of a society of peace, freedom, justice and well-being. Unless these things are true of human nature, the liberal doctrine and program for government, education, reform and so on are an absurdity.
But in the face of what man has done and does, it is only an ideologue obsessed with his own abstractions who can continue to cling to the vision of an innately uncorrupt, rational and benignly plastic human nature possessed of an unlimited potential for realizing the good society.
Quite true, which makes it all the more remarkable that virtually all the “scientists” in the behavioral “sciences” at the time Burnham wrote these lines were “clinging to that vision,” at least in the United States. See, for example, The Triumph of Evolution, in which one of these “men of science,” author Hamilton Cravens, documents the fact. Burnham continues,
No, we must repeat: if human nature is scored by innate defects, if the optimistic account of man is unjustified, then is all the liberal faith in vain.
Here we get a glimpse of the reason that the Blank Slaters insisted so fanatically that there is no such thing as human nature, at least as commonly understood, for so many years, in defiance of all reason, and despite the fact that any 10 year old could have told them their anthropological theories were ludicrous. The truth stood in the way of their ideology. Therefore, the truth had to yield.
All this begs the question of how, as early as 1964, Burnham came up with such a “modern” understanding of the Blank Slate. Reading on in the chapter, we find some passages that are even more intriguing. Have a look at this:
It is not merely the record of history that speaks in unmistakable refutation of the liberal doctrine of man. Ironically enough – ironically, because it is liberalism that has maintained so exaggerated a faith in science – almost all modern scientific studies of man’s nature unite in giving evidence against the liberal view of man as a creature motivated, once ignorance is dispelled, by the rational search for peace, freedom and plenty. Every modern school of biology and psychology and most schools of sociology and anthropology conclude that men are driven chiefly by profound non-rational, often anti-rational, sentiments and impulses, whose character and very existence are not ordinarily understood by conscious reason. Many of these drives are aggressive, disruptive, and injurious to others and to society.
The bolding and italics are mine. How on earth did Burnham come up with such ideas? By all means, dear reader, head for your local university library, fish out the ancient microfiche, and search through the scientific and professional journals of the time yourself. Almost without exception, the Blank Slate called the tune. Clearly, Burnham didn’t get the notion that “almost all modern scientific studies of man’s nature” contradicted the Blank Slate from actually reading the literature himself. Where, then, did he get it? Only Burnham and the wild goose know, and Burnham’s dead, but my money is on Robert Ardrey. True, Konrad Lorenz’ On Aggression was published in Germany in 1963, but it didn’t appear in English until 1966. The only other really influential popular science book published before Suicide of the West that suggested anything like what Burnham wrote in the above passage was Ardrey’s African Genesis, published in 1961.
What’s that you say? I’m dreaming? No one of any significance ever challenged the Blank Slate orthodoxy until E. O. Wilson’s stunning and amazing publication of Sociobiology in 1975? I know, it must be true, because it’s all right there in Wikipedia. As George Orwell once said, “He who controls the present controls the past.”
Posted on October 16th, 2015 2 comments
James Burnham was an interesting anthropological data point in his own right. A left wing activist in the 30’s, he eventually became a Trotskyite. By the 50’s, however, he had completed an ideological double back flip to conservatism, and became a Roman Catholic convert on his deathbed. He was an extremely well-read intellectual, and a keen observer of political behavior. His most familiar book is The Managerial Revolution, published in 1941. Among others, it strongly influenced George Orwell, who had something of a love/hate relationship with Burnham. For example, in an essay in Tribune magazine in January 1944 he wrote,
Recently, turning up a back number of Horizon, I came upon a long article on James Burnham’s Managerial Revolution, in which Burnham’s main thesis was accepted almost without examination. It represented, many people would have claimed, the most intelligent forecast of our time. And yet – founded as it was on a belief in the invincibility of the German army – events have already blown it to pieces.
A bit over a year later, in February 1945, however, we find Burnham had made more of an impression on Orwell than the first quote implies. In another essay in the Tribune he wrote,
…by the way the world is actually shaping, it may be that war will become permanent. Already, quite visibly and more or less with the acquiescence of all of us, the world is splitting up into the two or three huge super-states forecast in James Burnham’s Managerial Revolution. One cannot draw their exact boundaries as yet, but one can see more or less what areas they will comprise. And if the world does settle down into this pattern, it is likely that these vast states will be permanently at war with one another, although it will not necessarily be a very intensive or bloody kind of war.
Of course, these super-states later made their appearance in Orwell’s most famous novel, 1984. However, Orwell was right about Burnham the first time. Burnham had an unfortunate penchant for making wrong predictions, often based on the assumption that transitory events must represent a trend that would continue into the indefinite future. For example, impressed by the massive industrial might brought to bear by the United States during World War II, and its monopoly of atomic weapons, he suggested in The Struggle for the World, published in 1947, that we immediately proceed to force the Soviet Union to its knees, and establish a Pax Americana. A bit later, in 1949, impressed by a hardening of the U.S. attitude towards the Soviet Union after the war, he announced The Coming Defeat of Communism in a book of that name. He probably should have left it at that, but reversed his prognosis in Suicide of the West, which appeared in 1964. By that time it seemed to Burnham that the United States had become so soft on Communism that the defeat of Western civilization was almost inevitable. The policy of containment could only delay, but not stop, the spread of Communism, and in 1964 it seemed that once a state had fallen behind the Iron Curtain it could never throw off the yoke.
Burnham didn’t realize that, in the struggle with Communism, time was actually on our side. A more far-sighted prophet, a Scotsman by the name of Sir James Mackintosh, had predicted in the early 19th century that the nascent versions of Communism then already making their appearance would eventually collapse. He saw that the Achilles heel of what he recognized was really a secular religion was its ill-advised proclamation of a coming paradise on earth, where it could be fact-checked, instead of in the spiritual realms of the traditional religions, where it couldn’t. In the end, he was right. After they had broken 100 million eggs, people finally noticed that the Communists hadn’t produced an omelet after all, and the whole, seemingly impregnable edifice collapsed.
One thing Burnham did see very clearly, however, was the source of the West’s weakness – liberalism. He was well aware of its demoralizing influence, and its tendency to collaborate with the forces that sought to destroy the civilization that had given birth to it. Inspired by what he saw as an existential threat, he carefully studied and analyzed the type of the western liberal, and its evolution away from the earlier “liberalism” of the 19th century. Therein lies the real value of his Suicide of the West. It still stands as one of the greatest analyses of modern liberalism ever written. The basic characteristics of the type he described are as familiar more than half a century later as they were in 1964. And this time his predictions regarding the “adjustments” in liberal ideology that would take place as its power expanded were spot on.
In Chapters III-V of the book, Burnham developed nineteen elements of the “more or less systematic set of ideas, theories and beliefs about society” characteristic of the liberal syndrome, and then listed them, along with possible contrary beliefs, in Chapter VII. Some of them have changed very little since Burnham’s day, such as,
It is society – through its bad institutions and its failure to eliminate ignorance – that is responsible for social evils. Our attitude toward those who embody these evils – of crime, delinquency, war, hunger, unemployment, communism, urban blight – should not be retributive but rather the permissive, rehabilitating, education approach of social service; and our main concern should be the elimination of the social conditions that are the source of the evils.
Since there are no differences among human beings considered in their political capacity as the foundation of legitimate, that is democratic, government, the ideal state will include all human beings, and the ideal government is world government.
The goal of political and social life is secular: to increase the material and functional well-being of humanity.
Some of the 19 have begun to change quite noticeably since the publication of Suicide of the West in just the ways Burnham suggested. For example, items 9 and 10 on the list reflect a classic version of the ideology that would have been familiar to and embraced by “old school” liberals like John Stuart Mill:
Education must be thought of as a universal dialogue in which all teachers and students above elementary levels may express their opinions with complete academic freedom.
Politics must be thought of as a universal dialogue in which all persons may express their opinions, whatever they may be, with complete freedom.
Burnham had already noticed signs of erosion in these particular shibboleths in his own day, as liberals gained increasing control of academia and the media. As he put it,
In both Britain and the United States, liberals began in 1962 to develop the doctrine that words which are “inherently offensive,” as far-Right but not communist words seem to be, do not come under the free speech mantle.
In our own day of academic safe spaces and trigger warnings, there is certainly no longer anything subtle about this ideological shift. Calls for suppression of “offensive” speech have now become so brazen that they have spawned divisions within the liberal camp itself. One finds old school liberals of the Berkeley “Free Speech Movement” days resisting Gleichschaltung with the new regime, looking on with dismay as speaker after speaker is barred from university campuses for suspected thought crime.
As noted above, Communism imploded before it could overwhelm the Western democracies, but the process of decay goes on. Nothing about the helplessness of Europe in the face of the current inundation by third world refugees would have surprised Burnham in the least. He predicted it as an inevitable expression of another fundamental characteristic of the ideology – liberal guilt. Burnham devoted Chapter 10 of his book to the subject, and noted therein,
Along one perspective, liberalism’s reformist, egalitarian, anti-discrimination, peace-seeking principles are, or at any rate can be interpreted as, the verbally elaborated projections of the liberal sense of guilt.
The guilt of the liberal causes him to feel obligated to try to do something about any and every social problem, to cure every social evil. This feeling, too, is non-rational: the liberal must try to cure the evil even if he has no knowledge of the suitable medicine or, for that matter, of the nature of the disease; he must do something about the social problem even when there is no objective reason to believe that what he does can solve the problem – when, in fact, it may well aggravate the problem instead of solving it.
I suspect Burnham himself would have been surprised at the degree to which such “social problems” have multiplied in the last half a century, and the pressure to do something about them has only increased in the meantime. As for the European refugees, consider the following corollaries of liberal guilt as developed in Suicide of the West:
(The liberal) will not feel uneasy, certainly not indignant, when, sitting in conference or conversation with citizens of countries other than his own – writers or scientists or aspiring politicians, perhaps – they rake his country and his civilization fore and aft with bitter words; he is as likely to join with them in the criticism as to protest it.
It follows that,
…the ideology of modern liberalism – its theory of human nature, its rationalism, its doctrines of free speech, democracy and equality – leads to a weakening of attachment to groups less inclusive than Mankind.
All modern liberals agree that government has a positive duty to make sure that the citizens have jobs, food, clothing, housing, education, medical care, security against sickness, unemployment and old age; and that these should be ever more abundantly provided. In fact, a government’s duty in these respects, if sufficient resources are at its disposition, is not only to its own citizens but to all humanity.
…under modern circumstances there is a multiplicity of interests besides those of our own nation and culture that must be taken into account, but an active internationalism in feeling as well as thought, for which “fellow citizens” tend to merge into “humanity,” sovereignty is judged an outmoded conception, my religion or no-religion appears as a parochial variant of the “universal ideas common to mankind,” and the “survival of mankind” becomes more crucial than the survival of my country and my civilization.
For Western civilization in the present condition of the world, the most important practical consequence of the guilt encysted in the liberal ideology and psyche is this: that the liberal, and the group, nation or civilization infected by liberal doctrine and values, are morally disarmed before those whom the liberal regards as less well off than himself.
The inevitable implication of the above is that the borders of the United States and Europe must become meaningless in an age of liberal hegemony, as, indeed, they have. In 1964 Burnham was not without hope that the disease was curable. Otherwise, of course, he would never have written Suicide of the West. He concluded,
But of course the final collapse of the West is not yet inevitable; the report of its death would be premature. If a decisive change comes, if the contraction of the past fifty years should cease and be reversed, then the ideology of liberalism, deprived of its primary function, will fade away, like those feverish dreams of the ill man who, passing the crisis of his disease, finds he is not dying after all. There are a few small signs, here and there, that liberalism may already have started fading. Perhaps this book is one of them.
No, liberalism hasn’t faded. The infection has only become more acute. At best one might say that there are now a few more people in the West who are aware of the disease. I am not optimistic about the future of Western civilization, but I am not foolhardy enough to predict historical outcomes. Perhaps the fever will break, and we will recover, and perhaps not. Perhaps there will be a violent crisis tomorrow, or perhaps the process of dissolution will drag itself out for centuries. Objectively speaking, there is no “good” outcome and no “bad” outcome. However, in the same vein, there is no objective reason why we must refrain from fighting for the survival of our civilization, our culture, or even the ethnic group to which we belong.
As for the liberals, perhaps they should consider why all the fine moral emotions they are so proud to wear on their sleeves exist to begin with. I doubt that the reason has anything to do with suicide.
By all means, read the book.