Posted on October 4th, 2015 No comments
There’s a reason that the Blank Slaters clung so bitterly to their absurd orthodoxy for so many years. If there is such a thing as human nature, then all the grandiose utopias they concocted for us over the years, from Communism on down, would vanish like so many mirages. That orthodoxy collapsed when a man named Robert Ardrey made a laughing stock of the “men of science.” In this enlightened age, one seldom finds an old-school, hard-core Blank Slater outside of the darkest, most obscure rat holes of academia. Even PBS and Scientific American have thrown in the towel. Still, one occasionally runs across “makeovers” of the old orthodoxy, in the guise of what one might call Blank Slate Lite.
I recently discovered just such an artifact in the pages of Ethics magazine, which functions after a fashion as an asylum for “experts in ethics” who still cling to the illusion that they have anything relevant to say. Entitled The Limits of Evolutionary Explanations of Morality and Their Implications for Moral Progress, it was written by Prof. Allen Buchanan of Duke and King’s College London, and Asst. Prof. Russell Powell of Boston University. Unfortunately, it’s behind a paywall, and is quite long, but if you’re the adventurous type you might be able to access it at a local university library. In any case, the short version of the paper might be summarized as follows:
Conservatives have traditionally claimed that “human nature places severe limitations on social and moral reform,” but have “offered little in the way of scientific evidence to support this claim.” Now, however, a later breed of conservatives, known as “evoconservatives,” has “attempted to fill this empirical gap in the conservative argument by appealing to the prevailing evolutionary explanation of morality to show that it is unrealistic to think that cosmopolitan and other ‘inclusivist’ moral ideals can meaningfully be realized.” However, while evolved psychology can’t be discounted in moral theory, and there is such a thing as human nature, it is so plastic and malleable that it doesn’t stand in the way of moral progress.
This, at least, is the argument until one gets to the “Conclusion” section at the end. Then, as if frightened by their own hubris, the authors make noises in a quite contradictory direction, writing, for example,
…we acknowledge that evolved psychological capacities, interacting with particular social and institutional environments, can pose serious obstacles to using our rationality in ways that result in more inclusive moralities. For example, environments that mirror conditions of the EEA (environment of evolutionary adaptation, i.e., the environment in which moral behavioral predispositions presumably evolved, ed.)—such as those characterized by great physical insecurity, high parasite threat, severe intergroup competition for resources, and a lack of institutions for peaceful, mutually beneficial cooperation—will tend to be very unfriendly to the development of inclusivist morality.
However, they conclude triumphantly with the following:
At the same time, however, we have offered compelling reasons, both theoretical and empirical, to believe that human morality is only weakly constrained by human evolutionary history, leaving the potential for substantial moral progress wide open. Our point is not that human beings have slipped the “leash” of evolution, but rather that the leash is far longer than evoconservatives and even many evolutionary psychologists have acknowledged—and no one is in a position at present to know just how elastic it will turn out to be.
Students of the Blank Slate orthodoxy will see that all the main shibboleths are still there, if in somewhat attenuated form. The Blank Slate itself is replaced by a “long leash.” The “genetic determinist” strawman of the Blank Slaters is replaced by “evoconservatives.” These evoconservatives are no longer “fascists and racists,” but merely a nuisance standing in the way of “moral progress.” The overriding goal is no longer anything like the Marxist paradise on earth, but the somewhat less inspiring continued “development of inclusivist morality.”
Readers of this blog should immediately notice the unwarranted assumption that there actually is such a thing as “moral progress.” If there is, there must be a goal towards which morality is progressing. Natural selection occurs without any such goal or purpose. It follows that the authors must believe in some “mysterious, transcendental” origin other than natural evolution to account for this progress. However, they insist they don’t believe in any such “mysterious, transcendental” source. How, then, do they account for the existence of this “thing” they call “moral progress”? What the authors really mean by “moral progress” is “the way we and other good liberals want things.”
By “inclusivist” moralities, the authors mean versions that can be expanded to include very large subsets of the human population that are neither kin to the bearers of that morality nor members of any identifiable group that is likely to reciprocate their good deeds. Presumably the ultimate goal is to expand these subsets to “include” all mankind. The “evoconservatives,” we are told, deny the possibility of such “inclusivism” in spite of the fact that one can cite many obvious examples to the contrary. At this point, one begins to wonder who these obtuse evoconservatives really are. The authors are quite coy about identifying them. The footnote following their first mention merely points to a blurb about what the authors will discuss later in the text. No names are named. Much later in the text Jonathan Haidt is finally identified as one of the evoconservatives. As the authors put it,
Leading psychologist Jonathan Haidt, who has stressed the moral psychological significance of in-group loyalty, expresses a related view: ‘It would be nice to believe that we humans were designed to love everyone unconditionally. Nice, but rather unlikely from an evolutionary perspective. Parochial love—love within groups—amplified by similarity, a sense of shared fate, and the suppression of free riders, may be the most we can accomplish.’
In fact, as anyone who has actually read Haidt is aware, he neither believes that “inclusivist” moralities as defined by the authors are impossible, nor does this quote imply anything of the sort. A genuine conservative would doubtless classify Haidt as a liberal, but he has defended, or at least tried to explain, conservative moralities. Apparently that is sufficient to cast him into the outer darkness as an “evoconservative.”
The authors also point the finger at Larry Arnhart. Arnhart is neither a geneticist, nor an evolutionary biologist, nor an evolutionary psychologist, but a political scientist who apparently subscribes to some version of the naturalistic fallacy. Nowhere is it demonstrated that he actually believes that the inclusivist versions of morality favored by the authors are impossible. In a word, the few slim references to individuals who are supposed to fit the description of the evoconservative strawman concocted by the authors actually do nothing of the sort. Yet in spite of the fact that the authors can’t actually name anyone who explicitly embraces their version of evoconservatism, they describe the existence of “inclusivist morality” as a “major flaw in evoconservative arguments.”
A bit later, the authors appear to drop their evoconservative strawman, and expand their field of fire to include anyone who claims that “inclusivist morality” could have resulted from natural selection. For example, quoting from the article:
The key point is that none of these inclusivist features of contemporary morality are plausibly explained in standard selectionist terms, that is, as adaptations or predictable expressions of adaptive features that arose in the environment of evolutionary adaptation (EEA).
Here, “evoconservatives” have been replaced by “standard selectionists.” Invariably, the authors walk back such seemingly undiluted statements of Blank Slate ideology with assurances that no one believes more firmly than they in the evolutionary roots of morality. That, of course, raises the question of how “these inclusivist features,” if they are not explainable in “standard selectionist terms,” are plausibly explained in “non-standard selectionist terms,” and who these “non-standard selectionists” actually are. Apparently the only alternative is that the “inclusivist features” have a “transcendental” explanation, not further elaborated by the authors. This conclusion is not as far-fetched as it seems. Interestingly enough, the authors’ work is partially funded by the Templeton Foundation, an accommodationist outfit with the ostensible goal of proving that religion and science are not mutually exclusive.
In fact, I know of not a single scientist whose specialty is germane to the subject of human morality who would dispute the existence of inclusive moralities. The authors limit themselves to statements to the effect that the work of such and such a person “suggests” that they don’t believe in inclusive moralities, or that the work of some other person “implies” that they don’t believe such moralities are stable. Wouldn’t it be more reasonable to simply go and ask these people what they actually believe regarding these matters, instead of putting words in their mouths?
Left out of all these glowing descriptions of inclusive moralities is the fact that not a single one of them exists without an outgroup. That fact is demonstrated by the authors themselves, whose outgroup obviously includes those they identify as “evoconservatives.” One might also point out that those who have “inclusive” ingroups commonly have “inclusive” outgroups as well, and liberals are commonly found among the most violent outgroup haters on the planet. To confirm this, one need only look at the comments at the websites of Daily Kos, or Talking Points Memo, or the Nation, or any other familiar liberal watering hole.
While I’m somewhat dubious about all the authors’ loose talk about “moral progress,” I think we can at least identify some real progress towards getting at the truth in their version of Blank Slate Lite. After all, it’s a far cry from the old-school version. Throughout the article the authors question the ability of natural selection in the environment in which moral behavior presumably evolved in early humans to account for this or that feature of their observed “inclusive morality.” As noted above, however, as often as they do it, they are effusive in assuring the reader that by no means do they wish to imply that they find any fault whatsoever with innate theories of human morality. In the end, what more can one ask than the ability to continue seeking the truth about human moral behavior in every relevant area of science without fear of being denounced and intimidated as guilty of one type of villainy or another? That ability seems more assured if the existence of innate behavior is at least admitted, and is therefore unlikely to be criminalized as it was in the heyday of the Blank Slate. In that respect, Blank Slate Lite really does represent progress.
Of course, there remains the question of why so many of us still take seriously the authors’ fantasies about “moral progress” more than a century after Westermarck pointed out the absurdity of truth claims about morality. I suspect the answer lies in the fact that ending the charade would reduce to gibberish all the pontifications of the “experts in morality” catered to by learned journals like Ethics. Experts don’t like to be confronted with the truth that their painstakingly acquired expertise is irrelevant. Admitting it would make it a great deal harder to secure grants from the Templeton Foundation.
UPDATE: I failed to mention another intriguing paragraph in the paper that reads as follows:
The human capacity to reflect on and revise our conceptions of duty and moral standing can give us reasons here and now to expand our capacities for moral behavior by developing institutions that economize on sympathy and enhance our ability to take the interests of strangers into account. This same capacity may also give us reasons, in the not-too-distant future, to modify our evolved psychology through the employment of biomedical interventions that enable us to implement new norms that we develop as a result of the process of reflection. In both cases, the limits of our evolved motivational capacities do not translate into a comparable constraint on our capacity for moral action. The fact that we are not currently motivationally capable of acting on the considered moral norms we have come to endorse is not a reason to trim back those norms; it is a reason to enhance our motivational capacity, either through institutional or biomedical means, so that it matches the demands of our considered morality.
Note the wording about “biomedical interventions” and enhancing our “motivational capacity” through “biomedical means.” I’m not sure what to make of it, dear reader, but it appears that, one way or another, the authors intend to “get our minds right.”
Posted on October 2nd, 2015 3 comments
If you’re a regular reader of this blog, you know my take on morality. It is the manifestation of a subset of our suite of innate behavioral traits. The traits in question exist because they evolved. Absent those traits, morality as we know it would not exist. It follows that attempts to apply moral emotions in order to solve complex problems that arise in an environment that is radically different from the one in which the innate, “root causes” of morality evolved are irrational. That, however, is precisely how the Europeans are attempting to deal with an unprecedented flood of culturally and genetically alien refugees. The result is predictable – a classic morality inversion.
As Jonathan Haidt put it in his well-known paper, The Emotional Dog and its Rational Tail,

…moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached.
In other words, the “emotional dog” makes the judgment. Only after the judgment has been made does the “rational tail” begin “wagging the dog,” concocting good sounding “reasons” for the judgment. One can get a better idea of what’s really going on by tracking down the source of the moral emotions involved.
Let’s consider, then, what’s going on inside the “pro-refugee” brain. As in every other brain, the moral machinery distinguishes between ingroup and outgroup(s). In this case, these categories are perceived primarily in ideological terms. The typical pro-refugee individual is a liberal, as that rather slippery term is generally understood in the context of 21st-century western democracies. Such specimens will occasionally claim that they have expanded their ingroup to include “all mankind,” so that it is no longer possible for them to be “haters.” Nothing could be further from the truth. The outgroup have ye always with you. It comes with the human behavioral package.
If anything, the modern liberal hates more violently than any other subgroup. He commonly hates the people within his own culture who disagree with the shibboleths of his ideology. Those particular “others” almost always constitute at least a part of his outgroup. Outside of his own culture, ideology matters much less as a criterion of outgroup identification, as demonstrated, for example, by the odd affinity between many Western liberals and radical Islamists.
Beyond that, however, he is hardly immune from the more traditional forms of tribalism. For example, European liberals typically hate the United States. The intensity of that hatred tends to rise and fall over time, but can sometimes reach almost incredible levels. The most recent eruption occurred around the year 2000. Interestingly enough, one of the most spectacular examples occurred in Germany, the very country that now takes the cake for moralistic grandstanding in the matter of refugees. Der Spiegel, its number one news magazine, was certainly in the avant-garde of the orgasm of hatred. It was often difficult to find any news about Germany on the homepage of its website, so filled was it with furious, spittle-flinging rants about the imagined evils of “die Amerikaner.” However, virtually every other major German “news” outlet, whether it was nominally “liberal” or “conservative,” eventually joined the howling pack. The most vicious examples of anti-American hate were typically found in just those publications that are now quick to denounce German citizens who express concern about the overwhelming waves of refugees now pouring into the country as “haters.”
On the other hand, refugees, or at least those of the type now pouring into Europe, seldom turn up in any of these common outgroups of the modern liberal. They land squarely in his ingroup. Humans are generally inclined to help ingroup members who, like the refugees, appear to be in trouble. This is doubly true of the liberal, who piques himself on what he imagines to be his moral superiority. Furthermore, as the refugees can be portrayed as victims of colonialism and imperialism, one might say they are a “most favored subset” of the ingroup. Throw in a few pictures of drowned children, impoverished women begging for help, etc., and all the moral ingredients are there to render the liberal an impassioned defender of the masses of humanity drawing a bead on his country. Nothing gives him more self-righteous joy than imagining himself a “savior.” This explains the fact that liberals are eternally in the process of “saving” one group of unfortunates or another without ever getting around to accomplishing anything actually recognizable as salvation. All the pleasure is in the charade. We find the same phenomenon whether it’s a matter of “saving” the environment, “saving” the planet from global warming, or “saving” the poor. For the liberal, the pose is everything, and the reality nothing.
Which brings us back to the theme of this post. All the sublime moral emotions now at play in the “salvation” of the refugees bear an uncanny resemblance to many other instances of moral behavior as practiced by the modern liberal. They tend to favor an outcome that is the opposite of the one those same moral emotions accomplished at an earlier time, the outcome that led to their preservation by natural selection to begin with. In a word, as noted above, we are witnessing yet another classic morality inversion.
Why an inversion? At the most fundamental level, because it will lead to the diminution or elimination of the genes whose survival a similar response once favored. At the moment, the pro-refugee side is calling the shots. It controls the governments of all the major European states. All of them more or less fit the pattern described above, whether they are nominally “liberal” or “conservative.” Indeed, foremost among them is Germany’s “conservative” regime, which has positively invited a flood of alien refugees across its borders. Based on historical precedent, the outcome of all this altruism isn’t difficult to foresee. In terms of “culture” it will be a future of ethnic and religious strife, possibly leading to civil war. Genetically, it amounts to an attempt at ethnic suicide. I am well aware that these outcomes are disputed by those promoting the refugee inundation. However, I consider it pointless to argue about it. I am content to let history judge.
While we bide our time waiting for the train wreck to unfold, it may be of interest to examine some of the techniques being used to maintain this remarkable instance of moralistic play-acting. I take most of my examples from the German media, which includes some of the most avid refugee cheerleaders. Predictably, outgroup vilification is part of the mix. As noted above, anyone who objects to the flood of refugees is almost universally denounced as a “hater” by just those people who wear their own virulent hatreds on their sleeves while pretending they don’t exist. Of course, there are also the usual hackneyed violations of Godwin’s Law. For example, Jacob Augstein, leftwing stalwart for Der Spiegel, denounces them as “Browns” (i.e., brownshirts, Nazis) in a recent column. On the “positive” side, the “conservative” Frankfurter Allgemeine Zeitung optimistically suggests that the refugees will promote economic growth. According to another article in Der Spiegel, the eastern Europeans, who are not quite so refugee-friendly as the Germans, are “blowing their chance.” The ominous subheading reads,
Europe is shrinking. The demographic downtrend is particularly dramatic in the eastern part of the continent, where the population is literally dying out. In spite of that, Hungary, Poland and company are resisting immigration. They will regret it.
In other words, before turning out the lights and committing suicide, the eastern Europeans should make sure an alien culture is in place to take over their territories when they’re gone. Of course, this flies in the face of the impassioned rhetoric the liberals have been feeding us about the need to reduce the surplus population if we are to have an environmentally sustainable planet.
I note in passing that the European elites that are driving this process now seem to have taken a step back from the brink. They are having second thoughts. They realize that they don’t have their populations behind them, and that their defiance of popular opinion might eventually threaten their own power. As a result, the number of news articles about the refugees and their plight is a shadow of what it was only a few weeks ago. Mild reservations about refugee wowserism are starting to appear even in such gung-ho forums as Der Spiegel, where, as I write this, the lead article on their homepage is entitled “Now Things Are Getting Uncomfortable.” Ya think!? The subheading reads,
There is a change in tone in the refugee crisis. SPD (German Social Democratic Party) chief Gabriel warns about limits to Germany’s ability to absorb refugees. Minister of the Interior de Maizière deplores the misbehavior of many migrants. The pressure on Chancellor Merkel is increasing.
“Ought” the Europeans to alter their behavior? Is what they consider “good” really “evil?” Are they ignoring the real “goal” of natural selection? Certainly not, at least from an objective point of view. There is no objective criterion for determining what anyone “ought” to do, any more than there is an objective way to distinguish between things, such as good and evil, that have no objective existence. They are hardly failing to move towards the “goal” of natural selection, since that process has neither a purpose nor a goal. As you may have gathered, my own subjective whim is to oppose unlimited immigration. I have, however, not the slightest basis for declaring that anyone who doesn’t agree with me is “evil.” At best, I can try to explain my own whims.
I’m what you might call a moral compatibilist. I see myself sitting at the end of a chain of life spawned by genetic material that has evolved over a period of more than three billion years, surviving and reproducing over that incredible gulf of time via an almost infinite array of successive forms, culminating in the species to which I now belong. I consider the whole process, and the universe I live in, awesome and wonderful. Subjectively, it seems to me “good” to act in a way that is compatible with the natural processes that have given me life. It follows that, from my own, individual, subjective point of view, I “should” seek to preserve that life and pass it on into the indefinite future.
I have not the slightest basis for claiming that “my way” is better than the whimsical behavior of those I see around me exultantly pursuing their morality inversions. At best, I must limit myself to observing that “my way” seems more consistent.
Posted on September 26th, 2015 7 comments
The Guardian just published an article by Larissa MacFarquhar entitled, “Extreme altruism: should you care for strangers at the expense of your family?” The subheading reads as follows:
The world is full of needless suffering. How should each of us respond? Should we live as moral a life as possible, even giving away most of our earnings? A new movement argues that we are not doing enough to help those in need.
It’s a tribute to the power of the emotions responsible for what we call morality that, more than a century after Westermarck published The Origin and Development of the Moral Ideas, questions like the one in the title are still considered rational, and that a “moral life” is equated with “giving away most of our earnings.” Westermarck put it this way:
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivise the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion.
The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.
The article tells the tale of one Julia Wise, whom MacFarquhar refers to as a “do-gooder.” She doesn’t use the term in the usual pejorative sense, but defines a “do-gooder” as,
…a human character who arouses conflicting emotions. By “do-gooder” here I do not mean a part-time, normal do-gooder – someone who has a worthy job, or volunteers at a charity, and returns to an ordinary family life in the evenings. I mean a person who sets out to live as ethical a life as possible. I mean a person who is drawn to moral goodness for its own sake. I mean someone who commits himself wholly, beyond what seems reasonable. I mean the kind of do-gooder who makes people uneasy.
Julia is just such a person. MacFarquhar describes her as follows:
Julia believed that because each person was equally valuable, she was not entitled to care more for herself than for anyone else; she believed that she was therefore obliged to spend much of her life working for the benefit of others. That was the core of it; as she grew older, she worked out the implications of this principle in greater detail. In college, she thought she might want to work in development abroad somewhere, but then she realised that probably the most useful thing she could do was not to become a white aid worker telling people in other countries what to do, but, instead, to earn a salary in the US and give it to NGOs that could use it to pay for several local workers who knew what their countries needed better than she did. She reduced her expenses to the absolute minimum so she could give away 50% of what she earned. She felt that nearly every penny she spent on herself should have gone to someone else who needed it more. She gave to whichever charity seemed to her (after researching the matter) to relieve the most suffering for the least money.
Interestingly, Julia became an atheist at the age of eleven. In other words, she must have been quite intelligent by human standards. In spite of that, it apparently never occurred to her to question the objectivity of moral judgments. I’ve always found it surprising that so many religious believers who become atheists don’t reason a bit further and grasp the fact that they no longer have a legitimate basis for making moral judgments. They commonly consider themselves smarter than religious believers, and yet they cling to the illusion that the basis is still there, as solid as ever. Religious believers can usually detect the charade immediately, and notice with a chuckle that such atheists have just sawed off the branch they thought they were sitting on. Alas, the faithful are no less delusional than the infidels. Again quoting Westermarck,
To the verdict of a perfect intellect, that is, an intellect which knows everything existing, all would submit; but we can form no idea of a moral consciousness which could lay claim to a similar authority. If the believers in an all-good God, who has revealed his will to mankind, maintain that they in this revelation possess a perfect moral standard, and that, consequently, what is in accordance with such a standard must be objectively right, it may be asked what they mean by an “all-good” God. And in their attempt to answer this question, they would inevitably have to assume the objectivity they wanted to prove.
In any event, Julia’s case is a perfect example of why it is useful to understand what morality actually is, and why it exists. The truth was obvious enough to Darwin, and of course, to Westermarck and several other great thinkers who followed him. Morality is the manifestation of evolved behavioral traits. It exists because it enhanced the probability that the genetic material that gave rise to it would survive and replicate itself. Julia, however, lives in a world radically different from the world in which the evolution of morality took place. She is an extreme example of what can happen when environmental changes outpace the ability of natural selection to keep up. She suffers from an assortment of morality inversions. It’s as if she had decided to use her hands to cut her throat, or her legs to jump off a cliff. In short, she is a pathological do-gooder.
Several examples are mentioned in the article. In general, she believes that it is “good” to hand over money and other valuable resources that might have enhanced her own chances of genetic survival to genetically unrelated individuals, even though the chances that they will ever return the favor to her or her children are vanishingly small. She very nearly decides it would be “immoral” to have children because, according to the article,
Children would be the most expensive nonessential thing she could possibly possess, so by having children of her own she would be in effect killing other people’s children.
However, she manages to dodge this bullet by reasoning that she and her husband will be able to indoctrinate their child with their own pathological “values.” The decision to have a child becomes “good” as long as the parents are confident that they can control its environment sufficiently well to ensure that it will grow up as emotionally crippled as they are. Of course, such therapeutic generational brainwashing is unlikely to be a “good” long-term strategy for survival. MacFarquhar concludes her article with the question,
What would the world be like if everyone thought like a do-gooder? What if everyone believed that his family was no more important or valuable than anyone else’s? What if everyone decided that spontaneity or self-expression or certain kinds of beauty or certain kinds of freedom were less vital, or less urgent, than relieving other people’s pain?
Assuming the environment remains more or less the same, the answer is simple enough. The Julias of the world would die out. In the end, that’s really the only answer that matters. Is Julia therefore “wrong,” or even “immoral” for clinging to her pathologically altruistic lifestyle? Of course not, because the question implies the objective existence of things – Good and Evil – that are actually imaginary. One cannot logically claim that either using your hands to cut your throat, or using your legs to jump off a cliff, is objectively immoral. One must be content with the observation that such actions seem a bit counter-intuitive.
Posted on September 19th, 2015 No comments
PBS just aired what’s billed as a NOVA/National Geographic Special entitled Dawn of Humanity on the stunning discovery of a trove of remains of an early human species dubbed Homo naledi in a South African cave. According to the blurb on its website,
NOVA and National Geographic present exclusive access to a unique discovery of ancient remains. Located in an almost inaccessible chamber deep in a South African cave, the site required recruiting a special team of experts slender enough to wriggle down a vertical, pitch-dark, seven-inch-wide passage. Most fossil discoveries of human relatives consist of just a handful of bones. But down in this hidden chamber, the team uncovered an unprecedented trove—so far, over 1,500 bones—with the potential to rewrite the story of our origins.
There’s nothing surprising about the fact that a story about Homo naledi appeared on NOVA. What’s really stunning, however, is its content. To all appearances, it was supplied by an ancient Blank Slater who was frozen like a popsicle some time back in the early 70’s, and then had the good fortune to be thawed out like Rip van Winkle just in time to write the script for Dawn of Humanity. One can certainly quibble about his take on the significance of Homo naledi, but one thing is certain. He has favored us with a remarkable piece of historical source material.
It all starts innocently enough. We are introduced to Lee Berger, who headed the team that discovered Homo naledi. There are scenes of him strolling across the South African landscape with his two dogs, poking into limestone caverns of the sort where his nine-year-old son discovered the first fossil remains of Australopithecus sediba, another creature that, like Homo naledi, had a small, ape-like brain and walked upright on two feet. He points to the places where the remains of several other individuals of that species were later found. Things continue in that sedate vein until suddenly, at minute 35:15, we are shaken out of our pleasant rut by the announcement that the abundance of the sediba remains,
…might help explain the Australopith’s transition into our genus, Homo. They might also prove or disprove a highly influential theory about the dawn of humanity. A theory inspired by the very first discovery of an Australopith fossil.
We are informed that the discovery referred to happened in 1924. The place was South Africa, and the discoverer was Prof. Raymond Dart of the University of the Witwatersrand in Johannesburg. Miners had sent Dart a chunk of limestone in which was embedded the skull of a hominid child, different and more primitive than any previously discovered. He named the new species Australopithecus africanus. At that point, around minute 36:45, we get our first hint that PBS is about to administer a strong dose of propaganda. Quoting from the script,
Darwin and (Thomas Henry) Huxley predicted that our origins would be in Africa based on comparative anatomy. You know, they looked at the skeletons of chimps and gorillas and they looked at ours and they went, “Well they’re so close to us, and they’re more close than anything else, so it must have been in Africa.” And then the sort of second generation of evolutionary biologists shied away from that. They started to find fossils in Europe. They started to find fossils in Asia. And of course that tied in very nicely with the sort of racist, imperialistic thoughts of the day. They couldn’t abide the thought of it being in Africa.
I rather suspect that the reticence of this “second generation of evolutionary biologists” to immediately accept Dart’s “out of Africa” theory was due to the fact that they had based their life’s work on developing theories about the emergence of early man in the only places where fossil evidence had actually been found up to that time. It’s really not too hard to imagine that they may have been unenthusiastic about seeing all that work washed down the drain. Of course, we’ve long been familiar with the tendency of the “progressive” inmates at PBS to instantly seize on such understandable regrets and transmogrify them into something as sinister and criminal as “racist, imperialistic thoughts.” That’s old hat. What’s really surprising is that, in what follows, we are treated to a long-winded denunciation of the “Killer Ape Theory.” At 40:45 we learn,
Raymond Dart was building a theory about how the Australopiths, our apelike ancestors, became human. His ideas about the dawn of humanity were the touchstone for thinking about our origins for generations. In the 1940’s, more examples of Australopithecus began to be found, and a key site not only had fragments of Australopithecus, but also the bones of many other fossil animals. And Dart noted that these bones were broken in a special way. Dart became convinced they were weapons made by our primitive ancestors. Was this the key to what first made us human?
At this point, PBS has passed well beyond prissy comments about racism and imperialism to the full blown distortion of history. In the first place, Dart’s thinking never became a “touchstone” for anything, and certainly not for generations. He never even published anything about hunting behavior in early hominins until 1949, and what you might call his “seminal” paper on the subject, The Predatory Transition from Ape to Man, didn’t appear until 1953. Both papers were published in obscure venues, and both were largely ignored. Dart never claimed that bones that “were broken in a special way” were weapons. Rather, he claimed that the double-headed humerus, or upper foreleg bone, of a common type of antelope had been the weapon, and the bones that “were broken in a special way” were actually skulls with indentations that appeared to be the result of the use of that bone as a club.
In any case, next we learn that Dart had been a medic in World War I, and,
…had seen at first hand the barbarity humans are capable of. It made sense to him that the origins of humanity were steeped in blood. Raymond Dart’s experience in the World War may have colored his interpretation of what these bones and teeth meant. You know it gave him a view of sort of the dark side of humanity and the violence of humanity, and he came up with this idea that Australopithecus had figured out that bones and teeth were hard, and could be used as weapons to kill other animals. The sort of “Killer Ape Theory” of early humans. Dart believed that the more aggressive and adventurous of our ape-like ancestors abandoned their forest environment and moved into savannahs. There, they became hunters and predators. His theory, that this violent transformation gave rise to humanity, soon found an audience far beyond the small world of paleoanthropology.
In fact, there is no evidence that all this psychobabble about World War I is anything but that. Dart’s claims were based on compelling statistical evidence. In the first place, a large and statistically anomalous number of the humerus bones proposed as weapons had been found in association with the africanus remains. Damage to the skulls of other animals supposedly inflicted with these weapons was not randomly located, but occurred far more often in locations where one would expect it to occur if it had been inflicted with a bludgeon or club. Dart’s interpretation of these facts has often been challenged, most prominently by C. K. Brain in his The Hunters or the Hunted?, published in 1981. Brain noted that twin puncture wounds found on an Australopith skull may well have been left by a leopard. Sure enough, the skull in question is featured on the program, and the puncture marks are described as if they were incontrovertible proof that Dart’s apes had never hunted. As it happens, however, Brain is a careful scientist, and never maintained anything of the sort. Indeed, in The Hunters or the Hunted? he describes in detail two important objections to the leopard theory, and while he certainly challenged Dart’s theories, he never suggested that they had been incontrovertibly disproved. Predictably, these facts are left unmentioned in the program.
At this point I started wondering why on earth PBS would start laying on such thick dollops of propaganda to begin with. Possible hunting in A. africanus wasn’t really germane to the behavior of a newly discovered species like Homo naledi, the apparent theme of the show, nor to that of Australopithecus sediba, for that matter. I wasn’t left hanging for long. At that point, the ancient Blank Slate Rip van Winkle the program had been channeling all along tipped his hand. After all these years, he had hardly forgotten the shame and embarrassment he and his fellow “men of science” had experienced at the hand of a certain playwright by the name of Robert Ardrey! Suddenly, at about the 42:50 point, the screen is filled with Ardrey’s image. Then we see in quick succession images of two Life magazine covers and one of Penthouse, all three of which prominently announce articles he had written. The narrative continues,
In the 1950’s there was a drama critic and playwright named Robert Ardrey, who became very interested in human origins, and he went to Africa and spoke with Raymond Dart. And Robert Ardrey, being a dramatist, could write like anything, and he wrote this amazing book published in 1961 called African Genesis (dramatic drumbeat). African Genesis became a pop-science publishing sensation of the early 1960s. Ardrey’s ideas, building on those of Raymond Dart, helped frame public debate about the dawn of humanity for the next 20 years. (Potts cuts in) The very first sentence in that book; I remember it because I read it as a teenager and was enthralled by it, “Not in innocence, and not in Asia, was mankind born.” And in that one sentence he encapsulated Raymond Dart’s ideas, that it was an African genesis, and that where we came from was not from an innocent creature (dramatic drumroll), but from the most violent of killer apes.
At this point we’re treated to one of the favorite gimmicks of the Blank Slaters of yore. The program segues to scenes from Stanley Kubrick’s 2001: A Space Odyssey. We are assured that Kubrick was influenced by Ardrey, and then shown the familiar opening scene, with an ape-man smashing everything in sight with a bone wielded as a club. The only problem is that Ardrey didn’t write the script for the movie. We find the same trick in that invaluable little piece of historical source material, Man and Aggression, a collection of Blank Slater rants published in 1968 and edited by Ashley Montagu. Most of the attacks are directed at Ardrey and Konrad Lorenz, but William Golding, author of Lord of the Flies, is thrown in for comic effect in the same fashion as Kubrick.
Next, at about 45:45, there is a schtick about how tartar from sediba’s teeth was examined, revealing that it contained phytoliths, microscopic particles of silica that are found in some plant tissues. At 48:15 the narrative continues,
Here, at last, is evidence that will help support or disprove Dart’s theory… The tooth evidence from sediba indicates a diet very similar to today’s chimpanzees. While they may have eaten some meat, there’s little to back up Raymond Dart’s theory that they were killer apes.
Here one can but roll one’s eyes. The plant evidence in sediba’s teeth hardly indicates that its consumption of meat was the same as, not to mention greater or less than, that of chimpanzees. Here, too, we find revealed the remarkably anachronistic nature of this whole production. Left unmentioned is the fact that chimpanzees are, after all, killer apes, too. They organize hunting parties with the intention of killing and eating other species, and they also carry out organized attacks on other chimpanzees, often killing them in the process as well. None of this is mentioned in the program. Indeed, when Jane Goodall first observed and reported the behavior referred to she was furiously denounced and subjected to incredibly demeaning ad hominem attacks by the Blank Slaters. It’s as if none of this ever happened, and the program is frozen in time back around 1975. The rest consists mainly of pleasantries about the recovery of the Homo naledi remains.
In reality, the “killer ape theory” that we have just seen dusted off and trotted out for our benefit is largely a Blank Slater propaganda myth. Modern apes kill, and when they kill they are certainly violent. They can, therefore, be accurately described as violent killer apes. The “killer ape” of the Blank Slaters, however, is a nightmare figment of their imagination – a furious, violent creature constantly attacking everything around it, as so gaudily portrayed in Stanley Kubrick’s film. Nothing in any of Ardrey’s books even comes close to a description of such psychopathic B movie monsters.
The very magazine covers mentioned above, shown as the narrator lays on the propaganda about killer apes, are revealing in themselves. I happen to have copies of all three of them, and none of the articles by Ardrey they contain has the least thing to say about the “killer ape theory.” Instead, they all deal in one way or another with the real theme of all Ardrey’s work; the existence of innate human nature. And that, I strongly suspect, is the real reason the program even mentions Ardrey.
All appearances to the contrary in Dawn of Humanity, the debate about the “hunting hypothesis” is now over for all practical purposes. It has been decided in favor of Ardrey. Clear marks of butchering have been found on bones dated to more than 3 million years before the present. It has been suggested that the bones were scavenged from the kills of other predators, but the idea that it never occurred to early hominins to hunt between that time and half a million to a million years before the present, a period during which early man clearly began using stone-tipped and fire-hardened hunting spears, is nonsense. It is doubly nonsense in view of the observed hunting behavior of chimpanzees. Even the impeccably politically correct Scientific American admitted as much in an article entitled Rise of the Human Predator, which appeared in the April 2014 issue. More remarkable still, in a PBS series entitled Becoming Human that aired in 2009, we were informed that,
Homo erectus probably hunted with close-quarters weapons, with spears that were thrown at animals from a short distance, clubs, thrown rocks, weapons like that. They weren’t using long distance projectile weapons that we know of.
The Homo erectus hunt was simple but effective. It fed not just their larger brains, but the growing complexity of that early human society.
Why, then, this grotesque anachronism, this latter-day program frozen in time in the early 1970’s? As I mentioned earlier, the Blank Slaters have forgotten nothing, and forgiven nothing. They know that the reason for Ardrey’s enormous influence wasn’t the “killer ape theory.” Rather, the constant theme of all Ardrey’s work was his insistence on the existence of innate human nature. Virtually all of the “men of science” in the behavioral sciences at the time his books began appearing, at least in the United States, firmly supported the Blank Slate orthodoxy, insisted that virtually all human behavior was a result of learning and culture, and denied the existence of any such thing as innate behavioral traits in human beings. Ardrey was right, and they were all dead wrong. A “mere playwright” had shamed them and exposed them for the charlatans they were.
Today books and articles about innate human behavior, and its analogs in other animals, roll off the presses as if the subject had never been the least bit controversial. The Blank Slate orthodoxy has been smashed, and the one man whose writings were far and away the most influential weapon in smashing it was Robert Ardrey. As for the “men of science,” they are engaged in a game of bowdlerizing history to hide this inconvenient truth. The usual tactic is to ignore Ardrey, elevating some pretender to the role of “slayer of the Blank Slate.” If he is mentioned at all, it is only to briefly note, after the fashion of Steven Pinker, that he was “totally and utterly wrong” based on some alleged inaccuracy in one of his books that had nothing to do with the overall theme. That’s why I said at the beginning of this post that artifacts like Dawn of Humanity are valuable because of their historical interest. For such remarkable anachronisms to even appear, someone has to be seriously out of step with the official line. It has to be someone who knows just how significant and influential Ardrey really was, a fact demonstrated by the very magazine covers that appear on the program. Insignificant nobodies weren’t invited to write articles for Life magazine in the late 60’s and early 70’s, not to mention Penthouse, and pieces by Ardrey can be found in many other familiar magazines of the day. Furthermore, that “somebody” has to be so bitter about Ardrey’s demolition of his precious Blank Slate dogmas that his hatred boils to the surface, revealing itself in such remarkable productions as the one described here. When that happens we occasionally learn something about the Blank Slate debacle that the “men of science” would prefer to leave swept under the rug. A little truth manages to leak out around the edges. This time the truth happened to touch on the real historical role of a man named Robert Ardrey.
Posted on September 18th, 2015 1 comment
The alternate reality fallacy is ubiquitous. Typically, it involves the existence of a deity, and goes something like this: “God must exist because otherwise there would be no absolute good, no absolute evil, no unquestionable rights, life would have no purpose, life would have no meaning,” and so on and so forth. In other words, one must only demonstrate that a God is necessary. If so, he will automatically pop into existence. The video of a talk by Christian apologist Ravi Zacharias included below is provided as an illustrative data point for the reader.
The talk, entitled, “The End of Reason: A Response to the New Atheists,” was Zacharias’ contribution to the 2012 Contending with Christianity’s Critics Conference in Dallas. I ran across it at Jerry Coyne’s Why Evolution is True website in the context of a discussion of rights. We find out where Zacharias is coming from at minute 4:15 in the talk when he informs us that the ideas,
…that steadied this part of the world, rooted in the notion of the ineradicable difference between good and evil, facts on which we built our legal system, our notions of justice, the very value of human life, how intrinsic worth was given to every human being,
all have a Biblical mooring. Elaborating on this theme, he quotes Chesterton to the effect that “we are standing with our feet firmly planted in mid-air.” We have,
…no grounding anymore to define so many essential values which we assumed for many years.
Here Zacharias is actually stating a simple truth that has eluded many atheists. Christianity and other religions do, indeed, provide some grounding for such things as objective rights, objective good, and objective evil. After all, it’s not hard to accept the reality of these things if the alternative is to burn in hell forever. The problem is that the “grounding” is an illusion. The legions of atheists who believe in these things, however, actually are “standing with their feet firmly planted in mid-air.” They have dispensed even with the illusion, sawing off the limb they were sitting on, and yet they counterintuitively persist in lecturing others about the nature of these chimeras as they float about in the vacuum, to the point of becoming quite furious if anyone dares to disagree with them. Zacharias’ problem, on the other hand, isn’t that he doesn’t bother to provide a grounding. His problem is his apparent belief in the non sequitur that, if he can supply a grounding, then that grounding must necessarily be real.
Touching on this disconcerting tendency of many atheists to hurl down anathemas on those they consider morally impure, despite lacking any coherent justification for the novel values they concoct on the fly, Zacharias remarks at 5:45 in the video,
The sacred meaning of marriage (and others) have been desacralized, and the only one who’s considered obnoxious is the one who wants to posit the sacredness of these issues.
Here, again, I must agree with him. Assuming he’s alluding to the issue of gay marriage, it makes no sense to simply dismiss anyone who objects to it as a bigot and a “hater.” That claim is based on the obviously false assumption that no one actually takes their religious beliefs seriously. Unfortunately, they do, and there is ample justification in the Bible, not to mention the Quran, for the conclusion that gay marriage is immoral. Marriage has a legal definition, but it is also a religious sacrament. There is no rational basis for the claim that anyone who objects to gay marriage is objectively immoral. Support for gay marriage represents, not a championing of objective good, but the statement of a cultural preference. The problem with the faithful isn’t that they are all haters and bigots. The problem is that they construct their categories of moral good and evil based on an illusion.
Beginning at about 6:45 in his talk, Zacharias continues with the claim that we are passing through a cultural revolution, which he defines as a,
decisive break with the shared meanings of the past, particularly those which relate to the deepest questions of the nature and purpose of life.
noting that culture is,
an effort to provide a coherent set of answers to the existential questions that confront all human beings in the passage of their lives.
In his opinion, it can be defined in three different ways. First, there are theonomous cultures. As he puts it,
These are based on the belief that God has put his law into our hearts, so that we act intuitively from that kind of reasoning. Divine imperatives are implanted in the heart of every human being.
Christianity is, according to Zacharias, a theonomous belief. Next, there are heteronomous cultures, which derive their laws from some external source. In such cultures, we are “dictated to from the outside.” He cites Marxism as a heteronomous world view. More to the point, he claims that Islam also belongs in that category. Apparently we are to believe that this “cultural” difference supplies us with a sharp distinction between the two religions. Here we discover that Zacharias’ zeal for his new faith (he was raised a Hindu) has outstripped his theological expertise. Fully theonomous versions of Christianity really only came into their own among Christian divines of the 18th century. The notion, supported by the likes of Francis Hutcheson and the Earl of Shaftesbury, that “God has put his law into our hearts,” was furiously denounced by other theologians as not only wrong, but incompatible with Christianity. John Locke was one of the more prominent Christian thinkers among the many who denied that “divine imperatives are implanted in the heart of every human being.”
But I digress. According to Zacharias, the final element of the triad is autonomous culture, or “self-law,” in which everyone is a law unto him or herself. He notes that America is commonly supposed to be such a culture. However, at about the 11:00 minute mark he observes that,
…if I assert sacred values, suddenly a heteronomous culture takes over, and tells me I have no right to believe that. This amounts to a “bait and switch.” That’s the new world view under which the word “tolerance” really operates.
This regrettable state of affairs is the result of yet another triad, in the form of the three philosophical evils which Zacharias identifies as secularization, pluralism, and privatization. They are the defining characteristics of the modern cultural revolution. The first supposedly results in an ideology without shame, the second in one without reason, and the third in one without meaning. Together, they result in an existence without purpose.
One might, of course, quibble with some of the underlying assumptions of Zacharias’ world view. One might argue, for example, that the results of Christian belief have not been entirely benign, or that the secular societies of Europe have not collapsed into a state of moral anarchy. That, however, is really beside the point. Let us assume, for the sake of argument, that everything Zacharias says about the baleful effects of the absence of Christian belief is true. It still leaves the question, “So what?”
Baleful effects do not spawn alternate realities. If the doctrines of Christianity are false, then the illusion that they supply meaning, or purpose, or a grounding for morality will not transmute them into the truth. I personally consider the probability that they are true to be vanishingly small. I do not propose to believe in lies, whether their influence is portrayed as benign or not. The illusion of meaning and purpose based on a belief in nonsense is a paltry substitute for the real thing. Delusional beliefs will not magically become true, even if those beliefs result in an earthly paradise. As noted above, the idea that they will is what I refer to in my title as the alternate reality fallacy.
In the final part of his talk, Zacharias describes his own conversion to Christianity, noting that it supplied what was missing in his life. In his words, “Without God, reason is dead, hope is dead, morality is dead, and meaning is gone, but in Christ we recover all these.” To this I can but reply that the man suffers from a serious lack of imagination. We are wildly improbable creatures sitting at the end of an unbroken chain of life that has existed for upwards of three billion years. We live in a spectacular universe that cannot but fill one with wonder. Under the circumstances, is it really impossible to relish life, and to discover a reason for cherishing and preserving it, without resort to imaginary super beings? Instead of embracing the awe-inspiring reality of the world as it is, does it really make sense to supply the illusion of “meaning” and “purpose” by embracing the shabby unreality of religious dogmas? My personal and admittedly emotional reaction to such a choice is that it is sadly paltry and abject. The fact that so many of my fellow humans have made that choice strikes me, not as cause for rejoicing, but for shame.
Posted on September 10th, 2015 18 comments
US has as much moral duty to accept Syrian refugees as Europe. If not more.
It’s too bad Socrates isn’t still around to “learn” the nature of this “moral duty” from Dawkins the same way he did from Euthyphro. I’m sure the resulting dialog would have been most amusing.
Where on earth does an atheist like Dawkins get the idea that there is such a thing as moral duty? I doubt that he has even thought about it. After all, if moral duty is not just a subjective figment of his imagination and is capable of acquiring the legitimacy to apply not only to himself, but to the entire population of the United States as well, it must somehow exist as an entity in itself. How else could it acquire that legitimacy? There is no logical justification for the claim that mere subjective artifacts of the consciousness of Richard Dawkins, or any other human individual for that matter, are born automatically equipped with the right to dictate “oughts” to other individuals. They cannot possibly acquire the necessary legitimacy simply by virtue of the fact that the physical processes in the brain responsible for their existence have occurred. In what form, then, does “moral duty” exist as an independent thing-in-itself? To claim that “moral duty” is not a thing, or an object, is tantamount to admitting that it doesn’t exist. In what other form can it possibly manifest itself? As a spirit? If that is Dawkins’ claim, then he is every bit as religious as the most delusional speaker in tongues. As dark matter, perhaps? If so, then Dawkins must know more about it than the world’s best physicists.
We’re not talking about a deep philosophical issue here. I really can’t understand why the question doesn’t occur immediately to anyone who claims to be an atheist. (Of course, it should occur to religious believers as well, as noted by Socrates well over 2000 years ago. However, the response that they have a “moral duty” because they don’t want to burn in hell for quintillions of years is at least worth considering.) In any case, the question certainly occurred to me shortly after I became an atheist at the age of 12. Then, as now, the world was infested with what are commonly referred to today as Social Justice Warriors. Then, as now, they were in a constant state of outrage over one thing or another. And then, as now, they expected the rest of the world to take their tantrums of virtuous indignation seriously. Is it really irrational to pose the simple question, “Why?” I asked myself that question, and quickly came to the conclusion that these people are charlatans.
The question remains as relevant today as it was then, whether one accepts Darwinian explanations for the origin of morality or not. However, for atheists who have some respect for the methods of science, I would claim that natural selection is at once the most logical and the most parsimonious explanation for the existence of morality. It is the root cause from which spring all its gaudy and multifarious guises. If that is the case, then one can only speak of morality in scientific terms as a manifestation of evolved behavioral predispositions. As such, there is no possible way for it to acquire objective legitimacy. In other words, the claim that all Americans, or any other subset of the human population, have a genuine “moral duty” of any kind is a mirage. If anything, this would appear to be doubly true in the case claimed by Dawkins. It is yet another instance of what I have previously referred to as a “morality inversion.” “Morality” is invoked as the reason for doing things that accomplish the opposite of that which accounts for the very existence of morality to begin with.
What? You don’t agree with me? Well, if “moral duties” are not made of anything, then they don’t exist. If they do exist, they must be objects of some kind; they must be made of something. By all means, go out and capture a free-range “moral duty,” and prove me wrong. Show it to me! I hope it’s green. That’s my favorite color.
Posted on September 3rd, 2015 No comments
Think of a pile of bowling balls in a deep well. They don’t fly out because the force of gravity holds them in. If you roll an extra bowling ball to the edge of the well and let it drop in, energy is released when it hits the pile at the bottom. Atomic nuclei can be compared to the well. The neutrons and protons that make it up are the bowling balls, and the “gravity” is the far more powerful “strong force.” Roll some of these “bowling balls” into the well and energy will be released, just as in a real well. The process is called nuclear fusion, and it’s the source of energy that powers the sun. We’ve been trying to produce energy by repeating the process here on earth for a good many years now, but were only “lucky” enough to succeed in the case of thermonuclear weapons. We’ve been stymied in our efforts to harness fusion energy in less destructive forms. The problem is the Coulomb, or electrostatic force. It’s what causes unlike charges to attract and like charges to repel in the physics experiments you did in high school. It’s much weaker than the strong force that holds the neutrons and protons in an atomic nucleus together, but the strong force has a very short range. The trick is to get within that range. All atomic nuclei contain protons, and protons are positively charged. They repel each other, resisting our efforts to push them up to the edge of the “well,” where the strong force will finally overwhelm the Coulomb force, causing these tiny “bowling balls” to drop in. So far, only atomic bombs have supplied enough energy to provide a “push” big enough to result in a net release of fusion energy.
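To put a rough number on the size of that “push,” here’s a back-of-the-envelope sketch in Python. The roughly three-femtometer separation at which the strong force is assumed to take over is a round number chosen purely for illustration; real barrier heights depend on the details of the nuclei involved.

```python
# Back-of-the-envelope Coulomb barrier estimate: the electrostatic
# potential energy of two nuclei brought to the edge of the "well."
# The ~3 fm separation is an assumed round number for illustration.

E_CHARGE = 1.602e-19   # elementary charge, coulombs
K_COULOMB = 8.988e9    # Coulomb constant, N*m^2/C^2
J_PER_MEV = 1.602e-13  # joules per MeV

def coulomb_barrier_mev(z1, z2, r_fm=3.0):
    """Electrostatic potential energy of two nuclei at separation r_fm."""
    r_meters = r_fm * 1e-15
    energy_joules = K_COULOMB * z1 * z2 * E_CHARGE**2 / r_meters
    return energy_joules / J_PER_MEV

print(f"D-T barrier   (Z=1 on Z=1): {coulomb_barrier_mev(1, 1):.2f} MeV")
print(f"H-11B barrier (Z=1 on Z=5): {coulomb_barrier_mev(1, 5):.2f} MeV")
```

That works out to roughly half a million electron volts just to push two hydrogen isotopes together, which is why nothing short of star-like temperatures, or an atomic bomb, has supplied a big enough “push” so far.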
To date, we’ve tried two main approaches to supplying the “push” in a more controlled form; magnetic fusion and inertial confinement fusion, or ICF. In both approaches the idea is to heat the nuclei to extreme temperatures, causing them to bang into each other with enough energy to overcome the Coulomb repulsion. However, when you dump that much energy into a material, it tends to fly apart, as in a conventional explosion. Somehow a way must be found to hold it in place long enough for significant fusion to take place. In magnetic fusion that’s accomplished with magnetic lines of force that hold the hot nuclei within a confined space. Some will always manage to escape, but if enough are held in place long enough, the resulting fusion reactions will release enough energy to keep the process going. In inertial confinement fusion, as the name would imply, the magnetic fields are replaced by the material’s own inertia. The idea is to supply so much energy in such a short period of time that significant fusion will occur before the material has time to fly apart. That’s essentially what happens in thermonuclear weapons. In ICF the atomic bomb that drives the reaction is replaced by powerful arrays of laser or particle beams.
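“Long enough” can be made semi-quantitative with the classic Lawson criterion, which says that the product of fuel density and confinement time must exceed a threshold before the reaction pays for itself. Here’s a minimal sketch comparing the two regimes; the densities, confinement times, and the threshold itself are round, textbook-style numbers assumed only to show the orders of magnitude involved, not measured data for any actual machine.

```python
# Order-of-magnitude comparison of the two confinement strategies
# against an approximate Lawson criterion for DT fusion. All numbers
# are assumed, textbook-style round values for illustration only.

LAWSON_N_TAU_DT = 1e20  # s/m^3, rough DT breakeven threshold

regimes = {
    # name: (particle density n [1/m^3], confinement time tau [s])
    "magnetic (tokamak plasma)": (1e20, 1.0),
    "inertial (compressed DT pellet)": (1e31, 1e-10),
}

for name, (n, tau) in regimes.items():
    n_tau = n * tau
    verdict = "meets" if n_tau >= LAWSON_N_TAU_DT else "falls short of"
    print(f"{name}: n*tau = {n_tau:.0e} s/m^3, {verdict} the threshold")
```

The point of the sketch is that the two approaches attack the same product from opposite ends: magnetic fusion holds a thin plasma for seconds, while ICF holds a fantastically dense one for a tenth of a nanosecond.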
Both of these approaches are scientifically feasible. In other words, both will almost certainly work if the magnetic fields can be made strong enough, or the laser beams powerful enough. Unfortunately, after decades of effort, we still haven’t managed to reach those thresholds. Our biggest ICF facility, the National Ignition Facility or NIF, has so far failed, by a wide margin, to achieve “ignition,” defined as fusion energy out equal to laser energy in. The biggest magnetic fusion facility, ITER, currently under construction in France, may reach the goal, but we’ll have to wait a long time to find out. The last time I looked there were no plans to even fuel it with deuterium and tritium (D and T, heavy isotopes of hydrogen with one and two neutrons in the nucleus, respectively, in addition to the usual proton) until 2028! The DT fusion reaction, shown below with some of the others, is the easiest to harness in the laboratory. For reasons I’ve outlined elsewhere, I doubt that either the “conventional” magnetic or inertial confinement approaches will ever produce energy at a cost competitive with the alternatives.
There are, however, other approaches out there. Over the years, startup companies have occasionally managed to attract millions in investment capital to explore these alternatives. Progress reports occasionally turn up on websites such as NextBigFuture. Examples may be found here, here and here, and many others may be found by typing in the search term “fusion” at the website. Typically, they claim they are three or four years away from building a breakeven device, or even a prototype reactor. So far none of them have panned out, but I keep hoping that eventually one of them will pull a rabbit out of its hat and come up with a workable design. The chances are probably slim, but at least marginally better than the odds that someone will perfect a perpetual motion machine.
I tend to be particularly dubious when I see proposals involving fusion fuels other than the usual deuterium and tritium. Other fusion reactions have their advantages. For example, some produce no neutrons, which can pose a radioactive hazard, and/or use fuels other than the highly radioactive tritium, which occurs in nature only in tiny trace amounts, and must therefore be “bred” in the reactor in order to keep the process going. Some of the most promising ones are shown along with the more “mainline” DT and DD reactions below; a short script tallying the energy released by each follows the list.
D + T → 4He (3.5 MeV) + neutron (14.1 MeV)
D + D → T (1.01 MeV) + proton (3.02 MeV) 50%
D + D → 3He (0.82 MeV) + neutron (2.45 MeV) 50%
H + 11B → 3(4He); Q = 8.68 MeV
H + 6Li → 3He + 4He; Q = 4.023 MeV
3He + 6Li → H + 2(4He); Q = 16.88 MeV
3He + 6Li → D + 7Be; Q = 0.113 MeV
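As promised, here is a tiny script totaling the energy released per reaction. The Q values are taken straight from the list above; the script merely sums the per-particle energies where they are quoted separately and converts to joules.

```python
# Energy released per fusion reaction, using the Q values quoted above.
J_PER_MEV = 1.602e-13  # joules per MeV

reactions_mev = {
    "D + T     -> 4He + n":     3.5 + 14.1,   # 17.6 MeV total
    "D + D     -> T + p":       1.01 + 3.02,
    "D + D     -> 3He + n":     0.82 + 2.45,
    "H + 11B   -> 3(4He)":      8.68,
    "H + 6Li   -> 3He + 4He":   4.023,
    "3He + 6Li -> H + 2(4He)":  16.88,
    "3He + 6Li -> D + 7Be":     0.113,
}

for reaction, q_mev in reactions_mev.items():
    print(f"{reaction}: Q = {q_mev:6.2f} MeV ({q_mev * J_PER_MEV:.2e} J)")
```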
The problem with the seemingly attractive alternatives to DT shown above, as well as a number of others that have been proposed, is that they all require significantly higher temperatures and/or confinement times for fusion “ignition” to occur. Take a look at the graph below.
The horizontal axis gives the “temperature” of the fuel in thousands of electron volts (keV), and the vertical axis shows the “cross-section” for each of the reactions. The cross-section is related to the probability that a particular reaction will occur. It is measured in units of 10⁻²⁴ cm², or “barns,” because, at least at the nuclear scale, that’s as big as the broad side of a barn. Notice that the cross-section for the DT reaction is much higher at low temperatures than for all the others. Yet we failed to achieve fusion ignition on the NIF with that reaction in spite of the fact that the facility is capable of focusing a massive 1.8 megajoules of laser energy on a fusion target in a period of a few billionths of a second! Obviously, if we couldn’t get DT to work on the NIF, the other reactions will be difficult to harness indeed.
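Why does DT win so decisively at low temperatures? The dominant effect is quantum tunneling through the Coulomb barrier, and the odds of tunneling fall off catastrophically as the product of the two nuclear charges grows. The sketch below uses the standard Gamow-factor estimate, exp(−√(E_G/E)) with Gamow energy E_G = 2·m_r·c²·(π·α·Z₁·Z₂)²; the 10 keV energy is an assumed representative value, and the bare formula ignores the nuclear physics of each individual reaction, so only the relative magnitudes mean anything.

```python
import math

# Relative Gamow tunneling factors: exp(-sqrt(E_G / E)), where
# E_G = 2 * m_r * c^2 * (pi * alpha * Z1 * Z2)^2 is the Gamow energy.
# A textbook estimate; the 10 keV energy is assumed for illustration.

ALPHA = 1 / 137.036  # fine-structure constant
U_MEV = 931.5        # atomic mass unit in MeV/c^2

def tunneling_factor(z1, z2, a1, a2, energy_kev):
    """Gamow tunneling factor at a given center-of-mass energy (keV)."""
    m_r_mev = U_MEV * a1 * a2 / (a1 + a2)  # reduced mass, MeV/c^2
    e_gamow_kev = 2 * m_r_mev * (math.pi * ALPHA * z1 * z2) ** 2 * 1000
    return math.exp(-math.sqrt(e_gamow_kev / energy_kev))

dt = tunneling_factor(1, 1, 2, 3, energy_kev=10.0)     # D + T
p11b = tunneling_factor(1, 5, 1, 11, energy_kev=10.0)  # H + 11B

print(f"D-T   tunneling factor at 10 keV: {dt:.1e}")
print(f"H-11B tunneling factor at 10 keV: {p11b:.1e}")
print(f"DT is ~{dt / p11b:.0e} times more likely to tunnel")
```

At that energy, a DT pair comes out something like sixteen orders of magnitude more likely to tunnel through the barrier than a hydrogen-boron pair, which is the graph’s message in a single number.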
In short, I tend to be dubious when I read the highly optimistic progress reports, complete with “breakthroughs,” of the latest fusion startup. I tend to be a great deal more dubious when they announce they will dispense with DT altogether, so sure of the superior qualities of their design that they expect lithium, boron, or some other exotic fuel to work just as well. Still, I keep my fingers crossed.
Posted on September 2nd, 2015 No comments
A synopsis of George Orwell’s A Clergyman’s Daughter may be found in the Wiki entry on the same. In short, it relates the experiences of Dorothy Hare, only daughter of the Reverend Charles Hare, a “gentleman” clergyman with a chronic habit of living beyond his means. Dorothy’s life is consumed by a frantic struggle to maintain respectability in spite of a mountain of debt owed to the local tradesmen, a dwindling congregation, and a church gradually decaying to ruin for lack of maintenance. There’s also a problem so repressed in Dorothy’s mind that she’s hardly conscious of it; she is losing her Christian faith.
Eventually the pressure becomes unbearable. At the end of Chapter 1 we leave Dorothy exhausted, working herself beyond endurance late at night to prepare costumes for a children’s play. At the start of Chapter 2 we find her teleported to the Old Kent Road, south of London, where she wakes up with a bad case of amnesia and only half a crown in her pocket. A good German might describe this rather remarkable turn of events as an den Haaren herbeigezogen (dragged in by the hair). In other words, it’s far-fetched, but we can forgive it because Orwell refrains from boring us with explanatory psychobabble, it’s in one of his earliest books, and he needs some such device in order to dish up a fictional version of the autobiographical events described in his Down and Out in Paris and London, published a couple of years earlier.
Eventually Dorothy is rescued from starvation and squalor by a much older cousin, who sets her up as a school teacher at Ringwood House, which Orwell describes as a fourth-rate private school with only 21 female inmates. At this point the astute reader will discover something that might come as a revelation to those who are only familiar with Animal Farm and 1984. Orwell was a convinced socialist when he wrote the book, and remained one until the end of his life. Mrs. Creevy, the woman who runs the school, is a grasping capitalist, interested only in squeezing as much profit out of the enterprise as possible. The girls’ “education” consists mainly of a mind-numbing routine of rote memorization and handwriting drills. Dorothy’s attempts at educational reform are nipped in the bud, and she is eventually sacked. In Mrs. Creevy’s words,
It’s the fees I’m after, not developing the children’s minds. It’s not to be supposed as anyone’s to go to all the trouble of keeping a school and having the house turned upside down by a pack of brats, if it wasn’t that there’s a bit of money to be made out of it. The fee comes first, and everything else comes afterwards.
Orwell later elaborates,
There are, by the way, vast numbers of private schools in England. Second-rate, third-rate, and fourth-rate (Ringwood House was a specimen of the fourth-rate school), they exist by the dozen and the score in every London suburb and every provincial town. At any given moment there are somewhere in the neighborhood of ten thousand of them, of which less than a thousand are subject to Government inspection. And though some of them are better than others, and a certain number, probably, are better than the council schools with which they compete, there is the same fundamental evil in all of them; that is, that they have ultimately no purpose except to make money.
So long as schools are run primarily for money, things like this will happen. The expensive private schools to which the rich send their children are not, on the surface, so bad as the others, because they can afford a proper staff, and the Public School examination system keeps them up to the mark; but they have the same essential taint.
Recall that the book was published in 1935. The Spanish Civil War, in which Orwell fought with a socialist unit not affiliated with the Communists, began in 1936. In that conflict he had his nose rubbed in the reality of totalitarianism, socialism that had dropped the democratic mask. The experience is described in his Homage to Catalonia, which is essential reading for anyone interested in learning what inspired his later work. There he tells how the Communist legions attacked and destroyed his own division, even though it was fighting on the same side. Totalitarianism has never recognized more than two sides; the side that it controls, and the side that it doesn’t. He saw that its real reason for existence was nothing like a workers’ paradise, or any other version of “human flourishing,” but absolute, unconditional power. The nature of the system and the power it aimed at was what he described in 1984. When A Clergyman’s Daughter was published, that revelation still lay in the future. It may be that in 1935 Orwell still thought of the socialists as one big, happy, if occasionally quarrelsome, family.
Be that as it may, the real interest of the book, at least as far as I’m concerned, lies at the end. There, more explicitly than in any other of his novels or essays, Orwell takes up the question of the Meaning of Life. While down and out, Dorothy had lost her faith once and for all. In spite of that, after Mrs. Creevy sacks her, she finds her way back to the family parsonage, and takes up again where she left off. She suffers from no illusions. As Orwell puts it,
It was not that she was in any doubt about the external facts of her future. She could see it all quite clearly before her… Whatever happened, at the very best, she had got to face the destiny that is common to all lonely and penniless women. “The Old Maids of Old England,” as somebody called them. She was twenty-eight – just old enough to enter their ranks.
She was not the same woman as before. She had lost her faith, and yet, she meditated,
Faith vanishes, but the need for faith remains the same as before. And given only faith, how can anything else matter? How can anything dismay you if only there is some purpose in the world which you can serve, and which, while serving it, you can understand? Your whole life is illumined by that sense of purpose.
Life, if the grave really ends it, is monstrous and dreadful. No use trying to argue it away. Think of life as it really is, think of the details of life; and then think that there is no meaning in it, no purpose, no goal except the grave. Surely only fools or self-deceivers, or those whose lives are exceptionally fortunate, can face that thought without flinching?
Her mind struggled with the problem, while perceiving that there was no solution. There was, she saw clearly, no possible substitute for faith; no pagan acceptance of life as sufficient unto itself, no pantheistic cheer-up stuff, no pseudo-religion of “progress” with visions of glittering Utopias and ant-heaps of steel and concrete. It is all or nothing. Either life on earth is a preparation for something greater and more lasting, or it is meaningless, dark and dreadful.
Here we see that, even in 1935, Orwell wasn’t quite convinced that the Soviet version of a Brave New World really represented “progress.” And while democratic socialism may have later given him something of a sense of purpose, it wasn’t yet filling the void. Dorothy considers,
Where had she got to? She had been saying that if death ends all, then there is no hope and no meaning in anything. Well, what then?
At this point, the true believers chime in. They know the answer. Bring back faith, and, voila, the void is filled! So many of them honestly seem to believe that, because they feel a need, the thing needed will automatically pop into existence. They need absolute moral standards. Therefore their faith must be true. They need a purpose in life. Therefore their faith must be true. They need human existence to have meaning. Therefore their faith must be true. They must have unquestionable rights. Therefore their faith must be true. And so on, and so on. Orwell is having none of it. Dorothy muses on,
And how cowardly, after all, to regret a superstition that you had got rid of – to want to believe something that you knew in your bones to be untrue.
Orwell provides us with no magic solution to this thorny problem. Indeed, in the end his answer is singularly unsatisfying. He suggests that we just get on with it and leave it at that. As Dorothy glues together strips of paper, forming the boots, armor, and other accoutrements required for the next church play, she has stumbled into the solution without realizing it:
The smell of glue was the answer to her prayer. She did not know this. She did not reflect, consciously, that the solution to her difficulty lay in accepting the fact that there was no solution; that if one gets on with the job that lies to hand, the ultimate purpose of the job fades into insignificance; that faith and no faith are very much the same provided that one is doing what is customary, useful and acceptable. She could not formulate these thoughts as yet, she could only live them. Much later, perhaps, she would formulate them and draw comfort from them.
Dorothy sliced two more sheets of brown paper into strips, and took up the breastplate to give it its final coating. The problem of faith and no faith had vanished utterly from her mind. It was beginning to get dark, but, too busy to stop and light the lamp, she worked on, pasting strip after strip of paper into place, with absorbed, with pious concentration, in the penetrating smell of the gluepot.
Orwell didn’t want A Clergyman’s Daughter to be republished, unless, perhaps, in a cheap version to scare up a few pounds for his heirs. No doubt he considered it too immature. We can be grateful that his literary executors thought otherwise, else we might never have known of his struggles with the Meaning of Life problem so early in his career. He didn’t spill much ink over the problem later on, but we must assume that he had found some more inspiring purpose to strive for than just “getting on with it.” Weak and in pain, he fought to complete 1984 on his death bed with incredible tenacity and dedication. It was a gift to all of us that didn’t follow him to the grave, but lived long after he was gone as the single most effective literary weapon against a threat that had materialized as Communism in his own day, but will likely always lurk among us in one form or another.
And what of the Meaning of Life? That’s a question we must all provide an answer for on our own. None of the imaginary super-beings we have dreamed up over the years is likely to materialize to trivialize the search. And just as Orwell wrote, whether we care to deal with the problem or not, there is no objective solution. It must be subjective and individual. It need not be any less compelling for all that.
Posted on August 25th, 2015 No comments
No doubt if you’ve been following this story you’ve noticed the howls coming from the usual quarters. Take it with a grain of salt. There’s no reason for Ranger School to be “dumbed down” for women to pass if the mission is anything like it was when I went through. Upper body strength isn’t critical for Ranger type missions, and that includes rock/mountain climbing, rappelling, etc. The premium is on being able to pack a serious load under adverse conditions of sleep and food deprivation, tolerate extremes of heat and cold, and “function” when you’re assigned to lead small unit operations. A strong woman can do all those things. There are ranger units, but the ranger and airborne badges are also marks of prestige. Women should have the right to wear them if they can handle the physical and mental challenges.
When I went through, they found out I was a good swimmer, so I was appointed “far shore lifeguard” of my ranger training unit. That meant I had to strip and carry a rope across streams when we came to them and make a rope bridge so the others could shinny across without getting wet. I suppose I would have left my jockey shorts on if women were around. Once the ranger sergeants just had me stand in freezing water up to my neck and pull the short guys across a deep spot by the webbing on their helmets. When I finally got out I couldn’t straighten out from hypothermia. We still had to wade through 600 yards of cypress swamp, but I was very fortunate to find a big fire when we finally got through that. It’s probably the closest I’ve ever come to dying, and years later four guys did die of hypothermia. They’re probably more careful now. The next day it was so cold (in Florida!) that my sodden fatigues started freezing on my body. Fortunately we all had two pair, so I stripped them off, buried them, and put on my dry ones. No doubt they’re still down there rotting away somewhere.
We got one C ration a day (we’re talking about the ancient times before MREs), and were starving at the end of the swamp phase. The first signs were dizziness when you went from a prone position to standing up. Some of the guys started hallucinating at the end. My ranger buddy seriously believed he was standing in line at Mama Leoni’s Restaurant in New York City at one point. I got kind of worried about him.
Occasionally Huey helicopters would pick us up to take us from one mission to another, with “aggressors” usually waiting for us when we reached our landing zones. We would sit on either side with our legs dangling over the edge holding onto a little strap “seat belt.” Those chopper pilots were crazy, and I could swear my feet brushed against the top of the forest foliage one time. By that time I was so dazed it didn’t bother me.
In those days we parachuted into Eglin for the swamp phase from a combat training altitude of 800 feet. Actual combat jumps were from 500 feet. After my stick landed I looked up and saw the next one jump. One guy’s chute was nothing but a ball of silk. His reserve caught him a fraction of a second before he would have hit the ground. When a later class jumped in, a guy died when he landed on an old concrete airstrip and fell backwards, driving the rim of his steel helmet into his neck. There were a few “laig” (leg, non-airborne infantry) rangers around, but they were rare.
I didn’t notice any powder rooms when we were on the march.
Posted on August 23rd, 2015 4 comments
So who is Jaak Panksepp? Have a look at his YouTube talk on emotions at the bottom of this post, for starters. A commenter recommended him, and I discovered the advice was well worth taking. Panksepp’s The Archaeology of Mind, which he co-authored with Lucy Biven, was a revelation to me. The book describes a set of basic emotional systems that exist in all, or virtually all, mammals, including humans. In the words of the authors:
…the ancient subcortical regions of mammalian brains contain at least seven emotional, or affective, systems: SEEKING (expectancy), FEAR (anxiety), RAGE (anger), LUST (sexual excitement), CARE (nurturance), PANIC/GRIEF (sadness), and PLAY (social joy). Each of these systems controls distinct but specific types of behaviors associated with many overlapping physiological changes.
This is not just another laundry list of “instincts” of the type often proposed by psychologists at the end of the 19th and the beginning of the 20th centuries. Panksepp is a neuroscientist, and has verified experimentally the unique signatures of these emotional systems in the ancient regions of the brain shared by humans and other mammals. Again quoting from the book,
As far as we know right now, primal emotional systems are made up of neuroanatomies and neurochemistries that are remarkably similar across all mammalian species. This suggests that these systems evolved a very long time ago and that at a basic emotional and motivational level, all mammals are more similar than they are different. Deep in the ancient affective recesses of our brains, we remain evolutionarily kin.
If you are an astute student of the Blank Slate phenomenon, dear reader, no doubt you are already aware of the heretical nature of this passage. That’s right! The Blank Slaters were prone to instantly condemn any suggestion that there were similarities between humans and other animals as “anthropomorphism.” In fact, if you read the book you will find that their reaction to Panksepp and others doing similar research has been every bit as allergic as their reaction to anyone suggesting the existence of human nature. However, in the field of animal behavior, they are anything but a quaint artifact of the past. Diehard disciples of the behaviorist John B. Watson and his latter day follower B. F. Skinner, Blank Slaters of the first water, still haunt the halls of academia in significant numbers, and still control the message in any number of “scientific” journals. There they have been following their usual “scholarly” pursuit of ignoring and/or vilifying anyone who dares to disagree with them ever since the heyday of Ashley Montagu and Richard Lewontin. In the process they have managed to suppress or distort a great deal of valuable research bearing directly on the wellsprings of human behavior.
We learn from the book that the Blank Slate orthodoxy has been as damaging for other animals as it has been for us. Among other things, it has served as the justification for indifference to or denial of the feelings and consciousness of animals. The possibility that this attitude has contributed to some rather gross instances of animal abuse has been drawing increasing attention from those who are concerned about their welfare. See, for example, the website of Panksepp admirer Temple Grandin. According to Panksepp and Biven,
Another of Descartes’ big errors was the idea that animals are without consciousness, without experiences, because they lack the subtle nonmaterial stuff from which the human mind is made. This notion lingers on today in the belief that animals do not think about nor even feel their emotional responses.
Many emotion researchers as well as neuroscience colleagues make a sharp distinction between affect and emotion, seeing emotion as purely behavioral and physiological responses that are devoid of affective experience. They see emotional arousal as merely a set of physiological responses that include emotion-associated behaviors and a variety of visceral (hormonal/autonomic) responses, without actually experiencing anything – many researchers believe that other animals may not feel their emotional arousals. We disagree.
Some justify this rather counter-intuitive belief by suggesting that it is impossible to really experience or be conscious of emotions (affects) without language. Panksepp and Biven’s response:
Words cannot describe the experience of seeing the color red to someone who is blind. Words do not describe affects either. One cannot explain what it feels like to be angry, frightened, lustful, tender, lonely, playful, or excited, except indirectly in metaphors. Words are only labels for affective experiences that we have all had – primary affective experiences that we universally recognize. But because they are hidden in our minds, arising from ancient prelinguistic capacities of our brains, we have found no way to talk about them coherently.
With such excuses, and the fact that they could not “see” feelings and emotions in their experiments with “reinforcement” and “conditioning,” the behaviorists concluded that the feelings of the animals they were using in their experiments didn’t matter. It was outside the realm of “science.” Again from the book,
Much as we admire the scientific finesses of these conditioning experiments, we part company with (Joseph) LeDoux and many of the others who conduct this kind of work when it comes to understanding what emotional feelings really are. This is because they studiously ignore the feelings of their animals, and they often claim that the existence or nonexistence of the animals’ feelings is a nonscientific issue (although there are some signs of changing sentiments on these momentous issues). In any event…, LeDoux has specifically endorsed the read-out theory – to the effect that affects are created by neocortical working-memory functions, uniquely expanded in human brains. In other words, he sees affects as a higher-order cognitive construct (perhaps only elaborated in humans), and thereby he envisions the striking FEAR responses of his animals to be purely physiological effects with no experiential consequences.
…And when we analyze the punishing properties of electrical stimulation here in animals, we get the strongest aversive responses imaginable at the lowest levels of brain stimulation, and humans experience the most fearful states of mind imaginable. Such issues of affective experience should haunt fear-conditioners much more than they apparently do.
The evidence strongly indicates that there are primary-process emotional networks in the brain that help generate phenomenal affective experiences in all mammals, and perhaps in many other vertebrates and invertebrates.
It’s stunning, really. Anyone who has ever owned a dog is aware of how similar their emotional responses can often be to those of humans, and how well they remember them. Like humans, they are mammals. Like humans, their brains include a cortex. It would hardly be “parsimonious” to simply assume that humans represent some kind of a radical departure when it comes to the ability to experience and remember emotions, and that other animals lack this ability, in defiance of centuries of such “common sense” observations that they can. All this mass of evidence apparently isn’t “scientific,” and therefore doesn’t count, because these latter day Blank Slaters can’t observe in their mazes and shock boxes what appears obvious to everyone else in the world. “Anthropomorphism!” From such profound reasoning we are apparently to conclude that pain in animals doesn’t matter.
Why the Blank Slate’s furious opposition to “anthropomorphism”? In a sense, it’s actually an anachronism. Recall that the fundamental dogma of the Blank Slate was the denial of human nature. Obviously other mammals have a “nature.” Clearly, the claim that dogs and cats must “learn” all their behavior from their “culture” was never going to fly. Not so human beings. Once upon a time the Blank Slaters claimed that everything in the human behavioral repertoire, with the possible exception of breathing, urinating, and defecating, was learned. They even went so far as to include sex. Even orgasms had to be “learned.” It follows that the gulf between humans and animals had to be made as wide as possible.
Fast forward to about the year 2000. As far as their denial of human nature was concerned, the Blank Slaters had lost control of the popular media. To an increasing extent, they were also losing control of the message in academia. Books and articles about innate human behavior began pouring from the presses, and people began speaking of human nature as a given. The Blank Slaters had lost that battle. The main reason for their “anthropomorphism” phobia had disappeared. In the more sequestered field of “animal nature,” however, they could carry on as if nothing had happened without making laughing stocks of themselves. No one was paying any attention except a few animal rights activists. And carry on they did, with the same “scientific” methods they had used in the past. Allow me to quote from Panksepp & Biven again to give you a taste of what I’m talking about:
It is noteworthy that Walter Hess, who first discovered the RAGE system in the cat brain in the mid-1930s (he won a Nobel Prize for his work in 1949), using localized stimulation of the hypothalamus, was among the first to suggest that the behavior was “sham rage.” He confessed, however, in writings published after his retirement (as noted in Chapter 2: e.g., The Biology of Mind ), that he had always believed that the animals actually experienced true anger. He admitted to having shared sentiments he did not himself believe. Why? He simply did not want to have his work marginalized by the then-dominant behaviorists who had no tolerance for talk about emotional experiences. As a result, we still do not know much about how the RAGE system interacts with other cognitive and affective systems of the brain.
In an earlier chapter on The Evolution of Affective Consciousness they added,
In his retirement he admitted regrets about having been too timid, not true to his convictions, to claim that his animals had indeed felt real anger. He confessed that he did this because he feared that such talk would lead to attacks by the powerful American behaviorists, who might thereby also marginalize his more concrete scientific discoveries. To a modest extent, he tried to rectify his “mistake” in his last book, The Biology of Mind, but this work had little influence.
So much for the “self-correcting” nature of science. It is anything but that when poisoned by ideological dogmas. Panksepp and Biven conclude,
But now, thankfully, in our enlightened age, the ban has been lifted. Or has it? In fact, after the cognitive revolution of the early 1970s, the behaviorist bias has largely been retained but more implicitly by most, and it is still the prevailing view among many who study animal behavior. It seems the educated public is not aware of that fact. We hope the present book will change that and expose this residue of behaviorist fundamentalism for what it is: an anachronism that only makes sense to people who have been schooled within a particular tradition, not something that makes any intrinsic sense in itself! It is currently still blocking a rich discourse concerning the psychological, especially the affective, functions of animal brains and human minds.
This passage is particularly interesting because it demonstrates, as can be seen from the passage about “the cognitive revolution of the early 1970s,” that the authors were perfectly well aware of the larger battle with the Blank Slate orthodoxy over human nature. However, that rather opaque allusion is about as close as they came to referring to it in the book. One can hardly blame them for deciding to fight one battle at a time. There is one interesting connection that I will point out for the cognoscenti. In Chapter 6, Beyond Instincts, they write,
The genetically ingrained emotional systems of the brain reflect ancestral memories – adaptive affective functions of such universal importance for survival that they were built into the brain, rather than having to be learned afresh by each generation of individuals. These genetically ingrained memories (instincts) serve as a solid platform for further developments in the emergence of both learning and higher-order reflective consciousness.
Compare this with a passage from the work of the brilliant South African naturalist Eugene Marais, which appeared in his The Soul of the Ape, written well before his death in 1936, but only published in 1969:
…it would be convenient to speak of instinct as phyletic memory. There are many analogies between memory and instinct, and although these may not extend to fundamentals, they are still of such a nature that the term phyletic memory will always convey a clear understanding of the most characteristic attributes of instinct.
As it happens, the very charming and insightful introduction to The Soul of the Ape when it was finally published in 1969 was written by none other than Robert Ardrey! He had an uncanny ability to find and appreciate the significance of the work of brilliant but little-known researchers like Marais.
As for Panksepp, I can only apologize for taking so long to discover him. If nothing else, his work and teachings reveal that this is no time for complacency. True, the Blank Slaters have been staggered, but they haven’t been defeated quite yet. They’ve merely abandoned the battlefield and retreated to what would seem to be their last citadel; the field of animal behavior. Unfortunately there is no Robert Ardrey around to pitch them headlong out of that last refuge, but they face a different challenge now. They can no longer pretend to hold the moral high ground. Their denial that animals can experience and remember their emotions in the same way as humans leaves the door wide open for the abuse of animals, both inside and outside the laboratory. It is to be hoped that more animal rights activists like Temple Grandin will start paying attention. I may not agree with them about eating red meat, but the maltreatment of animals, justified by reference to a bogus ideological dogma, is something that can definitely excite my own RAGE emotions. I will have no problem standing shoulder to shoulder with them in this fight.