Posted on April 14th, 2014
That great poet among philosophers, Friedrich Nietzsche, once wrote,
I teach you the overman. Man is something that shall be overcome. What have you done to overcome him? All beings so far have created something beyond themselves; and do you want to be the ebb of this great flood and even go back to the beasts rather than overcome man? What is the ape to man? A laughingstock or a painful embarrassment. And man shall be just that for the overman: a laughingstock or a painful embarrassment… Behold, I teach you the overman. The overman is the meaning of the earth. Let your will say: the overman shall be the meaning of the earth!
Nietzsche was no believer in “scientific morality.” He knew that if, as his Zarathustra claimed, God was really dead, there was no basis for his preferred version of the future of mankind or his preferred versions of Good and Evil beyond a personal whim. However, as whims go, the above passage at least has the advantage of being consistent. In other words, unlike some modern versions of morality, it isn’t a negation of the reasons that morality evolved in the first place. It would have been interesting to hear the great man’s impressions of a world in which modern genetics is increasingly endowing the individual with the power to decide for himself whether he wants to be the “rope between man and overman” or not.
Hardly a month goes by without news of some new startup offering the latest version of that power. For example, a week ago an article turned up in The Guardian describing the “Matchright” technology to be offered by a venture by the name of Genepeeks. Its title, Startup offering DNA screening of ‘hypothetical babies’ raises fears over designer children, reflects the usual “Gattaca” nightmares that so many seem to associate with such technologies. It describes “Matchright” as a computational tool that can screen the DNA of potential sperm donors, identifying those who carry a risk of genetically transmitted diseases when matched with the DNA of a recipient’s egg. According to the article,
…for the technology to work it needs to pull off a couple of amazing tricks. For a start, it is not as simple as creating a single digital sperm and an egg based on the parents and putting them together. When an egg and a sperm fuse in real life, they swap a bunch of DNA – a process called recombination – which is part of the reason why each child (bar identical twins) is different. To recreate this process, the software needs to be run 10,000 times for each individual potential donor. They can then see the percentage of these offspring that are affected by the disease.
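The resampling procedure the article describes is straightforward to sketch. The toy model below is entirely my own construction, not GenePeeks’ actual algorithm (which is unpublished); the gene name and the reduction of recombination to independent assortment per gene are simplifying assumptions. It simulates 10,000 virtual offspring from two parental genotypes and reports the fraction affected by a recessive single-gene disorder:

```python
import random

def simulate_offspring(donor, recipient, n_trials=10_000):
    """Estimate the fraction of simulated offspring affected by
    recessive single-gene disorders.  Each parent is a dict mapping
    gene name -> (allele, allele), where 1 marks a disease allele.
    Toy model: real recombination operates on whole chromosomes,
    not on independently assorting genes."""
    affected = 0
    for _ in range(n_trials):
        child_has_disease = False
        for gene in donor:
            # draw one allele at random from each parent (meiosis,
            # simplified to independent assortment per gene)
            from_donor = random.choice(donor[gene])
            from_recipient = random.choice(recipient[gene])
            if from_donor == 1 and from_recipient == 1:
                child_has_disease = True
        if child_has_disease:
            affected += 1
    return affected / n_trials

# Both parents carriers (one disease allele) for the same hypothetical gene:
donor = {"CFTR": (0, 1)}
recipient = {"CFTR": (0, 1)}
risk = simulate_offspring(donor, recipient)  # expect roughly 0.25
```

With two carriers, Mendelian inheritance predicts about a quarter of the simulated offspring inherit both disease alleles, which is exactly the kind of percentage the article says the software reports.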
It goes on to quote bioethicist Ronald Green of Dartmouth:
The system will provide the most comprehensive genetic analysis to date of the potential risk of disease in a newborn, without even needing to fertilise a single egg. It gives people more confidence about disease risk, says Green, who is not involved in the work: “If someone I care for was in the market for donor sperm I might encourage them to use this technology,” he says.
In keeping with the usual custom for such articles, this one ends up with a nod to the moralists:
As for the ethical issues, (company co-founder Anne) Morriss does not deny they are there, but believes in opening up the discussion “beyond the self-appointed ethicists”. “I think everybody should be involved – the public and the scientists and the regulators.”
Indeed, “self-appointed ethicists” aren’t hard to find. There is an interesting discussion of the two sides of this debate in an article recently posted at Huffington Post entitled The Ethics of ‘Designer Babies.’ Such concerns raise a question that also came up in the debate back in the late 40′s and early 50′s about whether we should develop hydrogen bombs – do we really have a choice? After all, we’re not the only ones in the game. Consider, for example, the title of an article that recently appeared on the CBS News website: “Designer babies” on the way? In China, scientists attempt to unravel human intelligence. According to the article,
Inside a converted shoe factory in Shenzhen, China, scientists have launched an ambitious search for the genes linked to human intelligence.
The man in charge of the project is 21-year-old science savant, Zhao Bowen. He estimates more than 60 percent of your IQ is decided by your parents, and now they want to prove it.
Asked how he would describe his ultimate goal, Zhao said it’s to “help people understand themselves and to create a better world.”
The “self-appointed ethicists” can react to Zhao’s comment as furiously as they please. The only problem is that they don’t have a monopoly on the right to make the decision. They may not be personally inclined to become “the rope between man and overman.” However, I suspect they may reevaluate their ethical concerns when they find themselves left in the dust with the apes.
Posted on April 13th, 2014
Yes, dear reader, there was. It’s quite true that, for half a century and more, the “Men of Science” imposed on the credulity of mankind by insisting that something perfectly obvious and long familiar to the rest of us didn’t exist. I refer, of course, to human nature. It was a herculean effort in self-deception that confirmed yet again George Orwell’s observation that, “There are some ideas so absurd that only an intellectual could believe them.” In the heyday of the Blank Slate orthodoxy, such “Men of Science” as Ashley Montagu could say things such as,
…man is man because he has no instincts, because everything he is and has become he has learned, acquired from his culture, from the man-made part of the environment, from other human beings.
The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.
and do it with a perfectly straight face. It was an episode in our history that must never be forgotten, and one that should be recalled whenever we hear someone claim that “science says” this or that, or that “the science is settled.” The scientific method is the best butterfly net our species has come up with so far to occasionally capture a fluttering bit of truth. However, it can never be separated from the ideological context in which it functions. As the Blank Slate episode demonstrated, that context is quite capable of subverting and adulterating the truth when the truth stands in the way of ideological imperatives.
In the case of the Blank Slate, as it happens, those imperatives did not derail our search for truth for some time after Darwin first grasped the behavioral implications of his revolutionary theory. And just as those implications were obvious to Darwin, they were obvious to many others. The existence and selective significance of human nature were immediately apparent to anyone with an open mind and rudimentary powers of self-observation. Indeed, they were treated almost as commonplaces in the behavioral sciences for decades after Darwin until they finally succumbed to the ideological fog.
For example, at about the same time that J. B. Watson and Franz Boas began fabricating the first serious “scientific” rationalizations of the Blank Slate, there was no evidence in the popular media of the rigid ideological orthodoxy that became such a remarkable feature of their coverage of anything dealing with human behavior in the 60′s and 70′s. The later vilification of heretics as “racists” and “fascists” was nowhere to be seen. Indeed, one Dr. Grace Adams, who held a Ph.D. in psychology from Cornell, was actually guileless enough to contribute an article entitled Human Instincts to H. L. Mencken’s The American Mercury as late as 1928! Apparently without the faintest inkling of the hijacking of the behavioral sciences that was then already in the works, she wrote,
The recognition of the full scope and function of the human instincts will appear to those who come after us as the most important advance made by psychology in our time. (!)
How ironic those words seem now! The very term “instinct” became toxic during the ascendancy of the Blank Slate, when the high priests of the prevailing orthodoxy insisted on their own rigid definition of the term, and then proceeded to exploit it as a handy tool for “smarter than thou” posturing and scientific one-upmanship. Adams’ article includes some interesting remarks on the origin of the word “instinct” in the biological sciences and the later, gradual redefinitions that occurred when it was taken up by the psychologists. In particular, she notes that, while the biologists of the time still used the term to describe behaviors that were unaffected by either “experience or volition,” and were “purely mechanical processes lying completely outside the province of consciousness,” psychologists preferred a much more flexible definition. Referring to the great American ur-psychologist William James, Adams wrote,
So it was obvious, to him at least, “that every instinctive act in an animal with memory must cease to be ‘blind’ after being once repeated.” In this way, according to James, an instinct could become not only conscious but capable of modification and conscious direction and change.
Or, as we would say today, the expression of “instincts” could be modified by “culture.” Adams notes that, as early as 1890,
James was able to state complacently that there was agreement among his contemporaries that the human instincts were: sucking, biting, chewing, grinding the teeth, licking, making grimaces, spitting, clasping, grasping, pointing, making sounds of expressive desire, carrying to the mouth, the function of alimentation, crying, smiling, protrusion of the lips, turning the head aside, holding the head erect, sitting up, standing, locomotion, vocalization, imitation, emulation or rivalry, pugnacity, anger, resentment, sympathy, the hunting instinct, fear, appropriation or acquisitiveness, constructiveness, play, curiosity, sociability and shyness, secretiveness, cleanliness, modesty and shame, love, the anti-sexual instincts, jealousy, and parental love. (Italics are mine)
Turn the page to the 20th century, and we already find two of the prominent psychologists of the day, James Angell and Edward Thorndike, squabbling over the definition of “instinct.” According to Adams,
Angell, accepting James’ argument that instincts once yielded to are thereafter felt in connection with the foresight of their ends, expands this idea into the statement that “instincts, in the higher animals, at all events, appear always to involve consciousness.” And he makes consciousness the essential element of instincts. Thorndike, on the other hand, remembers James’ admission that instincts are originally blind and maintains that “all original tendencies are aimless in the sense that foresight of the consequences does not affect the response.” For him the only necessary components of an instinct are “the ability to be sensitive to a certain situation, the ability to make a certain response, and the existence of a bond or connection whereby that response is made to that situation.” While the ideas of neither Angell nor Thorndike are actually inconsistent with James’ two-fold definition of an instinct, they lead to very different lists of instincts.
To cut to the chase, here are the lists of Angell,
Angell, by making consciousness the mark that distinguishes an instinct from a reflex, has to narrow the number of instincts to fear, anger, shyness, curiosity, sympathy, modesty (?), affection, sexual love, jealousy and envy, rivalry, sociability, play, imitation, constructiveness, secretiveness and acquisitiveness.
But Thorndike admits no gap between reflexes and instincts, so he must both expand and subdivide James’ list. He does this in a two hundred page inventory (!) which he regrets is incomplete. He adds such activities as teasing, tormenting, bullying, sulkiness, grieving, the horse-play of youths, the cooing and gurgling of infants and their satisfaction at being held, cuddled and carried, attention-getting, responses to approving behavior, responses to scornful behavior, responses by approving behavior, responses by scornful behavior, the instinct of multiform physical activity, and the instinct of multiform mental activity. The “so-called instinct of fear” he analyzes into the instinct of escape from restraint, the instinct of overcoming a moving obstacle, the instinct of counterattack, the instinct of irrational response to pain, the instinct to combat in rivalry, and the threatening or attacking movements with which the human male tends to react to the mere presence of a male of the same species during acts of courtship.
In a word, the psychologists of the 20′s were still quite uninhibited when it came to compiling lists of instincts. It is noteworthy that Thorndike’s The Elements of Psychology, which originally included extensive discussions of human “instincts” in Chapters 12 and 13, continued in use as a textbook for many years. Indeed, Thorndike was one of the many psychologists of his day who seem surprisingly “modern” in the context of the early 21st century. For example, again quoting Adams,
And Thorndike points out that a complete inventory of man’s original nature is needed not only as a basis of education but for economic, political, ethical and religious theories.
And, in a passage that, in light of recent developments in the field of evolutionary psychology, can only be described as stunning, Adams continues,
For Colvin and Bagly the chief essential of instincts is that “they are directed toward some end that is useful.” But they do not mean useful in a selfish or materialistic sense, for they are able to describe an altruistic instinct which is as real to them as the predatory instinct. And Kirkpatrick conceives of man being by native endowment even more noble. Indeed he credits to the human being a regulative instinct “which exists in the moral tendency to conform to law and to act for the good of others as well as self, and in the religious tendency to regard a Higher Power.”
Writing in the June and August, 1928 editions of the Mercury, H. M. Parshley elaborates on the connection, noticed decades earlier by Darwin himself, between “instincts” and morality:
Ethics certainly involves the consideration of motives, values, and ideals; and a scientific ethics requires genuine knowledge about these elusive matters.
As if anticipating Stephen Jay Gould’s delusional theory of “non-overlapping magisteria,” he continues,
…in my opinion, the chief support of obscurantism at this moment is the notion that motives, values, and ideals, unlike material things, are beyond the range of scientific study, and thus afford a free and exclusive field in which religion and philosophy may disport themselves authoritatively without challenge.
Parshley continues with a comment that we now recognize was sadly mistaken:
The biological needs are clear enough to see and we know a great deal about them – quite sufficient to establish the futility of asceticism and give rise to a complete distrust of any ethics that involves us in serious conflict with them. Science has done this, and, I think, it will never be undone.
Parshley’s naïve faith in the integrity and disinterestedness of science was to be shattered all too soon. Indeed, without recognizing the danger, Adams was already quite familiar with its source:
For many years the iconoclastic Watson strove to explain instincts in suitably behavioristic terms. But neither his definition nor his classification need concern us now, for in 1924 Watson repudiated everything he had previously said about them by declaring that “there are no instincts,” and furthermore, that “there is no such thing as an inheritance of capacity, talent, temperament, mental constitution and characteristics.” With these two statements Watson cast aside the biological as well as the psychological notion of mental inheritance.
For Adams, the behaviorist creed of Watson and Boas was just a curiosity. She didn’t realize they were already riding on the crest of an ideological wave that would submerge the behavioral sciences in a sea of obscurantism for decades to come. Marxism was hardly the only dogma that required their theories to be “true.” The same could be said of many other pet utopias that could generally be included in the scope of E. O. Wilson’s epigram, “Great theory, wrong species.” The ideological imperative was described in a nutshell by psychologist Geoffrey Gorer in an essay entitled The Remaking of Man, published in 1956:
One of the most urgent problems – perhaps the most urgent problem – facing the world today is how to change the character and behavior of adult human beings within a single generation. This problem of rapid transformation has underlaid every revolution (as opposed to coups d’etat) at least from the time of the English Revolution in the seventeenth century, which sought to establish the Rule of the Saints by some modifications in the governing institutions and the laws they promulgated; and from this point of view every revolution has failed… the character of the mass of the population, their attitudes and expectations, change apparently very little.
Up till the present century revolutions were typically concerned with the internal arrangements of one political unit, one country; but the nearly simultaneous development of world-wide communications and world-wide ideologies – democracy, socialism, communism – has posed the problem not merely of how to transform ourselves – whoever ‘ourselves’ may be – but how to transform others.
This imperative shattered the naïve faith of Adams and Parshley in the inevitability of scientific progress with astonishing rapidity. Later, during the heyday of the Blank Slate, Margaret Mead described the triumph of the “new ideas,” just a few short years after their articles appeared in the Mercury:
In the central concept of culture as it was developed by Boas and his students, human beings were viewed as dependent neither on instinct nor on genetically transmitted specific capabilities but on learned ways of life that accumulated slowly through endless borrowing, readaptation, and innovation… The vast panorama which Boas sketched out in 1932 in his discussion of the aims of anthropological research is still the heritage of American anthropology.
And so the darkness fell, and remained for more than half a century. The victory of the Blank Slate was, perhaps, the greatest debacle in the history of scientific thought. Even today the “men of science” are incapable of discussing that history without abundant obfuscation and revision. Still, the salient facts aren’t that hard to ferret out for anyone curious enough to dig for them a little. It would behoove anyone with an exaggerated tendency to believe in the “integrity of science” to grab a shovel.
“Utterly Wrong” Robert Ardrey Vindicated Again. “Scientific American” Embraces the “Hunting Hypothesis”
Posted on April 9th, 2014
The dubious claim that early man never engaged in anything so politically incorrect as hunting was part and parcel of the Blank Slate. In fact, you can almost date its collapse from the time that abashed admissions that he did, in fact, hunt began appearing in the scientific and popular literature. As recently as 1997, for example, British journalist Brian Deer subjected Jane Goodall, no less, to some grossly sexist ridicule as an ignorant “secretary” and “waitress,” because she had dared to notice that chimpanzees hunt and eat meat, and inform the world about it. In the same year, the Public Broadcasting Service aired a series entitled “In Search of Human Origins,” which featured the palpably ludicrous claim that early man satisfied the need for meat to fuel his big brain by duking it out with the hyenas and vultures as a “highly successful scavenger.” The “scavenger” schtick never really passed the “Ho Ho” test, and PBS heaved it overboard in its “Becoming Human” series in 2009, in which viewers learned that,
Homo erectus probably hunted with close-quarters weapons, with spears that were thrown at animals from a short distance, clubs, thrown rocks, weapons like that. They weren’t using long distance projectile weapons that we know of.
The Homo erectus hunt was simple but effective. It fed not just their larger brains, but the growing complexity of that early human society.
This about-face was managed without cracking the faintest smile, or the faintest hint that the network had imposed on the credulity of its audience with the “scavenger” routine just over a decade earlier. Of course, there was also not the faintest mention of Robert Ardrey, who had insisted on the hunting proclivities of early man in his book, The Hunting Hypothesis, published in 1976. By this time, of course, Ardrey had already been reduced to an unperson by the “men of science” in works such as Steven Pinker’s ludicrous revision of history, The Blank Slate. Unlike those who so shamelessly dismiss his legacy today, Ardrey was actually possessed of what H. L. Mencken used to refer to as “common decency.” Instead of burying the contributions of significant thinkers in the past, he had a remarkable facility for digging them up. His works are full of references to such remarkable but little known geniuses as Eugene Marais, Raymond Dart, Henry Eliot Howard, and Carveth Read.
In the case of Read, for example, instead of following the modern practice of declaring him “totally and utterly wrong” and then proceeding to pirate his ideas, Ardrey insisted on crediting his contribution as one of the first, if not the first, to suggest a hunting transition from ape to man in his book, The Origin of Man and His Superstitions. Read coined the term “Lycopithecus” for his hypothesized hunting apes, noting the similarities in social behavior between wolves and man, and wrote,
Moreover, when our ape first pursued game, especially big game (not being by ancient adaptation in structure and instinct a carnivore), he may have been, and probably was, incapable of killing enough prey single-handed; and, if so, he will have profited by becoming both social and cooperative as a hunter, like the wolves and dogs – in short, a sort of wolf-ape (Lycopithecus).
…the less our ancestor in his new career trusted to trees the better for him. Such simple strategy (hunting from trees) could not make him a dominant animal throughout the world; nothing could do this but the gradual attainment of erect gait adapted to running down his prey.
Watch PBS’s Becoming Human series and you’ll see how Read’s hypothesis about the “attainment of erect gait adapted to running down prey” was “rediscovered” a little under a hundred years later.
Fast forward another five years, and we find ourselves treated to yet another vindication of Ardrey and Read, from no less than the relentlessly politically correct Scientific American! An article shockingly entitled Rise of the Human Predator delivers an epiphany that would certainly have amused Ardrey, and caused the ancient Blank Slaters to swallow their gum. If a nail in the coffin of the Blank Slate’s bitter resistance to the Hunting Hypothesis were needed, this would definitely qualify. Here’s the summary:
For decades researchers have been locked in debate over how and when hunting began and how big a role it played in human evolution. Recent analyses of human anatomy, stone tools and animal bones are helping to fill in the details of this game-changing shift in subsistence strategy. This evidence indicates that hunting evolved far earlier than some scholars had envisioned – and profoundly impacted subsequent human evolution.
Stunning, really! Ardrey said almost exactly the same things in The Hunting Hypothesis back in 1976. Of course, nothing would be as “shy-making,” as one of novelist Evelyn Waugh’s “bright young things” might have put it, as any mention of him, or of Carveth Read, so there is none. Nor, for that matter, in spite of citing evidence of hunting going back 1.8 million years, could the author bring himself to mention Raymond Dart, whose statistical evidence for hunting by Australopithecus africanus was first ignored, then subjected to a lame “refutation” by C. K. Brain, but who is now being hailed as the “Father of Cave Taphonomy.”
Somehow, I’m not surprised.
Posted on April 6th, 2014
Popular Mechanics just published an article entitled How Many People Does It Take to Colonize Another Star System? Apparently the number needed to maintain sufficient genetic diversity is very large indeed – 40,000 would be ideal! Unfortunately, if you do the math, the amount of energy it would take to transport that many people to another star system, even allowing a couple of thousand years for the voyage, is enormous. As several commenters pointed out, by the time our technology advances to the point that such missions are feasible, it will also be feasible to send the necessary “genetic diversity” along in the form of frozen eggs and sperm with carefully chosen DNA sequences, complete libraries of human alleles that can be fabricated and inserted into DNA sequences as needed, etc. It might not even be necessary to send anything as bulky as fully formed humans on the voyage. Self-replicating robots could be sent in advance to create housing, farms, and birthing facilities prepared to receive fertilized eggs. The first humans born would have robotic “parents.”
It’s always fun to speculate on what we might be able to do assuming our technology becomes sufficiently advanced. The question is, what can we do now, or at least in the foreseeable future with existing technologies, or ones that seem accessible in the near future? “Existing technologies” means travel times of 25,000 years, give or take. In other words, we must rule out our own species, at least for the time being. It will be necessary for us to send some of our relatives. For some of them – other species – such lengthy interstellar voyages are feasible now. As I wrote in an earlier post,
The 32,000 year old seed of a complex, flowering plant recovered from the ice was recently germinated by a team of scientists in Siberia. Ancient bacteria, as much as 250 million years old have been recovered from sea salt in New Mexico, and also brought back to life. Tiny animals known as tardigrades have survived when exposed to the harsh environment of outer space. We might choose the species from among such candidates most likely to survive the 50,000 to 100,000 years required to journey to nearby stars with conventional rocket propulsion, and most likely to evolve into complex, land-dwelling life forms in the shortest time, and send them now, instead of waiting 100’s or 1000’s of years for the emergence of the advanced technologies necessary to send humans. Slowing down at the destination star would not pose nearly the problem that it does for objects traveling at significant fractions of the speed of light. The necessary maneuvers to enter orbit around and seed promising planets could be performed by on-board computers with plenty of time to spare. Oceans might be seeded with algae in advance of the arrival of organisms that feed on it (and breathe the oxygen it would release).
Why would we want to do such a thing? Survival! Morality exists only because animals equipped with it were more likely to survive. We are one such animal. There is no such thing as an objective “ought.” However, given the reason that morality exists to begin with, the conclusion that nothing can be more immoral than failing to survive does not seem unreasonable. If one accepts that logic, it follows that our first priority “should” be the survival of our own species, and our second should be the preservation of biological life. It’s really just a whim, but I hope that many others will share it. The alternative is to accept that one is a defective biological unit, resigned to extinction, which I personally don’t find an entirely pleasant thought.
Let’s assume that a canonical voyage will last 25,000 years. Conventional rockets are capable of reaching the nearest star systems in that time. By using nuclear propulsion of the type that was successfully tested 50 years ago, we should be able to reach stars within a distance of a dozen light years or so within the same period. As noted above, there are life forms that could survive the voyage. The particular ones chosen would be those most compatible with the conditions existing on candidate planets. Needless to say, the conditions of our own atmosphere, oceans, etc., have been drastically altered by the long existence of life on our planet. Finding such conditions on reachable planets is most unlikely, and our biological voyagers must be chosen accordingly.
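As a sanity check on that canonical figure (my own back-of-envelope arithmetic, not from any of the articles cited here), the average cruise speed a 25,000-year voyage implies is easy to compute:

```python
LIGHT_YEAR_KM = 9.4607e12      # kilometers in one light year
SECONDS_PER_YEAR = 3.156e7     # seconds in one year

def cruise_speed_kms(distance_ly, years):
    """Average speed in km/s needed to cover `distance_ly` light years
    in `years` years, ignoring acceleration and deceleration phases."""
    return distance_ly * LIGHT_YEAR_KM / (years * SECONDS_PER_YEAR)

# Alpha Centauri (about 4.37 light years) in 25,000 years works out
# to roughly 52 km/s, about three times Voyager 1's ~17 km/s.
v_nearest = cruise_speed_kms(4.37, 25_000)

# A star a dozen light years out in the same time needs ~144 km/s,
# which is where nuclear propulsion becomes necessary.
v_dozen = cruise_speed_kms(12, 25_000)
```

The numbers bear out the division of labor in the paragraph above: the nearest systems are marginal for conventional rockets, while the dozen-light-year sphere calls for nuclear propulsion.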
It will be necessary to develop certain technologies that we do not as yet possess. Fortunately, they are all within reach, and nowhere near as demanding as, say, fusion or anti-matter propulsion systems. For example, we will need a timing device that can keep “ticking” for 25,000 years, and, when necessary, signal the rest of the interstellar package to “wake up.” The Long Now Foundation has made some interesting starts in this direction, in the form of giant mechanical clocks that are designed to run for 10,000 years. Of course, those designs aren’t exactly what we’re looking for, but if one can conceive of a 10,000 year mechanical clock, then a 25,000 year digital clock must be feasible as well. A similar problem was solved by John Harrison more than two centuries ago, in the form of a clock that kept time accurately enough to track a ship’s longitude. If he succeeded in solving the British Navy’s problem with the technology that existed then, we should be able to solve our own clock problem with a technology that is now far more advanced.
It will be necessary to develop systems that will perform reliably over extremely long times. As it happens, that, too, is a problem that has already been taken in hand by earth-bound scientists. The relevant acronym is ULLS (Ultra Long Life Systems), and some of the required technologies are discussed in a NASA presentation entitled, Technology Needs for the Development of the Ultra Long Life Missions. Some of the ideas being considered include,
Generic Redundant Blocks – redundant components that are generic and can be programmed to replace any type of failed component. An example might be field-programmable gate arrays (FPGA’s).
Adaptive Fault Tolerance – Working around failures instead of replacing failed components with spares.
Self-repair components – Including self-repair with nano-technologies and self-healing with biologically inspired technologies.
Regenerative systems – Modular regrowth with biologically inspired technologies.
An interesting presentation on the subject by NASA scientist Henry Garrett, who happens to prefer Project Orion-type interstellar missions propelled by few-kiloton nuclear devices, may be found here (sorry about the long-winded introduction). Dave Reneke recently posted an interesting, if somewhat speculative, article on various types of self-replicating interstellar probes entitled How Self-Replicating Spacecraft Could Take Over the Galaxy.
Of course, none of this fine technology will work without a reliable power supply that needs to last, potentially, for upwards of 25,000 years. It so happens that we have just the isotope – plutonium 239. You might call it the ultimate dual use material – life or death. It is ideal for making nuclear bombs or carrying life across interstellar distances. Of course, another isotope of plutonium, plutonium 238, has already been used to power many spacecraft, including the Voyagers and New Horizons. Unfortunately, with a radioactive half-life of only 87.7 years, there would only be a few atoms of it left after 25,000 years. Pu-239, on the other hand, has a half-life of 24,100 years – just about what’s needed. Of course, it could only provide a tiny fraction of the power of Pu-238 via radioactive decay. Not much is required, though – only enough to keep the clock going. At key points in the mission, of course, a great deal more power will be necessary. And that’s what brings us to the reason that Pu-239 is ideal – it’s fissile. In other words, it’s an ideal fuel for a nuclear reactor. When high power is needed, the plutonium can be assembled into a critical mass, serving as either a conventional reactor or a space propulsion system.
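The half-life arithmetic behind that comparison is worth making explicit. The fraction of a radioisotope surviving follows the exponential decay law, and plugging in the two half-lives quoted above (a quick check of my own) makes the contrast stark:

```python
def fraction_remaining(half_life_years, elapsed_years):
    """Fraction of a radioisotope surviving after `elapsed_years`,
    from the exponential decay law N/N0 = 0.5 ** (t / t_half)."""
    return 0.5 ** (elapsed_years / half_life_years)

# Pu-238 (half-life 87.7 years): about 285 half-lives elapse in
# 25,000 years, leaving a fraction on the order of 1e-86 --
# effectively nothing of the original material.
pu238_left = fraction_remaining(87.7, 25_000)

# Pu-239 (half-life 24,100 years): roughly half survives the voyage,
# both as a trickle of decay heat and as intact fissile fuel.
pu239_left = fraction_remaining(24_100, 25_000)
```

Which is precisely why Pu-239 is the ideal candidate: it decays slowly enough to last the trip, and what remains is still fissile when high power is finally needed.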
I am convinced that all of the above can be accomplished in a matter of a decade or two instead of centuries if we can somehow again achieve the level of collective willpower we reached during the Apollo Program. Of course, this old planet of ours could easily go on supporting high tech human civilizations until we master the art of interstellar travel on our own. It might – but why take chances?
“Grounds of War” – A New Paper on Territoriality with Remarkable “Similarities” to the Work of Robert Ardrey
Posted on April 4th, 2014 6 comments
Robert Ardrey was a brilliant man. After a successful career as a playwright, he became an anthropologist, and wrote a series of four books in the ’60s and ’70s refuting the absurd orthodoxy of the Blank Slate that prevailed at the time. In other words, to the tune of vociferous abuse from the “men of science” in psychology, sociology, anthropology, and the rest of the behavioral sciences, he insisted that there actually is such a thing as human nature. The abuse was an honor Ardrey well deserved, because he proved to be a very potent antidote to the Blank Slate nonsense, perhaps the most remarkable perversion of science of all time. Indeed, he was the most influential and effective opponent of the Blank Slate in its heyday. That fact was nicely documented by the Blank Slaters themselves in an invaluable little collection of essays entitled Man and Aggression. The book, which appeared in 1968, was edited by arch-Blank Slater Ashley Montagu, and was aimed mainly at Ardrey, with a few barbs reserved for Nobel laureate Konrad Lorenz, and with novelist William Golding thrown in for comic effect. As I write this, used copies are still available at Amazon for just a penny. In case you happen to be hard up for cash, here’s a quote from the book taken from an essay by psychologist Geoffrey Gorer:
Almost without question, Robert Ardrey is today the most influential writer in English dealing with the innate or instinctive attributes of human nature, and the most skilled populariser of the findings of paleo-anthropologists, ethologists, and biological experimenters… He is a skilled writer, with a lively command of English prose, a pretty turn of wit, and a dramatist’s skill in exposition; he is also a good reporter, with the reporter’s eye for the significant detail, the striking visual impression. He has taken a look at nearly all the current work in Africa of paleo-anthropologists and ethologists; time and again, a couple of his paragraphs can make vivid a site, such as the Olduvai Gorge, which has been merely a name in a hundred articles.
…he does not distort his authorities beyond what is inevitable in any selection and condensation… even those familiar with most of the literature are likely to find descriptions of research they had hitherto ignored, particularly in The Territorial Imperative, with its bibliography of 245 items.
Of course, we now live in more enlightened times, and the Blank Slate collapsed under the weight of its own absurdity years ago. In a word, the life work of Robert Ardrey has been heroically vindicated, no? Well, not exactly. You see, the “men of science” could never forgive Ardrey, a mere playwright, for shaming them. Indeed, Steven Pinker, one of the tribe, went to the trouble of writing a remarkable revision of history entitled, appropriately enough, The Blank Slate, in which he performed the feat of completely ignoring Ardrey, other than in a single paragraph in which he claimed, on the authority of Richard Dawkins, that Ardrey had been “totally and utterly wrong!” It’s like writing that Einstein was “totally and utterly wrong” about relativity because he didn’t think right about quantum theory. I won’t go into the specious reasons Pinker used to fob off this gross imposture on his readers. I’ve gone into them in some detail, for example, here and here. Suffice it to say that Ardrey’s support for the theory of group selection had much to do with it.
Fast forward to 2014. Two Oxford academics by the names of Monica Duffy Toft and Dominic Johnson have just published a paper in the journal International Security entitled Grounds of War: The Evolution of Territorial Conflict (hattip hbd-chick). And what is it about that title that brings Ardrey to mind? Ah, yes, as those familiar with his work will recall, he wrote a book entitled The Territorial Imperative, published back in 1966. As it happens, the “similarities” don’t end there. Allow me to point out some of the others that appear in this “original” paper:
Toft & Johnson: Territorial behavior—or “territoriality”—is prevalent not only among humans, but across the animal kingdom. It has evolved independently across a wide range of taxonomic groups and ecological contexts, whether from the depths of the ocean to rainforest canopies, or from deserts to the Arctic tundra. This recurrence of territoriality suggests evolutionary “convergence” on a tried and tested strategic solution to a common environmental challenge. Organisms have tended to develop territoriality because it is an effective strategy for survival and maximizing “Darwinian fitness” (reproduction).
Ardrey: Territorial behavior in animals, of the past few decades, has attracted the attention of hundreds of competent specialists who have recorded their observations and their reasoned conclusions in obscure professional publications. The subject is very nearly as well known to the student of animal behavior as is the relation of mother and infant to the student of human behavior. Furthermore, many of the concerned scientists, as we shall see, believe as do I that man is a territorial species, and that the behavior so widely observed in animal species is equally characteristic of our own.
Toft & Johnson: Across the animal kingdom, holders of territory (or “residents”) tend to have a higher probability of winning contests, even against stronger intruders. Territoriality is thus heavily influenced by who was there first.
Ardrey: We may also say that in all territorial species, without exception, possession of a territory lends enhanced energy to the proprietor. Students of animal behavior cannot agree as to why this should be, but the challenger is almost invariably defeated, the intruder expelled.
Toft & Johnson: Territoriality does not necessarily lead to violence. Indeed, biologists regard it as a mechanism that evolved to avoid violence. By partitioning living space according to established behavioral conventions, animals can avoid the costs associated with constant fighting. Furthermore, although discussions of territorial behavior tend to focus on aggression, territorial behavior has two distinct components: attack and avoidance. Residents tend to attack in defense of their territory (fight), intruders tend to withdraw (flight).
Ardrey: The territories of howler (monkey) clans are large, the borders vague. But clans have only to sight each other in this no man’s land and total warfare breaks out. Rage shakes the forest. That rage, however, takes none but vocal expression… Should intrusion occur, these voices joined will be the artillery of battle. And strictly in accord with the territorial principle, the home team will always win, the visiting team will always withdraw.
I could multiply such “similarities” into the dozens. Far be it from me, however, to charge the two authors with anything so crude as plagiarism. Indeed, Toft and Johnson actually do take care to cite Ardrey. Here’s what they have to say about him:
The idea that evolution helps to explain human territorial behavior is not new. Robert Ardrey’s popular book The Territorial Imperative, published in the 1960s, championed the role of territorial instincts in human conflict. This account, however, suffers from some now outdated views of evolution, for example, the idea that behaviors are “hard-wired,” or that they evolved because they helped the group or the species as a whole.
Here we find Toft and Johnson squawking to order like two Pinkeresque parrots. One must charitably assume that neither of them has ever actually read Ardrey, because otherwise one cannot construe this bit as anything other than a deliberate falsehood. This is what the two have to say about what they mean by the term “hard-wired”:
As with many other human traits, territoriality might be loosely considered not as “hard-wired” but as “soft-wired”—a component of human nature but one that is responsive to prevailing conditions. Power, rational choice, domestic politics, institutions, and culture are of course important as well in explaining territorial conflict, but evolutionary biology can provide additional explanatory power.
I’m not sure if Ardrey ever even used the term “hard-wired,” but if he did it certainly wasn’t in the sense that Toft and Johnson use it. He constantly and repeatedly insisted on the “soft-wired” nature of human behavioral predispositions. For example, from The Territorial Imperative:
The open instinct, a combination in varying portion of genetic design and relevant experience, is the common sort in all higher animal forms. As beginning with the digger wasp we proceed higher and higher in the animal orders, the closed instinct all but vanishes, the open instinct incorporates more and more a learned portion. In man it reaches a maximum of learning, a minimum of design.
There are many similar passages in Ardrey’s work. Turning to the next charge, I know of nothing therein that suggests that he ever believed that selection actually took place at the species level. He did occasionally point out the obvious truth that various behavioral traits tend to benefit a species as a whole rather than harm it, but the claim that this amounts to support for species-level selection is nonsense. Readers can check this for themselves by reading, for example, the last page of Chapter 3, section 2 of The Territorial Imperative. Ardrey did support theories of group selection. So did Darwin. So did E. O. Wilson in his latest book, The Social Conquest of Earth. Does that fact also disqualify those two from any claim to their own ideas? What’s next? Will Toft and Johnson come up with an “original” theory of evolution by natural selection? Perhaps they could even write a book about it. Allow me to suggest the title On the Origin of Species. They could follow that with their own versions of Wilson’s Sociobiology and On Human Nature.
The saddest thing about it all is that Toft and Johnson are likely to get away with this revision of history a la Dawkins and Pinker. After all, the academics and other “men of science” hate Ardrey. How dare he be right when almost all of them were embracing the mirage of the Blank Slate! How dare a mere playwright do such a thing?
Posted on March 30th, 2014 2 comments
One of the favorite hobbies of secular philosophers of late has been the fabrication of new and improved systems of morality. Perhaps the best known example is outlined in Sam Harris’ The Moral Landscape. If conscientiously applied, we are promised, they will usher in nebulous utopias in which a common thread is some version of “human flourishing.” We have already completed an experimental investigation of how these fancy theories work in practice. It was called Communism. Many eggs were broken to make that omelet, but the omelet never materialized. That unfortunate experience alone should be enough to dissuade us from poking a stick into the same hornet’s nest again.
The Communists were at least realistic enough to realize that their system wouldn’t work without a radical transformation in human behavior. For that to happen, it was necessary for our behavioral habits to be almost infinitely malleable, a requirement that spawned many of the 20th century versions of the Blank Slate, and perverted the behavioral sciences for more than half a century. Since it became clear, as Trotsky once put it rather euphemistically just before Stalin had him murdered, that Communism had “ended in a utopia,” most of the “not in our genes” crowd have either mercifully died or been dragged kicking and screaming back into the real world. Practitioners of the behavioral “sciences” are now at least generally agreed as to the truth of the proposition, sufficiently obvious to any ten-year-old, that there actually is such a thing as human nature.
That hasn’t deterred the inventors of sure-fire new universal moralities. They seem to think that they can finesse the problem by persuading us that we should just ignore those aspects of our nature that stand in the way of “human flourishing.” It won’t work for them any more than it worked for the Communists. This stubborn fact was demonstrated yet again in rather amusing fashion on the occasion of the publication of a somewhat controversial book in Australia.
The title of the book was The Conservative Revolution by Cory Bernardi. The particular aspect of human nature that its release highlighted was our predisposition to adopt dual systems of morality, in which radically different rules apply depending on whether one is dealing with one’s ingroup or one’s outgroup. Robert Ardrey called the phenomenon the “Amity/Enmity Complex,” and it has played a profound and fundamental role in the endemic warfare our species has engaged in since time immemorial. The philosophy outlined in The Conservative Revolution would be familiar to most southern Republicans in the US. Bernardi’s ingroup is the Australian political right. In other words, he is positioned firmly in the outgroup of the political left. When he published the book, “warfare” was not long in coming.
The reaction of the leftist ingroup in Australia was furious. To characterize it as hysterical frothing at the mouth would be putting it mildly. The data demonstrating this enraged reaction has been kindly collected by the people at Amazon in the form of reader reviews of the book. As I write this, there are 554 of them, and virtually all of them, whether “five star” or “one star,” are literary reflections of a two-year old’s temper tantrum. Here are some excerpts from some of the 421 “one star” reviews:
It’s only 178 pages long, and at the current price of just under $27, it’s quite expensive as well. So already one’s expectations are for a good quality product, given that it costs over 15 cents per page (or 30 cents per sheet, in other words). Just for comparison, my local Woolworths has toilet paper on sale for 20 cents per ONE HUNDRED sheets, or less than 1% the price per sheet of this book!!
It made an excellent liner for my bird cage. I love seeing my rainbow parakeets taking a dump on his head.
The Dark One hungers. In his pit of eternal hatred he squats in the darkness feeding on the screams of the weak. Soon, his blood tide reaches a peak and he will scourge the unbelievers.
…and so on. Here are some of the 105 “five star” reviews:
Many of the rituals I frequently practice – mostly summonings of minor demons – require ‘hate’ as an active ingredient. Before this book, I never really knew what to do. When I attempted to provide the hate myself, I found it difficult to focus and the rituals often went wrong (I even ended up losing a hand once, that was a pain to deal with). After that, I tried kidnapping some of my particularly nasty neighbours, but while that worked considerably better, it certainly wasn’t perfect – often fear would override the hate I needed, and of course I had to kill them afterwards, and disposing of all of the bodies was starting to get really annoying. Then this book came along, and it took away all of the hassle of finding hatred.
“Conservative Revolution” is the much-anticipated release by Cory Bestiality, after the success of his collaborative work on the ‘Real Solutions’ pamphlet. Effortlessly blending the Palaeofantasy, Historical Fiction and Political and Philosophical Satire genres, Bestiality creates a largely effective and revealing expose of the fallacies of Christian Fundamentalism and neoconservative ideology. Whilst lacking the insight and depth of ‘Real Solutions’, Bestiality’s new work is clearly inspired by similar writings, from Adolf Hitler’s stirring call to action, “Mein Kampf”, to Sarah Palin’s “Going Rogue”
Short and succinct! In just over 100 pages I learned that Adolf Hitler was a very moderate, balanced, caring and compassionate man in comparison to Corey Bernardi.
One wonders that there are so many people in Australia who trouble themselves to write such stuff. It’s certainly a tribute to the power of Ardrey’s “Complex.” The sheer irrationality of it is demonstrated by the fact that Bernardi is laughing all the way to the bank. The book has already gone to a second printing, and the publisher is rubbing his hands as copies fly off the bookstore shelves. The affair is just another data point swimming in an ocean of others, all pointing to a very fundamental truth: the outgroup have ye always with you.
Consider the ingroup responsible for composing most of these furious anathemas. It is the ingroup of the secular left, which lives in more or less the same ideological box in Australia as its analogs in Western Europe and North America. In other words, this stuff is coming from the very ingroup most busily engaged in cobbling together spiffy new moralities which are to be characterized by universal human brotherhood! Sorry my friends – no ingroup without an outgroup. Even if you ushered in the Brave New World of “human flourishing” by exterminating the very significant proportion of the population that agrees with Cory Bernardi, another outgroup would inevitably crop up to take its place. In the absence of an outgroup, it is our nature to simply create another one.
It’s hard to imagine a less promising ingroup to gladden the rest of us with “human flourishing” than the modern secular left. As Catholic philosopher Joseph Bottum notes in his book, An Anxious Age: The Post-Protestant Ethic and the Spirit of America, in the US these people are the direct descendants of the Puritans. The overbearing self-righteousness evident in these “book reviews” seems to confirm that assessment. They are saturated with a level of bile and hatred of the “other” that one normally expects to find only among religious fanatics. And according to Bottum, that is basically what they are. His take is summarized in a review of his book by David Goldman:
Joseph Bottum, by contrast, examines post-Protestant secular religion with empathy, and contends that it gained force and staying power by recasting the old Mainline Protestantism in the form of catechistic worldly categories: anti-racism, anti-gender discrimination, anti-inequality, and so forth. What sustains the heirs of the now-defunct Protestant consensus, he concludes, is a sense of the sacred, but one that seeks the security of personal salvation through assuming the right stance on social and political issues. Precisely because the new secular religion permeates into the pores of everyday life, it sustains the certitude of salvation and a self-perpetuating spiritual aura. Secularism has succeeded on religious terms. That is an uncommon way of understanding the issue, and a powerful one.
Perhaps “human flourishing” would be a bit more plausible if we were all Benjamin Franklins, or Abraham Lincolns, or even Neville Chamberlains. As William Shakespeare put it in Twelfth Night, “Anything but a devil of a Puritan.”
Posted on March 29th, 2014 1 comment
People worry about a “grounding” for morality. There’s really no need to. As Marc Bekoff and Jessica Pierce pointed out in Wild Justice – The Moral Lives of Animals, there are analogs of moral behavior in many species besides our own. Eventually some bright Ph.D. will design an experiment to scan the brains of chimpanzees as they make morally loaded decisions, and discover that the relevant equipment in their brains is located more or less in the same places as in ours. Other animals don’t wonder why one thing is good and another evil. They’re not intelligent enough to worry about it. Hominids are Mother Nature’s first experiment with creatures that are smart enough to worry about it. The result of this cobbling of big brains onto the already existing mental equipment responsible for moral emotions and perceptions hasn’t been entirely happy. In fact, it has caused endless confusion through the ages.
We can’t just perceive one thing as good, and another as evil, and leave it at that like other animals. We’re too smart for that. We have to invent a story to explain why. We perceive Good and Evil as things independent of ourselves, so we need to come up with some kind of myth about how they got there. It’s an impossible task, because Good and Evil don’t exist as independent things. They are subjective impressions. It is our nature to perceive them as things because morality has always worked best that way, at least until now. That is why philosophers and theologians have spent the ages grasping at the mirage.
We are much like the patients described in Michael Gazzaniga’s The Ethical Brain, who had their left and right brain hemispheres severed from each other to relieve severe epilepsy. According to Gazzaniga,
Beyond the finding…that the left hemisphere makes strange input logical, it includes a special region that interprets the inputs we receive every moment and weaves them into stories to form the ongoing narrative of our self-image and our beliefs. I have called this area of the left hemisphere the interpreter because it seeks explanations for internal and external events and expands on the actual facts we experience to make sense of, or interpret, the events of our life.
Experiments on split-brain patients reveal how readily the left brain interpreter can make up stories and beliefs. In one experiment, for example, when the word walk was presented only to the right side of a patient’s brain, he got up and started walking. When he was asked why he did this, the left brain (where language is stored and where the word walk was not presented) quickly created a reason for the action: “I wanted to go get a Coke.”
We constantly invent similar stories to rationalize to ourselves why something we have just perceived as good really is Good, or why something we have perceived as evil really is Evil. Jonathan Haidt describes the same phenomenon in his The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Noting that he will present evidence in the paper to back up his claims, he writes,
These findings offer four reasons for doubting the causality of reasoning in moral judgment: 1) there are two cognitive processes at work — reasoning and intuition — and the reasoning process has been overemphasized; 2) reasoning is often motivated; 3) the reasoning process constructs post-hoc justifications, yet we experience the illusion of objective reasoning; and 4) moral action covaries with moral emotion more than with moral reasoning.
The most common post-hoc justification, of course, has always been God. Coming up with a God-based narrative is a piece of cake compared to the alternative. After all, if the big guy upstairs wants one thing to be Good and another Evil, and promises to fry you in hell forever if you beg to differ with him, it’s easy to find reasons to agree with Him. Take him out of the mix, however, and things get more complicated. We come up with all kinds of amusing and flimsy rationalizations to demonstrate the existence of the non-existent.
Consider, for example, the matter of Rights which, like Good and Evil, exist as subjective impressions that our mind portrays to us as objective things. The website of the Foundation for Economic Education has a regular “Arena” feature hosting debates on various topics, and a while back the question was, “Do Natural Rights Exist?” The affirmative side was taken by Tibor Machan in a piece entitled, “Natural Rights Come From Human Nature.” If you get the sinking feeling on reading this that you’re about to see yet another version of the naturalistic fallacy, unfortunately you would be right. Machan sums up his argument in the final two paragraphs as follows:
We are all dependent upon knowing the nature of things so that we can organize our knowledge of the world. We know, for example, that there are fruits (a class of some kind of beings) and games (another class) and subatomic particles (yet another class) and so on. These classes or natures of things are not something separate from the things being classified, but constitute their common features, ones without which they wouldn’t be what they are. Across the world, for example, apples and dogs and chickens and tomatoes and, yes, human beings are all recognized for what they are because we know their natures even when some cases are difficult to identify fully, completely, or when there are some oddities involved.
So there is good reason that governments do not create rights for us—we have them, instead, by virtue of our human nature. And this puts a limit on what governments may do, including do to us. They need to secure our rights, and as they do so they must also respect them.
Is it just me, or is this transparent conflation of “is” and “ought” sufficiently obvious to any ten-year-old? Well, it must be me, because according to the poll accompanying the debate, 66% of the respondents thought that Machan “won” with this argument, according to which Natural Rights “evolved” right along with our hands and feet. Obviously, since people “know in their bones” that Rights are real things, it doesn’t take a very profound argument to convince them that “it must be true.” In a word, if you think that the world will sink into a fetid sewer of moral relativism and debauchery because there is no “grounding of morality,” I have good news for you. It ain’t so. If our moral equipment works perfectly well even when the only thing propping it up is such a flimsy post-hoc rationalization, it can probably get along just as well without one.
Posted on March 26th, 2014 2 comments
Times have changed in Germany since Obama won the Nobel Peace Prize and spoke before 200,000 enraptured fans in Berlin. Only 6,000 turned out to hear him when he returned last year. Meanwhile, the media there, particularly since the recent events in Ukraine, has been resurrecting themes that were familiar during the Cold War. The political left is beginning to turn to Russia, and the political right is decrying the weakness of the Obama Administration. For example, while the overall tone of the main news magazine, Der Spiegel, has been anti-Russian, Jakob Augstein, whose column “When in Doubt, to the Left,” appears there regularly, wrote a couple of days ago:
Media and political pundits want to breathe new life into an old “face of the enemy” (Feindbild): the evil Russian. As far as Russia is concerned, the West is once again stuck in the same rut as in the cynical days of yesteryear, when US Secretary of Defense Caspar Weinberger publicly expressed his hope that the superpower in the East would go under “with a whimper, not with a bang.” Hillary Clinton just compared Putin to Hitler. That’s how one recommends oneself in the US as a potential Democratic presidential candidate. Meanwhile, the Russia policy of the two East Germans Merkel and (German President) Gauck is as resentful as if they were exploiting their offices for private trauma therapy.
Meanwhile the polls are showing that the public isn’t inclined to tag along. A majority of Germans do not consider Putin unreasonable for viewing the Crimea as a Russian sphere of influence. (As opposed to Putin) the tendency to ignore and violate borders is a characteristic of the West. It constantly seeks to fish in troubled waters (“periklitieren”), to use one of Bismarck’s favorite expressions, outside of its own sphere of influence. Or, more to the point, it claims the whole world as its sphere of interest. That’s just the problem.
The West can never get enough, and is therefore insatiable… The Asians have finally drawn their own conclusions: the lamb must now itself become the wolf.
It’s clear from the reader comments that appeared after a recent Spiegel article on the crisis that Augstein hasn’t misrepresented German attitudes. The article itself, entitled, “The Ukraine: Obama Expresses Scorn for Ukraine as a Regional Power,” includes the understated byline, “This isn’t how de-escalation should look.” Some typical examples:
The ineffectual US President dares to shoot his mouth off like this? He never seems to come up with anything concrete and positive except stupidities… I demand that his Nobel Peace Prize be revoked. (whiteelephant1)
The US is clearly on the path of escalation… It would be nice if the German media would adopt a more critical attitude, and not always just go along with everything the US/EU says. Putin isn’t the danger. The danger comes from those who now sense an opportunity to finish Russia once and for all. That’s what this is really about. (mc6206)
Very nice, Mr. Obama, just keep playing with fire. After all, thank God there’s a buffer zone between Russia and your homeland in case Russia loses its nerve. It’s called EUROPE! (Korf)
If Russia is just a “regional power,” and one has more important problems to deal with, why these hysterical attempts to isolate Russia and portray her in a bad light. Who is supposed to be swallowing such stupidities from Obama? (itf)
Well, we’re not exactly seeing a return to the last super-eruption of anti-Americanism in Germany that reached its climax about 15 years ago, but the honeymoon is clearly over.
UPDATE: Der Spiegel just published its take on an interview with former Chancellor Helmut Schmidt that appeared in the weekly newspaper, Die Zeit. Schmidt is a highly intelligent man whose memoirs are well worth reading, and who can hardly be described as anti-American. Der Spiegel headlines the interview, “Former Chancellor Schmidt Defends Putin’s Ukraine Policy.” The byline reads, “Helmut Schmidt finds the actions of Russia in the Crimea ‘completely understandable,’ and considers sanctions ‘dumb stuff’ (dummes Zeug). No doubt the situation in Ukraine is dangerous – however, in the former Chancellor’s opinion, the West is at fault.”
A few excerpts from the article in Der Spiegel:
Schmidt was highly critical of the way in which the Crimea crisis has been handled in the West. He referred to the sanctions imposed on Russia by the European Union and the USA as “dumb stuff.” In Schmidt’s opinion, attempts to impose further sanctions would be misguided. For the most part they would have merely symbolic value, “but they would affect the West just as much as the Russians.”
Schmidt’s words provide support to those taking part in the debate in Germany who favor looking at things from the Russian point of view. Former Chancellor and party colleague Gerhard Schröder recently spoke in similar terms.
According to Schmidt, the situation in Ukraine is “dangerous, because the West has worked itself into a frenzy.” (literally, “has become terribly excited”) As a result, “the overwrought reaction in the West has naturally led to a similar overwrought reaction in Russian public opinion and politics.” Referring to the (reserved) policy of Chancellor Angela Merkel, the 95-year-old said, “In this case praise for the caution of the German Chancellor is appropriate.”
So far the editorial narrative at Der Spiegel has been mainly anti-Russian. Lately, however, there has been a shift to a more circumspect approach, with articles critical of right wing nationalists in the current Kiev regime, and others taking note of Western media darling Yulia Timoshenko’s hateful tirade against Putin in an overheard telephone conversation, in which she said she was “ready to pick up a machine pistol and shoot this piece of crap in the head,” suggested the use of nuclear weapons to kill Russians, and so on. It is noteworthy that the German Green Party, which has tacked to the right in recent years, immediately condemned Schmidt’s comments, while the Party of the Left, positioned to the left of the German Socialist Party (SPD), praised his remarks.
Posted on March 24th, 2014 2 comments
No doubt sports fans are aware of the “C’mon Man” collections of the sports week’s worst bloopers and blown calls on ESPN. That was my reaction on reading a piece entitled Yes, Atheism and Conservatism Are Compatible by fellow conservative atheist Charles C. W. Cooke at National Review Online. The article was a reaction to the recent unceremonious eviction of the atheist group American Atheists from a booth at CPAC after they had been invited to attend by current Chair of the American Conservative Union, Al Cardenas.
Conservative activist Brent Bozell had thundered,
The invitation extended by the ACU, Al Cardenas and CPAC to American Atheists to have a booth is more than an attack on conservative principles. It is an attack on God Himself. American Atheists is an organization devoted to the hatred of God. How on earth could CPAC, or the ACU and its board of directors, and Al Cardenas condone such an atrocity?
to which Cooke quite reasonably responds,
The particular merits of the American Atheists group to one side, this is a rather astounding thing for Bozell to have said. In just 63 words, he confuses disbelief in God for “hatred” for God — a mistake that not only begs the question but is inherently absurd (one cannot very well hate what one does not believe is there); he condemns an entire conference on the basis of one participant — not a good look for a struggling movement, I’m afraid; and, most alarmingly perhaps, he insinuates that one cannot simultaneously be a conservative and an atheist. I reject this idea — and with force.
If atheism and conservatism are incompatible, then I am not a conservative. And nor, I am given to understand, are George Will, Charles Krauthammer, Anthony Daniels, Walter Olson, Heather Mac Donald, James Taranto, Allahpundit, or S. E. Cupp.
He continues with the same point that I made in a recent post:
One of the problems we have when thinking about atheism in the modern era is that the word has been hijacked and turned into a political position when it is no such thing. The Oxford English Dictionary defines an “atheist” as someone who exhibits “disbelief in, or denial of, the existence of a god.” That’s me right there — and that really is the extent of it.
Cooke continues with an assessment of the Christian legacy in world history which is rather more benevolent than anything I would venture. And then he goes completely off the tracks. As readers of this blog might guess, it happens in the context of an issue that speaks to our moral emotions – the question of Rights. Again quoting Cooke,
A great deal of the friction between atheists and conservatives seems to derive from a reasonable question. “If you don’t consider that human beings are entitled to ‘God given’ liberties,” I am often asked, “don’t you believe that the unalienable rights that you spend your days defending are merely the product of ancient legal accidents or of the one-time whims of transient majorities?” Well, no, not really. As far as I can see, the American settlement can thrive perfectly well within my worldview. God or no God, the Constitution, the Bill of Rights, and the Declaration of Independence are all built upon centuries of English law, human experience, and British and European philosophy, and the natural-law case for them stands nicely on its own.
Not really. Sorry, but without a God, the “natural-law case for them” collapses as a non sequitur. Without a God, “natural law” can’t grab a single Right, Good, or Evil out of anyone’s subjective consciousness and magically transmute it into a thing-in-itself. And in spite of the fervent hand-wringing of every conservative on the face of the planet, the fact that it can’t won’t cause a God to miraculously spring into existence. The subjective perception of rights in the human consciousness will continue to function just as it always has. That perception isn’t going anywhere. It neither requires nor will pay any attention to the Christians who are disappointed because there’s no God to transmute the perception into an independent Thing, nor to the atheists, conservative and otherwise, who are disappointed because they can’t transmute it into a Thing by invoking equally imaginary “natural laws.” Adding insult to injury, Cooke continues,
“Of the nature of this being (God),” Jefferson wrote to John Adams in 1817, “we know nothing.” Neither do I. Indeed, I do not believe that there is a “being” at all. And yet one can reasonably take Jefferson’s example and, without having to have an answer as to what created the world, merely rely upon the same sources as he did – upon Locke and Newton and Cicero and Bacon and, ultimately, upon one’s own human reason. From this, one can argue that the properties of the universe suggest self-ownership, that this self-ownership yields certain rights that should be held to be unalienable, and that among these are Life, Liberty, and the Pursuit of Happiness. After all, that’s what we’re all righting for, Right?
(Pause for loud forehead slap.) Locke, Newton, Cicero and Bacon? Smart men, no doubt, but what on earth could they conceivably have known about the evolutionary origins of such concepts as Rights? Good grief, Locke was a Blank Slater, albeit one of a much different color than the likes of John Stuart Mill or Ashley Montagu. Are we really to believe that one can become enlightened concerning “Rights” by reading Locke, Newton, Cicero, and Bacon until one reaches a state of Don Quixote-like stupefaction? “Human reason?” Hey, I’m game, as long as the chain of rational arguments doesn’t include the “miracle happens” step introduced in one of Gary Larson’s “The Far Side” cartoons. And the leap from “human reason” to “self-ownership” as a property of the universe? All I can say is, Cooke should have stopped while he was ahead. C’mon, man!
Posted on March 23rd, 2014 No comments
Massimo d’Azeglio was a 19th-century Italian patriot. He occasionally turns up on the Internet as “Massimo Taparelli” as well. I happened to run across his memoirs in the random walk that accounts for most of what I read. Sometimes you get lucky. So it was with d’Azeglio, who turned out to be a highly original thinker, and whose Recollections are full of all kinds of whimsical bon mots.
It turns out that there’s a lot about d’Azeglio that reminds me of my favorite novelist, Stendhal. He had a highly developed sense of personal honor and dignity. He admired the fine arts, and dabbled in painting himself as a young man, as did Stendhal in acting. Both were profoundly influenced by their experiences in Milan, and Stendhal, who experienced a love affair there that turned out tragically, at least for a Frenchman, because the lady refused to give in, went so far as to call himself “Milanese” on his gravestone. Both were dismayed by foreign domination of their native lands. And finally, both were filled with hope, fear, and anxiety about whether the readers of the future, the people Stendhal dreamed of as “The Happy Few,” would notice them. All of which makes it all the more interesting that d’Azeglio’s take on Napoleon’s occupation of Italy was exactly the opposite of Stendhal’s.
Stendhal, of course, worshipped the great man, as anyone who has read The Red and the Black is well aware. To hear him tell it, the only ones in Italy who opposed the French occupation were a few ultramontane priests and reactionary aristocrats. For example, from The Charterhouse of Parma,
On the 15th of May, 1796, General Bonaparte made his entry into Milan… A whole people discovered that everything that until then it had respected was supremely ridiculous, if not actually hateful. People saw that in order to be really happy after centuries of cloying sensations, it was necessary to love one’s country with a real love and to seek out heroic actions… These French soldiers laughed and sang all day long; they were all under 25 years of age, and their Commander in Chief, who had reached twenty-seven, was reckoned the oldest man in his army. This gaiety, this youthfulness, this irresponsibility, furnished a jocular reply to the furious preachings of the monks, who, for six months, had been announcing from the pulpit that the French were monsters, obliged, upon pain of death, to burn down everything and to cut off everyone’s head… At the most it would have been possible to point to a few families belonging to the higher ranks of the nobility, who had retired to their palaces in the country, as though in a sullen revolt against the prevailing high spirits and the expansion of every heart.
After the French were temporarily driven out in Napoleon’s absence,
These gentlemen, quite worthy people when they were not in a state of panic, but who were always trembling, succeeded in getting round the Austrian General: a good enough man at heart, he let himself be persuaded that severity was the best policy, and ordered the arrest of one hundred and fifty patriots: quite the best men to be found in Italy at the time.
Which brings us to some essential differences between the two men. Whereas d’Azeglio adored his father, Stendhal loathed his, and always blamed him for the loss of his mother, whom he madly adored, at the age of four. And whereas Stendhal always envied the aristocracy he portrayed with such spite, d’Azeglio actually belonged to it. His father might easily have passed for one of “these gentlemen,” although by his son’s account he was a brave soldier who wasn’t given to trembling, and was neither harsh nor greedy. So it was that, though both men were romantic patriots, and both were in some sense products of and profoundly influenced by the ideals of the French Revolution, d’Azeglio’s recollection of the occupation was not so rosy. In his words,
I have already said that to the minds of his contemporaries Napoleon appeared as an irresistible Fate; and this is true. Imagine, then, the bewilderment of all those who, though crushed under that enormous weight, and without hope of rescue, continued to chafe under injustice and disgrace, when the first ray of a possible redemption gleamed forth, – when came the earliest tidings of the report, borne almost on the wind, Napoleon is vanquished! Napoleon is retreating!
At last, one blessed day, came the glad tidings that Napoleon was no longer our master, and that we were, or were about to become, free and independent once more. He who was not at Turin on that day can form no idea of the delirious joy of a whole population at its utmost height.
Quite a difference for two men who were fundamentally quite alike. No doubt a good Marxist would just apply his cookie cutter and come up with a smug class interpretation. I doubt it’s quite that simple. Family loyalties and clashing national patriotisms undoubtedly played a role as well. In any case, d’Azeglio had no illusions about the kind of men who came back to take Napoleon’s place. He was in full agreement with Stendhal on that score:
I felt the reaction – I know its effects; and although even it has not made me regret Napoleon and French dominion in Italy, it is none the less true that we lost a government which, sooner or later, would have secured the triumph of those principles which are the life of human society, to revert to a government of ignorant and imbecile men, full of vanity and prejudice.
Neither the Romans nor Europe could then foresee that the sovereigns, and the ministers representing the re-constituted governments, would be so blind as not to perceive how different were the men of 1814 from those of 1789, and not to know that they would certainly be most unwilling to give up that portion of good to which the great genius of Napoleon and the changes wrought by time had accustomed them. The princes and their ministers who returned from exile found it convenient to accept the heritage of Napoleon sub conditione; they retained the police and the bureaucracy, the taxes, enormous standing armies, and so forth; but the good system of judicial and civil administration, the impulse given to science and personal merit, equalization of classes, improvement and increase of communication, liberty of conscience, and many other excellent features in the government of the great conqueror, were all ruthlessly flung aside.
In a word, in spite of his reflexive loathing for Napoleon, not to mention his aristocratic father and a beloved brother who became a fanatical Jesuit cleric, d’Azeglio was much too intelligent to blind himself to the great man’s virtues. Stendhal would have smiled.
Here are a few more d’Azeglio-isms for the delectation of my readers:
War exercises over nations a more salutary influence than a long peace. Fidelity to a difficult and perilous duty educates men, and makes them fit to perform more peaceful tasks well and worthily… A singular conclusion might be drawn from all this, – viz. that a nation, in order to preserve those virtues which save it from decay, is necessarily obliged to kill a certain number of its neighbors every now and then. I leave the reader to meditate on this question, and intend to study it myself one day. Meanwhile, let us proceed.
It is not in our natures to believe more than the priests themselves; and facts have always shown that the priests of Rome believe very little. The Italians, therefore, have never considered dogmatic questions very seriously.
Both parents had too much good sense to fall into the error so common in those parents who undertake the education of their children, viz. that of studying their own vanity or convenience instead of the good of their pupils. I was never subjected to any of those domestic tortures to which, through maternal vanity, those unhappy children intended to act the laborious part of enfants prodiges are so often condemned… Adulation and incitement to pride and vanity, though they may be a mistaken form of parental affection, are in fact the worst of lessons for the child, and the most baneful in their results.
But my education was governed by the Jesuit system, and the problem it has always so admirably solved is this – to keep a young man till he is twenty constantly employed in studies which are of little or no value in forming his character, his intelligence, and his judgment.
In factious times, past and present, we fall into the habit of calling the men of our own party good, and our adversaries bad; as if it were possible that a country should be divided into two distinct bodies; five millions of honest men, for instance, on one side, and five millions of rascals on the other. Men who profess these ideas are, as is natural, often bamboozled, or worse, by a scoundrel, whom they believe honest for no other reason than that he belongs to their own party. To avoid this, let us forbear from selecting friends and confidants only on account of their political opinions; and let us remember that, if two different opinions professed by two opposite parties cannot be equally true, logical, and good, two men belonging to the said opposite parties are just as likely to be two arrant knaves as two honest men.
It would seem the evolutionary psychologists weren’t the first ones to notice the existence of ingroups and outgroups. The Recollections contain many other interesting and amusing sentiments that you’re not likely to run across on Fox News or CNN. As they say, read the whole thing. D’Azeglio would have been pleased.