Posted on December 22nd, 2010 No comments
In a series of films made in the late 60’s and early 70’s that are now considered classics of the genre, Christopher Lee plays a Count Dracula who is reduced to dust by sunlight, impaled on crucifixes, and otherwise discombobulated by all the standard vampire antidotes, only to be improbably revived just in time for the next film. The Blank Slate is like that. It is a wonderfully useful bit of quackery to utopians of all stripes, and so keeps rising from its own ashes in one guise or another. An interesting variant, the theory of morality as exaptation, was devised by Francisco Ayala, a professor of ecology and evolutionary biology at the University of California, Irvine. In his words,
I propose that the capacity for ethics is a necessary attribute of human nature, whereas moral codes are products of cultural evolution. Humans have a moral sense because their biological makeup determines the presence of three necessary conditions for ethical behavior: (i) the ability to anticipate the consequences of one’s own actions; (ii) the ability to make value judgments; and (iii) the ability to choose between alternative courses of action. Ethical behavior came about in evolution not because it is adaptive in itself but as a necessary consequence of man’s eminent intellectual abilities, which are an attribute directly promoted by natural selection. That is, morality evolved as an exaptation, not as an adaptation. Moral codes, however, are outcomes of cultural evolution, which accounts for the diversity of cultural norms among populations and for their evolution through time.
In other words, departing from the old Blank Slate orthodoxy, Ayala is conceding that there is such a thing as human nature. However, it doesn’t matter. Our moral behavior is still completely malleable, because moral rules are almost purely a product of culture, and can come in any flavor you like. This, we are told, is proved by the diversity of human moral systems. According to Ayala, it’s all nice and legal according to Darwin himself. For example,
After the two initial paragraphs of chapter III of The Descent of Man, which assert that the moral sense is the most important difference “between man and the lower animals” …, Darwin states his view that moral behavior is strictly associated with advanced intelligence: “The following proposition seems to me in a high degree probable—namely, that any animal whatever, endowed with well-marked social instincts, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well developed, or nearly as well developed, as in man” (ref. 1, pp. 68–69). Darwin is affirming that the moral sense, or conscience, is a necessary consequence of high intellectual powers, such as exist in modern humans. Therefore, if our intelligence is an outcome of natural selection, the moral sense would be as well an outcome of natural selection. Darwin’s statement further implies that the moral sense is not by itself directly promoted by natural selection, but only indirectly as a necessary consequence of high intellectual powers, which are the attributes that natural selection is directly promoting.
There’s just one thing wrong with the above statement. Ayala is completely ignoring the phrase “well-marked social instincts.” What did Darwin mean by “well-marked social instincts?” It’s worth quoting him at length to find the answer:
A man who has no assured and ever present belief in the existence of a personal God or of a future existence with retribution and reward, can have for his rule of life, as far as I can see, only to follow those impulses and instincts which are the strongest or which seem to him the best ones. A dog acts in this manner, but he does so blindly. A man, on the other hand, looks forwards and backwards, and compares his various feelings, desires and recollections. He then finds, in accordance with the verdict of all the wisest men that the highest satisfaction is derived from following certain impulses, namely the social instincts. If he acts for the good of others, he will receive the approbation of his fellow men and gain the love of those with whom he lives; and this latter gain undoubtedly is the highest pleasure on this earth. By degrees it will become intolerable to him to obey his sensuous passions rather than his higher impulses, which when rendered habitual may be almost called instincts. His reason may occasionally tell him to act in opposition to the opinion of others, whose approbation he will then not receive; but he will still have the solid satisfaction of knowing that he has followed his innermost guide or conscience.
In other words, “social instincts” are other-regarding instincts or, as we would say today, predispositions, as opposed to such “sensuous passions” as the desire for food, sex, etc. They are what modern scientists refer to when they speak of “hard-wired morality,” and were, for Darwin, as well as for many others since his time who have spoken of morality, not an “exaptation,” but an essential aspect of human nature, a precondition for the development of any manifestation of morality, whether in humans or other animals. In other words, what Darwin was really saying is that “morality” is simply the expression of innate social or moral predispositions in creatures with a superior ability to reason about their subjective moral feelings or emotions. That is how Darwin was understood by a long line of other thinkers, and, in fact, that interpretation would seem to be obvious. Anyone who entertains any doubt on the subject need look no further than his The Expression of the Emotions in Man and Animals with its many parallels between human behavior and that of other animals.
Somehow, however, Ayala missed the point. All that he will allow to the sphere of human nature is a “proclivity to judge” that somehow floats out there in the ether all by itself, with no basis upon which to make judgments. In his words,
The question of whether ethical behavior is biologically determined may, indeed, refer to either one of the following two issues. First, is the capacity for ethics—the proclivity to judge human actions as either right or wrong—determined by the biological nature of human beings? Second, are the systems or codes of ethical norms accepted by human beings biologically determined? A similar distinction can be made with respect to language. The question of whether the capacity for symbolic creative language is determined by our biological nature is different from the question of whether the particular language we speak—English, Spanish, Chinese, etc.—is biologically determined, which in the case of language obviously it is not.
I propose that the moral evaluation of actions emerges from human rationality or, in Darwin’s terms, from our highly developed intellectual powers. Our high intelligence allows us to anticipate the consequences of our actions with respect to other people and, thus, to judge them as good or evil in terms of their consequences for others. But I will argue that the norms according to which we decide which actions are good and which actions are evil are largely culturally determined, although conditioned by biological predispositions, such as parental care to give an obvious example.
Here Ayala tries to leave himself some wiggle room by contradicting himself. His norms are not purely culturally determined, but only “largely” culturally determined, and they are “conditioned” by biological predispositions, but just not enough to matter. All but the wildest and craziest of the old blank slaters used to give themselves a similar “back door.” For example, from zoologist J. P. Scott,
There may be some biological basis for territorial behavior in people, but it is equally possible that it is a human cultural invention.
and from physical anthropologist Ralph Holloway,
Perhaps egoism and self-esteem are innate properties of the species man, but limited directions depending on the cultural milieu in which various peoples thrive or cope.
In the end, of course, as noted by Steven Pinker in The Blank Slate, none of this mattered. The inevitable conclusion was still that, for all practical purposes, the only thing that mattered in shaping human behavior was culture. The same is true of Ayala and his “predispositions” when it comes to morality. In his words,
Moral codes arise in human societies by cultural evolution. Those moral codes tend to be widespread that lead to successful societies. Since time immemorial, human societies have experimented with moral systems. Some have succeeded and spread widely throughout humankind, like the Ten Commandments, although other moral systems persist in different human societies. Many moral systems of the past have surely become extinct because they were replaced or because the societies that held them became extinct. The moral systems that currently exist in humankind are those that have been favored by cultural evolution.
In fact, Ayala is putting the cart before the horse. Moral behavior is not predicated on a high intelligence, nor is it an “exaptation” of high intelligence, only possible in man. Rather, morality is fundamentally emotional rather than rational. The concepts of good and evil themselves are subjective, predicated on the pre-existence of these emotions, and could not exist without them. Far from suddenly emerging as the result of the previous evolution of high intelligence, and understandable as the outcome of some rational thought process, morality is utterly dependent for its existence on emotions that are entirely analogous to those experienced by other animals. Human morality is simply the expression of those moral emotions in creatures with high intelligence. We have a greater capacity to reason about what we feel than other animals, and we can rationally interpret what we feel emotionally in different ways, but, in the end, we are still acting in accordance with those emotions, not based on the outcome of some dispassionate logical thought process.
The fact that there must be many variations in the details of moral behavior in creatures such as ourselves goes without saying. The predispositions fundamentally responsible for moral behavior could not be programmed into the brains of wolves or chimpanzees in the form of a string of complex moral rules expressed in terms of human language. The fact remains that these predispositions exist, and are responsible for the many commonalities in human moral behavior across widely different cultures.
There is no need to take what I say on trust regarding these matters. Read books such as “Wild Justice,” and you’ll see that the evidence is already weighty, and will become more so as our diagnostic techniques enable us to probe human emotions and thought processes with ever greater resolution. In fact, Ayala’s theory was born dead, and it appears that, at this point, even he realizes it. In his recent papers, he stubbornly refuses to part with his “exaptation” theory, but adds ever more caveats about what people like Frans de Waal, Jeffrey Masson, and Marc Bekoff have been discovering about animal morality, and ever more weasel words about “predispositions.”
In fact, being stubborn pays. Ayala just won the 2010 Templeton prize, which includes a tidy award of $1.5 million. The prize
… honors a living person who has made an exceptional contribution to affirming life’s spiritual dimension, whether through insight, discovery, or practical works. Established in 1972 by the late Sir John Templeton, the Prize aims, in his words, to identify “entrepreneurs of the spirit”—outstanding individuals who have devoted their talents to expanding our vision of human purpose and ultimate reality. The Prize celebrates no particular faith tradition or notion of God, but rather the quest for progress in humanity’s efforts to comprehend the many and diverse manifestations of the Divine.
Indeed, Ayala apparently considers himself, against all odds, a Trinitarian Christian. All this comes as something of a surprise to his more orthodox fellow believers, who surely would have burned him as a heretic back in the day. See for example, this and this. And no wonder. You could be a Pelagian, a Socinian, a believer in Communion in one or both kinds, or even a wild, unrecanting Arian, and Dr. Ayala can exapt a morality for you that’s as legit as the pope’s. Apparently the Templeton Prize people weren’t so finicky about the minutiae of theology.
Posted on December 21st, 2010 No comments
Here’s the pro:
and here’s the con:
Both articles are useful if you happen to be a knee-jerk liberal or conservative looking for another board to nail onto the ideological box you live in. They’re not so useful if you’re actually interested in understanding the issue of Internet regulation. Both share a common feature of most of the articles that turn up on the Internet about topics that hit people’s ideological hot buttons. Their authors talk right past each other.
I used to like the New Republic back in the day when Andrew Sullivan was editor because its authors had the endearing trait of identifying and taking issue with their opponents’ most important arguments head on. Meanwhile, Sullivan has drifted off into the la-la land of Palin Derangement Syndrome, the New Republic has morphed into a dull version of the Nation, and that kind of writing has become increasingly difficult to find.
Meanwhile, I haven’t found any “Net Neutrality for Dummies” articles that are worth reading. If you’re really interested in developing an informed opinion, I hope you like reading thick drafts of official documents.
Posted on December 20th, 2010 No comments
In our ideology drenched times, it’s the same thing as “good science:” anything that happens to agree with your ideological narrative.
Powerline just served up a typical example relating to that über-politicized topic, global warming. According to the “good scientists” at Powerline, global warming theories are all wrong because they are currently experiencing snow and cold weather in Great Britain. Quoting from the article:
It’s fun to ridicule the warmists because they are so often wrong, but their errors are in fact significant: a scientific theory that implies predictions that turn out to be wrong, is false. A principal feature of climate hysteria is its proponents’ unwillingness to be judged by the standards that govern real science.
I know of not a single reputable climate scientist who would claim their theories “imply the prediction” that localized incidences of cold weather on the planet will no longer occur. In view of the solid evidence that, overall, the planet has, indeed, been warming over the past decade, I would like to know on what evidence Powerline is basing the claim that “warmists” are “so often wrong.” It’s rather cold in the DC area today, too. Does that also “disprove” global warming?
It’s not hard to find the same phenomenon on the other side of the ideological spectrum. There we often hear the claim that theories that significant global warming will occur over, say, the next century have been “proved.” This is “good science” in the same sense as Powerline’s claims about the cold weather in Britain. In the first place, the computational models on which such claims are based are just that: models. Even the best computational models are approximations, and computational models of climate are far from the best. Ideally, they would need to account for billions of degrees of freedom just to model the atmosphere alone, not to mention the coupling of the atmosphere with the oceans, etc. No computer on earth, either now or in the foreseeable future, is capable of solving such a problem without severe simplifying assumptions. The mathematical error bars on those assumptions have never been rigorously established. Throw in the fact that the data are noisy and often corrupt or nonexistent, and that the best models are themselves probabilistic rather than deterministic, and the claim that they “prove” anything is absurd.
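The point about probabilistic rather than deterministic models can be illustrated with a toy sketch. Every number here (the per-year sensitivity range, the noise level, the ensemble size) is invented for illustration and has nothing to do with any real climate model; the only thing the sketch shows is that a perturbed ensemble yields a distribution of outcomes, not a single certified answer:

```python
import random
import statistics

def toy_projection(sensitivity, noise_sd, years=100, seed=None):
    """One 'ensemble member': cumulative warming over `years` under an
    assumed per-year sensitivity, plus random interannual noise."""
    rng = random.Random(seed)
    temp = 0.0
    for _ in range(years):
        temp += sensitivity + rng.gauss(0.0, noise_sd)
    return temp

# Perturb the uncertain parameter across members, as real ensembles do.
members = [
    toy_projection(sensitivity=random.Random(i).uniform(0.01, 0.04),
                   noise_sd=0.1, seed=i)
    for i in range(200)
]

mean = statistics.mean(members)
spread = statistics.stdev(members)
print(f"ensemble mean warming: {mean:.2f} (spread {spread:.2f}, arbitrary units)")
```

The output is a mean with a spread around it, which is exactly why “probable” is the strongest honest claim such a model can support.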
“Proved” is much too strong a term, but I would buy the claim that significant long term global warming is probable. Given the fact that this is the only planet we have to live on at the moment, it doesn’t make a lot of sense to me that we should be rocking the boat. I doubt that “science” will offer any solutions, though. The hardening of ideological dogmas on both sides won’t allow it. Whatever decisions are finally made, they are far more likely to be based on politics than science.
Posted on December 19th, 2010 1 comment
I can imagine little that it is more important for us to learn the truth about than our own nature. If we fail to learn that truth we literally put our survival at risk. Under the circumstances, it is all the more disturbing that we have a history of obfuscating the path to that truth with pleasant ideological myths. We have been making progress. One of the most pernicious, tenacious, and dangerous myths, that of the Blank Slate, seems finally to have collapsed, buried by common sense and a mountain of evidence that, in the end, became so great that it couldn’t be hand-waved away, even by the time-honored tactic of demonizing the messengers.
Some of the most damning evidence came from primatologists, who finally began to give us accurate information on our closest animal relatives, the great apes. Less than half a century ago, our “scientific” knowledge of their behavioral traits was a farrago of the most ridiculous fairy tales. As recently as 1968, blank slater Ashley Montagu could write with a perfectly straight face, and without fear of contradiction,
The field studies of Schaller on the gorilla, of Goodall on the chimpanzee, of Harrison on the orang-utan, as well as those of others, show these creatures to be anything but irascible. All the field observers agree that these creatures are amiable and quite unaggressive, and there is not the least reason to suppose that man’s prehuman primate ancestors were in any way different.
Alas, in the fullness of time, the apes, too, were roused from their reverie in the Garden of Eden and shown the door. I recommend Demonic Males by primatologists Richard Wrangham and Dale Peterson to anyone who wants to read the details of how we finally discovered that they had actually been munching the forbidden apples all along. The book was originally published in 1996, but I hadn’t actually read it until recently. Peterson cites a blurb from Publishers Weekly that sums it up nicely:
Contradicting the common belief that chimpanzees in the wild are gentle creatures, Harvard anthropologist Wrangham and science writer Peterson have witnessed, since 1971, male African chimpanzees carry out rape, border raids, brutal beatings and warfare among rival territorial gangs. In a startling, beautifully written, riveting, provocative inquiry, they suggest that chimpanzee-like violence preceded and paved the way for human warfare, which would make modern humans the dazed survivors of a continuous, five-million-year habit of lethal aggression.
The book’s main virtue is the wealth of observations and studies of the behavior of both apes and human hunter-gatherers that it pulls together. Some excerpts:
That male orangutans regularly rape must be one of the best-kept secrets in the literature of popular zoology.
(Referring to gorillas) When a male kills her infant, the female is an established member of an existing troop, while the killer male is a stranger. If she has seen him before, it has been only during violent interactions when he challenged her mate – a patent threat to her infant, a blur of power as he rammed through the vegetation before being stopped, outfought, and repelled by the resident silverback. And now he has succeeded in his aim. He has managed to get past the defenses of her mate. He has charged directly up to her, even as she screamed and fought back, and in a terrifying show of mastery, he has torn her baby from her and killed it in an instant.
Fists can also grasp invented weapons. Chimpanzees today are close to using hand-held weapons. Throughout the continent, wild chimpanzees will tear off and throw great branches when they are angry or threatened, or they will pick up and throw rocks. Humphrey, when he was the alpha male at Gombe, almost killed me once by sending a melon-size rock whistling less than half a meter from my head. They also hit with big sticks. A celebrated film taken in Guinea shows wild chimpanzees pounding meter-long clubs down on the back of a leopard.
The latter paragraph is of particular significance in view of a remarkable taphonomic finding by South African Prof. Raymond Dart regarding the prevalence of antelope humerus bones among those taken from a cave in the Makapan Valley. The bones, found in association with those of Australopithecus africanus, represented no less than 11% of all the identifiable types. They would have made ideal double-headed clubs, and there was other evidence to indicate that this is exactly what they were used for. Another anomalously prevalent bone was the lower jaw of a small antelope that would have been ideal as a slashing or cutting tool. Published in 1953, Dart’s masterful statistical study was “refuted” in a later work by C. K. Brain, who, not bothering to address Dart’s statistical anomalies, naively claimed that all the bones had been dragged into the cave by leopards and other large predators. Brain has been busily publishing refutations of his “refutation” in recent years. Apparently he realizes that others can look at the evidence for themselves and, if they do, are likely to find gaping holes in his work that they’re not quite as likely to be silent about as they might have been a couple of decades ago. In spite of that, as far as I know he has never apologized for the damage he did to Dart’s reputation. At the time it was published, of course, Brain’s flawed work was eagerly lapped up by blank slaters far and wide as “proof” that Dart had been wrong. If you look around on Google, you can still find a few of their productions, most of them remarkable for the trademark pious indignation they reserve for anyone who dares to threaten their ideological certainties. To this day, no one has ever succeeded in explaining away Dart’s statistical argument. It has simply been ignored. Perhaps it is time the data were revisited.
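The shape of Dart’s statistical argument is easy to sketch. The 11% humerus figure is from the paragraph above; the total count and the “predator lair” baseline below are hypothetical placeholders, not Dart’s or Brain’s actual tallies, and serve only to show why an anomalous prevalence of one skeletal part demands an explanation:

```python
import math

# Hypothetical counts for illustration only -- not Dart's actual data.
total_bones = 7000          # assumed total identifiable specimens
humerus_count = 770         # 11% of the total, the proportion cited above
expected_fraction = 0.04    # assumed humerus share under a "predator lair" null

# Normal approximation to the binomial: how many standard deviations is
# the observed count above what chance accumulation would predict?
expected = total_bones * expected_fraction
sd = math.sqrt(total_bones * expected_fraction * (1 - expected_fraction))
z = (humerus_count - expected) / sd
print(f"observed {humerus_count} vs expected {expected:.0f}, z = {z:.1f}")
```

With numbers of this order the deviation is dozens of standard deviations, which is why the anomaly cannot simply be waved away; it has to be explained, by clubs, leopards, or something else.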
The book is thought-provoking on many levels, and is one of the few recent works on innate human behavior that seriously discusses the issue of aggression, a major theme of early opponents of the blank slate like Robert Ardrey and Konrad Lorenz. Many recent works in the field are flawed attempts to “emphasize the positive,” and cobble together new moral systems based on misguided notions of “dialing up” the level of human altruism, accommodating modern political correctness to dubious versions of innate behavior in the same way the blank slate used to accommodate Marxism. It would be well if we could finally pull our heads out of that particular hole in the sand once and for all.
Demonic Males includes several of Wrangham’s fanciful hypotheses, such as the “bulb eating” transition from ape to man, and the notion that human warfare is the result of “pride.” The latter is remarkable, in view of the inevitable vagueness of the term “pride,” but more importantly because Wrangham is clearly aware of and understands what seems to me (not to mention Ardrey and Lorenz) a much more likely explanation; namely ingroup-outgroup behavior, or, as Ardrey referred to it, the Amity/Enmity Complex. The book actually includes a very interesting and quite detailed discussion of the phenomenon, with some data that I had not previously seen, all of it a seemingly compelling argument in favor of its role in warfare. In spite of that, the author somehow managed to convince himself that warfare is all about “pride.” “Pride,” in the sense that Wrangham uses it, is far more plausible in the context of intra-group struggles for status than as an explanation of inter-group warfare. Regardless, it in no way detracts from the significance of the book, which is important because of the source material it makes accessible to a popular audience more than for Wrangham’s theories.
The significance of books like this is nicely summed up by Wrangham himself:
Our Pleistocene ancestors were beleaguered by their own demonic males, surely. But they didn’t have automatic rifles, fertilizer bombs, dynamite, nerve gas, Stealth bombers, or nuclear weapons. We do, and therein lies the danger.
Those words might have been taken directly from one of Ardrey’s books. Indeed, the source material presented in Demonic Males is a triumphant vindication of Ardrey, whose core ideas were always that innate factors influence human behavior, and not all of them predispose us to be kind and inoffensive. As a reward for being right on those fundamental truths, his legacy has been distorted beyond recognition, and his name has been nearly forgotten. But I digress. I will have more to say about Ardrey in later posts.
Another interesting phenomenon is discussed at length in Demonic Males: the remarkable behavioral differences between chimpanzees and bonobos. That, however, is also a topic for another day.
Posted on December 15th, 2010 No comments
People in the “Not in our Genes” school of human behavior are standing on increasingly thin ice scientifically. The evidence, which they are generally adept at ignoring, is leaning heavily against them. They reject hypotheses such as that set forth in Richard Dawkins’ The Selfish Gene, according to which altruism and other forms of moral behavior exist as evolved traits because they promoted the survival and reproduction of the genes carried by individuals. They tend to be people with particularly strong moral emotions themselves, and usually lean to the left of the political spectrum. Scratch one of them, and you will generally find a hidebound ideologue. Their tastes tend to run to ad hominem attacks on their ideological opponents. I’ve run across many Marxists among them, who have generally based their rejection on some version of the “Blank Slate,” or nurture versus nature. Yesterday I came across one of a different color; black not red. This one, by the name of Steve Davis, is a devotee of Peter Kropotkin, and therefore, presumably, an anarchist.
His rationale for rejecting “the selfish gene,” set forth in an article entitled “Altruism: It’s Evolution, It’s Origin, It’s Discontents,” is quite different from that of the average Marxist, and is worth deconstructing in detail. It starts with the following:
Life began when complex molecules came together in cooperation, to perform the functions that we now consider to be characteristics of life.
Cooperation therefore preceded evolution. We do not have to look to evolution to explain the origin of cooperation. It undoubtedly underwent further development through evolution when different forms of cooperation came into being, but cooperation as a concept is linked to life itself, not to evolution.
That’s a new one on me. Thefreedictionary lists one of the definitions of cooperation as “beneficial but inessential interaction between two species in a community,” but I haven’t previously heard it applied in the case of molecules. Their interactions are described rather well by Schrödinger’s equation, and I can see how the Coulomb force is relevant to the interaction, but don’t really see the point of dragging “cooperation” into it. Be that as it may, let’s accept Davis’ definition, and agree that when atoms or molecules combine, they are “cooperating.” Davis continues,
Cooperation is a form of goodness, but how prevalent is it in nature? Well, we see cooperation between molecules, between cells, between organs, between organisms, between groups, and between groups of groups. How much cooperation do we need to see before conceding its significance? How blind do you have to be to ignore cooperation as a factor in evolution? And it’s not hard to see that once cooperation was pulled into the evolutionary process and evolved into different forms, that it’s just one small step to altruism in the accepted meaning of the word, that is, kindness for its own sake. One small step that is, when a particular condition is satisfied.
Really? A form of goodness? In what way did the random combination of molecules suddenly become good? Molecules can also combine to form poisons, or cancer cells. Is that “good” as well? We have accepted Davis’ definition of “cooperation,” but how is it that “cooperation” suddenly acquired the quality of “good?” Atoms and molecules can combine in all kinds of ways, but now Davis has narrowed his definition of “cooperation” without bothering to consult us. It is now no longer just a random combination of molecules, but a “good” combination of molecules, by which is apparently meant one that will lead to the formation of life. If so, it begs the question of how the “goodness” was added in. I submit that it didn’t happen because the molecules wanted to be nice to each other, but because of natural selection. I have never before heard of a mechanism of natural selection other than via the genetic material carried by individuals. Davis, however, has an epiphany for us. It really happens because “cooperation” is “pulled into the evolutionary process.” Am I being presumptuous in asking the mechanism of that “pull?” Supposing, however, that the “pull” somehow happened, we learn that it then “evolved into different forms.” Really? How did it happen? What drove the evolution? According to Davis,
Acts of kindness occur when people (and other animals) see themselves as being part of a greater entity. It is that reality that the advocates for individualism cannot accept. If organisms see themselves as being part of a greater entity, then that’s all that’s needed for group-based trends to appear. And it doesn’t matter what their genes think about it at all!
Now I see! “Group-based trends appeared” because animals “see themselves as being part of a greater entity.” It may well be that some humans see themselves as being part of a greater entity, but chimpanzees? We-e-e-l-l-l, maybe, but according to the latest observations of them in the wild, I have some reservations about the conclusion that they are “cooperative” or “kind” as a result of that world view, even by Davis’ loose definition of the terms. Moving down the line, do buzzards see themselves as “part of a greater entity?” What about slime mold? If so, what is the engine of the “group-based trends?” Evolution by acquired characteristics, because monkeys really, really want their children to be good?
Enough. I could go on and on, but it would only become repetitious. Needless to say, all of this is more akin to mysticism than science. That’s never been a problem for people like Davis. If pressed on these matters, they quickly begin striking pious poses, and accuse their opponents of all kinds of moral lapses. It worked for a long time. It doesn’t work anymore. In the end, the truth doesn’t care whether the Davises of the world consider it immoral or not.
Posted on December 14th, 2010 1 comment
According to Media Line, Iran is pressing ahead with plans to build a fusion reactor:
Rather than bowing to international pressure to curb its nuclear development program, Iran has announced a new project: a nuclear fusion reactor. On Saturday, Iran’s nuclear chief announced that, “The scientific phase of the fusion energy research project is being launched with no budgetary limitation.” The head of Iran’s Nuclear Fusion Research Center told an Iranian news agency that, “We need two years to complete the studies on constructing and then another 10 years to design and build the reactor.”
You go, Ahmedinejad! I can think of no more “useful” way for Iran to spend its oil wealth than on a fusion reactor. If it works, Islam will have a clear edge over Christianity in miracles. The Crusaders’ competing ITER reactor isn’t even supposed to be loaded with fuel until 2028. Besides, as seen in the image below, we know large-scale fusion reactors are scientifically feasible.
Posted on December 14th, 2010 No comments
Instapundit linked an interesting post about morality today at Shrink Wrapped, whose author describes himself as a psychoanalyst. He avails himself of the recent arrest of a Columbia professor for incest to set forth his rationale for a belief that turns up in some of his other recent posts as well: that government must be based on Judeo-Christian morality. In his words,
…we should be careful of accepting the continual and continued accrual of transgressions against our bourgeois (i.e., Judeo-Christian) morality; at some point, just as termites can destroy a house by eroding its foundation in silence right until the moment, without warning, the house collapses, each small piece torn out of our moral fabric makes the collapse of our consensual culture more likely.
and (from a different post),
Our modern Western Culture and Civilization are emergent structures that rest upon a Judeo-Christian G-d; while religion may not be necessary for any one individual to behave in a moral manner, it has not yet been shown that any society can behave morally without religion.
Such ideas are common, but I have never heard them expressed by anyone who isn’t a Jew or a Christian. There is good reason for that. What the author is actually suggesting is that it’s necessary for us all to pretend we believe in one of those two religions, regardless of whether they are actually true. Discussions of whether civilizations need a particular religion to survive, whether religion is a force for good, or whether human behavior will be affected for better or worse by the absence of one religion or another all have one thing in common: they beg the question (and routinely ignore it) of whether the religion in question is actually true. What the author is really suggesting is that truth doesn’t matter. We must allow him and his co-religionists to force their religious beliefs on the rest of us, not because they are true, but because they are useful. I beg to differ. It seems to me more reasonable to base our actions on the truth than on falsehoods.
Proponents of the author’s idea are usually aware of this apparent absurdity in their argument at some level, but it’s a minor difficulty to them because, after all, they believe in the religion themselves. They commonly deal with dissenters by simply declaring that they are immoral. For example, again quoting the author (referring to the recent debate between Tony Blair and Chris Hitchens about whether religion is a force for good),
Finally these kinds of debates will always predispose to the victory by the Atheists for a few relatively simple, and therefore unacknowledged, reasons. First, the believer in G-d must, of necessity, admit to himself that such a belief can never be fully grounded in reason; the connection of faith to the irrational parts of our minds are implicit when not made explicit. We use terms like ineffable to make such a connection more acceptable to our reason but ultimately our belief is fueled and preserved by our awareness that it is based upon a mystery at the heart of existence. The Atheist has no such handicap. He is able, using his reason, to convince himself that Atheism has nothing to do with his irrationality. This exhibits, more than anything else, how adept homo rationalis has become at the grand arts of self deception, rationalization and intellectualization. By doing away with G-d, the Atheist has effectively replaced Him with man, without having to countenance his own arrogance.
In other words, the author is telling me that if I don’t accept his irrational faith in the “mystery at the heart of existence,” and instead dare to use my brain (which his G-d presumably gave me to serve as something other than convenient stuffing for my skull) to actually think about whether a God exists or not, I am guilty of the sin of “arrogance” should I come to the “wrong” conclusion and decide that there is none. Again, the question of whether God really exists or not doesn’t matter. To avoid the charge of “arrogance,” I must somehow find a way to force myself to believe in something that I am perfectly convinced is a fantasy, more or less in the same way that Christian clinics “convert” homosexuals into heterosexuals. What could actually be more arrogant than the claim that anyone who dares to think is “arrogant” if they come to conclusions that happen to differ from those of the author?
There are many instances of similar silliness in the rest of the article. For example,
Yet if we do not privilege the Judeo-Christian ethics that are the underpinnings of our unconscious morality, we have no answer for cultures that take a very different, zero sum, approach to morality, i.e. I take what is yours and do what I want because I can and my god sanctions such behavior. In other words, once we have jettisoned our G-d, we have disarmed intellectually in the war with another’s god.
Where to begin? By arguing that we should “privilege Judeo-Christian ethics,” the author argues for the elimination of any wall of separation between church and state and in favor of a theocracy. That may be where the “progress of civilization” has been heading in Iran, but the same is most definitely not true of the United States and the western democracies, to our great good fortune. I, for one, have no desire to return to the days of Metternich and the Holy Alliance. We can count ourselves lucky if those days are behind us for good.
Judeo-Christian ethics are hardly the underpinnings of our unconscious morality. Rather, our unconscious morality, an evolved trait in our species, is the underpinning of Judeo-Christian ethics, which are merely one example among many of how an innate behavioral trait can be expressed in creatures with large brains. What does the author mean by “Judeo-Christian ethics?” That we should not suffer a witch to live? (Exodus 22:18) That homosexuality is an abomination? (1 Corinthians 6:9-10) That a man has an obligation to produce a child with his brother’s widow, and that, if he refuses, his sister-in-law is to spit in his face in front of the elders? (Deuteronomy 25:5-9) What about the killing of heretics, approved by St. Augustine, or the innumerable holy wars approved by a long line of popes? If not, what, exactly, are we to understand by the term “Judeo-Christian ethics?” Presumably they are only those bits and pieces of the morality set forth in the Bible or Torah that the author, inspired by his “ineffable awareness guided by a mystery at the heart of creation,” agrees with.
How is it that, without Judeo-Christian ethics, “we have no answer for cultures that take a very different, zero sum, approach to morality,” if they seek to take what is ours or otherwise molest us? How about the answer of nuclear weapons? How is it that we are prohibited from defending ourselves unless we can answer one bogus belief with another? There is no better “intellectual armament in the war with another’s god” than to simply point out the obvious; that their god and their transcendental morality are both fantasies.
Again quoting from the article,
Once we have, as a culture, fully adopted an ethic of Just Do It as the apotheosis of our morality, we are helpless against those who wish to Just Do It in ways which are inimical to us.
In other words, unless we allow the author’s version of Judeo-Christian morality to be stuffed down our throats, we somehow implicitly accept “an ethic of Just Do It.” How odd that, somehow, other primates exhibit moral behavior in spite of the fact that no rabbis or priests have ever been found among them. Most of us, including myself, will not follow an ethic of Just Do It because, like other primates, we have the predispositions that give rise to morality hard-wired in our brains. If it ever occurs to me that I need some logical reason not to adopt an ethic of Just Do It, I need only recall that creatures who practiced that ethic in eons long past failed to survive. We atheists have the same emotional attachment to survival as everyone else.
Again, the author has so bamboozled himself with morality that he believes that one is somehow prohibited from defending himself unless he can give a moral reason for doing so. If we cannot point out some moral reason for our attackers to avoid such behavior, we are “helpless,” and apparently constrained to stand idly by as they slaughter us. He doesn’t realize that morality preceded both religion and reason, not the other way around. His reasons are mere after the fact rationalizations. It’s as if one couldn’t enjoy sex without first having a reason. Continuing from the article,
The wreckage of the last century should have alerted us to the danger.
Yes, the wreckage of the last century should have alerted us to a danger, but not the one the author thinks. The wreckage of the last century should have alerted us to the danger of trying to apply morality to the governing of large states, or to the relationships between them, period. What would he have us believe? That the zealots of secular religions like Communism or Nazism were any less puritanical than the past and current zealots of the traditional spiritual ones? Is it really credible that they had a Just Do It ethic? Please! Look at the history of Stalin’s great purge trials, or the fate of the dissident generals in Germany after the attempt on Hitler’s life in 1944, or read a few accounts of the Great Cultural Revolution in China. These were all quintessentially moral phenomena, and the mayhem they caused was entirely akin to the Christian slaughter of witches, or their countless wars over trivial differences in religious doctrine, or their repeated mass murders of Jews. No, my friend, what we should have learned from the wreckage of the last century is the absurdity and destructiveness of our continued attempts to apply human morality in situations for which, given its real origins, there can be no reasonable expectation that it would be in the least applicable, or result in any other outcome than more wreckage.
There are consequences to basing our actions on lies, religious or otherwise. The wreckage will continue until we learn that.
UPDATE: The author of Shrink Wrapped immediately deleted a comment I left on his site challenging his post. Interestingly, intolerance of dissent is a traditional characteristic of both Christians and psychoanalysts.
Posted on December 13th, 2010 No comments
Fairy tales about pots of gold at the end of the rainbow are charming because we know the treasure is unreachable. The anonymous Irish story tellers who invented them didn’t have to know about refraction and the wave nature of light to understand that a thing that appears real may turn out to be not quite what it seems. The Good is like a rainbow. So effective are our minds at conjuring with our emotions that we perceive it as a real thing. Mother Nature brooks no shilly-shallying in such matters. It was necessary for us to believe the illusion to survive. She made it so powerful that, even though philosophers have pursued it in vain for thousands of years, we still can’t believe that it’s not there. We are still chasing after the end of the rainbow.
We should know better. After all, Socrates realized two and a half millennia ago that the Good isn’t necessarily what it seems to be. In Plato’s Euthyphro, we find him conversing in his famous dialectic style with one of the “ethics experts” of his day about the definition of piety. The man was so sure that he had reached the end of the rainbow and grasped the Good that he was prosecuting his own father for manslaughter. As Socrates demonstrated in the dialogue, the thing Euthyphro thought he had such a firm hold on was very slippery indeed. Read the Euthyphro and you’ll notice something else that hardly seems worth mentioning because it’s so obvious we take it for granted. Both Socrates and Euthyphro discuss piety, justice, good, and evil, not as feelings or emotions but as real things. That is also the way “righteousness” is conceptualized in the great monotheistic religions, and it is typical of the way most of the philosophers since Socrates have conceived of the Good as well. They do so because that is the way our minds portray it to us.
If, then, the manner in which the Good is perceived is as a thing, or an object, we are faced with the question of whether what we perceive as real actually does exist as we perceive it, independently as a thing-in-itself, or, on the contrary, is a subjective mental construct that has no existence independent of the mind. It seems to me that the most parsimonious explanation is that the latter is the case, and the Good exists in the minds of human beings as an evolved behavioral trait. This hypothesis seems to me to be particularly compelling in view of the mounting evidence that behavioral traits associated with morality are hard-wired in the brain, but would appear to be fairly obvious regardless. After all, if some of the most intelligent among us have been chasing a thing for thousands of years, but have somehow never quite managed to get their hands on it, it would seem to suggest that perhaps the thing they seek doesn’t actually exist.
The contrary claim, namely, that what we perceive as real actually is real, independent of our minds, is tantamount to a religious belief. After all, the entity we are talking about cannot be observed, or measured, or made the subject of repeatable experiments other than as a subjective mental phenomenon. If we believe in it, it must be as something incorporeal, like a spirit. It is a testimony to the power of the illusion that even highly intelligent avowed atheists can believe in this spirit of the Good.
The implications of the above as it touches on the legitimacy of the Good are clear. If an individual claims that what they perceive as the Good is legitimate, they are claiming that not only they, but others as well, should act in accordance with that perception. In other words, they are saying that an evolved trait that came into existence at a distant time in the past that was utterly unlike the present, for the reason that it made it more likely that the genes responsible for the expression of the trait would survive, has somehow magically become binding on other individuals, even if they happen to be unrelated to the bearer of the trait, and in spite of profound social and environmental changes since the time that the trait evolved. Such a claim, it seems to me, is absurd on the face of it. This quasi-mystical belief is nevertheless held by not only the vast majority of human beings, but by all but a small minority of the scientists who are currently active in the behavioral sciences as well. I have already examined some of these “data points” on my blog.
Does what I have written above imply moral relativism? No, it does not, nor does it imply moral absolutism, nor moral determinism, nor anything else of the sort that inevitably comes with an implied ought. I will take up the reasons for this in later posts.
Posted on December 12th, 2010 No comments
These are from various articles and authors in the May 1925 issue of H. L. Mencken’s American Mercury.
What shall the end be? Will that race of men who for a thousand years have asserted the “right of castle,” rejected governmental interference in domestic affairs, proclaimed the right of the free man to regulate his personal habits and to rear and govern his children in accordance with the law of conscience and of love, now become subject to a self-imposed statutory tyranny which from birth to death interferes in the smallest concerns of life? Shall we endure a legal despotism, the equivalent of which would have provoked rebellion amongst the Saxons even when under the Norman heel?
I doubt not these statutory bonds will be eventually broken. The right of the free man to live his own life, limited only by the inhibition of non-infringement upon the rights of others, will again be asserted. But before that day arrives, will the splendid symmetry of our governmental structure have been destroyed?
Alas, my friend, there is yet no light at the end of the tunnel. Next, from an article about the Mexican border towns entitled “Hell Along the Border,”
I have studiously observed the viciousness and even the mere faults of decorum in Juarez, largest of the corrupting foci, in season and out for at least twelve seasons. I have had my glimpses at the life of the equally ill-reputed Nogales, Mexicali and Tia Juana. I have been in confidential communication with habitual visitors to Nuevo Laredo, Matamoros, Piedras Negras, and Agua Prieta. And I can find in all these towns no sins more gorgeous than those enjoyed by every Massachusetts lodge of Elks at its annual fish-fries prior to 1920.
Regarding the evangelical clergy, the televangelists of the day, immortalized by Sinclair Lewis in his Elmer Gantry,
The net result, as I say, is to inspire those of us who have any surviving respect for God with an unspeakable loathing. We gaze on all this traffic and, without knowing exactly why, we feel a sick, nauseated revulsion. We feel as we felt when we were children, and had a bright glamorous picture of Santa Claus, with his fat little belly and fairy reindeer, and then suddenly came on a vile old loafer ringing a bell over an iron pot. It seems a blasphemous mockery that men can preach such vulgar nonsense, call it religion, and then belabor the rest of us for not being washed in the blood of the Lamb.
Concerning the latest in the hotel trade,
Whatever I might write were the latest wrinkle would not be the latest wrinkle by the time these lines get into type. But one of the latest, certainly, is radio service in every chamber.
Of anthropology, from an article entitled “The New History,”
The anthropologists have paralleled the achievements of the archeologists by making careful studies of existing primitive peoples. Ten years ago we possessed in this field only the chatty introduction by Marett, and Professor Boas’ highly scholarly but somewhat difficult little book, “The Mind of Primitive Man.” Today we have admirable general works by Goldenweiser, Lowie, Kroeber, Tozzer, Levy-Bruhl and Wissler with several more in immediate prospect. These deal acutely and lucidly with primitive institutions.
As the cognoscenti among my readers are no doubt aware, this was written on the very threshold of anthropology’s spiral into the dark ages of the Blank Slate, from which it has only recently emerged. The good Professor Boas played a major role in pushing it over the cliff.
Concerning the value of morality in regulating society,
Once we give up the pestilent assumption that the only effective sanctions for conduct are those of law and morals, and begin to delimit clearly the field of manners, we shall be by way of discovering how powerful and how easily communicable the sense of manners is, and how efficiently it operates in the very regions where law and morals have so notoriously proven themselves inert. The authority of law and morals does relatively little to build up personal dignity, responsibility and self-respect, while the authority of manners does much… I also venture to emphasize for special notice by the Americanizers and hundred-per-centers among us, the observation of Edmund Burke that “there ought to be a system of manners in every nation which a well-formed mind would be disposed to relish. For us to love our country, our country ought to be lovely.”
and finally, from the collection of anecdotes Mencken always included under the heading Americana,
Effects of the Higher Learning at Yale, as revealed by the answers to a questionnaire submitted to the students there:
Favorite character in world history: Napoleon, 181; Cleopatra, 7; Jeanne d’Arc, 7; Woodrow Wilson, 7; Socrates, 5; Jesus Christ, 4; Mussolini, 3. Favorite prose author: Stevenson, 24; Dumas, 22; Sabatini, 11; Anatole France, 5; Cabell, 5; Bernard Shaw, 4. Favorite magazine: Saturday Evening Post, 94; Atlantic Monthly, 24; New Republic, 3; Time Current History, 3. Favorite political party: Republican, 304; Democratic, 84; none, 22; Independent, 3. Biggest world figure of today: Coolidge, 52; Dawes, 32; Mussolini, 3; Prince of Wales, 24; J. P. Morgan, 15; Einstein, 3; Bernard Shaw, 3. What subject would you like to see added to the curriculum: Elocution and Public Speaking, 24; Business course, 8; Deplomacy, 7; Drama, 4.
Times change in 85 years.
Posted on December 11th, 2010 1 comment
Morality is a set of human emotional traits. The emotional responses we associate with morality exist because they evolved. Morality is, by its very nature, subjective. It can exist only in the form of feelings in individual minds, and has no independent existence as a thing in itself outside of them. Its existence does not depend on any rational thought process or chain of logical deductions; it is fundamentally emotional in nature. We consider a thing good because we feel that it is good. Computers can execute rule-based logical algorithms and arrive at true conclusions, but they do not experience emotions, and therefore they are not moral beings. The perception that something is “really” good is itself a fundamentally emotional response. Without emotion there can be no morality, and no moral judgments. We do not perceive the good as a real, objective thing because it actually is real. We perceive it that way because perceiving it in that fashion made it more likely that our ancestors would survive and reproduce. Because the good is not a real, objective thing, moral judgments cannot be legitimate in themselves or in any way objectively valid.
The above conclusions are, in my opinion at least, the bottom line. In other words, they are true. We can reason about them and come to logical conclusions about whether they will have negative or positive consequences as they relate to some goal or aspiration we might have for ourselves, or for mankind in general, but the truth is indifferent to our goals and aspirations. It remains true regardless. In this post-“Blank Slate” world, as we sit on the shoulders of Darwin and gaze about us, it would seem these truths would be obvious. After all, if we see some rule violated that we associate with “the good,” our minds do not respond by executing a logical algorithm leading to the dispassionate conclusion that it is true that the rule has been violated. Rather, we respond emotionally. We may experience outrage, or become indignant. If we go to a movie, and see the bad guy bite the dust, our response is not limited to the rational observation that a human animal acted in a way that had a greater than zero probability of leading to that outcome, and that, as one of a set of potential outcomes, that outcome (biting the dust) actually did happen. Rather, we again respond emotionally. We may experience gratification, or, if we are really involved in the plot, exultation at the victory of “the good.” In claiming the objective legitimacy of moral judgments, we are really claiming that emotions that evolved in animals with large brains for perfectly understandable reasons, and that are analogous to similar emotions in other animals, have now, for no apparent reason at all, magically come to life on their own, and become objective things independent of the minds that experience them. Logically, that notion is absurd.
These truths, however, are not obvious. They are not obvious to most of the people on the planet, nor are they obvious to those to whom it would seem they should be self-evident: the evolutionary psychologists, neuroanthropologists, ethologists, and others whose research is daily adding to the overwhelming evidence that morality is the result of innate features that are hard-wired in our brains. It’s not surprising, really. If we shed the illusion of an objective, legitimate good, there is much to be lost along with it. We must free ourselves of the overwhelmingly powerful feeling that what we perceive as good is a real thing. With it we must give up once and for all any claim to a logical basis for the immensely satisfying feeling that we are morally superior to others. We must give up all the claims to wealth, status and power that claims to moral superiority or to a superior knowledge of the “real” good imply, whether as religious leaders, partisans of messianic ideologies, or recognized ethics “experts.” No wonder, then, that the delusion of objective good is so hard for us to give up. The problem is that it simply doesn’t exist. No matter how passionately we embrace this falsehood, it will not be transmuted into truth.
Allow me to suggest that it would be wise for us to throw aside our blinkers and embrace the truth instead. By doing so we will not suddenly plunge the world into chaos. We are moral creatures, and will continue to act as moral creatures because that is our nature. Understanding why we act as moral creatures, and the true nature of our moral emotions, will not alter the fact. In our day-to-day interactions with each other, we must act as moral creatures, if only because we lack the cognitive capacity to carefully reason out the logical consequences of every move we make in real time. However, my personal opinion, and one which, it seems to me, follows logically from what I have stated above, is that we should stop trying to apply morality in politics, international relations, or any other modern form of collective interaction between large numbers of people that had no analog at the time our moral emotions evolved. We should also resist attempts by others to apply morality in such situations, other than to the extent that we must take our own nature, and with it our moral nature, into account in constructing a society that is suited to the kind of creatures we are. I suggest that this is a reasonable course of action, not because it is “really good,” but because I consider life a wonderful thing that I wish to savor while I have it, and because I cannot savor it if I am constantly threatened by other human beings.
How is it that I am threatened, or, for that matter, how is it that we are all threatened by continued attempts to apply morality in politics or to any of the other forms of mass social arrangements that have emerged in the modern world, and which are utterly different from anything that existed at the time morality evolved? In the first place, quite obviously, because morality evolved for reasons that have nothing whatsoever to do with the goals that massive political and other organizations, such as modern states, set for themselves. Consequently, there is no apparent reason to expect that acting according to moral emotions will be an effective way of pursuing those goals. There is abundant evidence in the recent history of our species to confirm that doing so is not only ineffective in pursuing those goals, but potentially extremely dangerous.
Consider, for example, Communism. It was embraced by millions of the most intelligent and idealistic people on the planet as the path to “human flourishing,” confirmed as such by the most advanced “scientific” theories. It was a quintessential attempt to apply morality in the context of modern states. For its adherents it represented the incarnation of “the good,” transcending the petty minds of individuals. It ended in disaster, after having caused the deaths of tens of millions of people. In many of the countries it controlled, those killed included a grossly disproportionate number of the most intelligent and productive members of society. These countries, for all practical purposes, beheaded themselves. How is it that this noble attempt to achieve a perfect state of human happiness via the revolutionary imposition of “the good” ended in a debacle? For the same reason that such attempts almost always fail. Human morality is dual in nature. Wherever there is an ultimate “good,” there is always an ultimate “evil” to go right along with it. In the case of Communism, the “evil” was the bourgeoisie. To ensure the triumph of “the good,” it was necessary to wipe out “the evil.” As a result, tens of millions who were unfortunate enough to have a little more than their neighbors, or whose clothes were a little too nice, or whose farms were a little too productive, were murdered. The lives of tens of millions of children were poisoned because their parents were supposed to have been in the wrong class. They were often brutally punished for not taking care to be born into the right social class.
The other obvious example that dominated the 20th century is Nazism. In this case, the German people and their welfare became “the good.” Hitler hardly considered himself an evil man whose goal in life was to deliberately make everyone else as miserable as possible. He passionately believed he represented the ultimate good, and that it was his destiny to lead the German people to a different version of “human flourishing,” thereby acting for the ultimate good of all mankind. In this case, too, the “good” implied an “evil.” The “evil” was the Jews, and the result was the Holocaust.
What about attempts to impose religious versions of morality on society? Ask the tens of millions of victims of religious wars. Ask the countless heretics who were burned. Ask the hundreds of thousands of innocent women who were hanged outside the gates of European cities over the centuries as “witches.” Ask the miserable inhabitants of the Papal States in the 19th century. Ask anyone in Iran today who happens not to be a devout Muslim. Ask the victims of Islamic terrorism.
In spite of the monotonous repetition of these disasters, those who should know better still don’t get it. They are so devoted to the illusion of their own moral goodness that, instead of drawing the seemingly obvious conclusion that morality itself is the problem (or, more accurately, the attempt to apply it in situations utterly divorced from those in which it came into existence, for reasons that have nothing to do with the reasons it evolved), they conclude, against all odds, that the solution is merely a matter of “getting it right.” They are cocksure that they are smarter than the myriads who have tried exactly the same nostrums for achieving “human flourishing” before them. Finally, at long last, they fondly believe they have discovered the “real good,” and it remains only to stuff it down the throats of the rest of us poor benighted souls. Open wide!
I have a better idea. Let’s stop playing with fire. What is the alternative to imposing some bright, new, freshly cobbled together version of morality on society? We have large brains. For starters, we might try using them.