The Blogosphere Rediscovers Tarzan

Fausta and Katie Baker wander off the reservation after discovering they’re not really “new Soviet women” after all. As Karol at Alarming News puts it:

And yet, the fact is that “me Tarzan, you Jane” is ultimately what makes us hot. That’s what these feminists, who are trained to really, truly believe they want a man who is mostly like a woman, admit in these posts: “tee hee, I know I’m not supposed to like this, but I kinda do.” You know why? Evo-freaking-lution. Women like the men who take care of them. Whether it’s put food on the table or beat back the saber-tooth tiger. We’re programmed to crave the man who behaves…like a man.

I know, for you connoisseurs of “pop ethology,” this is a bit down in the weeds, but still, the paradigm shift continues. May the day come soon when the neuroscientists can explain this “programming” at a molecular level. What fun it will be to confront the world’s last, hoary behaviorists with the facts about who and what we really are.


Consequences: The Great Question of Should, Part III

In two earlier posts I explored the consequences of the subjective nature of morality, including some of its ramifications for the individual. In this post we will continue that discussion.

I touched earlier on the virtual impossibility of amoral behavior. We are wired to be moral creatures, and there is a moral context to all our interactions with other human beings. It is for this reason that the argument that religion is necessary because without it we would have no reason to act morally is absurd. We don’t need a reason to act morally. We just do because that is our nature, just as it is the nature of other more intelligent animals that act morally even though they can have no idea of the existence of a God.

Morality did not suddenly appear with the evolution of Homo sapiens. Rather, it evolved in other creatures millions of years before we came on the scene. I suspect the expression of morality in human beings represents the interaction of our high intelligence, which evolved in a relatively short time, with predispositions that have undergone only limited change during the same period. One interesting result of this is the fact that we consciously perceive morality as a “thing” having an objective existence of its own independent of ourselves. An artifact of this perception that we have noted earlier is the adoption of complex “transcendental” moral systems by some of our most famous atheists, who obviously believe their versions of morality represent the “real good,” applicable not only to themselves, but to others as well, in spite of the fact that they lack any logical basis for that belief.

We all act according to our moral nature, almost unconsciously applying rules that correspond to a “good” that seems to be external to and independent of ourselves. I am no different than anyone else in that respect. I can no more act amorally than any other human being. I act according to my own moral principles, just as everyone else does. I have a conscience, I can feel shame, and I can become upset, and even enraged, if others treat me or my own “in-groups” in a way that does not correspond to what I consider “good” or “just.” Anyone doubting that fact need only look through my posts in the archives at Davids Medienkritik. I behave in that way because it is my nature to behave in that way. In fact, if I tried to jettison morality and, instead, rationally weigh each of my actions in accordance with some carefully contrived logical principles, I would only succeed in wasting a great deal of time and making myself appear ludicrous in the process.

However, there are logical consequences to the conclusion that good and evil are not objects that exist on their own, independent of their existence as evolved mental constructs. In the first place, they evolved at a time when the largest social groups were quite small, containing members who were generally genetically related to each other to some extent. They evolved because they promoted the survival of a specific packet of genetic material. That is the only reason they exist. The application of moral standards to the massive human organizations that exist today, such as modern states, is, therefore, logically absurd. Morality evolved in a world where no such organizations existed, and the mere fact that it evolved did not give it any universal legitimacy. We nevertheless attempt to apply morality to international affairs, and to questions of policy within nations involving millions of unrelated people, in spite of the logical disconnect this entails with the reason morality exists to begin with. We do so because that is our nature. We do so not because it is reasonable, but because that is how our minds are programmed. Under the circumstances, assuming that we agree survival is a desirable goal, it would seem we should subject such “moral” behavior to ever-increasing logical scrutiny as the size of the groups we are dealing with increases. Our goal should be to ensure that our actions actually promote the accomplishment of some reasonable goal more substantial than making us feel virtuous because we have complied with some vague notion of a “universal good.”

When it comes to our personal relationships with other individuals or with the smaller groups we must interact with on a daily basis, we must act according to our moral nature, because, as noted above, it would be impractical to act otherwise. In such cases it seems to me that if our goals are to survive and enjoy life in the process, we should act according to a simple moral code that is in accord with our nature and refrain from attempting to apply contrived “universal moral standards” to our fellow beings that are absurd in the context of the reasons that promoted the evolution of morality in the first place. In other words, we should act in accordance with the well understood principles of what H. L. Mencken referred to as “common decency.”

In the process, we should not lose sight of the dual nature of our moral programming, which can prompt us to act with hostility towards others that is counterproductive in the context of modern civilization. It would behoove us to take steps to channel such behavior as harmlessly as possible, because it will not go away. We cannot afford to ignore the darker side of our nature, or engage in misguided attempts to “reprogram” ourselves based on the mistaken assumption that human nature is infinitely malleable. We must deal with ourselves as we are, not as we wish we were. The formulation of complex new systems of morality that purport to be in accord with the demands of the modern world may seem like a noble endeavor. In reality, the formulation of new “goods” always implies the formulation of new “evils.” It would be better to understand the destructive aspects of our nature and deal with them logically rather than by creating ever more refined moral systems. To the extent that they fail to take the innate aspects of human behavior into account, these can be dangerous. Consider, for example, the new moral paradigm of Communism, with its “good” proletariat and “bad” bourgeoisie. The practical application of this noble new system resulted in the deaths of 100 million “bourgeoisie,” and what amounted to the national decapitation of Cambodia and the Soviet Union. In view of such recent historical occurrences, the current fashion of demonizing and reacting with moral indignation to those who disagree with us politically would seem to be ill-advised.

Morality is an evolved trait. Our problem is that we perceive it as an independent object, a transcendental thing-in-itself, something that it is not and cannot ever be. We must act according to our moral nature, but let us consult our logical minds in the process.

The Left and its Holy Causes: The Pose is Everything

As Byron York (via Instapundit) points out,

I attended the first YearlyKos convention, in 2006, and have kept up with later ones, and it’s safe to say that while people who attended those gatherings couldn’t stand George W. Bush in general, their feelings were particularly intense when it came to opposing the war in Iraq. It animated their activism; they hated the war, and they hated Bush for starting it. They weren’t that fond of the fighting in Afghanistan, either. Now, with Obama in the White House, all that has changed. . . . Not too long ago, with a different president in the White House, the left was obsessed with America’s wars. Now, they’re not even watching.

Instapundit adds, “Yeah, funny how the fierce moral urgency drained out of the antiwar movement as soon as a Democrat was elected President.” I suspect the level of “fierce moral urgency” has more to do with personalities than parties. After all, the level of antiwar activism on the left was much greater under Johnson, another Democratic president, than it ever was under Bush. Of course, Johnson lacked Obama’s charisma, but I suspect that the main driver of the left’s “noble commitment to peace” in the 60’s was fear of the draft. Once the draft went away, the level of devotion to the cause of world peace became a great deal more subdued.

In any case, it’s obvious that the level of “moral urgency” of the left’s assorted holy causes has more to do with emotional posing than logic. For the time being, peace must take a back seat to the health care issue, at least until the “progressives” succeed in enlisting state power to force their version of “compassion” on the rest of us. Meanwhile, Cindy Sheehan’s blog has become strangely inactive over at Huffpo, although I suspect she’ll surface again at some point as such useful idiots often do.

Emotion trumps reason when it comes to the left’s other pet causes as well. It never bothered them a couple of years ago that hooded anarchists who threatened violence to counter-demonstrators always tagged along at their “peace demonstrations,” but now let grandma and grandpa hold up signs and get a little raucous at a town hall meeting and they suddenly become “enraged, crazy” nut cases, mindless fools manipulated by “astroturfers.” Meanwhile, they would have us believe that they are really serious about reducing greenhouse gases while they continue to oppose nuclear power, the one most effective step we could take to do just that. They preach to us about saving the environment but, at the same time, wax eloquent in their promotion of illegal immigration to the US and other heavily industrialized countries in spite of the massive increase in global environmental degradation that this entails.

Irrational support for holy causes is hardly a monopoly of the left, although it tends to be a more dangerous characteristic of those who want to change the status quo than of those who want to leave it alone. I suspect political proclivities in general are better understood as emotionally conditioned behavior than as a logical response to a given situation. Perhaps we all have innate psychological characteristics that make it more or less likely that we will tend to adopt a “liberal” as opposed to a “conservative” world view. Once we do, our opinions on any given subject will tend to be aligned with the prevailing dogma of our group. Logic will only be brought in as an afterthought to prop up these highly predictable “opinions.”

In a later post I will revisit this subject in the context of an earlier day.

E. O. Wilson: “Consilience,” Ethics and Fate

I first became aware of the work of E. O. Wilson when he published a pair of books in the 70’s (“Sociobiology” in 1975 and “On Human Nature” in 1978) that placed him in the camp of those who, like Ardrey, insisted on the role of genetically programmed predispositions in shaping human behavior. He touches on some of the issues we’ve been discussing here in one of his more recent works, “Consilience.” In a chapter entitled “Ethics and Religion,” he takes up the two competing fundamental assumptions about ethics that, according to Wilson, “make all the difference in the way we view ourselves as a species.” These two contradictory assumptions can be stated as, “I believe in the independence of moral values,” and “I believe that moral values come from humans alone.” This formulation is somewhat imprecise, as animals other than humans act morally. However, I think the general meaning of what Wilson is saying is clear. He refers to these two schools of thought as the “transcendentalists” and the “empiricists,” respectively. He then goes on to express a sentiment with which I very heartily agree:

The time has come to turn the cards face up. Ethicists, scholars who specialize in moral reasoning, are not prone to declare themselves on the foundations of ethics, or to admit fallibility. Rarely do you see an argument that opens with the simple statement: This is my starting point, and it could be wrong. Ethicists instead favor a fretful passage from the particular into the ambiguous, or the reverse, vagueness into hard cases. I suspect that almost all are transcendentalists at heart, but they rarely say so in simple declarative sentences. One cannot blame them very much; it is difficult to explain the ineffable, and they evidently do not wish to suffer the indignity of having their personal beliefs clearly understood. So by and large they steer around the foundation issue altogether.

Here he hits the nail on the head. It’s normal for human beings to be “transcendentalists at heart,” because that’s our nature. We’re wired to think of good and evil as having an objective existence independent of our minds. Unfortunately, that perception is not true, and yet the “scholars who specialize in moral reasoning” appear singularly untroubled by the fact. Someone needs to explain to them that we’re living in the 21st century, not the 18th, and their pronouncements that they “hold these truths to be self-evident” don’t impress us anymore. In the meantime, we’ve had a chance to peek at the man behind the curtain. If they really think one thing is good, and another evil, it’s about time they started explaining why.

Wilson declares himself an empiricist, and yet, as was also evident in his earlier works, he is not quite able to make a clean break with the transcendentalist past. I suspect he has imbibed too deeply at the well of traditional philosophy and theology. As a result, he has far more respect for the logic-free notions of today’s moralists than they deserve. I have a great deal of respect for Martin Luther as one of the greatest liberators of human thought who ever lived, and I revere Voltaire as a man who struck the shackles of obscurantism from the human mind. That doesn’t imply that I have to take Luther’s pronouncements about the Jews or Voltaire’s notions about his deist god seriously.

I once had a friend who, when questioned too persistently about something for which he had no better answer would reply, “Because there are no bones in ice cream.” The proposition that morality is an evolved human trait seems just as obvious to me as the proposition that there are no bones in ice cream. If anyone cares to dispute the matter with me, they need to begin by putting a package with bones on the table. Otherwise I will not take them seriously. The same goes for Wilson’s menagerie of philosophers and theologians. I respect them because, unlike so many others, they took the trouble to think. When it comes to ideas, however, we should respect them not because they are hoary and traditional, but because they are true. We have learned a great deal since the days of Kant and St. Augustine. We cannot ignore what we have learned in the intervening years out of respect for their greatness.

In the final chapter of his book, entitled “To What End,” Wilson discusses topics such as the relationship between environmental degradation and overpopulation, and considers the future of genetic engineering. His comments on the former are judicious enough, and it would be well if the developed countries of the world considered them carefully before continuing along the suicidal path of tolerating massive legal and illegal immigration. As for the latter, here, again, I find myself in agreement with him when he says that, “Once established as a practical technology, gene therapy will become a commercial juggernaut. Thousands of genetic defects, many fatal, are already known. More are discovered each year… It is obvious that when genetic repair becomes safe and affordable, the demand for it will grow swiftly. Some time in the next (21st) century that trend will lead into the full volitional period of evolution… Evolution, including genetic progress in human nature and human capacity, will be from (then) on increasingly the domain of science and technology tempered by ethics and political choice.”

As often happens, Wilson reveals his emotional heart of hearts to us with a bit of hyperbole in his final sentence:

And if we should surrender our genetic nature to machine-aided ratiocination, and our ethics and art and our very meaning to a habit of careless discursion in the name of progress, imagining ourselves godlike and absolved from our ancient heritage, we will become nothing.

This is a bit flamboyant, and raises the question of who or what gets to decide our “meaning.” Still, Wilson’s work is full of interesting and thought-provoking ideas, and he is well worth reading.

Sam Harris and his Butterfly Net: An Account of the Capture of the “Real, Objective” Good

The human brain is a wonderful survival mechanism. It endows our species with unrivaled powers of reasoning, allowing us to discern truths about subatomic particles and distant planets that our unaided senses can’t even detect. It has also supplied us with self-constructed, subjective “truths” about things that exist only in our own minds, endowing them with a legitimacy and reality of their own. Morality is such a thing. It does not and cannot have an independent existence of its own, but believing that it does has promoted our survival. Therefore, we believe. Our brains are wired to perceive good and evil as real things, and so we do. In spite of our vaunted logical powers, some of the greatest thinkers among us cannot rid themselves of the illusion. At some level they have grasped the truth that everything about us, including our minds, emotions, and predispositions, has evolved because it has promoted our survival. On the other hand, they truly believe that one such evolved trait, morality, which we happen to share with many other animals, somehow corresponds to a real thing that has an independent reality of its own. Logically, they cannot justify their belief that good and evil are real, objective things, but, still, they believe it. Nature insists.

The “Big Three” among the “new atheists,” Richard Dawkins, Christopher Hitchens, and Sam Harris, provide interesting examples of the phenomenon. None of them would be any more capable of providing a logical basis for their belief that there is a real, objective good and a real, objective evil, and that they know the real, objective difference between the two, than Euthyphro was of demonstrating the same to Socrates. Nonetheless, all three of them are convinced that that which their brains are wired to perceive as real must actually be real. They all believe in the objective existence of good and evil, and they all believe that their own moral standards apply not only to themselves, but to others as well. Read their books and you will find all of them laced with the moral judgments that are the artifacts of this belief.

I have pointed out in earlier posts the logical absurdity of the belief that morality, an evolved emotional trait, not only of humans but of other animals as well, somehow has an existence of its own, independent of the minds that host it. Let us consider how one of the “Big Three,” Sam Harris, has nevertheless managed to convince himself that what he perceives as real must actually be real. Harris is a neuroscience researcher. He set forth his thoughts on the subject in an essay entitled, “Brain Science and Human Values,” that recently appeared at the website of the Edge Foundation. After a discussion of the process of discovering scientific truth, Harris asks,

“But what about meaning and morality? Here we appear to move from questions of truth—which have long been in the domain of science if they are to be found anywhere—to questions of goodness. How should we live? Is it wrong to lie? If so, why and in what sense? Which personal habits, uses of attention, modes of discourse, social institutions, economic systems, governments, etc. are most conducive to human well-being? It is widely imagined that science cannot even pose, much less answer, questions of this sort.”

Here, Harris has begun the process of self-obfuscation. Let us set aside the issue of what he actually means by “conducive to human well-being” for the time being and focus on the question of morality. There is no more logical reason to consider that which is “conducive to human well-being” objectively good than there is to consider it objectively good to follow Pythagoras’ admonition to avoid the eating of beans. However, making the logical leap from fact to fiction is no problem for most of us. We “feel” that “human well-being” is a legitimate good. We might even feel the emotion of shame in denying it. If someone demanded that we defend the assertion that “human well-being” is not objectively good, we would likely feel some embarrassment. It is mentally easy for us to associate “human well-being” with “objective good” in this way. It is also illogical.

Instead of simply claiming that good and evil exist because he feels they must exist, all Harris is doing is adding an intermediate step. He points to a “self-evident” good and props it up as a “gold standard,” as “real good.” In essence, this “gold standard” serves the same purpose as God does for religious believers. They believe that God must really be good, and, because He is the standard of that which is good, His laws must really be good as well. Harris substitutes his “gold standard” for God. It must be “really good,” because, after all, everyone agrees it is good. Who can deny it? Everyone has the same perception, the same conviction, the same feeling. In reality, he is just chasing his tail. Instead of simply claiming that the existence of objective good and evil is self-evident to begin with, he claims that it is self-evident that “human well-being” is an objective good. Once we have accepted this “gold standard,” it follows that, since we have established that it is “really good,” then “real good” must exist as well as the basis for making this determination in the first place. Once he has established this “gold standard,” Harris cuts to the chase:

“Much of humanity is clearly wrong about morality—just as much of humanity is wrong about physics, biology, history, and everything else worth understanding. If, as I believe, morality is a system of thinking about (and maximizing) the well being of conscious creatures like ourselves, many people’s moral concerns are frankly immoral.”

In other words, we are to believe that morality isn’t merely a subjective predisposition, but a real thing. It is simply a question of determining scientifically what it is. Once we have done that, then we really should do good and avoid doing evil. Harris continues:

“Morality—in terms of consciously held precepts, social-contracts, notions of justice, etc.—is a relatively recent invention. Such conventions require, at a minimum, language and a willingness to cooperate with strangers, and this takes us a stride or two beyond the Hobbesian ‘state of nature.’”

Here Harris commits the fallacy of associating “Consciously held precepts, social contracts, notions of justice, etc.,” with morality itself. They are not morality, but merely manifestations of morality in human beings living in the modern world. Morality itself predates human beings by millions of years, and many other animal species act morally in addition to ourselves. The most significant difference between us and them is that they lack the capacity to speculate about whether morality is objectively real. Indeed, for them, morality is likely a more effective evolutionary adaptation than it is for us. They simply act as they are wired to act, and feel no need to invent objective reasons for their actions in the form of Gods or Harris’ ersatz god, “the imperative to act for the well being of conscious creatures.”

Harris would do well to go back to square one and consider what morality really is. It is an evolved subjective predisposition that exists because it promoted our survival. Furthermore, it promoted our survival at a time when we existed in small communities of genetically related individuals. It is a dual phenomenon. We apply one standard of right and wrong to our interactions with those within our “in-group,” and another standard of right and wrong to “out-groups.” It is reasonable to assume that the wiring in our brain responsible for our predisposition to behave morally, which evolved at a time when we lived in small hunter-gatherer communities, is not ideally suited to similarly promote our survival in a world of gigantic nation states equipped with nuclear weapons. Instead of understanding this problem and addressing it rationally, Harris claims to have discovered the “real good,” in the form of “that which is conducive to human well-being.” In reality, Harris is as religious as the most phantastical Southern Baptist. The only difference between him and them is that he believes in a “True Good” instead of a true God. He insists that, instead of understanding our own nature and accommodating ourselves to it, we should all be required to change our nature to conform to his phantasy that a scientifically discernible version of this “True Good” exists. In other words, he wants to take a giant step backwards to the era of the behaviorists and the “new Soviet man,” when it was assumed that human nature was infinitely malleable and could be molded as needed to conform to whatever arbitrary definition of “good” one chose to adopt. He won’t succeed any more than the Communists or all the other architects of heavens on earth have succeeded. Human nature is what it is, and won’t jump through hoops, even for Sam Harris. He thinks he can simply wave his hands, and inconvenient aspects of human morality, such as the Amity-Enmity Complex, will just disappear.
Others have tried that before him. It doesn’t work. It not only doesn’t work, but, in a world full of nuclear weapons, it is extremely dangerous. If we are to avoid self destruction, it will behoove us to understand our own nature. Creating “brave new moralities” out of thin air and insisting that others conform to them does not promote such understanding. Rather, it amounts to a deliberate burying of our heads in the sand.

I can only suggest that Harris go back to his neuroscientific research. Who knows, one day he may turn up at my doorstep and present me with a vial of distilled “Good.” However, I rather suspect it’s more likely he will eventually come to a more rational understanding of human morality. At least I hope he will, and I hope the same for his two illustrious peers, Hitchens and Dawkins. It happens that the latter has a wonderfully designed website with forums for the philosophically minded. It pleases me to see that, based on their comments, some of the brighter visitors to these forums “get it” when it comes to morality. I suggest that Harris, Dawkins, Hitchens, and the rest of the intellectual gentry take the time to read them.

Another Paradigm Shifts: The Hunting Hypothesis, Ardrey, and “Pop Ethology”

In 1976, Robert Ardrey published the last in a series of books about the evolution of human nature, entitled “The Hunting Hypothesis.” Ardrey was one of the great thinkers of the 20th century. Unfortunately, his thoughts were not politically correct at the time. They posed a direct challenge to any number of the ideological sacred cows of belief systems ranging from behaviorist psychology to Marxism. They implied that human nature was not infinitely malleable, but based on innate predispositions that rendered mankind unsuitable for the various and sundry utopias the ideologues were cobbling together. In a word, Ardrey had positioned himself squarely in the out-group of all these ideologically defined in-groups. A great collective shriek went up. As usual in such cases, Ardrey’s challenge was not met with dispassionate logic. Rather, he was vilified as a “fascist,” ridiculed as a “pop ethologist,” and denounced as a dilettante playwright who dared to invade the territory of “real scientists.” One would do well to go back and read his books today, because, as it happens, Ardrey was right and the ideologues posing as “scientists” who vilified him were wrong.

In particular he was right about the hunting hypothesis. The best argument his opponents could come up with against it was the absurd claim that, other than a few tortoises and other slow-moving animals, our early meat eating had been limited to scavenging. The idea that the rapid growth of brains with ever-increasing energy requirements could have been fueled by the scavenging of four-foot-tall, slow-moving creatures who had somehow managed to beat sharp-eyed vultures and speedy hyenas to their feasts was really as absurd then as it is now. Ardrey demolished the notion in the first chapter of his book, but, like a dead man walking, it staggered on for years, propped up by the bitter faith of the ideologues.

I suspected at the time “The Hunting Hypothesis” was published that Ardrey and thinkers like him would eventually be vindicated, assuming free research could continue without ideologically imposed restraints. I never imagined it would happen so soon. It’s still hard for me to believe that we’ve passed through such a thorough paradigm shift, and I’m continually surprised when I see articles such as this one, entitled “Pre-humans had Stomach Cramps,” that appeared on the website of the German magazine “Der Spiegel” today. Among its matter-of-factly presented paragraphs regarding the meat eating habits of Australopithecus afarensis, a hominid that lived more than two million years ago, one finds,

The question of when meat consumption began is important because of its association with the development of a larger brain in pre- and early humans. In fact, the human brain is three times as big as that of a chimpanzee. In order to build up an organ of such dimensions, a very large and continuous supply of nourishment must be guaranteed, and that requires meat.

Hunting is the only way of systematically bringing down animals, and this, in turn, assumes a bigger brain. As with the question of what came first, the chicken or the egg, one can’t be sure what came first, meat eating or a larger brain. However, anthropologists assume that, in the beginning, there must have been at least occasional consumption of meat, because, without it, the brain could not have expanded in volume for purely physical reasons.

All this is presented in deadpan fashion, as if no other opinion could ever have prevailed, or the matter could ever have been the subject of the least controversy. It is sad that Ardrey did not live to see it.

And the moral of the story? Perhaps we should recall the words of T. S. Eliot from “Little Gidding,”

We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.

We live too much in the present, breathlessly awaiting the latest news from the worlds of science and politics. Occasionally, we would do well to recall that some very bright people, with a very different perspective, not to mention very different standards of political correctness, actually lived before our time. It would behoove us to learn from them if we really want to understand the time we’re in now. Never accept the moral certainties of today. Go back to the sources, and find out for yourself.

Even the Psychologists have Noticed Human Nature!

That invaluable bloodhound of the blogosphere, Instapundit, turned up another interesting link this morning. It turned out to be an article on the website of “Psychology Today.” Now it happens that I was actually a subscriber to PT decades ago, but I stopped reading it after concluding that, if I really wanted to learn something about psychology, my time would be much more profitably spent reading Stendhal. My sedate, philosophical eyebrow rose almost a full notch when, in reading the article in question, I found passages such as,

Most journalists take a number of psychology, sociology, political science, and humanities courses during their early years in college. Unfortunately, these courses have long served as ideological training programs—ignoring biological sources of self-serving, corrupt, and criminal behavior for a number of reasons, including lack of scientific training; postmodern, antiscience bias; and well-intentioned, facts-be-damned desire to have their students view the world from an egalitarian perspective.

But, having worked among the Soviets, I know that large groups of very intelligent people can fall into a collective delusion that what they are doing in certain areas is the right thing, when it’s actually not right at all. It’s rather like the Skinnerian viewpoint in psychology. For a full half century, psychologists insisted it wasn’t proper to posit anything going on inside people’s heads. Advances in psychology ground to a halt during that time, but it was impossible to convince mainstream psychologists that there was anything wrong with their approach. After all, everybody was using Skinner’s approach, and everybody couldn’t be wrong.

Thinking it must be an aberration, or, perhaps, an example of the tokenism so often found in the mainstream media today, I took a closer look at the PT website. Eureka! I soon began turning up links like this. Evolutionary psychology at Psychology Today?! Can you say paradigm shift?

Well, it’s nice to see that progress actually happens, even in psychology, although I suspect I’ll still consult Stendhal as my primary source for the time being. Meanwhile, it would be fitting if all the geniuses in the field who had their heads up their collective behaviorist rectums back in the ’60s and ’70s would visit Robert Ardrey’s grave, perhaps decorate it with a rose or two, and murmur, “Sorry for all the abuse, old man. You were right, and we were wrong.”

On the “Morality” of Nuclear Weapons

There is an interesting post over at ArmsControlWonk entitled “Morality and the Bomb.” Key question posed in the article:

“One notable aspect of the current abolitionist wave is that it is powered by national interest arguments, not moral considerations. Is this a good thing, or a bad thing?”

My response:

“It is not a good thing or a bad thing, but a logical thing. Morality is an evolved trait that exists because it promoted our survival at a time when we existed as small communities of hunter/gatherers. Attempts to apply it to the nuclear weapons debate are logically absurd. The basic issue here is very simple. Is it desirable to survive? If so, how should we deal with nuclear weapons?”

This is a good example of an instance in which it’s necessary to step back from morality and think. Morality has a great deal more to do with emotion than logic. It is subjective, existing only in the minds that host it; it has no objective existence of its own. It exists as an evolved trait of our species because it promoted our survival. It did not evolve in response to the threat of nuclear weapons. Therefore, assuming one actually does want to survive, it would be illogical to apply it to the nuclear weapons debate. This is an instance in which one must disconnect the issue from moral considerations and consider logically what course of action will best promote one’s survival. Survival, after all, explains why morality exists to begin with. To the extent that morality doesn’t promote our survival, it is pointless. There can be nothing more immoral than failing to survive.

Genetic Engineering and the Brave New World of Transhuman Machines

I’ve been reading through a collection of essays on the future of science entitled “What’s Next,” edited by Max Brockman. Today I’ll pick up where I left off in an earlier post, and look at a piece entitled “How to Enhance Human Beings,” by Nick Bostrom.

Once upon a time, in the days before the Nazi paradigm shift, eugenics used to be a topic of polite conversation. Now, of course, the Holy Mother Church of public opinion has spoken on the subject, and only the obvious evildoers among us dare to use the term any more, especially when children are present. Nevertheless, there were some spirited debates on the subject before it became obvious that it was necessary to restrict freedom of speech on the matter for our own good. I have unearthed a few interesting examples, both pro and con, in my archaeological peregrinations, and will post them for your amusement and edification one of these days.

In any event, the subject is now moot. Eugenics has gone the way of the horse and buggy. We are now, or soon will be, able to vote with our feet, or with our genes, as the case may be. Depending on whether our tastes run to biological or mechanical tinkering, we are promised a range of options for ourselves or our offspring to enhance everything from intelligence to lifespan. The emerging possibilities have already turned up in popular culture in video games such as Bioshock, movies such as Gattaca, and the novels of James Patterson. As one might expect, ethical debates are raging over these technologies. As Nick Bostrom puts it in his essay,

The belief in nature’s wisdom – and corresponding doubts about the prudence of tampering with nature, especially human nature – often manifests as diffuse moral objections to enhancement. Many people have intuitions about the superiority of “the natural” and the troublesomeness of human hubris. Some might base these ideas on theological doctrine, but often there is no such underpinning; often there is nothing more than a discomfort with altering the status quo.

To a large extent, these debates are also moot. Parents are incredibly competitive when it comes to putting their children in better schools, or even on cheerleading squads. Offered the choice between having their children become the enhanced movers and shakers of tomorrow, or the unenhanced restroom attendants and parking valets, they are likely to choose the former. This will be especially true in developed countries where the number of children one chooses to have is often limited by their expense, and in countries like China that legally limit the size of one’s family. Under the circumstances, people are likely to be as indifferent to moral arguments against enhancement as they were to moral arguments against alcohol during Prohibition. The new technology may be used above or below the state’s legal radar, but it will be used.

Bostrom has devoted some thought to the question of whether particular enhancements are advisable or not, considering the matter more from a practical than a moral perspective. He has come up with a system of rules which he calls the evolutionary-optimality challenge. They are discussed in a paper he has posted at his website, and seem a reasonable start on a subject that is likely to attract a lot more attention in coming years.

In the final paragraph of his essay, Bostrom takes up the more speculative question of building “entirely artificial systems of equal complexity and performance” to the human organism. Continuing along these lines, he writes:

At some stage, we may learn how to design new organs and bodies ab initio. Someday we may no longer even rely on biological material to implement our bodies and minds. Freed from most practical limitations, the task would then become to make wise use of our powers to self-modify. In other words, the challenge would shift from being primarily scientific to being primarily moral. If that moral task seems comparatively trivial from our current vantage point, this might reflect our present immaturity.

One hopes he is merely indulging in some end-of-article hyperbole here. If not, one must ask, “Whose morality?” In other words, this is another example of the “objective morality” fallacy I have referred to earlier: the assumption that, because we perceive morality as real and objective, it actually is real and objective. Morality is an evolved characteristic that exists in human beings because it has promoted our survival. Bostrom makes the common mistake of assuming that, because he perceives morality as independent of his mind, it actually is independent of his mind, floating out there in space as a real, objective thing in itself. He makes the further error of confusing his conscious mind with his genetic material. Morality did not evolve because it promoted the survival of conscious minds. It evolved because it promoted the survival of genetic material. As I have noted earlier, nothing can reasonably be considered more immoral than failing to survive. The idea that one could somehow serve a profound moral cause by accepting genetic death and transferring the mind, an ancillary characteristic that evolved only because it, too, promoted the survival of that genetic material, to a machine is a logical aberration.

The Amity-Enmity Complex: Does This Ring a Bell?

Every habitué of Internet forums and blogs should be very familiar with the kind of behavior Phil Bowermaster refers to in this comment left in response to a post on transhumanism at Accelerating Future:

But that’s not to say that technology has played no role in the recent evolution of political discourse. The rise of the blogosphere and sites like Daily Kos and Free Republic have established a new “accelerated” rhetorical framework for politics which now seems to be more or less universally applied. The basic assumption behind the framework is that there is Our Group and then there is the Other. Any ideas from the Other are subjected to a three-step analysis and response:

1. Hysteria / overreaction

2. Vilification

3. Condemnation

This process has worked great for the political blogs in drawing in huge masses of eager readers, mostly the same people who think they’re up to date on current events because they watch The Colbert Report or listen to Rush Limbaugh.

Does it ring a bell?