Ardipithecus and Pliocene Progressivism

It’s amusing to see how little time elapsed between the spectacular discovery of the Pliocene primate “Ardi” and the revelation, based on study of her canine teeth, that she was a “New Soviet Woman,” endowed with all the progressive behavioral characteristics pertaining thereto. In particular, we learn that she lived harmoniously as a co-equal partner with small-fanged males who shared their food with her, helped rear her offspring, and spared her the unseemly spectacle of fighting with other males for her favors. No doubt if we find a few more teeth we will discover that she was an innocent victim of imperialism and colonialism perpetrated by less worthy primates who diverged from the direct human line at a very early date, and that her carbon footprint was unusually small for a Pliocene mammal.

Human Enhancement and Morality: Another Day in the Asylum

The Next Big Future site links to a report released by a bevy of professors that, we are told, is to serve “…as a convenient and accessible starting point for both public and classroom discussions, such as in bioethics seminars.” The report itself may be found here. It contains “25 Questions & Answers,” many of which concern moral and ethical issues raised by human enhancement. For example,

1. What is human enhancement?
2. Is the natural/artificial distinction morally significant in this debate?
3. Is the internal/external distinction morally significant in this debate?
4. Is the therapy/enhancement distinction morally significant in this debate?
9. Could we justify human enhancement technologies by appealing to our right to be free?
10. Could we justify enhancing humans if it harms no one other than perhaps the individual?

You get the idea. Now, search through the report and try to find a few clues about what the authors are talking about when they use the term “morality.” There are precious few. Under question 25 (Will we need to rethink ethics itself?) we read,

To a large extent, our ethics depends on the kinds of creatures that we are. Philosophers traditionally have based ethical theories on assumptions about human nature. With enhancements we may become relevantly different creatures and therefore need to re-think our basic ethical positions.

This is coy enough. There is no mention of the basis on which we are supposed to do the re-thinking. If we look through some of the other articles and reports published by the authors, we find further hints. For example, in “Why We Need Better Ethics for Emerging Technologies,” an article by Prof. James H. Moor of Dartmouth published in “Ethics and Information Technology,” we find,

… first, we need realistically to take into account that ethics is an ongoing and dynamic enterprise. Second, we can improve ethics by establishing better collaborations among ethicists, scientists, social scientists, and technologists. We need a multi-disciplinary approach (Brey, 2000). The third improvement for ethics would be to develop more sophisticated ethical analyses. Ethical theories themselves are often simplistic and do not give much guidance to particular situations. Often the alternative is to do technological assessment in terms of cost/benefit analysis. This approach too easily invites evaluation in terms of money while ignoring or discounting moral values which are difficult to represent or translate into monetary terms. At the very least, we need to be more proactive and less reactive in doing ethics.

Great! I’m all for proactivity. But if we are to “do” ethics, on what basis are we to “do” them? If we are to have such a basis, do we not first need to understand the morality on which ethical rules are based? What we have here is another effort by “experts on ethics” who apparently have no clue about the morality that must underlie the ethical rules they discuss so wisely, if those rules are to have any legitimacy. If they do have a clue, they are being extremely careful to make sure we are not aware of it. Apparently we are to trust them because, after all, they are recognized “experts.” They don’t want us to peek at the “man behind the curtain.”

This is an excellent example of what E. O. Wilson was referring to when he inveighed against the failure of these “experts” to “put their cards on the table” in his book, “Consilience.” The authors never inform us whether they believe the morality they refer to with such gravity is an object, a thing-in-itself, or, on the contrary, is an evolved, subjective construct, as their vague allusion to a basis in “human nature” would seem to imply. Like so many other similar “experts” in morality and ethics, they are confident that most people will “know what they mean” when they refer to these things and will not press them to explain themselves. After all, they are “experts.” They have the professorial titles and NSF grants to prove it. When it comes to actually explaining what they mean when they refer to morality, to informing us what they think it actually is, and how and why it exists, they become as vague as the Oracle of Delphi.

Read John Stuart Mill’s “Utilitarianism,” and you will quickly see the difference between the poseurs and someone who knows what he’s talking about. Mill was not able to sit on the shoulders of giants like Darwin and the moral theorists who based their ideas on his work, not to mention our modern neuroscientists. Yet, in spite of the fact that these transformational insights came too late to inform his work, he had a clear and focused grasp of his subject. He knew that it was not enough to simply assume others knew what he meant when he spoke of morality. In reading his short essay we learn that he knew the difference between transcendental and subjective morality, that he was aware of and had thought deeply about the theories of those who claimed (long before Darwin) that morality was a manifestation of human nature, and that one could not claim the validity or legitimacy of moral rules without establishing the basis for that legitimacy. In other words, Mill did lay his cards on the table in “Utilitarianism.” And yet the essay seems strangely apologetic. Often he seems to be saying, “Well, I know my logic is a bit weak here, but I have done at least as well as the others.” Genius that he was, Mill knew that there was an essential something missing from his moral theories. If he had lived a few decades later, I am confident he would have found it.

Those who would be taken seriously when they discuss morality must first make it quite clear they know what morality is. As those who have read my posts on the topic know, I, too, have laid my cards on the table. I consider morality an evolved human trait, with no absolute legitimacy whatsoever beyond that implied by its evolutionary origin at a time long before the emergence of modern human societies, or any notion of transhumanism or human enhancement. As such, it can have no relevance or connection to such topics other than as an emotional response to an issue to which that emotion, an evolved response like all our other emotions, was never “designed” to apply.

On the Irrational Instincts of Psychologists and Anthropologists

William Morton Wheeler was, like E. O. Wilson, an expert on social insects. In his book, “Social Life Among the Insects,” published in 1924, he wrote,

The whole trend of modern thought is toward a greater recognition of the very important and determining role of the irrational and the instinctive, not only in our social but also in our individual lives.

Oddly enough, the same statement would be as accurate today as it was then. Somehow, in the intervening years, we were derailed by the absurd behaviorist psychology of Skinner, Montagu, et al., and the equally ridiculous “Not in Our Genes” biology of Lewontin, Rose, and Kamin. Their work never really made any sense. For the most part, they were political ideologues, and their “science” was whatever was necessary to fit their narratives. For a time, and a long time at that, politics trumped science in psychology and anthropology. For decades, it looked like Trofim Lysenko was winning.

Now, thanks to some remarkable advances, notably in neuroscience but in many other scientific bailiwicks as well, the Montagus and Lewontins find themselves in a niche alongside such other variants of their species as the creation “scientists,” where they have always belonged.

Since we have now come full circle, perhaps it would be well if the psychologists and anthropologists would leave off chasing the latest scientific trends for a time, and look back over their shoulders. They really owe us an explanation. How is it that people who claim to respect scientific truth were capable of deluding themselves and the rest of us for so long? What are the irrational aspects of our nature as human beings that made it possible for major branches of the sciences to be hijacked by political ideologues over a period of decades? Let them explain themselves. It would go a long way towards restoring their credibility.

But Wasn’t Hitler Evil?

Apologists for objective moral codes often seek to make their point by posing questions such as “Wasn’t Hitler absolutely evil?” or “Wasn’t Stalin absolutely evil?” or some variant thereof. The argument is emotional rather than rational: it relies on the manner in which our moral nature is wired in our brains to deny that morality depends on that very wiring for its existence. In other words, they rely on the fact that our brains cause us to perceive morality as an objective thing to argue that, therefore, it really is an objective thing.

Archaeologist Timothy Taylor presents a variant of the “Wasn’t Hitler Evil?” argument in an essay entitled “The Trouble with Relativism,” which appeared in one of Edge.org’s latest publications, “What Have You Changed Your Mind About?” In this case, the “self-evident” evil he cites is the human sacrifice of children practiced by the Incas. According to Taylor,

In Cambridge at the end of the 1970s, I began to be inculcated with the idea that understanding the internal logic and value system of a past culture was the best way to do archaeology and anthropology… A ritual killing was not to be judged bad but considered valid within a different worldview… But what happens when relativism says that our concepts of right and wrong, good and evil, kindness and cruelty, are inherently inapplicable? Relativism self-consciously divests itself of a series of anthropocentric and anachronistic skins – modern, white, Western, male-focused, individualist, scientific (or “scientific”) – to say that the recognition of such value-concepts is radically unstable, the “objective” outsider opinion a worthless myth.

He then goes on to dismantle the historical myths that claimed “that being ritually killed to join the mountain gods was an honor that the Incan rulers accorded only to their own privileged offspring.” In fact, his research team discovered that they were actually “peasant children, who, a year before death, were given the outward trappings of high status and a much improved diet in order to make them acceptable offerings.”

Taking advantage of the moral high ground thus established, Taylor goes on,

We need relativism as an aid to understanding past cultural logic, but it does not free us from a duty to discriminate morally, and to understand that there are regularities in the negatives of human behavior as well as in its positives. In this case, it seeks to ignore what Victor Nell has described as “the historical and cross-cultural stability of the uses of cruelty for punishment, amusement, and social control.” By denying the basis for a consistent underlying algebra of positive and negative, yet consistently claiming the necessary rightness of the internal cultural conduct of “the Other,” relativism steps away from logic into incoherence.

Taylor is mistaken in equating recognition of the subjective nature of morality with “relativism.” I am familiar with the mentality of the people he describes, and I reject it as much as he does. He is quite right in pointing out the inconsistency of defending moral relativism while claiming at the same time that the internal cultural conduct of “the Other” is necessarily right. However, he also “steps from logic into incoherence” himself when he exploits the emotional impact of the murder of children for cynical ends in order to defend a “basis for a consistent underlying algebra of positive and negative.” If, in fact, those who would affirm the objective existence of morality have some logically defensible basis in mind, then, as so eloquently suggested by E. O. Wilson in “Consilience,” they should “lay their cards on the table.” They have not been able to do that to date.

Morality is an evolved trait of human beings. We perceive good and evil as absolutes because that is our nature. That is the way we are programmed to perceive them. In reality, they are subjective mental constructs. No moral revulsion or emotional response, no matter how strong, not even to Hitler’s Holocaust, or Stalin’s mass slaughter, or the ritual murder of children, can convert morality from what it really is into that which we perceive it to be.

The Blogosphere Rediscovers Tarzan

Fausta, Katie Baker, et al. wander off the reservation after discovering they’re not really “new Soviet women” after all. As Karol at Alarming News puts it:

And yet, the fact is that “me Tarzan, you Jane” is ultimately what makes us hot. That’s what these feminists, who are trained to really, truly believe they want a man who is mostly like a woman, admit in these posts: “tee hee, I know I’m not supposed to like this, but I kinda do.” You know why? Evo-freaking-lution. Women like the men who take care of them. Whether it’s put food on the table or beat back the saber-tooth tiger. We’re programmed to crave the man who behaves…like a man.

I know, for you connoisseurs of “pop ethology,” this is a bit down in the weeds, but still, the paradigm shift continues. May the day come soon when the neuroscientists can explain this “programming” at a molecular level. What fun it will be to confront the world’s last, hoary behaviorists with the facts about who and what we really are.

Consequences: The Great Question of Should, Part III

In two earlier posts I explored the consequences of the subjective nature of morality, including some of its ramifications as far as the individual is concerned. In this post we will continue that discussion.

I touched earlier on the virtual impossibility of amoral behavior. We are wired to be moral creatures, and there is a moral context to all our interactions with other human beings. It is for this reason that the argument that religion is necessary because without it we would have no reason to act morally is absurd. We don’t need a reason to act morally. We simply do, because that is our nature, just as it is the nature of other intelligent animals, which act morally even though they can have no idea of the existence of a God.

Morality did not suddenly appear with the evolution of Homo sapiens. Rather, it evolved in other creatures millions of years before we came on the scene. I suspect the expression of morality in human beings represents the interaction of our high intelligence, which evolved in a relatively short time, with predispositions that have undergone only limited change during the same period. One interesting result of this is the fact that we consciously perceive morality as a “thing” having an objective existence of its own, independent of ourselves. An artifact of this perception that we have noted earlier is the adoption of complex “transcendental” moral systems by some of our most famous atheists, who obviously believe their versions of morality represent the “real good,” applicable not only to themselves, but to others as well, in spite of the fact that they lack any logical basis for that belief.

We all act according to our moral nature, almost unconsciously applying rules that correspond to a “good” that seems to be external to and independent of ourselves. I am no different from anyone else in that respect. I can no more act amorally than any other human being. I act according to my own moral principles, just as everyone else does. I have a conscience, I can feel shame, and I can become upset, and even enraged, if others treat me or my own “in-groups” in a way that does not correspond to what I consider “good” or “just.” Anyone doubting that fact need only look through my posts in the archives at Davids Medienkritik. I behave in that way because it is my nature to behave in that way. In fact, if I tried to jettison morality and, instead, rationally weigh each of my actions in accordance with some carefully contrived logical principles, I would only succeed in wasting a great deal of time and making myself appear ludicrous in the process.

However, there are logical consequences to the conclusion that good and evil are not objects that exist on their own, independent of their existence as evolved mental constructs. In the first place, they evolved at a time when the largest social groups were quite small, containing members who were generally genetically related to each other to some extent. They evolved because they promoted the survival of a specific packet of genetic material. That is the only reason they exist. The application of moral standards to the massive human organizations that exist today, such as modern states, is, therefore, logically absurd. Morality evolved in a world where no such organizations existed, and the mere fact that it evolved did not give it any universal legitimacy. We nevertheless attempt to apply morality to international affairs, and to questions of policy within nations involving millions of unrelated people, in spite of the logical disconnect this entails with the reason morality exists to begin with. We do so because that is our nature. We do so not because it is reasonable, but because that is how our minds are programmed. Under the circumstances, assuming that we agree survival is a desirable goal, it would seem we should subject such “moral” behavior to ever-increasing logical scrutiny as the size of the groups we are dealing with increases. Our goal should be to ensure that our actions actually promote the accomplishment of some reasonable goal more substantial than making us feel virtuous because we have complied with some vague notion of a “universal good.”

When it comes to our personal relationships with other individuals or with the smaller groups we must interact with on a daily basis, we must act according to our moral nature, because, as noted above, it would be impractical to act otherwise. In such cases it seems to me that if our goals are to survive and enjoy life in the process, we should act according to a simple moral code that is in accord with our nature, and refrain from attempting to apply to our fellow beings contrived “universal moral standards” that are absurd in the context of the reasons that promoted the evolution of morality in the first place. In other words, we should act in accordance with the well-understood principles of what H. L. Mencken referred to as “common decency.”

In the process, we should not lose sight of the dual nature of our moral programming, which can prompt us to act with hostility towards others in ways that are counterproductive in the context of modern civilization. It would behoove us to take steps to channel such behavior as harmlessly as possible, because it will not go away. We cannot afford to ignore the darker side of our nature, or engage in misguided attempts to “reprogram” ourselves based on the mistaken assumption that human nature is infinitely malleable. We must deal with ourselves as we are, not as we would like ourselves to be. The formulation of complex new systems of morality that purport to be in accord with the demands of the modern world may seem like a noble endeavor. In reality, the formulation of new “goods” always implies the formulation of new “evils.” It would be better to understand the destructive aspects of our nature and deal with them logically rather than by creating ever more refined moral systems. To the extent that such systems fail to take the innate aspects of human behavior into account, they can be dangerous. Consider, for example, the new moral paradigm of Communism, with its “good” proletariat and “bad” bourgeoisie. The practical application of this noble new system resulted in the deaths of 100 million “bourgeoisie,” and what amounted to the national decapitation of Cambodia and the Soviet Union. In view of such recent historical occurrences, the current fashion of demonizing and reacting with moral indignation to those who disagree with us politically would seem to be ill-advised.

Morality is an evolved trait. Our problem is that we perceive it as an independent object, a transcendental thing-in-itself, something that it is not and cannot ever be. We must act according to our moral nature, but let us consult our logical minds in the process.

E. O. Wilson: “Consilience,” Ethics and Fate

I first became aware of the work of E. O. Wilson when he published a pair of books in the 70’s (“Sociobiology” in 1975 and “On Human Nature” in 1978) that placed him in the camp of those who, like Ardrey, insisted on the role of genetically programmed predispositions in shaping human behavior. He touches on some of the issues we’ve been discussing here in one of his more recent works, “Consilience.” In a chapter entitled “Ethics and Religion,” he takes up the two competing fundamental assumptions about ethics that, according to Wilson, “make all the difference in the way we view ourselves as a species.” These two contradictory assumptions can be stated as, “I believe in the independence of moral values,” and “I believe that moral values come from humans alone.” This formulation is somewhat imprecise, as animals other than humans act morally. However, I think the general meaning of what Wilson is saying is clear. He refers to these two schools of thought as the “transcendentalists” and the “empiricists,” respectively. He then goes on to express a sentiment with which I heartily agree:

The time has come to turn the cards face up. Ethicists, scholars who specialize in moral reasoning, are not prone to declare themselves on the foundations of ethics, or to admit fallibility. Rarely do you see an argument that opens with the simple statement: This is my starting point, and it could be wrong. Ethicists instead favor a fretful passage from the particular into the ambiguous, or the reverse, vagueness into hard cases. I suspect that almost all are transcendentalists at heart, but they rarely say so in simple declarative sentences. One cannot blame them very much; it is difficult to explain the ineffable, and they evidently do not wish to suffer the indignity of having their personal beliefs clearly understood. So by and large they steer around the foundation issue altogether.

Here he hits the nail on the head. It’s normal for human beings to be “transcendentalists at heart,” because that’s our nature. We’re wired to think of good and evil as having an objective existence independent of our minds. Unfortunately, that perception is not true, and yet the “scholars who specialize in moral reasoning” appear singularly untroubled by the fact. Someone needs to explain to them that we’re living in the 21st century, not the 18th, and their pronouncements that they “hold these truths to be self-evident” don’t impress us anymore. In the meantime, we’ve had a chance to peek at the man behind the curtain. If they really think one thing is good, and another evil, it’s about time they started explaining why.

Wilson declares himself an empiricist, and yet, as was also evident in his earlier works, he is not quite able to make a clean break with the transcendentalist past. I suspect he has imbibed too deeply at the well of traditional philosophy and theology. As a result, he has far more respect for the logic-free notions of today’s moralists than they deserve. I have a great deal of respect for Martin Luther as one of the greatest liberators of human thought who ever lived, and I revere Voltaire as a man who struck the shackles of obscurantism from the human mind. That doesn’t imply that I have to take Luther’s pronouncements about the Jews or Voltaire’s notions about his deist god seriously.

I once had a friend who, when questioned too persistently about something for which he had no better answer, would reply, “Because there are no bones in ice cream.” The proposition that morality is an evolved human trait seems just as obvious to me as the proposition that there are no bones in ice cream. If anyone cares to dispute the matter with me, they need to begin by putting a carton of ice cream with bones in it on the table. Otherwise I will not take them seriously. The same goes for Wilson’s menagerie of philosophers and theologians. I respect them because, unlike so many others, they took the trouble to think. When it comes to ideas, however, we should respect them not because they are hoary and traditional, but because they are true. We have learned a great deal since the days of Kant and St. Augustine. We cannot ignore what we have learned in the intervening years out of respect for their greatness.

In the final chapter of his book, entitled “To What End,” Wilson discusses topics such as the relationship between environmental degradation and overpopulation, and considers the future of genetic engineering. His comments on the former are judicious enough, and it would be well if the developed countries of the world considered them carefully before continuing along the suicidal path of tolerating massive legal and illegal immigration. As for the latter, here, again, I find myself in agreement with him when he says that, “Once established as a practical technology, gene therapy will become a commercial juggernaut. Thousands of genetic defects, many fatal, are already known. More are discovered each year… It is obvious that when genetic repair becomes safe and affordable, the demand for it will grow swiftly. Some time in the next (21st) century that trend will lead into the full volitional period of evolution… Evolution, including genetic progress in human nature and human capacity, will be from (then) on increasingly the domain of science and technology tempered by ethics and political choice.”

As often happens, Wilson reveals his emotional heart of hearts to us with a bit of hyperbole in his final sentence:

And if we should surrender our genetic nature to machine-aided ratiocination, and our ethics and art and our very meaning to a habit of careless discursion in the name of progress, imagining ourselves godlike and absolved from our ancient heritage, we will become nothing.

This is a bit flamboyant, and it raises the question of who or what gets to decide our “meaning.” Still, Wilson’s work is full of interesting and thought-provoking ideas, and he is well worth reading.

Sam Harris and his Butterfly Net: An Account of the Capture of the “Real, Objective” Good

The human brain is a wonderful survival mechanism. It endows our species with unrivaled powers of reasoning, allowing us to discern truths about subatomic particles and distant planets that our unaided senses can’t even detect. It has also supplied us with self-constructed, subjective “truths” about things that exist only in our own minds, endowing them with a legitimacy and reality of their own. Morality is such a thing. It does not and cannot have an independent existence of its own, but believing that it does has promoted our survival. Therefore, we believe. Our brains are wired to perceive good and evil as real things, and so we do. In spite of our vaunted logical powers, some of the greatest thinkers among us cannot rid themselves of the illusion. At some level they have grasped the truth that everything about us, including our minds, emotions, and predispositions, has evolved because it promoted our survival. On the other hand, they truly believe that one such evolved trait, morality, which we happen to share with many other animals, somehow corresponds to a real thing that has an independent reality of its own. Logically, they cannot justify their belief that good and evil are real, objective things, but, still, they believe it. Nature insists.

The “Big Three” among the “new atheists,” Richard Dawkins, Christopher Hitchens, and Sam Harris, provide interesting examples of the phenomenon. None of them would be any more capable of providing a logical basis for their belief that there is a real, objective good and a real, objective evil, and that they know the real, objective difference between the two, than Euthyphro could demonstrate the same to Socrates. Nonetheless, all three of them are convinced that that which their brains are wired to perceive as real must actually be real. They all believe in the objective existence of good and evil, and they all believe that their own moral standards apply not only to themselves, but to others as well. Read their books and you will find all of them laced with the moral judgments that are the artifacts of this belief.

I have pointed out in earlier posts the logical absurdity of the belief that morality, an evolved emotional trait, not only of humans but of other animals as well, somehow has an existence of its own, independent of the minds that host it. Let us consider how one of the “Big Three,” Sam Harris, has nevertheless managed to convince himself that what he perceives as real must actually be real. Harris is a neuroscience researcher. He set forth his thoughts on the subject in an essay entitled “Brain Science and Human Values,” which recently appeared at the website of the Edge Foundation. After a discussion of the process of discovering scientific truth, Harris asks,

“But what about meaning and morality? Here we appear to move from questions of truth—which have long been in the domain of science if they are to be found anywhere—to questions of goodness. How should we live? Is it wrong to lie? If so, why and in what sense? Which personal habits, uses of attention, modes of discourse, social institutions, economic systems, governments, etc. are most conducive to human well-being? It is widely imagined that science cannot even pose, much less answer, questions of this sort.”

Here, Harris has begun the process of self-obfuscation. Let us set aside the issue of what he actually means by “conducive to human well-being” for the time being and focus on the question of morality. There is no more logical reason to consider that which is “conducive to human well-being” objectively good than there is to consider it objectively good to follow Pythagoras’ admonition to avoid the eating of beans. However, making the logical leap from fact to fiction is no problem for most of us. We “feel” that “human well-being” is a legitimate good. We might even feel the emotion of shame in denying it. If someone demanded that we defend the assertion that “human well-being” is not objectively good, we would likely feel some embarrassment. It is mentally easy for us to associate “human well-being” with “objective good” in this way. It is also illogical.

Instead of simply claiming that good and evil exist because he feels they must exist, Harris adds an intermediate step. He points to a “self-evident” good and props it up as a “gold standard,” as “real good.” In essence, this “gold standard” serves the same purpose as God does for religious believers. They believe that God must really be good, and, because He is the standard of that which is good, His laws must really be good as well. Harris substitutes his “gold standard” for God. It must be “really good,” because, after all, everyone agrees it is good. Who can deny it? Everyone has the same perception, the same conviction, the same feeling. In reality, he is just chasing his tail. Instead of simply claiming that the existence of objective good and evil is self-evident to begin with, he claims that it is self-evident that “human well-being” is an objective good. Once we have accepted this “gold standard,” it follows that, since “human well-being” is “really good,” “real good” must exist as well, as the basis for making that determination in the first place. Having established his “gold standard,” Harris cuts to the chase:

“Much of humanity is clearly wrong about morality—just as much of humanity is wrong about physics, biology, history, and everything else worth understanding. If, as I believe, morality is a system of thinking about (and maximizing) the well being of conscious creatures like ourselves, many people’s moral concerns are frankly immoral.”

In other words, we are to believe that morality isn’t merely a subjective predisposition, but a real thing. It is simply a question of determining scientifically what it is. Once we have done that, then we really should do good and avoid doing evil. Harris continues:

“Morality—in terms of consciously held precepts, social-contracts, notions of justice, etc.—is a relatively recent invention. Such conventions require, at a minimum, language and a willingness to cooperate with strangers, and this takes us a stride or two beyond the Hobbesian ‘state of nature.’”

Here Harris commits the fallacy of associating “consciously held precepts, social contracts, notions of justice, etc.” with morality itself. They are not morality, but merely manifestations of morality in human beings living in the modern world. Morality itself predates human beings by millions of years, and many other animal species besides ourselves act morally. The most significant difference between us and them is that they lack the capacity to speculate about whether morality is objectively real. Indeed, for them, morality is likely a more effective evolutionary adaptation than it is for us. They simply act as they are wired to act, and feel no need to invent objective reasons for their actions in the form of Gods or Harris’ ersatz god, “the imperative to act for the well being of conscious creatures.”

Harris would do well to go back to square one and consider what morality really is. It is an evolved subjective predisposition that exists because it promoted our survival. Furthermore, it promoted our survival at a time when we existed in small communities of genetically related individuals. It is a dual phenomenon. We apply one standard of right and wrong to our interactions with those within our “in-group,” and another standard of right and wrong to “out-groups.” It is reasonable to assume that the wiring in our brain responsible for our predisposition to behave morally, which evolved at a time when we lived in small hunter-gatherer communities, is not ideally suited to similarly promote our survival in a world of gigantic nation-states equipped with nuclear weapons. Instead of understanding this problem and addressing it rationally, Harris claims to have discovered the “real good,” in the form of “that which is conducive to human well-being.” In reality, Harris is as religious as the most fantastical Southern Baptist. The only difference between him and them is that he believes in a “True Good” instead of a true God. He insists that, instead of understanding our own nature and accommodating ourselves to it, we should all be required to change our nature to conform to his fantasy that a scientifically discernible version of this “True Good” exists. In other words, he wants to take a giant step backwards to the era of the behaviorists and the “new Soviet man,” when it was assumed that human nature was infinitely malleable and could be molded as needed to conform to whatever arbitrary definition of “good” one chose to adopt. He won’t succeed any more than the Communists or all the other architects of heavens on earth have succeeded. Human nature is what it is, and won’t jump through hoops, even for Sam Harris. He thinks he can simply wave his hands, and inconvenient aspects of human morality, such as the Amity-Enmity Complex, will just disappear. Others have tried that before him. It doesn’t work. It not only doesn’t work, but, in a world full of nuclear weapons, it is extremely dangerous. If we are to avoid self-destruction, it will behoove us to understand our own nature. Creating “brave new moralities” out of thin air and insisting that others conform to them does not promote such understanding. Rather, it amounts to a deliberate burying of our heads in the sand.

I can only suggest that Harris go back to his neuroscientific research. Who knows, one day he may turn up at my doorstep and present me with a vial of distilled “Good”. However, I rather suspect it’s more likely he will eventually come to a more rational understanding of human morality. At least I hope he will, and I hope the same for his two illustrious peers, Hitchens and Dawkins. It happens that the latter has a wonderfully designed website with forums for the philosophically minded. It pleases me to see that, based on their comments, some of the brighter visitors to these forums “get it” when it comes to morality. I suggest that Harris, Dawkins, Hitchens, and the rest of the intellectual gentry at Edge.org take the time to read them.

Personal Genetic Testing – One More Step

Personal genetic testing began mainly as a tool for genealogists. The next step, testing for health risks, has already been taken. As the technology continues to develop, individuals will gain increasing control over their own genetic futures. They will, that is, unless the many who, for one reason or another, are opposed to these developments are able to stop them. The only viable way to do that is by enlisting the power of the state. They will certainly make the attempt. It will be interesting to see if they succeed. The forces that have driven human evolution for hundreds of thousands of years have, for all practical purposes, ceased to exist. The outcome of the battle will determine what they will be in the future.