The world as I see it
  • The “Moral Progress” Delusion

    Posted on August 14th, 2016 Helian 7 comments

“Moral progress” is impossible.  It is a concept that implies progress towards a goal that doesn’t exist.  We exist as a result of evolution by natural selection, a process that has simply happened.  Progress implies the existence of an entity sufficiently intelligent to formulate a goal or purpose towards which progress is made.  No such entity has directed the process, nor did one even exist over most of the period during which it occurred.  The emotional predispositions that are the root cause of what we understand by the term “morality” are as much an outcome of natural selection as our hands or feet.  Like our hands and feet, they exist solely because they have enhanced the probability that the genes responsible for their existence would survive and reproduce.  There is increasing acceptance among the “experts on ethics” of the fact that morality owes its existence to evolution by natural selection.  However, as a rule they have been incapable of grasping the obvious implication of that fact: that the notion of “moral progress” is a chimera.  It is a truth that has been too inconvenient for them to bear.

    It’s not difficult to understand why.  Their social gravitas and often their very livelihood depend on propping up the illusion.  This is particularly true of the “experts” in academia, who often lack marketable skills other than their “expertise” in something that doesn’t exist.  Their modus operandi consists of hoodwinking the rest of us into believing that satisfying some whim that happens to be fashionable within their tribe represents “moral progress.”  Such “progress” has no more intrinsic value than a five year old’s progress towards acquiring a lollipop.  Often it can be reasonably expected to lead to outcomes that are the opposite of those that account for the existence of the whim to begin with, resulting in what I have referred to in earlier posts as a morality inversion.  Propping up the illusion in spite of recognition of the evolutionary roots of morality in a milieu that long ago dispensed with the luxury of a God with a big club to serve as the final arbiter of what is “really good” and “really evil” is no mean task.  Among other things it requires some often amusing intellectual contortions as well as the concoction of an arcane jargon to serve as a smokescreen.

Consider, for example, a paper by Professors Allen Buchanan and Russell Powell entitled Toward a Naturalistic Theory of Moral Progress.  It turned up in the journal Ethics, that ever reliable guide to academic fashion touching on the question of “human flourishing.”  Far from denying the existence of human nature after the fashion of the Blank Slaters of old, the authors positively embrace it.  They cheerfully admit its relevance to morality, noting in particular the existence of a predisposition in our species to perceive others of our species in terms of ingroups and outgroups; what Robert Ardrey used to call the Amity/Enmity Complex.  Now, if these things are true, and absent the miraculous discovery of any other contributing “root cause” for morality other than evolution by natural selection, whether in this world or the realm of spirits, it follows logically that “progress” is a term that can no more apply to morality than it does to evolution by natural selection itself.  It further follows that objective Good and objective Evil are purely imaginary categories.  In other words, unless one is merely referring to the scientific investigation of evolved behavioral traits, “experts on ethics” are experts about nothing.  Their claim to possess a philosopher’s stone pointing the way to how we should act is a chimera.  For the last several thousand years they have been involved in a sterile game of bamboozling the rest of us, and themselves to boot.

Predictably, the embarrassment and loss of gravitas, not to mention the loss of a regular paycheck, implied by such a straightforward admission of the obvious has been more than the “experts” could bear.  They’ve simply gone about their business as if nothing had happened, and no one had ever heard of a man named Darwin.  It’s actually been quite easy for them in this puritanical and politically correct age, in which the intellectual life and self-esteem of so many depends on maintaining a constant state of virtuous indignation and moral outrage.  Virtuous indignation and moral outrage are absurd absent the existence of an objective moral standard.  Since nothing of the sort exists, it is simply invented, and everyone stays outraged and happy.

    In view of this pressing need to prop up the moral fashions of the day, then, it follows that no great demands are placed on the rigor of modern techniques for concocting real Good and real Evil.  Consider, for example, the paper referred to above.  The authors go to a great deal of trouble to assure their readers that their theory of “moral progress” really is “naturalistic.”  In this enlightened age, they tell us, they will finally be able to steer clear of the flaws that plagued earlier attempts to develop secular moralities.  These were all based on false assumptions “based on folk psychology, flawed attempts to develop empirically based psychological theories, a priori speculation, and reflections on history hampered both by a lack of information and inadequate methodology.”  “For the first time,” they tell us, “we are beginning to develop genuinely scientific knowledge about human nature, especially through the development of empirical psychological theories that take evolutionary biology seriously.”  This begs the question, of course, of how we’ve managed to avoid acquiring “scientific knowledge about human nature” and “taking evolutionary biology seriously” for so long.  But I digress.  The important question is, how do the authors manage to establish a rational basis for their “naturalistic theory of moral progress” while avoiding the Scylla of “folk psychology” on the one hand and the Charybdis of “a priori speculation” on the other?  It turns out that the “basis” in question hardly demands any complex mental gymnastics.  It is simply assumed!

    Here’s the money passage in the paper:

A general theory of moral progress could take a more or a less ambitious form.  The more ambitious form would be to ground an account of which sorts of changes are morally progressive in a normative ethical theory that is compatible with a defensible metaethics… In what follows we take the more modest path:  we set aside metaethical challenges to the notion of moral progress, we make no attempt to ground the claim that certain moralities are in fact better than others, and we do not defend any particular account of what it is for one morality to be better than another.  Instead, we assume that the emergence of certain types of moral inclusivity are significant instances of moral progress and then use these as test cases for exploring the feasibility of a naturalized account of moral progress.

This is indeed a strange approach to being “naturalistic.”  After excoriating the legions of thinkers before them for their faulty mode of hunting the philosopher’s stone of “moral progress,” they simply assume it exists.  It exists in spite of the elementary chain of logic leading inexorably to the conclusion that it can’t possibly exist if their own claims about the origins of morality in human nature are true.  In what must count as a remarkable coincidence, it exists in the form of “inclusivity,” currently in high fashion as one of the shibboleths defining the ideological box within which most of today’s “experts on ethics” happen to dwell.  Those who trouble themselves to read the paper will find that, in what follows, it is hardly treated as a mere modest assumption, but as an established, objective fact.  “Moral progress” is alluded to over and over again as if, by virtue of this original, “modest assumption,” the real thing somehow magically popped into existence in the guise of “inclusivity.”

Suppose we refrain from questioning the plot, and go along with the charade.  If inclusivity is really to count as moral progress, then it must not only be desirable in certain precincts of academia, but actually feasible.  However, if, as the authors agree, humans are predisposed to perceive others of their species in terms of ingroups and outgroups, the feasibility of inclusivity is at least in question.  As the authors put it,

Attempts to draw connections between contemporary evolutionary theories of morality and the possibility of inclusivist moral progress begin with the standard evolutionary psychological assertion that the main contours of human moral capacities emerged through a process of natural selection on hunter-gatherer groups in the Pleistocene – in the so-called environment of evolutionary adaptation (EEA)… The crucial claim, which leads some thinkers to draw a pessimistic inference about the possibility of inclusivist moral progress, is that selection pressures in the EEA favored exclusivist moralities.  These are moralities that feature robust moral commitments among group members but either deny moral standing to outsiders altogether, relegate out-group members to a substantially inferior status, or assign moral standing to outsiders contingent on strategic (self-serving) considerations.

    No matter, according to the authors, this flaw in our evolved moral repertoire can be easily fixed.  All we have to do is lift ourselves out of the EEA, achieve universal prosperity so great and pervasive that competition becomes unnecessary, and the predispositions in question will simply fade away, more or less like the state under Communism.  Invoking that wonderful term “plasticity,” which seems to pop up with every new attempt to finesse human behavioral traits out of existence, they write,

    According to an account of exclusivist morality as a conditionally expressed (adaptively plastic) trait, the suite of attitudes and behaviors associated with exclusivist tendencies develop only when cues that were in the past highly correlated with out-group threat are detected.

    In other words, it is the fond hope of the authors that, if only we can make the environment in which inconvenient behavioral predispositions evolved disappear, the traits themselves will disappear as well!  They go on to claim that this has actually happened, and that,

    …exclusivist moral tendencies are attenuated in populations inhabiting environments in which cues of out-group threat are absent.

Clearly we have seen a vast expansion in the number of human beings that can be perceived as ingroup since the Pleistocene, and the inclusion as ingroup of racial and religious categories that once defined outgroups.  There is certainly plasticity in how ingroups and outgroups are actually defined and perceived, as one might expect of traits that evolved during times of rapid change in the nature of the “others” one happened to be in contact with or aware of at any given time.  However, this hardly “proves” that the fundamental tendency to distinguish between ingroups and outgroups will itself disappear, or is likely to disappear, in response to any environmental change whatever.  Perhaps the best way to demonstrate this is to refer to the paper itself.

    Clearly the authors imagine themselves to be “inclusive,” but is that really the case?  Hardly!  It turns out they have a very robust perception of outgroup.  They’ve merely fallen victim to the fallacy that it “doesn’t count” because it’s defined in ideological rather than racial or religious terms.  Their outgroup may be broadly defined as “conservatives.”  These “conservatives” are mentioned over and over again in the paper, always in the guise of the bad guys who are supposed to reject inclusivism and resist “moral progress.”  To cite a few examples,

    We show that although current evolutionary psychological understandings of human morality do not, contrary to the contentions of some authors, support conservative ethical and political conclusions, they do paint a picture of human morality that challenges traditional liberal accounts of moral progress.

    …there is no good reason to believe conservative claims that the shift toward greater inclusiveness has reached its limit or is unsustainable.

    These “evoconservatives,” as we have labeled them, infer from evolutionary explanations of morality that inclusivist moralities are not psychologically feasible for human beings.

    At the same time, there is strong evidence that the development of exclusivist moral tendencies – or what evolutionary psychologists refer to as “in-group assortative sociality,” which is associated with ethnocentric, xenophobic, authoritarian, and conservative psychological orientations – is sensitive to environmental cues…

and so on, and so on.  In a word, although the good professors are fond of pointing with pride to their vastly expanded ingroup, they have rather more difficulty seeing their vastly expanded outgroup, more or less like the difficulty we have seeing the nose at the end of our face.  The fact that the conservative outgroup is perceived with as much fury, disgust, and hatred as ever a Grand Dragon of the Ku Klux Klan felt for blacks or Catholics can be confirmed by simply reading through the comment section of any popular website of the ideological Left.  Unless professors employed by philosophy departments live under circumstances more reminiscent of the Pleistocene than I had imagined, this bodes ill for their theory of “moral progress” based on “inclusivity.”  More evidence that this is the case is easily available to anyone who cares to search the philosophy department of the local university for “diversity” in the form of a professor who could be described as conservative by any stretch of the imagination.

    I note in passing another passage in the paper that demonstrates the fanaticism with which the chimera of “moral progress” is pursued in some circles.  Again quoting the authors,

Some moral philosophers, whom we have elsewhere called “evoliberals,” have tacitly affirmed the evo-conservative view in arguing that biomedical interventions that enhance human moral capacities are likely to be crucial for major moral progress due to evolved constraints on human moral nature.

In a word, the delusion of moral progress is not necessarily just a harmless toy for the entertainment of professors of philosophy, at least as far as those who might have some objection to “biomedical interventions” carried out by self-appointed “experts on ethics” are concerned.

    What’s the point?  The point is that we are unlikely to make progress of any kind without first accepting the truth about our own nature, and the elementary logical implications of that truth.  Darwin saw them, Westermarck saw them, and they are far more obvious today than they were then.  We continue to ignore them at our peril.

  • More Ardreyania, with Pinker and CRISPR

    Posted on August 11th, 2015 Helian No comments

    Robert Ardrey is the one man the “men of science” in the behavioral disciplines would most like to see drop down the memory hole for good.  Mere playwright that he was, he was presumptuous enough to be right about the existence of human nature when all of them were wrong, and influential enough to make them a laughing stock among educated laypeople for denying it.  They’ve gone to great lengths to make him disappear ever since, even to the extreme of creating an entire faux “history” of the Blank Slate affair.  I, however, having lived through the events in question, and still possessed of a vestigial respect for the truth, will continue to do my meager best to set the record straight.  Indeed, dear reader, I descended into the very depths to glean material for this post, so you won’t have to.  In fine, I unearthed an intriguing Ardrey interview in the February 1971 issue of Penthouse.

    The interview was conducted in New York by Harvey H. Segal, who had served on the editorial board of the New York Times from 1968 to 1969, and was an expert on corporate economics.  The introductory blurb noted the obvious to anyone who wasn’t asleep at the time; that the main theme of all Ardrey’s work was human nature.

    Equipped only with common sense, curiosity, and a practiced pen, Robert Ardrey shouldered his way into the study of human nature and has given a new direction to man’s thinking about man.

    and

    An impact on this scale is remarkable for any writer, but in Ardrey’s case it has the added quality of being achieved in a second career.

    As usual, in this interview as in every other contemporary article and review of his work that I’ve come across, there is no mention of his opinion on group selection.  It will be recalled that Ardrey’s favorable take on this entirely ancillary subject in his book The Social Contract was seized on by Steven Pinker as the specious reason he eventually selected to announce that Ardrey had been “totally and utterly wrong.”  There is much of interest in the interview but, as it happens, Ardrey’s final few remarks bear on the subject of my last post; artificial manipulation of human DNA.

In case you haven’t read it, that post discussed some remarks on the ethical implications of human gene manipulation by none other than – Steven Pinker.  According to Pinker the moral imperative for the bioethicists who were agonizing over possible applications of such DNA-altering tools as CRISPR-Cas9 was quite blunt; “Get out of the way.”  Their moral pecksniffery should not be allowed to derail the potential of these revolutionary tools for curing or alleviating a great number of genetically caused diseases and disorders, or their promise of “vast increases in life, health, and flourishing.”  Pinker dismisses concerns about the possible misuse of the technology as follows:

    A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like “Brave New World’’ and “Gattaca,’’ and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

That smacks a bit of what the Germans would call “Verharmlosung” – insisting that something is harmless when it really isn’t.  Tools like CRISPR certainly have the potential for altering DNA in ways not necessarily intended to merely cure disease.  For example, many intelligence-related genes have already been found, and new ones are being found on a regular basis.  Alterations in genes that influence human behavior are also possible.  Ardrey had a somewhat more sober take on the subject in the interview referred to above.  For example,

    Segal:  What about the possibility of altering the brain and human instincts through new advances in genetics, DNA and the like?

    Ardrey:  I don’t have much faith.  Altering of the human being is something to approach with the greatest apprehension because it depends on what kind of human being you want.  It is not so long since H. J. Muller, one of the greatest American geneticists and one of the first eugenicists, was saying that we have to eliminate aggression.  But now there is (Konrad) Lorenz who says that aggression is the basis of almost all life.  Reconstruction of the human being by human beings is too close to domestication, like control of the breeding of animals.  Muller’s plan for the human future was dealing with sheep.  I happen to be one who works best at being something other than a sheep, and I think most people do.

    and a bit later, on the prospect of curing disease:

    I see some important things that might be done with DNA on a very simple scale, such as repairing an error in, say, a hemophiliac – one of those genetic errors that appear at random every so often.  But that is making a thing normal.  It is not impossible that some genetically-caused disease, particularly if it has a one-gene basis, might be fixed.  But genes are like a club or political party with all sorts of jostling and jockeying between them.  You change one and a bell rings at the other end of the line.

    I tend to agree with Ardrey that there is a strong possibility that CRISPR and similar tools will be misused.  However, I also agree with Pinker that the bioethicists are only likely to succeed in stalling the truly beneficial applications, and the most “moral” course for them will be to step aside.  The dangers are there, but they are dangers the bioethicists are most unlikely to have the power to do anything about.

    At the individual level, parents interested in enhancing the intelligence, athletic prowess, or good looks of their offspring will seize the opportunity to do so, taking the moralists with a grain of salt in the process, and if the technology is there, the opportunity to create “designer babies” will be there as well for those rich enough to afford it.  Even more worrisome is the potential misuse of the technology by state actors.  As Ardrey pointed out, they may well take a much greater interest in the ancient bits of the brain that control our feelings, moods and behavior than in the more recently added cortical enhancements responsible for our relatively high intelligence.

    In a word, what we face is less a choice than a fait accompli.  Like nuclear weapons, the technology will eventually be applied in ways the bioethicists are likely to find very disturbing.  It’s not a question of if, but when.  The end result of this new era of artificially accelerated evolution will certainly be interesting for those lucky enough to be around to witness it.

Robert Ardrey

  • “Designer Babies” and the Path to Transhumanism

    Posted on April 14th, 2014 Helian No comments

    That great poet among philosophers Friedrich Nietzsche once wrote,

    I teach you the overman.  Man is something that shall be overcome.  What have you done to overcome him?  All beings so far have created something beyond themselves; and do you want to be the ebb of this great flood and even go back to the beasts rather than overcome man?  What is the ape to man?  A laughingstock or a painful embarrassment.  And man shall be just that for the overman:  a laughingstock or a painful embarrassment… Behold, I teach you the overman.  The overman is the meaning of the earth.  Let your will say:  the overman shall be the meaning of the earth!

    Nietzsche was no believer in “scientific morality.”  He knew that if, as his Zarathustra claimed, God was really dead, there was no basis for his preferred version of the future of mankind or his preferred versions of Good and Evil beyond a personal whim.  However, as whims go, the above passage at least has the advantage of being consistent.  In other words, unlike some modern versions of morality, it isn’t a negation of the reasons that morality evolved in the first place.  It would have been interesting to hear the great man’s impressions of a world in which modern genetics is increasingly endowing the individual with the power to decide for himself whether he wants to be the “rope between man and overman” or not.

Hardly a month goes by without news of some new startup offering the latest version of the power.  For example, a week ago an article turned up in The Guardian describing the “Matchright” technology to be offered by a venture by the name of Genepeeks.  Its title, Startup offering DNA screening of ‘hypothetical babies’ raises fears over designer children, reflects the usual “Gattaca” nightmares that so many seem to associate with such technologies.  It describes “Matchright” as a computational tool that can screen the DNA of potential sperm donors, identifying those who carry a risk of genetically transmitted diseases when matched with the DNA of a recipient’s egg.  According to the article,

    …for the technology to work it needs to pull off a couple of amazing tricks. For a start, it is not as simple as creating a single digital sperm and an egg based on the parents and putting them together. When an egg and a sperm fuse in real life, they swap a bunch of DNA – a process called recombination – which is part of the reason why each child (bar identical twins) is different. To recreate this process, the software needs to be run 10,000 times for each individual potential donor. They can then see the percentage of these offspring that are affected by the disease.
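The process described in the passage above is, in essence, a Monte Carlo simulation.  A minimal sketch of the idea, assuming for illustration a single-gene recessive disease (the names and the simple one-locus model are hypothetical, not Genepeeks’ actual algorithm):

```python
import random

def simulate_offspring_risk(donor_genotype, recipient_genotype, trials=10_000):
    """Estimate the fraction of virtual offspring affected by a
    single-gene recessive disease.  Genotypes are pairs of alleles,
    e.g. ('A', 'a') for a carrier; 'a' is the disease allele."""
    affected = 0
    for _ in range(trials):
        # Each virtual child inherits one randomly chosen allele from
        # each parent -- a crude stand-in for meiosis/recombination.
        child = (random.choice(donor_genotype), random.choice(recipient_genotype))
        if child == ('a', 'a'):
            affected += 1
    return affected / trials

# Two carriers: the classical Mendelian expectation is ~25% affected.
print(f"Estimated risk: {simulate_offspring_risk(('A', 'a'), ('A', 'a')):.1%}")
```

Running 10,000 virtual matings per donor, as the article describes, simply narrows the statistical noise around that percentage; the real system would have to model thousands of loci and genuine chromosomal recombination rather than one independent gene.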

    It goes on to quote bioethicist Ronald Green of Dartmouth:

    The system will provide the most comprehensive genetic analysis to date of the potential risk of disease in a newborn, without even needing to fertilise a single egg. It gives people more confidence about disease risk, says Green, who is not involved in the work: “If someone I care for was in the market for donor sperm I might encourage them to use this technology,” he says.

    In keeping with the usual custom for such articles, this one ends up with a nod to the moralists:

    As for the ethical issues, (company co-founder Anne) Morriss does not deny they are there, but believes in opening up the discussion “beyond the self-appointed ethicists”. “I think everybody should be involved – the public and the scientists and the regulators.”

Indeed, “self-appointed ethicists” aren’t hard to find.  There is an interesting discussion of the two sides of this debate in an article recently posted at Huffington Post entitled The Ethics of ‘Designer Babies.’  Such concerns beg a question that also came up in the debate back in the late 40’s and early 50’s about whether we should develop hydrogen bombs – do we really have a choice?  After all, we’re not the only ones in the game.  Consider, for example, the title of an article that recently appeared on the CBS News website:  “Designer babies” on the way? In China, scientists attempt to unravel human intelligence.  According to the article,

    Inside a converted shoe factory in Shenzhen, China, scientists have launched an ambitious search for the genes linked to human intelligence.

    The man in charge of the project is 21-year-old science savant, Zhao Bowen. He estimates more than 60 percent of your IQ is decided by your parents, and now they want to prove it.

    Asked how he would describe his ultimate goal, Zhao said it’s to “help people understand themselves and to create a better world.”

    The “self-appointed ethicists” can react to Zhao’s comment as furiously as they please.  The only problem is that they don’t have a monopoly on the right to make the decision.  They may not be personally inclined to become “the rope between man and overman.”  However, I suspect they may reevaluate their ethical concerns when they find themselves left in the dust with the apes.

    Pygmy Chimpanzee Laughs

  • “Designer Babies” and Transhumanism

    Posted on July 18th, 2010 Helian 1 comment

    Internet chatter over “designer babies” has died down considerably since early 2009, when a chain of fertility clinics headquartered in Los Angeles offered to allow prospective parents to select for cosmetic traits such as hair, eye, and skin color. However, the subject bears on the genetic future of mankind, and is of enduring importance whether the media gatekeepers are paying attention to it or not. The clinics in question quickly withdrew the offered services in response to the inevitable “storm of protest” by those who consider themselves the guardians of public morality.  Regardless, pre-implantation genetic diagnosis (PGD), the technology involved, has been around since the early 1990’s, and continues to advance. It involves checking the genetic material in a cell taken from an embryo very early in its development, when it only consists of about six cells. Initially developed to screen for diseases such as Down’s Syndrome, or reduce the probability of developing diseases such as diabetes or cancer, in principle it can be used to select for arbitrary inherited traits.  Recent research has focused on diseases and psychiatric conditions such as schizophrenia that do not appear traceable to simple genetic variations, and are more likely genetically heterogeneous; dependent on what is likely a complex combination of genetic factors.  As our knowledge increases along these lines, we will inevitably learn to better understand and eventually control the similarly complex genetic factors affecting cognitive ability, or intelligence.  One must hope that day comes sooner rather than later, and that when it comes, prospective parents will have the right to use it without state interference.
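The arithmetic behind that kind of screening is worth spelling out.  As a toy illustration (a hypothetical single-gene recessive case, not any clinic’s actual procedure): for two carrier parents, each embryo is affected with probability 1/4, so the chance that a batch of embryos contains at least one unaffected candidate rises quickly with batch size.

```python
def p_at_least_one_unaffected(n_embryos, p_affected=0.25):
    """Probability that at least one of n_embryos is unaffected,
    assuming each is independently affected with p_affected."""
    return 1 - p_affected ** n_embryos

# Probability of a usable embryo for batches of 1, 2, 4, and 8.
for n in (1, 2, 4, 8):
    print(n, round(p_at_least_one_unaffected(n), 4))
```

With eight embryos the odds of at least one unaffected candidate already exceed 99.99%, which is why screening a batch, rather than editing any single embryo, is the workhorse of PGD.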

If we are to survive, we must become more intelligent, and the sooner the better.  The matter is urgent, and there is no alternative.  If we do survive, we will become more intelligent.  The only question is how.  Will it be by controlled genetic engineering, or by the “survival of the fittest” in the future holocausts we bring on ourselves because we are too stupid to avoid them?  Consider the events of the 20th century.  A great wave of popular idealism that had been growing ever stronger since the days of the American and French Revolutions among a large proportion of the most intelligent and highly educated elements of societies around the world metastasized into the incredibly destructive pseudo-religion, Communism.  The better part of a century and 100 million deaths later, we seem to have weathered that particular ideological storm, at least for the time being.  There is no compelling reason to believe that it was inevitable that we would, or that it was impossible that, under somewhat different but plausible conditions, Communist systems could have dominated the entire world, or that the resultant clash of ideologies might have culminated in a general nuclear exchange.  Orwell’s 1984 might very well have become a reality.  International boundaries might very well have been reduced to the role of marking where one North Korea ended, and another began.  There is no guarantee that the outcome of the next storm will not be different.

    Communism was no historical anomaly.  It was a phenomenon dependent for its existence and its power on some of the best and brightest minds of its day.  As such, it provides us with an objective metric of our intelligence.  We are not nearly as smart as we think we are.  Messianic Islamism has already begun occupying the ideological vacuum left by its demise, and the true believers of new and, perhaps, yet unheard of systems will surely swarm forth eventually to promote new “scientific” paths to the “salvation of humanity.”  Meanwhile, the technologies of mass destruction continue to develop at an alarming pace.  Unless we become intelligent enough to control them it is only a question of time until they are used.  If we take control of our own genetic future there is a slim chance that we will be able to avoid the worst.  If not, it will at least improve our chances of surviving it.

    When it comes to making the necessary decisions, it would be best to leave the state out of it.  State eugenic programs have not been remarkably successful in the past, and they are unlikely to be more successful in the future, because states cannot be depended on to act in the interests of the individuals who are their citizens.  Individuals are remarkably acute judges of their own best interests.  Give individuals the power to use the technology or not, as they see fit.  Their genetic survival will be the metric of whether they made the right choices.  As noted in Psychology Today, they have always made those individual choices in the past by selectivity in the choice of a mate.  Technologies such as PGD will not change that.  It will merely give them the opportunity to make the choice more accurately.

    Many articles have been written about the need to explore the “ethical” implications of the choices we must make about these technologies.  In fact, virtually anyone who describes themselves as a “bio-ethicist,” or, for that matter, an “ethics expert” of any other stripe is, objectively, a charlatan.  Their “ethical debates” are merely so much emotional posturing, in which the various sides carry on fantastical arguments about whose deeply felt emotions are the most “legitimate.”  Ethical debates that do not start with the recognition of the evolutionary origin of these emotions, of the reasons and conditions under which they evolved, and their nature as subjective constructs deriving from predispositions that are hard-wired in the brain, are no more rational than the raving of madmen. 

    Values can never be legitimate in themselves.  They are, by their nature, subjective.  They exist, like virtually everything else of significance about us, because the wiring in the brain that gives rise to them promoted our survival.  If, then, one finds it necessary for some reason to pursue a “value,” none can rationally take precedence over survival.  That is the only “value” that can be accepted as seriously at issue here.  We can ignore the rest of the blather about “ethics,” because the “ethicists” quite literally do not know what they’re talking about.

    I wish to survive, and I wish for my species and life in general to survive.  I don’t flatter myself that those wishes have any objective legitimacy, but, subjectively, I am very attached to them.  Assuming there are others out there who also wish to survive, I have a suggestion about how to fulfill that wish.  Let us become more intelligent as quickly as possible.

  • James Lovelock, Democracy, and Human Fate

    Posted on March 30th, 2010 Helian No comments

    James Lovelock, originator of the Gaia Theory, drew a baleful stare from Instapundit this morning for claiming, as Glenn put it, that “We need to get rid of ‘obstructions’ like democracy to deal with global warming,” in an interview for the Guardian.  Dr. Lovelock’s actual remarks weren’t quite so blunt:

    Even the best democracies agree that when a major war approaches, democracy must be put on hold for the time being. I have a feeling that climate change may be an issue as severe as a war. It may be necessary to put democracy on hold for a while.

    In fact, the Guardian article left me with a rather favorable impression.  I don’t take Lovelock’s Gaia theory seriously, but it’s really more an expression of the man’s “spirituality” than an attempt at rigorous science.  Apparently he was grasping for some kind of straw to fill his need for something “greater than himself,” but that’s not an uncommon human foible, even among people as intelligent as Lovelock.  And he is intelligent.  One can tell by the fact that he thinks outside the box.  He’s not wearing any of the usual ideological straitjackets.  Consider, for example, the last three paragraphs of the article:

    Lovelock says the events of the recent months have seen him warming to the efforts of the “good” climate sceptics: “What I like about sceptics is that in good science you need critics that make you think: ‘Crumbs, have I made a mistake here?’ If you don’t have that continuously, you really are up the creek. The good sceptics have done a good service, but some of the mad ones I think have not done anyone any favours. You need sceptics, especially when the science gets very big and monolithic.”

    Lovelock, who 40 years ago originated the idea that the planet is a giant, self-regulating organism – the so-called Gaia Theory – added that he has little sympathy for the climate scientists caught up in the UEA email scandal. He said he had not read the original emails – “I felt reluctant to pry” – but that their reported content had left him feeling “utterly disgusted”.

    “Fudging the data in any way whatsoever is quite literally a sin against the holy ghost of science,” he said. “I’m not religious, but I put it that way because I feel so strongly. It’s the one thing you do not ever do. You’ve got to have standards.”

    Obviously, he’s not a hidebound ideologue busily embellishing his “climate denier” demon.  Rather, he’s apparently made a conscientious attempt to think a few things through without balking at the preconceived shibboleths he encountered along the way.  As we gather from Instapundit’s stern disapproval, one such shibboleth was democracy.

    It’s difficult to deny that democracy has its faults.  As Winston Churchill put it, “No one pretends that democracy is perfect or all-wise.  Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.”  In the end, it may turn out to be a self-annihilating form of government.  In our own day we see it incapable of resisting infiltration by people whose culture may be hostile to its existence, or of resisting the rise of a bloated state power whose coexistence with Liberty is out of the question.

    Other than that, it is also true that, as Lovelock claims, democracies are in the habit of setting aside their political ideals in time of war.  If the effects of global warming become as severe as a major war, the overriding imperative of survival may, indeed, require that “democracy be put on hold.”  If so, the question will become, “Who gets to play dictator?”  I personally would prefer the CEO of some oil company to a coalition of Greenpeace, PETA, and Code Pink, but that’s just a matter of personal taste. 

    Lovelock makes another comment in the article that I find spot on:

    I don’t think we’re yet evolved to the point where we’re clever enough to handle as complex a situation as climate change. The inertia of humans is so huge that you can’t really do anything meaningful.

    His interviewer bowdlerizes this to “Humans are too stupid to prevent climate change from radically impacting on our lives over the coming decades,” in typical journalistic fashion, but the statement is, nonetheless, true.  We are not intelligent enough to avoid the chaos and catastrophe that will surely be our future lot in one form or another if we remain as we are.  We can try to avoid the worst by taking control of our own evolution, or we can sit and wait.  Evolution will not stand still, regardless.  Perhaps the result will be the same.  Assuming we don’t annihilate ourselves completely, above-average intelligence will surely be a factor in deciding who will survive the wrath to come.  If we prove incapable of making ourselves smarter, nature will do it for us.  It will just be a great deal more painful.


  • Stephen Hawking, Genetic Engineering, and the Future of Mankind

    Posted on January 11th, 2010 Helian 1 comment

    The Daily Galaxy has chosen Stephen Hawking’s contention that the human species has entered a new stage of evolution as the top story of 2009.  It was included in his Life in the Universe lecture, along with many other thought-provoking observations about the human condition.  I don’t agree with his suggestion that we need to redefine the word “evolution” to include the collective knowledge we’ve accumulated since the invention of written language.  The old definition will do just fine, and conflating it with something different can only lead to confusion.  Still, if “top story” billing will get more people to read the lecture, I’m all in favor of it, because it’s well worth the effort.  Agree with him or not, Hawking has a keen eye for picking topics of cosmic importance.  By “cosmic importance,” I mean more likely to retain their relevance 100 years from now than, say, the latest wrinkles in the health care debate or the minutiae of Tiger Woods’ sex life.

    Hawking begins with a salutary demolition of the Creationist argument that life could not have evolved because of the Second Law of Thermodynamics.  The fact that the use of this argument implies ignorance of the relevant theory has done little to deter religious obscurantists from using it, so the more scientists of Hawking’s stature point out its absurdity, the better. 

    The lecture continues with some observations on the possible reasons we have not yet detected intelligent life outside our own planet.  These reasons are summarized as follows:

    1. The probability of life appearing is very low
    2. The probability of life is reasonable, but the probability of intelligence is low
    3. The probability of evolving to our present state is reasonable, but then civilization destroys itself
    4. There is other intelligent life in the galaxy, but it has not bothered to come here

    My two cents’ worth:  I think the probability of life appearing is low, but the probability that it is limited to Earth is also low.  It would be surprising if life evolved on only one planet, and that one planet happened to sustain it long enough for intelligent beings like ourselves to evolve.  On the other hand, we may be the only intelligent life form in the universe.  If not, why haven’t we heard from or detected the others?  Let us hope that the proponents of the third possibility are overly pessimistic.

    Later in the lecture, after noting the explosion of human knowledge over the last 300 years, Hawking observes:

    This has meant that no one person can be the master of more than a small corner of human knowledge. People have to specialise, in narrower and narrower fields. This is likely to be a major limitation in the future. We certainly cannot continue, for long, with the exponential rate of growth of knowledge that we have had in the last three hundred years. An even greater limitation and danger for future generations, is that we still have the instincts, and in particular, the aggressive impulses, that we had in cave man days. Aggression, in the form of subjugating or killing other men, and taking their women and food, has had definite survival advantage, up to the present time. But now it could destroy the entire human race, and much of the rest of life on Earth. A nuclear war is still the most immediate danger, but there are others, such as the release of a genetically engineered virus. Or the green house effect becoming unstable.

    I would differ with him on some of the details here.  For example, the bit about aggression oversimplifies the evolution of innate predispositions.  Back in the day when Konrad Lorenz published “On Aggression,” the behaviorists would have dismissed even a gentle soul like Hawking as a “fascist” for speaking of an “instinct” of aggression in such indelicate terms.  Nevertheless, when it comes to the basic premise of the sentence, Hawking gets it right.  We are not purely rational beings, nor is our behavior determined solely by culture and environment.  Rather, we act in response to predispositions that were hard-wired in our brains at a time when our manner of existence was vastly different from what it is today.  They had survival value then.  They may doom us in the world of today unless we learn to understand and control them.

    Hawking continues:

    There is no time, to wait for Darwinian evolution, to make us more intelligent, and better natured. But we are now entering a new phase, of what might be called, self designed evolution, in which we will be able to change and improve our DNA. There is a project now on, to map the entire sequence of human DNA. It will cost a few billion dollars, but that is chicken feed, for a project of this importance. Once we have read the book of life, we will start writing in corrections. At first, these changes will be confined to the repair of genetic defects, like cystic fibrosis, and muscular dystrophy. These are controlled by single genes, and so are fairly easy to identify, and correct. Other qualities, such as intelligence, are probably controlled by a large number of genes. It will be much more difficult to find them, and work out the relations between them. Nevertheless, I am sure that during the next century, people will discover how to modify both intelligence, and instincts like aggression.

    Laws will be passed against genetic engineering with humans. But some people won’t be able to resist the temptation, to improve human characteristics, such as size of memory, resistance to disease, and length of life. Once such super humans appear, there are going to be major political problems, with the unimproved humans, who won’t be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings, who are improving themselves at an ever-increasing rate.

    Here, he is right on.  Unless we manage to destroy ourselves in the near future, or at least our highly developed technological societies, individuals will inevitably begin to take advantage of the potential of genetic engineering.  That is a good thing, to the extent that our survival is a good thing, because we are unlikely to survive unless we do develop into what Hawking calls “self-designing beings.”  We have certainly made a hash of things at our present level of development in a very short time.  We can’t go on long the way we are now. 

    Continuing with Hawking:

    If this race manages to redesign itself, to reduce or eliminate the risk of self-destruction, it will probably spread out, and colonise other planets and stars. However, long distance space travel, will be difficult for chemically based life forms, like DNA. The natural lifetime for such beings is short, compared to the travel time. According to the theory of relativity, nothing can travel faster than light. So the round trip to the nearest star would take at least 8 years, and to the centre of the galaxy, about a hundred thousand years. In science fiction, they overcome this difficulty, by space warps, or travel through extra dimensions. But I don’t think these will ever be possible, no matter how intelligent life becomes. In the theory of relativity, if one can travel faster than light, one can also travel back in time. This would lead to problems with people going back, and changing the past. One would also expect to have seen large numbers of tourists from the future, curious to look at our quaint, old-fashioned ways.

    In fact, covering galactic and intergalactic distances is not theoretically out of the question.  One may not be able to exceed the speed of light, but one can reduce the distance one has to travel via the Lorentz contraction.  Thus, if I could find some means to accelerate myself to nearly the speed of light, the apparent distance to, for example, the Andromeda galaxy would shrink until, finally, I could reach it in a time short compared to a human lifetime.  The only problem is that, if I were able to turn around and come back the same way, the Milky Way would be about 5 million years older than when I left.  Accelerating objects the size of a human being to nearly the speed of light and ensuring their survival over such distances would not be easy.  However, accelerating the DNA required to create a human being, along with, say, self-replicating nano-machinery that could create an environment for and then use the DNA to bring a human being to life, would be much easier and, I think, plausible.  It may be the way we eventually colonize distant star systems with suitable earth-like planets.  I am not on board with the alternative suggested by Hawking:

    It might be possible to use genetic engineering, to make DNA based life survive indefinitely, or at least for a hundred thousand years. But an easier way, which is almost within our capabilities already, would be to send machines. These could be designed to last long enough for interstellar travel. When they arrived at a new star, they could land on a suitable planet, and mine material to produce more machines, which could be sent on to yet more stars. These machines would be a new form of life, based on mechanical and electronic components, rather than macromolecules. They could eventually replace DNA based life, just as DNA may have replaced an earlier form of life.

    It puzzles me that someone as brilliant as Hawking could find such a vision of the future attractive.  Perhaps he has made the mistake of conflating our consciousness with ourselves, and thinks that “eternal life” is merely a matter of perpetuating consciousness in machines.  In fact, consciousness is just an evolved trait.  Like all our other evolved traits, it exists because it helped to promote our survival.  “We” are not our consciousness.  “We” are our genetic material.  That “we” has lived for many hundreds of millions of years, and is potentially immortal.  Consciousness is just a trait that comes and goes with each reproductive cycle.  If our consciousness fools us into believing that it is really the substantial and important thing about us, and that its perpetuation is a good in itself, it may mean the emergence of a new race of machines.  Regardless of their consciousness, however, they won’t be “us.”  Rather, “we” will have finally succeeded in annihilating ourselves, and the future evolution of the universe will have become as pointless, as far as we are concerned, as if life had never evolved at all.
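    The time-dilation arithmetic behind the Lorentz-contraction point a few paragraphs back can be made concrete in a few lines.  This is only an illustrative sketch: the round figure of 2.5 million light-years for the distance to Andromeda and the choice of a Lorentz factor of 100,000 are assumptions for the example, not numbers from Hawking’s lecture.

```python
import math

def proper_time_years(distance_ly: float, gamma: float) -> float:
    """One-way shipboard (proper) time, in years, for a trip of
    distance_ly light-years at the constant speed whose Lorentz
    factor is gamma.

    In the traveler's frame the distance contracts by a factor of
    1/gamma, so the trip takes distance_ly / (gamma * beta) years,
    where beta = v/c = sqrt(1 - 1/gamma**2).
    """
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return distance_ly / (gamma * beta)

def earth_time_years(distance_ly: float, gamma: float) -> float:
    """Elapsed time in Earth's frame for the same one-way trip."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return distance_ly / beta

# Illustrative assumptions, not figures from the lecture:
ANDROMEDA_LY = 2.5e6   # rough distance to the Andromeda galaxy
GAMMA = 1.0e5          # assumed Lorentz factor for the traveler

print(proper_time_years(ANDROMEDA_LY, GAMMA))  # roughly 25 years on the ship
print(earth_time_years(ANDROMEDA_LY, GAMMA))   # millions of years back home
```

    At a Lorentz factor of 100,000, the apparent distance shrinks by the same factor, so the traveler ages only about 25 years while millions of years pass in the galaxy’s frame, which is exactly the asymmetry described above for the round trip.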


  • Selective Mass Murder and Historical Lacunae

    Posted on December 15th, 2009 Helian No comments

    If you check the websites of any one of the major booksellers, you can get an idea of the kind of books people are reading these days by browsing their offerings. Click on the “history” link, for example, and you’ll quickly find quite a few titles on U.S. history, with emphasis on the Civil War, the Revolution, and the Founding Fathers. There are lots of books about war, an occasional revelation of how this or that class of victims was victimized, or this or that historical villain perpetrated his evil deeds, and a sprinkling of sports histories, but there are gaping lacunae when it comes to coverage of events that really shaped the times we live in, and of the ideological and political developments of yesterday that are portents of what we can expect tomorrow.

    Perhaps the Internet, wonderful as it is, is part of the problem. The wealth of information it provides tends to be sharply focused on the here and now. We have all the minutiae of the health care debate, troop levels in Afghanistan, and the narratives affirming and rejecting global warming at our fingertips, but little to encourage us to take an occasional step back to see things in their historical perspective. As a result, one finds much ranting about Marxism, socialism, fascism, Communism, and related ideological phenomena, but little understanding of how they arose in the first place, how it is they became so prominent, or why they are still relevant.

    Such ideologies appealed to aspects of human nature that haven’t gone anywhere in the meantime. The specific doctrines of Marx, Bakunin, and Hitler are discredited because they didn’t work in practice. That doesn’t mean that new variants with promises of alternate Brave New Worlds won’t arise to take their place. For the time being, Islamism has rushed in to fill the vacuum left by their demise, but I doubt it will satisfy the more secular minded of the chronic zealots among us for very long. The Islamists may have appropriated the political jargon of the “progressive” left, but it’s a stretch to suggest that western leftists are about to become pious Muslims any time soon. Should the economies of the developed nations turn south for an extended period of time, or some other profound social dislocation take place, some new secular faith is likely to arise, promising a way out to the desperate, a new faith for the congenitally fanatical, and a path to power for future would-be Stalins.

    To understand the fanaticisms of bygone days, and perhaps foresee the emergence of those of the future, it would be well if we occasionally stepped back from our obsession with the ideological disputes of the present and pondered the nature and outcome of those of the past. One such outcome was the birth of the United States, and the subsequent replacement of monarchical systems by secular democracies in many countries, accompanied by the movement away from societies highly stratified by class to more egalitarian systems. Personally, I am inclined to welcome that development, but it remains to be seen whether the resultant social and political systems are capable of maintaining their integrity and the cultural identity of the people they represent against the onslaught of alien cultures and religions.

    Another, less positive, outcome has been the emergence of secular dogmas such as those mentioned above, promising rewards in the here and now instead of the hereafter. These have generated levels of fanaticism akin to those generated by religious faith in the past. In fact, as belief systems, they are entirely akin to religion, as various thinkers have repeatedly pointed out over the past two centuries. They are substantially different from religions only in the absence of belief in supernatural beings. These belief systems have spawned all the mayhem that their religious cousins spawned in the past, but with a substantial difference. I suspect that difference is more a function of general advances in literacy, technology, and social awareness than any distinctions of dogma.

    Specifically, for the first time on such a massive scale, the mayhem and slaughter occasioned by fanatical belief in these new secular dogmas has not fallen with more or less equal weight on all the strata of society. Rather, its tendency has been to eliminate the most intelligent, the most productive, and the most creative. Lenin and Stalin were not indiscriminate in their mass murder. They singled out scientists, academics, the most intelligent and productive farmers, the most economically productive, the most politically aware, and the most creative thinkers. Their goal was to eliminate anyone who was likely to oppose them effectively. In general, these were the most intelligent members of society. Similarly, the horrific Khmer Rouge regime in Cambodia systematically eliminated anyone with a hint of education or appearance of intellectual superiority. In another example, of which most of us are only dimly aware, although it happened in living memory, the right in the Spanish Civil War ruthlessly sought out and shot anyone on the left prominent for political thought or leadership capacity, and the left, in turn, sought out and shot anyone who had managed to rise above the bare level of subsistence of the proletariat. The Nazis virtually eliminated a minority famous for its creativity, intelligence, and productivity.

    Mass murder is hardly a novelty among human beings. It has been one of our enduring characteristics since the dawn of recorded time. However, this new variant, in which the best and brightest are selectively eliminated, really only emerged in all its fury in the 20th century. The French Reign of Terror, similarly selective as it was, was child’s play by comparison, with its mere 20,000 victims. The victims of Communism alone approach 100 million. In two countries, at least, it is difficult to see how this will not have profound effects on the ability of the remaining population to solve the many problems facing modern societies. In effect, those two countries, the former Soviet Union and Cambodia, beheaded themselves. The wanton elimination of so much intellectual potential by their former masters is bound to have a significant effect on the quality of the human capabilities available to rebuild society now that the Communist nightmare is over, at least for them. Perhaps, at some future time when we regain the liberty to speculate about such matters without being shouted down as evildoers by the pathologically politically correct, some nascent Ph.D. in psychology will undertake to measure the actual drop in collective intelligence in those countries resulting from the Communist mass murder.

    It behooves us, then, to remember what happened in the 20th century. It is hardly out of the question that new fanatical faiths will emerge, both secular and religious, and that they will be capable of all the social devastation of the Communists and Nazis and then some. Here in America, an earlier generation, even in the darkest days of the Great Depression, rejected the siren song of the fanatics. For that, we owe them much. Let us try to emulate them in the future.

  • Human Enhancement and Morality: Another Day in the Asylum

    Posted on September 6th, 2009 Helian 2 comments

    The Next Big Future site links to a report released by a bevy of professors that, we are told, is to serve “…as a convenient and accessible starting point for both public and classroom discussions, such as in bioethics seminars.” The report itself may be found here. It contains “25 Questions & Answers,” many of which relate to moral and ethical issues related to human enhancement. For example,

    1. What is human enhancement?
    2. Is the natural/artificial distinction morally significant in this debate?
    3. Is the internal/external distinction morally significant in this debate?
    4. Is the therapy/enhancement distinction morally significant in this debate?
    9. Could we justify human enhancement technologies by appealing to our right to be free?
    10. Could we justify enhancing humans if it harms no one other than perhaps the individual?

    You get the idea. Now, search through the report and try to find a few clues about what the authors are talking about when they use the term “morality.” There are precious few. Under question 25 (Will we need to rethink ethics itself?) we read,

    To a large extent, our ethics depends on the kinds of creatures that we are. Philosophers traditionally have based ethical theories on assumptions about human nature. With enhancements we may become relevantly different creatures and therefore need to re-think our basic ethical positions.

    This is certainly sufficiently coy. There is no mention of the basis we are supposed to use to do the re-thinking. If we look through some of the other articles and reports published by the authors, we find other hints. For example, in “Why We Need Better Ethics for Emerging Technologies” in “Ethics and Information Technology” by Prof. James H. Moor of Dartmouth we find,

    … first, we need realistically to take into account that ethics is an ongoing and dynamic enterprise. Second, we can improve ethics by establishing better collaborations among ethicists, scientists, social scientists, and technologists. We need a multi-disciplinary approach (Brey, 2000). The third improvement for ethics would be to develop more sophisticated ethical analyses. Ethical theories themselves are often simplistic and do not give much guidance to particular situations. Often the alternative is to do technological assessment in terms of cost/benefit analysis. This approach too easily invites evaluation in terms of money while ignoring or discounting moral values which are difficult to represent or translate into monetary terms. At the very least, we need to be more proactive and less reactive in doing ethics.

    Great! I’m all for proactivity. But if we “do” ethics, what is to be the basis on which we “do” them? If we are to have such a basis, do we not first need to understand the morality on which ethical rules are based? What we have here is another effort by “experts on ethics” who apparently have no clue about the morality that must be the basis for the ethical rules they discuss so wisely if they are to have any legitimacy. If they do have a clue, they are being extremely careful to make sure we are not aware of it. Apparently we are to trust them because, after all, they are recognized “experts.” They don’t want us to peek at the “man behind the curtain.”

    This is an excellent example of what E. O. Wilson was referring to when he inveighed against the failure of these “experts” to “put their cards on the table” in his book, “Consilience.” The authors never inform us whether they believe the morality they refer to with such gravity is an object, a thing-in-itself, or, on the contrary, is an evolved, subjective construct, as their vague allusion to a basis in “human nature” would seem to imply. Like so many other similar “experts” in morality and ethics, they are confident that most people will “know what they mean” when they refer to these things and will not press them to explain themselves. After all, they are “experts.” They have the professorial titles and NSF grants to prove it. When it comes to actually explaining what they mean when they refer to morality, to informing us what they think it actually is, and how and why it exists, they become as vague as the Oracle of Delphi.

    Read John Stuart Mill’s “Utilitarianism,” and you will quickly see the difference between the poseurs and someone who knows what he’s talking about. Mill was not able to sit on the shoulders of giants like Darwin and the moral theorists who based their ideas on his work, not to mention our modern neuroscientists. Yet, in spite of the fact that these transformational insights came too late to inform his work, he had a clear and focused grasp of his subject. He knew that it was not enough to simply assume others knew what he meant when he spoke of morality. In reading his short essay we learn that he knew the difference between transcendental and subjective morality, that he was aware of and had thought deeply about the theories of those who claimed (long before Darwin) that morality was a manifestation of human nature, and that one could not claim the validity or legitimacy of moral rules without establishing the basis for that legitimacy. In other words, Mill did lay his cards on the table in “Utilitarianism.” Somehow, the essay seems strangely apologetic. Often it seems he is saying, “Well, I know my logic is a bit weak here, but I have done at least as well as the others.” Genius that he was, Mill knew that there was an essential something missing from his moral theories. If he had lived a few decades later, I am confident he would have found it.

    Those who would be taken seriously when they discuss morality must first make it quite clear they know what morality is. As those who have read my posts on the topic know, I, too, have laid my cards on the table. I consider morality an evolved human trait, with no absolute legitimacy whatsoever beyond that implied by its evolutionary origin at a time long before the emergence of modern human societies, or any notion of transhumanism or human enhancements. As such, it can have no relevance or connection whatsoever to such topics other than as an emotional response to an issue to which that emotion, an evolved response like all our other emotions, was never “designed” to apply.

  • E. O. Wilson: “Consilience,” Ethics and Fate

    Posted on August 14th, 2009 Helian 3 comments

    I first became aware of the work of E. O. Wilson when he published a pair of books in the ’70s (“Sociobiology” in 1975 and “On Human Nature” in 1978) that placed him in the camp of those who, like Ardrey, insisted on the role of genetically programmed predispositions in shaping human behavior. He touches on some of the issues we’ve been discussing here in one of his more recent works, “Consilience.” In a chapter entitled “Ethics and Religion,” he takes up the two competing fundamental assumptions about ethics that, according to Wilson, “make all the difference in the way we view ourselves as a species.” These two contradictory assumptions can be stated as, “I believe in the independence of moral values,” and “I believe that moral values come from humans alone.” This formulation is somewhat imprecise, as animals other than humans act morally. However, I think the general meaning of what Wilson is saying is clear. He refers to these two schools of thought as the “transcendentalists” and “empiricists,” respectively. He then goes on to express a sentiment with which I very heartily agree;

    The time has come to turn the cards face up. Ethicists, scholars who specialize in moral reasoning, are not prone to declare themselves on the foundations of ethics, or to admit fallibility. Rarely do you see an argument that opens with the simple statement: This is my starting point, and it could be wrong. Ethicists instead favor a fretful passage from the particular into the ambiguous, or the reverse, vagueness into hard cases. I suspect that almost all are transcendentalists at heart, but they rarely say so in simple declarative sentences. One cannot blame them very much; it is difficult to explain the ineffable, and they evidently do not wish to suffer the indignity of having their personal beliefs clearly understood. So by and large they steer around the foundation issue altogether.

    Here he hits the nail on the head. It’s normal for human beings to be “transcendentalists at heart,” because that’s our nature. We’re wired to think of good and evil as having an objective existence independent of our minds. Unfortunately, that perception is not true, and yet the “scholars who specialize in moral reasoning” appear singularly untroubled by the fact. Someone needs to explain to them that we’re living in the 21st century, not the 18th, and their pronouncements that they “hold these truths to be self-evident” don’t impress us anymore. In the meantime, we’ve had a chance to peek at the man behind the curtain. If they really think one thing is good and another evil, it’s about time they started explaining why.

    Wilson declares himself an empiricist, and yet, as was also evident in his earlier works, he is not quite able to make a clean break with the transcendentalist past. I suspect he has imbibed too deeply at the well of traditional philosophy and theology. As a result, he has far more respect for the logic-free notions of today’s moralists than they deserve. I have a great deal of respect for Martin Luther as one of the greatest liberators of human thought who ever lived, and I revere Voltaire as a man who struck the shackles of obscurantism from the human mind. That doesn’t imply that I have to take Luther’s pronouncements about the Jews or Voltaire’s notions about his deist god seriously.

    I once had a friend who, when questioned too persistently about something for which he had no better answer, would reply, “Because there are no bones in ice cream.” The proposition that morality is an evolved human trait seems just as obvious to me as the proposition that there are no bones in ice cream. If anyone cares to dispute the matter with me, they need to begin by putting a carton of ice cream with bones in it on the table. Otherwise I will not take them seriously. The same goes for Wilson’s menagerie of philosophers and theologians. I respect them because, unlike so many others, they took the trouble to think. When it comes to their ideas, however, we should respect them not because they are hoary and traditional, but because they are true. We have learned a great deal since the days of Kant and St. Augustine. We cannot ignore what we have learned in the intervening years out of respect for their greatness.

    In the final chapter of his book, entitled “To What End,” Wilson discusses topics such as the relationship between environmental degradation and overpopulation, and considers the future of genetic engineering. His comments on the former are judicious enough, and it would be well if the developed countries of the world considered them carefully before continuing along the suicidal path of tolerating massive legal and illegal immigration. As for the latter, here, again, I find myself in agreement with him when he says that, “Once established as a practical technology, gene therapy will become a commercial juggernaut. Thousands of genetic defects, many fatal, are already known. More are discovered each year… It is obvious that when genetic repair becomes safe and affordable, the demand for it will grow swiftly. Some time in the next (21st) century that trend will lead into the full volitional period of evolution… Evolution, including genetic progress in human nature and human capacity, will be from (then) on increasingly the domain of science and technology tempered by ethics and political choice.”

    As often happens, Wilson reveals his emotional heart of hearts to us with a bit of hyperbole in his final sentence:

    And if we should surrender our genetic nature to machine-aided ratiocination, and our ethics and art and our very meaning to a habit of careless discursion in the name of progress, imagining ourselves godlike and absolved from our ancient heritage, we will become nothing.

    This is a bit flamboyant, and it raises the question of who or what gets to decide our “meaning.” Still, Wilson’s work is full of interesting and thought-provoking ideas, and he is well worth reading.

  • Personal Genetic Testing – One More Step

    Posted on August 8th, 2009 Helian No comments

    Personal genetic testing began mainly as a tool for genealogists. The next step, testing for health risks, has already been taken. As the technology continues to develop, individuals will gain increasing control over their own genetic futures. They will, that is, unless the many who, for one reason or another, oppose these developments are able to stop them. The only viable way to do that is by enlisting the power of the state. They will certainly make the attempt. It will be interesting to see if they succeed. The forces that drove human evolution for hundreds of thousands of years have, for all practical purposes, ceased to exist. The outcome of the battle will determine what those forces will be in the future.