Posted on July 18th, 2010
Internet chatter over “designer babies” has died down considerably since early 2009, when a chain of fertility clinics headquartered in Los Angeles offered to allow prospective parents to select for cosmetic traits such as hair, eye, and skin color. However, the subject bears on the genetic future of mankind, and is of enduring importance whether the media gatekeepers are paying attention to it or not. The clinics in question quickly withdrew the offered services in response to the inevitable “storm of protest” from those who consider themselves the guardians of public morality. Regardless, pre-implantation genetic diagnosis (PGD), the technology involved, has been around since the early 1990s, and continues to advance. It involves checking the genetic material in a cell taken from an embryo very early in its development, when it consists of only about six to eight cells. Initially developed to screen for conditions such as Down syndrome, or to reduce the probability of developing diseases such as diabetes or cancer, it can in principle be used to select for arbitrary inherited traits. Recent research has focused on diseases and psychiatric conditions, such as schizophrenia, that do not appear traceable to simple genetic variations and are more likely genetically heterogeneous, dependent on what is probably a complex combination of genetic factors. As our knowledge increases along these lines, we will inevitably learn to better understand, and eventually control, the similarly complex genetic factors affecting cognitive ability, or intelligence. One must hope that day comes sooner rather than later, and that when it comes, prospective parents will have the right to use the technology without state interference.
If we are to survive, we must become more intelligent, and the sooner the better. The matter is urgent, and there is no alternative. If we do survive, we will become more intelligent. The only question is how. Will it be by controlled genetic engineering, or by the “survival of the fittest” in the future holocausts we bring on ourselves because we are too stupid to avoid them? Consider the events of the 20th century. A great wave of popular idealism that had been growing ever stronger since the days of the American and French Revolutions among a large proportion of the most intelligent and highly educated elements of societies around the world metastasized into the incredibly destructive pseudo-religion, Communism. The better part of a century and 100 million deaths later, we seem to have weathered that particular ideological storm, at least for the time being. There is no compelling reason to believe that it was inevitable that we would, or that it was impossible that, under somewhat different but plausible conditions, Communist systems could have dominated the entire world, or that the resultant clash of ideologies might have culminated in a general nuclear exchange. Orwell’s 1984 might very well have become a reality. International boundaries might very well have been reduced to the role of marking where one North Korea ended, and another began. There is no guarantee that the outcome of the next storm will not be different.
Communism was no historical anomaly. It was a phenomenon dependent for its existence and its power on some of the best and brightest minds of its day. As such, it provides us with an objective metric of our intelligence. We are not nearly as smart as we think we are. Messianic Islamism has already begun occupying the ideological vacuum left by its demise, and the true believers of new and, perhaps, yet unheard of systems will surely swarm forth eventually to promote new “scientific” paths to the “salvation of humanity.” Meanwhile, the technologies of mass destruction continue to develop at an alarming pace. Unless we become intelligent enough to control them, it is only a question of time until they are used. If we take control of our own genetic future, there is a slim chance that we will be able to avoid the worst. Even if we cannot, doing so will at least improve our chances of surviving it.
When it comes to making the necessary decisions, it would be best to leave the state out of it. State eugenic programs have not been remarkably successful in the past, and they are unlikely to be more successful in the future, because states cannot be depended on to act in the interests of the individuals who are their citizens. Individuals are remarkably acute judges of their own best interests. Give individuals the power to use the technology or not, as they see fit. Their genetic survival will be the metric of whether they made the right choices. As noted in Psychology Today, they have always made those individual choices in the past through selectivity in the choice of a mate. Technologies such as PGD will not change that. They will merely give them the opportunity to make the choice more accurately.
Many articles have been written about the need to explore the “ethical” implications of the choices we must make about these technologies. In fact, virtually anyone who describes themselves as a “bio-ethicist,” or, for that matter, an “ethics expert” of any other stripe is, objectively, a charlatan. Their “ethical debates” are merely so much emotional posturing, in which the various sides carry on fantastical arguments about whose deeply felt emotions are the most “legitimate.” Ethical debates that do not start with the recognition of the evolutionary origin of these emotions, of the reasons and conditions under which they evolved, and their nature as subjective constructs deriving from predispositions that are hard-wired in the brain, are no more rational than the raving of madmen.
Values can never be legitimate in themselves. They are, by their nature, subjective. They exist, like virtually everything else of significance about us, because the wiring in the brain that gives rise to them promoted our survival. If, then, one finds it necessary for some reason to pursue a “value,” none can rationally take precedence over survival. That is the only “value” that can be accepted as seriously at issue here. We can ignore the rest of the blather about “ethics,” because the “ethicists” quite literally do not know what they’re talking about.
I wish to survive, and I wish for my species and life in general to survive. I don’t flatter myself that those wishes have any objective legitimacy, but, subjectively, I am very attached to them. Assuming there are others out there who also wish to survive, I have a suggestion about how to fulfill that wish. Let us become more intelligent as quickly as possible.
Posted on March 30th, 2010
James Lovelock, originator of the Gaia Theory, drew a baleful stare from Instapundit this morning for claiming, as Glenn put it, that “We need to get rid of ‘obstructions’ like democracy to deal with global warming,” in an interview for the Guardian. Dr. Lovelock’s actual remarks weren’t quite so blunt:
Even the best democracies agree that when a major war approaches, democracy must be put on hold for the time being. I have a feeling that climate change may be an issue as severe as a war. It may be necessary to put democracy on hold for a while.
In fact, the Guardian article left me with a rather favorable impression. I don’t take Lovelock’s Gaia theory seriously, but it’s really more an expression of the man’s “spirituality” than an attempt at rigorous science. Apparently he was grasping for some kind of straw to fill his need for something “greater than himself,” but that’s not an uncommon human foible, even among people as intelligent as Lovelock. And he is intelligent. One can tell that by the fact that he thinks outside of the box. He’s not wearing any of the usual ideological straitjackets. Consider, for example, the last three paragraphs of the article:
Lovelock says the events of the recent months have seen him warming to the efforts of the “good” climate sceptics: “What I like about sceptics is that in good science you need critics that make you think: ‘Crumbs, have I made a mistake here?’ If you don’t have that continuously, you really are up the creek. The good sceptics have done a good service, but some of the mad ones I think have not done anyone any favours. You need sceptics, especially when the science gets very big and monolithic.”
Lovelock, who 40 years ago originated the idea that the planet is a giant, self-regulating organism – the so-called Gaia Theory – added that he has little sympathy for the climate scientists caught up in the UEA email scandal. He said he had not read the original emails – “I felt reluctant to pry” – but that their reported content had left him feeling “utterly disgusted”.
“Fudging the data in any way whatsoever is quite literally a sin against the holy ghost of science,” he said. “I’m not religious, but I put it that way because I feel so strongly. It’s the one thing you do not ever do. You’ve got to have standards.”
Obviously, he’s not a hidebound ideologue busily embellishing his “climate denier” demon. Rather, he’s apparently made a conscientious attempt to think a few things through without balking at the preconceived shibboleths he encountered along the way. As we gather from Instapundit’s stern disapproval, one such shibboleth was democracy.
It’s difficult to deny that democracy has its faults. As Winston Churchill put it, “No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.” In the end, it may turn out to be a self-annihilating form of government. In our own day we see it incapable of resisting infiltration by people whose culture may be hostile to its existence, or of resisting the rise of a bloated state power whose coexistence with Liberty is out of the question.
Other than that, it is also true that, as Lovelock claims, democracies are in the habit of setting aside their political ideals in time of war. If the effects of global warming become as severe as a major war, the overriding imperative of survival may, indeed, require that “democracy be put on hold.” If so, the question will become, “Who gets to play dictator?” I personally would prefer the CEO of some oil company to a coalition of Greenpeace, PETA, and Code Pink, but that’s just a matter of personal taste.
Lovelock makes another comment in the article that I find spot on:
I don’t think we’re yet evolved to the point where we’re clever enough to handle as complex a situation as climate change. The inertia of humans is so huge that you can’t really do anything meaningful.
His interviewer bowdlerizes this to “Humans are too stupid to prevent climate change from radically impacting on our lives over the coming decades,” in typical journalistic fashion, but the statement is, nonetheless, true. We are not intelligent enough to avoid the chaos and catastrophe that will surely be our future lot in one form or another if we remain as we are. We can try to avoid the worst by taking control of our own evolution, or we can sit and wait. Evolution will not stand still, regardless. Perhaps the result will be the same. Assuming we don’t annihilate ourselves completely, above average intelligence will surely be a factor in deciding who will survive the wrath to come. If we prove incapable of making ourselves smarter, nature will do it for us. It will just be a great deal more painful.
Posted on January 11th, 2010
The Daily Galaxy has chosen Stephen Hawking’s contention that the human species has entered a new stage of evolution as the top story of 2009. It was included in his Life in the Universe lecture, along with many other thought-provoking observations about the human condition. I don’t agree with his suggestion that we need to redefine the word “evolution” to include the collective knowledge we’ve accumulated since the invention of written language. The old definition will do just fine, and conflating it with something different can only lead to confusion. Still, if “top story” billing will get more people to read the lecture, I’m all in favor of it, because it’s well worth the effort. Agree with him or not, Hawking has a keen eye for picking topics of cosmic importance. By “cosmic importance,” I mean more likely to retain their relevance 100 years from now than, say, the latest wrinkles in the health care debate or the minutiae of Tiger Woods’ sex life.
Hawking begins with a salutary demolition of the Creationist argument that life could not have evolved because of the Second Law of Thermodynamics. The fact that the use of this argument implies ignorance of the relevant theory has done little to deter religious obscurantists from using it, so the more scientists of Hawking’s stature point out its absurdity, the better.
The lecture continues with some observations on the possible reasons we have not yet detected intelligent life outside our own planet. These reasons are summarized as follows:
1. The probability of life appearing is very low
2. The probability of life is reasonable, but the probability of intelligence is low
3. The probability of evolving to our present state is reasonable, but then civilization destroys itself
4. There is other intelligent life in the galaxy, but it has not bothered to come here
My two cents’ worth: I think the probability of life appearing is low, but the probability that it is limited to earth is also low. It would be surprising if life only evolved on one planet, but managed to survive long enough on that one planet for intelligent beings like ourselves to evolve. On the other hand, we may be the only intelligent life form in the universe. If not, why haven’t we heard from or detected the others? Let us hope that the proponents of the third possibility are overly pessimistic.
Later in the lecture, after noting the explosion of human knowledge over the last 300 years, Hawking observes:
This has meant that no one person can be the master of more than a small corner of human knowledge. People have to specialise, in narrower and narrower fields. This is likely to be a major limitation in the future. We certainly cannot continue, for long, with the exponential rate of growth of knowledge that we have had in the last three hundred years. An even greater limitation and danger for future generations, is that we still have the instincts, and in particular, the aggressive impulses, that we had in cave man days. Aggression, in the form of subjugating or killing other men, and taking their women and food, has had definite survival advantage, up to the present time. But now it could destroy the entire human race, and much of the rest of life on Earth. A nuclear war is still the most immediate danger, but there are others, such as the release of a genetically engineered virus. Or the green house effect becoming unstable.
I would differ with him on some of the details here. For example, the bit about aggression oversimplifies the evolution of innate predispositions. Back in the day when Konrad Lorenz published “On Aggression,” the behaviorists would have dismissed even a gentle soul like Hawking as a “fascist” for speaking of an “instinct” of aggression in such indelicate terms. Nevertheless, when it comes to the basic premise of the sentence, Hawking gets it right. We are not purely rational beings, nor is our behavior determined solely by culture and environment. Rather, we act in response to predispositions that were hard-wired in our brains at a time when our manner of existence was vastly different from what it is today. They had survival value then. They may doom us in the world of today unless we learn to understand and control them.
There is no time, to wait for Darwinian evolution, to make us more intelligent, and better natured. But we are now entering a new phase, of what might be called, self designed evolution, in which we will be able to change and improve our DNA. There is a project now on, to map the entire sequence of human DNA. It will cost a few billion dollars, but that is chicken feed, for a project of this importance. Once we have read the book of life, we will start writing in corrections. At first, these changes will be confined to the repair of genetic defects, like cystic fibrosis, and muscular dystrophy. These are controlled by single genes, and so are fairly easy to identify, and correct. Other qualities, such as intelligence, are probably controlled by a large number of genes. It will be much more difficult to find them, and work out the relations between them. Nevertheless, I am sure that during the next century, people will discover how to modify both intelligence, and instincts like aggression.
Laws will be passed against genetic engineering with humans. But some people won’t be able to resist the temptation, to improve human characteristics, such as size of memory, resistance to disease, and length of life. Once such super humans appear, there are going to be major political problems, with the unimproved humans, who won’t be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings, who are improving themselves at an ever-increasing rate.
Here, he is right on. Unless we manage to destroy ourselves in the near future, or at least our highly developed technological societies, individuals will inevitably begin to take advantage of the potential of genetic engineering. That is a good thing, to the extent that our survival is a good thing, because we are unlikely to survive unless we do develop into what Hawking calls “self-designing beings.” We have certainly made a hash of things at our present level of development in a very short time. We can’t go on long the way we are now.
Continuing with Hawking:
If this race manages to redesign itself, to reduce or eliminate the risk of self-destruction, it will probably spread out, and colonise other planets and stars. However, long distance space travel, will be difficult for chemically based life forms, like DNA. The natural lifetime for such beings is short, compared to the travel time. According to the theory of relativity, nothing can travel faster than light. So the round trip to the nearest star would take at least 8 years, and to the centre of the galaxy, about a hundred thousand years. In science fiction, they overcome this difficulty, by space warps, or travel through extra dimensions. But I don’t think these will ever be possible, no matter how intelligent life becomes. In the theory of relativity, if one can travel faster than light, one can also travel back in time. This would lead to problems with people going back, and changing the past. One would also expect to have seen large numbers of tourists from the future, curious to look at our quaint, old-fashioned ways.
In fact, covering galactic and inter-galactic distances is not theoretically out of the question. One may not be able to exceed the speed of light, but one can reduce the distances one has to travel via the Lorentz contraction. Thus, if I could find some means to accelerate myself to nearly the speed of light, the apparent distance to, for example, the Andromeda galaxy would shrink until, finally, I could reach it in a time short compared to a human lifetime. The only problem is, if I were able to turn around and come back the same way, the Milky Way would be about 5 million years older than when I left. Accelerating objects the size of a human being to nearly the speed of light and ensuring their survival over large distances would not be easy. However, accelerating the DNA required to create a human being, along with, say, self-replicating nano-machinery that could create an environment for and then use the DNA to bring a human being to life, would be much easier, and, I think, plausible. It may be the way we eventually colonize distant star systems with suitable earth-like planets. I am not on board with the alternative suggested by Hawking:
It might be possible to use genetic engineering, to make DNA based life survive indefinitely, or at least for a hundred thousand years. But an easier way, which is almost within our capabilities already, would be to send machines. These could be designed to last long enough for interstellar travel. When they arrived at a new star, they could land on a suitable planet, and mine material to produce more machines, which could be sent on to yet more stars. These machines would be a new form of life, based on mechanical and electronic components, rather than macromolecules. They could eventually replace DNA based life, just as DNA may have replaced an earlier form of life.
It puzzles me that someone as brilliant as Hawking could find such a vision of the future attractive. Perhaps he has made the mistake of conflating our consciousness with ourselves, and thinks that “eternal life” is merely a matter of perpetuating consciousness in machines. In fact, consciousness is just an evolved trait. Like all our other evolved traits, it exists because it helped to promote our survival. “We” are not our consciousness. “We” are our genetic material. That “we” has lived for many hundreds of millions of years, and is potentially immortal. Consciousness is just a trait that comes and goes with each reproductive cycle. If our consciousness fools us into believing that it is really the substantial and important thing about us, and its perpetuation is a good in itself, it may mean the emergence of a new race of machines. Regardless of their consciousness, however, they won’t be “us.” Rather, “we” will have finally succeeded in annihilating ourselves, and the future evolution of the universe will have become as pointless as far as we are concerned as if life had never evolved at all.
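The relativistic bookkeeping behind the Andromeda example above is easy to check. The sketch below uses illustrative figures of my own choosing, not anything from the lecture: Andromeda at roughly 2.5 million light-years, and a few assumed Lorentz factors for the traveller. It computes how much time passes aboard the ship for the one-way trip, versus how much passes at home:

```python
import math

def proper_time_years(distance_ly, gamma):
    """Shipboard (proper) time, in years, to cover distance_ly
    light-years at the speed implied by Lorentz factor gamma."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c for this gamma
    earth_time = distance_ly / beta         # years elapsed in Earth's frame
    return earth_time / gamma               # time dilated for the traveller

# Assumed distance to the Andromeda galaxy, in light-years.
ANDROMEDA_LY = 2.5e6

for gamma in (1e4, 1e5, 1e6):
    aboard = proper_time_years(ANDROMEDA_LY, gamma)
    print(f"gamma = {gamma:.0e}: about {aboard:.1f} years aboard ship")
```

At a Lorentz factor of 100,000 the traveller ages about 25 years on the way out while roughly 2.5 million years pass at home, which is exactly the asymmetry described above: the trip is survivable for the passenger, but a round trip returns to a galaxy some 5 million years older.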
Posted on December 15th, 2009
If you check the websites of any one of the major booksellers, you can get an idea of the kind of books people are reading these days by checking their offerings. Click on the “history” link, for example, and you’ll quickly find quite a few offerings on U.S. history, with emphasis on the Civil War, the Revolution, and the Founding Fathers. There are lots of books about war, an occasional revelation of how this or that class of victims was victimized, or this or that historical villain perpetrated his evil deeds, and a sprinkling of sports histories, but there are gaping lacunae when it comes to coverage of events that really shaped the times we live in, and the ideological and political developments of yesterday that are portents of what we can expect tomorrow.
Perhaps the Internet, wonderful as it is, is part of the problem. The wealth of information it provides tends to be sharply focused on the here and now. We have all the minutiae of the health care debate, troop levels in Afghanistan, and the narratives affirming and rejecting global warming at our fingertips, but little to encourage us to take an occasional step back to see things in their historical perspective. As a result, one finds much ranting about Marxism, socialism, fascism, Communism, and related ideological phenomena, but little understanding of how they arose in the first place, how it is they became so prominent, or why they are still relevant.
Such ideologies appealed to aspects of human nature that haven’t gone anywhere in the meantime. The specific doctrines of Marx, Bakunin, and Hitler are discredited because they didn’t work in practice. That doesn’t mean that new variants with promises of alternate Brave New Worlds won’t arise to take their place. For the time being, Islamism has rushed in to fill the vacuum left by their demise, but I doubt it will satisfy the more secular minded of the chronic zealots among us for very long. The Islamists may have appropriated the political jargon of the “progressive” left, but it’s a stretch to suggest that western leftists are about to become pious Muslims any time soon. Should the economies of the developed nations turn south for an extended period of time, or some other profound social dislocation take place, some new secular faith is likely to arise, promising a way out to the desperate, a new faith for the congenitally fanatical, and a path to power for future would-be Stalins.
To understand the fanaticisms of bygone days, and perhaps foresee the emergence of those of the future, it would be well if we occasionally stepped back from our obsession with the ideological disputes of the present and pondered the nature and outcome of those of the past. One such outcome was the birth of the United States, and the subsequent replacement of monarchical systems by secular democracies in many countries, accompanied by the movement away from societies highly stratified by class to more egalitarian systems. Personally, I am inclined to welcome that development, but it remains to be seen whether the resultant social and political systems are capable of maintaining their integrity and the cultural identity of the people they represent against the onslaught of alien cultures and religions.
Another, less positive, outcome has been the emergence of secular dogmas such as those mentioned above, promising rewards in the here and now instead of the hereafter. These have generated levels of fanaticism akin to those generated by religious faith in the past. In fact, as belief systems, they are entirely akin to religion, as various thinkers have repeatedly pointed out over the past two centuries. They are substantially different from religions only in the absence of belief in supernatural beings. These belief systems have spawned all the mayhem that their religious cousins spawned in the past, but with a substantial difference. I suspect that difference is more a function of general advances in literacy, technology, and social awareness than any distinctions of dogma.
Specifically, for the first time on such a massive scale, the mayhem and slaughter occasioned by fanatical belief in these new secular dogmas has not fallen with more or less equal weight on all the strata of society. Rather, its tendency has been to eliminate the most intelligent, the most productive, and the most creative. Lenin and Stalin were not indiscriminate in their mass murder. They singled out scientists, academics, the most intelligent and productive farmers, the most economically productive, the most politically aware, and the most creative thinkers. Their goal was to eliminate anyone who was likely to oppose them effectively. In general, these were the most intelligent members of society. Similarly, the horrific Khmer Rouge regime in Cambodia systematically eliminated anyone with a hint of education or appearance of intellectual superiority. In another example, of which most of us are only dimly aware, although it happened in living memory, the right in the Spanish Civil War ruthlessly sought out and shot anyone on the left prominent for political thought or leadership capacity, and the left, in turn, sought out and shot anyone who had managed to rise above the bare level of subsistence of the proletariat. The Nazis virtually eliminated a minority famous for its creativity, intelligence, and productivity.
Mass murder is hardly a novelty among human beings. It has been one of our enduring characteristics since the dawn of recorded time. However, this new variant, in which the best and brightest are selectively eliminated, really only emerged in all its fury in the 20th century. The French Reign of Terror, similarly selective as it was, was child’s play by comparison, with its mere 20,000 victims. The victims of Communism alone approach 100 million. In two countries, at least, it is difficult to see how this will not have profound effects on the ability of the remaining population to solve the many problems facing modern societies. In effect, those two countries, the former Soviet Union and Cambodia, beheaded themselves. The wanton elimination of so much intellectual potential by their former masters is bound to have a significant effect on the quality of the human capabilities available to rebuild society now that the Communist nightmare is over, at least for them. Perhaps, at some future time when we regain the liberty to speculate about such matters without being shouted down as evildoers by the pathologically politically correct, some nascent Ph.D. in psychology will undertake to measure the actual drop in collective intelligence in those countries resulting from the Communist mass murder.
It behooves us, then, to remember what happened in the 20th century. It is hardly out of the question that new fanatical faiths will emerge, both secular and religious, and that they will be capable of all the social devastation of the Communists and Nazis and then some. Here in America, an earlier generation, even in the darkest days of the Great Depression, rejected the siren song of the fanatics. For that, we owe them much. Let us try to emulate them in the future.
Posted on September 6th, 2009
The Next Big Future site links to a report released by a bevy of professors that, we are told, is to serve “…as a convenient and accessible starting point for both public and classroom discussions, such as in bioethics seminars.” The report itself may be found here. It contains “25 Questions & Answers,” many of which relate to moral and ethical issues related to human enhancement. For example,
1. What is human enhancement?
2. Is the natural/artificial distinction morally significant in this debate?
3. Is the internal/external distinction morally significant in this debate?
4. Is the therapy/enhancement distinction morally significant in this debate?
9. Could we justify human enhancement technologies by appealing to our right to be free?
10. Could we justify enhancing humans if it harms no one other than perhaps the individual?
You get the idea. Now, search through the report and try to find a few clues about what the authors are talking about when they use the term “morality.” There are precious few. Under question 25 (Will we need to rethink ethics itself?) we read,
To a large extent, our ethics depends on the kinds of creatures that we are. Philosophers traditionally have based ethical theories on assumptions about human nature. With enhancements we may become relevantly different creatures and therefore need to re-think our basic ethical positions.
This is certainly sufficiently coy. There is no mention of the basis we are supposed to use to do the re-thinking. If we look through some of the other articles and reports published by the authors, we find other hints. For example, in “Why We Need Better Ethics for Emerging Technologies” in “Ethics and Information Technology” by Prof. James H. Moor of Dartmouth we find,
… first, we need realistically to take into account that ethics is an ongoing and dynamic enterprise. Second, we can improve ethics by establishing better collaborations among ethicists, scientists, social scientists, and technologists. We need a multi-disciplinary approach (Brey, 2000). The third improvement for ethics would be to develop more sophisticated ethical analyses. Ethical theories themselves are often simplistic and do not give much guidance to particular situations. Often the alternative is to do technological assessment in terms of cost/benefit analysis. This approach too easily invites evaluation in terms of money while ignoring or discounting moral values which are difficult to represent or translate into monetary terms. At the very least, we need to be more proactive and less reactive in doing ethics.
Great! I’m all for proactivity. But if we “do” ethics, what is to be the basis on which we “do” them? If we are to have such a basis, do we not first need to understand the morality on which ethical rules are based? What we have here is another effort by “experts on ethics” who apparently have no clue about the morality that must be the basis for the ethical rules they discuss so wisely if they are to have any legitimacy. If they do have a clue, they are being extremely careful to make sure we are not aware of it. Apparently we are to trust them because, after all, they are recognized “experts.” They don’t want us to peek at the “man behind the curtain.”
This is an excellent example of what E. O. Wilson was referring to when he inveighed against the failure of these “experts” to “put their cards on the table” in his book, “Consilience.” The authors never inform us whether they believe the morality they refer to with such gravity is an object, a thing-in-itself, or, on the contrary, is an evolved, subjective construct, as their vague allusion to a basis in “human nature” would seem to imply. Like so many other similar “experts” in morality and ethics, they are confident that most people will “know what they mean” when they refer to these things and will not press them to explain themselves. After all, they are “experts.” They have the professorial titles and NSF grants to prove it. When it comes to actually explaining what they mean when they refer to morality, to informing us what they think it actually is, and how and why it exists, they become as vague as the Oracle of Delphi.
Read John Stuart Mill’s “Utilitarianism,” and you will quickly see the difference between the poseurs and someone who knows what he’s talking about. Mill was not able to sit on the shoulders of giants like Darwin and the moral theorists who based their ideas on his work, not to mention our modern neuroscientists. Yet, in spite of the fact that these transformational insights came too late to inform his work, he had a clear and focused grasp of his subject. He knew that it was not enough to simply assume others knew what he meant when he spoke of morality. In reading his short essay we learn that he knew the difference between transcendental and subjective morality, that he was aware of and had thought deeply about the theories of those who claimed (long before Darwin) that morality was a manifestation of human nature, and that one could not claim the validity or legitimacy of moral rules without establishing the basis for that legitimacy. In other words, Mill did lay his cards on the table in “Utilitarianism.” Somehow, the essay seems strangely apologetic. Often it seems he is saying, “Well, I know my logic is a bit weak here, but I have done at least as well as the others.” Genius that he was, Mill knew that there was an essential something missing from his moral theories. If he had lived a few decades later, I am confident he would have found it.
Those who would be taken seriously when they discuss morality must first make it quite clear they know what morality is. As those who have read my posts on the topic know, I, too, have laid my cards on the table. I consider morality an evolved human trait, with no absolute legitimacy whatsoever beyond that implied by its evolutionary origin at a time long before the emergence of modern human societies, or any notion of transhumanism or human enhancements. As such, it can have no relevance or connection whatsoever to such topics other than as an emotional response to an issue to which that emotion, an evolved response like all our other emotions, was never “designed” to apply.
Posted on August 14th, 2009 3 comments
I first became aware of the work of E. O. Wilson when he published a pair of books in the 70’s (“Sociobiology” in 1975 and “On Human Nature” in 1978) that placed him in the camp of those who, like Ardrey, insisted on the role of genetically programmed predispositions in shaping human behavior. He touches on some of the issues we’ve been discussing here in one of his more recent works, “Consilience.” In a chapter entitled “Ethics and Religion,” he takes up the two competing fundamental assumptions about ethics that, according to Wilson, “make all the difference in the way we view ourselves as a species.” These two contradictory assumptions can be stated as, “I believe in the independence of moral values,” and “I believe that moral values come from humans alone.” This formulation is somewhat imprecise, as animals other than humans act morally. However, I think the general meaning of what Wilson is saying is clear. He refers to these two schools of thought as the “transcendentalists” and “empiricists,” respectively. He then goes on to express a sentiment with which I very heartily agree:
The time has come to turn the cards face up. Ethicists, scholars who specialize in moral reasoning, are not prone to declare themselves on the foundations of ethics, or to admit fallibility. Rarely do you see an argument that opens with the simple statement: This is my starting point, and it could be wrong. Ethicists instead favor a fretful passage from the particular into the ambiguous, or the reverse, vagueness into hard cases. I suspect that almost all are transcendentalists at heart, but they rarely say so in simple declarative sentences. One cannot blame them very much; it is difficult to explain the ineffable, and they evidently do not wish to suffer the indignity of having their personal beliefs clearly understood. So by and large they steer around the foundation issue altogether.
Here he hits the nail on the head. It’s normal for human beings to be “transcendentalists at heart,” because that’s our nature. We’re wired to think of good and evil as having an objective existence independent of our minds. Unfortunately, that perception is not true, and yet the “scholars who specialize in moral reasoning” appear singularly untroubled by the fact. Someone needs to explain to them that we’re living in the 21st century, not the 18th, and their pronouncements that they “hold these truths to be self-evident” don’t impress us anymore. In the meantime, we’ve had a chance to peek at the man behind the curtain. If they really think one thing is good, and another evil, it’s about time they started explaining why.
Wilson declares himself an empiricist, and yet, as was also evident in his earlier works, he is not quite able to make a clean break with the transcendentalist past. I suspect he has imbibed too deeply at the well of traditional philosophy and theology. As a result, he has far more respect for the logic-free notions of today’s moralists than they deserve. I have a great deal of respect for Martin Luther as one of the greatest liberators of human thought who ever lived, and I revere Voltaire as a man who struck the shackles of obscurantism from the human mind. That doesn’t imply that I have to take Luther’s pronouncements about the Jews or Voltaire’s notions about his deist god seriously.
I once had a friend who, when questioned too persistently about something for which he had no better answer would reply, “Because there are no bones in ice cream.” The proposition that morality is an evolved human trait seems just as obvious to me as the proposition that there are no bones in ice cream. If anyone cares to dispute the matter with me, they need to begin by putting a package with bones on the table. Otherwise I will not take them seriously. The same goes for Wilson’s menagerie of philosophers and theologians. I respect them because, unlike so many others, they took the trouble to think. When it comes to ideas, however, we should respect them not because they are hoary and traditional, but because they are true. We have learned a great deal since the days of Kant and St. Augustine. We cannot ignore what we have learned in the intervening years out of respect for their greatness.
In the final chapter of his book, entitled “To What End,” Wilson discusses topics such as the relationship between environmental degradation and overpopulation, and considers the future of genetic engineering. His comments on the former are judicious enough, and it would be well if the developed countries of the world considered them carefully before continuing along the suicidal path of tolerating massive legal and illegal immigration. As for the latter, here, again, I find myself in agreement with him when he says that, “Once established as a practical technology, gene therapy will become a commercial juggernaut. Thousands of genetic defects, many fatal, are already known. More are discovered each year… It is obvious that when genetic repair becomes safe and affordable, the demand for it will grow swiftly. Some time in the next (21st) century that trend will lead into the full volitional period of evolution… Evolution, including genetic progress in human nature and human capacity, will be from (then) on increasingly the domain of science and technology tempered by ethics and political choice.”
As often happens, Wilson reveals his emotional heart of hearts to us with a bit of hyperbole in his final sentence:
And if we should surrender our genetic nature to machine-aided ratiocination, and our ethics and art and our very meaning to a habit of careless discursion in the name of progress, imagining ourselves godlike and absolved from our ancient heritage, we will become nothing.
This is a bit flamboyant, and raises the question of who or what gets to decide our “meaning.” Still, Wilson’s work is full of interesting and thought-provoking ideas, and he is well worth reading.
Posted on August 8th, 2009 No comments
Personal genetic testing began mainly as a tool for genealogists. The next step, testing for health risks, has already been taken. As the technology continues to develop, individuals will gain increasing control over their own genetic futures. They will, that is, unless the many who, for one reason or another, are opposed to these developments are able to stop them. The only viable way to do that is by enlisting the power of the state. They will certainly make the attempt. It will be interesting to see if they succeed. The forces that have driven human evolution for hundreds of thousands of years have, for all practical purposes, ceased to exist. The outcome of the battle will determine what they will be in the future.
Posted on August 2nd, 2009 3 comments
I’ve been reading through a collection of essays on the future of science entitled “What’s Next,” edited by Max Brockman. Today I’ll pick up where I left off in an earlier post, and look at a piece entitled “How to Enhance Human Beings,” by Nick Bostrom.
Once upon a time, in the days before the Nazi paradigm shift, eugenics used to be a topic of polite conversation. Now, of course, the Holy Mother Church of public opinion has spoken on the subject, and only the obvious evildoers among us dare to use the term any more, especially when children are present. Nevertheless, there were some spirited debates on the subject before it became obvious that it was necessary to restrict freedom of speech on the matter for our own good. I have unearthed a few interesting examples, both pro and con, in my archaeological peregrinations, and will post them for your amusement and edification one of these days.
In any event, the subject is now moot. Eugenics has gone the way of the horse and buggy. We are now, or soon will be, able to vote with our feet, or genes, as the case may be. Depending on whether our tastes run to biological or mechanical tinkering, we are promised a range of options for ourselves or our offspring to enhance everything from intelligence to lifespan. The emerging possibilities have already turned up in the popular culture in video games such as Bioshock, movies such as Gattaca, and the novels of James Patterson. As one might expect, ethical debates are raging over these technologies. As Nick Bostrom puts it in his essay,
The belief in nature’s wisdom – and corresponding doubts about the prudence of tampering with nature, especially human nature – often manifests as diffusely moral objections to enhancement. Many people have intuitions about the superiority of “the natural” and the troublesomeness of human hubris. Some might base these ideas on theological doctrine, but often there is no such underpinning; often there is nothing more than a discomfort with altering the status quo.
To a large extent, these debates are also moot. Parents are incredibly competitive when it comes to putting their children in better schools, or even on cheerleading squads. Offered the choice between having their children become the enhanced movers and shakers of tomorrow, or the unenhanced restroom attendants and parking valets, they are likely to choose the former. This will be especially true in developed countries where the number of children one chooses to have is often limited by their expense, and in countries like China that legally limit the size of one’s family. Under the circumstances, people are likely to be as indifferent to moral arguments against enhancement as they were to moral arguments against alcohol during Prohibition. The new technology may be used above or below the state’s legal radar, but it will be used.
Bostrom has devoted some thought to the question of whether particular enhancements are advisable or not, considering the matter more from a practical than a moral perspective. He has come up with a system of rules which he calls the evolutionary-optimality challenge. They are discussed in a paper he has posted at his website, and seem a reasonable start on a subject that is likely to attract a lot more attention in coming years.
In the final paragraph of his essay, Bostrom takes up the more speculative question of building “entirely artificial systems of equal complexity and performance” to the human organism. Continuing along these lines, he writes:
At some stage, we may learn how to design new organs and bodies ab initio. Someday we may no longer even rely on biological material to implement our bodies and minds. Freed from most practical limitations, the task would then become to make wise use of our powers to self-modify. In other words, the challenge would shift from being primarily scientific to being primarily moral. If that moral task seems comparatively trivial from our current vantage point, this might reflect our present immaturity.
One hopes he is merely indulging in some end-of-article hyperbole here. If not, one must ask the question, “Whose morality?” In other words, this is another example of the “objective morality” fallacy I have referred to earlier, which consists of assuming that, because we perceive morality as real and objective, it actually is real and objective. Morality is an evolved characteristic that exists in human beings because it has promoted our survival. Bostrom makes the common mistake of assuming that, because he perceives it as independent of his mind, morality actually is independent of his mind, floating out there in space as a real, objective thing-in-itself. He makes the further error of confusing his conscious mind with his genetic material. Morality did not evolve because it promoted the survival of conscious minds. It evolved because it promoted the survival of genetic material. As I have noted earlier, nothing can be reasonably considered more immoral than failing to survive. The idea that one could somehow serve a profound moral cause by accepting genetic death and transferring the mind, an ancillary characteristic evolved only because it, too, has promoted the survival of that genetic material, to a machine, is a logical aberration.