But Wait! There Are More “Worries” from The Edge!

I won’t parse all 150+ of them, but here are a few more that caught my eye.

Science writer and historian Michael Shermer, apparently channeling Sam Harris, is worried about the “Is-Ought Fallacy of Science and Morality.”  According to Shermer,

…most scientists have conceded the high ground of determining human values, morals, and ethics to philosophers, agreeing that science can only describe the way things are but never tell us how they ought to be. This is a mistake.

It’s only a mistake to the extent that there’s actually some “high ground” to be conceded.  There is not.  Assuming that Shermer is not referring to the trivial case of discovering mere opinions in the minds of individual humans, neither science nor philosophy is capable of determining anything about objects that don’t exist.  Values, morals, and ethics do not exist as objects.  They are not things-in-themselves.  They cannot leap out of the skulls of individuals and acquire a reality and legitimacy that transcends individual whim.  Certainly, large groups of individuals who discover that they have whims in common can band together and “scientifically” force their whims down the throats of less powerful groups and individuals, but, as they say, that don’t make it right.

Suppose we experienced a holocaust of some kind, and only one human survived the mayhem.  No doubt he would still be able to imagine what it was like when there were large groups of others like himself.  He might recall how they behaved, “scientifically” categorizing their actions as “good” or “evil” according to his own particular moral intuitions.  Suppose, now, that his life also flickered out.  What would be left of his whims?  Would the inanimate universe, spinning on towards its own destiny, care about them one way or the other?  Science can determine the properties and qualities of things that exist.  Where, then, would the “good” and “evil” objects reside?  Would they still float about in the ether as disembodied spirits?  I’m afraid not.  Science can have nothing to say about objects that don’t exist.  Michael Shermer might feel “in his bones” that some version of “human flourishing” is “scientifically good,” but there is no reason at all why I or anyone else should agree with his opinion.  By all means, let us flourish together, if we all share that whim, but surely we can pursue that goal without tacking moral intuitions onto it.  “Scientific” morality is not only naive but, as was just demonstrated by the Communists and the Nazis, extremely dangerous as well. According to Shermer,

We should be worried that scientists have given up the search for determining right and wrong…

In fact, if scientists ceased looking for, and seeking to study, objects that plainly don’t exist, that would seem to me more cause for congratulations all around than for worry.  Here’s a sample of the sort of “reasoning” Shermer uses to bolster his case:

We begin with the individual organism as the primary unit of biology and society because the organism is the principal target of natural selection and social evolution. Thus, the survival and flourishing of the individual organism—people in this context—is the basis of establishing values and morals, and so determining the conditions by which humans best flourish ought to be the goal of a science of morality. The constitutions of human societies ought to be built on the constitution of human nature, and science is the best tool we have for understanding our nature.

Forgive me for being blunt, but this is gibberish.  Natural selection can have no target, because it is an inanimate process, and can no more have a purpose or will than a stone.  “Thus, the survival and flourishing of the individual organism – people in this context – is the basis of establishing values and morals”??  Such “reasoning” reminds me of the old Sidney Harris cartoon in which one scientist, pointing at the intermediate step in a colleague’s proof (“Then a miracle occurs”), suggests that he be a little more explicit.  If a volcano spits a molten mass into the air, which falls to earth and becomes a rock, is that rock not, in the same sense, the “target” of the geologic processes that caused indigestion in the volcano?  Is not the survival and flourishing of that rock equally a universal “good”?

Of all the “worries,” this was the one that most worried me, but there were others.  Kevin Kelly, Editor at Large of Wired Magazine, is worried about the “Underpopulation Bomb.”  Noting the “Ur-worry” of overpopulation, Kelly writes,

While the global population of humans will continue to rise for at least another 40 years, demographic trends in full force today make it clear that a much bigger existential threat lies in global underpopulation.

Apparently the basis of Kelly’s worry is the assumption that, once the earth’s population peaks in 2050 or thereabouts, the decrease will inevitably continue until we hit zero and die out.  In his words, “That worry seems preposterous at first.”  I think it seems preposterous first and last.

Science writer Ed Regis is worried about, “Being Told That Our Destiny Is Among The Stars.”  After reciting the usual litany of technological reasons that human travel to the stars isn’t likely, he writes,

Apart from all of these difficulties, the more important point is that there is no good reason to make the trip in the first place. If we need a new “Earth 2.0,” then the Moon, Mars, Europa, or other intra-solar-system bodies are far more likely candidates for human colonization than are planets light years away.  So, however romantic and dreamy it might sound, and however much it might appeal to one’s youthful hankerings of “going into space,” interstellar flight remains a science-fictional concept—and with any luck it always will be.

In other words, he doesn’t want to go.  By all means, then, he should stay here.  I and many others, however, have a different whim.  We embrace the challenge of travel to the stars, and, when it comes to human survival, we feel existential Angst at the prospect of putting all of our eggs in one basket.  Whether “interstellar flight remains a science-fictional concept” at the moment depends on how broadly you define “we.”  I see no reason why “we” should be limited to one species.  After all, any species you could mention is related to all the rest.  Interstellar travel may not be a technologically feasible option for me at the moment, but it is certainly feasible for my relatives on this planet, and at a relatively trivial cost.  Many simpler life forms can potentially survive tens of thousands of years in interstellar space.  I am of the opinion that we should send them on their way, and the sooner the better.

I do share some of the other worries of the Edge contributors.  I agree, for example, with historian Noga Arikha’s worry about, “Presentism – the prospect of collective amnesia,” or, as she puts it, the “historical blankness” promoted by the Internet.  In all fairness, the Internet has provided unprecedented access to historical source material.  However, to find it you need the historical background to know what you’re looking for.  That background can be hard to develop in the glare of all the fascinating information available about the here and now.  I also agree with physicist Anton Zeilinger’s worry about, “Losing Completeness – that we are increasingly losing the formal and informal bridges between different intellectual, mental, and humanistic approaches to seeing the world.”  It’s an enduring problem.  The name “university” was already a misnomer 200 years ago, and the problem has only become worse in the meantime.  Those who can see the “big picture” and have the talent to describe it to others are in greater demand than ever before.  Finally, I agree with astrophysicist Martin Rees’ worry that, “We Are In Denial About Catastrophic Risks.”  In particular, I agree with this comment of his:

The ‘anthropocene’ era, when the main global threats come from humans and not from nature, began with the mass deployment of thermonuclear weapons. Throughout the Cold War, there were several occasions when the superpowers could have stumbled toward nuclear Armageddon through muddle or miscalculation. Those who lived anxiously through the Cuba crisis would have been not merely anxious but paralytically scared had they realized just how close the world then was to catastrophe.

This threat is still with us.  It is not “in abeyance” because the Cold War ended, nor does the fact that nuclear weapons have not been used since World War II mean that they will never be used again.  They will.  It is not a question of “if,” but of “when.”

On the Existence of “Moral Blind Spots”

According to Michael Austin, who writes the Ethics for Everyone blog for Psychology Today, we are suffering from “Moral Blind Spots.”  Referring to the morality-based arguments in favor of slavery of an earlier time, he writes,

When I read these arguments and discuss their flaws with students, I’m reminded of something a professor of mine once asked, “What will future generations think about us? What moral blind spots of ours will they see, that we miss?” There are many possibilities, to be sure, but I think that future generations may look back at the disparity of wealth in our world and wonder how we could have missed the injustices that exist related to this.

If future generations are still capable of rational thought, perhaps they will ask a more pertinent question:  given that the nature of morality and the ultimate reasons for its existence should have been obvious to our generation, why did so many of our philosophers, professors, and other assorted Ph.D.’s in the social sciences still believe there was some logical basis for making moral judgments of past generations?  Like the work of so many other contemporary experts on morality, Austin’s is informed by the tacit assumption that Good and Evil exist independent of their subjective origins.  Like the others, he throws out a barrage of “shoulds” without troubling himself in the least to establish their legitimacy, apparently oblivious to the many books on the biological origins of morality that have been appearing lately.  Here are some examples from his article:

It is disturbing, shocking, and disappointing to read arguments which include the attempt to defend the indefensible.

The “indefensible” he refers to is slavery.  Austin is not providing us with a description of his subjective moral intuitions in response to a particular stimulus.  Rather, the implication is that there are objective reasons to consider slavery indefensible, and to find attempts to defend it disturbing, shocking, and disappointing.  In short, he is saying that slavery is objectively Evil.  Why?  We are not told.  Nothing could be easier than striking poses in defense of this particular instance of “expertise in morality.”  Does not everyone agree that slavery is Evil?  What could be easier than simply shouting down anyone who disagrees?  Anyone who does simply reveals himself as Evil by association.  In reality, slavery is neither Good nor Evil, because such categories simply don’t exist as things in themselves.  We can certainly say that the subjective moral judgment of many individuals in the U.S. today is that slavery is evil.  It was also the subjective moral judgment of many individuals in the antebellum South that slavery was good.  What we cannot say is that there is some objective basis for deciding which of them is right.  Continuing from the article,

We look back and wonder, “how could educated people believe that slavery was a moral institution?”

Here Austin is assuming that there are objective reasons for considering slavery moral or immoral.  He does not tell us what they are, and for good reason.  There are none.

How could we believe that it is morally permissible for certain parts of the world to have so much, while others, through no fault of their own, die of malaria because they lack access to something as cheap and effective as a bed net or anti-malarial medication.

Here Austin is assuming that there is an objective basis for declaring some things morally permissible, and others not.  Again, he does not trouble himself to explain why.

I pay extra money each month to my satellite tv provider so that I can watch Arsenal on the Fox Soccer Channel, have an occasional overpriced drink at the local coffee shop, and I purchase other things that I don’t really need. To be clear, I don’t think that we should necessarily stop all such spending. What I do think we should consider, however, is the option of curtailing some of this spending and then putting that money to work in ways that can help others who are suffering from treatable illnesses. By making do with a little less, we can help others live. This is not mere charity, it is a matter of justice.

Here we find a baseless “should” attached to spending money one way rather than another, another baseless “should” concerning what it is and is not appropriate for us to consider, and a declaration that something is a “matter of justice” without the least semblance of an attempt to establish the legitimacy of that assertion.

Morality is a loose description of a collection of behavioral traits, observable in human beings, with analogs in other species.  The ultimate reason for their existence is the evolution of physical traits in the brain and nervous system.  Those traits exist solely because individuals who possessed them were more likely to survive, in times that bore little resemblance to the present, than those who didn’t.  So much is becoming increasingly obvious.  In spite of this, Austin and legions of other modern moralists continue simply to assume the existence of objective morality as a given.  It is nothing of the sort.  Today, objective morality is like a dead man walking.  Perhaps future generations will have the sense to wonder why it is taking the dead man so long to finally collapse.