The world as I see it
  • Steven Pinker on “The moral imperative for bioethics”

    Posted on August 9th, 2015 Helian 2 comments

    According to Steven Pinker in The moral imperative for bioethics, an opinion piece he recently wrote for the Boston Globe,

    …the primary moral goal for today’s bioethics can be summarized in a single sentence.  Get out of the way.

    I would strengthen that a bit to something like, “Stop the mental masturbation and climb back into the real world.”  At some level Pinker is aware of the fact that bioethicists and other “experts” in morality are not nearly as useful to the rest of us as they think they are.  He just doesn’t understand why.  As a result he makes the mistake of conceding the objective relevance of morality in solving problems germane to the field of biotechnology.  The fundamental problem is that these people are chasing after imaginary objects, things that aren’t real.  They have bamboozled the rest of us into taking them seriously because we have been hoodwinked by our emotional baggage just as effectively as they have.  There is no premium on reality as far as evolution is concerned.  There is a premium on survival.  We perceive “good” and “evil” as real objects, not because they actually are real objects, but because our ancestors were more likely to pass on the relevant genes if they perceived these fantasies as real things.  Bioethics is just one of the many artifacts of this delusion.

    Consider what the bioethicists are really claiming.  They are saying that mental impressions that exist because they happened to improve the evolutionary fitness of a species of advanced, highly social, bipedal apes correspond to real things, commonly referred to as “good” and “evil,” that have some kind of an objective existence independent of the minds of those creatures.  Not only that, but if one can but capture these objects, which happen to be extremely elusive and slippery, one can apply them to make decisions in the field of biotechnology, which didn’t exist when the mental equipment that gives rise to the impressions in question evolved.  Consider these extracts from the online conversation:

Carl Elliott, at his blog, Fear and Loathing in Bioethics,

    Forget Tuskegee. Forget Willowbrook and Holmesburg Prison. Pay no attention to the research subjects who died at Kano, Auckland Women’s Hospital or the Fred Hutchinson Cancer Center. Never mind about Jesse Gelsinger, Ellen Roche, Nicole Wan, Tracy Johnson or Dan Markingson. According to Steven Pinker, “we already have ample safeguards for the safety and informed consent of patients and research subjects.”  So bioethicists should just shut up about abuses and let smart people like him get on with their work.

    Pinker:

    Indeed, biotechnology has moral implications that are nothing short of stupendous. But they are not the ones that worry the worriers.

    Julian Savulescu at the Practical Ethics website:

    What we need is less obstruction of good and ethical research, as Pinker correctly observes, and more vigilance at picking up unethical research. This requires competent, professional and trained bioethicists and improvement of ethics review processes.

    Daniel K. Sokol, also at Practical Ethics:

    The idea that research that has the potential to cause harm should be subject to ethical review should not be controversial in the 21st century. The words “this project has been reviewed and approved by the Research Ethics Committee” offers some reassurance that the welfare of participants has been duly considered. The thought of biomedical research without ethical review is a frightening one.

    Pinker:

    A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.”

One imagines oneself in Bedlam.  These people are all trying to address what most people would agree is a real problem.  They understand that most people don’t want to be victims of anything like the Tuskegee experiments.  They also grasp the fact that most people would prefer to live longer, healthier lives.  True, these, too, are merely subjective goals, whims if you will, but they are whims that most of us share.  The whims aren’t the problem.  The problem is that we are trying to apply a useless tool to reach the goals: human moral emotions.  We are trying to establish truths by consulting emotions to which no truth claims can possibly apply.  Stuart Rennie got it right in spite of himself in his attack on Pinker at his Global Bioethics Blog:

    My first reaction was: how is this new bioethics skill taught? Should there be classes that teach it in a stepwise manner, i.e. where you first learn not to butt in, then how to just step a bit aside, followed by somewhat getting out of the way, and culminating in totally screwing off? What would the syllabus look like? Wouldn’t avoiding bioethics class altogether be a sign of success?

Pinker, too, arrives at an entirely rational final sentence in his opinion piece:

    Biomedical research will always be closer to Sisyphus than a runaway train — and the last thing we need is a lobby of so-called ethicists helping to push the rock down the hill.

I, too, would prefer not to be a Tuskegee guinea pig.  I, too, would like to live longer and be healthier.  I simply believe that emotional predispositions that exist because they happen to have been successful in regulating the social interactions within and among small groups of hunter-gatherers millennia ago are unlikely to be the best tools to achieve those ends.


  • Morality Inversions

    Posted on August 8th, 2015 Helian No comments

The nature of morality and the reason for its existence have been obvious for more than a century and a half.  Francis Hutcheson demonstrated that it must arise from a “moral sense” early in the 18th century.  Hume agreed, and suggested the possibility that there may be a secular explanation for the existence of this moral sense.  Darwin demonstrated the nature of this secular explanation for anyone willing to peek over the blindfold of faith and look at the evidence.  Westermarck climbed up on the shoulders of these giants, gazed about, and summarized the obvious in his brilliant The Origin and Development of the Moral Ideas.  In short, good and evil have no objective existence.  They are subjective artifacts of behavioral predispositions that exist because they evolved.  Absent that evolved “moral sense,” morality as we know it would not exist.  It evolved because it happened to increase the probability that the genes responsible for its existence would survive and reproduce.  There exists no mechanism whereby those genes can jump out of the DNA of one individual, grab the DNA of another individual by the scruff of the neck, and dictate what kind of behavior that other DNA should regard as “good” or “evil.”

    In the years since Darwin and Westermarck our species has amply demonstrated its propensity to ignore such inconvenient truths.  Once upon a time religion provided some semblance of a justification for belief in an objective “good-in-itself.”  However, latter day “experts” on ethics and morality have jettisoned such anachronisms, effectively sawing off the branch they were sitting on.  Then, with incomparable hubris, they’ve claimed a magical ability to distill objective “goods” and “evils” straight out of the vacuum they were floating in.  In our own time the result is visible as a veritable explosion of abstruse algorithms, incomprehensible to all but a few academic scribblers, for doing just that.  Encouraged by these “experts,” legions of others have indulged themselves in the wonderfully sweet delusion that the particular haphazard grab bag of emotions they happened to inherit from their ancestors provided them with an infallible touchstone for sniffing out “real good” and “real evil.”  The result has been an orgy of secular piety that the religious Puritans of old would have shuddered to behold.

    The manifestations of this latter day piety have been bizarre, to say the least.  Instead of promoting genetic survival, they accomplish precisely the opposite.  Genes that are the end result of an unbroken chain of existence stretching back billions of years into the past now seem intent on committing suicide.  It’s not surprising really.  Other genes gave rise to an intelligence capable of altering the environment so fast that the rest couldn’t possibly keep up.  The result is visible in various forms of self-destructive behavior that can be described as “morality inversions.”

A classic example is the belief that it is “immoral” to have children.  Reams of essays, articles, and even books have been written “proving” that, for various reasons, reproduction is “bad-in-itself.”  If one searches diligently for the “root cause” of all these counterintuitive artifacts of human nature, one will always find them resting on a soft bed of moral emotions.  What physical processes in the brain give rise to these moral emotions, and how, exactly, do they predispose us to act in some ways, but not others?  No one knows.  It’s a mystery that will probably remain unsolved until we unravel the secret of consciousness.  One thing we do know, however.  The emotions exist because they evolved, and they evolved because they enhanced the odds that the genes that gave rise to them would reproduce; or at least they did in a particular environment that no longer exists.  In the vastly different environment we have now created for ourselves, however, they are obviously capable of promoting an entirely different end, at least in some cases: self-destruction.

Of course, self-destruction is not objectively evil, because nothing is objectively evil.  Neither is it unreasonable, because, as Hume pointed out, reason by itself cannot motivate us to do anything.  We are motivated by “sentiments” or “passions” that we experience because it is our nature to experience them.  These include the moral passions.  Self-destruction is a whim, and reason can be applied to satisfy the whim.  I happen to have a different whim.  I see myself as a link in a vast chain of millions of living organisms, my ancestors, if you will.  All have successfully reproduced, adding another link to the chain.  Suppose I were to fail to reproduce, thus becoming the final link in the chain and announcing, in effect, to those who came before me and made my life possible that, thanks to me, all their efforts had ended in a biological dead end.  In that case I would see myself as a dysfunctional biological unit or, in a word, sick, the victim of a morality inversion.  It follows that I have a different whim: to reproduce.  And so I have.  There can be nothing that renders my whims in any way objectively superior to those of anyone else.  I merely describe them and outline what motivates them.  I’m not disturbed by the fact that others have different whims, and choose self-destruction.  After all, their choice to remove themselves from the gene pool and stop taking up space on the planet may well be to my advantage.

    Another interesting example of a morality inversion is the deep emotional high so many people in Europe and North America seem to get from inviting a deluge of genetically and culturally alien immigrants to ignore the laws of their countries and move in.  One can but speculate on the reasons that the moral emotions, mediated by culture as they always are, result in such counterintuitive behavior.  There is, of course, such a thing as human altruism, and it exists because it evolved.  However, that evolutionary process took place in an environment that made it likely that such behavior would enhance the chances that the responsible genes would survive.  People lived in relatively small ingroups surrounded by more or less hostile outgroups.  We still categorize others into ingroups and outgroups, but the process has become deranged.  Thanks to our vastly expanded knowledge of the world around us combined with vastly improved means of communication, the ingroup may now be perceived as “all mankind.”

    Except, of course, for the ever present outgroup.  The outgroup hasn’t gone anywhere.  It has merely adopted a different form.  Now, instead of the clan in the next territory over, the outgroup may consist of liberals, conservatives, Christians, Moslems, atheists, Jews, blacks, whites, or what have you.  The many possibilities are familiar to anyone who has read a little history.  Obviously, the moral equipment in our brains doesn’t have the least trouble identifying the population of Africa, the Middle East, or Mexico as members of the ingroup, and citizens of one’s own country who don’t quite see them in that light as the outgroup.  In that case, anyone who resists a deluge of illegal immigrants is “evil.”  If they point out that similar events in the past have led to long periods of ethnic and/or religious strife, occasionally culminating in civil war, or any of the other obvious drawbacks of uncontrolled immigration, they are simply shouted down with the epithets appropriate for describing the outgroup, “racist” being the most familiar and hackneyed example.  In short, a morality inversion has occurred.  Moral emotions have become dysfunctional, promoting behavior that will almost certainly be self-destructive in the long run.  I may be wrong of course.  The immigrants now pouring into Europe and North America without apparent limit may all eventually be assimilated into a big, happy, prosperous family.  I seriously doubt it.  Wait and see.

One could cite many other examples.  The faithful, of course, have their own versions, such as removing themselves from the gene pool by acting as human bombs, often taking many others with them in the process.  The “good” in this case is the delusional prospect of enjoying the services of 70 of the best Stepford wives ever heard of in the afterlife.  Regardless, the point is that the evolved emotional baggage that manifests itself in so many forms as human morality has been left in the dust.  It cannot possibly keep up with the frenetic pace of human social and technological progress.  The result is morality inversions: behaviors that accomplish more or less the opposite of what they did in the environment in which they evolved.  Under the circumstances, the practice of allowing people to wallow in their moral emotions, insisting that they have a monopoly on the “good” and that anyone who opposes them is “evil,” is becoming increasingly problematic.  As noted above, I don’t have a problem with these people voluntarily removing themselves from the gene pool.  I do have a problem with becoming collateral damage.

  • “Ethics” in the 21st Century

    Posted on August 2nd, 2015 Helian 5 comments

    According to the banner on its cover, Ethics is currently “celebrating 125 years.”  It describes itself as “an international journal of social, political, and legal philosophy.”  Its contributors consist mainly of a gaggle of earnest academics, all chasing about with metaphysical butterfly nets seeking to capture that most elusive quarry, the “Good.”  None of them seems to have ever heard of a man named Westermarck, who demonstrated shortly after the journal first appeared that their prey was as imaginary as unicorns, or even Darwin, who was well aware of the fact, but was not indelicate enough to spell it out so blatantly.

    The latest issue includes an entry on the “Transmission Principle,” defined in its abstract as follows:

    If you ought to perform a certain act, and some other action is a necessary means for you to perform that act, then you ought to perform that other action as well.
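(For those who like their “oughts” formalized: this is the familiar “inheritance” schema of deontic logic, rendered here in my own gloss rather than the author’s notation,

\[
O(\varphi) \;\wedge\; \Box(\varphi \rightarrow \psi) \;\rightarrow\; O(\psi),
\]

where \(O\) reads “it ought to be that” and the necessity operator \(\Box\) captures the claim that \(\psi\) is a necessary means to \(\varphi\).)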

    As usual, the author never explains how you get to the original “ought” to begin with.  In another article entitled “What If I Cannot Make a Difference (and Know It),” the author begins with a cultural artifact that will surely be of interest to future historians:

    We often collectively bring about bad outcomes.  For example, by continuing to buy cheap supermarket meat, many people together sustain factory farming, and the greenhouse gas emissions of millions of individuals together bring about anthropogenic climate change.

    and goes on to note that,

    Intuitively, these bad outcomes are not just a matter of bad luck, but the result of some sort of moral shortcoming.  Yet in many of these situations, none of the individual agents could have made any difference for the better.

    He then demonstrates that, because a equals b, and b equals c, we are still entirely justified in peering down our morally righteous noses at purchasers of cheap meat and emitters of greenhouse gases.  His conclusion in academic-speak:

    I have shown how Act Consequentialists can find fault with some agent in all cases where multiple agents who have modally robust knowledge of all the relevant facts gratuitously bring about collectively suboptimal outcomes, even if the agents individually cannot make any difference for the better due to the uncooperativeness of others.

    The author does not explain the process by which emotions that evolved in a world without cheap supermarket meat have lately acquired the power to prescribe whether buying it is righteous or not.

    It has been suggested by some that trading, the exchange of goods and services, is a defining feature of our species.  In an article entitled “Markets without Symbolic Limits,” the authors conclude that,

    In many cases, we are morally obligated to revise our semiotics in order to allow for greater commodification.  We ought to revise our interpretive schemas whenever the costs of holding that schema are significant, without counterweight benefits.  It is itself morally objectionable to maintain a meaning system that imbues a practice with negative meanings when that practice would save or improve lives, reduce or alleviate suffering, and so on.

    No doubt that very thought occurred to our hunter-gatherer ancestors, enhancing their overall fitness.  The happy result was the preservation of the emotional baggage that gave rise to it to later inform the pages of Ethics magazine.

    In short, “moral progress,” as reflected in the pages of Ethics, depends on studiously ignoring Darwin, averting our eyes from the profane scribblings of Westermarck, pretending that the recent flood of books and articles on the evolutionary origins of morality and the existence of analogs of human morality in many animals are irrelevant, and gratuitously assuming that there really is some “thing” out there for the butterfly nets to catch.  In other words, our “moral progress” has been a progress away from self-understanding.  It saddens me, because I’ve always considered self-understanding a “good.”  Just another one of my whims.

  • Of Tim Hunt and Elementary Morality

    Posted on June 21st, 2015 Helian 33 comments

    If we are evolved animals, then it is plausible that we have evolved behavioral traits, and among those traits are a “moral sense.”  So much was immediately obvious to Darwin himself.  To judge by the number of books that have been published about evolved morality in the last couple of decades, it makes sense to a lot of other people, too.  The reason such a sense might have evolved is obvious, especially among highly social creatures such as ourselves.  The tendency to act in some ways and not in others enhanced the probability that the genes responsible for those tendencies would survive and reproduce.  It is not implausible that this moral sense should be strong, and that it should give rise to such powerful impressions that some things are “really good,” and others are “really evil,” as to produce a sense that “good” and “evil” exist independently as objective things.  Such a moral sense is demonstrably very effective at modifying our behavior.  It hardly follows that good and evil really are independent, objective things.

If an evolved moral sense really is the “root cause” for the existence of all the various and gaudy manifestations of human morality, is it plausible to believe that this moral sense has somehow tracked an “objective morality” that floats around out there independent of any subjective human consciousness?  No.  If it really is the root cause, is there some objective mechanism whereby the moral impressions of one human being can leap out of that individual’s skull and gain the normative power to dictate to another human being what is “really good” and “really evil?”  No.  Can there be any objective justification for outrage?  No.  Can there be any objective basis for virtuous indignation?  No.  So much is obvious.  Under the circumstances it’s amazing, even given the limitations of human reason, that so many of the most intelligent among us just don’t get it.  One can only attribute it to the tremendous power of the moral emotions, the great pleasure we get from indulging them, and the dominant role they play in regulating all human interactions.

These facts were recently demonstrated by the interesting behavior of some of the more prominent intellectuals among us in reaction to some comments at a scientific conference.  In case you haven’t been following the story, the commenter in question was Tim Hunt, a biochemist who won a Nobel Prize in 2001 with Paul Nurse and Leland H. Hartwell for discoveries of protein molecules that control the division (duplication) of cells.  At a luncheon during the World Conference of Science Journalists in Seoul, South Korea, he averred that women are a problem in labs because “You fall in love with them, they fall in love with you, and when you criticize them, they cry.”

    Hunt’s comment evoked furious moral emotions, not least among atheist intellectuals.  According to PZ Myers, proprietor of Pharyngula, Hunt’s comments revealed that he is “bad.”  Some of his posts on the subject may be found here, here, and here.  For example, according to Myers,

    Oh, no! There might be a “chilling effect” on the ability of coddled, privileged Nobel prize winners to say stupid, demeaning things about half the population of the planet! What will we do without the ability of Tim Hunt to freely accuse women of being emotional hysterics, or without James Watson’s proud ability to call all black people mentally retarded?

    I thought Hunt’s plaintive whines were a big bowl of bollocks.

    All I can say is…fuck off, dinosaur. We’re better off without you in any position of authority.

    We can glean additional data in the comments to these posts that demonstrate the human version of “othering.”  Members of outgroups, or “others,” are not only “bad,” but also typically impure and disgusting.  For example,

    Glad I wasn’t the only–or even the first!–to mention that long-enough-to-macramé nose hair. I think I know what’s been going on: The female scientists in his lab are always trying hard to not stare at the bales of hay peeking out of his nostrils and he’s been mistaking their uncomfortable, demure behaviour as ‘falling in love with him’.

    However, in creatures with brains large enough to cogitate about what their emotions are trying to tell them, the same suite of moral predispositions can easily give rise to stark differences in moral judgments.  Sure enough, others concluded that Myers and those who agreed with him were “bad.”  Prominent among them was Richard Dawkins, who wrote in an open letter to the London Times,

Along with many others, I didn’t like Sir Tim Hunt’s joke, but ‘disproportionate’ would be a huge underestimate of the baying witch-hunt that it unleashed among our academic thought police: nothing less than a feeding frenzy of mob-rule self-righteousness.

    The moral emotions of other Nobel laureates informed them that Dawkins was right.  For example, according to the Telegraph,

Sir Andre Geim, of the University of Manchester, who shared the Nobel prize for physics in 2010, said that Sir Tim had been “crucified” by ideological fanatics, and castigated UCL for “ousting” him.

Avram Hershko, an Israeli scientist who won the 2004 Nobel prize in chemistry, said he thought Sir Tim was “very unfairly treated.”  He told the Times: “Maybe he wanted to be funny and was jet lagged, but then the criticism in the social media and in the press was very much out of proportion. So was his prompt dismissal — or resignation — from his post at UCL.”

    All these reactions have one thing in common.  They are completely irrational unless one assumes the existence of “good” and “bad” as objective things rather than subjective impressions.  Or would you have me believe, dear reader, that statements like, “fuck off, dinosaur,” and allusions to crucifixion by “ideological fanatics” engaged in a “baying witch-hunt,” are mere cool, carefully reasoned suggestions about how best to advance the officially certified “good” of promoting greater female participation in the sciences?  Nonsense!  These people aren’t playing a game of charades, either.  Their behavior reveals that they genuinely believe, not only in the existence of “good” and “bad” as objective things, but in their own ability to tell the difference better than those who disagree with them.  If they don’t believe it, they certainly act like they do.  And yet these are some of the most intelligent representatives of our species.  One can but despair, and hope that aliens from another planet don’t turn up anytime soon to witness such ludicrous spectacles.

    Clearly, we can’t simply dispense with morality.  We’re much too stupid to get along without it.  Under the circumstances, it would be nice if we could all agree on what we will consider “good” and what “bad,” within the limits imposed by the innate bedrock of morality in human nature.  Unfortunately, human societies are now a great deal different than the ones that existed when the predispositions that are responsible for the existence of morality evolved, and they tend to change very rapidly.  It stands to reason that it will occasionally be necessary to “adjust” the types of behavior we consider “good” and “bad” to keep up as best we can.  I personally doubt that the current practice of climbing up on rickety soap boxes and shouting down anathemas on anyone who disagrees with us, and then making the “adjustment” according to who shouts the loudest, is really the most effective way to accomplish that end.  Among other things, it results in too much collateral damage in the form of shattered careers and ideological polarization.  I can’t suggest a perfect alternative at the moment, but a little self-knowledge might help in the search for one.  Shedding the illusion of objective morality would be a good start.


  • The Regrettable Overreach of “Faith versus Fact”

    Posted on June 12th, 2015 Helian 10 comments

The fact that the various gods that mankind has invented over the years, including the currently popular ones, don’t exist has been sufficiently obvious to any reasonably intelligent pre-adolescent who has taken the trouble to think about it since at least the days of Jean Meslier.  That unfortunate French priest left us with a Testament that exposed the folly of belief in imaginary super-beings long before the days of Darwin.  It included most of the “modern” arguments: the dubious logic of inventing gods to explain everything we don’t understand, the many blatant contradictions in the holy scriptures, the absurdity of the notion that an infinitely wise and perfect being could be moved to fury or even offended by the pathetic sins of creatures as abject as ourselves, the lack of any need for a supernatural “grounding” for human morality, and many more.  Over the years these arguments have been elaborated and expanded by a host of thinkers, culminating in the work of today’s New Atheists.  These include Jerry Coyne, whose Faith versus Fact represents their latest effort to talk some sense into the true believers.

Coyne has the usual human tendency, shared by his religious opponents, of “othering” those who disagree with him.  However, besides sharing a “sin” that few, if any, of us are entirely free of, he has some admirable traits as well.  For example, he has rejected the Blank Slate ideology of his graduate school professor/advisor, Richard Lewontin, and even goes so far as to directly contradict him in FvF.  In spite of the fact that he is an old “New Leftist” himself, he has taken a principled stand against the recent attempts of the ideological Left to dismantle freedom of speech as it decays back to its Stalinist ground state.  Perhaps best of all as far as a major theme of this blog is concerned, he rejects the notion of objective morality that has been so counter-intuitively embraced by Sam Harris, another prominent New Atheist.

For the most part, Faith versus Fact is a worthy addition to the New Atheist arsenal.  It effectively dismantles the “sophisticated Christian” gambit that has encouraged meek and humble Christians of all stripes to imagine themselves on an infinitely higher intellectual plane than such “undergraduate atheists” as Richard Dawkins and Christopher Hitchens.  It refutes the rapidly shrinking residue of “God of the gaps” arguments, and clearly illustrates the difference between scientific evidence and religious “evidence.”  It destroys the comfortable myth that religion is an “other way of knowing,” and exposes the folly of seeking to accommodate religion within a scientific worldview.  It was all the more disappointing, after nodding approvingly through most of the book, to suffer one of those “Oh, No!” moments in the final chapter.  Coyne ended by wandering off into an ideological swamp with a fumbling attempt to link obscurantist religion with “global warming denialism!”

    As it happens, I am a scientist myself.  I am perfectly well aware that when an external source of radiation such as that emanating from the sun passes through an ideal earthlike atmosphere that has been mixed with a dose of greenhouse gases such as carbon dioxide, impinges on an ideal earthlike surface, and is re-radiated back into space, the resulting equilibrium temperature of the atmosphere will be higher than if no greenhouse gases were present.  I am also aware that we are rapidly adding such greenhouse gases to our atmosphere, and that it is therefore reasonable to be concerned about the potential effects of global warming.  However, in spite of that it is not altogether irrational to take a close look at whether all the nostrums proposed as solutions to the problem will actually do any good.
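For what it’s worth, the textbook version of that statement fits in a few lines.  In the standard zero-dimensional energy balance with a single gray atmospheric layer of infrared emissivity ε (a toy model for illustration, of course, not a climate prediction),

\[
(1-\alpha)\,\frac{S}{4} = \sigma T_e^4,
\qquad
T_s = T_e \left( \frac{2}{2-\varepsilon} \right)^{1/4},
\]

where S is the solar constant, α the planetary albedo, σ the Stefan-Boltzmann constant, T_e the “effective” temperature the surface would have with no atmosphere, and T_s the actual surface temperature.  Any ε > 0 gives T_s > T_e, and increasing ε, which is what adding carbon dioxide does, raises T_s.  For earthlike numbers (S ≈ 1361 W/m², α ≈ 0.3, hence T_e ≈ 255 K), a fully absorbing layer (ε = 1) would give T_s ≈ 303 K.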

In fact, the earth does not have an ideal static atmosphere over an ideal static and uniform surface.  Our planet’s climate is affected by a great number of complex, interacting phenomena.  A deterministic computer model capable of reliably predicting climate change decades into the future is far beyond the current state of the art.  It would need to deal with literally millions of degrees of freedom in three dimensions, in many cases using potentially unreliable or missing data.  The codes currently used to address the problem are probabilistic, reduced-basis models that can give significantly different answers depending on the choice of initial conditions.
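To make the point about initial conditions concrete, here is a minimal, self-contained sketch in Python of the Lorenz ’63 system, the classic toy model of sensitivity to initial conditions in fluid-like dynamics.  It is only an illustration of the general phenomenon; it is not taken from, and has nothing to do with, any actual climate code.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit-Euler step of the Lorenz '63 equations,
    # using the conventional textbook parameter values.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def final_state(x, y, z, steps=3000):
    # Integrate for 30 model time units and return the end state.
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two runs whose initial conditions differ by one part in a billion.
print(final_state(1.0, 1.0, 1.0))
print(final_state(1.0 + 1e-9, 1.0, 1.0))
# The two end states differ by an amount comparable to the size of the
# attractor itself; the tiny perturbation is amplified beyond recognition.

A fully deterministic model and a perturbation in the ninth decimal place still produce completely different end states after thirty model time units.  Real climate codes are incomparably more elaborate, but they inherit the same sensitivity, which is one reason they are run as ensembles over many initial conditions in the first place.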

In a recently concluded physics campaign at Lawrence Livermore National Laboratory, scientists attempted to achieve thermonuclear fusion ignition by hitting tiny targets containing heavy isotopes of hydrogen with the most powerful laser system ever built.  The codes they used to model the process should have been far more accurate than any current model of the earth’s climate.  These computer models included all the known relevant physical phenomena, and had been carefully benchmarked against similar experiments carried out on less powerful laser systems.  In spite of that, the best experimental results didn’t come close to the computer predictions.  The actual number of fusion reactions fell short of the predicted values by nearly two orders of magnitude.  The number of physical approximations that must be used in climate models is far greater than were necessary in the Livermore fusion codes, and their value as predictive tools must be judged accordingly.

    In a word, we have no way of accurately predicting the magnitude of the climate change we will experience in coming decades.  If we had unlimited resources, the best policy would obviously be to avoid rocking the only boat we have at the moment.  However, this is not an ideal world, and we must wisely allocate what resources we do have among competing priorities.  Resources devoted to fighting climate change will not be available for medical research and health care, education, building the infrastructure we need to maintain a healthy economy, and many other worthy purposes that could potentially not only improve human well-being but save many lives.  Before we succumb to frantic appeals to “do something,” and spend a huge amount of money to stop global warming, we should at least be reasonably confident that our actions will measurably reduce the danger.  To what degree can we expect “science” to inform our decisions, whatever they may be?

    For starters, we might look at the track record of the environmental scientists who are now sounding the alarm.  The Danish scientist Bjorn Lomborg examined that record in his book, The Skeptical Environmentalist, in areas as diverse as soil erosion, storm frequency, deforestation, and declining energy resources.  Time after time he discovered that they had been crying “wolf,” distorting and cherry-picking the data to support dire predictions that never materialized.  Lomborg’s book did not start a serious discussion of potential shortcomings of the scientific method as applied in these areas.  Instead he was bullied and vilified.  A kangaroo court was organized in Denmark made up of some of the more abject examples of so-called “scientists” in that country, and quickly found Lomborg guilty of “scientific dishonesty,” a verdict which the Danish science ministry later had the decency to overturn.  In short, the same methods were used against Lomborg as were used decades earlier to silence critics of the Blank Slate orthodoxy in the behavioral sciences, resulting in what was possibly the greatest scientific debacle of all time.  At the very least we can conclude that all the scientific checks and balances that Coyne refers to in such glowing terms in Faith versus Fact have not always functioned with ideal efficiency in promoting the cause of truth.  There is reason to believe that the environmental sciences are one area in which this has been particularly true.

Under the circumstances it is regrettable that Coyne chose to equate “global warming denialism,” a pejorative term used in ideological squabbles that is by its very nature unscientific, with some of the worst forms of religious obscurantism.  Instead of sticking to the message, in the end he let his political prejudices obscure it.  Objections to the prevailing climate change orthodoxy are hardly coming exclusively from the religious fanatics who sought to enlighten us with “creation science” and “intelligent design.”  I invite anyone suffering from that delusion to have a look at some of the articles the physicist and mathematician Lubos Motl has written about the subject on his blog, The Reference Frame.  Examples may be found here, here and, for an example with a “religious” twist, here.  There he will find documented more instances of the type of “scientific” behavior Lomborg cited in The Skeptical Environmentalist.  No doubt many readers will find Motl irritating and tendentious, but he knows his stuff.  Anyone who thinks he can refute his take on the “science” had better be equipped with more knowledge of the subject than is typically included in the bromides that appear in the New York Times.

Alas, I fear that I am once again crying over spilt milk.  I can only hope that Coyne has an arrow or two left in his New Atheist quiver, and that next time he chooses a publisher who will insist on ruthlessly chopping out all the political Nebensächlichkeiten (irrelevancies).  Meanwhile, have a look at his Why Evolution is True website.  In addition to presenting a convincing case for evolution by natural selection and a universe free of wrathful super beings, Professor Ceiling Cat, as he is known to regular visitors for reasons that will soon become apparent to newbies, also posts some fantastic wildlife pictures.  And if it’s any consolation, I see his book has been panned by John Horgan.  Anyone with enemies like that can’t be all bad.  Apparently Horgan’s review was actually solicited by the editors of the Wall Street Journal.  Go figure!  One wonders what rock they’ve been sleeping under lately.

  • Faith versus Fact: New Atheism Rejects the Blank Slate

    Posted on June 6th, 2015 Helian 6 comments

    Jerry Coyne just launched another New Atheist salvo against the Defenders of the Faith in the form of his latest book, Faith versus Fact.  It’s well written and well reasoned, effectively squashing the “sophisticated Christian” gambit of the faithful, and storming some of their few remaining “God of the gaps” redoubts.  However, one of its most striking features is its decisive rejection of the Blank Slate.  The New Atheists have learned to stop worrying and love innate morality!

    Just like the Blank Slaters of yore, the New Atheists may be found predominantly on the left of the political spectrum.  In Prof. Coyne’s case the connection is even more striking.  As a graduate student, his professor/advisor was none other than Blank Slate kingpin Richard Lewontin of Not In Our Genes fame!  In spite of that, in Faith versus Fact he not only accepts but positively embraces evolutionary psychology in general and innate morality in particular.  Why?

    It turns out that, along with the origin of life, the existence of consciousness, the “fine tuning” of physical constants, etc.,  one of the more cherished “gaps” in the “God of the gaps” arguments of the faithful is the existence of innate morality.  As with the other “gap” gambits, the claim is that it couldn’t exist unless God created it.  As noted in an earlier post, the Christian philosopher Francis Hutcheson used a combination of reason and careful observation of his own species to demonstrate the existence of an innate “moral sense,” building on the earlier work of Anthony Ashley-Cooper and others early in the 18th century.  The Blank Slaters would have done well to read his work.  Instead, they insisted on the non-existence of human nature, thereby handing over this particular “gap” to the faithful by default.   Obviously, Prof. Coyne had second thoughts, and decided to snatch it back.  However, he doesn’t quite succeed in breaking entirely with the past.  Instead, he insists on elevating “cultural morality” to a co-equal status with innate morality, and demonstrates that he has swallowed Steven Pinker’s fanciful “academic version” of the history of the Blank Slate in the process.  Allow me to quote at length some of the relevant passages from his book:

    Evolution disproves critical parts of both the Bible and the Quran – the creation stories – yet millions have been unable to abandon them.  Finally, and perhaps most important, evolution means that human morality, rather than being imbued in us by God, somehow arose via natural processes:  biological evolution involving natural selection on behavior, and cultural evolution involving our ability to calculate, foresee, and prefer the results of different behaviors.

    Here we encounter the conflation of biological and cultural evolution, which are described as if they were independent factors accounting for the “rise” of human morality.  This tendency to embrace innate explanations while at the same time clinging to the “culture and learning” of the Blank Slate as a distinct, quasi-independent determinant of moral behavior is a recurring theme in FvF.  A bit later Coyne seems to return to the Darwinian fold, citing his comments on “well-marked social instincts.”

    In his 1871 book The Descent of Man, and Selection in Relation to Sex, where Darwin first applied his theory of evolution by natural selection to humans, he did not neglect morality.  In chapter 3, he floats what can be considered the first suggestion that our morality may be an elaboration by our large brains of social instincts evolved in our ancestors:  “The following proposition seems to me in a high degree probable – namely, that any animal whatever, endowed with well-marked social instincts, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well developed, or nearly as well developed, as in man.”

    This impression is apparently confirmed in the following remarkable passage:

    A century later, the biologist Edward O. Wilson angered many by asserting the complete hegemony of biology over ethics:  “Scientists and humanists should consider together the possibility that the time has come for ethics to be removed temporarily from the hands of the philosophers and biologicized.”  Wilson’s statement, in the pathbreaking book Sociobiology:  The New Synthesis, really began the modern incursion of evolution into human behavior that has become the discipline of evolutionary psychology.  In the last four decades psychologists, philosophers, and biologists have begun to dissect the cultural and evolutionary roots of morality.

    Here we find, almost verbatim, Steven Pinker’s bowdlerized version of the “history” of the Blank Slate, featuring E. O. Wilson as the knight in shining armor who came out of nowhere to “begin the modern incursion of evolution into human behavior,” with the publication of Sociobiology in 1975.  Anyone with even a faint familiarity with the source material knows that Pinker’s version is really nothing but a longish fairy tale.  The “modern incursion of evolution into human behavior” was already well underway in Europe in 1951, when Niko Tinbergen published his The Study of Instinct.  It was continued there through the 50’s and 60’s in the work of Konrad Lorenz, Irenäus Eibl-Eibesfeldt, and many others.  Long before the appearance of Sociobiology, Robert Ardrey began the publication of a series of four books on evolved human nature that really set in motion the smashing of the Blank Slate orthodoxy in the behavioral sciences.  There is literally nothing of any significance in Sociobiology bearing on the “incursion of evolution into human behavior” or the emergence of what came to be called evolutionary psychology that is not merely an echo of work that had been published by Ardrey, Lorenz, Tinbergen, and others many years earlier.  No matter.  It would seem that Pinker’s fanciful “history” has now been transmogrified into one of Coyne’s “facts.”

    But I digress.  As noted above, even as Coyne demolishes morality as one of the “gaps” that must be filled by inventing a God by noting its emergence as an evolved trait, and even as he explicitly embraces evolutionary psychology, which has apparently only recently become “respectable,” he can never quite entirely free himself from the stench of the Blank Slate.  Finally, as if frightened by his own temerity, and perhaps feeling the withering gaze of his old professor/advisor Lewontin, Coyne executes a partial retreat from the territory he has just attempted to reconquer:

    In The Better Angels of Our Nature, Steven Pinker makes a strong case that since the Middle Ages most societies have become much less brutal, due largely to changes in what’s considered moral.  So if morality is innate, it’s certainly malleable.  And that itself refutes the argument that human morality comes from God, unless the moral sentiments of the deity are equally malleable.  The rapid change in many aspects of morality, even in the last century, also suggests that much of its “innateness” comes not from evolution but from learning.  That’s because evolutionary change simply doesn’t occur fast enough to explain societal changes like our realization that women are not an inferior moiety of humanity, or that we shouldn’t torture prisoners.  The explanation for these changes must reside in reason and learning:  our realization that there is no rational basis for giving ourselves moral privilege over those who belong to other groups.

    Here we find the good professor behaving for all the world like one of Niko Tinbergen’s famous sticklebacks who, suddenly realizing he has strayed far over the established boundary of his own territory, rushes back to more familiar haunts.  Only one of Lewontin’s “genetic determinists” would be obtuse enough to suggest that the meanderings of 21st century morality are caused by “evolution,” and those are as rare as unicorns.  Obviously, no such extraordinarily rapid evolution is necessary.  The innate wellsprings of human morality need not “evolve” at all to account for these wanderings, which are adequately accounted for by the fact that they represent the mediation of a relatively static “moral sense” in a rapidly changing environment through the consciousness of creatures with large brains.  As brilliantly demonstrated by Hutcheson in his An Essay on the Nature and Conduct of the Passions and Affections, absent this “root cause” in the form of evolved behavioral predispositions, “reason and learning” could chug along for centuries without spitting out anything remotely resembling morality.  Innate behavioral predispositions are the basis of all moral behavior, and without them morality as we know it would not exist.  The only role of “reason and learning” is in interpreting and mediating the “moral passions.”  Absent those passions, there would be literally nothing to be reasoned about or learned that would manifest itself as moral behavior.  They, and not “reason and learning” are the sine qua non for the existence of morality.

    But let us refrain from looking this particular gift horse in the mouth.  In general, as noted above, the New Atheists may be found more or less in the same region of the ideological spectrum as was once occupied by the Blank Slaters.  If they are now constrained to add innate behavior to their arsenal as one more weapon in their continuing battle against the faithful, so much the better for all of us.  If nothing else it enhances the chances that, at least for the time being, students of human behavior will be able to continue acquiring the knowledge we need to gain self-understanding without fear of being bullied and intimidated for pointing out facts that happen to be politically inconvenient.

  • Whither Morality?

    Posted on April 19th, 2015 Helian 4 comments

    The evolutionary origins of morality and the reasons for its existence have been obvious for over a century.  They were no secret to Edvard Westermarck when he published The Origin and Development of the Moral Ideas in 1906, and many others had written books and papers on the subject before his book appeared.  However, our species has a prodigious talent for ignoring inconvenient truths, and we have been studiously ignoring that particular truth ever since.

    Why is it inconvenient?  Let me count the ways!  To begin, the philosophers who have taken it upon themselves to “educate” us about the difference between good and evil would be unemployed if they were forced to admit that those categories are purely subjective, and have no independent existence of their own.  All of their carefully cultivated jargon on the subject would be exposed as gibberish.  Social Justice Warriors and activists the world over, those whom H. L. Mencken referred to collectively as the “Uplift,” would be exposed as so many charlatans.  We would begin to realize that the legions of pious prigs we live with are not only an inconvenience, but absurd as well.  Gaining traction would be a great deal more difficult for political and religious cults that derive their raison d’être from the fabrication and bottling of novel moralities.  And so on, and so on.

Just as they do today, those who experienced these “inconveniences” in one form or another pointed to the dangers of acknowledging reality in Westermarck’s time.  For example, from his book,

Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism.  If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it.  This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong.  To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief.  The unfortunate circumstance that there do exist dangerous things in the world, proves that something may be dangerous and yet true.  Another question is whether any scientific truth really is mischievous on the whole, although it may cause much discomfort to certain people.  I venture to believe that this, at any rate, is not the case with that form of ethical subjectivism which I am here advocating.

    I venture to believe it as well.  In the first place, when we accept the truth about morality we make life a great deal more difficult for people of the type described above.  Their exploitation of our ignorance about morality has always been an irritant, but has often been a great deal more damaging than that.  In the 20th century alone, for example, the Communist and Nazi movements, whose followers imagined themselves at the forefront of great moral awakenings that would lead to the triumph of Good over Evil, resulted in the needless death of tens of millions of people.  The victims were drawn disproportionately from among the most intelligent and productive members of society.

    Still, just as Westermarck predicted more than a century ago, the bugaboo of “moral relativism” continues to be “repeated as an argument” in our own day.  Apparently we are to believe that if the philosophers and theologians all step out from behind the curtain after all these years and reveal that everything they’ve taught us about morality is so much bunk, civilized society will suddenly dissolve in an orgy of rape and plunder.

Such notions are best left behind with the rest of the impedimenta of the Blank Slate.  Nothing could be more absurd than the notion that unbridled license and amorality are our “default” state.  One can quickly disabuse oneself of that fear by simply reading the comment thread of any popular news website.  There one will typically find a gaudy exhibition of moralistic posing and pious one-upmanship.  I encourage those who shudder at the thought of such an unpleasant reading assignment to instead have a look at Jonathan Haidt’s The Righteous Mind.  As he puts it in the introduction to his book,

    I could have titled this book The Moral Mind to convey the sense that the human mind is designed to “do” morality, just as it’s designed to do language, sexuality, music, and many other things described in popular books reporting the latest scientific findings.  But I chose the title The Righteous Mind to convey the sense that human nature is not just intrinsically moral, it’s also intrinsically moralistic, critical and judgmental… I want to show you that an obsession with righteousness (leading inevitably to self-righteousness) is the normal human condition.  It is a feature of our evolutionary design, not a bug or error that crept into minds that would otherwise be objective and rational.

    Haidt also alludes to a potential reason that some of the people already mentioned above continue to evoke the scary mirage of moral relativism:

    Webster’s Third New World Dictionary defines delusion as “a false conception and persistent belief in something that has no existence in fact.”  As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history:  the rationalist delusion.  It’s the idea that reasoning is our most noble attribute, one that makes us like the gods (for Plato) or that brings us beyond the “delusion” of believing in gods (for the New Atheists).  The rationalist delusion is not just a claim about human nature.  It’s also a claim that the rational caste (philosophers or scientists) should have more power, and it usually comes along with a utopian program for raising more rational children.

Human beings are not by nature moral relativists, and they are in no danger of becoming moral relativists merely by virtue of the fact that they have finally grasped what morality actually is.  It is their nature to perceive Good and Evil as real things, independent of the subjective minds that give rise to them, and they will continue to do so even if their reason informs them that what they perceive is a mirage.  They will always tend to behave as if these categories were absolute, rather than relative, even if all the theologians and philosophers among them shout at the top of their lungs that they are not being “rational.”

    That does not mean that we should leave reason completely in the dust.  Far from it!  Now that we can finally understand what morality is, and account for the evolutionary origins of the behavioral predispositions that are its root cause, it is within our power to avoid some of the most destructive manifestations of moral behavior.  Our moral behavior is anything but infinitely malleable, but we know from the many variations in the way it is manifested in different human societies and cultures, as well as its continuous and gradual change in any single society, that within limits it can be shaped to best suit our needs.  Unfortunately, the only way we will be able to come up with an “optimum” morality is by leaning on the weak reed of our ability to reason.

My personal preferences are obvious enough, even if they aren’t set in stone.  I would prefer to limit the scope of morality to those spheres in which it is indispensable for lack of a viable alternative.  I would prefer a system that reacts to the “Uplift” and unbridled priggishness and self-righteousness with scorn and contempt.  I would prefer an educational system that teaches the young the truth about what morality actually is, and why, in spite of its humble origins, we can’t get along without it if we really want our societies to “flourish.”  I know: the legions of those whose whole “purpose of life” is dependent on cultivating the illusion that their own versions of Good and Evil are the “real” ones stand in the way of the realization of these whims of mine.  Still, one can dream.

  • On the Malleability and Plasticity of the History of the Blank Slate

    Posted on March 22nd, 2015 Helian 17 comments

    Let me put my own cards on the table.  I consider the Blank Slate affair the greatest debacle in the history of science.  Perhaps you haven’t heard of it.  I wouldn’t be surprised.  Those who are the most capable of writing its history are often also those who are most motivated to sweep the whole thing under the rug.  In any case, in the context of this post the Blank Slate refers to a dogma that prevailed in the behavioral sciences for much of the 20th century according to which there is, for all practical purposes, no such thing as human nature.  I consider it the greatest scientific debacle of all time because, for more than half a century, it blocked the path of our species to self-knowledge.  As we gradually approach the technological ability to commit collective suicide, self-knowledge may well be critical to our survival.

Such histories of the affair as do exist are often carefully and minutely researched by historians familiar with the scientific issues involved.  In general, they’ve personally lived through at least some phase of it, and they’ve often been personally acquainted with some of the most important players.  In spite of that, their accounts have a disconcerting tendency to wildly contradict each other.  Occasionally one finds different versions of the facts themselves, but more often it’s a question of the careful winnowing of the facts to select and record only those that support a preferred narrative.

    Obviously, I can’t cover all the relevant literature in a single blog post.  Instead, to illustrate my point, I will focus on a single work whose author, Hamilton Cravens, devotes most of his attention to events in the first half of the 20th century, describing the sea change in the behavioral sciences that signaled the onset of the Blank Slate.  As it happens, that’s not quite what he intended.  What we see today as the darkness descending was for him the light of science bursting forth.  Indeed, his book is entitled, somewhat optimistically in retrospect, The Triumph of Evolution:  The Heredity-Environment Controversy, 1900-1941.  It first appeared in 1978, more or less still in the heyday of the Blank Slate, although murmurings against it could already be detected among academic and professional experts in the behavioral sciences after the appearance of a series of devastating critiques in the popular literature in the 60’s by Robert Ardrey, Konrad Lorenz, and others, topped off by E. O. Wilson’s Sociobiology in 1975.

    Ostensibly, the “triumph” Cravens’ title refers to is the demise of what he calls the “extreme hereditarian” interpretations of human behavior that prevailed in the late 19th and early 20th century in favor of a more “balanced” approach that recognized the importance of culture, as revealed by a systematic application of the scientific method.  One certainly can’t fault him for being superficial.  He introduces us to most of the key movers and shakers in the behavioral sciences in the period in question.  There are minutiae about the contents of papers in old scientific journals, comments gleaned from personal correspondence, who said what at long forgotten scientific conferences, which colleges and universities had strong programs in psychology, sociology and anthropology more than 100 years ago, and who supported them, etc., etc.  He guides us into his narrative so gently that we hardly realize we’re being led by the nose.  Gradually, however, the picture comes into focus.

    It goes something like this.  In bygone days before the “triumph of evolution,” the existence of human “instincts” was taken for granted.  Their importance seemed even more obvious in light of the rediscovery of Mendel’s work.  As Cravens put it,

    While it would be inaccurate to say that most American experimentalists concluded as  the result of the general acceptance of Mendelism by 1910 or so that heredity was all powerful and environment of no consequence, it was nevertheless true that heredity occupied a much more prominent place than environment in their writings.

    This sort of “subtlety” is characteristic of Cravens’ writing.  Here, he doesn’t accuse the scientists he’s referring to of being outright genetic determinists.  They just have an “undue” tendency to overemphasize heredity.  It is only gradually, and by dint of occasional reading between the lines that we learn the “true” nature of these believers in human “instinct.”  Without ever seeing anything as blatant as a mention of Marxism, we learn that their “science” was really just a reflection of their “class.”  For example,

    But there were other reasons why so many American psychologists emphasized heredity over environment.  They shared the same general ethnocultural and class background as did the biologists.  Like the biologists, they grew up in middle class, white Anglo-Saxon Protestant homes, in a subculture where the individual was the focal point of social explanation and comment.

    As we read on, we find Cravens is obsessed with white Anglo-Saxon Protestants, or WASPs, noting scores of times that the “wrong” kind of scientists belong to that “class.”  Among other things, they dominate the eugenics movement, and are innocently referred to as Social Darwinists, as if these terms had never been used in a pejorative sense.  In general they are supposed to oppose immigration from other than “Nordic” countries, tend to support “neo-Lamarckian” doctrines, and believe blindly that intelligence test results are independent of “social circumstances and milieu.”  As we read further into Section I of the book, we are introduced to a whole swarm of these instinct-believing WASPs.

    In Section II, however, we begin to see the first glimmerings of a new, critical and truly scientific approach to the question of human instincts.  Men like Franz Boas, Robert Lowie, and Alfred Kroeber began to insist on the importance of culture.  Furthermore, they believed that their “culture idea” could be studied in isolation in such disciplines as sociology and anthropology, insisting on sharp, “territorial” boundaries that would protect their favored disciplines from the defiling influence of instincts.  As one might expect,

    The Boasians were separated from WASP culture; several were immigrants, of Jewish background, or both.

    A bit later they were joined by John Watson and his behaviorists who, after performing some experiments on animals and human infants, apparently experienced an epiphany.  As Cravens puts it,

    To his amazement, Watson concluded that the James-McDougall human instinct theory had no demonstrable experimental basis.  He found the instinct theorists had greatly overestimated the number of original emotional reactions in infants.  For all practical purposes, he realized that there were no human instincts determining the behavior of adults or even of children.

    Perhaps more amazing is the fact that Cravens detected not a hint of a tendency to replace science with dogma in all this.  As Leibniz might have put it, everything was for the best, in this, the best of all possible worlds.  Everything pointed to the “triumph of evolution.”  According to Cravens, the “triumph” came with astonishing speed:

    By the early 1920s the controversy was over.  Subsequently, psychologists and sociologists joined hands to work out a new interdisciplinary model of the sources of human conduct and emotion stressing the interaction of heredity and environment, of innate and acquired characters – in short, the balance of man’s nature and his culture.

    Alas, my dear Cravens, the controversy was just beginning.  In what follows, he allows us a glimpse at just what kind of “balance” he’s referring to.  As we read on into Section III of the book, he finally gets around to setting the hook:

    Within two years of the Nazi collapse in Europe Science published an article symptomatic of a profound theoretical reorientation in the American natural and social sciences.  In that article Theodosius Dobzhansky, a geneticist, and M. F. Ashley-Montagu, an anthropologist, summarized and synthesized what the last quarter century’s work in their respective fields implied for extreme hereditarian explanations of human nature and conduct.  Their overarching thesis was that man was the product of biological and social evolution.  Even though man in his biological aspects was as subject to natural processes as any other species, in certain critical respects he was unique in nature, for the specific system of genes that created an identifiably human mentality also permitted man to experience cultural evolution… Dobzhansky and Ashley-Montagu continued, “Instead of having his responses genetically fixed as in other animal species, man is a species that invents its own responses, and it is out of this unique ability to invent…  his responses that his cultures are born.”

    and, finally, in the conclusions, after assuring us that,

    By the early 1940s the nature-nurture controversy had run its course.

    Cravens leaves us with some closing sentences that epitomize his “triumph of evolution:”

    The long-range, historical function of the new evolutionary science was to resolve the basic questions about human nature in a secular and scientific way, and thus provide the possibilities for social order and control in an entirely new kind of society.  Apparently this was a most successful and enduring campaign in American culture.

    At this point, one doesn’t know whether to laugh or cry.  Apparently Cravens, who has just supplied us with arcane details about who said what at obscure scientific conferences half a century and more before he published his book, was completely unaware of exactly what Ashley Montagu, his herald of the new world order, meant when he referred to “extreme hereditarian explanations,” in spite of the fact that Montagu spelled it out ten years earlier in an invaluable little pocket guide for the followers of the “new science” entitled Man and Aggression.  There Montagu describes the sort of “balance of man’s nature and his culture” he intended as follows:

    Man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings.

    and,

    There is, in fact, not the slightest evidence or ground for assuming that the alleged “phylogenetically adapted instinctive” behavior of other animals is in any way relevant to the discussion of the motive-forces of human behavior.  The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.

    So much for Cravens’ “balance.”  He spills a great deal of ink in his book assuring us that the Blank Slate orthodoxy he defends was the product of “science,” little influenced by any political or ideological bias.  Apparently he also didn’t notice that, not only in Man and Aggression, but ubiquitously in the Blank Slate literature, the “new science” is defended over and over and over again with the “argument” that anyone who opposes it is a racist and a fascist, not to mention far right wing.

    As it turns out, Cravens didn’t completely lapse into a coma following the publication of Ashley Montagu’s 1947 pronunciamiento in Science.  In his “Conclusion” we discover that, after all, he had a vague presentiment of the avalanche that would soon make a shambles of his “new evolutionary science.”  In his words,

    Of course in recent years something approximating at least a minor revival of the old nature-nurture controversy seems to have arisen in American science and politics.  It is certainly quite possible that this will lead to a full scale nature-nurture controversy in time, not simply because of the potential for a new model of nature that would permit a new debate, but also, as one historian has pointed out, because our own time, like the 1920s, has been a period of racial and ethnic polarization.  Obviously any further comment would be premature.

    Obviously, my dear Cravens.  What’s the moral of the story, dear reader?  Well, among other things, that if you really want to learn something about the Blank Slate, you’d better not be shy of wading through the source literature yourself.  It’s still out there, waiting to be discovered.  One particularly rich source of historical nuggets is H. L. Mencken’s American Mercury, which Ron Unz has been so kind as to post online.  Mencken took a personal interest in the “nature vs. nurture” controversy, and took care to publish articles by heavy hitters on both sides.  For a rather different take than Cravens’ on the motivations of the early Blank Slaters, see, for example, Heredity and the Uplift, by H. M. Parshley.  Parshley was an interesting character who took on no less an opponent than Clarence Darrow in a debate over eugenics, and later translated Simone de Beauvoir’s feminist manifesto The Second Sex into English.


  • The Objective Morality Delusion

    Posted on March 15th, 2015 Helian No comments

    Human morality is the manifestation of innate behavioral traits in animals with brains large enough to reason about their own emotional reactions.  It exists because those traits evolved.  They did not evolve to serve any purpose, but purely because they happened to enhance the probability that individuals carrying them would survive and reproduce.  In the absence of those traits morality as we know it would not exist.  Darwin certainly suspected as much.  Now, more than a century and a half after the publication of On the Origin of Species, so much is really obvious.

    Scores of books have been published recently on the innate emotional wellsprings of morality.  Its analogs have been clearly identified in other animals.  Its expression has been demonstrated in infants, long before they could have learned the responses in question via cultural transmission.  Unless all these books are pure gibberish, and all these observations are delusions, morality is ultimately the expression of physical phenomena happening in the brains of individuals.  In other words, it is subjective.  It does not have an independent existence as a thing-in-itself, outside of the minds of individuals.  It follows that it cannot somehow jump out of the skulls of those individuals and gain some kind of an independent, legitimate power to prescribe to other individuals what they should or should not do.

    In spite of all that, the faith in objective morality persists, in defiance of the obvious.  The truth is too jarring, too uncomfortable, too irreconcilable with what we “feel,” and so we have turned away from it.  As the brilliant Edvard Westermarck put it in his The Origin and Development of the Moral Ideas,

    As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivise the moral estimate to which it gives rise, in other words, to assign to it universal validity.  The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments.  The intensity of his emotions makes him the victim of an illusion.

    It follows that, as Westermarck puts it,

    The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood.  The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.

    and therefore,

    If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.

    Westermarck wrote those words in 1906.  More than a century later, we are still whistling past the graveyard of objective morality.  Interested readers can confirm this by a quick trip to their local university library.  Browsing through the pages of Ethics, one of the premier journals devoted to the subject, they will find articles on deontological, consequentialist, and several other abstruse flavors of morality.  They will find a host of helpful recipes for what should or should not be done in a given situation.  They will discover that it is their “duty” to do this, that, or the other thing.  Finally, they will find all of the above ensconced in an almost impenetrable smokescreen of academic jargon.  In a word, most of the learned contributors to Ethics have ignored Westermarck, and are still chasing their tails, doggedly pursuing a “scientific ethics” that will “fix rules for human conduct” once and for all.

    Challenge one of these learned philosophers, and their response is typically threadbare enough.  A common gambit is no more complex than the claim that objective morality must exist, because if it didn’t then the things we all know are bad wouldn’t be bad anymore.  An example of the genre recently turned up on the opinion pages of The New York Times, entitled, Why Our Children Don’t Think There Are Moral Facts.  Its author, Justin McBrayer, an associate professor of philosophy at Fort Lewis College in Durango, Colorado, opens with the line,

    What would you say if you found out that our public schools were teaching children that it is not true that it’s wrong to kill people for fun or cheat on tests? Would you be surprised?

    Now, as Westermarck pointed out, it is impossible for things to be “true” if they have no objective existence.  Read the article carefully, and you’ll see that McBrayer doesn’t even attempt to dispute the logic behind Westermarck’s observation.  Rather, he gives Westermarck the same answer Socrates’ judges gave Socrates as they handed him the hemlock:  “I’m right and you’re wrong because what you claim is true is bad for the children.”  In other words, there must be an objective bad because otherwise it would be bad.  Other than that, the only attempt at an argument in the whole article is the following ad hominem remark about any philosopher who denies the existence of objective morality:

    There are historical examples of philosophers who endorse a kind of moral relativism, dating back at least to Protagoras who declared that “man is the measure of all things,” and several who deny that there are any moral facts whatsoever. But such creatures are rare.

    In other words, objective morality must be true, because those who deny it are “creatures.”  No doubt, such “defenses” of objective morality have been around since time immemorial.  They certainly were in Westermarck’s day.  His response was as valid then as it is now:

    Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism.  If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it.  This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong.  To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief.

    Obviously, as Westermarck foresaw, the argument is “still repeated” more than a century later.  In McBrayer’s case, it goes like this:

    Indeed, in the world beyond grade school, where adults must exercise their moral knowledge and reasoning to conduct themselves in the society, the stakes are greater. There, consistency demands that we acknowledge the existence of moral facts. If it’s not true that it’s wrong to murder a cartoonist with whom one disagrees, then how can we be outraged? If there are no truths about what is good or valuable or right, how can we prosecute people for crimes against humanity? If it’s not true that all humans are created equal, then why vote for any political system that doesn’t benefit you over others?

    He adds,

    As a philosopher, I already knew that many college-aged students don’t believe in moral facts. While there are no national surveys quantifying this phenomenon, philosophy professors with whom I have spoken suggest that the overwhelming majority of college freshmen in their classrooms view moral claims as mere opinions that are not true or are true only relative to a culture.

    One often hears such remarks about the supposed pervasiveness of moral relativism.  They are commonly based on the fallacy that human morality is the product of human reason rather than human emotion.  The reality is that Mother Nature has been blithely indifferent to the repeated assertions of philosophers that, unless we listen to them, morality will disappear.  She designed morality to work, for better or worse, whether we take the trouble to reason about it or not.  All these fears of moral relativism can’t even pass the “ho ho” test.  They fly in the face of all the observable facts about moral behavior in the real world.  Moral relativism on campus, you say?  Please!  Not since the heyday of the Puritans has there been such a hotbed of extreme, moralistic piety as exists today in academia.  No less a comedian than Chris Rock refuses to perform on college campuses anymore because of repeated encounters with the extreme manifestations of priggishness one finds there.  One can’t tell a joke without “offending” someone.

    Morality isn’t going anywhere.  It will continue to function just as it always has, oblivious to whether it has the permission of philosophers or not.  As can be seen by the cultural differences in the way that moral emotions are “acted out,” within certain limits morality is malleable.  We have some control over whether it is “acted out” by the immolation of enemy pilots and the beheading and crucifixion of “infidels,” or in forms that promote what Sam Harris might call “human flourishing.”  Regardless of our choice, I suspect that our chances of successfully shaping a morality that most of us would find agreeable will be enhanced if we base our actions on what morality actually is rather than on what we want it to be.

  • On the Resurrection and Transfiguration of the Blank Slate

    Posted on February 28th, 2015 Helian No comments

    All appearances to the contrary in the popular media, the Blank Slate lives on.  Of course, its heyday is long gone, but it slumbers on in the more obscure niches of academia.  One of its more recent manifestations just turned up at Scientia Salon in the form of a paper by one Mark Fedyk, an assistant professor of philosophy at Mount Allison University in Sackville, Canada.  Entitled, “How (not) to Bring Psychology and Biology Together,” it provides the interested reader with a glimpse at several of the more typical features of the genre as it exists today.

    Fedyk doesn’t leave us in doubt about where he’s coming from.  Indeed, he lays his cards on the table in plain sight in the abstract, where he writes that, “psychologists should have a preference for explanations of adaptive behavior in humans that refer to learning and other similarly malleable psychological mechanisms – and not modules or instincts or any other kind of relatively innate and relatively non-malleable psychological mechanisms.”  Reading on into the body of the paper a bit, we quickly find another trademark trait of both the ancient and modern Blank Slaters; their tendency to invent strawman arguments, attribute them to their opponents, and then blithely ignore those opponents when they point out that the strawmen bear no resemblance to anything they actually believe.

    In Fedyk’s case, many of the strawmen are incorporated in his idiosyncratic definition of the term “modules.”  Among other things, these “modules” are “strongly nativist,” they don’t allow for “developmental plasticity,” they imply a strong, either-or version of the ancient nature vs. nurture dichotomy, and they are “relatively innate and relatively non-malleable.”  In Fedyk’s paper, the latter phrase serves the same purpose as the ancient “genetic determinism” strawman did in the heyday of the Blank Slate.  Apparently that’s now become too obvious, and the new jargon is introduced by way of keeping up appearances.  In any case, we gather from the paper that all evolutionary psychologists are supposed to believe in these “modules.”  It matters not a bit to Fedyk that his “modules” have been blown out of the water literally hundreds of times in the EP literature stretching back over a period of two decades and more.  A good example that patiently dissects each of his strawmen one by one is “Modularity in Cognition:  Framing the Debate,” published by Barrett and Kurzban back in 2006.  It’s available free online, and I invite my readers to have a look at it.  It can be Googled up by anyone in a few seconds, but apparently Fedyk has somehow failed to discover it.

    Once he has assured us that all EPers have an unshakable belief in his “modules,” Fedyk proceeds to concoct an amusing fairy tale based on that assumption.  In the process, he presents his brilliant and original theory of “anticipated consilience.”  According to this theory, researchers in new fields, such as EP, should rely on the findings of more mature “auxiliary disciplines,” particularly those which have been “extremely successful” in the past, to inform their own research.  In the case of evolutionary psychology, the “auxiliary discipline” turns out to be evolutionary biology.  As Fedyk puts it,

    One of the more specific ways of doing this is to rely upon what can be called the principle of anticipated consilience, which says that it is rational to have a prima facie preference for those novel theories commended by previous scientific research which are most likely to be subsequently integrated in explanatorily- or inductively-fruitful ways with the relevant discipline as it expands.  The principle will be reliable simply because the novel theories which are most likely to be subsequently integrated into the mature scientific discipline as it expands are just those novel theories which are most likely to be true.

    He then proceeds to incorporate his strawmen into an illustration of how this “anticipated consilience” would work in practice:

    To see how this would work, consider, for example, two fairly general categories of proximate explanations for adaptive behaviors in humans, nativist (i.e., bad, ed.) psychological hypotheses which posit some kind of module (namely the imaginary kind invented by Fedyk, ed.) and non-nativist (i.e., good, ed.) psychological hypotheses, which posit some kind of learning routine (i.e., the Blank Slate, ed.)

    As the tale continues, we learn that,

    …it is plausible that, for approximately the first decade of research in evolutionary psychology following its emergence out of sociobiology in the 1980s, considerations of anticipated consilience would have likely rationalized a preference for proximate explanations which refer to modules and similar types of proximate mechanisms.

    The reason for this given by Fedyk turns out to be the biggest thigh-slapper in this whole, implausible yarn,

    So by the time evolutionary psychology emerged in reaction to human sociobiology in the 1980s, (Konrad) Lorenz’s old hydraulic model of instincts really was the last positive model in biology of the proximate causes of adaptive behavior.

    Whimsical?  Yes, but stunning is probably a better adjective.  If we are to believe Fedyk, we are forced to conclude that he never even heard of the Blank Slate!  After all, some of that orthodoxy’s very arch-priests, such as Richard Lewontin and Stephen Jay Gould, are/were evolutionary biologists.  They, too, had a “positive model in biology of the proximate causes of adaptive behavior,” in the form of the Blank Slate.  Fedyk is speaking of a time in which the Blank Slate dogmas were virtually unchallenged in the behavioral sciences, and anyone who got out of line was shouted down as a fascist, or worse.  And yet we are supposed to swallow the ludicrous imposture that Lorenz’s hydraulic theory not only overshadowed the Blank Slate dogmas, but was the only game in town!  But let’s not question the plot.  Continuing on with Fedyk’s adjusted version of history, we discover that (voila!) the evolutionary biologists suddenly recovered from their infatuation with hydraulic theory, and got their minds right:

    …what I want to argue is that, in the last decade or so, a new understanding of the biological importance of developmental plasticity has implications for evolutionary psychology. Whereas previously considerations of anticipated consilience with evolutionary biology and cognitive science may have provided support for those proximate hypotheses which posited modules, I argue in this section that these very same considerations now support significantly non-nativist proximate hypotheses. The argument, put simply, is that traits which have high degrees of plasticity will be more evolutionarily robust than highly canalized innately specified non-malleable traits like mental modules. The upshot is that a mind comprised mostly of modules is not plastic in this specific sense, and is therefore ultimately unlikely to be favoured by natural selection. But a mind equipped with powerful, domain general learning routines does have the relevant plasticity.

    I leave it as an exercise for the student to pick out all the innumerable strawmen in this parable of the “great change of heart” in evolutionary biology.  Suffice it to say that, as a result of this new-found “plasticity,” anticipated consilience now requires evolutionary psychologists to reject their silly notions about human nature in favor of a return to the sheltering haven of the Blank Slate.  Fedyk helpfully spells it out for us:

    This means that, given a choice between proximate explanations which reflect a commitment to the massive modularity hypothesis and proximate explanations which, instead, reflect an approach to the mind which privileges learning…, the latter is most plausible in light of evolutionary biology.

    The kicker here is that if anyone even mildly suggests any connection between this latter day manifestation of cultural determinism and the dogmas of the Blank Slate, the Fedyks of the world scream foul.  Apparently we are to believe that the “proximate explanations” of evolutionary psychology aren’t completely excluded as long as one can manage a double back flip over the rather substantial barrier of “anticipated consilience” that blocks the way.  How that might actually turn out to be possible is never explained.  In spite of these scowling denials, I personally will continue to prefer the naïve assumption that, if something walks like a duck, quacks like a duck, and flaps its wings like a duck, then it actually is a duck, or Blank Slater, as the case may be.