Posted on May 24th, 2015
Philosophers have been masticating the question of free will for many centuries. The net result of their efforts has been a dizzying array of different “flavors” of free will or the lack thereof. I invite anyone with the patience to attempt disentangling the various permutations and combinations to start with the Wiki page, and take it from there. For the purpose of this post I will simply define free will as the ability to make choices that are not predetermined before we make them. This implies that our conscious minds are not entirely subject to deterministic physical laws, and have the power to alter physical reality. Lack of free will means the absence of this power: our conscious minds cannot alter physical reality in any way. I personally have no idea whether we have free will or not. In my opinion, we currently lack the knowledge to answer the question. However, I believe that debating the matter is useless. Instead, we should assume that there is free will as the “default” position, and get on with our lives.
Of course, if there is no free will, my advice is useless. I am simply an automaton among automatons, adding to the chorus of sound and fury that signifies nothing. In that case the debate over free will is merely another amusing case of pre-programmed robots arguing over what they “should” believe, and what they “ought” to do as a consequence, in a world in which the words “should” and “ought” are completely meaningless. These words imply an ability to choose between two alternatives, but no such choice can exist if there is no free will. “Ought” we to alter the criminal justice system because we have decided there is no such thing as free will? If we have no free will, the question is meaningless. We cannot possibly alter the predetermined outcome of the debate, or the predetermined evolution of the criminal justice system, or even our opinion on whether it “ought” to be changed or not. Under the circumstances it can hardly hurt to assume that we do have free will. If so, the assumption must have been foreordained, and no conscious agency exists that could have altered the fact. If we don’t have free will, it is also absurd, if inevitable, to blame me or even take issue with me for advocating that we act as if we have free will. After all, in that case I couldn’t have acted or thought any differently, assuming my mind is an artifact of the physical world, and not a “ghost in the machine.” If we believe in free will but there is no free will, debate about the matter may or may not be inevitable, but it is certainly futile, because the outcome of the debate has been predetermined.
On the other hand, if we decide that there is no free will, but there actually is, it can potentially “hurt” a great deal. In that case, we will be basing our actions and our conclusions about what “ought” or “ought not” to be done on a false assumption. Whatever our idiosyncratic goals happen to be, it is more probable that we will attain them if we base our strategy for achieving them on truth rather than falsehood. If we have free will, the outcome of the debate matters. Suppose, for example, that the anti-free will side has much better debaters and convinces those watching the debate that they have no free will even if they do. Plausible results include despair, a sense of purposelessness, fatalism, a lethargic and indifferent attitude towards life, a feeling that nothing matters, etc. No doubt there are legions of philosophers out there who can prove that, because a = b and b = c, none of these reactions are reasonable. They will, however, occur whether they are reasonable or not.
I doubt that my proposed default position will be difficult to implement. Even the most diehard free will denialists seldom succeed in completely accepting the implications of their own theories. Look through their writings, and before long you’ll find a “should.” Read a bit further and you’re likely to stumble over an “ought” as well. However, as noted above, speaking of “should” and “ought” in the absence of free will is absurd. They imply the possibility of a choice between two alternatives that will lead to different outcomes. If there is no free will, there can be no choice. Individuals will do what they “ought” to do or “ought not” to do just as the arrangement of matter and energy in the universe happens to dictate. It is absurd to blame them for doing something they could not avoid. However, the question of whether they actually will be blamed or not is also predetermined. It is just as absurd to blame the blamers.
In short, I propose we all stop arguing and accept the default. If there is no free will, then obviously I am proposing it because of my programming. I can’t do otherwise even if I “ought” to. It’s possible my proposal may change things, but, if so, the change was inevitable. However, if there is free will, then believing in it is simply believing in the truth, and a truth that, at least from my point of view, happens to be a great deal more palatable than the alternative.
Posted on May 17th, 2015
New Atheist bashing is all the rage these days. The gloating tone at Salon over New Atheist Sam Harris’ humiliation by Noam Chomsky in their recent exchange over the correct understanding of something that doesn’t exist, referred to in my last post, is but one of many examples. In fact, New Atheists aren’t really new, and neither is New Atheist bashing. Thumb through some of the more highbrow magazines of the 1920s, for example, and chances are you’ll run across an article describing the then-current crop of atheists as aggressive, ignorant, clannish, self-righteous and, in short, prone to all the familiar maladies that supposedly also afflict the New Atheists of today. The more “laid back” atheists were gleefully piling on then, just as they are now. They included H. L. Mencken, probably the most famous atheist of the time, who deplored aggressive atheism in his recently republished autobiographical trilogy. Unfortunately he’s no longer around to explain the difference between “aggressive” atheism and his own practice of heaping scorn and ridicule on the more backward believers. Perhaps it had something to do with the fact that Mencken was by nature a conservative. He abhorred any manifestation of the “Uplift,” a term which in those days meant more or less the same thing as “progressive” does today.
I think the difference between these two species of atheists has something to do with the degree to which they resent belonging to an outgroup. Distinguishing between ingroups and outgroups comes naturally to our species. This particular predisposition is ostensibly not as beneficial now as it was during the period over which it evolved. A host of pejorative terms have been invented to describe its more destructive manifestations, such as racism, anti-Semitism, xenophobia, etc., all of which really describe the same phenomenon. Those among us who harbor no irrational hatreds of this sort must be rare indeed. One often finds it present in its more virulent forms in precisely those individuals who consider themselves immune to it. Atheists are different, and that’s really all it takes to become identified as an outgroup.
Apparently some atheists don’t feel themselves particularly inconvenienced by this form of “othering,” especially in societies that have benefited to some extent from the European Enlightenment. Others take it more seriously, and fight back using the same tactics that have been directed against them. They “other” their enemies and seek to aggressively exploit human moral emotions to gain the upper hand. That is exactly what has been done quite successfully at one time or another by many outgroups, including women, blacks, and quite spectacularly lately, gays. New Atheists are merely those who embrace such tactics in the atheist community.
I can’t really blame my fellow atheists for this form of activism. One doesn’t choose to be an atheist. If one doesn’t believe in God, then other than in George Orwell’s nightmare world of “1984,” one can’t be “cured” into becoming a Christian or a Moslem, any more than a gay can be “cured” into becoming heterosexual, or a black “cured” into becoming white. However, for reasons having to do with the ideological climate in the world today that are much too complicated to address in a short blog post, New Atheists are facing a great deal more resistance than members of some of society’s other outgroups. This resistance is coming, not just from religious believers, but from their “natural” allies on the ideological left.
Noam Chomsky’s scornful treatment of Sam Harris, accompanied by the sneers of the leftist editors of Salon, is a typical example of this phenomenon. Such leaders as Harris, Richard Dawkins, and the late Christopher Hitchens are the public “face” of the New Atheist movement, and as a consequence are often singled out in this way. Of course they have their faults, and I’ve criticized the first two myself on this blog and elsewhere. However, many of the recent attacks, especially from the ideological left, are neither well-reasoned nor, at least in terms of my own subjective moral emotions, even fair. Often they conform to hackneyed formulas; the New Atheists are unsophisticated, they don’t understand what they’re talking about, they are bigoted, they are harming people who depend on religious beliefs to give “meaning” to their lives, etc.
A typical example, which was also apparently inspired by the Harris/Chomsky exchange, recently turned up at Massimo Pigliucci’s Scientia Salon. Entitled “Reflections on the skeptic and atheist movements,” it was ostensibly Pigliucci’s announcement that, after being a longtime member and supporter, he now wishes to “disengage” from the club. As one might expect, he came down squarely in favor of Chomsky, who is apparently one of his heroes. That came as no surprise to me, as fawning appraisals of Blank Slate kingpins Richard Lewontin and Stephen Jay Gould have also appeared at the site. It had me wondering who will be rehabilitated next. Charles Manson? Jack the Ripper? Pigliucci piques himself on his superior intellect which, we are often reminded, is informed by both science and a deep reading of philosophy. In spite of that, he seems completely innocent of any knowledge that the Blank Slate debacle ever happened, or of Lewontin’s and Gould’s highly effective role in propping it up for so many years, using such “scientific” methods as bullying, vilification and mobbing of anyone who disagreed with them, including, among others, Robert Trivers, W. D. Hamilton, Konrad Lorenz, and Richard Dawkins. Evidence of such applications of “science” is easily accessible to anyone who makes even a minimal effort to check the source material, such as Lewontin’s Not in Our Genes.
No matter, Pigliucci apparently imagines that the Blank Slate was just a figment of Steven Pinker’s fevered imagination. With such qualifications as a detector of “fools,” he sagely nods his head as he informs us that Chomsky “doesn’t suffer fools (like Harris) gladly.” With a sigh of ennui, he goes on, “And let’s not go (again) into the exceedingly naive approach to religious criticism that has made Dawkins one of the ‘four horsemen’ of the New Atheism.” The rest of the New Atheist worthies come in for similar treatment. By all means, read the article. You’ll notice that, like virtually every other New Atheist basher, whether on the left or the right of the ideological spectrum, Pigliucci never gets around to mentioning what these “naïve” criticisms of religion actually are, far less to responding to or discussing them.
It’s not hard to find Dawkins’ “naïve” criticisms of religion. They’re easily available to anyone who takes the trouble to look through the first few chapters of his The God Delusion. In fact, most of them have been around at least since Jean Meslier wrote them down in his Testament almost 300 years ago. Religious believers have been notably unsuccessful in answering them in the ensuing centuries. No doubt they might seem naïve if you happen to believe in the ephemeral and hazy versions of God concocted by the likes of David Bentley Hart and Karen Armstrong. They’ve put that non-objective, non-subjective, insubstantial God so high up on the shelf that it can’t be touched by atheists or anyone else. The problem is that that’s not the God that most people believe in. Dawkins can hardly be faulted for directing his criticisms at the God they do believe in. If his arguments against that God are really so naïve, what can possibly be the harm in actually answering them?
As noted above, New Atheist bashing is probably inevitable given the current ideological fashions. However, I suggest that those happy few who are still capable of thinking for themselves think twice before jumping on the bandwagon. In the first place, it is not irrational for atheists to feel aggrieved at being “othered,” any more than it is for any other ostracized minority. Perhaps more importantly, the question of whether religious beliefs are true or not matters. Today one actually hears so-called “progressive” atheists arguing that religious beliefs should not be questioned, because it risks robbing the “little people” of a sense of meaning and purpose in their lives. Apparently the goal is to cultivate delusions that will get them from cradle to grave with as little existential Angst as possible. It would be too shocking for them to know the truth. Beyond the obvious arrogance of such an attitude, I fail to see how it is doing anyone a favor. People supply their own “meaning of life,” depending on their perceptions of reality. Blocking the path to truth and promoting potentially pathological delusions in place of reality seems more a betrayal than a “service” to me. To the extent that anyone cares to take my own subjective moral emotions seriously, I can only say that I find substituting bland religious truisms for a chance to experience the stunning wonder, beauty and improbability of human existence less a “benefit” than an exquisite form of cruelty.
Posted on May 16th, 2015
Sam Harris and Noam Chomsky have a lot in common. Both are familiar public intellectuals, both are atheists, and both are well to the left of center politically. Both are also true believers in the fantasy of objective morality. As I noticed on my latest visit to the Salon website, however, that hasn’t deterred them from hurling anathemas at each other. Harris landed some weak jabs in a recent exchange of verbal fisticuffs, but according to Salon, Chomsky won by a knockout in the later rounds. A complete, blow-by-blow account may be found on Sam’s website, along with his own post mortem.
Apparently it all began when Harris tried to, in his words, “engineer a public conversation with Chomsky about the ethics of war, terrorism, state surveillance, and related topics.” As he wrote on his blog,
For decades, Noam Chomsky has been one of the most prominent critics of U.S. foreign policy, and the further left one travels along the political spectrum, the more one feels his influence. Although I agree with much of what Chomsky has said about the misuses of state power, I have long maintained that his political views, where the threat of global jihadism is concerned, produce dangerous delusions. In response, I have been much criticized by those who believe that I haven’t given the great man his due.
To clear the air, he wrote a pleasant note to Chomsky suggesting that they engage in a public conversation to, “explore these disagreements, clarify any misunderstandings,” and “attempt to find some common ground.” Not one to be taken in by such pleasantries, old pro Chomsky immediately positioned himself on the moral high ground. His tart reply:
Perhaps I have some misconceptions about you. Most of what I’ve read of yours is material that has been sent to me about my alleged views, which is completely false. I don’t see any point in a public debate about misreadings. If there are things you’d like to explore privately, fine. But with sources.
Harris should have known going in that hardcore “progressive” leftists never have friendly differences of opinion with anyone on matters more significant than the weather. Anyone who disagrees with them is automatically tossed into their outgroup, and acquires all the usual characteristics of the denizens thereof. They are, of course, always immoral, and commonly disgusting and mentally incompetent as well. That’s often how Harris himself portrays those who disagree with him on questions of morality. Nevertheless, he walked right into Chomsky’s punch, admitting the possibility that he may have misread him. He merely threw in the caveat that, if so, it could only have happened in a passage in his first book, The End of Faith, as that was the only time he’d ever mentioned Chomsky’s work in writing. That was plenty for Chomsky. In effect, Harris had just handed him the opportunity to pick his own battlefield. He did so with alacrity. As it happens, in the passage in question, Harris had objected to Chomsky’s condemnation of the Clinton Administration’s decision to bomb the Al-Shifa pharmaceutical plant in Sudan in the context of remarks about the 9/11 attacks. As he put it:
Chomsky does not hesitate to draw moral equivalences here: “For the first time in modern history, Europe and its offshoots were subjected, on home soil, to the kind of atrocity that they routinely have carried out elsewhere.”
Citing the passage in his own work Harris referred to, Chomsky immediately fired back, denying that it had ever been his intent to “draw moral equivalences”:
Let’s turn to what you did say—a disquisition on “moral equivalence.” You fail to mention, though, that I did not suggest that they were “morally equivalent” and in fact indicated quite the opposite. I did not describe the Al-Shifa bombing as a “horrendous crime” committed with “wickedness and awesome cruelty.” Rather, I pointed out that the toll might be comparable, which turns out on inquiry (which is not undertaken here, and which apologists for our crimes ignore), turns out to be, quite likely, a serious understatement.
Having thus seized the moral high ground, he proceeded to rain down pious punches on Harris, demonstrating that he was not merely wrong, but grossly immoral. His ensuing replies include such choice examples as,
You also ignored the fact that I had already responded to your claim about lack of intention—which, frankly, I find quite shocking on elementary moral grounds, as I suspect you would too if you were to respond to the question raised at the beginning of my quoted comment.
Harris is willfully blind to the crimes of the Clinton Administration:
And of course they knew that there would be major casualties. They are not imbeciles, but rather adopt a stance that is arguably even more immoral than purposeful killing, which at least recognizes the human status of the victims, not just killing ants while walking down the street, who cares?
He is morally depraved for abetting this crime:
Your own moral stance is revealed even further by your complete lack of concern about the apparently huge casualties and the refusal even to investigate them.
I’ve seen apologetics for atrocities before, but rarely at this level – not to speak of the refusal to withdraw false charges, a minor fault in comparison.
Chomsky closes on a magnanimous note:
I’ll put aside your apologetics for the crimes for which you and I share responsibility, which, frankly, I find quite shocking, particularly on the part of someone who feels entitled to deliver moral lectures.
Harris is game enough, but staggers on rubbery legs for the rest of the fight. Even in the midst of these blows, he can’t rid himself of the idée fixe that it’s possible to have a polite exchange with someone like Chomsky on differences of opinion about morality. In the post mortem on his website, it’s clear that he still doesn’t know what hit him. It’s virtually impossible to win arguments about objective morality with the likes of Chomsky unless you grasp the fundamental truth that there’s no such thing as objective morality. In fact, the whole debate was about subjective perceptions that are, as Westermarck put it, entirely outside the realm of truth claims.
I can only suggest that next time, instead of getting “down in the weeds,” as he puts it, in a debate with Chomsky about who is “really” the most morally pure, Harris consider the matter pragmatically. In fact, Chomsky is, and always has been, what Lenin referred to as “a useful idiot.” The net effect of all his moralistic hair splitting has been to aid and abet ideologies for which most sane people would just as soon avoid serving as guinea pigs, and to demoralize those who would seek to stand in their way. The most egregious example is probably the moral support he provided for the Khmer Rouge regime in Cambodia at the very time it was perpetrating what was probably, at least on a per capita basis, the worst act of genocide in human history, resulting in the virtual decapitation of a whole country and the annihilation of a large percentage of its population. There are many accounts of his role in this affair on the Internet, and I invite interested readers to have a look at them. One of the more balanced accounts may be found here. Here, too, Chomsky would run rings around Harris if he attempted to debate his role on moralistic grounds. Here, too, he could claim that he had never deliberately drawn any “moral equivalence,” that he had never intended to support the Khmer Rouge, and that those who suggest otherwise are immoral because of a, b, and c. However, it is a fact that Pol Pot and his cronies made very effective use of his remarks in their propaganda, among other things, predictably exploiting them to draw “moral equivalence” in blithe disregard of Chomsky’s assertions about his “intent.”
In fact, Chomsky has been a virtual poster boy for potential tyrannies of all stripes. One might say he has been an “equal opportunity” useful idiot. Once when I was visiting Germany I happened to glance at the offerings of a local newsstand, and saw the face of none other than Noam Chomsky smiling down at me from the front page of the neo-Nazi “Deutsche National-Zeitung”! In the accompanying article, the fascists cited him as an ideal example of a true American hero. I note in passing that tyrants themselves usually have no illusions about the real nature of such paragons of morality. Once Stalin had successfully exploited them to gain absolute power, he shot or consigned to the Gulag every single one he could lay his hands on.
In a word, I suggest that Sam take some advice that my father once passed down to me regarding such affairs: “Never get in a pissing contest with a skunk.” You don’t need to convince anyone that you’re more morally pure than Chomsky in order to realistically assess the net effect of all his “piety.” You just need to realize that, from a purely subjective point of view, it is “good” to survive.
Posted on April 26th, 2015
I would rank the Blank Slate debacle as the greatest scientific disaster of all time. For half a century and more, the “men of science” created and maintained a formidable obstacle in the way of our gaining the self-knowledge as a species that may be critical to our survival. This obstacle was the denial that human behavior is in any way influenced by innate human nature. For the time being, at least, the Blank Slate orthodoxy has been crushed. It would seem, however, that the scientific community is still traumatized by the affair. The whimsical “histories” that continue to be concocted of the affair and of the roles of the key players in it are a manifestation thereof.
For example, Robert Ardrey, the most influential and effective opponent of the Blank Slate orthodoxy in its heyday, has been thoroughly vindicated as far as the main theme of all his work is concerned. In spite of that, he is a virtual unperson today. Having shamed the “men of science,” it would seem that it is now beneath their dignity to even take notice of the fact that he ever existed. Meanwhile, Richard Lewontin, one of the high priests of the Blank Slate, is revered, and continues to win prestigious awards as a “great scientist.” Among people who should certainly know better, the mere mention of the fact that he was a kingpin of the Blank Slate orthodoxy is greeted with stunned disbelief.
Recently Lewontin was interviewed by David Sloan Wilson, one of today’s foremost defenders of group selection, a topic with a fascinating history of its own in connection with the Blank Slate. We find that, like the Bourbons who were propped back up as French monarchs by the victorious allies after the defeat of Napoleon, he has learned nothing and forgotten nothing. He has merely become more circumspect about revealing the ideological motivations behind his “science.” This becomes obvious when Wilson gets around to asking Lewontin about the connection between The Spandrels of San Marco, a paper he co-authored with Stephen Jay Gould in 1979, and Sociobiology. Lewontin demurely replies that it may have been “contextually relevant,” but the paper was mainly an attack on naïve adaptationism. Wilson: “I’m interested to know that was the primary motivation for the article, not Sociobiology.” Lewontin: “Yeah.” Balked in this first attempt, later in the interview, Wilson becomes a bit more blunt. (I delete some of the exchange for brevity. I encourage readers to look at the entire interview.)
DSW: Dick, I’d like to spend a little bit of time on Sociobiology and also Evolutionary Psychology, because even though that didn’t motivate the Spandrels paper, it still motivated you to be a critic and Steve too.
RL: Look, when I look at Sociobiology, the book or some of the other books he (E. O. Wilson) has written, it drives me mad. For example, if you read – I’ll take an extremely nasty example because it’s so clear – it is written that aggression is a part of human nature. It says that in the book, it lists features of human nature and aggression is one of them. So then I have said to Ed and others of his school, what do you do about people who have spent almost their entire lives in jail because they refuse to be conscripted into the army? What do you think the answer is? That is their form of aggression.
DSW: Well, OK, that’s facile.
RL: I don’t know what you can do about it. If everything can be said to be a form of aggression, even the refusal to be physically aggressive, what kind of science is that? …Because if everything by definition can be shown to be aggression then it ceases to be a useful concept in our scientific discussions.
As it happens, Lewontin uses the same argument in Not In Our Genes, a book he co-authored with fellow Blank Slaters Steven Rose and Leon Kamin in 1984. It makes no more sense now than it did then. Obviously, what’s still sticking in Lewontin’s craw after all these years is a series of books on the subject of human aggression that appeared back in the 60’s, the most famous of which was “On Aggression,” by Konrad Lorenz, published in the U.S. in 1966. In fact, the notion that the anecdote about an imprisoned pacifist demolishes what Lorenz and others actually wrote about human aggression is the sheerest nonsense. Lorenz and the others never dreamed that any of their theories on the subject precluded the possibility of conscientious objectors in any way, shape or form. In reality Lewontin is refuting, not Lorenz, but his favorite strawman then and now, the “genetic determinist.” Lewontin’s “genetic determinist” is one who believes that “human nature” forces people to behave in certain ways and not in others, regardless of culture or environment. If such beasts exist, they must be as rare as unicorns, because in all my reading I have never encountered one, not even among the most hard-core 19th century social Darwinists. Lewontin imagines them behind every bush. For him, all sociobiologists and evolutionary psychologists must necessarily be “genetic determinists.”
Lewontin spares Wilson any mention of his obsession with “genetic determinists,” but lays his cards on the table nevertheless. He’s still as much of a Blank Slater as ever. For example, at the end of the interview,
My main complaint is… the underlying claim that there exists a human nature, which then the claimant must give examples of, and so each claimant gives examples that are convenient for his or her pet theory. I think the worst thing we can do in science is to create concepts where what is included or not included within the concept is not delimited to begin with, it allows us to claim anything. That’s my problem with Sociobiology. It’s too loose.
Well, not exactly. Readers who really want to crawl into the mind of a Blank Slater should read Not In Our Genes, the book I referred to above. There it will be found that Lewontin’s problem isn’t that Sociobiology is “too loose,” but that he perceives it as an impediment to the glorious socialist revolution. You see, Lewontin is a Marxist, and Not In Our Genes is not a book of science, but a political tract. In its pages one will find over and over and over again the assertion that those who believe in human nature are stooges of the bourgeoisie. Sociobiology and the other sciences that affirm the existence of human nature are merely so many contrived, ideologically motivated ploys to defend the capitalist status quo and stave off the glorious dawn of socialism. For example, quoting from the book,
Each of us has been engaged… in research, writing, speaking, teaching, and public political activity in opposition to the oppressive forms in which determinist ideology manifests itself. We share a commitment to the prospect of the creation of a more socially just – a socialist – society. And we recognize that a critical science is an integral part of the struggle to create that society, just as we also believe that the social function of much of today’s science is to hinder the creation of that society by acting to preserve the interests of the dominant class, gender, and race.
Biological determinist ideas are part of the attempt to preserve the inequalities of our society and to shape human nature in their own image. The exposure of the fallacies and political content of those ideas is part of the struggle to eliminate those inequalities and to transform our society. In that struggle we transform our own nature.
Those who possess power and their representatives can most effectively disarm those who would struggle against them by convincing them of the legitimacy and inevitability of the reigning social organization. If what exists is right, then one ought not oppose it; if it exists inevitably, one can never oppose it successfully.
Here, then, we see that Lewontin is being a bit coy when he claims that he only objects to Sociobiology and the other sciences that affirm the existence of human nature because they are “too loose.” In perusing the book, we find that not only Konrad Lorenz and Robert Ardrey, but also Richard Dawkins, Robert Trivers, and W. D. Hamilton are all really just so many hirelings of the capitalist system. No matter that Trivers is a radical leftist, and Ardrey almost became a Communist himself in the 1930’s.
It is amusing to read Lewontin’s pecksniffery about the lack of scientific rigor in the work of these “capitalist stooges,” followed in short order by praise for the “scientific” work of Mao, Marx, and Engels. I can only encourage anyone in need of a good belly laugh to read Engels’ Dialectics of Nature. Therein he will find the great St. Paul of Marxism lecturing the greatest scientists of his day about all the errors he’s discovered in their work because they don’t pay enough attention to the dialectic. Lewontin’s furious ranting against the “bourgeoisie,” in a book that claims there is no such thing as human nature, confirms one important facet of innate human nature: ingroup/outgroup identification, referred to by Ardrey as the Amity/Enmity Complex. That, too, would be amusing, were it not for the fact that 100 million “bourgeoisie,” give or take, paid with their lives for this particular manifestation of outgroup identification.
If one is determined to cobble together a version of “reality” in which Lewontin figures as a “great scientist” instead of the Blank Slate kingpin he actually was, he will find no better place to look than the pages of Not In Our Genes. It comes complete with sage warnings against running to the opposite extreme of “cultural determinism,” and anathemas against the proponents of tabula rasa. To this I can only reply that nowhere in any of his work has Lewontin ever affirmed the existence of anything resembling the innate predispositions that one normally refers to in the vernacular as human nature, and he has consistently condemned anyone who does as politically suspect. If “good science” were a matter of condemning anyone who disagrees with your version of reality as a hireling of the forces of evil, Lewontin would take the cake.
UPDATE: Whyvert tweeted a link to a great article by Robert Trivers posted at the Unz Review website entitled, Vignettes of Famous Evolutionary Biologists, Large and Small. Included is a vignette of none other than Richard Lewontin. As it happens, Prof. Trivers was among those singled out by Lewontin as an evil minion of the bourgeoisie in his Not In Our Genes. His article includes some very interesting observations on the disintegrating effects of politics on Lewontin’s scientific career.
Posted on April 19th, 2015 4 comments
The evolutionary origins of morality and the reasons for its existence have been obvious for over a century. They were no secret to Edvard Westermarck when he published The Origin and Development of the Moral Ideas in 1906, and many others had written books and papers on the subject before his book appeared. However, our species has a prodigious talent for ignoring inconvenient truths, and we have been studiously ignoring that particular truth ever since.
Why is it inconvenient? Let me count the ways! To begin, the philosophers who have taken it upon themselves to “educate” us about the difference between good and evil would be unemployed if they were forced to admit that those categories are purely subjective, and have no independent existence of their own. All of their carefully cultivated jargon on the subject would be exposed as gibberish. Social Justice Warriors and activists the world over, those whom H. L. Mencken referred to collectively as the “Uplift,” would be exposed as so many charlatans. We would begin to realize that the legions of pious prigs we live with are not only an inconvenience, but absurd as well. Gaining traction would be a great deal more difficult for political and religious cults that derive their raison d’être from the fabrication and bottling of novel moralities. And so on, and so on.
Just as they do today, those who experienced these “inconveniences” in one form or another pointed to the drawbacks of reality in Westermarck’s time. For example, from his book,
Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism. If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it. This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong. To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief. The unfortunate circumstance that there do exist dangerous things in the world, proves that something may be dangerous and yet true. Another question is whether any scientific truth really is mischievous on the whole, although it may cause much discomfort to certain people. I venture to believe that this, at any rate, is not the case with that form of ethical subjectivism which I am here advocating.
I venture to believe it as well. In the first place, when we accept the truth about morality we make life a great deal more difficult for people of the type described above. Their exploitation of our ignorance about morality has always been an irritant, but has often been a great deal more damaging than that. In the 20th century alone, for example, the Communist and Nazi movements, whose followers imagined themselves at the forefront of great moral awakenings that would lead to the triumph of Good over Evil, resulted in the needless death of tens of millions of people. The victims were drawn disproportionately from among the most intelligent and productive members of society.
Still, just as Westermarck predicted more than a century ago, the bugaboo of “moral relativism” continues to be “repeated as an argument” in our own day. Apparently we are to believe that if the philosophers and theologians all step out from behind the curtain after all these years and reveal that everything they’ve taught us about morality is so much bunk, civilized society will suddenly dissolve in an orgy of rape and plunder.
Such notions are best left behind with the rest of the impedimenta of the Blank Slate. Nothing could be more absurd than the notion that unbridled license and amorality are our “default” state. One can quickly disabuse oneself of that fear by simply reading the comment thread of any popular news website. There one will typically find a gaudy exhibition of moralistic posing and pious one-upmanship. I encourage those who shudder at the thought of such an unpleasant reading assignment to instead have a look at Jonathan Haidt’s The Righteous Mind. As he puts it in the introduction to his book,
I could have titled this book The Moral Mind to convey the sense that the human mind is designed to “do” morality, just as it’s designed to do language, sexuality, music, and many other things described in popular books reporting the latest scientific findings. But I chose the title The Righteous Mind to convey the sense that human nature is not just intrinsically moral, it’s also intrinsically moralistic, critical and judgmental… I want to show you that an obsession with righteousness (leading inevitably to self-righteousness) is the normal human condition. It is a feature of our evolutionary design, not a bug or error that crept into minds that would otherwise be objective and rational.
Haidt also alludes to a potential reason that some of the people already mentioned above continue to evoke the scary mirage of moral relativism:
Webster’s Third New International Dictionary defines delusion as “a false conception and persistent belief in something that has no existence in fact.” As an intuitionist, I’d say that the worship of reason is itself an illustration of one of the most long-lived delusions in Western history: the rationalist delusion. It’s the idea that reasoning is our most noble attribute, one that makes us like the gods (for Plato) or that brings us beyond the “delusion” of believing in gods (for the New Atheists). The rationalist delusion is not just a claim about human nature. It’s also a claim that the rational caste (philosophers or scientists) should have more power, and it usually comes along with a utopian program for raising more rational children.
Human beings are not by nature moral relativists, and they are in no danger of becoming moral relativists merely by virtue of the fact that they have finally grasped what morality actually is. It is their nature to perceive Good and Evil as real things, existing independently of the subjective minds that give rise to them, and they will continue to do so even if their reason informs them that what they perceive is a mirage. They will always tend to behave as if these categories were absolute, rather than relative, even if all the theologians and philosophers among them shout at the top of their lungs that they are not being “rational.”
That does not mean that we should leave reason completely in the dust. Far from it! Now that we can finally understand what morality is, and account for the evolutionary origins of the behavioral predispositions that are its root cause, it is within our power to avoid some of the most destructive manifestations of moral behavior. Our moral behavior is anything but infinitely malleable, but we know from the many variations in the way it is manifested in different human societies and cultures, as well as its continuous and gradual change in any single society, that within limits it can be shaped to best suit our needs. Unfortunately, the only way we will be able to come up with an “optimum” morality is by leaning on the weak reed of our ability to reason.
My personal preferences are obvious enough, even if they aren’t set in stone. I would prefer to limit the scope of morality to those spheres in which it is indispensable for lack of a viable alternative. I would prefer a system that reacts to the “Uplift” and unbridled priggishness and self-righteousness with scorn and contempt. I would prefer an educational system that teaches the young the truth about what morality actually is, and why, in spite of its humble origins, we can’t get along without it if we really want our societies to “flourish.” I know; the legions of those whose whole “purpose of life” is dependent on cultivating the illusion that their own versions of Good and Evil are the “real” ones stands in the way of the realization of these whims of mine. Still, one can dream.
Posted on April 13th, 2015 No comments
The 19th century? I might just as well have said the 18th century. Today’s moral philosophers, gazing timidly about amidst the rubble of the Blank Slate, have only recently realized that one is not rendered grossly and hopelessly immoral by virtue of merely suggesting the possibility that there is such a thing as human nature. Indeed, some of them have even been daring enough to admit that there might be something to what the evolutionary psychologists have been telling them after all. In terms of their own specialty, that means they have boldly advanced to the point that they can dare to acknowledge the existence of the “moral sense” that was proposed quite convincingly by Shaftesbury more than 300 years ago, and demonstrated logically by Hutcheson a bit later in a form that no one has come close to refuting to this day. True, Shaftesbury and Hutcheson thought that God had concocted this “moral sense.” We didn’t know where it really came from until Darwin came along and gently alluded to it in the context of his great theory, and that was in the 19th century. One might even labor the point and say that we had to wait until the dawn of the 20th century before Westermarck came along and bluntly pointed out, for the benefit of those too dense to put two and two together,
…there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.
Shortly thereafter, of course, the “men of science” concocted the Blank Slate debacle, and the darkness fell. What we are witnessing today are the desultory attempts of moral philosophers, or at least a few of them, to pick up the pieces. I recently ran across a link, in an article on moral realism by Mike Lopresto at 3 Quarks Daily, to an example that appeared shortly after the collapse of the Blank Slate orthodoxy. Entitled A Darwinian Dilemma for Realist Theories of Value, by Sharon Street, it appeared in the journal Philosophical Studies back in 2006. Street opens with the following:
Contemporary realist theories of value claim to be compatible with natural science. In this paper, I call this claim into question by arguing that Darwinian considerations pose a dilemma for these theories. The main thrust of my argument is this. Evolutionary forces have played a tremendous role in shaping the content of human evaluative attitudes. The challenge for realist theories of value is to explain the relation between these evolutionary influences on our evaluative attitudes, on the one hand, and the independent evaluative truths that realism posits, on the other. Realism, I argue, can give no satisfactory account of this relation.
A bit later, Street gets around to explaining exactly what she means by “evolutionary forces:”
In his 1990 book Wise Choices, Apt Feelings, Allan Gibbard notes that his arguments “should be read as having a conditional form: If the psychological facts are roughly as I speculate, here is what might be said philosophically.” I attach a similar caveat to my argument in this paper: If the evolutionary facts are roughly as I speculate, here is what might be said philosophically. I try to rest my arguments on the least controversial, most well-founded evolutionary speculations possible. But they are speculations nonetheless, and they, like some of Gibbard’s theorizing in Wise Choices, Apt Feelings, fall within a difficult and relatively new subfield of evolutionary biology known as evolutionary psychology.
Obviously, Street still had a lively fear of the anathemas of the Blank Slate priesthood, carefully characterizing her evolutionary claims as mere “speculations.” One can place the collapse of the Blank Slate, at least as far as the popular media are concerned, at around the turn of the century, give or take a few years. I doubt that she would have dared to write such heresies ten years earlier. I certainly know of nothing similar that appeared in any of the philosophy rags prior to, say, 1995. Street continues,
According to this subfield, human cognitive traits are (in some cases) just as susceptible to Darwinian explanation as human physical traits are (in some cases). For example, a cognitive trait such as the widespread human tendency to value the survival of one’s offspring may, according to evolutionary psychology, be just as susceptible to evolutionary explanation as physical traits such as our bipedalism or our having opposable thumbs.
Having thus invited the lightning bolts, she then hurries to placate the offended gods of the Blank Slate, citing the familiar flim flam of two of its high priests, Stephen Jay Gould and Richard Lewontin:
There are many pitfalls that such evolutionary theorizing must avoid, the most important of which is the mistake of assuming that every observable trait (whether cognitive or physical) is an adaptation resulting from natural selection, as opposed to the result of any number of other complex (non-selective or only partially selective) processes that could have produced it. It is more than I can do here to describe such pitfalls in depth or to defend at length the evolutionary claims that my argument will be based on. Instead, it must suffice to emphasize the hypothetical nature of my arguments, and to say that while I am skeptical of the details of the evolutionary picture I offer, I think its outlines are certain enough to make it well worth exploring the philosophical implications.
To make a long story short, having thus established her own moral purity, Street feels safe enough to follow her “mere speculation” to its logical conclusions. Noting that there are two “flavors” of moral realists, including the “naturalist” kind, who claim that, while value judgments may have evolutionary roots, natural selection favors “true morality,” and the “non-naturalists,” who deny any such connection, she proceeds to debunk both versions. Noting the “striking continuity” between the more basic evaluative tendencies in other animals and our own evaluative judgments, she makes short work of the “non-naturalists.” Somewhat more sophisticated arguments are demanded to deal with the “naturalists,” who insist that our evolved natural predispositions “track” actual moral truths. Street provides them, in very convincing form, in Section 6 of her paper, and I encourage readers who are daunted at the prospect of wading through the entire 48 pages to at least have a look at it.
Now, however, as Alex might have put it in “A Clockwork Orange,” comes the weepy part of the story. Just as Nietzsche predicted in his Human, All Too Human, having climbed up her philosophical ladder to get a glance at the truth, Street shrinks back from what she sees. What she sees is that evolved human behavioral predispositions are the root cause of what we refer to as morality, and, as a result, Westermarck was right when he pointed out that moral judgments “fall entirely outside the category of truth.” In the end, she can’t face the full implications of this truth. Instead, she temporizes. In her conclusion she writes,
Now that there are creatures like us with marvelously complicated systems of valuings up and running, it is quite possible to come to value something because one recognizes that it has a value independent of oneself—not in the realist’s sense, but in an antirealist’s more modest sense. Thus, although valuing ultimately came first, value grew to be able to stand partly on its own. It grew to achieve its own, limited sort of priority over valuing—a priority that we can understand while at the same time being fully conscious of great biddings from the outside.
Hurrah! The poor, wooden puppet Pinocchio becomes a real boy after all! The oppressive and ludicrous piety that prevails in modern academia is vindicated, and philosophers can continue to write blather about how moral emotions can acquire the magical power to jump out of mammal A’s skull, hop onto mammal B’s back, and prescribe to mammal B what he ought and ought not do. Well, dear reader, we can forgive such regrettable weakness. After all, many choice jobs in academia would be rendered absurd, and many frail and pious egos would be rendered laughable by a straight up dose of reality. What of it? At least we can now utter the phrase “human nature” without fear of being doused with ice water. At least the philosophers have struggled back into the 19th century, and are almost on the same page with Darwin again. Better to rejoice in the progress we have made than grieve over the imbecilities we must still endure.
Posted on April 4th, 2015 No comments
If you’re worried that the demise of religion implies the demise of morality, I suggest you search the term “Memories Pizza.” As it happens, Memories Pizza is (or was) a small business in the town of Walkerton, Indiana. By all accounts, its owners had never refused to serve gays, or uttered a harsh word about the gay community. Then, however, a reporter by the name of Alyssa Marino strolled in fishing for a story about Indiana’s recently enacted “Religious Freedom Restoration Act.” Apparently attracted by the signage in the restaurant that made it obvious that the owners were Christians, Marino asked the proprietor a question that had never come up in the decade the business had been open, and was unlikely to come up in the future: would the business cater a gay wedding? The reply: “If a gay couple came in and wanted us to provide pizzas for their wedding, we would have to say no.” Marino promptly wrote a story about her visit under the headline, “RFRA: First Michiana business to publicly deny same-sex service.” This was a bit disingenuous, to say the least. As Robbie Soave at Hit and Run put it,
That headline implies two things that are false. The O’Connors had no intention of becoming the first Michiana business to do anything discriminatory with respect to gay people; they had merely answered a hypothetical question about what would happen if a gay couple asked them to cater a wedding. And the O’Connors had every intention of providing regular service to gay people—just not their weddings.
No matter, the story went viral, provoking a furious (and threatening) response from the gay ingroup. Hundreds of reviews suddenly appeared on Yelp, with comments such as,
If you like your pizza with a side of bigoted hatred and ignorance this is the spot for you. If you’re not a piece of trash I would stay away.
This is an excellent place to bring back that old time, nostalgia feeling. For those who want to experience what life was like under Jim Crow, this is the place for you!
Terrible place, owners chose to be heterosexual. The biggest bigots are the most closeted. No gay man or woman is going to order pizza for a wedding. These people should be put out of business. O yeah, I’m going to kill your Jesus. Try and stop me.
and, finally, the apocalyptic,
DO NOT EAT HERE – The owners are hateful bigots who twist the meaning of Christianity to satisfy their own insecurities by indoctrinating their children with hate, further poisoning our world and future generations.
Who’s going to Walkerton, IN to burn down #memoriespizza w me?
Of course, all this was treated as a mere bagatelle by the mainstream media. After all, the owners were nothing but a couple of hinds in flyover country, and Christians to boot. If victims can’t be portrayed as leftist martyrs, what’s the point of protecting them? Regardless of which “side” you choose, the story certainly demonstrates an important truth, and for the umpteenth time: God or no God, morality isn’t going anywhere.
Whether you agree with the gay activists or not, it is abundantly clear that their responses are instances of moral behavior. Furthermore, they demonstrate the dual nature of human morality, characterized by radically different types of moral responses to others depending on whether they are perceived to belong to one’s ingroup or outgroup. They also clearly demonstrate the human tendency to interpret moral emotions as representations of objective things, commonly referred to as Good and Evil, which are imagined to exist independently of the subjective minds that give rise to them. In the minds of the gays, the attitude of the Memories Pizza folks towards gay marriage isn’t just an expression of one of many coequal cultural alternatives. It can’t be dismissed as a mere difference of opinion. It doesn’t reflect the interpretation of one of many possible moralities, all equally valid relative to each other. No, clearly, in the minds of the gays, the owners have violated THE moral law. Otherwise their response, as reflected in tweets, e-mails and threats, would be inexplicable.
What rational basis is there for this furious reaction? As far as I can tell, none. Certainly, the gays cannot rely on holy scripture to legitimize their outrage. In spite of whimsical attempts at Biblical exegesis by the gay community, both the Bible and the Quran are quite explicit and blunt in their condemnations of gay behavior. The compassionate and merciful God of the Quran even threatens those who ignore the prohibition with quintillions of years in hell experiencing what ISIS recently inflicted on a Jordanian pilot for a few seconds, and that just for starters. I find no other sanction, whether in religion or philosophy, for the conclusion that opposition to gay marriage is not only wrong, but is actually absolutely evil. In other words, the behavior of the gay activists is completely irrational. It is also completely normal.
The evolved behavioral traits that are the “root cause” of moral behavior exist because they happened to increase the odds that those who were “wired” for such traits would be more likely to survive and reproduce. Mother Nature saw to it that moral emotions would be powerful, experienced as reflections of absolutes, and perceived as the independently existing “things,” Good and Evil. She didn’t bother with anything other than the big picture, the gross effect. As a result she treated such ostensibly comical manifestations of morality as the raining down of pious anathemas on devout Christians, who tend to be relatively successful at reproduction, by gays, who normally don’t reproduce at all, with a grain of salt, confident (and rightly so) that the vast majority of humans would be too stupid to perceive their own absurdity.
In a word, fears that the demise of religion implies the demise of morality are overblown. It will continue to exist in its manifold “different but similar” manifestations, regardless of whether it enjoys the sanction of religious scripture or the scribbling of philosophers. Morality is hardly infinitely malleable, but it can be shaped to some extent. It would probably behoove us to do so, making it quite clear in the process to what sorts of behavior it does and does not apply. The list should be kept as short and simple as possible, consonant with keeping the interactions of individuals as harmonious and productive as possible.
Back in the day, the religious types whose tastes ran to foisting Prohibition on an unwilling nation used to promote the idea of “one morality.” It probably wasn’t such a bad idea in itself, although I personally would likely have taken exception to the particular flavor they had in mind. I would favor a “one morality” that was free of religious influence, and that would apply in situations that the long experience of our species has taught us will arouse moral emotions in any case. Beyond that, it would apply to as limited an additional subset of behaviors as possible. Finally, this “one morality” would make it crystal clear that subjecting any other forms of behavior to moral judgment is itself immoral.
There could be no more of an ultimate sanction or source of legitimacy for such a “one morality” than for any other kind, by virtue of the very nature of morality itself. However, if it were properly formulated, it would be experienced as an absolute, just like all the rest, regardless of all the fashionable blather about moral relativism. There would, of course, always be those who question why they “ought” to do one thing, and “ought not” to do another. As a society, we would do well to see to it that the answer is just what Mother Nature “intended”: You “ought” to do what is “right,” because you will find the consequences of doing what is “right” a great deal more agreeable than doing what is “wrong.”
Posted on March 22nd, 2015 17 comments
Let me put my own cards on the table. I consider the Blank Slate affair the greatest debacle in the history of science. Perhaps you haven’t heard of it. I wouldn’t be surprised. Those who are the most capable of writing its history are often also those who are most motivated to sweep the whole thing under the rug. In any case, in the context of this post the Blank Slate refers to a dogma that prevailed in the behavioral sciences for much of the 20th century according to which there is, for all practical purposes, no such thing as human nature. I consider it the greatest scientific debacle of all time because, for more than half a century, it blocked the path of our species to self-knowledge. As we gradually approach the technological ability to commit collective suicide, self-knowledge may well be critical to our survival.
Such histories of the affair as do exist are often carefully and minutely researched by historians familiar with the scientific issues involved. In general, they’ve personally lived through at least some phase of it, and they’ve often been personally acquainted with some of the most important players. In spite of that, their accounts have a disconcerting tendency to wildly contradict each other. Occasionally one finds different versions of the facts themselves, but more often it’s a question of the careful winnowing of the facts to select and record only those that support a preferred narrative.
Obviously, I can’t cover all the relevant literature in a single blog post. Instead, to illustrate my point, I will focus on a single work whose author, Hamilton Cravens, devotes most of his attention to events in the first half of the 20th century, describing the sea change in the behavioral sciences that signaled the onset of the Blank Slate. As it happens, that’s not quite what he intended. What we see today as the darkness descending was for him the light of science bursting forth. Indeed, his book is entitled, somewhat optimistically in retrospect, The Triumph of Evolution: The Heredity-Environment Controversy, 1900-1941. It first appeared in 1978, more or less still in the heyday of the Blank Slate, although murmurings against it could already be detected among academic and professional experts in the behavioral sciences after the appearance of a series of devastating critiques in the popular literature in the 60’s by Robert Ardrey, Konrad Lorenz, and others, topped off by E. O. Wilson’s Sociobiology in 1975.
Ostensibly, the “triumph” Cravens’ title refers to is the demise of what he calls the “extreme hereditarian” interpretations of human behavior that prevailed in the late 19th and early 20th century in favor of a more “balanced” approach that recognized the importance of culture, as revealed by a systematic application of the scientific method. One certainly can’t fault him for being superficial. He introduces us to most of the key movers and shakers in the behavioral sciences in the period in question. There are minutiae about the contents of papers in old scientific journals, comments gleaned from personal correspondence, who said what at long forgotten scientific conferences, which colleges and universities had strong programs in psychology, sociology and anthropology more than 100 years ago, and who supported them, etc., etc. He guides us into his narrative so gently that we hardly realize we’re being led by the nose. Gradually, however, the picture comes into focus.
It goes something like this. In bygone days before the “triumph of evolution,” the existence of human “instincts” was taken for granted. Their importance seemed even more obvious in light of the rediscovery of Mendel’s work. As Cravens put it,
While it would be inaccurate to say that most American experimentalists concluded as the result of the general acceptance of Mendelism by 1910 or so that heredity was all powerful and environment of no consequence, it was nevertheless true that heredity occupied a much more prominent place than environment in their writings.
This sort of “subtlety” is characteristic of Cravens’ writing. Here, he doesn’t accuse the scientists he’s referring to of being outright genetic determinists. They just have an “undue” tendency to overemphasize heredity. It is only gradually, and by dint of occasional reading between the lines that we learn the “true” nature of these believers in human “instinct.” Without ever seeing anything as blatant as a mention of Marxism, we learn that their “science” was really just a reflection of their “class.” For example,
But there were other reasons why so many American psychologists emphasized heredity over environment. They shared the same general ethnocultural and class background as did the biologists. Like the biologists, they grew up in middle class, white Anglo-Saxon Protestant homes, in a subculture where the individual was the focal point of social explanation and comment.
As we read on, we find Cravens is obsessed with white Anglo-Saxon Protestants, or WASPs, noting that the “wrong” kind of scientists belong to that “class” scores of times. Among other things, they dominate the eugenics movement, and are innocently referred to as Social Darwinists, as if these terms had never been used in a pejorative sense. In general they are supposed to oppose immigration from other than “Nordic” countries, and tend to support “neo-Lamarckian” doctrines, and believe blindly that intelligence test results are independent of “social circumstances and milieu.” As we read further into Section I of the book, we are introduced to a whole swarm of these instinct-believing WASPs.
In Section II, however, we begin to see the first glimmerings of a new, critical and truly scientific approach to the question of human instincts. Men like Franz Boas, Robert Lowie, and Alfred Kroeber, began to insist on the importance of culture. Furthermore, they believed that their “culture idea” could be studied in isolation in such disciplines as sociology and anthropology, insisting on sharp, “territorial” boundaries that would protect their favored disciplines from the defiling influence of instincts. As one might expect,
The Boasians were separated from WASP culture; several were immigrants, of Jewish background, or both.
A bit later they were joined by John Watson and his behaviorists who, after performing some experiments on animals and human infants, apparently experienced an epiphany. As Cravens puts it,
To his amazement, Watson concluded that the James-McDougall human instinct theory had no demonstrable experimental basis. He found the instinct theorists had greatly overestimated the number of original emotional reactions in infants. For all practical purposes, he realized that there were no human instincts determining the behavior of adults or even of children.
Perhaps more amazing is the fact that Cravens detected not a hint of a tendency to replace science with dogma in all this. As Leibniz might have put it, everything was for the best, in this, the best of all possible worlds. Everything pointed to the “triumph of evolution.” According to Cravens, the “triumph” came with astonishing speed:
By the early 1920s the controversy was over. Subsequently, psychologists and sociologists joined hands to work out a new interdisciplinary model of the sources of human conduct and emotion stressing the interaction of heredity and environment, of innate and acquired characters – in short, the balance of man’s nature and his culture.
Alas, my dear Cravens, the controversy was just beginning. In what follows, he allows us a glimpse at just what kind of “balance” he’s referring to. As we read on into Section 3 of the book, he finally gets around to setting the hook:
Within two years of the Nazi collapse in Europe Science published an article symptomatic of a profound theoretical reorientation in the American natural and social sciences. In that article Theodosius Dobzhansky, a geneticist, and M. F. Ashley-Montagu, an anthropologist, summarized and synthesized what the last quarter century’s work in their respective fields implied for extreme hereditarian explanations of human nature and conduct. Their overarching thesis was that man was the product of biological and social evolution. Even though man in his biological aspects was as subject to natural processes as any other species, in certain critical respects he was unique in nature, for the specific system of genes that created an identifiably human mentality also permitted man to experience cultural evolution… Dobzhansky and Ashley-Montagu continued, “Instead of having his responses genetically fixed as in other animal species, man is a species that invents its own responses, and it is out of this unique ability to invent… his responses that his cultures are born.”
and, finally, in the conclusions, after assuring us that,
By the early 1940s the nature-nurture controversy had run its course.
Cravens leaves us with some closing sentences that epitomize his “triumph of evolution:”
The long-range, historical function of the new evolutionary science was to resolve the basic questions about human nature in a secular and scientific way, and thus provide the possibilities for social order and control in an entirely new kind of society. Apparently this was a most successful and enduring campaign in American culture.
At this point, one doesn’t know whether to laugh or cry. Apparently Cravens, who has just supplied us with arcane details about who said what at obscure scientific conferences half a century and more before he published his book, was completely unaware of exactly what Ashley Montagu, his herald of the new world order, meant when he referred to “extreme hereditarian explanations,” in spite of the fact that Montagu had spelled it out ten years earlier in an invaluable little pocket guide for the followers of the “new science” entitled Man and Aggression. There Montagu describes the sort of “balance of man’s nature and his culture” he intended as follows:
Man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings.
There is, in fact, not the slightest evidence or ground for assuming that the alleged “phylogenetically adapted instinctive” behavior of other animals is in any way relevant to the discussion of the motive-forces of human behavior. The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.
So much for Cravens’ “balance.” He spills a great deal of ink in his book assuring us that the Blank Slate orthodoxy he defends was the product of “science,” little influenced by any political or ideological bias. Apparently he also didn’t notice that, not only in Man and Aggression, but ubiquitously in the Blank Slate literature, the “new science” is defended over and over and over again with the “argument” that anyone who opposes it is a racist and a fascist, not to mention far right wing.
As it turns out, Cravens didn’t completely lapse into a coma following the publication of Ashley Montagu’s 1947 pronunciamiento in Science. In his “Conclusion” we discover that, after all, he had a vague presentiment of the avalanche that would soon make a shambles of his “new evolutionary science.” In his words,
Of course in recent years something approximating at least a minor revival of the old nature-nurture controversy seems to have arisen in American science and politics. It is certainly quite possible that this will lead to a full scale nature-nurture controversy in time, not simply because of the potential for a new model of nature that would permit a new debate, but also, as one historian has pointed out, because our own time, like the 1920s, has been a period of racial and ethnic polarization. Obviously any further comment would be premature.
Obviously, my dear Cravens. What’s the moral of the story, dear reader? Well, among other things, that if you really want to learn something about the Blank Slate, you’d better not be shy of wading through the source literature yourself. It’s still out there, waiting to be discovered. One particularly rich source of historical nuggets is H. L. Mencken’s American Mercury, which Ron Unz has been so kind as to post online. Mencken took a personal interest in the “nature vs. nurture” controversy, and took care to publish articles by heavy hitters on both sides. For a rather different take than Cravens on the motivations of the early Blank Slaters, see for example, Heredity and the Uplift, by H. M. Parshley. Parshley was an interesting character who took on no less an opponent than Clarence Darrow in a debate over eugenics, and later translated Simone de Beauvoir’s feminist manifesto The Second Sex into English.
Posted on March 21st, 2015 No comments
The National Ignition Facility, or NIF, at Lawrence Livermore National Laboratory (LLNL) in California was designed and built, as its name implies, to achieve fusion ignition. The first experimental campaign intended to achieve that goal, the National Ignition Campaign, or NIC, ended in failure. Scientists at LLNL recently published a paper in the journal Physics of Plasmas outlining, to the best of their knowledge to date, why the experiments failed. Entitled “Radiation hydrodynamics modeling of the highest compression inertial confinement fusion ignition experiment from the National Ignition Campaign,” the paper concedes that,
The recently completed National Ignition Campaign (NIC) on the National Ignition Facility (NIF) showed significant discrepancies between post-shot simulations of implosion performance and experimentally measured performance, particularly in thermonuclear yield.
To understand what went wrong, it’s necessary to know some facts about the fusion process and the nature of scientific attempts to achieve fusion in the laboratory. Here’s the short version: The neutrons and protons in an atomic nucleus are held together by the strong force, which is about 100 times stronger than the electromagnetic force, and operates only over tiny distances measured in femtometers. The average binding energy per nucleon (proton or neutron) due to the strong force is greatest for the elements in the middle of the periodic table, and gradually decreases in the directions of both the lighter and heavier elements. That’s why energy is released by fissioning heavy atoms like uranium into lighter atoms, or fusing light atoms like hydrogen into heavier atoms. Fusion of light elements isn’t easy. Before the strong force that holds atomic nuclei together can take effect, two light nuclei must be brought very close to each other. However, atomic nuclei are all positively charged, and like charges repel. The closer they get, the stronger the repulsion becomes. The sun solves the problem with its crushing gravitational force. On earth, the energy of fission can also provide the necessary force in nuclear weapons. However, concentrating enough energy to accomplish the same thing in the laboratory has proved a great deal more difficult.
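To get a feel for just how strong the electrostatic repulsion described above is, here is a back-of-the-envelope sketch in Python. The 3-femtometer separation and the shortcut constant are my assumptions for illustration, not figures from the post:

```python
# Rough estimate of the Coulomb barrier between two light nuclei.
# Assumption (not from the post): the barrier is taken as the electrostatic
# potential energy at a separation of ~3 femtometers, roughly where the
# strong force begins to take over.

K_E2 = 1.44  # Coulomb constant times e^2, in MeV * fm (a standard shortcut)

def coulomb_barrier_mev(z1, z2, separation_fm):
    """Electrostatic potential energy of two nuclei with charges z1 and z2
    (in units of the proton charge) at the given separation, in MeV."""
    return K_E2 * z1 * z2 / separation_fm

# Deuterium and tritium nuclei each carry a single proton (z = 1).
barrier = coulomb_barrier_mev(1, 1, 3.0)
print(f"Approximate D-T Coulomb barrier: {barrier:.2f} MeV")  # ~0.48 MeV
```

Even this half-MeV-scale barrier corresponds to temperatures of tens of millions of degrees, which is why confining the fuel is the whole problem.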
The problem is to confine incredibly hot material at sufficiently high densities for a long enough time for significant fusion to take place. At the moment there are two mainstream approaches to solving it: magnetic fusion and inertial confinement fusion, or ICF. In the former, confinement is achieved with powerful magnetic lines of force. That’s the approach at the international ITER fusion reactor project currently under construction in France. In ICF, the idea is to first implode a small target of fuel material to extremely high density, and then heat it to the necessary high temperature so quickly that its own inertia holds it in place long enough for fusion to happen. That’s the approach being pursued at the NIF.
The NIF consists of 192 powerful laser beams, which can concentrate about 1.8 megajoules of light on a tiny spot, delivering all that energy in a time of only a few nanoseconds. It is much larger than the next biggest similar facility, the OMEGA laser system at the Laboratory for Laser Energetics in Rochester, NY, which maxes out at about 40 kilojoules. The NIC experiments were indirect drive experiments, meaning that the lasers weren’t aimed directly at the BB-sized, spherical target, or “capsule,” containing the fuel material (a mixture of deuterium and tritium, two heavy isotopes of hydrogen). Instead, the target was mounted inside a tiny, cylindrical enclosure known as a hohlraum with the aid of a thin, plastic “tent.” The lasers were fired through holes on each end of the hohlraum, striking the walls of the cylinder, generating a pulse of x-rays. These x-rays then struck the target, ablating material from its surface at high speed. In a manner similar to a rocket exhaust, this drove the remaining target material inward, causing it to implode to extremely high densities, about 40 times the density of the heaviest naturally occurring elements. As it implodes, the material must be kept as “cold” as possible, because it’s easier to squeeze and compress things that are cold than those that are hot. However, when it reaches maximum density, a way must be found to heat a small fraction of this “cold” material to the very high temperatures needed for significant fusion to occur. This is accomplished by setting off a series of shocks during the implosion process that converge at the center of the target at just the right time, generating the necessary “hot spot.” The resulting fusion reactions release highly energetic alpha particles, which spread out into the surrounding “cold” material, heating it and causing it to fuse as well, in a “burn wave” that propagates outward.
“Ignition” occurs when the amount of fusion energy released in this way is equal to the energy in the laser beams that drove the target.
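The ignition criterion above boils down to a single ratio. A minimal sketch, using the 1.8 megajoule laser energy from the post; the 10 megajoule predicted yield is a hypothetical number chosen only to illustrate the roughly 50-fold shortfall discussed below:

```python
# Target "gain" as described above: fusion energy out divided by laser
# energy in. Ignition corresponds to a gain of at least 1.

def gain(fusion_yield_mj, laser_energy_mj):
    return fusion_yield_mj / laser_energy_mj

NIF_LASER_MJ = 1.8  # laser energy delivered, from the post

# Hypothetical illustration: a predicted 10 MJ yield versus a measured
# yield 50 times lower, the rough shortfall the NIC experiments saw.
predicted = gain(10.0, NIF_LASER_MJ)       # ~5.6, comfortably "ignited"
measured = gain(10.0 / 50, NIF_LASER_MJ)   # ~0.11, far short of ignition
print(predicted, measured)
```

The same arithmetic underlies the predicted “gains of from 1 to 10” mentioned later in the post.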
As noted above, things didn’t go as planned. The actual fusion yield achieved in the best experiment was less than that predicted by the best radiation hydrodynamics computer codes available at the time by a factor of about 50, give or take. The LLNL paper in Physics of Plasmas discusses some of the reasons for this, and describes subsequent improvements to the codes that account for some, but not all, of the experimental discrepancies. According to the paper,
Since these simulation studies were completed, experiments have continued on NIF and have identified several important effects – absent in the previous simulations – that have the potential to resolve at least some of the large discrepancies between simulated and experimental yields. Briefly, these effects include larger than anticipated low-mode distortions of the imploded core – due primarily to asymmetries in the x-ray flux incident on the capsule, – a larger than anticipated perturbation to the implosion caused by the thin plastic membrane or “tent” used to support the capsule in the hohlraum prior to the shot, and the presence, in some cases, of larger than expected amounts of ablator material mixed into the hot spot.
In a later section, the LLNL scientists also note,
Since this study was undertaken, some evidence has also arisen suggesting an additional perturbation source other than the three specifically considered here. That is, larger than anticipated fuel pre-heat due to energetic electrons produced from laser-plasma interactions in the hohlraum.
In simple terms, the first of these passages means that the implosions weren’t symmetric enough, and the second means that the fuel may not have been “cold” enough during the implosion process. Any variation from perfectly spherical symmetry during the implosion can rob energy from the central hot spot, allow material to escape before fusion can occur, mix cold fuel material into the hot spot, quenching it, etc., potentially causing the experiment to fail. The asymmetries in the x-ray flux mentioned in the paper mean that the target surface would have been pushed harder in some places than in others, resulting in asymmetries to the implosion itself. A larger than anticipated perturbation due to the “tent” would have seeded instabilities, such as the Rayleigh-Taylor instability. Imagine holding a straw filled with water upside down. Atmospheric pressure will prevent the water from running out. Now imagine filling a perfectly cylindrical bucket with water to the same depth. If you hold it upside down, the atmospheric pressure over the surface of the water is the same. Based on the straw experiment, the water should stay in the bucket, just as it did in the straw. Nevertheless, the water comes pouring out. As they say in the physics business, the straw experiment doesn’t “scale.” The reason for this anomaly is the Rayleigh-Taylor instability. Over such a large surface, small variations from perfect smoothness are gradually amplified, growing to the point that the surface becomes “unstable,” and the water comes splashing out. Another, related instability, the Richtmyer-Meshkov instability, leads to similar results in material where shocks are present, as in the NIF experiments.
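The upside-down bucket intuition can be made slightly more quantitative with the textbook linear growth rate of the Rayleigh-Taylor instability. This formula is standard fluid-dynamics material, not taken from the post, and real ICF implosions require ablative corrections to it; the numbers are illustrative:

```python
import math

# Classical Rayleigh-Taylor linear growth rate: gamma = sqrt(A * k * g),
# where A is the Atwood number of the two fluids, k the wavenumber of the
# surface ripple, and g the acceleration. A small perturbation grows like
# exp(gamma * t) until nonlinear effects take over.

def atwood_number(rho_heavy, rho_light):
    """Density contrast between the heavy fluid on top and the light one below."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rt_growth_rate(rho_heavy, rho_light, wavelength_m, g):
    """Growth rate (1/s) of a ripple of the given wavelength in meters."""
    k = 2 * math.pi / wavelength_m
    return math.sqrt(atwood_number(rho_heavy, rho_light) * k * g)

# Water (~1000 kg/m^3) sitting over air (~1.2 kg/m^3), as in the bucket:
# the Atwood number is nearly 1, so ripples grow quickly and the water
# comes splashing out.
print(atwood_number(1000.0, 1.2))  # ~0.998
```

The growth rate rises with shorter wavelengths and sharper density contrasts, which is why tiny surface defects like the “tent” scar matter so much in an implosion.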
Now, with the benefit of hindsight, it’s interesting to look back at some of the events leading up to the decision to build the NIF. At the time, the government used a “key decision” process to approve major proposed projects. The first key decision, known as Key Decision 0, or KD0, was approval to go forward with conceptual design. The second was KD1, approval of engineering design and acquisition. There were more “key decisions” in the process, but after passing KD1, it could safely be assumed that most projects were “in the bag.” In the early 90’s, a federal advisory committee, known as the Inertial Confinement Fusion Advisory Committee, or ICFAC, had been formed to advise the responsible agency, the Department of Energy (DOE), on matters relating to the national ICF program. Among other things, its mandate included advising the government on whether it should proceed with key decisions on the NIF project. The Committee’s advice was normally followed by DOE.
At the time, there were six major “program elements” in the national ICF program. These included the three weapons laboratories, LLNL, Los Alamos National Laboratory (LANL), and Sandia National Laboratories (SNL). The remaining three included the Laboratory for Laser Energetics at the University of Rochester (UR/LLE), the Naval Research Laboratory (NRL), and General Atomics (GA). Spokespersons from all these “program elements” appeared before the ICFAC at a series of meetings in the early 90’s. The critical meeting as far as approval of the decision to pass through KD1 is concerned took place in May 1994. Prior to that time, extensive experimental programs at LLNL’s Nova laser, UR/LLE’s OMEGA, and a host of other facilities had been conducted to address potential uncertainties concerning whether the NIF could achieve ignition. The best computer codes available at the time had modeled proposed ignition targets, and predicted that several different designs would ignite, typically producing “gains,” the ratio of the fusion energy out to the laser energy in, of from 1 to 10. There was just one major fly in the ointment – a brilliant physicist named Steve Bodner, who directed the ICF program at NRL at the time.
Bodner told the ICFAC that the chances of achieving ignition on the NIF were minimal, providing his reasons in the form of a detailed physics analysis. Among other things, he noted that there was no way of controlling the symmetry because of blow-off of material from the hohlraum wall, which could absorb both laser light and x-rays. Ablated material from the capsule itself could also absorb laser and x-ray radiation, again destroying symmetry. He pointed out that codes had raised the possibility of pressure perturbations on the capsule surface due to stagnation of the blow-off material on the hohlraum axis. LLNL’s response was that these problems could be successfully addressed by filling the hohlraum with a gas such as helium, which would hold back the blow-off from the walls and target. Bodner replied that such “solutions” had never really been tested because of the inability to do experiments on Nova with sufficient pulse length. In other words, it was impossible to conduct experiments that would “scale” to the NIF on existing facilities. In building the NIF, we might be passing from the “straw” to the “bucket.” He noted several other areas of major uncertainty with NIF-scale targets, such as the possibility of unaccounted for reflection of the laser light, and the possibility of major perturbations due to so-called laser-plasma instabilities.
In light of these uncertainties, Bodner suggested delaying approval of KD1 for a year or two until these issues could be more carefully studied. By then, we might have gained the technological confidence to proceed. However, I suspect he knew that two years would never be enough to resolve the issues he had raised. What Bodner really wanted to do was build a much larger facility, known as the Laboratory Microfusion Facility, or LMF. The LMF would have a driver energy of from 5 to 10 megajoules compared to the NIF’s 1.8. It had been seriously discussed in the late 80’s and early 90’s. Potentially, such a facility could be built with Bodner’s favored KrF laser drivers, the kind used on the Nike laser system at NRL, instead of the glass lasers that had been chosen for NIF. It would be powerful enough to erase the physics uncertainties he had raised by “brute force.” Bodner’s proposed approach was plausible and reasonable. It was also a forlorn hope.
Funding for the ICF program had been cut in the early 90’s. Chances of gaining approval for a beast as expensive as LMF were minimal. As a result, it was now officially considered a “follow-on” facility to the NIF. No one took this seriously at the time. Everyone knew that, if NIF failed, there would be no “follow-on.” Bodner knew this, the scientists at the other program elements knew it, and so did the members of the ICFAC. The ICFAC was composed of brilliant scientists. However, none of them had any real insight into the guts of the computer codes that were predicting ignition on the NIF. Still, they had to choose between the results of the big codes, and Bodner’s physical insight bolstered by what were, in comparison, “back of the envelope” calculations. They chose the big codes. With the exception of Tim Coffey, then Director of NRL, they voted to approve passing through KD1 at the May meeting.
In retrospect, Bodner’s objections seem prophetic. The NIC has failed, and he was not far off the mark concerning the reasons for the failure. It’s easy to construe the whole affair as a morality tale, with Bodner playing the role of neglected Cassandra, and the LLNL scientists villains whose overweening technological hubris finally collided with the grim realities of physics. Things aren’t that simple. The LLNL people, not to mention the supporters of NIF from the other program elements, included many responsible and brilliant scientists. They were not as pessimistic as Bodner, but none of them was 100% positive that the NIF would succeed. They decided the risk was warranted, and they may well yet prove to be right.
In the first place, as noted above, chances that an LMF might be substituted for the NIF after another year or two of study were very slim. The funding just wasn’t there. Indeed, the number of laser beams on the NIF itself had been reduced from the originally proposed 240 to 192, at least in part, for that very reason. It was basically a question of the NIF or nothing. Studying the problem to death, now such a typical feature of the culture at our national research laboratories, would have led nowhere. The NIF was never conceived as an energy project, although many scientists preferred to see it in that light. Rather, it was built to serve the national nuclear weapons program. Its supporters were aware that it would be of great value to that program even if it didn’t achieve ignition. In fact, it is, and is now providing us with a technological advantage that rival nuclear powers can’t match in this post-testing era. Furthermore, LLNL and the other weapons laboratories were up against another problem – what you might call a demographic cliff. The old, testing-era weapons designers were getting decidedly long in the tooth, and it was necessary to find some way to attract new talent. A facility like the NIF, capable of exploring issues in inertial fusion energy, astrophysics, and other non-weapons-related areas of high energy density physics, would certainly help address that problem as well.
Finally, the results of the NIC in no way “proved” that ignition on the NIF is impossible. There are alternatives to the current indirect drive approach with frequency-tripled “blue” laser beams. Much more energy, up to around 4 megajoules, might be available if the known problems of using longer wavelength “green” light can be solved. Thanks to theoretical and experimental work done by the ICF team at UR/LLE under the leadership of Dr. Robert McCrory, the possibility of direct drive experiments on the NIF, hitting the target directly instead of shooting the laser beams into a “hohlraum” can, was also left open, using a so-called “polar” illumination approach. Another possibility is the “fast ignitor” approach to ICF, which would dispense with the need for complicated converging shocks to produce a central “hot spot.” Instead, once the target had achieved maximum density, the hot spot would be created on the outer surface using a separate driver beam.
In other words, while the results of the NIC are disappointing, stay tuned. Pace Dr. Bodner, the scientists at LLNL may yet pull a rabbit out of their hats.
Posted on March 15th, 2015 No comments
Human morality is the manifestation of innate behavioral traits in animals with brains large enough to reason about their own emotional reactions. It exists because those traits evolved. They did not evolve to serve any purpose, but purely because they happened to enhance the probability that individuals carrying them would survive and reproduce. In the absence of those traits morality as we know it would not exist. Darwin certainly suspected as much. Now, more than a century and a half after the publication of On the Origin of Species, so much is really obvious.
Scores of books have been published recently on the innate emotional wellsprings of morality. Its analogs have been clearly identified in other animals. Its expression has been demonstrated in infants, long before they could have learned the responses in question via cultural transmission. Unless all these books are pure gibberish, and all these observations are delusions, morality is ultimately the expression of physical phenomena happening in the brains of individuals. In other words, it is subjective. It does not have an independent existence as a thing-in-itself, outside of the minds of individuals. It follows that it cannot somehow jump out of the skulls of those individuals and gain some kind of an independent, legitimate power to prescribe to other individuals what they should or should not do.
In spite of all that, the faith in objective morality persists, in defiance of the obvious. The truth is too jarring, too uncomfortable, too irreconcilable with what we “feel,” and so we have turned away from it. As the brilliant Edvard Westermarck put it in his The Origin and Development of the Moral Ideas,
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivise the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion.
It follows that, as Westermarck puts it,
The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.
If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth.
Westermarck wrote those words in 1906. More than a century later, we are still whistling past the graveyard of objective morality. Interested readers can confirm this by a quick trip to their local university library. Browsing through the pages of Ethics, one of the premier journals devoted to the subject, they will find articles on deontological, consequentialist, and several other abstruse flavors of morality. They will find a host of helpful recipes for what should or should not be done in a given situation. They will discover that it is their “duty” to do this, that, or the other thing. Finally, they will find all of the above ensconced in an almost impenetrable smokescreen of academic jargon. In a word, most of the learned contributors to Ethics have ignored Westermarck, and are still chasing their tails, doggedly pursuing a “scientific ethics” that will “fix rules for human conduct” once and for all.
Challenge one of these learned philosophers, and their response is typically threadbare enough. A common gambit is no more complex than the claim that objective morality must exist, because if it didn’t then the things we all know are bad wouldn’t be bad anymore. An example of the genre recently turned up on the opinion pages of The New York Times, entitled, Why Our Children Don’t Think There Are Moral Facts. Its author, Justin McBrayer, an associate professor of philosophy at Fort Lewis College in Durango, Colorado, opens with the line,
What would you say if you found out that our public schools were teaching children that it is not true that it’s wrong to kill people for fun or cheat on tests? Would you be surprised?
Now, as Westermarck pointed out, it is impossible for things to be “true” if they have no objective existence. Read the article carefully, and you’ll see that McBrayer doesn’t even attempt to dispute the logic behind Westermarck’s observation. Rather, he gives Westermarck the same answer Socrates’ judges gave Socrates as they handed him the hemlock: “I’m right and you’re wrong because what you claim is true is bad for the children.” In other words, there must be an objective bad because otherwise it would be bad. Other than that, the only attempt at an argument in the whole article is the following ad hominem remark about any philosopher who denies the existence of objective morality:
There are historical examples of philosophers who endorse a kind of moral relativism, dating back at least to Protagoras who declared that “man is the measure of all things,” and several who deny that there are any moral facts whatsoever. But such creatures are rare.
In other words, objective morality must be true, because those who deny it are “creatures.” No doubt, such “defenses” of objective morality have been around since time immemorial. They certainly were in Westermarck’s day. His response was as valid then as it is now:
Ethical subjectivism is commonly held to be a dangerous doctrine, destructive to morality, opening the door to all sorts of libertinism. If that which appears to each man as right or good, stands for that which is right or good; if he is allowed to make his own law, or to make no law at all; then, it is said, everybody has the natural right to follow his caprice and inclinations, and to hinder him from doing so is an infringement on his rights, a constraint with which no one is bound to comply provided that he has the power to evade it. This inference was long ago drawn from the teaching of the Sophists, and it will no doubt be still repeated as an argument against any theorist who dares to assert that nothing can be said to be truly right or wrong. To this argument may, first, be objected that a scientific theory is not invalidated by the mere fact that it is likely to cause mischief.
Obviously, as Westermarck foresaw, the argument is “still repeated” more than a century later. In McBrayer’s case, it goes like this:
Indeed, in the world beyond grade school, where adults must exercise their moral knowledge and reasoning to conduct themselves in the society, the stakes are greater. There, consistency demands that we acknowledge the existence of moral facts. If it’s not true that it’s wrong to murder a cartoonist with whom one disagrees, then how can we be outraged? If there are no truths about what is good or valuable or right, how can we prosecute people for crimes against humanity? If it’s not true that all humans are created equal, then why vote for any political system that doesn’t benefit you over others?
As a philosopher, I already knew that many college-aged students don’t believe in moral facts. While there are no national surveys quantifying this phenomenon, philosophy professors with whom I have spoken suggest that the overwhelming majority of college freshmen in their classrooms view moral claims as mere opinions that are not true or are true only relative to a culture.
One often hears such remarks about the supposed pervasiveness of moral relativism. They are commonly based on the fallacy that human morality is the product of human reason rather than human emotion. The reality is that Mother Nature has been blithely indifferent to the repeated assertions of philosophers that, unless we listen to them, morality will disappear. She designed morality to work, for better or worse, whether we take the trouble to reason about it or not. All these fears of moral relativism can’t even pass the “ho ho” test. They fly in the face of all the observable facts about moral behavior in the real world. Moral relativism on campus, you say? Please! Not since the heyday of the Puritans has there been such a hotbed of extreme, moralistic piety as exists today in academia. No less a comedian than Chris Rock won’t even perform on college campuses anymore because of repeated encounters with the extreme manifestations of priggishness one finds there. One can’t tell a joke without “offending” someone.
Morality isn’t going anywhere. It will continue to function just as it always has, oblivious to whether it has the permission of philosophers or not. As can be seen by the cultural differences in the way that moral emotions are “acted out,” within certain limits morality is malleable. We have some control over whether it is “acted out” by the immolation of enemy pilots and the beheading and crucifixion of “infidels,” or in forms that promote what Sam Harris might call “human flourishing.” Regardless of our choice, I suspect that our chances of successfully shaping a morality that most of us would find agreeable will be enhanced if we base our actions on what morality actually is rather than on what we want it to be.