Posted on February 5th, 2016 No comments
Commenter Christian asked whether I would make an exception for the Führer in the post Is Trump Evil? I would not. Questions of good or evil are not subject to truth claims, period!
Let me say some things up front about the implications of this claim. The fact that Hitler was not evil does not imply that he was good. It does not imply moral relativism. It does not imply the impossibility of moral standards that are perceived and treated as absolute. It does not imply that all of us “should” be able to do whatever we feel like. Nor does it imply that the many soldiers, including my father, who put themselves in harm’s way to smash Hitler’s armies were acting in vain, or that the sacrifice of those who fell fighting him was irrational or absurd. What the claim does imply is that the source of moral claims is not to be sought in some kind of independent thing floating about outside of us, but in the subjective emotions of individuals.
Let’s consider whether the claim that Hitler was evil is rational or not. That claim is very different from the claim that Hitler is thought to be evil. In other words, it implies nothing about subjective emotions, but implies that Hitler was evil independent of them, or of anything that goes on in the minds of individuals. How could that be? If so, some agency independent of the mind must exist as a basis for the claim. Otherwise it is based on nothing. I don’t believe in a God or gods. However, it has been suggested that, if one exists, objective good and evil can be determined by His opinion on the matter. This claim was debunked more than two millennia ago in Plato’s Euthyphro. What else might be floating around in the aether that could serve as a basis for truth claims about morality? Something made of matter as we know it? I find it very hard to make such a connection, although I am always open to suggestions. Something made of energy? As Einstein pointed out, the two are convertible, so that doesn’t get us anywhere.
If it doesn’t consist of either matter or energy, where, then, are we to look for the source of this elusive grounding of moral claims? In the spirit world? By all means, if you think it’s reasonable to believe in things for which there is no credible evidence. What other “thing” or “entity” could there possibly be that could fill the need? Again, I’m open to suggestions, but I’m not aware of anything of the sort, and I’m not prepared to accept the argument that there is an objective basis for morality, but that the basis is nothing.
Consider moral emotions. They are certainly capable of explaining why some things or individuals are thought to be evil. However, analogs of these emotions are to be found in other animals. It seems reasonable to suppose that their existence in both human beings and other species can be explained by natural selection. In other words, the existence of the genes responsible for spawning the relevant behavioral predispositions apparently increased the probability that those genes would survive and reproduce, or at least that they did at the time that the genes first appeared. Mathematical models seem to confirm this conclusion, and great heaps of books and papers have been published based on it. However, if there is an objective basis for moral claims, presumably it must be independent of these randomly selected emotional predispositions. The “real” good and “real” evil must either have no connection to them, or there must be some reason why randomly evolved genes not only improve the odds of survival, but at the same time mysteriously conform to objective moral standards. This conclusion seems neither rational nor plausible to me. What does seem a great deal more rational and plausible is what Edvard Westermarck wrote on the subject more than a century ago:
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion.
The presumed objectivity of moral judgments thus being a chimera there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is that the moral concepts are based upon emotions and that the contents of an emotion fall entirely outside the category of truth.
Consider the case of individual Nazis. Goebbels is a good example, as, unlike Hitler, he left extensive diaries. Read them, and you will discover an individual not unlike those who are occasionally described as “social justice warriors” in our own time. He was an activist who sacrificed his time and occasionally his health in the fight to right what seemed to him a terrible injustice: the “enslaving” of the German people by the Treaty of Versailles. He was hardly a man who woke up every morning scratching his head wondering what evil deed he could do that day. Rather, he was firmly convinced he was fighting for the good, in the form of the liberation of the German people from the clutches of those whom he imagined sought to enslave and crush them. He was a convinced socialist, well to the left of Hitler in that regard. He honored and loved his family, and believed firmly in the Christian God, frequently invoking His aid in the diaries. He often railed at the “gypsy life” he lived before the Nazis came to power, constantly traveling here and there for speeches and demonstrations, and bewailed his rundown condition because of constant overwork. He fantasized about running off to Switzerland with one of his many lady loves. His strong sense of duty, however, held him to his work in pursuit of what he firmly believed was the “good.”
Clearly, then, Goebbels was incapable of distinguishing between “good and evil” as they are commonly defined today, at least in the U.S. and much of Europe. The same may be said of Hitler, who was a very similar type, dedicated to what he imagined was a noble and highly ethical cause, as can be seen in the pages of his Mein Kampf. If he actually was “evil,” then, we must conclude, based at least on the standards prevailing in U.S. courts of law, that he was less “evil” than those who know the difference between right and wrong. If we were to insist on the existence of objective morality, we could go on multiplying these “extenuating circumstances” indefinitely, having a fine time in the process debating the precise level of Hitler’s criminal liability for his deeds in terms of “real” good and “real” evil. I submit that it would be more reasonable, not to mention less mentally taxing, to simply admit the obvious: that the categories “real” good and “real” evil are chimeras.
Which brings us back to my earlier comments about moral relativity. I do not believe that it is possible for one individual to be more objectively good or more objectively evil than another. In spite of that, I make moral judgments about other drivers on the road all the time. We make moral judgments because it is our nature to make moral judgments. For the most part, at least, it is not our nature to be “moral relativists,” and all the scribblings of all the philosophers on the planet won’t alter human nature, as the Communists, among others, discovered at great cost, both to themselves and the rest of us. The fact that Hitler and the rest of the Nazis weren’t objectively evil does not somehow render the fight against Nazism irrational or impermissible. As Hume pointed out long ago, we are motivated to do things by emotion, not reason, and reason must ever be the slave of emotion.
Most of us have an emotional attachment to staying alive, and to ensuring the survival of those we love. If Nazis or anyone else wanted to kill or enslave us or them, there is no objective reason why we should resist. However, in my case and, I think, in most others, it would be my nature to resist, and just as there is no objective reason why I should, there is also no objective reason why I should not. It might occur to me in the process that my reaction to the emotional desire to resist was in harmony with the reasons that the desire existed in the first place, namely, because it increased the odds of genetic survival. In my case, this would increase my will to resist, especially in the world of today where so many actions in response to moral emotions seem better calculated to result in genetic suicide. In the process of resisting, I would hardly dispense with such powerful weapons as moral emotions merely because I am aware of the non-existence of objective good and evil. On the contrary, I would exploit every opportunity to portray my enemy as evil, and there would be nothing either contradictory or objectively “wrong” about doing so.
As for absolute morality, no such thing is possible in an objective sense, but it is certainly possible in a subjective sense. There is no objective reason whatsoever why we should not come up with a version of morality consistent with our nature, seek to live by it, and punish those who don’t. Eventually, we would tend to imagine compliance with those moral rules to be “really good” and failure to comply with them to be “really evil,” because that is our nature. I personally would prefer living under such a system, assuming we were vigilant in preventing morality from overstepping its bounds.
As for the Nazis, it will greatly facilitate the historical task of understanding what manner of people they were and why they did what they did if we go into it unencumbered with fantasies about objective good and evil. Communism was actually a very similar phenomenon. Its most substantial difference from Nazism was probably the mere substitution of “bourgeoisie” for Jews as the outgroup of choice. The fool’s errand of trying to pigeonhole the Nazis on some imaginary moral scale did not help us to avoid Communism, nor is it likely to help us avoid similar historical blunders in the future. It would be better to actually understand the emotional nature of individuals like Hitler and Goebbels, which is probably a great deal more similar to the emotional nature of the rest of us than we care to admit, and how it motivated them to do what they did. Or at least it would be better for those of us who would prefer to avoid another dose of Communism or Nazism.
Posted on February 1st, 2016 1 comment
Inertial confinement fusion, or ICF, is one of the two “mainstream” approaches to harnessing nuclear fusion in the laboratory. As its name would imply, it involves dumping energy into nuclear material, commonly consisting of heavy isotopes of hydrogen, so fast that its own inertia holds it in place long enough for significant thermonuclear fusion to occur. “Fast” means times on the order of billionths of a second. There are, in turn, two main approaches to supplying the necessary energy: direct drive and indirect drive. In direct drive the “target” of fuel material is hit directly by laser beams or some other type of energetic beam. In indirect drive, the target is mounted inside a “can,” referred to as a “hohlraum.” The beams are aimed through holes in the hohlraum at the inner walls. There they are absorbed, producing x-rays, which supply the actual energy to the target.
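The “billionths of a second” figure can be checked with a rough estimate (the numbers here are my own round assumptions, not taken from any paper discussed in this post): the heated fuel holds together for roughly the time it takes a sound wave to cross it, so the confinement time is about the compressed target radius divided by the ion sound speed.

```latex
\tau \;\sim\; \frac{R}{c_s}
\;\approx\; \frac{10^{-2}\,\mathrm{cm}}{10^{8}\,\mathrm{cm/s}}
\;\approx\; 10^{-10}\,\mathrm{s},
\qquad
c_s \;\approx\; \sqrt{\frac{\gamma Z k_B T}{m_i}}
```

Assuming a radius of order 100 microns and a keV-scale plasma temperature (sound speed of order $10^{8}$ cm/s), the confinement time comes out around a tenth of a nanosecond, consistent with the time scale quoted above.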
To date, the only approach used at the biggest ICF experimental facility in the world, the National Ignition Facility, or NIF, at Lawrence Livermore National Laboratory (LLNL), has been indirect drive. So far, it has failed to achieve the goal implied by the facility’s name – ignition – defined as more fusion energy out than laser energy in. A lot of very complex physics goes on inside those cans, and the big computer codes used to predict the outcome of the experiments didn’t include enough of it to be right. They predicted ignition, but LLNL missed it by over a factor of 10. That doesn’t necessarily mean that the indirect drive approach will never work. However, the prospects of that happening are becoming increasingly dim.
Enter direct drive. It has always been the preferred approach at the Naval Research Laboratory and the Laboratory for Laser Energetics (LLE) at the University of Rochester, the latter home of the second biggest laser fusion facility in the world, OMEGA. They lost the debate to the guys at LLNL as the NIF was being built, but still managed to keep a crack open for themselves, in the form of polar direct drive. The NIF beams were arranged to be ideal for indirect drive; it would have been too difficult and expensive to make them movable into a perfectly symmetric arrangement for direct drive as well. However, by carefully tailoring the length and power of each of the 192 laser beams, and delicately adjusting the thickness of the target at different locations, it is still theoretically possible to get a symmetric implosion out of the indirect drive beam geometry. That is the idea behind polar direct drive.
With indirect drive on the ropes, there are signs that direct drive may finally have its turn. One such sign was the recent appearance in the prestigious journal Physics of Plasmas of a paper entitled Direct-drive inertial confinement fusion: A review. At the moment it is listed as the “most read” of all the articles to appear in this month’s issue, although actually reading all of it is a feat probably beyond the patience of non-experts. The article is more than 100 pages long, and contains no fewer than 912 references to work by other scientists. However, look at the list of authors. They include familiar direct drive stalwarts like Bob McCrory, John Sethian, and Dave Meyerhofer. Yet one can tell which way the wind is blowing by looking at some of the other names. They include some that haven’t been connected so closely with direct drive in the past. Notable among them is Bill Kruer, a star in the ICF business who specializes in theoretical plasma physics, but who works at LLNL, home turf for the indirect drive approach.
Will direct drive ignition experiments happen on the NIF? Not only science, but politics is involved, and not just on Capitol Hill. Money is a factor, as operating the NIF isn’t cheap. There has always been a give and take, or tug of war, if you will, between the weapons guys and the fusion energy guys. It must be kept in mind that the NIF was built primarily to serve the former, and historically they have not always been enthusiastic about ignition experiments. There is enough energy in the NIF beams to create conditions sufficiently close to those that occur in nuclear weapons without it. Finally, many in the indirect drive camp are far from being ready to throw in the towel.
In spite of that, some tantalizing signs of a change in direction are starting to turn up. Of course, the “usual suspects” at NRL and LLE continue to publish direct drive papers, but a paper was also just published in the journal High Energy Density Physics entitled, A direct-drive exploding-pusher implosion as the first step in development of a monoenergetic charged-particle backlighting platform at the National Ignition Facility. An exploding pusher target is basically a little glass shell filled with fusion fuel, usually in gaseous form. For various reasons, such targets are incapable of reaching ignition/breakeven. However, they were the type of target used in the first experiments to demonstrate significant fusion via laser implosion at the now defunct KMS Fusion, Inc., back in 1974. According to the paper, all of the NIF’s 192 beams were used to implode such a target, and they were, in fact, tuned for polar direct drive. However, they were “dumbed down” to deliver only a little over 43 kilojoules to the target, only a bit more than two percent of the design limit of 1.8 megajoules! Intriguingly enough, that happens to be just about the same energy that can be delivered by OMEGA. The target was filled with a mixture of deuterium (hydrogen with an extra neutron) and helium-3. Fusion of those two nuclei produces a highly energetic proton at 14.7 MeV. According to the paper, copious amounts of these mono-energetic protons were detected. Ostensibly, the idea was to use the protons as a “backlighter.” In other words, they would be used merely as a diagnostic, shining through some other target to record its behavior at very high densities. That all sounds a bit odd to me. If all 192 beams are used for the backlighter, what’s left to hit the target that’s supposed to be backlighted? My guess is that the real goal here was to try out polar direct drive for later attempts at direct drive ignition.
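Both numbers quoted above are easy to sanity-check. Here is a quick sketch (the reaction Q-value and particle masses are my own assumptions, not figures taken from the paper): in D + ³He → ⁴He + p, roughly 18.35 MeV is released, and simple two-body kinematics from rest gives the lighter proton the lion’s share of it.

```python
# Back-of-the-envelope checks for two numbers mentioned above.
# Assumed inputs (not from the paper itself): the D + He-3 -> He-4 + p
# reaction releases about 18.35 MeV, shared between the two products
# in inverse proportion to their masses (momentum conservation, initial
# momentum taken as zero).

M_ALPHA = 4.002602   # He-4 mass, atomic mass units
M_PROTON = 1.007276  # proton mass, atomic mass units
Q_MEV = 18.35        # approximate reaction Q-value, MeV

# The lighter product carries the larger share of the released energy.
e_proton = Q_MEV * M_ALPHA / (M_ALPHA + M_PROTON)
print(f"proton energy ~ {e_proton:.1f} MeV")  # close to the 14.7 MeV quoted

# Fraction of the NIF design energy actually delivered in the shot.
fraction = 43e3 / 1.8e6  # 43 kJ out of 1.8 MJ
print(f"delivered fraction ~ {100 * fraction:.1f} percent")
```

The proton energy comes out near 14.7 MeV, and 43 kilojoules out of 1.8 megajoules is indeed a bit more than two percent.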
All I can say is, stay tuned. The guys at General Atomics down in San Diego who make the targets for NIF may already be working on a serious direct drive ignition target for all I know. Regardless, I hope the guys at LLNL manage to pull a rabbit out of their hat and get ignition one way or another. Those “usual suspects” among the authors I mentioned have all been at it for decades now, and are starting to get decidedly long in the tooth. It would be nice if they could finally reach the goal they’ve been chasing for so long before they finally fade out of the picture. Meanwhile, I can but echo the words of Edgar Allan Poe:
Over the Mountains
Of the Moon,
Down the Valley of the Shadow,
Ride, boldly ride,
The shade replied —
If you seek for Eldorado.
Posted on January 31st, 2016 2 comments
The question itself is absurd. It implies the existence of things – objective good and evil – that are purely imaginary. Good and evil seem to be real, but they are actually only words we assign to subjective emotional responses. Darwin was aware of the fact, as demonstrated in his writings. Westermarck stated it as a scientific theory in his Ethical Relativity. Arthur Keith and others before him noted a critical aspect of human morality that is commonly ignored to this day: its dual nature. Robert Ardrey referred to it as the “Amity-Enmity Complex,” noting that we categorize others into ingroups, with which we associate “good” qualities, and outgroups, with which we associate “evil.” Denial of the dual nature of morality has been one of the more damaging legacies of the Blank Slate. Among other things, it has obscured the reasons for the existence of such variants of outgroup identification as racism, religious bigotry, and anti-Semitism. In the process, it has obscured from the consciousness of those who are loudest in condemning these “evils” that they, too, have outgroups, which they commonly hate more bitterly and irrationally than those they accuse of such sins.
The current attempts in the UK to establish a travel ban on Donald Trump are a good illustration of the absurdities that are commonly the result of failure to recognize the simple truths stated above. As I write this, 570,000 Brits have signed a petition calling for such a ban. In response, the British parliament has begun debating the issue. All this is justified on moral grounds. Ask one of the petition signers why, exactly, Trump is evil, and typical responses would include the claim that he is a racist, a religious bigot, spreads “hate-speech,” etc. If one were to continue the line of questioning, asking why racism is bad, they might respond that it leads to inequality. Ask them why equality is good, and they might start losing patience with the questioner because, in fact, they don’t know. None of these saintly petition signers has the faintest clue why Trump is “really evil.” It’s no wonder. Legions of philosophers have been trying to catch the gaudy butterflies of “good” and “evil” for the last few millennia. They have failed for a very good reason. The butterflies don’t exist.
Let us attempt to bring the debate back into the real world. Trump wants to expel illegal immigrants from the U.S., and end immigration of Muslims. These are not irrational goals. As history demonstrates, they are both legally and physically possible. In both cases, they would recognize the existence of human nature in general, and our tendency to perceive others in terms of ingroups and outgroups in particular. They would result in the exclusion from the country of people who have historically perceived the lion’s share of the existing population of the United States as an outgroup. In the case of Muslims, their holy book, the Quran, includes many passages forbidding friendship with Christians and condemning those with commonly held Christian beliefs to burn in hell for eternity. In the case of Hispanics, they come from cultures that have historically perceived North Americans as exploiters and imperialist aggressors. Both of these groups, in turn, are typically perceived by Americans as belonging to outgroups. Allowing them to remain in or enter the country has already resulted in civil strife. If history is any guide, there is a non-trivial possibility that the eventual result will be civil war. These are outcomes that most current US citizens would prefer to avoid. They are being told, however, that to avoid being “racist,” or “bigoted,” or, in fine, “immoral,” they must accept these outcomes. In other words, to be “good,” they must practice an absurd form of altruism, in which they must make tangible sacrifices, even though the chances that they will ever receive anything back in return are nil. Otherwise, they will be “evil.” This unusual form of moral behavior is not encountered elsewhere in the animal kingdom.
Moral emotions certainly do not exist to promote “good” and defeat “evil.” They exist solely because, at points in time that were utterly unlike the present, they happened to increase the odds that the genes responsible for spawning them would survive and reproduce. Importing civil strife and, potentially, civil war is not a good strategy for promoting genetic survival. The subjective desire to direct moral emotions in order to accomplish goals that are in harmony with the reasons those emotions exist to begin with is neither “good” nor “evil.” However, as long as one recognizes the necessarily subjective nature of those goals, there is no basis for the claim that pursuing them is irrational. In short, expelling illegal immigrants and banning Muslim immigration are not “evil,” because there is no such thing as “evil,” beyond its subjective and dependent existence in the consciousness of individuals. They are, however, rational, in the sense that they are legal and achievable, and are also in harmony with the goal of genetic, not to mention cultural survival. Most US citizens seem to recognize this fact at some conscious or subconscious level. This explains their support for Trump, and what one might call their immune response to a deluge of culturally alien immigrants, whether legal or illegal. As so often happens, many of those who don’t “get it” are intellectuals, who have a disconcerting tendency to bamboozle themselves with ideological concoctions from which they imagine they can distill the “good,” often at the expense of others not afflicted with a similar talent for self-delusion.
The petition signers, on the other hand, would be somewhat embarrassed if asked to justify their condemnation of Trump on grounds other than such imaginary categories as “good” and “evil.” Perhaps they might argue that he is acting against the “brotherhood of man,” and that the “brotherhood of man” is a rational goal because it would reduce or eliminate warfare and other forms of violence within our own species, goals that are also in harmony with genetic survival. To this, one need merely respond, “Look in the mirror.” There, if they look closely, they will see the reflection of their own hatreds, and of their own outgroups. They are no more immune to human nature than the racists and bigots they so piously condemn. After their own fashion, like virtually every other human being on the planet, they are “racists and bigots” themselves. The only difference between them and those they condemn is in the choice of outgroup. Their own hatreds expose the “brotherhood of man” as a fantasy.
In short, all these Brits who imagine themselves dwelling on pinnacles of righteousness don’t oppose Trump’s policies on rational grounds. They oppose him because they hate him, and they hate him because he is included in their outgroup, and must, therefore, be “immoral.” In that they are similar to American Trump-haters. Typical Brits, on the other hand, have many other hatreds in common. Many of them have a long and abiding hatred of Americans. Going back to the years just after we gained our independence, one may consult the pages of the British Quarterly Review, probably their most influential journal during the first half of the 19th century. There you will find nothing but scorn for Americans and their “silly paper Constitution.” As anyone who has read a little history is aware, little changed between then and the most recent orgasm of anti-American hatred in Europe, in which the Brits were eager participants. It’s ironic that these hatemongers are now sufficiently droll to accuse others of “hate-speech.” Ideologues may be defined as those who identify their in- and outgroups according to ideological criteria. In common with ideologues everywhere, British ideologues’ hatred of the “other,” so defined, is as virulent as the hatred of any racist you ever heard of. In other words, to judge by their “racism,” they are at least as “evil” as the outgroups they condemn. The only difference is that their hatred is aroused by “races” that differ from them in political alignment rather than skin color. It is this variant of “racism” that they are now directing at Trump.
If Trump does become President, it would not be surprising to see him retaliate against the British hatemongers, if not in response to moral emotions, perhaps as a mere matter of self-defense. To begin, for example, he might expel the British scientists who are now so ubiquitous at our national weapons laboratories, with free access to both our classified nuclear weapons information and to expensive experimental facilities, to the construction and maintenance of which they have contributed little if anything. Beyond that, we might deliver some broad hints as to the violation of the Monroe Doctrine posed by their occupation of the Malvinas Islands, accompanied by some judicious arming of Argentina.
None of what I have written above implies nihilism, or moral relativism, nor does it exclude the possibility of an absolute morality. I merely recognize the fact that good and evil are not objective things, and draw the obvious conclusions. Facts are not good or evil. They are simply facts.
Posted on January 17th, 2016 7 comments
That inimitable and irascible physicist Lubos Motl, who blogs at The Reference Frame, sought to vindicate the existence of free will in a recent post entitled Free Will of Particles and People. To begin, he insisted that he must have free will because he feels like he has it:
The actual reason why I am sure about the existence of free will (and I mean my free will) is that I feel it.
Well, I feel it, too, but human beings have been known to feel any number of things that aren’t true, so I don’t find that argument convincing. Lubos’ second argument is based on the fact that the universe is not deterministic in the classical sense. We live in a quantum universe, and quantum phenomena appear to be random. Free will, at least as defined by Lubos, exists at the level of atomic and sub-atomic particles; since single particles can change the state of cells, and single cells can change the state of the human brain, we, too, must have free will. I’m not so sure about that one either. True, the outcome of a measurement at the quantum scale is unpredictable, and therefore appears to be random, but we don’t really know that it is. We can repeat experiments, but we can never measure exactly the same particle at exactly the same time in exactly the same place twice.
Then there’s the problem of what all this stuff we’re measuring really is. We know how matter behaves at the atomic scale in great detail. The fact that the atomic bomb worked demonstrated that convincingly enough. We can use Maxwell’s equations and the Schrödinger Equation to make particles of matter and energy jump through hoops, but that doesn’t alter the fact that we don’t really know what they are at the most fundamental level, or even why they exist at all. In short, I have a problem with making positive claims about things we don’t understand. Positive claims about free will assume a level of knowledge that we just don’t have.
On the other hand, I have no problem at all with assuming that we do have free will. As Lubos says, it certainly feels like we do, and if we actually do, then we are merely assuming something that is true. On the other hand, if we don’t have free will, then assuming that we do couldn’t change things for the worse, for the very good reason that, lacking free will, we would be incapable of changing anything.
Arguments against the existence of free will are absurd, because they imply the assumption of free choice. If there is no free will, then there is no point in arguing about it, because it can’t possibly change anything in a way that wasn’t pre-programmed before the argument started. True, if there is no free will, then the one making the argument couldn’t decide not to make it, but the fundamental absurdity remains. What could possibly be the point of arguing with me about my assumptions regarding free will if I have no choice in the matter? The future will be different depending on whether a robot tightens or loosens a screw. However, if the robot is pre-programmed, and has no choice in the matter, arguing with it won’t alter a thing. Nothing will shake the future out of its predestined rut. In spite of that, I suspect that the most insistent deniers of free will don’t really believe their arguments are pointless. And yet their arguments would be completely pointless unless they believed in their heart of hearts, either that they could make a free choice to argue one way or the other, or that the person listening could make such a choice.
If there is no free will, then my assumption that there is won’t change a thing. If, on the other hand, we do have free will, and my assumption that we do despite my lack of any proof to that effect actually represents a free choice, then it seems to me that it’s a choice that is likely to make life a great deal more pleasant. Where’s the fun in being a robot? As far as I’m concerned, the assumption is justified if I can relieve even a single person of the despair and sense of futility that are predictable responses to the opposite assumption.
We can certainly debate the question of free will as stubbornly as we please. However, I would contend that we lack the knowledge necessary to decide the matter one way or the other. Perhaps one day that knowledge will be ours. If it turns out we actually don’t have free will, then it will be illogical to blame me for my assumption that we do. If, on the other hand, we discover that we actually do have free will, then it seems that those who argued furiously that we don’t will look rather foolish. Why take the risk?
Posted on January 16th, 2016 No comments
Ethics is generally looked upon as a “normative” science, the object of which is to find and formulate moral principles and rules possessing objective validity. The supposed objectivity of moral values, as understood in this treatise, implies that they have a real existence apart from any reference to a human mind, that what is said to be good or bad, right or wrong, cannot be reduced merely to what people think to be good or bad, right or wrong. It makes morality a matter of truth and falsity, and to say that a judgment is true obviously means something different from the statement that it is thought to be true. The objectivity of moral judgments does not presuppose the infallibility of the individual who pronounces such a judgment, nor even the accuracy of a general consensus of opinion; but if a certain course of conduct is objectively right, it must be thought to be right by all rational beings who judge truly of the matter and cannot, without error, be judged to be wrong.
Westermarck dismissed moral realism as a chimera. So do I. Indeed, in view of what we now know about the evolutionary origins of moral emotions, the idea strikes me as ludicrous. It is, however, treated matter-of-factly, as if it were an unquestionable truth, and not only by the general public. Philosophers merrily discuss all kinds of moral conundrums and paradoxes in academic journals, apparently in the belief that they have finally uncovered the “truth” about such matters, to all appearances with no more fear of being ridiculed than the creators of the latest Paris fashions. The fact is all the more disconcerting if one takes the trouble to excavate the reasons supplied for this stubborn belief that subjective emotional constructs in the minds of individuals actually relate to independent things. Typically, they are threadbare almost beyond belief.
Recently I discussed the case of G. E. Moore, who, after dismissing the arguments of virtually everyone who had attempted a “proof” of moral realism before him as fatally flawed by the naturalistic fallacy, supplied a “proof” of his own. It turned out that the “objective good” consisted of those things that were most likely to please an English country gentleman. The summum bonum was described as something like sitting in a cozy house with a nice glass of wine while listening to Beethoven. The only “proof” supplied for the independent existence of this “objective good” was Moore’s assurance that he was an expert in such matters, and that it was obvious to him that he was right.
I recently uncovered another such “proof,” this time concocted in the fertile imagination of the Swedish philosopher Torbjörn Tännsjö. It turned up in an interview on the website of 3:AM Magazine under the title, The Hedonistic Utilitarian. In response to interviewer Richard Marshall’s question,
Why are you a moral realist and what difference does this make to how you go about investigating morals from, for example, a non-realist?
Tännsjö replied,
I am indeed a moral realist. In particular, I believe that one basic question, what we ought to do, period (the moral question), is a genuine one. There exists a true answer to it, which is independent of our thought and conceptualization. My main argument in defense of the position is this. It is true (independently of our conceptualization) that it is wrong to inflict pain on a sentient creature for no reason (she doesn’t deserve it, I haven’t promised to do it, it is not helpful to this creature or to anyone else if I do it, and so forth). But if this is a truth, existing independently of our conceptualization, then at least one moral fact (this one) exists and moral realism is true. We have to accept this, I submit, unless we can find strong reasons to think otherwise.
In reading this, I was reminded of PFC Littlejohn, who happened to serve in my unit when I was a young lieutenant in the Army. Whenever I happened to pull his leg more egregiously than even he could bear, he would typically respond, “You must be trying to bullshit me, sir!” Apparently Tännsjö doesn’t consider Darwin’s theory, or Darwin’s own opinion regarding the origin of the moral emotions, or the flood of books and papers on the evolutionary origins of moral behavior, or the convincing arguments for the selective advantage of just such an emotional response as he describes, or the utter lack of evidence for the physical existence of “moral truths” independent of our “thought and conceptualization,” as sufficiently strong reasons “to think otherwise.” Tännsjö continues,
Moral nihilism comes with a price we can now see. It implies that it is not wrong (independently of our conceptualization) to do what I describe above; this does not mean that it is all right to do it either, of course, but yet, for all this, I find this implication from nihilism hard to digest. It is not difficult to accept for moral reasons. If it is false both that it is wrong to perform this action and that it is right to perform it, then we need to engage in difficult issues in deontic logic as well.
Yes, in the same sense that deontic logic is necessary to determine whether it is true or false that there are fairies in Richard Dawkins’ garden. No deontic logic is necessary here – just the realization that Tännsjö is trying to make truth claims about something that is not subject to truth claims. The claim that it is objectively “not wrong” to do what he describes is as much a truth claim, and therefore just as irrational, as the claim that it is wrong. As for his equally irrational worries about “moral nihilism,” his argument is similar to those of the religious true believers who think that, because they find a world without a God unpalatable, one must therefore perforce pop into existence. Westermarck accurately described the nature of Tännsjö’s “proof” in his The Origin and Development of the Moral Ideas, where he wrote,
As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivise the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion.
The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood. The ultimate reason for this is, that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.
Today, Westermarck is nearly forgotten, while G. E. Moore is a household name among moral philosophers. The Gods and angels of traditional religions seem to be in eclipse in Europe and North America, but “the substance of things hoped for,” and “the evidence of things not seen” are still with us, transmogrified into the ghosts and goblins of moral realism. We find atheist social justice warriors hurling down their anathemas and interdicts more furiously than anything ever dreamed of by the Puritans and Pharisees of old, supremely confident in their “objective” moral purity.
And what of moral nihilism? Dream on! Anyone who seriously believes that anything like moral nihilism can result from the scribblings of philosophers has either been living under a rock, or is constitutionally incapable of observing the behavior of his own species. Human beings will always behave morally. The question is what kind of morality we can craft for ourselves that is in harmony with our moral emotions, does the least harm, and that most of us can live with. I personally would prefer one that is based on an accurate understanding of what morality is and where it comes from.
Do I think that anything of the sort is on the horizon in the foreseeable future? No. When it comes to belief in religion and/or moral realism, one must simply get used to living in Bedlam.
Posted on January 9th, 2016
I’m hardly the only one who’s noticed the evolutionary origins of morality. I’m not even the only one who’s put two and two together and realized that, as a consequence, objective morality is a chimera. Edvard Westermarck arrived at the same conclusion more than a century ago, pointing out the impossibility of truth claims about good and evil. Many of my contemporaries agree on these fundamental facts. However, it would seem that very few of them agree with me on the implications of these truths for each of us as individuals.
Consider, for example, a recent post by Michael Shepanski, entitled Morality without smoke, that appeared on his blog Step Back, Step Forward. Shepanski appears to have no reservations about the evolutionary origins of morality, noting that those origins don’t imply the Hobbesian conclusion that all human behavior is motivated by self-interest:
To begin, human nature is not the horrible thing that some have imagined. I’m looking at you, Thomas Hobbes:
Hereby it is manifest, that during the time men live without a common Power to keep them all in awe, they are in that condition which is called Warre; and such a warre, as is of every man, against every man.
That was written in 1651, and since then we have learnt something about evolutionary psychology. It turns out that our genes have equipped us with more than narrow self-interest: we come with innate tendencies towards (among other things) altruism, empathy, loyalty, and retribution. Also society has systems of rewards and punishments to keep us mostly in line, whether we’re innately disposed to it or not.
Shepanski also appears to accept the conclusion that, as a result of the evolutionary origins of morality, all the religious and secular edifices concocted so far as a “basis” for it are really just so much smoke. However, he denies that this implies the “end of morality.” Rather, he would prefer a “morality without smoke,” meaning one that doesn’t have a “mystical foundation.” The problem is that he has no such morality to offer:
About now, you might expect me to put forward some non-mystical basis for morals: something from science perhaps. No, that is not my plan. I don’t believe it’s possible. I’m with the philosopher David Hume, who said we can never reason from matters of fact alone to a moral conclusion: we can’t derive an “ought” from an “is”.
I agree with Shepanski about the lack of an objective, or “smoke-free,” basis for morality, and I also agree that the lack of such a basis does not imply the “end of morality.” Morality certainly isn’t going anywhere, regardless of the musings of the philosophers. It is part of our nature, and a part that we could not well do without even if that were possible, which it isn’t. This is where things really get interesting, however, and not just in the context of Shepanski’s post, but in general. What are the consequences of the facts set forth above? What “should” we do in view of them? What do they imply in terms of how individuals should interpret their own moral emotions?
According to Shepanski,
And I agree with Hume because (a) as a matter of logic, I don’t see how you can ever get a conclusion that uses the moral words (“ought”, “should”, “good”, “evil” etc.) from premises that don’t use those words (unless the conclusion is completely vacuous), and (b) to my knowledge, no-one has ever found a way around Hume’s law (and even if some ingenious workaround can be found, we don’t want to put morality on hold while we’re waiting for it).
Summing up so far: basing morals on mysticism is noxious, and basing morals on science alone looks impossible. What next?
Tell the truth? Accept the fact that objective morality is as imaginary as Santa Claus, and consider rationally where we go from there? Well, not quite. Again quoting Shepanski,
When you get to that point, and someone asks you what your moral bedrock is based on, my advice is: don’t answer. Keep mum. Better to remain silent and be thought a fool than to speak and remove all doubt. Zip it. (Or if you really must have some words to fill an awkward conversational silence, then it’s probably harmless to say any of the following: “It’s a deeply held personal belief,” “It’s just the way I was brought up,” or “These truths we hold to be self-evident”. Just don’t attempt a real defense: don’t attempt to deduce your moral bedrock from anything else.)
In other words, as my old drama teacher used to put it, “Ad lib!” Just make sure you never reveal the little man behind the curtain. Manipulate moral emotions to your heart’s content, but just make sure you never tell the truth. Of course, this “solution” is very convenient for the “experts on ethics.” They get to continue pretending that they’re actually experts about something real. That, of course, is exactly what the legions of them plying their trade in academia and elsewhere are doing as I write this.
As I pointed out above, what’s interesting about Shepanski’s take on morality is that he derives it from the same basic facts as my own. In short, he realizes that there is no such thing as objective morality, and he knows that morality is the expression of evolved behavioral traits in creatures with large brains, capable of reasoning about what their moral emotions are trying to tell them. As he puts it:
Without smoke, moral principles are pegged to one thing only: our willingness to accept their consequences. Which consequences we are willing to accept is determined, to a large extent, by our evolved psychological tendencies, including altruism, empathy, and self-interest. Within the human species these evolved tendencies are probably more similar than different, so there is hope that our moral principles will converge. Reasoning with one another… can bring the convergence forward.
Is that really what we “ought” to do? Embrace a future in which the best manipulators of moral emotions get to guide their “convergence” to whatever end state they happen to prefer? I can’t answer that question. Like Shepanski, I lack any “bedrock” basis for telling anyone what they “ought” to do as a matter of principle. When it comes to “oughts,” I must limit myself to suggesting what they “ought” to do as a mere matter of utility in order to best achieve goals that, for one reason or another, happen to be important to them. With that caveat, I suggest that they ought not to follow Shepanski’s advice.
If human morality is really the expression of evolved behavioral traits, as Shepanski and I both agree, then those traits didn’t just suddenly pop into existence. Perhaps, like the human eye, they arose from extremely primitive origins, and were gradually refined to their present state over the eons. Regardless of the precise sequence of events, it’s clear that they evolved in times radically different from the present. If they evolved, then they must have had some survival value at the time they evolved. It is certainly not obvious that they are similarly effective in promoting our survival today; indeed, it would be surprising if they were. One can cite many examples in which they appear to be accomplishing precisely the opposite, leading to what I have referred to elsewhere as “morality inversions.”
In other words, while I think it likely that most of us have some subjective notion of purpose, of the meaning of life, of aspirations or goals that are important to us, I very much doubt that a “convergence” of morality will prove to be the most effective way for most of us to achieve those ends. In the first place, manipulating atavistic emotions strikes me as a dangerous game. In the second, human moral emotions don’t promote “convergence.” As Sir Arthur Keith pointed out long ago, they are dual in nature, and include an innate tendency to identify an outgroup, whose members it is natural to despise and hate. One need only glance through the comment section of any blog or website hosted by some proponent of the “brotherhood of man” to find abundant artifacts of the intense hatred felt for the ideological “other.” Hatred of the “other” has been with us throughout recorded history, is alive and well today, especially among those of us who most pique themselves on their superior piety and moral purity, and will certainly continue to be a prominent trait of our species for a long time to come.
What do I suggest as a more “useful” approach than Shepanski’s “convergence?” The truth is always a good place to start. We have the misfortune to live in an age dominated by Puritans in both the traditional spiritual and modern secular flavors. Their demands to be taken seriously, as well as the wellsprings of such power as they possess, are absolutely dependent on maintaining the illusion that there are such things as objective good and evil. As a result, promoting a general knowledge and appreciation of the consequences of the truth won’t be easy. It will entail pulling the rug out from under these obscurantists. Beyond that, we need to restrict morality to the limits within which we can’t do without it, such as the common, day-to-day interactions of human beings. My personal preference would be to come up with a common morality that limits the harm we do to each other as much as possible, while at the same time leaving each of us as free as possible to pursue whatever goals in life we happen to have.
None of the above are likely to happen anytime soon. No doubt that will come as some comfort to those who “feel in their bones” that good and evil are real things, independent of the human minds that concoct them. Still, it seems that there’s an increasing tendency, at least in some parts of the world, for people to jettison the silly notion of God into the same realms as Santa Claus and the Easter Bunny. If that trend is any indication, perhaps there is at least a ray of hope.
Posted on November 29th, 2015
There are few better demonstrations of the fact that the term Homo sapiens is an oxymoron than the results of our species’ attempts to “interpret” the innate emotional responses that are the source of all the gaudy manifestations of human morality. Moral emotions exist. Evolution by natural selection is the reason for their existence. If they did not exist, there would be no morality as we know it. In other words, the only reason for the illusion that Good and Evil are objects, things-in-themselves that don’t depend on any mind, human or otherwise, for their existence, is the fact that, over some period of time, that illusion made it more likely that the genes responsible for spawning it would survive and reproduce. Recently it has been amply demonstrated that, over a different period of time, under different conditions, the very same emotions spawned by the very same genes can accomplish precisely the opposite. In other words, they can promote their own destruction. Mother Nature, it would seem, has a fondness for playing practical jokes.
The elevation of colonialism in some circles to the status of Mother of all Evils is a case in point. It has long been the “root cause” of choice for all sorts of ills. Prominent among them lately has been Islamic terrorism, as may be seen here, here, here and here. Even prominent politicians have jumped on the bandwagon, and we find them engaged in the ludicrous pursuit of explaining to Islamic terrorists, who have been educated in madrassas and know the Quran by heart, that they are not “real Moslems.” It must actually be quite frustrating for the terrorists, who have insisted all along that they are acting on behalf of and according to the dictates of their religion. It also raises the question of how, if Islam is a “religion of peace,” all of north Africa, much of the Middle East outside of Arabia, Turkey, significant parts of Europe, Iran, etc., formerly parts of the Christian Roman Empire or the Zoroastrian Persian Empire, ever became Moslem. Of course, it was accomplished by military force, and the ensuing colonization of these countries resulted in the destruction of the “indigenous” cultures and traditions that were overrun. Interestingly, we seldom find this Moslem version of colonialism treated as a form of immorality. Apparently we are to assume that there is a statute of limitations on the application of the relevant moral principles.
Be that as it may, in bygone days colonialism was often also invoked as the “root cause” for the promiscuous massacres of the Communists, and is the “root cause” of choice for the ills, real or imagined, of all sorts of minorities as well. I have long maintained that Good and Evil have no objective existence. However, whether one agrees with that assertion or not, it seems only reasonable that the terms at least be defined in a way that is consistent with their evolutionary roots. In that case, the notion that colonialism was evil becomes absurd. It is yet another example of a morality inversion, characterized by the whimsical tendency of human moral emotions to stand on their heads in response to sufficiently drastic changes to the external environment.
What were the actual results of colonialism? We will limit our examination to white colonialism, as colonialism by other ethnic groups, although of frequent occurrence in the past, is not generally held to be such an “evil.” Rather, colonialism as practiced by other than whites is deemed a mere expression of “culture.” It would therefore be “racist” to consider it evil. In the first place, then, white colonialism has led to a vast expansion in the area of the planet inhabited primarily by whites. They are now the dominant ethnic groups on whole continents that they never knew existed little over half a millennium ago. This must certainly be considered good if we are to define the Good consistently with the “root causes” of morality itself. Interestingly, colonialism was also good in this way for other ethnic groups. Sub-Saharan blacks, for example, now have a prominent presence over wide territories that they never would have seen in the absence of the white practice of carrying slaves to their colonies. It is unlikely that, if faced with the choice, blacks would trade a world that never experienced white colonialism for the more “evil” world we actually inhabit.
Even if one chooses to divorce morality entirely from its evolutionary roots, and assume that Good and Evil are independent entities floating about in the luminiferous aether with no biological strings attached whatsoever, it is not entirely obvious that white colonialism was an unmitigated evil. Indeed, if we are to accept the modern secular humanist take on objective morality, as outlined, for example, in Sam Harris’ The Moral Landscape, it would seem that the opposite is the case. According to this version of morality, “human flourishing” is the summum bonum. I would maintain that a vastly greater number of humans are flourishing today because of white colonialism than would otherwise be the case. Thanks to white colonialism, the continents on which its impact was greatest now support much larger populations of healthier people who live for longer times on average, and are less likely to die violent deaths than if it had not occurred. This, of course, is not necessarily true of every race involved. The aborigines of Tasmania, for example, were entirely wiped out, and there has probably been a significant decline in the population of the pre-Columbian inhabitants of North America. However, the opposite has been the case in Africa and India. In any case, if we are to believe the shibboleths that often emanate from the same ideological precincts that gave rise to the latest versions of morality based on “human flourishing,” all these distinctions by race don’t matter, because race is a mere social construct.
I often wonder what makes modern secular Puritans imagine that they will be judged any differently by future generations than they are in the habit of judging the generations of the past. After all, the vast majority of the inhabitants of Great Britain, France, and the other major colonialist countries did not imagine that they were being deliberately immoral during the heyday of colonialism. On what basis is it justified to judge others out of the context of their time? No one has ever come up with a rational answer to that question, for the very good reason that no such basis is possible.
The proponents of colonialism left behind a great many books on the subject. Typically, they perceived colonialism as a benign pursuit that benefited the colonial peoples as much as the colonizers. There is an interesting chapter on the subject in Volume XII (The Latest Age) of the Cambridge Modern History (Chapter XX, The European Colonies), first published in 1910. In reading it, one finds no hint of evidence that the author of the chapter, a university professor who no doubt considered himself enlightened according to the standards of the time, perceived colonialism as other than a benign force, and an expression of the energy and economic growth of the colonizing countries. Some typical passages include,
The few years under present consideration form a brief period in this long process (of European colonization since the 15th century). Yet they have seen an awakened interest in colonization and an extension of the field of enterprise which give them a unique significance. The comparative tranquility of domestic and foreign affairs in most countries of Europe has favoured a great outburst of colonizing energy, for which the growth of population and industry has provided the principal motive. The growth of population has swollen the stream of emigration; the expansion of industry has increased the desire to control sources of supply for raw materials and markets for finished products. A rapid improvement in means of communication and transport has facilitated intercourse between distant parts of the world. A vast store of accumulated wealth in old countries has been available for investment in the new.
In other words, colonization was considered a manifestation of social progress. The rights of indigenous peoples were not simply ignored as is so often claimed today. It was commonly believed, and not without reason, that they, too, benefited from colonization. Epidemic diseases were controlled, pervasive intertribal warfare and the slave trade were ended, and the brutal mistreatment of women was discouraged. On the other hand, the abuse of native populations was also recognized. Quoting again from a section of the book dealing with the Belgian Congo, the author writes,
Its history would be a fine tale of European energy applied to the development of a tropical country, had not the work been marred by a cruel spirit of exploitation gaining the upper hand. The first ten years of its existence were a period of great activity, during which a marvelous change came over the land. Splendid pioneering work was done. Experienced missionaries and travelers explored the great streams. The drink traffic, the slave trade, and cannibalism, were much diminished. The ancient Arab dominion in Central Africa was overthrown after a hard and costly struggle (1890-3). Routes of communication were opened, and railway building commenced…
But it was by its treatment of the native peoples that the Congo State attained that evil eminence which accumulating proof shows it to have well deserved. The system of administration lent itself to abuses. Large powers were devolved upon men not always adequately paid or capable of bearing their responsibilities. The supervision of their activities in the interior was impossible from places so distant as Boma and Brussels. The native was wronged by the disregard of his system of land ownership and of the tribal rights to hunt and gather produce in certain areas, as well as by a system of compulsory labor in the collection of produce on behalf of the State, enforced by barbarous punishments and responsible for continual and devastating warfare… Finally, the Belgian Parliament taking up the question, the Congo State was in 1908 transferred to Belgium, and its rulers have thus become responsible to the public opinion of a nation.
Except, perhaps, during the most active periods of European competition for colonies during the last half of the 19th century, eventual independence was recognized not merely as an ideal but as practically inevitable. In the last paragraph of the chapter the author writes,
(Great Britain’s) colonial policy has been inspired by an understanding and a wise recognition of facts. Settlers in new countries form societies; such societies, as their strength grows, desire the control of their own life; common interests draw contiguous societies together, and union creates and fosters the sense of nationality. Perceiving the course of this development, the mother country has continually readjusted the ties that bound her to her colonies, so that they might be appropriate to the stage of growth which each colony had reached. Wherever possible, she has conceded to them the full control of their own affairs; and she has encouraged contiguous colonies to unite, so that in dimensions, resources, population, and economic strength, the indispensable material foundations of a self-governing state could be formed.
The author closes with sentiments that are likely to shock modern university professors out of their wits:
Slowly the British empire is shaping itself into a league of Anglo-Saxon peoples, holding under its sway vast tropical dependencies as well as many small communities of mixed race. Strong bonds of common loyalty, race, and history, as well as the need of cooperation for defense, unite the white peoples. But the course of progress has carried the empire to an unfamiliar point in political development. Loose and elastic in its structure, it may well take a new shape under the influence of external pressure, political and economic.
In other words, the author did not share the modern penchant among the “Anglo-Saxons” for committing ethnic suicide. In our own day, of course, while it is still perfectly acceptable for every other ethnic group on the planet to speak in a similar fashion, it has become a great sin for whites to do so. Far be it from me to challenge this development on moral grounds, for the simple reason that there are no moral grounds one way or the other. Similarly, this post is in no way intended to morally condone or serve as a form of moral apologetics for colonialism. There exists no objective basis for morally judging colonialism, or anything else, for that matter. I merely point out that the moral standards relating to colonialism have evolved over time. Beyond that, one might add that colonialism accomplished ends in harmony with the reasons that led to the evolution of moral emotions to begin with, whereas the manipulation of those emotions to condemn colonialism on illusory moral grounds accomplishes precisely the opposite. That is not at all the same thing as claiming that colonialism was Good, and anti-colonialism is evil. It is merely stating a fact.
One can certainly choose to oppose, and even actively fight against, colonialism, or anything else to which one happens to have an aversion. I merely suggest that, before one does so, one have a reasonably accurate understanding of the emotions that are the cause of the aversion, and why they exist. Moral emotions seem to point to objective things, Good and Evil, that are perceived as real, but aren’t. I don’t wish to imply that no one should ever act. I merely suggest that, before they do, they should understand the illusion.
Posted on November 26th, 2015
The history of the rise and fall of the Blank Slate is fascinating, and not only as an example of the pathological derailment of whole branches of science in favor of ideological dogmas. The continuing foibles of the “men of science” as they attempt to “readjust” that history are nearly as interesting in their own right. Their efforts at post-debacle damage control are a superb example of an aspect of human nature at work – tribalism. There is much at stake for the scientific “tribe,” not least of which is the myth of the self-correcting nature of science itself. What might be called the latest episode in the sometimes shameless, sometimes hilarious bowdlerization of history just appeared in the form of another PBS special: E. O. Wilson – Of Ants and Men. You can watch it online by clicking on the link.
Before examining the latest twists in this continuously evolving plot, it would be useful to recap what has happened to date. There is copious source material documenting not only the rise of the Blank Slate orthodoxy to hegemony in the behavioral sciences, but also the events that led to its collapse, not to mention the scientific apologetics that followed its demise. In its modern form, the Blank Slate manifested itself as a sweeping denial that innate behavioral traits, or “human nature,” had anything to do with human behavior beyond such basic functions as breathing and the elimination of waste. It was insisted that virtually everything about our behavior was learned, and a reflection of “culture.” By the early 1950s its control of the behavioral sciences was such that any scientist who dared to publish anything in direct opposition to it was literally risking his career. Many scientists have written of the prevailing atmosphere of fear and intimidation, and through the 1950s, ‘60s, and early ‘70s there was little in the way of “self-correction” emanating from within the scientific professions themselves.
The “correction,” when it came, was supplied by an outsider – a playwright by the name of Robert Ardrey who had taken an interest in anthropology. Beginning with African Genesis in 1961, he published a series of four highly popular books that documented the copious evidence for the existence of human nature, and alerted a wondering public to the absurd extent to which its denial had been pursued in the sciences. It wasn’t a hard sell, as that absurdity was obvious enough to any reasonably intelligent child. Following Ardrey’s lead, a few scientists began to break ranks, particularly in Europe where the Blank Slate had never achieved a level of control comparable to that prevailing in the United States. They included the likes of Konrad Lorenz (On Aggression, first published in German in 1963), Desmond Morris (The Naked Ape, 1967), Lionel Tiger (Men in Groups, 1969), and Robin Fox (The Imperial Animal, 1971, with Lionel Tiger). The Blank Slate reaction to these works, not to mention the copious coverage of Ardrey and the rest that began appearing in the popular media, was furious. Man and Aggression, a collection of Blank Slater rants directed mainly at Ardrey and Lorenz, with novelist William Golding thrown in for good measure, is an outstanding piece of historical source material documenting that reaction. Edited by Ashley Montagu and published in 1968, it typifies the usual Blank Slate MO – attacks on straw men combined with accusations of racism and fascism. That, of course, remains the MO of the “progressive” Left to this day.
The Blank Slaters could intimidate the scientific community, but not so the public at large. Thanks to Ardrey and the rest, by the mid-70s the behavioral sciences were in danger of becoming a laughing stock. Finally, in 1975, E. O. Wilson broke ranks and published Sociobiology, a book that was later to gain a notoriety in the manufactured “history” of the Blank Slate out of all proportion to its real significance. Of its 27 chapters, 25 dealt with animal behavior. Only the first and last focused on human behavior. As far as human nature is concerned, nothing in those two chapters, nor in Wilson’s On Human Nature, published in 1978, could reasonably be described as anything other than an afterthought to the works of Ardrey and others that had appeared much earlier. Its real novelty wasn’t its content, but the fact that it was the first popular science book asserting the existence and importance of human nature by a scientist in the United States to reach a significant audience. This fact was well known to Wilson, not to mention his many Blank Slate detractors. In their diatribe Against Sociobiology, which appeared in the New York Review of Books in 1975, they wrote, “From Herbert Spencer, who coined the phrase ‘survival of the fittest,’ to Konrad Lorenz, Robert Ardrey, and now E. O. Wilson, we have seen proclaimed the primacy of natural selection in determining most important characteristics of human behavior.”
As we know in retrospect, the Blank Slaters were facing a long, losing battle against recognition of the obvious. By the end of the 1990s, even the editors at PBS began scurrying off the sinking ship. Finally, in the scientific shambles left in the aftermath of the collapse of the Blank Slate orthodoxy, Steven Pinker published his The Blank Slate. It was the first major attempt at historical revisionism by a scientist, and it contained most of the fairytales about the affair that are now widely accepted as fact. I had begun reading the works of Ardrey, Lorenz and the rest in the early 70s, and had followed the subsequent unraveling of the Blank Slate with interest. When I began reading The Blank Slate, I assumed I would find a vindication of the seminal role they had played in the 1960s in bringing about its demise. I was stunned to find that, instead, as far as Pinker was concerned, the 60s never happened! Ardrey was mentioned only a single time, and then only with the assertion that “the sociobiologists themselves” had declared him and Lorenz “totally and utterly” wrong! The “sociobiologist” given as the source for this amazing assertion was none other than Richard Dawkins! Quite apart from the fact that Dawkins was never a “sociobiologist,” and especially not in 1976 when he published The Selfish Gene, the book from which the “totally and utterly wrong” quote was lifted, he actually praised Ardrey elsewhere in the same book. He never claimed that Ardrey and the rest were “totally and utterly wrong” because they defended the importance of innate human nature, in Ardrey’s case the overriding theme of all his work. Rather, Dawkins limited that claim to their support of group selection, a fact that Pinker never gets around to mentioning in The Blank Slate. Dropping Ardrey, Lorenz and the rest down the memory hole, Pinker went on to assert that none other than Wilson had been the real knight in shining armor who had brought down the Blank Slate.
As readers who have followed this blog for a while are aware, the kicker came in 2012, in the form of E. O. Wilson’s The Social Conquest of Earth. In the crowning (and amusing) irony of this whole shabby affair, Wilson outed himself as more “totally and utterly wrong” than Ardrey and Lorenz by a long shot. He wholeheartedly embraced – group selection!
Which finally brings me to the latest episode in the readjustment of Blank Slate history. It turned up recently in the form of a PBS special entitled E. O. Wilson – Of Ants and Men. It’s a testament to the fact that Pinker’s deification of Wilson has succeeded beyond his wildest dreams. The only problem is that now it appears Pinker is in danger of being tossed on the garbage heap of history himself. You see, the editors at the impeccably politically correct PBS picked up on the fact that, at least according to Wilson, group selection is responsible for the innate wellsprings of selflessness, love of others (at least in the ingroup), altruism, and all the other endearing characteristics that make the hearts of the stalwart leftists who call the tune at PBS go pitter-pat. Pinker, on the other hand, for reasons that should be obvious by now, must continue to reject group selection, lest his freely concocted “history” become a laughing stock. To see how all this plays out circa 2015, let’s take a closer look at the video itself.
Before I begin, I wish to assure the reader that I have the highest respect for Wilson himself. He is a great scientist, and his publication of Sociobiology was an act of courage regardless of its subsequent exploitation by historical revisionists. As we shall see, he has condoned the portrayal of himself as the “knight in shining armor” invented by Pinker, but that is a forgivable lapse by an aging scientist who is no doubt flattered by the “legacy” manufactured for him.
With that, on to the video. It doesn’t take long for us to run into the first artifact of the Wilson legend. At the 3:45 minute mark, none other than Pinker himself appears, informing us that Wilson “changed the intellectual landscape by challenging the taboo against discussing human nature.” He did no such thing. Ardrey had very effectively “challenged the taboo” in 1961 with his publication of African Genesis, and many others had challenged it in the subsequent years before publication of Sociobiology. Pinker’s statement isn’t even accurate in terms of U.S. scientists, as several of them, in peripheral fields such as political science, had insisted on the existence and importance of human nature long before 1975, and others, like Tiger and Fox, although foreign born, had worked at U.S. universities. At the 4:10 mark Gregory Carr chimes in with the remarkable assertion that
If someone develops a theory about human nature or biodiversity, and in common living rooms across the world, it seems like common sense, but in fact, a generation ago, we didn’t understand it, it tells you that that person, in this case Ed Wilson, has changed the way all of us view the world.
One can but shake one’s head at such egregious nonsense. In the first place, Wilson didn’t “develop a theory about human nature.” He simply repeated hypotheses that Darwin himself and many others since him had developed. There is nothing of any significance about human nature in any of his books that cannot also be found in the works of Ardrey. People “in common living rooms” a generation ago understood and accepted the concept of human nature perfectly well. The only ones who were still delusional about it at the time were the so-called “experts” in the behavioral sciences. Many of them were also just as aware as Wilson of the absurdity of the Blank Slate dogmas, but were too intimidated to challenge them.
My readers should be familiar by now with such attempts to inflate Wilson’s historical role, and the reasons for them. The tribe of behavioral scientists has never been able to bear the thought that their “science” was not “self-correcting,” and they would probably still be peddling the Blank Slate dogmas to this day if it weren’t for the “mere playwright,” Ardrey. All their attempts at historical obfuscation won’t alter that fact, and source material is there in abundance to prove it to anyone who has the patience to search it out and look at it. We first get an inkling of the real novelty in this particular PBS offering at around minute 53:15, when Wilson, referring to eusociality in ant colonies, remarks,
This capacity of an insect colony to act like a single super-organism became very important to me when I began to reconsider evolutionary theory later in my career. It made me wonder if natural selection could operate not only on individuals and their genes, but on the colony as a whole. That idea would create quite a stir when I published it, but that was much later.
Which brings us to the most amusing plot twist in this whole, sorry farce; PBS’ wholehearted embrace of group selection. Recall that Pinker’s whole rationalization for ignoring Ardrey was based on some good things Ardrey had to say about group selection in his third book, The Social Contract. The subject hardly ever came up in his interviews, and was certainly not the central theme of all his books, which, as noted above, was the existence and significance of human nature. Having used group selection to declare Ardrey an unperson, Pinker then elevated Wilson to the role of the “revolutionary” who was the “real destroyer” of the Blank Slate in his place. Wilson, in turn, in what must have seemed to Pinker a supreme act of ingratitude, embraced group selection more decisively than Ardrey ever thought of doing, making it a central and indispensable pillar of his theory regarding the evolution of eusociality. Here’s how the theme plays out in the video.
Wilson at 1:09:50
Humans don’t have to be taught to cooperate. We do it instinctively. Evolution has hardwired us for cooperation. That’s the key to eusociality.
Wilson at 1:13:40
Thinking on this remarkable fact (the evolution of eusociality) has made me reconsider in recent years the theory of natural selection and how it works in complex social animals.
Pinker at 1:18:50
Starting in the 1960s, a number of biologists realized that if you think rigorously about what natural selection does, it operates on replicators. Natural selection, Darwin’s theory, is the theory of what happens when you have an entity that can make a copy of itself, and so it’s very clear that the obvious target of selection in Darwin’s theory is the gene. That became close to a consensus among evolutionary biologists, but I think it’s fair to say that Ed Wilson was always ambivalent about that turn in evolutionary theory.
Wilson responds:
I never doubted that natural selection works on individual genes or that kin selection is a reality, but I could never accept that that is the whole story. Our group instincts, and those of other eusocial species, go far beyond the urge to protect our immediate kin. After a lifetime studying ant societies, it seemed to me that the group must also have an important role in evolution, whether or not its members are related to each other.
1:20:15 Jonathan Haidt:
So there’ve been a few revolutions in evolutionary thinking. One of them happened in the 1960s and ‘70s, and it was really captured in Dawkins’ famous book ‘The Selfish Gene,’ where if you just take the gene’s eye view, you have the simplest elements, and then you sort of build up from there, and that works great for most animals, but Ed was studying ants, and of course you can make the gene’s eye view work for ants, but when you’re studying ants, you don’t see the ant as the individual, you don’t see the ant as the organism, you see the colony or the hive as the entity that really matters.
At 1:20:55 Wilson finally spells it out:
Once you see a social insect colony as a superorganism, the idea that selection must work on the group as well as on the individual follows very naturally. This realization transformed my perspective on humanity, too. So I proposed an idea that goes all the way back to Darwin. It’s called group selection.
“Ed was able to see group selection in action. It’s just so clear in the ants, the bees, the wasps, the termites and the humans.” Wilson: “The fact of group selection gives rise to what I call multilevel evolution, in which natural selection is operating both at the level of the individual and the level of the group…” And that got Ed into one of the biggest debates of his career, over multilevel selection, or group selection.
Ed Wilson did not give up the idea that selection acted on groups, while most of his fellow biologists did. Then, several decades later, he revived that notion in a full-throated manifesto, and I think it would be an understatement to say that he did not convince his fellow biologists.
At this point, a picture of Wilson’s The Social Conquest of Earth appears on the screen, shortly followed by stills of a scowling Richard Dawkins. Then we see an image of the cover of his The Selfish Gene. The film describes Dawkins’ furious attack on Wilson for daring to promote group selection.
Wilson again:
The brouhaha over group selection has brought me into conflict with defenders of the old faith, like Richard Dawkins and many others who believe that ultimately the only thing that counts in the evolution of complex behavior is the gene, the selfish gene. They believe the gene’s eye view of social evolution can explain all of our groupish behavior. I do not.
And finally, at 1:25, after Wilson notes Pinker is one of his opponents, Pinker reappears to deny the existence of group selection:
Most people would say that, if there’s a burning building, and your child is in one room and another child is in another room, then you are entitled to rescue your child first, right? There is a special bond between, say, parents and children. This is exactly what an evolutionary biologist would predict because any gene that would make you favor your child will have a copy of itself sitting in the body of that child. By rescuing your child the gene for rescuing children, so to speak, will be helping a copy of itself, and so those genes would proliferate in the population. Not just the extreme case of saving your child from a burning building, but being generous and loyal to your siblings, your very close cousins. The basis of tribalism, kinship, family feelings, has a perfectly sensible evolutionary basis. (i.e., kin selection)
At this point one can imagine Pinker gazing sadly at the tattered remains of his whole, manufactured “history” of the Blank Slate lying about like a collapsed house of cards, faced with the bitter realization that he had created a monster. Wilson’s group selection schtick was just too good for PBS to pass up. I seriously doubt whether any of their editors really understand the subject well enough to come up with a reasoned opinion about it one way or the other. However, how can you turn your nose up at group selection if, as Wilson claims, it is responsible for altruism and all the other “good” aspects of our nature, whereas the types of selection favored by Pinker, not to mention Dawkins, are responsible for selfishness and all the other “bad” parts of our nature?
And what of Ardrey, whose good words about group selection no longer seem quite as “totally and utterly wrong” as Pinker suggested when he swept him under the historical rug? Have the editors at PBS ever even heard of him? We know very well that they have, and that they are also perfectly well aware of his historical significance, because they went to the trouble of devoting a significant amount of time to him in another recent special covering the discovery of Homo naledi. It took the form of a bitter denunciation of Ardrey for supporting the “Killer Ape Theory,” a term invented by the Blank Slaters of yore to ridicule the notion that pre-human apes hunted and killed during the evolutionary transition from ape to man. This revealing lapse demonstrated the continuing strength of the obsession with the “unperson” Ardrey, the man who was “totally and utterly wrong.” That obsession continues, not only among ancient, unrepentant Blank Slaters, but among behavioral scientists in general who happen to be old enough to know the truth about what happened in the 15 years before Wilson published Sociobiology, in spite of Pinker’s earnest attempt to turn that era into an historical “Blank Slate.”
Dragging in Ardrey was revealing because, in the first place, it was irrelevant in the context of a special about Homo naledi. As far as I know, no one has published any theories about the hunting behavior of that species one way or the other. It was revealing in the second place because of the absurdity of bringing up the “Killer Ape Theory” at all. That straw man was invented back in the 60s, when it was universally believed, even by Ardrey himself, that chimpanzees were, as Ashley Montagu put it, “non-aggressive vegetarians.” That notion, however, was demolished by Jane Goodall, who observed chimpanzees both hunting and killing, not to mention their capacity for extremely aggressive behavior. Today, few people like to mention the vicious, ad hominem attacks she was subjected to at the time for publishing those discoveries, although those attacks, too, are amply documented for anyone who cares to look for them. In the ensuing years, even the impeccably PC Scientific American has admitted the reality of hunting behavior in early man. In other words, the “Killer Ape Theory” debate has long been over, and Ardrey, who spelled out his ideas on the subject in his last book, The Hunting Hypothesis, won it hands down.
Why does all this matter? It seems to me the integrity of historical truth is worth defending in its own right. Beyond that, there is much to learn from the Blank Slate affair and its aftermath regarding the integrity of science itself. It is not invariably self-correcting. It can become derailed, and occasionally outsiders must play an indispensable role in putting it back on the tracks. Ideology can trump reason and common sense, and it did in the behavioral sciences for a period of more than half a century. Science is not infallible. In spite of that, it is still the best way of ferreting out the truth our species has managed to come up with so far. We can’t just turn our back on it, because, at least in my opinion, all of the alternatives are even worse. As we do science, however, it would behoove us to maintain a skeptical attitude and watch for signs of ideology leaking through the cracks.
I note in passing that excellent readings of all of Ardrey’s books are now available at Audible.com.
Posted on November 10th, 2015 2 comments
Many pre-Darwinian philosophers realized that the source of human morality was to be found in innate “sentiments,” or “passions,” often speculating that they had been put there by God. Hume put the theory on a more secular basis. Darwin realized that the “sentiments” were there because of natural selection, and that human morality was the result of their expression in creatures with large brains. Edvard Westermarck, perhaps at once the greatest and the most unrecognized moral philosopher of them all, put it all together in a coherent theory of human morality, supported by copious evidence, in his The Origin and Development of the Moral Ideas.
Westermarck is all but forgotten today, probably because his insights were so unpalatable to the various academic and professional tribes of “experts on ethics.” They realized that, if Westermarck were right, and morality really is just the expression of evolved behavioral predispositions, they would all be out of a job. Under the circumstances, it’s interesting that his name keeps surfacing in modern works about evolved morality, innate behavior, and evolutionary psychology. For example, I ran across a mention of him in famous primatologist Frans de Waal’s latest book, The Bonobo and the Atheist. People like de Waal who know something about the evolved roots of behavior are usually quick to recognize the significance of Westermarck’s work.
Be that as it may, G. E. Moore, the subject of my last post, holds a far more respected place in the pantheon of moral philosophers. That’s to be expected, of course. He never suggested anything as disconcerting as the claim that all the mountains of books and papers they had composed over the centuries might as well have been written about the nature of unicorns. True, he did insist that everyone who had written about the subject of morality before him was delusional, having fallen for the naturalistic fallacy, but at least he didn’t claim that the subject they were writing about was a chimera.
Most of what I wrote about in my last post came from the pages of Moore’s Principia Ethica. That work was published in 1903. Nine years later he published another little book, entitled Ethics. As it happens, Westermarck’s Origin appeared between those two dates, in 1906. In all likelihood, Moore read Westermarck, because parts of Ethics appear to be direct responses to his book. Moore had only a vague understanding of Darwin, and the implications of his work on the subject of human behavior. He did, however, understand Westermarck when he wrote in the Origin,
If there are no general moral truths, the object of scientific ethics cannot be to fix rules for human conduct, the aim of all science being the discovery of some truth. It has been said by Bentham and others that moral principles cannot be proved because they are first principles which are used to prove everything else. But the real reason for their being inaccessible to demonstration is that, owing to their very nature, they can never be true. If the word “Ethics,” then, is to be used as the name for a science, the object of that science can only be to study the moral consciousness as a fact.
Now that got Moore’s attention. Responding to Westermarck’s theory, or something very like it, he wrote:
Even apart from the fact that they lead to the conclusion that one and the same action is often both right and wrong, it is, I think, very important that we should realize, to begin with, that these views are false; because, if they were true, it would follow that we must take an entirely different view as to the whole nature of Ethics, so far as it is concerned with right and wrong, from what has commonly been taken by a majority of writers. If these views were true, the whole business of Ethics, in this department, would merely consist in discovering what feelings and opinions men have actually had about different actions, and why they have had them. A good many writers seem actually to have treated the subject as if this were all that it had to investigate. And of course questions of this sort are not without interest, and are subjects of legitimate curiosity. But such questions only form one special branch of Psychology or Anthropology; and most writers have certainly proceeded on the assumption that the special business of Ethics, and the questions which it has to try to answer, are something quite different from this.
Indeed they have. The question is whether they’ve actually been doing anything worthwhile in the process. Note the claim that Westermarck’s views were “false.” This claim was based on what Moore called a “proof” that it couldn’t be true that appeared in the preceding pages. Unfortunately, this “proof” is transparently flimsy to anyone who isn’t inclined to swallow it because it defends the relevance of their “expertise.” Quoting directly from his Ethics, it goes something like this:
- It is absolutely impossible that any one single, absolutely particular action can ever be both right and wrong, either at the same time or at different times.
- If the whole of what we mean to assert, when we say that an action is right, is merely that we have a particular feeling towards it, then plainly, provided only we really have this feeling, the action must really be right.
- For if this is so, and if, when a man asserts an action to be right or wrong, he is always merely asserting that he himself has some particular feeling towards it, then it absolutely follows that one and the same action has sometimes been both right and wrong – right at one time and wrong at another, or both simultaneously.
- But if this is so, then the theory we are considering certainly is not true. (QED)
Note that this “proof” requires the positive assertion that it is possible to claim that an action can be right or wrong, in this case because of “feelings.” A second, similar proof, also offered in Chapter III of Ethics, “proves” that an action can’t possibly be right merely because one “thinks” it right, either. With that, Moore claims that he has “proved” that Westermarck, or someone with identical views, must be wrong. The only problem with the “proof” is that Westermarck specifically pointed out in the passage quoted above that it is impossible to make truth claims about “moral principles.” Therefore, it is out of the question that he could ever be claiming that any action “is right,” or “is wrong,” because of “feelings” or for any other reason. In other words, Moore’s “proof” is nonsense.
The fact that Moore was responding specifically to evolutionary claims about morality is also evident in the same Chapter of Ethics. Allow me to quote him at length.
…it is supposed that there was a time, if we go far enough back, when our ancestors did have different feelings towards different actions, being, for instance, pleased with some and displeased with others, but when they did not, as yet, judge any actions to be right or wrong; and that it was only because they transmitted these feelings, more or less modified, to their descendants, that those descendants at some later stage, began to make judgments of right and wrong; so that, in a sense, our moral judgments were developed out of mere feelings. And I can see no objection to the supposition that this was so. But, then, it seems also to be supposed that, if our moral judgments were developed out of feelings – if this was their origin – they must still at this moment be somehow concerned with feelings; that the developed product must resemble the germ out of which it was developed in this particular respect. And this is an assumption for which there is, surely, no shadow of ground.
In fact, there was a “shadow of ground” when Moore wrote those words, and the “shadow” has grown a great deal longer in our own day. Moore continues,
Thus, even those who hold that our moral judgments are merely judgments about feelings must admit that, at some point in the history of the human race, men, or their ancestors, began not merely to have feelings but to judge that they had them: and this alone means an enormous change.
Why was this such an “enormous change?” Why, of course, because as soon as our ancestors judged that they had feelings, then, suddenly those feelings could no longer be a basis for morality, because of the “proof” given above. Moore concludes triumphantly,
And hence, the theory that moral judgments originated in feelings does not, in fact, lend any support at all to the theory that now, as developed, they can only be judgments about feelings.
If Moore’s reputation among them is any guide, such “ironclad logic” is still taken seriously by today’s crop of “experts on ethics.” Perhaps it’s time they started paying more attention to Westermarck.
The Moral Philosophy of G. E. Moore, or Why You Don’t Need to Bother with Aristotle, Hegel, and Kant
Posted on November 7th, 2015 No comments
G. E. Moore isn’t exactly a household name these days, except perhaps among philosophers. You may have heard of his most famous concoction, though – the “naturalistic fallacy.” If we are to believe Moore, not only Aristotle, Hegel and Kant, but virtually every other philosopher you’ve ever heard of got morality all wrong because of it. He was the first one who ever got it right. On top of that, his books are quite thin, and he writes in the vernacular. When you think about it, he did us all a huge favor. Assuming he’s right, you won’t have to struggle with Kant, whose sentences can run on for a page and a half before you finally get to the verb at the end, and who is comprehensible, even to Germans, only in English translation. You won’t have to agonize over the correct interpretation of Hegel’s dialectic. Moore has done all that for you. Buy his books, which are little more than pamphlets, and you’ll be able to toss out all those thick tomes and learn all the moral philosophy you will ever need in a week or two.
Or at least you will if Moore got it right. It all hinges on his notion of the “Good-in-itself.” He claims it’s something like what philosophers call qualia. Qualia are the content of our subjective experiences, like colors, smells, pain, etc. They can’t really be defined, but only experienced. Consider, for example, the difficulty of explaining “red” to a blind person. Moore’s description of the Good is even more vague. As he puts it in his rather pretentiously named Principia Ethica,
Let us, then, consider this position. My point is that ‘good’ is a simple notion, just as ‘yellow’ is a simple notion; that, just as you cannot, by any manner of means, explain to any one who does not already know it, what yellow is, so you cannot explain what good is.
In other words, you can’t even define good. If that isn’t slippery enough for you, try this:
They (metaphysicians) have always been much occupied, not only with that other class of natural objects which consists in mental facts, but also with the class of objects or properties of objects, which certainly do not exist in time, are not therefore parts of Nature, and which, in fact, do not exist at all. To this class, as I have said, belongs what we mean by the adjective “good.” …What is meant by good? This first question I have already attempted to answer. The peculiar predicate, by reference to which the sphere of Ethics must be defined, is simple, unanalyzable, indefinable.
Or, as he puts it elsewhere, the Good doesn’t exist. It just is. Which brings us to the naturalistic fallacy. If, as Moore claims, Good doesn’t exist as a natural, or even a metaphysical, object, it can’t be defined with reference to such an object. Attempts to so define it are what he refers to as the naturalistic fallacy. That, in his opinion, is why every other moral philosopher in history, or at least all the ones whose names happen to turn up in his books, have been wrong except him. The fallacy is defined at Wiki and elsewhere on the web, but the best way to grasp what he means is to read his books. For example,
The naturalistic fallacy always implies that when we think “This is good,” what we are thinking is that the thing in question bears a definite relation to some one other thing.
That fallacy, I explained, consists in the contention that good means nothing but some simple or complex notion, that can be defined in terms of natural qualities.
To hold that from any proposition asserting “Reality is of this nature” we can infer, or obtain confirmation for, any proposition asserting “This is good in itself” is to commit the naturalistic fallacy.
In short, all the head scratching of all the philosophers over thousands of years about the question of what is Good has been so much wasted effort. Certainly, the average layman had no chance at all of understanding the subject, or at least he didn’t until the fortuitous appearance of Moore on the scene. He didn’t show up a moment too soon, either, because, as he explains in his books, we all have “duties.” It turns out that the intuition “Good” not only popped up in his consciousness, more or less after the fashion of “yellow” or the smell of a rose; he also “intuited” that it came fully equipped with the power to dictate to other individuals what they ought and ought not to do. Again, I’ll allow the philosopher to explain.
Our “duty,” therefore, can only be defined as that action, which will cause more good to exist in the Universe than any possible alternative… When, therefore, Ethics presumes to assert that certain ways of acting are “duties” it presumes to assert that to act in those ways will always produce the greatest possible sum of good.
But how on earth can we ever even begin to do our duty if we have no clue what Good is? Well, Moore is actually quite coy about explaining it to us, and rightly so, as it turns out. When he finally takes a stab at it in Chapter VI of Principia, it turns out to be paltry enough. Basically, it’s the same “pleasure,” or “happiness,” that many other philosophers have suggested, only it’s not described in such simple terms. It must be part of what Moore describes as an “organic whole,” consisting not only of pleasure itself, for example, but also of a consciousness capable of experiencing the pleasure, the requisite level of taste to really appreciate it, the emotional equipment necessary to react with the appropriate level of awe, etc. Silly old philosophers! They rashly assumed that, if the Good were defined as “pleasure,” their readers would realize without being told that they would have to be conscious in order to experience it. Little did they suspect the coming of G. E. Moore and his naturalistic fallacy.
When he finally gets around to explaining it to us, we gather that Moore’s Good is more or less what you’d expect the intuition of Good to be in a well-bred English gentleman endowed with “good taste” around the turn of the 20th century. His Good turns out to include nice scenery, pleasant music, and chats with other “good” people. Or, as he put it somewhat more expansively,
We can imagine the case of a single person, enjoying throughout eternity the contemplation of scenery as beautiful, and intercourse with persons as admirable, as can be imagined.
By far the most valuable things which we know or can imagine, are certain states of consciousness, which may be roughly described as the pleasures of human intercourse and the enjoyment of beautiful objects. No one, probably, who has asked himself the question, has ever doubted that personal affection and the appreciation of what is beautiful in Art or Nature, are good in themselves.
Really? No one? One can only surmise that Moore’s circle of acquaintance must have been quite limited. Unsurprisingly, Beethoven’s Fifth is in the mix, but only, of course, as part of an “organic whole.” As Moore puts it,
What value should we attribute to the proper emotion excited by hearing Beethoven’s Fifth Symphony, if that emotion were entirely unaccompanied by any consciousness, either of the notes, or of the melodic and harmonic relations between them?
It would seem, then, that even if you’re such a coarse person that you can’t appreciate Beethoven’s Fifth yourself, it is still your “duty” to make sure that it’s right there on everyone else’s smart phone.
Imagine, if you will, Mother Nature sitting down with Moore, holding his hand, looking directly into his eyes, and revealing to him in all its majesty the evolution of life on this planet, starting from the simplest, one-celled creatures more than four billion years ago, and proceeding through ever more complex forms to the almost incredible emergence of a highly intelligent and highly social species known as Homo sapiens. It all happened, she explains to him with a look of triumph on her face, because, over all those four billion years, the creatures that made up the links of the chain of life survived and reproduced, keeping the chain unbroken. Then, with a serious expression on her face, she asks him, “Now do you understand the reason for the existence of moral emotions?” “Of course,” answers Moore, “they’re there so I can enjoy nice landscapes and pretty music.” (Loud forehead slap) Mother Nature stands up and walks away shaking her head, consoling herself with the thought that some more advanced species might “get it” after another million years or so of natural selection.
And what of Aristotle, Hegel and Kant? Throw out your philosophy books and forget about them. Imagine being so dense as to commit the naturalistic fallacy!