Morality: On Whose Authority?

There are two very basic truths that one must grasp to avoid living in a world of illusions. There is no God, and morality exists by virtue of natural selection. We are inclined by what we refer to as our human nature to prefer the world of illusion: to believe in both God and objective moral goods and evils. However, if one thinks about these things with an open mind, it seems to me the truth should be evident to any reasonably intelligent person. Unfortunately, there are legions of individuals in our societies who benefit from propping up these mirages. The first sort promises us that we will live on in the hereafter for billions and trillions of years, apparently accomplishing nothing of any particular use to anyone other than avoiding death. The second sort flatters our desire to be noble champions of a nonexistent Good, and assures us that, of the myriad versions of the same on offer, theirs is the only genuine article. Among the latter are the editors and contributors to Ethics, a journal that caters to duly certified experts in mirage recognition.

Darwin explained what morality is and why it exists more than a century and a half ago in his The Descent of Man. It is an artifact of natural selection that happened to increase the odds that the genes that are its root cause would survive. Absent those genes, morality, good and evil, would not exist. It follows that, since there is no way for simple facts of nature to spawn objective “oughts,” good and evil are not objective things, and they have no independent existence outside of the minds of individuals. They may have been useful illusions at some point, but they are illusions regardless. These rather simple and obvious facts are commonly treated as if they were in bad taste, particularly as far as the journal Ethics is concerned.

Consider, for example, the latest issue of this flagship publication of our “experts on ethics.” The first article is entitled “Democratic Equality and the Justification of Welfare-State Capitalism.” Needless to say, nothing could be more irrelevant to human morality than welfare-state capitalism, since neither welfare states nor capitalism existed at the time the genes responsible for the existence of morality evolved. The process of evolution is a fact of nature, and as such is incapable of “justifying” anything. On whose authority are we to base the claim that “democratic equality” is an “objective good”? It is a bastard child of human morality, spawned in a modern environment alien to the one in which it evolved. It is not clear that “democratic equality” will promote the survival of the relevant genes in its modern proponents. Indeed, there is reason to believe that the opposite may be the case. No matter; “democratic equality” happens to evoke the emotional response “good” in a great many individuals, including the members of the author’s academic tribe. Since these worthies all agree that “democratic equality” is good, it is assumed that it must really be Good. This is the rather flimsy basis for the objective “goodness” of democratic equality. Or it is, at least as far as that particular tribe is concerned. The “authority” we are looking for is nothing more substantial than the whim of that tribe.

The next article is entitled “Proportionality in War: Revising Revisionism.” Here, again, we are dealing with another weird artifact of morality that can occur in creatures with large brains when they ponder what their emotions are trying to tell them without taking into account why those emotions exist to begin with. Modern warfare did not exist at the time these emotions evolved. In spite of that, they have caused some individuals to imagine that “proportionality in war” is “good.” Again, no authority is cited for this conclusion. Apparently, we must assume it is true because it is “intuitively obvious to the casual observer.” In reality, the only “authority” for this “objective good” is the majority opinion prevailing among the academic tribe that controls the content of a particular journal. Since modern warfare is, at least in some cases, a struggle for mere survival, it seems that “win the war” would be a more appropriate moral “good” in warfare than “proportionality.” Of course, since we are dealing with emotional responses rather than reason, it doesn’t matter.

Another article in the latest Ethics is entitled “Rank-Weighted Utilitarianism and the Veil of Ignorance.” It is a discussion of some of the latest algorithms fashionable among Utilitarians for calculating utility. Again, when we ask on whose authority we are to base the claim that there is any connection between utility and “objective good,” we are left in the dark. Certainly, John Stuart Mill, who wrote the book on Utilitarianism, is no such authority. He didn’t believe in objective or, as he put it, transcendental morality. He proposed utilitarianism as a mere matter of expedience, based on the assumption that, when it came to morality, human beings are perfectly malleable, or a Blank Slate, if you will. As Darwin pointed out some years later, that assumption is wrong. The very existence of morality is a reflection of innate behavioral predispositions. Unless this very basic fact is taken into account, calculating how much utility it takes to add up to a moral good is as futile as calculating how many angels can dance on the head of a pin.
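For readers who have never seen one of these utility-calculating schemes, the sketch below illustrates the general idea behind rank-weighted aggregation: individual utilities are sorted from worst-off to best-off and multiplied by rank-dependent weights that favor the worst-off. It is a minimal, hypothetical example; the function name, weights, and numbers are my own illustrative assumptions, not the particular formulation analyzed in the Ethics article.

```python
# A minimal, hypothetical sketch of a rank-weighted utility calculation.
# The weights and example numbers are illustrative assumptions only; they are
# not the specific formulation discussed in the Ethics article.

def rank_weighted_utility(utilities, weights):
    """Sort individual utilities from worst-off to best-off, then apply
    rank-dependent weights (larger weights on lower ranks)."""
    ranked = sorted(utilities)  # worst-off first
    return sum(w * u for w, u in zip(weights, ranked))

# Three individuals; the weights sum to 1 and favor the worst-off.
print(rank_weighted_utility([10, 40, 90], [0.5, 0.3, 0.2]))  # 35.0
```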

In short, if you seek the answer to the question, “On whose authority?”, it is unlikely that you will find it in the pages of Ethics. The claim of our modern “experts on ethics” that they know all about Good is similar to the claim by priests and mullahs that they know all about God. Both claim special knowledge of things that don’t exist. In both cases, their claim to respect in society and often their very livelihood depend on their ability to convince others that an illusion is real.

If Darwin was right, then morality is at bottom an emotional phenomenon. It exists by virtue of emotionally driven behavioral predispositions that exist because they evolved, and they evolved in an environment that no longer exists. One cannot speak credibly about ethics or morality at all without taking these facts into account. In view of this, consider the following paragraph from the conclusion of the article in Ethics referred to above:

“I myself am inclined to reject both REU theory and RWU for reasons independent of these issues. But the results of this article provide some reason for fans of these theories – or, more generally, of any nonseparable theories of distribution or decision – not to appeal to the veil of ignorance. The veil of ignorance may be a valuable heuristic device for ensuring impartiality, but, as Parfit puts it, “it does that crudely, like frontal lobotomy.” It requires us to ignore information that may be relevant to distributive justice – that is, which utilities belong to whom, and in which outcomes. We should not make distributive choices by depriving ourselves of this information, but by ensuring that we are impartial in other ways, if we can.”

Forget the acronyms and consider the assumptions implied by this paragraph.  The most fundamental assumption is that “distributive justice” is an object, a thing. It is further assumed that this justice object is good-in-itself. No authority is given for this conclusion. Apparently, we are to believe that it is intuitively obvious to all right-thinking philosophers that distributive justice is good, period, independently of any individual’s opinion on the matter. The author would have us believe that, by carefully parsing the outcomes of different schemes of distribution, he has arrived at a superior algorithm for maximizing “distributive justice.” All that is necessary for us to be morally good is to apply this algorithm.

If Darwin was right about morality (and he was right), such speculations are reduced to the pure gibberish they appear to be to casual readers of Ethics. It is hardly surprising that human beings have come up with the notion of “distributive justice.” Natural selection has predisposed us to think that way. Obviously, thinking that way must have enhanced the odds that the responsible genes would survive and reproduce in the context of the small groups that existed when the trait in question evolved. However, it can hardly be assumed that the behavior resulting from that predisposition will promote the survival of the relevant genes in modern societies consisting of hundreds of millions of individuals the same way it did in groups of a hundred hunter-gatherers in a completely different environment. Under the circumstances, it seems reasonable to ask the promoters of “distributive justice,” “Why are you doing this?” If Darwin was right, then “distributive justice,” regardless of how it is defined, cannot be good, nor can it be evil, for the simple reason that these categories have no objective existence. They don’t exist regardless of the powerful, emotionally driven illusion that they do exist. That illusion exists because it was selected at the level of the individual, and perhaps at the level of small groups. Notions to the effect that it was selected for “the good of the species,” or for “human flourishing,” or for “the welfare of all mankind,” are all equally absurd.

A rational answer to the question would be something like this: “I realize why my moral emotions exist. I realize that the odds that blindly responding to them in the environment we live in today will promote my genetic survival the same way they did eons ago are vanishingly small. However, I’ve decided, even though I’m aware of the facts that account for my existence, that I’m not interested in survival. I just want to be happy. One thing that makes me happy is to pretend that I am morally good, even though I am also aware that no such thing as ‘good’ exists, and that it is just an emotionally spawned illusion.” However, the promoters of these emotionally driven exercises in self-deception are never satisfied to promote “distributive justice” on their own. They insist that the rest of us also behave according to their complicated recipes for maximizing it. They inform us that if we fail to assign the same value to their version of “distributive justice” that they do, then they will declare us “evil.” There is but one rational response to that assertion.

“On whose authority?”

 

Artifacts of a Historical Scavenger Hunt

Today we suffer from a sort of historical myopia due to our obsession with social media. In our struggle to stay abreast of what’s happening in the here and now, we neglect the past. Instead of going back and examining the source material for ourselves, we leave it to others to interpret it for us. These interpretations are commonly bowdlerized to fit a preferred narrative. It’s a shame, because the past holds a rich mine of material relevant to the present. Pick up an old book or an old magazine, and you’ll often find that it brings the reality of today into sharper focus. Nuggets of insight will pop up in the strangest places, often in articles that ostensibly have nothing to do with the insight in question.

Consider, for example, the following excerpt from the October, 1842 issue of the Edinburgh Review, one of the dominant British journals of literature and politics in the first half of the 19th century. It came from an article about the recently published autobiography of one M. Berryer, a prominent lawyer and eyewitness of some of the worst atrocities of the Reign of Terror during the French Revolution. In one of the opening paragraphs of his review, the anonymous author offers the following general comments about human nature:

Few men know the fluctuating nature of their own character; – how much it has varied from ten years to ten years, or even on the recurrence of similar events. Few men attempt to distinguish between the original predispositions and the accidental influences which, sometimes controlling and sometimes aggravating one another, together formed at any particular epoch their character for the time being. Still fewer attempt to estimate the relative force of each; and fewer still would succeed in such an attempt.

Amazing, really! That passage might have been lifted from an introduction to a book about the latest advances in Genome Wide Association Studies. It demonstrates that people were perfectly well aware of the existence of “original predispositions” almost 200 years ago. This brief passage shows more insight into the nuances of the entanglement of “nature” and “nurture” in our species than the vast majority of the tomes of psychology, sociology, and anthropology published during the hegemony of the Blank Slate. It puts in sharp relief the extent to which we managed to dumb ourselves down in the service of ideologically motivated dogmas. To read it is to wonder at our success in willfully blinding ourselves to the truth in an area as potentially critical to our survival as self-understanding.

Perhaps most prominent among the ideologies that required an imaginary version of human beings rather than the real thing was and remains socialism. By reading old books one can gain an appreciation of how familiar “Marxist” ideas had become long before Marx became a household name. Consider, for example, the following passages from “Sybil,” published in 1845 by Benjamin Disraeli. Most remember him as a British Prime Minister during the reign of Queen Victoria, but he was also an outstanding and prolific novelist. Sybil, the heroine of the novel, is the daughter of a leader of the proletariat, and speaks of him as follows:

When I heard my father speak the other night, my heart glowed with emotion; my eyes were suffused with tears; I was proud to be his daughter; and I gloried in a race of forefathers who belonged to the oppressed, and not to the oppressors.

According to Devilsdust, one of Disraeli’s working-class characters,

We’ll clean out the Savings Banks; the Benefits and Burials will shell out; I am treasurer of the Ancient Shepherds (a trade union), and we passed a resolution yesterday unanimously, that we would devote all our funds to the sustenance of Labour in this its last and triumphant struggle against Capital.

Later Devilsdust is recorded as saying of Stephen Morley, a labor journalist who might have served as a prototype for Lenin,

…if ever the great revolution were to occur, by which the rights of labour were to be recognized, though bolder spirits and brawnier arms might consummate the change, there was only one head among them that would be capable, when they had gained their power, to guide it for the public weal…, and that was Morley.

In short, the idea of class struggle culminating in a proletarian revolution was already well developed before Marx wrote “Das Kapital.” What he added was a “scientific” theory distilled from Hegelian philosophy according to which the revolution was inevitable, and the proletariat would emerge victorious and establish a workers’ paradise by the force of historical “laws.” The conviction that one was fighting for the Good, and must inevitably win the fight, served as a powerful intoxicant for already radicalized fanatics, and, as we now know, would culminate in a nightmare.

Perhaps most prominent among the public intellectuals who sought to warn us of the perils of listening to the Marxist siren song was Herbert Spencer. For his trouble, he was vilified as a “social Darwinist” and forgotten. That’s ironic, because Spencer was never a Darwinist to begin with. His ideas about evolution were much more Lamarckian in character. His brilliant critique of socialism, however, was based on insights about human nature that are seldom equaled among modern scholars. It turned out to be a prophecy of uncanny accuracy about the reality of Communism. Consider, for example, the following passages, written in the introduction to a collection of essays published in 1891 entitled “A Plea for Liberty.” The first refers to an earlier summary of some of the more prominent features of the innate human behavior denied by Blank Slaters, then and now.

The traits thus shown must be operative in any new social organization, and the question to be asked is – What will result from their operation when they are relieved from all restraints? At present the separate bodies of men displaying them are in the midst of a society partially passive, partially antagonistic; are subject to the criticisms and reprobations of an independent press; and are under the control of law, enforced by police. If in these circumstances these bodies habitually take courses which override individual freedom, what will happen when, instead of being only scattered parts of the community, governed by their separate sets of regulators, they constitute the whole community, governed by a consolidated system of such regulators; when functionaries of all orders, including those who officer the press, form parts of the regulative organization; and when the law is both enacted and administered by this regulative organization? The fanatical adherents of a social theory are capable of taking any measures, no matter how extreme, for carrying out their views: holding, like the merciless priesthoods of past times, that the end justifies the means. And when a general socialistic organization has been established, the vast, ramified, and consolidated body of those who direct its activities, using without check whatever coercion seems to them needful in the interests of the system (which will practically become their own interests) will have no hesitation in imposing their rigorous rule over the entire lives of the actual workers; until, eventually, there is developed an official oligarchy, with its various grades, exercising a tyranny more gigantic and more terrible than any which the world has seen.

Astonishing, no? If your education about the reality of Communism doesn’t extend beyond what’s taught in the public school system, by all means read Orwell’s “1984,” or, better yet, “The New Class,” by Milovan Djilas, one of the most brilliant political writers of the 20th century. If that’s not enough to impress you, check this out:

Misery has necessarily to be borne by a constitution out of harmony with its conditions; and a constitution inherited from primitive men is out of harmony with conditions imposed on existing men.

These seemingly obvious facts, that we possess innate behavioral traits and that they evolved in conditions radically different from the ones we live in now, are apparently beyond the grasp of virtually every prominent public intellectual today. They speak of morality, community, and politics as if these salient facts didn’t exist. We persist in this type of self-imposed obscurantism at our peril.

The above historical artifacts all bear on the reality of the here and now, characterized by the hegemony of equalist dogmas. Equalism started out benignly enough, as a reaction to the gross exploitation and abuse of a majority of the population by an elite distinguished by nothing but the accident of birth. It has now morphed into a monster that demands that we all pretend to believe things that are palpably untrue, on pain of censorship, social ostracism, and loss of employment and educational opportunity. From the first item cited above we can see that the interplay of innate human nature with experience and learning was a matter of common knowledge to an anonymous book reviewer more than a century and a half ago. Even children have a rudimentary familiarity with human nature, and humans had acted on that knowledge for millennia before that. It is all the more astounding that the Blank Slate orthodoxy required denial of the very existence of human nature for upwards of half a century, and that virtually every academic and professional “expert” in the behavioral sciences meekly went along. This orthodoxy was eventually destroyed by its own absurdity, strikingly portrayed to a wondering lay public in a series of books by a man named Robert Ardrey. Now Ardrey is remembered, if at all, as a bête noire with which to terrify young associate professors. Today the Blank Slate is well on the way to making a comeback. Now, however, instead of making themselves laughing stocks by denying the existence of human nature, its resurgent clergy merely see to it that no research is done in anything of real relevance to the human condition.

As for Communism, we can count ourselves lucky that we’ve been there, done that, along with “democratic” socialism, national socialism, and a grab bag of other versions. These repeated failures have at least slowed our progress towards stumbling off the same cliff yet again.  Of course, they haven’t stopped equalist ideologues from claiming that the only reason socialism has been such an abject failure to date is because it hasn’t been “done right,” or that previous versions weren’t “real socialism.” Fasten your seatbelts.

Meanwhile, I suggest that you take the time occasionally to read old things: novels, magazines, newspapers, it doesn’t really matter. You’ll find that the self-imposed stupidity and politically correct piety of modern societies aren’t inevitable. There have been other times and other cultures in which people could speak their minds a great deal more freely than under the secular Puritanism that prevails today. The fact that the culture we live in today is a “natural” outcome for our species doesn’t mean you are obligated either to accept it or to refrain from fighting to change it.

Corona Comments

There are no objective oughts, no objective goods, no objective values, and no objective moral virtues. That is a simple statement of fact, and implies nothing whatsoever regarding how we ought to behave. Facts bear no implications about what we should do, except as means to an end. We must decide for ourselves what ends to seek. Objective facts may then inform us what we “should” do if we want to achieve the goals we set for ourselves.

Whatever the goals we set for ourselves happen to be, in large measure if not totally, they are a response to our “nature”: predispositions that are as innate as our arms and legs. These predispositions are similar but not identical among human individuals, and they exist by virtue of natural selection. In other words, at some point and in some environment, they promoted the survival and reproduction of our ancestors. It cannot be assumed that their influence on our behavior will have that result in the very different environment most of us live in today.

Our nature does not determine our behavior, in the sense that it does not dictate what we must do in this or that situation. Rather, it inclines us to act in some ways, and not in others. It is fundamentally emotional, in humans as well as in other animals. We happen to have very large brains, and so can ponder what our emotions are trying to tell us. We can reason about how we ought to respond to them. However, our reason is far from infallible. As the reasoning process becomes more complex, the outcome regarding what we “ought” to do will vary increasingly among individuals. This is doubly true because most individuals respond to their emotions blindly, never considering why those emotions exist to begin with.

The above is illustrated by the response of our societies to the spread of COVID-19. The situation is anomalous, in that few of us have experienced anything like it. As a result, an appropriate response to it is not neatly packaged among our preferred or habitual responses to everyday occurrences. One result of this is that we find unusual differences of opinion about how we should react to the virus, even among those whose ideology, whether “liberal” or “conservative,” was formerly a reliable predictor of what their response to a given situation would be. Two factions have formed: those who tend to agree that we ought to take extreme measures to control the spread of the virus, and those who tend to believe that this “cure” is worse than the disease. At the moment the former faction has the upper hand, although the latter hasn’t been silenced completely.

Both factions present their arguments as if they are defending an objective truth. In fact, that is impossible, because objective “oughts” do not exist. What they are really defending is something they want, or value, and what they want or value represents their response to emotions that exist because they evolved. That statement applies not just to our response to a virus, but to every other form of conscious human behavior.

Emotional responses are bound to vary to some extent across populations that have been widely separated by time and space, but they tend to be quite similar, as one would expect of traits that happen to promote survival in a given species. Fear and avoidance of death is one trait almost all of us have in common. The emotional root cause of this fear probably hasn’t changed much, but in creatures with large brains such as ourselves, our behavior isn’t rigidly determined by our genes. We think about what our emotions are trying to tell us, and how we should behave in response. Needless to say, we don’t always all come to the same conclusions, regardless of how similar the underlying emotions happen to be.

In the modern human societies that exist in western Europe and North America, fear of death may well be a greater motivator than ever before. We have few children, and can reasonably expect that those children will survive to adulthood. That was not the case in societies that are more typical of our past, where a large fraction of children didn’t survive past their first few years. Death was not exactly welcomed, but we were more likely to accept it as a matter of course. Now we are more inclined to treat it as an unmitigated calamity, and one that must be staved off as long as possible at all costs. In the case of the virus, it almost seems that some of us believe we will be immortal if only we can avoid catching it. Under the circumstances, such drastic steps as shutting down complex modern economies appear to be completely rational. We hand-wave away any negative effect this may have on our own and future generations by simply assuming that the global economy will quickly recover afterwards. If we follow the chain of logic that is used to justify this behavior to its ultimate source, we will always find an emotion. The emotion is followed blindly, without regard for the reason it exists to begin with. That reason is that it once enhanced the odds of survival and reproduction of the genes that give rise to it. The question of whether it will have the same result if blindly reacted to in a completely different environment is treated as if it were entirely irrelevant.

In the case of the virus, our innate fear of death has triumphed over all other emotions. We don’t take into account the fact that, while that fear exists for a reason, the programmed death of our physical bodies and consciousness occurs for exactly the same reason. Our fear of death and our programmed death both promote the survival of our genes. Our genes don’t protect us from death indefinitely. Rather, they ensure that we will die, but at a time that is optimum for ensuring that they will not die. They have been around, in different forms but in an unbroken chain, for more than two billion years. For all practical purposes, they are potentially immortal. I happen to share the goal of my genes. That goal is no more intrinsically good or virtuous than someone else’s goal to accomplish the opposite. However, it does seem to me to have the virtue of being in harmony with the reasons I exist to begin with, and of being formed in full awareness of why the emotions that motivate it exist to begin with as well.

It does not seem “better” to me to be blindly blown about by the shifting winds of my emotions in an environment completely different from the one in which they evolved. The blind fear of death can be and often is trumped by an equally blind response to other emotions. Consider, for example, such slogans as “Death before dishonor,” “Give me liberty or give me death,” and “A fate worse than death.” Those who coined these slogans and those who were moved by them were no hypocrites. In the past we can find myriad examples of such individuals laying down their lives in defense of their principles. These principles were based on innate emotions other than fear of death, perhaps including hatred of the outgroup, territoriality, or the struggle for status. Thus, while emotions are the basis of all our actions, they can motivate goals that are diametrically opposed to each other in different situations. I merely suggest that, instead of reacting to them blindly, we may find it useful to consider why they exist to begin with. That seems to me particularly true in the case of events as profound as global pandemics.

Is, Ought, and the Evolution of Morality

I recently read a book entitled Nature’s Virtue by James Pontuso, a professor of political science at Hampden-Sydney College. He informs his readers that his goal in writing the book was to demonstrate a foundation for virtue. In his words,

It is in taking up the challenge of anti-foundationalism that I hope this book will contribute to the on-going dialogue about the place of virtue in human life. It will attempt to define virtue in the course of a discussion of its friends and adversaries.

Pontuso then takes us through a rambling discussion of what the postmodernists, Nietzsche, Heidegger, Kant, Plato, Aristotle, and several other thinkers had to say about virtue. All this may be enlightening for students of philosophy, but it is neither here nor there as far as establishing a foundation for virtue is concerned. In fact, the last two paragraphs of the book are the closest he comes to “taking up the challenge.”

Morality: Another Shade of Unicorn

In my last post I noted that the arguments in an article by Ronald Dworkin defending the existence of objective moral truths could be used equally well to defend the existence of unicorns. Dworkin is hardly unique among modern philosophers in this respect. Prof. Katia Vavova of Mt. Holyoke College also defended objective morality in an article entitled Debunking Evolutionary Debunking published in 2014 in the journal Oxford Studies in Metaethics. According to Prof. Vavova’s version of the argument, it is impossible to accurately describe the characteristics of unicorns without assuming the existence of unicorns. Therefore, we must assume the existence of unicorns. QED

As the title of her article would imply, Prof. Vavova focuses on arguments against the existence of objective moral truths based on the Theory of Evolution. In fact, Darwin’s theory is hardly necessary to debunk moral objectivity. If objective moral truths exist independently of what anyone merely thinks to be true, they can’t be nothing. They must be something. Dworkin was obviously aware of this problem in the article I referred to in my last post. He was also aware that no one has ever detected moral objects in a form accessible to our familiar senses. He referred derisively to the notion that they existed as moral particles, or “morons,” or as “morality fields” accessible to the laws of physics. To overcome this objection, however, he was forced to rely on the even more dubious claim that moral truths exist in some sort of transcendental plane of their own, floating about as unphysical spirits.

Secular Humanism and Religion; Standoff at Quillette

As I noted in a recent post (Is Secular Humanism a Religion? Is Secular Humanist Morality Really Subjective?), John Staddon, a Professor of Psychology and Professor of Biology emeritus at Duke, published a very timely and important article at Quillette entitled Is Secular Humanism a Religion?, noting the gaping inconsistencies and irrationalities in secular humanist morality. These included its obvious lack of any visible means of support, even one as flimsy as a God, for its claims to authority and legitimacy. My post included a link to a review by Prof. Jerry Coyne, proprietor of the Why Evolution is True website and New Atheist stalwart, that called Prof. Staddon’s article the “worst” ever to appear on Quillette, based on the false assumption that he actually did maintain that secular humanism is a religion. In fact, it’s perfectly obvious based on a fair reading of the article that he did nothing of the sort.

Meanwhile, Quillette gave Prof. Coyne the opportunity to post a reply to Staddon. His rebuttal, entitled Secular Humanism is Not a Religion, doubled down on the false assertion that Staddon had claimed it is. Then, in a counterblast entitled Values, Even Secular Ones, Depend on Faith: A Reply to Jerry Coyne, Staddon simply pointed out Prof. Coyne’s already obvious “confusion” about what he had actually written, and elaborated on his contention that secular values depend on faith. As I noted in a comment I posted at Quillette, I couldn’t agree more.

On the Illusion of Objective Morality; We Should Have Listened to Westermarck

The illusion of objective morality is amazingly powerful. The evidence is now overwhelming that morality is a manifestation of emotions, and that these emotions exist by virtue of natural selection. It follows that there can be no such thing as objective moral truths. The brilliant Edvard Westermarck explained why more than a century ago in his The Origin and Development of the Moral Ideas:

As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity. The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments. The intensity of his emotions makes him the victim of an illusion.

Westermarck, in turn, was merely pointing out some of the more obvious implications of what Darwin had written about morality in his The Descent of Man, published in 1871. Today Westermarck is nearly forgotten, what Darwin wrote about morality is ignored as if it didn’t exist, and the illusion is as powerful and persistent as it was more than a century ago. Virtually every human being on the planet either believes explicitly in objective moral truths, or behaves as if they did regardless of whether they admit to believing in them or not.

Has It Ever Occurred To You That None Of Us Are Acting Rationally?

Do you imagine that you are acting for the good of all mankind? You are delusional. What is your actual goal when you imagine you are acting for the good of all mankind? Maximization of human happiness? Maximization of the rate at which our species as a whole reproduces? Complete elimination of our species? All of these mutually exclusive goals are deemed by some to be for the “good of all mankind.” How is that possible if there really is such a thing as “the good of all mankind?” The answer is that there is no such thing, for the simple reason that there is no such thing as good, unless one is speaking of a subjective impression.

Look, just stop arguing with me in your mind for a moment and try a thought experiment. Imagine that what I’ve said above about good – that it is merely a subjective impression – is true. In that case, how can we account for the existence of this subjective impression, this overpowering belief that some things are good and other things are evil? It must exist for the same reason that all of our other behavioral predispositions and traits exist – by virtue of natural selection, the same process that accounts for our very existence to begin with. In that case, these subjective impressions, these overpowering beliefs, must exist because, in the environment in which they evolved, they enhanced the odds that the responsible genes would survive and reproduce. How, then, is it possible for us to imagine that our goal is “the good of all mankind”? Natural selection does not operate at the level of “all mankind.” It operates at the level of the individual and, perhaps, at the level of small groups. If our goal is to act for “the good of the species,” we can only conclude that the behavioral predispositions responsible for this desire have become “dysfunctional,” in the sense that they are no longer likely to promote the survival of the responsible genes. The most plausible reason they have become “dysfunctional” is the fact that they exist in the context of a radically changed environment.

This has some obvious implications as far as the rationality of our behavior is concerned. Try following the reasons you imagine you’re doing what you do down through the accumulated “rational” muck to the emotional bedrock where they originate. You can string as many reasons together as you want, one following the other, and all perfectly rational, but eventually the chain of reasons must lead back to the origin of them all. That origin cannot be the “good in itself,” because such an object does not exist. It is imaginary. In fact, the bedrock we are seeking consists of behavioral predispositions that exist because they evolved. As the result of a natural process, they cannot possibly be “rational,” in the sense of having some deeper purpose or meaning more fundamental than themselves. It is evident that these behavioral traits exist because, at least at some point in time and in some environment, they enhanced the odds that the individuals possessing these traits would survive and reproduce. That, however, is not their purpose, or their function, because there was no one around to assign them a purpose or function. They have no purpose or function. They simply are.

That’s what I mean when I say that none of us acts rationally. The sun does not act rationally when it melts solid objects that happen to fall into it. It does not have the purpose or goal of melting them. It simply does. The ocean does not act rationally when it drowns air breathing creatures that are unfortunate enough to sink beneath its surface. Millions of creatures have drowned in the ocean, but the ocean didn’t do it on purpose, nor did it have a goal in doing so. In the same sense, our behavioral traits do not have a goal or purpose when they motivate us to act in one way or another. Just as it is a fact of nature that the sun melts solid objects, and the ocean drowns land creatures, it is a fact of nature that we are motivated to do some things, and avoid others. That is what I mean when I say that our behavior is irrational. I don’t mean that it can’t be explained. I do mean that it has no underlying purpose or goal for doing what it does. Goals and purposes are things we assign to ourselves. They cannot be distilled out of the natural world as independent objects or things in themselves.

Consider what this implies when it comes to all the utopian schemes that have ever been concocted for our “benefit” over the millennia. A goal that many of these schemes have had in common is “moral progress.” It is one of the more prominent absurdities of our day that even those among us who are most confident that Darwin was right, and who have admitted that there is a connection between morality and our innate behavioral predispositions, and who also realize and have often stated publicly that morality is subjective, nevertheless embrace this goal of “moral progress.” This raises the question, “Progress towards what?” Assuming one realizes and has accepted the fact that morality is subjective, it can’t be progress towards any objective Good, existing independently of what anyone thinks about it. It must, then, be progress towards something going on in conscious minds. However, as noted above, conscious minds are a fact of nature, existing by virtue of natural processes that have no function and have no goal. They simply are. Furthermore, our conscious minds are not somehow connected all across the planet in some mystical collective. They all exist independently of each other. They include predispositions that motivate the individuals to whom they belong to have desires and goals. However, those desires and goals cannot possibly exist by virtue of the fact that they benefit all mankind. They exist by virtue of the fact that they enhanced the odds that the responsible genetic material would survive and reproduce. They were selected at the level of the individual, and perhaps of small groups. They were definitely not selected by virtue of any beneficial effect on all mankind.

In other words, when one speaks of “moral progress,” what one is in reality speaking of is progress towards satisfying the whims of some individual. The reason for the existence of these whims has nothing to do with the welfare of all mankind. To the extent that the individual imagines they have some such connection, the whims have become “dysfunctional,” in the sense that they have been redirected towards a goal that is disconnected from the reasons they exist to begin with. Belief in “moral progress,” then, amounts to a blind emotional response to innate whims on the part of individuals who have managed to profoundly delude themselves about exactly what it is they’re up to. The problem, of course, is that they’re not the only ones affected by their delusion. Morality is always aimed at others. They insist that everyone else on the planet must respect their delusion, and allow it to dictate how those others should or should not behave.

This fundamental irrationality applies not just to morality, but to every other aspect of human behavior. Whether it’s a matter of wanting to be “good,” or of “serving mankind,” or accumulating wealth, or having sex, or striving for “success” and recognition, we are never motivated by reason. We are motivated by whims, although we certainly can and do reason about what the whims are trying to tell us. This process of reasoning about whims can result in a bewildering variety of conclusions, most of which have nothing to do with the reasons the whims exist to begin with. You might say that our brains have evolved too quickly. Our innate behavioral baggage has not kept up, and remains appropriate only to environments and forms of society that most of us left behind thousands of years ago. We continue to blindly respond to our emotions without understanding why they exist, pursuing goals that have nothing to do with the reasons they exist. In effect, we are living in an insane asylum.

I am not suggesting that we all stop having goals and aspirations. Life would be extremely boring without them, and they can be just as noble as we please, at least from our own point of view. From my point of view, the fact that creatures like us can exist at all seems wildly improbable, wonderful, and sublime. For all we know, the life we are a part of may exist on only one of the trillions of planets in our universe. I personally deem it precious, and one of my personal goals is that it be preserved. Others may have different goals. I merely suggest that, regardless of what they are, we keep in mind what motivates us to seek them in the first place. I personally would prefer that we avoid botching the wildly improbable, wonderful, and sublime experiment of nature that is us by failing to understand ourselves.

A New York Intellectual’s Unwitting Expose; Human Nature Among the Ideologues

Norman Podhoretz is one of the New York literati who once belonged to a group of leftist intellectuals he called the Family. He wrote a series of books, including Making It, Breaking Ranks, and Ex-Friends, describing what happened when he underwent an ideological metamorphosis from leftist radical to neoconservative. In the process he created a wonderful anthropological study of human nature in the context of an ingroup defined by ideology. Behavior within that ingroup was similar to behavior within ingroups defined by race, class, religion, ethnicity, or any of the other often subtle differences that enable ingroups to distinguish themselves from the “others.” The only difference was that, in the case of Podhoretz’ Family, the ingroup was defined by loyalty to ideological dogmas. Podhoretz described a typical denizen as follows:

Such a person takes ideas as seriously as an orthodox religious person takes, or anyway used to take, doctrine or dogma. Though we cluck our enlightened modern tongues at such fanaticism, there is a reason why people have been excommunicated, and sometimes even put to death, by their fellow congregants for heretically disagreeing with the official understanding of a particular text or even of a single word. After all, to the true believer everything important – life in this world as well as life in the next – depends on obedience to these doctrines and dogmas, which in turn depends on an accurate interpretation of their meaning and which therefore makes the spread of heresy a threat of limitless proportions.

This fear and hatred of the heretic, together with the correlative passion to shut him up one way or the other, is (to say the least, and in doing so I am bending over backward) as much a character trait of so-called liberal intellectuals as it is of conservatives… For we have seen that “liberal” intellectuals who tell us that tolerance and pluralism are the highest values, who profess to believe that no culture is superior to any other, and who are on that account great supporters of “multiculturalism” will treat these very notions as sacred orthodoxies, will enforce agreement with them in every venue in which they have the power to do so (the universities being the prime example at the moment), and will severely punish any deviation that dares to make itself known.

Podhoretz may not have been aware of the genetic roots responsible for such behavior, but he was certainly good at describing it. His description of status seeking, virtue signaling, hatred of the outgroup, allergic reaction to heretics, etc., within the Family would be familiar to any student of urban street gangs. As anthropological studies go, his books have the added advantage of being unusually entertaining, if only by virtue of the fact that his ingroup included such lions of literature as Norman Mailer, Hannah Arendt, Mary McCarthy, Allen Ginsberg, and Lionel Trilling.

Podhoretz was editor of the influential cultural and intellectual magazine Commentary from 1960 to 1995. When he took over, the magazine already represented the anti-Communist Left. However, he originally planned to take a more radically leftist line, based on the philosophy of Paul Goodman, a utopian anarchist. In his Growing Up Absurd, Goodman claimed that American society was stuck with a number of “incomplete revolutions.” To escape this “absurdity” it was necessary to complete the revolutions. Podhoretz seized on Goodman’s ideas as the “radical” solution to our social ills he was seeking, and immediately started a three-part serialization of the book in Commentary. Another major influence on Podhoretz at the time was Life Against Death by Norman O. Brown, a late Freudian tract intended to reveal “the psychoanalytical meaning of history.” It is depressing to read these books today in the knowledge that they were once taken perfectly seriously by people who imagined themselves to be the cream of the intellectual crop. Goodman certainly chose the right adjective for them – “absurd.”

In any case, as the decade wore on, the Left did become more radicalized, but not in the way foreseen by Podhoretz. What was known then as the New Left emerged, and began its gradual takeover of the cultural institutions of the country, a process that has continued to this day. When Podhoretz came of age, most leftists had abandoned the Stalinism or Trotskyism they had flirted with in the 30’s and 40’s, and become largely “pro-American” and anti-Communist as the magnitude of the slaughter and misery in the Soviet Union under Stalin became impossible to ignore. However, as the war in Vietnam intensified, the dogs returned to their vomit, so to speak. Leftists increasingly became useful idiots – effectively pro-Communist whether they admitted it or not. As Israel revealed its ability to defend itself effectively, they also became increasingly anti-Semitic, a development that likewise continues to this day. Then, as now, anti-Semitism was fobbed off as “anti-Zionism,” but Podhoretz, a Jew as were many of the other members of the Family, was not buying it. He may have been crazy enough to take Goodman and Brown seriously, but he was not crazy enough to believe that living in a totalitarian Communist state was preferable to living in the “imperialist” United States, nor, in light of the Holocaust, was he crazy enough to believe that the creation of a Jewish state was “unjust.” In the following passage he describes his response when he first began to notice this shift in the Zeitgeist, in this case on the part of an erstwhile “friend”:

I was not afraid of Jason. I never hesitated to cut him off when he began making outrageous statements about others, and once I even made a drunken public scene in a restaurant when he compared the United States to Nazi Germany and Lyndon Johnson to Hitler. This comparison was later to become a commonplace of radical talk, but I had never heard it made before, and it so infuriated me that I literally roared in response.

Today, of course, one no longer roars. One simply concludes that those who habitually resort to Hitler comparisons are imbeciles, and leaves it at that. In any case, Podhoretz began publishing “heretical” articles in Commentary, rejecting these notions, and nibbling away at the shibboleths that defined what had once been his ingroup in the process. In the end, he became a full-blown neoconservative. The behavioral responses to Podhoretz’ “treason” to his ingroup should be familiar to all students of human behavior. His first book-length challenge to his ingroup’s sense of its own purity and righteousness was Making It, published in 1967. As Podhoretz recalls,

In an article about Making It and its reception that was itself none too friendly to the book, Norman Mailer summed up the critical response as “brutal – coarse, intimate, snide, grasping, groping, slavering, slippery of reference, crude and naturally tasteless.” But, he added, “the public reception of Making It was nevertheless still on the side of charity if one compared the collective hooligan verdict to the earlier fulminations of the Inner Clan.” By the “Inner Clan,” Mailer meant the community of New York literary intellectuals I myself had called the Family. According to Mailer, what they had been saying in private about Making It even before it was published made the “horrors” of the public reception seem charitable and kind. “Just about everyone in the Establishment” – i.e., the Family – was “scandalized, shocked, livid, revolted, appalled, disheartened, and enraged.” They were “furious to the point of biting their white icy lips… No fate could prove undeserved for Norman, said the Family in thin quivering late-night hisses.”

Podhoretz notes that academia was the first of the cultural institutions of the country to succumb to the radical Gleichschaltung that has now established such firm control over virtually all the rest, to the point that it has become the new “normalcy.” In his words,

For by 1968 radicalism was so prevalent among college students that any professor who resisted it at the very least risked unpopularity and at the worst was in danger of outright abuse. Indeed it was in the universities that the “terror” first appeared and where it operated most effectively.

By the late 60’s the type of behavior that is now ubiquitous on university campuses was hardly a novelty. “De-platforming” was already part of the campus culture:

By 1968 SDS (the leftist Students for a Democratic Society) had moved from argument and example to shouting down speakers with whom it disagreed on the ground that only the “truth” had a right to be heard. And it also changed its position on violence… and a number of its members had gone beyond advocacy to actual practice in the form of bombings and other varieties of terrorism.

As Podhoretz documents, the War in Vietnam had originally been supported, and indeed started and continued, by intellectuals and politicians on the left of the political spectrum. He noted that Robert Kennedy had been prominent among them:

Kennedy too then grew more and more radicalized as radicalism looked more and more like the winning side. Having been one of the architects of the war in Vietnam and a great believer in resistance to Communist power in general, he now managed to suggest that he opposed these policies both in the small and in the large.

However, in one of the rapid changes in party line familiar to those who’ve read the history of Communism in the Soviet Union and memorialized by George Orwell in 1984, the hawks suddenly became doves:

…a point was soon reached where speakers supporting the war were either refused a platform or shouted down when they attempted to speak. A speaker whose criticisms were insufficiently violent could even expect a hard time, as I myself discovered when a heckler at Wayne State in Detroit accused me, to the clear delight of the audience, of not being “that much” against the war because in expressing my opposition to the American role I had also expressed my usual reservations about the virtues of the Communist side.

Of course, there was no Internet in the 60’s, so “de-platforming” assumed a form commensurate with the technology available at the time. Podhoretz describes it as follows:

The word “terror,” like everything else about the sixties, was overheated. No one was arrested or imprisoned or executed; no one was even fired from a job (though there were undoubtedly some who lost out on job opportunities or on assignments or on advances from book publishers they might otherwise have had). The sanctions of this particular reign of “terror” were much milder: one’s reputation was besmirched, with unrestrained viciousness in conversation and, when the occasion arose, by means of innuendo in print. People were written off with the stroke of an epithet – “fink” or “racist” or “fascist” as the case might be – and anyone so written off would have difficulty getting a fair hearing for anything he might have to say. Conversely, anyone who went against the Movement party line soon discovered that the likely penalty was dismissal from the field of discussion.

Seeing others ruthlessly dismissed in this way was enough to prevent most people from voicing serious criticisms of the radical line and – such is the nature of intellectual cowardice – it was enough in some instances to prevent them even from allowing themselves to entertain critical thoughts.

The “terror” is more powerful and pervasive today than it ever was in the 60’s, and its ability to “dismiss from the field of discussion” is far more effective. As a result, denizens of the leftist ingroup or those who depend on them for their livelihood tend to be very cautious about rocking the boat. That’s why young, pre-tenure professors include ritualistic denunciations of the established heretics in their fields before they dare to give even a slight nudge to the approved dogmas. Indeed, I’ve documented similar behavior by academics approaching retirement on this blog, so much do they fear ostracism by their own “Families.” Podhoretz noticed the same behavior early on by one of his erstwhile friends:

As the bad boy of American letters – itself an honorific status in the climate of the sixties – he (Norman Mailer) still held a license to provoke and he rarely hesitated to use it, even if it sometimes meant making a fool of himself in the eyes of his own admirers. But there were limits he instinctively knew how to observe; and he observed them. He might excoriate his fellow radicals on a particular point; he might discomfit them with unexpected sympathies (for right-wing politicians, say, or National Guardsmen on the other side of a demonstration) and equally surprising antipathies (homosexuality and masturbation, for example, he insisted on stigmatizing as vices); he might even on occasion describe himself as (dread word) a conservative. But always in the end came the reassuring gesture, the wink of complicity, the subtle signing of the radical loyalty oath.

So much for Podhoretz’s description of the behavioral traits of the denizens of an ideologically defined ingroup. I highly recommend all three of the books noted above, not only as unwitting but wonderfully accurate studies of “human nature,” but as very entertaining descriptions of some of the many famous personalities Podhoretz crossed paths with during his long career. One of them was Jackie Kennedy, who happened to show up at his door one day in the company of his friend, Richard Goodwin, “who had worked in various capacities for President Kennedy.”

She and I had never met before, but we seemed to strike an instant rapport, and at her initiative I soon began seeing her on a fairly regular basis. We often had tea alone together in her apartment on Fifth Avenue where I would give her the lowdown on the literary world and the New York intellectual community – who was good, who was overrated, who was amusing, who was really brilliant – and she would reciprocate with the dirt about Washington society. She was not in Mary McCarthy’s league as a bitchy gossip (who was?), but she did very well in her own seemingly soft style. I enjoyed these exchanges, and she (an extremely good listener) seemed to get a kick out of them too.

Elsewhere Podhoretz describes McCarthy as “our leading bitch intellectual.” Alas, she was an unrepentant radical, too, and even did a Jane Fonda in North Vietnam, but I still consider her one of our most brilliant novelists. I guess there’s no accounting for taste when it comes to ingroups.

Robert Plomin’s “Blueprint”: The Reply of the Walking Dead

The significance of Robert Plomin’s Blueprint is not that every word therein is infallible. Some reviewers have questioned his assertions about the relative insignificance of the role that parents, schools, culture, and other environmental factors play in the outcome of our lives, and it seems to me the jury is still out on many of these issues. See, for example, the thoughtful review by Razib Khan in the National Review. What is significant about the book is Plomin’s description of new and genuinely revolutionary experimental tools of rapidly increasing power and scope that have enabled us to confirm beyond any reasonable doubt that our DNA has a very significant influence on human behavior. In other words, there is such a thing as “human nature,” and it is important. This truth might seem obvious today. It is also a fact, however, that it was successfully suppressed and denied for over half a century by the vast majority of the “scientists” who claimed to be experts on human behavior.

There is no guarantee that such scientific debacles are a thing of the past. Ideologues devoted to the quasi-religious faith that the truth must take a back seat to their equalist ideals are just as prevalent now as they were during the heyday of the Blank Slate. Indeed, they are at least as powerful now as they were then, and they would like nothing better than to breathe new life into the flat earth dogmas they once foisted on the behavioral sciences. Consider, for example, a review of Blueprint by Nathaniel Comfort entitled “Genetic determinism rides again,” which appeared in the prestigious journal Nature. The first paragraph reads as follows:

It’s never a good time for another bout of genetic determinism, but it’s hard to imagine a worse one than this. Social inequality gapes, exacerbated by climate change, driving hostility towards immigrants and flares of militant racism. At such a juncture, yet another expression of the discredited, simplistic idea that genes alone control human nature seems particularly insidious.

Can anyone with an ounce of common sense, not to mention the editors of a journal that purports to speak for “science,” read such a passage and conclude that the author will continue with a dispassionate review of the merits of the factual claims made in a popular science book? One wonders what on earth they were thinking. Apparently Gleichschaltung is sufficiently advanced at Nature that the editors have lost all sense of shame. Consider, for example, the hoary “genetic determinism” canard. A “genetic determinist” is a strawman invented more than 50 years ago by the Blank Slaters of old. These imaginary beings were supposed to believe that our behavior is rigidly programmed by “instincts.” I’ve searched diligently during the ensuing years, but have never turned up a genuine example of one of these unicorns. They are as mythical as witches, but the Blank Slaters never tire of repeating their hackneyed propaganda lines. It would be hard to “discredit” the “simplistic idea that genes alone control human nature,” given that no one ever made such a preposterous claim to begin with, and Plomin certainly makes no such claim in Blueprint. Beyond that, what could possibly be the point of dragging in all the familiar dogmas of the “progressive” tribe? Apparently Nature would have us believe that scientific “truth” is to be determined by ideological litmus tests.

In the next paragraph Comfort supplies Plomin, a professor of behavior genetics, with the title “educational psychologist,” and sulks that his emphasis on chromosomal DNA leaves microbiologists, epigeneticists, RNA experts, and developmental biologists out in the cold. Seriously? Since when did these fields manage to hermetically seal themselves off from DNA and become “non-overlapping magisteria?” Do any microbiologists, epigeneticists, RNA experts or developmental biologists actually exist who consider DNA irrelevant to their field?

Comfort next makes the logically questionable claim that, because “Darwinism begat eugenics,” “Mendelism begat worse eugenics,” and medical genetics begat the claim that men with an XYY genotype were violent, behavioral genetics must also “beget” progeny that are just as bad. QED

Genome-wide association (GWA) methods, the increasingly powerful tools described in Blueprint that have now put the finishing touches on the debunking of the Blank Slate, are dismissed as something that “lures scientists” because of their “promise of genetic explanations for complex traits, such as voting behavior or investment strategies.” How Comfort distills this “promise” out of anything that actually appears in the book is beyond me. One wonders if he ever actually read it. That suspicion is greatly strengthened when one reads the following paragraph:

A polygenic score is a correlation coefficient. A GWAS identifies single nucleotide polymorphisms (SNPs) in the DNA that correlate with the trait of interest. The SNPs are markers only. Although they might, in some cases, suggest genomic neighborhoods in which to search for genes that directly affect the trait, the polygenic score itself is in no sense causal. Plomin understands this and says so repeatedly in the book – yet contradicts himself several times by arguing that the scores are in fact, causal.
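To make the objects under dispute concrete: a GWA study estimates, one SNP at a time, how strongly each variant is associated with a trait, and a polygenic score for an individual is nothing more mysterious than a weighted sum of that individual’s allele counts, using those estimated effects as the weights. The following Python sketch uses entirely made-up data – the sample sizes, effect sizes, and variable names are all hypothetical, and it is not anyone’s actual analysis pipeline – simply to illustrate the arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: allele counts (0, 1, or 2) for each person at each SNP.
n_people, n_snps = 5000, 1000
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)

# Assume a small fraction of SNPs have real but tiny effects on the trait.
true_beta = np.zeros(n_snps)
causal = rng.choice(n_snps, size=50, replace=False)
true_beta[causal] = rng.normal(0.0, 0.1, size=50)
trait = genotypes @ true_beta + rng.normal(0.0, 1.0, size=n_people)

# "GWAS" step: regress the trait on each SNP separately (simple OLS slopes).
# A real study would keep only SNPs passing a stringent significance threshold;
# here every estimated slope is used, for brevity.
geno_c = genotypes - genotypes.mean(axis=0)
trait_c = trait - trait.mean()
beta_hat = geno_c.T @ trait_c / (geno_c ** 2).sum(axis=0)

# Polygenic score: weighted sum of each person's allele counts.
pgs = genotypes @ beta_hat

print("corr(polygenic score, trait) =", np.corrcoef(pgs, trait)[0, 1])
```

The sketch also shows where the interpretive subtlety lies: the score correlates with the trait because the underlying DNA differences do some of the causal work, even though any individual SNP entering the score may be a mere marker sitting near the variant that actually matters.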

You have to hand it to Comfort: he can stuff a huge amount of disinformation into a small package. In the first place, the second and third sentences contradict each other. If SNPs are variations in the rungs of DNA that occur between individuals, they are not just markers, and they don’t just “suggest genomic neighborhoods in which to search for genes that directly affect the trait.” If they are reliable and replicable GWA hits, they are one of the actual points at which the trait is affected. Plomin most definitely does not “understand” that polygenic scores are in no sense causal, and nowhere does he say anything of the sort, much less “repeatedly.” What he does say is:

In contrast, correlations between a polygenic score and a trait can only be interpreted causally in one direction – from the polygenic score to the trait. For example, we have shown that the educational attainment polygenic score correlates with children’s reading ability. The correlation means that the inherited DNA differences captured by the polygenic score cause differences between children in their school achievement, in the sense that nothing in our brains, behavior, or environment can change inherited differences in DNA sequence.

I would be very interested to hear what Comfort finds “illogical” about that passage, and by virtue of what magical mental prestidigitations he proposes to demonstrate that the score is a “mere correlation.” Elsewhere we read,

Hereditarian books such as Charles Murray and Richard Herrnstein’s The Bell Curve (1994) and Nicholas Wade’s 2014 A Troublesome Inheritance (see N. Comfort Nature 513, 306–307; 2014) exploited their respective scientific and cultural moments, leveraging the cultural authority of science to advance a discredited, undemocratic agenda. Although Blueprint is cut from different ideological cloth, the consequences could be just as grave.

In fact, neither The Bell Curve nor A Troublesome Inheritance has ever been discredited, if by that term one means being proved factually wrong. If books are “discredited” according to how many ideological zealots begin foaming at the mouth on reading them, of course, it’s a different matter. Beyond that, if something is true, it does not become false by virtue of Comfort deeming it “undemocratic.” I could go on, but what’s the point? Suffice it to say that Comfort’s favorite “scientific authority” is Richard Lewontin, an obscurantist high priest of the Blank Slate if ever there was one, and a co-author of Not in Our Genes.

I can understand the desire of Nature’s editors to virtue-signal their loyalty to the prevailing politically correct fashions, but this “review” is truly abject. It isn’t that hard to find authors on the left of the political spectrum who can write a book review that is at least a notch above the level of tendentious ideological propaganda. See, for example, Kathryn Paige Harden’s review of Blueprint in the Spectator. Somehow she managed to write it without implying that Plomin is a Nazi in every second sentence. I suggest that next time they look a little harder.

My initial post about Blueprint tended to emphasize the historical ramifications of the book in the context of the Blank Slate disaster. As a result, my description of the scientific substance of the book was painted with a very broad brush. However, there are many good reviews out there that cover that ground, some of which express reservations similar to my own about Plomin’s conclusions regarding the importance of environment. See, for example, the excellent review by Razib Khan in the National Review linked above. As I mentioned in my earlier post, the book itself is only 188 pages long, so, by all means, read it.