Harvey Fergusson does have a Wikipedia page, but he’s not exactly a household name today. Remembered mostly as a writer of fiction, he produced some great Western novels, and some of the characters in his “Capitol Hill” will still be familiar to anyone who has worked in the nation’s capital. His name turns up in the screenwriting credits of a few movies, including “Stand Up and Fight,” starring the inimitable Wallace Beery, and his work even drew a few lines of praise from H. L. Mencken. As it happens, Fergusson wrote some non-fiction as well, including a remarkable book entitled Modern Man.
The main theme of the book is what Fergusson refers to as “the illusion of choice.” As one might expect of a good novelist, his conclusions are based on careful observation of human behavior, both in himself and others, rather than philosophical speculation. In his words,
It struck me sharply how much of the conversation of my typical modern fellow-being was devoted to explaining why he had done what he had done, why he was going to do what he intended, and why he had not done what he had once professed an intention to do. Some of my more sophisticated subjects would describe these explanations, when made by others, as “rationalizations” – a term which is vague but seems always to imply a recognition of the necessarily factitious nature of all such explanations of personal behavior. But I found none who did not take his own explanations of himself with complete seriousness. What is more, I have not found either in conversation or in print any recognition of what seems obvious to me – that these explanations typically have for their effect, if not for their unconscious motive, to sustain what I have termed the illusion of choice. This may be more adequately defined as the illusion that behavior is related more exactly and immediately to the conscious mental processes of the individual than any objective study of the evidence will indicate that it is.
Consider this in light of the following comment by Seth Schwartz who writes one of the Psychology Today blogs:
In a controversial set of experiments, neuroscientist Ben Libet (1985) scanned participants’ brains as he instructed them to move their arm. Libet found that brain activity increased even before participants were aware of their decision to move their arm. Libet interpreted this finding as meaning that the brain had somehow “decided” to make the movement, and that the person became consciously aware of this decision only after it had already been made. Many other neuroscientists have used Libet’s findings as evidence that human behavior is controlled by neurobiology, and that free will does not exist.
Fergusson was not quite as bold as “many other neuroscientists.” He made it quite clear that he wasn’t addressing the question of determinism or free will, but was merely recording his personal observations. In spite of that, he certainly anticipated what Libet and others would later observe in their experiments. What is even more remarkable is how accurately Fergusson describes the behavior of our current crop of public intellectuals.
Consider, for example, the question of morality. Some of them agree with me that moral judgments are subjective, and others insist they are objective. However, their moral behavior has nothing to do with their theoretical pronouncements on the matter. Just as Fergusson predicted, it is more or less identical with the moral behavior of everyone else. They all behave as if they actually believed the illusion that natural selection has planted in our brains: that Good and Evil are real, objective things. And just as Fergusson suggested, their after-the-fact claims about why they act that way are transparent rationalizations.
In the case of such “subjective moralists” as Richard Dawkins, Jonathan Haidt and Jerry Coyne, for example, we commonly find them passing down moral judgments that would be completely incomprehensible absent the tacit assumption of an objective moral law. In common with every other public intellectual I’m aware of, they tell us that one person is bad, and another person is good, as if these things were facts. To all appearances they feel no obligation whatsoever to explain how their “subjective” moral judgments suddenly acquired the power to leap out of their skulls, jump onto the back of some “bad” person, and constrain them to mend their behavior. Like me, the three cited above are atheists, and so must at least acknowledge some connection between our moral behavior and our evolutionary past. Under the circumstances, if one asked them to explain their virtuous indignation, the only possible response that has any connection with the reason moral behavior exists to begin with would be something like, “The ‘bad’ person’s actions are a threat to my personal survival,” or, “The ‘bad’ person is reducing the odds that the genes I carry will reproduce.” In either case, there is no way their moral judgments could have acquired the legitimacy or authority to dictate behavior to the “bad” person, or anyone else. I am not aware of a single prominent intellectual who has ever tried to explain his behavior in this way.
In fact, these people, like almost everyone else on the planet, are blindly responding to moral emotions, after seeking to “interpret” them in light of the culture they happen to find themselves in. In view of the fact that cultures bearing any similarity to the ones in which our moral behavior evolved are more or less nonexistent today, the chances that these “interpretations” will have anything to do with the reason morality exists to begin with are slim. In fact, there is little difference between the “subjective” moralists cited above and such “objective” moralists as Sam Harris in this regard. Ask them to explain one of their morally loaded pronouncements, and they would likely justify it in the name of some such nebulous “good” as “human flourishing.” After all, “human flourishing” must be “good,” right? Their whole academic and professional tribe agrees that it must be “really good.” To the extent that they feel any constraint to explain themselves at all, our modern “subjective” and “objective” moralists seldom get beyond such flimsy rationalizations.
Is it possible to defend “human flourishing” as a “moral good” that is at least consistent with the reason morality exists to begin with? I think not. To the extent that it is defined at all, “human flourishing” is usually associated with a modern utopia in which everyone is happy and has easy access to food, shelter, and anything else they could wish for. Such a future would be more likely to end in the dystopia comically portrayed in the movie Idiocracy than in the survival of our species. Its predictable end state would be biological extinction. Absent the reason high intelligence and the ability to thrive in diverse environments evolved, those characteristics would no longer be selected. If we use the survival of our species as the ultimate metric, “human flourishing” as commonly understood would certainly be “bad.”
Fergusson was an unusually original thinker, and there are many other thought-provoking passages in his book. Consider, for example, the following:
The basic assumption of conservatism is that “human nature does not change.” But it appears upon examination of the facts that human nature from the functional viewpoint has undergone constant change. Hardly any reaction of the human organism to its social environment has failed to change as the form, size, and nature of the human group has changed, and without such change the race could hardly have survived. That human nature will change and is changing seems to be one of the few things we can count upon, and it supports all our valid hopes for the amelioration of human destiny.
Here we see Fergusson as a typical denizen of the left of the ideological spectrum of his day. His comment encapsulates the reasoning that led to the radical rejection of the existence of human nature, and the disaster in the behavioral sciences we now refer to as the Blank Slate. Like many others, Fergusson suffered from the illusion that “human nature” implies genetic determinism: the notion that our behavior is rigidly programmed by our genes. In fact, I am not aware of a single serious defender of the existence of human nature who has ever been a “genetic determinist.” All have agreed that we are inclined or predisposed to behave in some ways and not in others, but not that we are rigidly forced by our “genes” to do so. Understood in this way, it is clear that evolved human nature is hardly excluded by the fact that “Hardly any reaction of the human organism to its social environment has failed to change as the form, size and nature of the human group has changed.” Properly understood, it is entirely compatible with the “changed reactions” Fergusson cited.
In reality, rejection of the existence of human nature did not “support all our valid hopes for the amelioration of human destiny.” What it really did was bring any meaningful progress in the behavioral sciences to a screeching halt for more than half a century, effectively blocking the path to any real “hope for the amelioration of human destiny.”
The fact that I don’t always agree with Fergusson does not alter my admiration for him as an original thinker. And by the way, if you happen to live in Maryland, I think you will find “Stand Up and Fight” worth a couple of hours of your time and a bowl of popcorn.