Corona and Nassim Taleb: The Tragedy of a Genius in a World of Imbeciles

I don’t pay much attention to Nassim Taleb. I doubt that it’s worth paying attention to anyone who claims that, after carefully pondering the matter, he’s concluded that Claire Lehmann of Quillette is a “neo-Nazi.” He’s also concluded that he’s a genius, and anyone who disagrees with him is an imbecile. Just ask him. Recently, however, he’s become the guru of the most extreme proponents of government lockdowns in response to virus pandemics. Let’s consider what they’re so excited about.

A good summary appears in a gushing article by Yves Smith at the “naked capitalism” website, entitled, “Taleb: The Only Man Who Has A Clue.” Smith does not inform us on what basis she feels herself qualified to dismiss every highly trained virologist and epidemiologist on the planet as “clueless.” Be that as it may, she cites Taleb’s “Systemic Risk Of Pandemic Via Novel Pathogens – Coronavirus: A note,” which summarizes his point of view under the headings, “General Precautionary Principle,” “Spreading Rate,” “Asymmetric Uncertainty,” and “Conclusion.” Let’s consider what he has to say under the first of these:

General Precautionary Principle: The general (non-naive) precautionary principle [3] delineates conditions where actions must be taken to reduce risk of ruin, and traditional cost-benefit analyses must not be used. These are ruin problems where, over time, exposure to tail events leads to a certain eventual extinction. While there is a very high probability for humanity surviving a single such event, over time, there is eventually zero probability of surviving repeated exposures to such events. While repeated risks can be taken by individuals with a limited life expectancy, ruin exposures must never be taken at the systemic and collective level. In technical terms, the precautionary principle applies when traditional statistical averages are invalid because risks are not ergodic.

Extinction? Zero probability of surviving repeated exposures to such events? Seriously?? There are millions of species on the planet. They all bear genetic material that has been around, in one form or another, for upwards of two billion years. Presumably, unless Taleb is claiming virus pandemics are unique to human beings, many of them have survived repeated exposures to such events. If life on our planet is not capable of surviving repeated exposures to pathogens without implementing Taleb’s nostrums, how is it that any life survives at all? It should all be gone, with the viruses turning out the lights on their way out. In what even remotely rational sense are we “risking ruin” with the coronavirus? We have lived with all kinds of much deadlier diseases throughout most of our history. Many of them have been persistent, and many others have passed over us repeatedly, and yet none of them, not even the Black Death, has managed to “ruin” us. Indeed, if you ask some of the legions of anti-natalists and radical environmentalists out there, this sort of “ruin” is just what the doctor ordered. After all, the global population has grown to the point where we are at least rocking the boat a bit lately.
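For what it’s worth, the arithmetic behind Taleb’s “zero probability” claim is trivial: if every such event carried some fixed, independent chance of wiping us out, the probability of surviving many of them would shrink geometrically toward zero. A minimal sketch, where the per-event probability is a made-up illustration rather than any kind of estimate:

```python
# Survival under repeated, independent exposure to a "ruin" event.
# The per-event extinction probability p below is purely hypothetical.

def survival_probability(p: float, n: int) -> float:
    """Chance of surviving n independent events, each fatal with probability p."""
    return (1.0 - p) ** n

p = 0.001  # assumed 0.1% chance of extinction per event
for n in (1, 100, 1_000, 10_000):
    print(f"{n:>6} events: survival probability {survival_probability(p, n):.6f}")
```

The dispute is over the premise, not the arithmetic: two billion years of lineages surviving repeated pathogen exposure suggest the per-event extinction probability is not some fixed constant that compounds to certain ruin.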

This raises the question of where you draw the line when it comes to “ruination.” Apparently, we are not sufficiently “ruined” by the yearly visitations of flu. When is a new pathogen serious enough to take the proposed drastic steps? Assuming we do magically find a way to identify this “ruination” line in the sand, is it really probable that our only choices are obeying Taleb or going extinct? I rather suspect that our species, in spite of all the idiots and imbeciles that Taleb has identified among us, is intelligent enough to come up with measures a great deal more effective than lockdowns and wearing masks, if it ever really comes to our staring extinction in the face.

As far as “ruin exposures” go, there are many who believe that the risk at the “collective level” is much greater from shutting down systems as complex as modern economies than it is from COVID-19. No doubt Taleb, who considers himself a genius in economics as well as epidemiology, would dismiss this concern with a wave of the hand. I’m not so sure. I rather doubt he’s the only man in the universe who has a clue. He informs us that this “precautionary principle applies when traditional statistical averages are invalid because risks are not ergodic.” This is a bit of jargon apparently intended to impress the rubes. I doubt that many of his worshipful fanboys have a clue what “ergodic” even means, and the term is best left in the realm of statistical mechanics in any case.
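For readers wondering what the jargon means: a process is “ergodic” when the average over many parallel trials equals the average a single individual experiences over time, and non-ergodic when the two diverge. The textbook illustration (not taken from Taleb’s note, just a standard toy example) is a multiplicative bet whose ensemble average grows even though almost every individual player is eventually ruined:

```python
import random

random.seed(0)

# Toy non-ergodic process: each flip multiplies wealth by 1.5 (heads)
# or 0.6 (tails). The ensemble average per flip is 0.5*1.5 + 0.5*0.6
# = 1.05, so "on average" the bet looks favorable. But the typical
# per-flip growth factor over time is (1.5 * 0.6) ** 0.5, about 0.95,
# so the typical individual player is ruined. When the two averages
# disagree like this, expectation values mislead.

def final_wealth(flips: int) -> float:
    wealth = 1.0
    for _ in range(flips):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

players = [final_wealth(100) for _ in range(100_000)]
median = sorted(players)[len(players) // 2]
losers = sum(1 for w in players if w < 1.0) / len(players)
print(f"median final wealth: {median:.5f}")      # typical player is nearly broke
print(f"fraction who lost money: {losers:.2f}")  # a large majority
```

Whether the coronavirus actually behaves like such a process is, of course, exactly what is in dispute.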

Speaking of rubes, have a look at the comments by Taleb’s followers on a Twitter thread in which Dr. Phil has the misfortune of being officially added to his list of “imbeciles.” These guys all consider themselves only a peg below Taleb among the ranks of the world’s greatest living geniuses. They must all walk around with permanent scowls on their faces from the stress of living among the rest of us idiots and imbeciles. They all imagine that they’re uttering great profundities as they squawk out variations on his latest tweets like so many parrots. In this case, Dr. Phil, interviewed by Laura Ingraham, rightly pointed out that we accept significant risks of death from driving, smoking, swimming, etc., without feeling any need to do something as extreme as locking down the economy. Taleb tweeted the interview with the snarky comment, “Drowning in swimming pools is extremely contagious and multiplicative.” All of his acolytes thought this was most profound, but in fact it is beside the point. Dr. Phil was making a point about acceptance of the risk of death. The fact that COVID-19 happens to be contagious just adds another number to factor in when quantifying risk. Overall risks can be calculated and compared regardless of whether one of them derives from a contagious disease or not.
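The point about comparability is easy to make concrete: once each hazard’s mortality is expressed in a common unit such as deaths per person-year, contagious and non-contagious risks sit on the same scale and can be ranked side by side. Contagion complicates the estimation of the number, not the comparison. A sketch with entirely hypothetical figures:

```python
# Compare annualized mortality risks on a common scale.
# Every number below is a hypothetical placeholder, not an estimate.
risks_per_person_year = {
    "driving": 1e-4,
    "swimming": 1e-5,
    "smoking (heavy)": 5e-3,
    "epidemic disease": 2e-3,  # contagion affects how this number is
                               # estimated, not whether it can be compared
}

for activity, risk in sorted(risks_per_person_year.items(),
                             key=lambda kv: kv[1], reverse=True):
    print(f"{activity:>18}: {risk:.0e} deaths per person-year")
```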

I started to have my doubts about Taleb when I began wading through the grab bag of banalities in his third book, “The Bed of Procrustes: Philosophical and Practical Aphorisms.” Anyone whose tastes run to that sort of thing would be well advised to forget Taleb and buy a copy of La Rochefoucauld’s maxims instead. They’re both more intelligent and more entertaining. One of them happens to be most appropriate in this case: “Self-love is the greatest of all flatterers.” In Taleb’s opinion, Claire Lehmann is a neo-Nazi, Bjorn Lomborg is a right-wing, sociopathic quack, and Dr. Phil is an imbecile. He’s entitled to his opinion. I’m entitled to mine too. In my opinion, he’s an over-inflated windbag.

More Fun with “Ethics”

One cannot make truth claims about morality because moral perceptions are subjective manifestations of evolved behavioral traits.  That fact should have been obvious to any rational human being shortly after the publication of The Origin of Species in 1859.  It was certainly obvious enough to Darwin himself.  Edward Westermarck spelled it out for anyone who still didn’t get it in his The Origin and Development of the Moral Ideas, published in 1906.  More than a century later one might think it should be obvious to any reasonably intelligent child.  Alas, most of us still haven’t caught on.  We still take our occasional fits of virtuous indignation seriously, and expect everyone else to take them seriously, too.  As for the “experts” who have assumed the responsibility of explaining to the rest of us when our fits are “really” justified, and when not, well, it seems they’ve never heard of a man named Darwin.  Or at least so it seems to anyone who takes the trouble to thumb through the pages of the journal Ethics.

You might describe Ethics as a playground for academic practitioners of moral philosophy.  They use it to regale each other with articles full of rarefied hair splitting and arcane jargon describing the flavor of morality they happen to prefer at the moment.  Of course, it also serves as a venue for accumulating the publications upon which academic survival depends.  Look through the articles in any given issue, and you’ll find statements like the following:

The reasons why actions are right or wrong sometimes are relatively straightforward, and then explicit moral understanding may be quite easy to achieve.

Since almost all civilians are innocent in war, and since killing innocent civilians is worse than killing soldiers, killing civilians is worse than killing soldiers.

We are constrained, it seems, not only not to treat others in certain ways, but to do so because they have the moral standing to demand that we do so, and to hold us accountable for wronging them if we fail.

Some deontologists claim that harm-enabling is a species of harm-allowing.  Others claim that while harm-enabling is properly classified as a species of harm-doing, it is nonetheless morally equivalent, all else equal, to harm-allowing.

Do you notice the common thread here?  That’s right!  All these statements are dependent on the tacit assumption that there actually is such a thing as moral truth.  In the first, that assumption comes in the form of a statement implying that what we call “good” and “evil” actually exist as objective things.  In the second, it comes in the form of an assumption that there is an objective way to determine guilt or innocence.  In the third, it manifests itself as a belief that the moral emotions can jump out of the skull of one individual and acquire “standing,” so that they apply to other individuals as well.  In the fourth, it turns up in the form of a standard by which it can be determined whether acts are “morally equivalent” or not.  Westermarck cut through the fog obfuscating the basis of such claims in the first chapter of his book.  As he put it,

As clearness and distinctness of the conception of an object easily produces the belief in its truth, so the intensity of a moral emotion makes him who feels it disposed to objectivize the moral estimate to which it gives rise, in other words, to assign to it universal validity.  The enthusiast is more likely than anybody else to regard his judgments as true, and so is the moral enthusiast with reference to his moral judgments.  The intensity of his emotions makes him the victim of an illusion.  The presumed objectivity of moral judgments thus being a chimera, there can be no moral truth in the sense in which this term is generally understood.  The ultimate reason for this is that the moral concepts are based upon emotions, and that the contents of an emotion fall entirely outside the category of truth.

In other words, all the learned articles on the merits of this or that moral system in the pages of Ethics and similar journals are more or less the equivalent of a similar number of articles on the care and feeding of unicorns, or the number of persons, natures and wills of imaginary super-beings.  Why don’t these people face the obvious?  Well, perhaps first and foremost, because it would put them out of a job.  Beyond that, all their laboriously acquired “expertise” would become as futile as the expertise of physicians in the 18th century on the proper technique for bleeding patients suffering from smallpox.  For that matter, most of them probably believe their own cant.  As Julius Caesar, among many others, pointed out long ago, human beings tend to believe what they want to believe.

Morality is what it is, and won’t become something different even if the articles in learned journals on the subject multiply until the stack reaches the moon.  What would happen if the whole world suddenly accepted the fact?  Very little, I suspect.  We don’t behave morally the way we do because of the scribblings of this or that philosopher.  We behave the way we do because that is our nature.  Accepting the truth about morality wouldn’t result in a chaos of moral relativism, or an astronomical increase in crime, or even a sudden jolt of the body politic to the right or the left of the political spectrum.  With luck, a few people might start considering the implications of the truth, and point out that all the virtue posturing and outbursts of pious wrath that are such a pervasive feature of the age we live in are more or less equivalent to the tantrums of children.  The result might be a world that is marginally less annoying to live in.  I personally wouldn’t mind living in a world in which the posturing of moral buffoons had become more a source of amusement than annoyance.

Haidt and Lukianoff on “The Coddling of the American Mind”

Jonathan Haidt and Greg Lukianoff have just published an article in The Atlantic entitled The Coddling of the American Mind that illustrates yet another pathological artifact of the bitter determination of our species to preserve the fantasy of objective morality.  They describe the current attempts of university students to enforce ideological orthodoxy by vilifying “microaggressions” against the true faith and insisting on “trigger warnings” to ensure the pure in heart will not be traumatized by allusions to Crimethink.

Haidt has done some brilliant work on the nature of human morality, and his The Righteous Mind is a must-read for anyone with a serious interest in understanding the subject.  Lukianoff is the CEO of the Foundation for Individual Rights in Education, which supports free-speech rights on campus.  According to his own account, his interest in the subject was catalyzed by his personal struggle with depression.

According to the article,

A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense.

Examples of the phenomena they describe may be found here, here, and here.  The authors note that,

The press has typically described these developments as a resurgence of political correctness.  That’s partly right, although there are important differences between what’s happening now and what happened in the 80s and 90s.  That movement sought to restrict speech… but it also challenged the literary, philosophical and historical canon, seeking to widen it by including more diverse perspectives.  The current movement is largely about emotional well-being.  More than the last, it presumes an extraordinary fragility of the collegiate psyche, and therefore elevates the goal of protecting students from psychological harm.

Which brings them to the theme of their article – that this “vindictive protectiveness,” as they call it, is not protecting students from psychological harm, but is actually causing it.  The rest of the article mainly consists of the claim that modern students are really the victims of some of the dozen “common cognitive distortions” that are listed at the end of the article, and suggests that “cognitive behavioral therapy,” which helped Lukianoff overcome his own struggle with severe depression, might be something they should try as well.

Maybe, but I suspect that the real motivation behind this latest “movement” has no more to do with “preventing psychological harm” than “preventing discomfort or giving offense” was the real motivation behind the PC movement, which, BTW, is at least as active now as it was in the 80s and 90s.  Rather, these phenomena are best understood as modern versions of the ancient game of moralistic one-upmanship.  In other words, they’re just a mundane form of status-seeking behavior.  As usual, by taking them seriously one just plays into the hands of the status seekers.

If you’d like to see what the phenomenon looked like in the ’60s, and didn’t happen to be around at the time, I recommend you have a look at the movie Getting Straight, starring Elliott Gould and Candice Bergen.  It’s all about the campus revolutions of the incredibly narcissistic and self-righteous Baby Boomers who were the “youth” of the time.  Encouraged by their doting parents, they imagined themselves the bearers of all worldly wisdom, guaranteed to be the creators of a future utopia.  Their doting (and despised) parents, of the generation that weathered the Great Depression, defeated the Nazis and fascists in World War II, and fended off Communism long enough for it to collapse of its own “internal contradictions” so their children had the opportunity to stage cathartic but entirely safe “revolutions,” are uniformly portrayed as idiots, standing in the way of “progress.”  The “student revolutionaries” in the film are every bit as pious as their modern analogs, and have equally idiotic demands.  The Brave New World will apparently only be possible if they are allowed to have coed dorms and gender and ethnic studies programs at every university, presumably to ensure they will have no marketable skills, and so will be able to continue the “revolution” after they graduate.  The amusing thing about the film, especially in retrospect, is that its creators weren’t intentionally creating a comedy.  They actually took themselves seriously.

For earlier versions of “vindictive protectiveness” most of us must turn to the history books.  The great Sage of Baltimore, H. L. Mencken, devoted a great deal of his time and energy to fighting its manifestation as the “Uplift” of his own day.  Many examples may be found in his six volumes of Prejudices, or his autobiographical Trilogy.  The latest version, BTW, has string bookmarks, just like the old family Bibles.  I’m sure the old infidel would have been amused.

Shakespeare loathed the “vindictive protectiveness” of his day, which came in its first really modern version: Puritanism.  See, for example, his Twelfth Night, which scorns the “morally righteous” of his day as personified by that “devil of a Puritan,” Malvolio.  For cruder, less culturally evolved versions, one can go back to the Blues and Greens of the Byzantine circus, or the Christian squabbles over assorted flavors of heresy in the 3rd, 4th and 5th centuries.

In a word, there is nothing new under the sun.  I’m sure Haidt realizes this.  After all, he devotes much of The Righteous Mind to describing and analyzing the phenomenon.  He plays along with the “cognitive behavioral therapy” stuff, because that’s what blows his co-author’s hair back, but still gets enough in between the lines to describe what is really going on.  For example, from the article,

A claim that someone’s words are “offensive” is not just an expression of one’s own subjective feeling of offendedness.  It is, rather, a public charge that the speaker has done something objectively wrong.  It is a demand that the speaker apologize or be punished by some authority for committing an offense.

And that really is, and always has been, the crux of the problem.  Nothing can be “objectively wrong.”  The origin of all these sublime and now microscopically distilled moral emotions will probably eventually be found in the ancient portions of our brains that we share with every other mammal, and probably the reptiles as well.  This “root cause” of moral behavior exists because it evolved.  It did not evolve because of its efficacy in fending off microaggressions, or to ensure that Mesozoic mammals would be sure to issue trigger warnings.  No, it evolved for the somewhat unrelated reason that it happened to increase the odds that certain “selfish genes,” or perhaps “selfish groups,” if you believe E. O. Wilson, would survive, and pass on the relevant DNA to latter-day animals with big brains, namely, us.  We get into trouble like this by over-thinking what our reptile brains are trying to tell us.  The problem will never go away until our self-knowledge develops to the point that we finally grasp this essential truth.  Until that great day dawns, we will have to grin and bear the annoyance of dealing with the pathologically and delicately self-righteous among us.  Roll out the fainting couches!


The Consequences of Natural Morality

Good and Evil are not objective things.  They exist as subjective impressions, creating a powerful illusion that they are objective things.  This illusion that Good and Evil are objects independent of the conscious minds that imagine them exists for a good reason.  It “works.”  In other words, its existence has enhanced the probability that the genes responsible for its existence will survive and reproduce.  At least this was true at the time that the mental machinery we lump together under the rubric of morality evolved.  Unfortunately, it is no longer necessarily true today.  Times have changed rather drastically, making it all the more important that, when we speak of Good and Evil, we actually know what we’re talking about.

Philosophers, of course, have been “explaining” morality to the rest of us for millennia, erecting all sorts of complicated systems based on the false fundamental assumption that the illusion is real.  Now that the cat is out of the bag and the rest of us are finally showing signs of catching up with Darwin and Hume, it’s no wonder they’re feeling a little defensive.  Wouldn’t you be upset if you’d devoted a lot of time to struggling through Kant’s incredibly obscure and convoluted German prose, only to discover that his categorical imperative is based on assumptions about reality that are fundamentally flawed?

A typical reaction has been to assert that the truth can’t be the truth because they would be unhappy with it.  For example, they tell us that, if the enhanced probability that certain genes would survive is the ultimate reason for the very existence of morality, then it follows that,

•  We must all become moral relativists

•  Punishment of criminals will be unjustified if Good and Evil are mere subjective impressions, and thus ultimately matters of opinion.

•  We cannot object to being robbed if some individuals have genes that predispose them to steal.

•  We cannot object to racism, anti-Semitism, religious bigotry, etc., if they are “in our genes.”

…and so on, and so on.  It’s as if we’re forbidden to act morally without the permission of philosophers and theologians.  I’ve got news for them.  We’ll continue to act morally, continue to be moral absolutists, and continue to punish criminals.  Why?  Because Mother Nature wants it that way.  It is our nature to act morally, to perceive Good and Evil as absolutes, and to punish free riders.  If you need evidence, look at Richard Dawkins’ tweets.  He’s a New Atheist, yet at the same time the most moralistic and self-righteous of men.  If asked to provide a rational basis for his moralizations, he would go wading off into an intellectual swamp.  That hardly keeps him from moralizing.  In other words, morality works whether you can come up with a “rational” basis for the existence of Good and Evil or not.  Furthermore, morality is the only game in town for regulating our social interactions with a minimum of mayhem.  As a species, we’re much too stupid to begin analyzing all our actions rationally with respect to their potential effects on our genetic destiny.

Other than that, of course, the truth about morality is what it is whether the theologians and philosophers approve of the truth or not.  They can like it or lump it.  My personal preference would be to keep it simple, and limit its sphere to the bare necessities.  We should also understand it.  In an environment radically different than the one in which it evolved, it can easily become pathological, prompting us to do things that are self-destructive, and potentially suicidal.  It would be useful to recognize such situations as they arise.  It would also be useful to promote instant recognition of the pathologically pious among us.  Their self-righteous posing can quickly become a social irritant.  In such cases, it can’t hurt to point out that they lack any logical basis for applying their subjective versions of Good and Evil to the rest of us.

Morality and the Dilemma of the Pious Atheists

If Jonathan Haidt is right, we are a pathologically pious species, with logical minds that evolved mainly to serve our innate self-righteousness.  Contemplate the behavior of modern atheists, and it seems plausible enough.  After all, they realize, or at least the more intelligent among them do, that we are an evolved species.  If they’ve looked at any of the recent flood of books on the subject, they also realize that our morality is the expression, not of the opinion of some supernatural being, but of evolved behavioral traits.  It exists because it promoted our survival at times when our mode of existence and environment were radically different from what they are now.  Good and evil are not objects and things-in-themselves.  Rather, they are subjective perceptions in the minds of individuals.  As such, they have no existence independent of those minds.  The odd thing (or perhaps the predictable thing, given the nature of our  species) is that these perfectly straightforward, rational conclusions seem to matter hardly at all.

Consider, for example, the case of Jerry Coyne, like me an atheist, and a latter-day Darwin’s bulldog.  He rejected the notion of objective Good and Evil in a recent post on his blog.  For example,

Now, I maintain that there is no objective morality: that morality is a guide for how people should get along in society, and that what is “moral” comports in general with the rules we need to live by in a harmonious society—one with greater “well being,” as Harris puts it.  A society in which half the inhabitants are dispossessed because they lack a Y chromosome is not a society brimming with well being, and I wouldn’t want to live in it.  And yes, what promotes “well being” can in principle be established empirically. But that still presumes that the best society is one that promotes the greatest “well being,” and that is an opinion, not a fact.

And yes, of course moral judgments can hinge on matters of real scientific truth! If you think that abortion is wrong because fetuses feel pain, that’s something that science can, in principle, find out. But in the end that still depends on an opinion: causing a fetus pain, even though doing so comports with the mother’s wishes, is immoral.  Just because a disagreement is “substantive” (whatever that means) does not mean that it can be resolved by determining objective truths.

Now I agree, of course, that throwing acid in the face of Afghan schoolgirls for trying to learn is wrong. But it is not an “objective” moral wrong—that is, you cannot deduce it from mere observation, not without adding some reasons why you think it’s wrong.  And those reasons are based on opinions.

Here we have an “is”:  Moral judgments are based on subjective perceptions or, if you will, opinions.  Nowhere does Coyne address the problem of “ought”:  how these subjective judgments might acquire the power to leap out of the brain of one individual and become applicable to other individuals as well, whether those other individuals like it or not.  Yet a couple of posts later, his own judgments have magically acquired that power!  Referring to a little girl who was bitten by a dolphin at SeaWorld, he writes,

I’m sorry the little girl was bitten, but that’s only the human side of the equation. What about the sufferings (yes, I think they suffer) of animals like dolphins, sea otters, and beluga whales forced to endlessly swim in circles in small tanks? (I once was moved almost to tears by watching an otter do this at the Shedd Aquarium in Chicago. I filed a complaint with a person in charge, but they completely ignored me.)  As a biologist, this outrages me.

Let us make no mistake here: this is not about conservation, and only pretends to be about education. In the end, it’s all about money.

This raises the question, “So what?”  Of course, Coyne could always beg off by claiming that he was merely describing his own, personal state of mind.  To that, I would reply, “Nonsense!”  His “outrage” is not a clinical description of his subjective state of mind at a particular moment, but a moral judgment directed at the “person in charge.”  His comment that the behavior he is outraged about is “all about money” is not just a neutered opinion, but a moral judgment.  How is it that Coyne’s state of mind has acquired this power over others?  In fact, if we are to believe what he has written on the subject himself, no such path to power and legitimacy exists.  The “person in charge” cannot be bound by Coyne’s “opinion,” any more than can the managers of SeaWorld and the Shedd Aquarium.  In spite of that, he has elevated his own perceptions of good and evil to the status of the very “objective truths” he denied a couple of posts earlier, as binding on others as on himself.

I don’t mean to single out Coyne.  His irrational behavior is pervasive, and predictable, given the nature of our species.  I, too, experience outrage at the maltreatment of animals.  Rationally, however, I realize that my outrage is a mental phenomenon that is in no way connected to a “Good” that exists independently, outside of my own brain.  The fact that Good and Evil don’t exist as independent objects in no way depends on acceptance of the hypothesis that human morality represents the expression of evolved behavioral traits, or on acceptance of the theory of evolution.  It does not even depend on whether a God exists or not.  A hypothetical super-being might have the power to fry me in hell for quadrillions and quintillions of years for failing to share his opinion, but his opinion, his “subject,” would not become an “object” for all that.

Why do I bother to bring this up?  I certainly am not immune to the “Coyne syndrome,” as readers of my blog will be quick to detect.  However, having long ago concluded that there is no rational basis for self-righteousness, I find it very tiresome, at least in others.  Beyond this personal whim, there is the matter of survival.  If, in fact, morality exists because it evolved, and it evolved because it promoted our survival, it would be somewhat incongruous if it became the ultimate cause of our extinction.  In the last century alone, the Communists murdered tens of millions for what they saw as the highest of moral reasons, and when Hitler exterminated the Jews, as he wrote in Mein Kampf, he believed he was doing “the Lord’s work.”  Under the circumstances, it seems to me that it would behoove us as a species to cultivate a lively awareness of the subjective nature of morality.  We must apply morality in our routine interactions with other individuals, because there is no alternative.  We should be leery of applying it outside of that sphere, or at least those of us should who, like me, subjectively prefer that our species not become extinct.

Note on the Pathologically Pious

I mentioned Malcolm Muggeridge’s post-mortem of a decade he had just lived through, The Thirties, in an earlier post.  There are any number of thought-provoking nuggets in the book, but one of the best has to do with the people I sometimes refer to as the pathologically pious.  These are the self-appointed saviors of one or another category of the oppressed and downtrodden, whose “selfless” crusades are always an irritant to the rest of us, and occasionally become downright dangerous.  Typically one finds them eternally locked in a noble struggle to right some egregious wrong, yet, in spite of all their self-attributed heroism, they never actually seem to reach the goal.  There’s good reason for that.  The “struggle” is the end in itself.  As Muggeridge put it,

In all movements which undertake the championship of the oppressed, and demand rectification of injustices and inequalities, there is, as in Don Quixote, a strong admixture of egotism.  Their leaders are usually heroic; but when their heroism is no longer required, they are left disconsolate, and sometimes embittered.  It seems cruel that they should be deprived of the limelight, or at best deserve as veterans only occasional acclamation, for no other reason than that what they agitated for has been wholly, or largely, obtained.  In their case, nothing fails like success.

The doom of all who invest imaginative hopes in earthly enterprises and mortal men, is for these enterprises to triumph.

In other words, as Skinner might have put it, the positive “reinforcement” for this sort of behavior lies not in actually achieving some hypothetical goal, but in the process of, or, perhaps more accurately, in the appearance of “struggling” to achieve that goal.  To put it more pithily, the pose is everything, and the reality nothing.

There’s nothing surprising or unexpected about this particular aspect of human behavior.  It’s a perfectly “normal” manifestation of the human traits associated with morality.  As is usually the case, it requires the Don Quixote in question to perceive the Good as an object, existing independently, outside of the subjective mind.  We are all programmed to perceive the Good in that way, even though no such object actually exists.  Evolution doesn’t arrive at solutions that respect abstract truth.  It arrives at solutions that promote genetic survival.

It is not difficult to understand why we should be programmed to perceive the Good in this way.  Assuming moral behavior promoted our ancestors’ survival in the first place, it is more plausible that it would do so in the form of emotional imperatives rather than as a mix of subjective alternatives for cave-dwelling philosophers to chew the fat over around the campfire at night.  This sort of programming apparently worked well enough in our prehistoric past.  After all, we’re still here.  In those days, the Good was associated almost exclusively with one’s own tribe or group, and the Evil with one’s neighbors.  The problem is, human societies have changed rather significantly since then.  We can now perceive the Evil in ways that Mother Nature never imagined during the long millennia in which we existed as small groups of hunter-gatherers.  Victor Davis Hanson provided just a few of the almost countless possibilities from a point of view on the political right in a recent article:

…there are new monsters in America, and I am starting to wonder whether I am to be considered among them: those of the uninvolved and uninformed lives, the bar-raisers, the downright mean ones, the never deserving of respect ones, the Vegas junketeers, the Super Bowl jet setters, the tuition stealers, the faux-Christians who do not pay higher taxes, the too much income makers, the tormenters of autistic children, the polluters, the enemies deserving of punishment, the targets to bring a gun against, the faces to get in front of, the limb-loppers, the tonsil pullers, the fat cats, the corporate jet owners, the one-percenters, the stupidly acting, the not paying their fair sharers, the discriminators on the “way you look”, the alligator raisers and moat builders, the vote deniers, the clingers, the typical something persons, the hunters of kids at ice cream parlors, the stereotypers and profilers, the cowards, the lazy and soft, the non-spreaders of money, the not my people people, the Tea party racists, the not been perfect and mistake makers, the disengaged and the dictating, the not the time to profiteers, the ones who did not know when to quit making money, and on and on.

Those on the left could compose a similar list, and it would be just as accurate.  One finds saviors of mankind occupying all points on the political spectrum, and they all perceive Good and Evil in a bewildering array of real and imagined entities that didn’t exist when the tendency to conceptualize Good and Evil as real, independent objects evolved.  As a result, human moral behavior is becoming increasingly dysfunctional.  If the preceding ages weren’t sufficient, the 20th century provided us with ample experimental confirmation of the fact.  Never before had so many people been slaughtered in the name of defending the Good in its Communist, Nazi, and assorted other ideological manifestations.

As one who cherishes the whim that our species should survive, I suggest that it’s high time that we a) realize we have a problem, and b) do something about it.  We have at least taken the first baby step towards this goal by finally realizing, after a bitter struggle, that there is such a thing as human nature, and that it exists because it evolved.  It seems to me that, once we have accepted these elementary facts and done a little thinking about their implications, we may be able to start breaking ourselves of the very satisfying but increasingly dangerous habit of inventing ever more imaginary Goods and the imaginary Evils of the sort noted by Mr. Hanson that invariably come along with them.

The advantages would be many.  For starters, we could finally dismiss all the pretensions of the pathologically pious, the obnoxiously self-righteous, and the permanently outraged among us to an exclusive knowledge of the ingredients of Virtue.  Instead of taking them seriously, would it not be better to smile in their faces, explain to them that the particular Good object that seems so real to them doesn’t actually exist, and, if they persist, house them in comfortable asylums?  The alternative is to wait and hope they go away, as we did so often in the past.  Sometimes it works, but sometimes it doesn’t and, as history has so copiously demonstrated, eventually they can accumulate enough power to start murdering those of us who are unfortunate enough to fit their description of Evil.  From a purely utilitarian point of view, it seems better not to take the risk.

Don’t Like People who Threaten Bloggers?

Then consider hitting Little Miss Attila’s tip jar. She’s being threatened by a religious nut case, is not independently wealthy, and could use your help. Insty and Eric at Classical Values have noticed, and I hope some of the other big dogs will pick up on the story as well. This cockroach needs to be dragged into the light.

Surprise! Cell Phones Really are Annoying

It’s official. The miserable creatures who sit down next to you in public places and yap on their cell phones nonstop really are annoying. According to a study soon to appear in the journal Psychological Science, people are irritated by cell phone conversations because it’s hard to tune them out.  It’s good to know I’m not unreasonably grumpy after all.  In the words of lead author Lauren Emberson of Cornell,

Hearing half a conversation is distracting because we are unable to predict the succession of speech… We believe this finding helps reveal how we understand language in conversation: We actively predict what the person is going to say next and this reduces the difficulty of language comprehension.

People are often more irritated by nearby cell phone conversations than by conversations between two people who are physically present.  Since halfalogues really are more distracting and you can’t tune them out, this could explain why.

In all fairness, the authors of “Pearls Before Swine” should be given priority for this discovery. In a previous study of annoying people that appeared in their comic strip some years ago, they included Fred, the guy who “makes unnecessary personal calls on his cell phone while in public places,” right up there with Myrna, who “obliviously blocks a whole grocery aisle with her shopping cart while looking for her favorite cake mix,” and Dirk, the guy who “reclines his airline seat all the way back and crushes the knees of whoever sits behind him,” in their “Box o’ Stupid People.”  It’s true their strip isn’t peer reviewed, but it will likely be more heavily cited than the Psychological Science paper.
