Posted on January 31st, 2015
The thought of brilliant individuals is worth considering regardless of the historical pigeonholes they happen to end up in. Sometimes we ignore them because they’re not in the “right” pigeonhole. For example, “philosophers” are dismissed by some as irrelevant since the advent of “science.” Such a cavalier attitude can be perilous if one is really seeking the truth. True, many philosophers were born too early, before Darwin or his theory were heard of, but that doesn’t mean their musings were useless. At the very least, they are of historical value, informing us of what was on the minds of the thinkers of days gone by. Sometimes, they are a great deal more valuable than that.
Consider, for example, what a certain 18th century British philosopher by the name of Francis Hutcheson had to say touching on the subject of morality as an expression of human nature. As my astute readers will recall, the “science” of much of the 20th century denied that human nature had anything to do with morality. The “scientists” who promoted this dogma, sometimes referred to today as the Blank Slaters, would have done well to read Hutcheson. He demolished the Blank Slate narrative two centuries before it became the greatest scientific debacle of the 20th century, if not of all time.
As it happens, there is a fascinating connection between thought about morality and human nature in British philosophy going back at least to the time of the Puritans of the 17th century. An excellent history of the subject was written by Michael Gill, entitled, The British Moralists on Human Nature and the Birth of Secular Ethics. Therein, Gill traces the debate between those who defended the possibility of morality based on reason alone, and those, like Hutcheson, as well as Shaftesbury before him and Hume after him, who claimed that a rational origin of morality was impossible. Of the three, Hutcheson deserves most of the credit for demonstrating that morality based on pure reason is impossible, and that a “moral sense,” grounded in human nature, is a prerequisite for its very existence.
As suggested above, history has deposited Hutcheson in the “philosopher” pigeon hole. However, he was well aware of the scientific method, and enthusiastic about the advance of science in his own time. He conscientiously sought to apply scientific technique to his own inquiries into human nature. His “experiments” consisted of keen observations of the moral behavior and reactions of other human beings, as well as a constant probing and examination of his own consciousness.
Hutcheson’s most important work on the subject was An Essay on the Nature and Conduct of the Passions and Affections, which was published in 1728. In that work he demonstrated that there can be no such thing as a purely reasonable morality, that reason cannot possibly serve as the end motivation for moral behavior, that only an innate moral sense can provide such motivating ends for moral actions, and that as a consequence of the fact that this moral sense is innate, it cannot be acquired purely by learning or, as moderns might put it, by “culture.” In other words, the Blank Slate is a logical impossibility.
Hutcheson begins by pointing out that many of the words available in human languages are imprecise, and as a result are blunt instruments for conducting inquiries into subjects as complex as the origins of human morality. In particular, it’s necessary to understand exactly what one means when one speaks of “reason.” As he puts it,
Since reason is understood to denote our power of finding out true propositions, reasonableness must denote the same thing, with conformity to true propositions, or to truth. Reasonableness in an action is a very common expression, but yet upon inquiry, it will appear very confused, whether we suppose it the motive to election, or the quality determining approbation.
It follows that, while reason can be applied to discover the consequences of an action, it can never provide motivation for choosing or approving it over any other:
If conformity to truth, or reasonable, denote nothing else but that “an action is the object of a true proposition,” ‘tis plain, that all actions should be approved equally, since as many truths may be made about the worst, as can be made about the best.
There is one sort of conformity to truth which neither determines to the one or the other; viz. that conformity which is between every true proposition and its object. This sort of conformity can never make us choose or approve one action more than its contrary, for it is found in all actions alike: Whatever attribute can be ascribed to a generous kind action, the contrary attribute may as truly be ascribed to a selfish cruel action: Both propositions are equally true.
Hutcheson went on to point out that, as a result, no ultimate end can ever be found using reason alone. Any end must have a motivating reason based on some other end. However, another reason must be supplied for this “other end,” and a reason must be found for that end as well. As each end is identified in turn, we can go on asking “why?” forever. As Hutcheson put it,
But as to the ultimate ends, to suppose exciting reasons for them, would infer, that there is no ultimate end, but that we desire one thing for another in an infinite series.
According to Hutcheson, two types of reasons can supply the ultimate answer to the final “why,” thereby ending the chain: “exciting” reasons and “justifying” reasons. I encourage those interested in the precise definition of these terms to read his book. In either case, however, such reasons cannot themselves be derived by reason, but presuppose the existence of “human nature:”
Now we shall find that all exciting reasons presuppose instincts and affections; and the justifying presuppose a moral sense.
If we assume the existence of human nature, the “reasons” fall easily into place:
Let us once suppose affections, instincts or desires previously implanted in our nature: and we shall easily understand the exciting reasons for actions, viz. “These truths which show them to be conducive toward some ultimate end, or toward the greatest end of that kind in our power.” He acts reasonably, who considers the various actions in his power, and forms true opinions of the tendencies; and then chooses to do that which will obtain the highest degree of that, to which the instincts of his nature incline him, with the smallest degree of those things to which the affections in his nature make him averse.
Of course, versions of the Blank Slate have been around since the days of the ancient Greek philosophers, and “updated” versions were current in Hutcheson’s own time. As he points out, they were as irrational then as they are now:
Some elaborate Treatises of great philosophers about innate ideas, or principles practical or speculative, amount to no more than this, “That in the beginning of our existence we have no ideas or judgments;” they might have added too, no sight, taste, smell, hearing, desire, volition. Such dissertations are just as useful for understanding human nature, as it would be in explaining the animal oeconomy, to prove that the faetus is animated before it has teeth, nails, hair, or before it can eat, drink, digest, or breathe: Or in a natural history of vegetables, to prove that trees begin to grow before they have branches, leaves, flower, fruit, or seed: And consequently that all these things were adventitious or the effect of art.
Now we endeavored to show, that “no reason can excite to action previously to some end, and that no end can be proposed without some instinct or affection.” What then can be meant by being excited by reason, as distinct from all motion of instincts or affections? …Then let any man consider whether he ever acts in this manner by mere election, without any previous desire? And again, let him consult his own breast, whether such kind of action gains his approbation. A little reflection will show, that none of these sensations depend upon our choice, but arise from the very frame of our nature, however we may regulate or moderate them.
A bit later, Hume used the same arguments as Hutcheson to demonstrate his famous dictum that,
Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.
As readers of such modern books as Jonathan Haidt’s The Righteous Mind are aware, Hume got all the credit, and Hutcheson is now more or less forgotten by all but professional philosophers. I suspect that’s because, as Gill pointed out, Hume supplied the final link in a chain of philosophers going back through Hutcheson to Shaftesbury, Cudworth, and many others, who had insisted on the origins of morality in human nature. Except for Hume, Hutcheson and most of the others had been firm believers in a Deity, and often Christian theologians. Like Hutcheson, they traced the origins of human nature to the hand of God. Hume was the exception, and could therefore be ensconced as the “Father of Secular Ethics.” That doesn’t alter the fact that Hutcheson had supplied compelling arguments for the existence and significance of human nature before Hume came on the scene. Those arguments remain unrefuted to this day. As the great Edvard Westermarck wrote nearly 200 years later:
That the moral concepts are ultimately based on emotions either of indignation or approval, is a fact which a certain school of thinkers have in vain attempted to deny.
Westermarck was familiar with Hutcheson, and referred to him in his own work. It’s a shame that the latter day Blank Slaters didn’t read him as well. It turns out that his “philosophy” was far in advance of their “science.” It took the “men of science” the greater part of the 20th century to finally crawl out of the swamp they had wandered into, and find Hutcheson there to greet them when they finally made it back to solid ground. There is no more important knowledge for human beings than self-knowledge. Occasionally one can find it hiding in the books of obscure philosophers.
Posted on December 31st, 2014
It’s great to see another title by E. O. Wilson. Reading his books is like continuing a conversation with a wise old friend. If you run into him on the street you don’t expect to hear him say anything radically different from what he’s said in the past. However, you always look forward to chatting with him because he’s never merely repetitious or tiresome. He always has some thought-provoking new insight or acute comment on the latest news. At this stage in his life he also delights in puncturing the prevailing orthodoxies, without the least fear of the inevitable anathemas of the defenders of the faith.
In his latest, The Meaning of Human Existence, he continues the open and unabashed defense of group selection that so rattled his peers in his previous book, The Social Conquest of Earth. I’ve discussed some of the reasons for their unease in an earlier post. In short, if it can really be shown that the role of group selection in human evolution has been as prominent as Wilson claims, it will seriously mar the legacy of such prominent public intellectuals as Richard Dawkins and Steven Pinker, as well as a host of other prominent scientists, who have loudly and tirelessly insisted on the insignificance of group selection. It will also require some serious adjustments to the fanciful yarn that currently passes as the “history” of the Blank Slate affair. Obviously, Wilson is firmly convinced that he’s on to something, because he’s not letting up. He dismisses the alternative inclusive fitness interpretation of evolution as unsupported by the evidence and at odds with the most up-to-date mathematical models. In his words,
Although the controversy between natural selection and inclusive fitness still flickers here and there, the assumptions of the theory of inclusive fitness have proved to be applicable only in a few extreme cases unlikely to occur on Earth or any other planet. No example of inclusive fitness has been directly measured. All that has been accomplished is an indirect analysis called the regressive method, which unfortunately has itself been mathematically invalidated.
Interestingly, while embracing group selection, Wilson then explicitly agrees with one of the most prominent defenders of inclusive fitness, Richard Dawkins, on the significance of the gene:
The use of the individual or group as the unit of heredity, rather than the gene, is an even more fundamental error.
Very clever, that, a preemptive disarming of the predictable invention of straw men to attack group selection via the bogus claim that it implies that groups are the unit of selection. The theory of group selection already has a fascinating, not to mention ironical, history, and its future promises to be no less entertaining.
When it comes to the title of the book, Wilson himself lets us know early on that it’s just a forgivable form of “poetic license.” In his words,
In ordinary usage the word “meaning” implies intention. Intention implies design, and design implies a designer. Any entity, any process, or definition of any word itself is put into play as a result of an intended consequence in the mind of the designer. This is the heart of the philosophical worldview of organized religions, and in particular their creation stories. Humanity, it assumes, exists for a purpose. Individuals have a purpose in being on Earth. Both humanity and individuals have meaning.
Wilson is right when he says that this is what most people understand by the term “meaning,” and he decidedly rejects the notion that the existence of such “meaning” is even possible later in the book by rejecting religious belief more bluntly than in any of his previous books. He provides himself with a fig leaf in the form of a redefinition of “meaning” as follows:
There is a second, broader way the word “meaning” is used, and a very different worldview implied. It is that the accidents of history, not the intentions of a designer, are the source of meaning.
I rather suspect most philosophers will find this redefinition unpalatable. Beyond that, I won’t begrudge Wilson his fig leaf. After all, if one takes the trouble to write books, one generally also has an interest in selling them.
As noted above, another significant difference between this and Wilson’s earlier books is his decisive support for what one might call the “New Atheist” line, as set forth in books by the likes of Richard Dawkins, Sam Harris, and Christopher Hitchens. Obviously, Wilson has been carefully following the progress of the debate. He rejects religions, significantly in both their secular as well as their traditional spiritual manifestations, as both false and dangerous, mainly because of their inevitable association with tribalism. In his words,
Religious warriors are not an anomaly. It is a mistake to classify believers of particular religious and dogmatic religionlike ideologies into two groups, moderate versus extremist. The true cause of hatred and violence is faith versus faith, an outward expression of the ancient instinct of tribalism. Faith is the one thing that makes otherwise good people do bad things.
and, embracing the ingroup/outgroup dichotomy in human moral behavior I’ve often alluded to on this blog,
The great religions… are impediments to the grasp of reality needed to solve most social problems in the real world. Their exquisitely human flaw is tribalism. The instinctual force of tribalism in the genesis of religiosity is far stronger than the yearning for spirituality. People deeply need membership in a group, whether religious or secular. From a lifetime of emotional experience, they know that happiness, and indeed survival itself, require that they bond with others who share some amount of genetic kinship, language, moral beliefs, geographical location, social purpose, and dress code – preferably all of these but at least two or three for most purposes. It is tribalism, not the moral tenets and humanitarian thought of pure religion, that makes good people do bad things.
Finally, in a passage worthy of New Atheist Jerry Coyne himself, Wilson denounces both “accommodationists” and the obscurantist teachings of the “sophisticated Christians:”
Most serious writers on religion conflate the transcendent quest for meaning with the tribalistic defense of creation myths. They accept, or fear to deny, the existence of a personal deity. They read into the creation myths humanity’s effort to communicate with the deity, as part of the search for an uncorrupted life now and beyond death. Intellectual compromisers one and all, they include liberal theologians of the Niebuhr school, philosophers battening on learned ambiguity, literary admirers of C. S. Lewis, and others persuaded, after deep thought, that there must be Something Out There. They tend to be unconscious of prehistory and the biological evolution of human instinct, both of which beg to shed light on this very important subject.
In a word, Wilson has now positioned himself firmly in the New Atheist camp. This is hardly likely to mollify many of the prominent New Atheists, who will remain bitter because of his promotion of group selection, but at this point in his career, Wilson can take their hostility cum grano salis.
There is much more of interest in The Meaning of Human Existence than I can cover in a blog post, such as Wilson’s rather vague reasons for insisting on the importance of the humanities in solving our problems, his rejection of interplanetary and/or interstellar colonization, and his speculations on the nature of alien life forms. I can only suggest that interested readers buy the book.
Posted on December 20th, 2014
‘Twas the month before Christmas, and Bill O’Reilly launched his usual jihad against the purported “War on Christmas.” It drew the predictable counterblasts from the Left, and I just happened to run across one that appeared back on December 4 on Huffpo, entitled “A War on Reason, Not on Christmas.” I must admit I find the “War on Christmas” schtick tiresome. Conservatives rightly point to the assorted liberal cults of victimization as so much pious grandstanding. It would be nice if they practiced what they preach and refrained from concocting similar cults of their own. Be that as it may, I found the article in question somewhat more unctuous and self-righteous than usual, and left a comment to that effect. It was immediately deleted.
My comment included no ad hominem attacks, nor was it abusive. I simply disagreed with the author on a few points, and noted that the political Left has an exaggerated opinion of its devotion to reason. The main theme of the article was the nature of the political divide in the U.S. According to the author, it is less between rich and poor than between “reasonable” liberals and “irrational” conservatives. As he put it,
Before imploding in the face of his sordid extramarital trysts, presidential candidate John Edwards based his campaign on the idea of two Americas, one rich the other poor. He was right about the idea that America is divided, but wrong about the nature of the division. The deeper and more important split is defined by religiosity, not riches.
The conflict between these two world views is made apparent in the details of our voting booth preferences. Religiosity alone is the most important, obvious and conclusive factor in determining voter behavior. Simply put, church goers tend to vote Republican. Those who instead go to the hardware store on Sunday vote Democrat by wide margins.
He then continued,
Those who accept the idea of god tend to divide the world into believers and atheists. Yet that is incorrect. Atheist means “without god” and one cannot be without something that does not exist. Atheism is really a pejorative term that defines one world view as the negative of another, as something not what something else is.
This evoked my first comment, which seemed to me rather harmless on the face of it. I merely said that as an atheist myself, I had no objection to the term, and would prefer to avoid the familiar game of inventing ever more politically correct replacements until we ended up with some abomination seven or eight syllables long. However, what followed was even more remarkable. The author proceeded to deliver himself of a pronouncement about the nature of morality that might have been lifted right out of one of Ardrey’s books. In a section entitled, “Secular and Religious Morality,” he writes,
Traits that we view as moral are deeply embedded in the human psyche. Honesty, fidelity, trustworthiness, kindness to others and reciprocity are primeval characteristics that helped our ancestors survive. In a world of dangerous predators, early man could thrive only in cooperative groups. Good behavior strengthened the tribal bonds that were essential to survival. What we now call morality is really a suite of behaviors favored by natural selection in an animal weak alone but strong in numbers. Morality is a biological necessity and a consequence of human development, not a gift from god.
Exactly! Now, as I’ve often pointed out to my readers, if morality really is the expression of evolved traits as the author suggests, it exists because it happened to enhance the chances that certain genes we carry would survive and reproduce in the environment in which they happened to appear. There is no conceivable way in which they could somehow acquire the magic quality of corresponding to some “real, objective” morality in the sky. There is no way in which they could assume a “purpose” attributed to them by anyone, whether on the left or the right of the political spectrum. Finally, there is no way in which they could acquire the independent legitimacy to dictate to anyone the things they “really” ought or ought not to do. So much is perfectly obvious. Assuming one really is “reasonable,” it follows immediately from what the author of the article says about the evolved origins of morality above. That, of course, is not how the Left is spinning the narrative these days.
No, for a large faction on the secular Left, the fact that morality is evolved means not merely that the God-given morality of the Christians and other religious sects is “unreasonable.” For them, it follows that whatever whims they happen to tart up as the secular morality du jour become “reasonable.” That means that they are not immoral, or amoral. They are, by default, the bearers of the “true morality.” In the article in question it goes something like this:
The species-centric arrogance of religion cultivates a dangerous attitude about our relationship with the environment and the resources that sustain us. Humanists tend to view sustainability as a moral imperative while theists often view environmental concerns as liberal interference with god’s will. Conservative resistance to accepting the reality of climate change is just one example, and another point at which religious and secular morality diverge, as the world swelters.
It’s wonderful, really. The Left has always been addicted to moralistic posing, and now they don’t have to drop the charade! Now they can be as self-righteous as ever, as devotees of this secular version of morality that has miraculously acquired the power to become a thing-in-itself, presumably drifting up there in the clouds somewhere beyond the profane ken of the unenlightened Christians. As it happens, at the moment my neighbors are largely Mormon, and I must say their dogmas appear to me to be paragons of “reason” compared to this secular version of morality in the sky.
Of course, I couldn’t include all these observations in the Huffpo comment section. I merely pointed out that what the author had said about morality would have branded him as a heretic no more than 20 years ago, and evoked frenzied charges of “racism” and “fascism” from the same political Left in which he now imagines himself so comfortably ensconced. That’s because 20 years ago the behavioral sciences were still in thrall to the Blank Slate orthodoxy, as they had been for 50 years and more at the time. That orthodoxy was the greatest debacle in the history of science, and it was the gift, not of the Right, but of the “reasonable” secular Left. That was the point I made in the comment section, along with the observation that liberals would do well to keep it in mind before they break their arms patting themselves on the back for being so “reasonable.”
The author concluded his article with the following:
There is no war on Christmas; the idea is absurd at every level. Those who object to being forced to celebrate another’s religion are drowning in Christmas in a sea of Christianity dominating all aspects of social life. An 80 percent majority can claim victimhood only with an extraordinary flight from reality. You are probably being deafened by a rendition of Jingle Bells right now. No, there is no war on Christmas, but make no mistake: the Christian right is waging a war against reason. And they are winning. O’Reilly is riding the gale force winds of crazy, and his sails are full.
I must agree that the beloved Christian holiday does have a fighting chance of surviving the “War on Christmas.” Indeed, Bill O’Reilly himself has recently been so sanguine as to declare victory. When it comes to popular delusions, however, I suspect the Left’s delusion that it has a monopoly on “reason” is likely to be even more enduring. As for the deletion of my comment, we all know about the Left’s proclivity for suppressing speech that they find “offensive.” Thin skins are encountered in those political precincts at least as frequently as the characteristic delusions about “reason.”
Posted on December 14th, 2014
…to tell me why, in the absence of data, they were so sure that religion was bad for the world. That is, how do they know that if the world had never had religion, it would be better than it is now?
That would seem to be an empirical question, resolvable only with data. Yet as far as I can see (and I haven’t read every comment), most readers feel that the question can be resolved not with data, but with logic or from first principles. Or, they cite anecdotes like religiously-inspired violence (my response would be that it’s easy to measure deaths, but not so easy to measure the consolation and well being that, believers claim, religion brings them). But pointing out that religion does bad stuff doesn’t answer the question if it’s been harmful on the whole.
As an atheist myself, my answer would be that the question is neither empirical nor resolvable with logic from first principles, because it implies an objective standard whereby such terms as “bad,” “better,” and “harmful” can be defined. No such objective standard exists. At best, one can identify the consequences and then decide whether they are “good” or “bad” based on one’s personal subjective whims. As long as it is clearly understood that my reply is based on that standard, I would say that religion is “bad.”
Supernatural beings either exist or they don’t. I don’t claim to know the truth of the matter with absolute certainty. I don’t claim to know anything with absolute certainty. I base my actions and my goals in life on what I consider probable rather than absolute truths, and I consider the chance that a God or other supernatural beings exist to be vanishingly small.
The question then becomes: do I, again from my personal point of view, consider it a good thing for other people to believe in supernatural beings even though I consider that belief an illusion? In short, the answer is no. It will never be possible for us to know and understand ourselves, either as individuals or as a species, if we believe things that are false, and yet have a profound impact on our understanding of where we come from, what the future holds for us, what morality is and why it exists, the nature of our most cherished goals, and how we live our lives. Our very survival may depend on whether or not we have an accurate knowledge of ourselves. I want my species to survive, and therefore I want as many of us as possible to have that knowledge.
According to a current manifestation of the naturalistic fallacy, religion “evolved,” and therefore it is “good.” Among other places, articles to this effect have appeared at the This View of Life website, edited by David Sloan Wilson, a noted proponent of group selection. Examples may be found here and here. According to the latter:
For Darwin, an inevitable conflict between evolution and religion could not exist for the simple reason that religiosity and religions had been biocultural products of evolution themselves! He realized in the 19th century what many religious Creationists and so-called “New Atheists” are trying to ignore in their odd alliance to this day: If evolutionary theory is true, it must be able to explain the emergence of our cognitive tendencies to believe in supernatural agencies and the forms and impacts of its cultural products.
I’m not sure which passages from the work of Darwin the article’s author construed to mean that he believed that “an inevitable conflict between evolution and religion could not exist,” but the idea is nonsense in any case. Many flavors of both Christianity and Islam explicitly deny the theory of evolution, and therefore a conflict most certainly does exist. That conflict will not disappear whether religiosity and religions are biocultural products of evolution or not. Assuming for the sake of argument that they are, that mere fact would be irrelevant to the questions of whether religiosity and religions are “good,” or whether supernatural beings actually exist or not.
In any case, I doubt that religiosity and religion are biocultural products of evolution in any but a very limited sense. It is most unlikely that genes could be “smart enough” to distinguish between supernatural and non-supernatural agencies in the process of installing innate behavioral tendencies in our brains. Some subset of our suite of innate behavioral predispositions might make it more likely for us to respond to and behave towards “leaders” in some ways and not in others. Once we became sufficiently intelligent to imagine supernatural beings, it became plausible that we might imagine one as “leader,” and culture could take over from there to come up with the various versions of God or gods that have turned up from time to time. That does not alter the fact that the “root cause” of these manifestations almost certainly does not directly “program” belief in the supernatural.
This “root cause,” supposing it exists, is to be found in our genes, and our genes are not in the habit of rigidly determining what we believe or how we act. In other words, our genes cannot force us to believe in imaginary beings, as should be obvious from the prevalence of atheists on the planet. Because of our genes we may “tend” to believe in imaginary beings, but it is at least equally likely that because of them we “tend” to engage in warfare. Supposing both tendencies exist, that mere fact hardly insures that they are also “good.” Insisting that the former is “good” is equivalent to the belief that it is “good” for us to believe certain lies. This begs the question of how anyone is to acquire the legitimate right to determine for the rest of us that it is “good” for us to believe in lies, not to mention which particular version of the lie is “most good.”
One can argue ad nauseam about whether, on balance, religion has been “good” because of the comfort and consolation it provides in this vale of tears, the art products it has spawned, and the sense of community it has encouraged, or “bad” because of the wars, intolerance, bigotry, and social strife that can be chalked up to its account. In the end, it seems to me that the important question is not who “wins” this argument, but whether religious claims are true or not. If, as I maintain, they are not, then, from my personal point of view, it is “good” that we should know it. It matters in terms of answering such questions as what we want to do with our lives and why.
Consider, for example, the question of life after death. Most of us don’t look forward to the prospect of death with any particular relish, and it is certainly plausible to claim that religion provides us with the consolation of an afterlife. Suppose we look at the question from the point of view of our genes. They have given rise to our consciousness, along with most of the other essential features of our physical bodies, because consciousness has made it more probable that those genes would survive and reproduce. When we fear death, we fear the death of our consciousness, but as far as the genes are concerned, consciousness is purely ancillary – a means to an end. If they “program” an individual to become a Catholic priest in order to inherit eternal life, and that individual fails to have children as a result, then, from the genes’ “point of view,” they have botched it.
In a sense, it is more rational to claim that “we” are our genes rather than that “we” are this ancillary entity we refer to as consciousness. In that case, “we” have never died. “Our” lives have existed in an unbroken chain, passed from one physical form to another for billions of years. The only way “we” can die is for the last physical “link in the chain” to fail to have children. Of course, genes don’t really have a point of view, nor do they have a purpose. They simply are. I merely point out that it would be absurd to imagine that “we” suddenly spring into existence when we are born, and that “we” then die and disappear forever with the physical death of our bodies. Why on earth would Mother Nature put up with such nonsense? It seems to me that such an irrational surmise must be based on a fundamental confusion about who “we” actually are.
Posted on December 7th, 2014
The Blank Slate affair was probably the greatest scientific debacle in history. For half a century, give or take, an enforced orthodoxy prevailed in the behavioral sciences, promoting the dogma that there is no such thing as human nature. So traumatic was the affair that no accurate history of it has been written to this day. What was it about the Blank Slate affair that transmuted what was originally just another false hypothesis into a dogma that derailed progress in the behavioral sciences for much of the 20th century? After all, the blank slate as a theory has been around since the time of Aristotle. A host of philosophers have supported it in one form or another, including John Locke, Jean-Jacques Rousseau, and John Stuart Mill. Many others had opposed them, including such prominent British moral philosophers as Shaftesbury, Hutcheson, Hume, and Mackintosh.
Sometimes the theories of these pre-Darwinian philosophers were remarkably advanced. Hume, of course, is often cited by evolutionary psychologists in our own time for pointing out that such human behavioral phenomena as morality cannot be derived by reason, and are rooted in emotion, or “passions.” In his words, “Reason is wholly inactive, and can never be the source of so active a principle as conscience, or a sense of morals.” The relative sophistication of earlier thinkers can also be demonstrated by comparing them with the rigid dogmas of the Blank Slaters of the 20th century who followed them. For example, the latter day dogmatists invented the “genetic determinist” straw man. Anyone who insisted, however mildly, on the existence of human nature was automatically denounced as a “genetic determinist,” that is, one who believes that human “instincts” are as rigid as those of a spider spinning its web, and that we are powerless to control them rationally. Real “genetic determinists” must be as rare as unicorns, because in spite of a diligent search I have never encountered one personally. The opponents of the Blank Slate against whom the charge of “genetic determinism” was most commonly leveled were anything but. They all insisted repeatedly that human behavior was influenced, not by rigid instincts that forced us to engage in warfare and commit acts of “aggression,” but by predispositions that occasionally worked against each other and could be positively directed or controlled by reason. As it happens, this aspect of the nature of our “nature” was also obvious to earlier thinkers long before Darwin. For example, 19th century British moral philosopher William Whewell, referring to the work of his co-philosopher Henry Sidgwick, writes,
The celebrated comparison of the mind to a sheet of white paper is not just, except we consider that there may be in the paper itself many circumstances which affect the nature of the writing. A recent writer, however, appears to me to have supplied us with a much more apt and beautiful comparison. Man’s soul at first, says Professor Sidgwick, is one unvaried blank, till it has received the impressions of external experience. “Yet has this blank,” he adds, “been already touched by a celestial hand; and, when plunged in the colors which surround it, it takes not its tinge from accident but design, and comes out covered with a glorious pattern.” This modern image of the mind as a prepared blank is well adapted to occupy a permanent place in opposition to the ancient sheet of white paper.
Note that Sidgwick was a utilitarian, and is often referred to as a “blank slater” himself. Obviously, he had a much more nuanced interpretation of “human nature” than the Blank Slaters of a later day, and was much closer, both to the thought of Darwin and to that of modern evolutionary psychologists, than they were. This, by the by, illustrates the danger of willy-nilly throwing all the thinkers who have ever mentioned some version of the blank slate into a common heap, or of ordering them all in a neat row, as if each one since the time of Aristotle “begat” the next after the fashion of a Biblical genealogy.
In any case, these pre-Darwinian thinkers and philosophers could occasionally discuss their differences without stooping to ad hominem attacks, and even politely. That, in my opinion, is a fundamental difference between them and the high priests of the Blank Slate orthodoxy. The latter day Blank Slaters were ideologues, not scientists. They derailed the behavioral sciences because their ideological narrative invariably trumped science, and common sense, for that matter. Their orthodoxy was imposed and enforced, not by “good science,” but by the striking of moralistic poses, and the vicious vilification of anyone who opposed them. And for a long time, it worked.
By way of example, it will be illuminating to look at the sort of “scientific” writings produced by one of these high priests, Richard Lewontin. Steven Pinker’s book, The Blank Slate, is occasionally flawed, but it does do a good job of describing the basis of Lewontin’s Blank Slate credentials. Interested readers are encouraged to check the index. As Pinker puts it,
So while Gould, Lewontin, and Rose deny that they believe in a blank slate, their concessions to evolution and genetics – that they let us eat, sleep, urinate, defecate, grow bigger than a squirrel, and bring about social change – reveal them to be empiricists more extreme than Locke himself, who at least recognized the need for an innate faculty of “understanding.”
Anyone doubting the accuracy of this statement can easily check the historical source material to confirm it. For example, in a rant against E. O. Wilson’s Sociobiology in the New York Review of Books, which Lewontin co-authored with Gould and others, we find, along with copious references to the “genetic determinist” bugbear,
We are not denying that there are genetic components to human behavior. But we suspect that human biological universals are to be discovered more in the generalities of eating, excreting and sleeping than in such specific and highly variable habits as warfare, sexual exploitation of women and the use of money as a medium of exchange.
Anyone still inclined to believe that Lewontin wasn’t a “real” Blank Slater need only consult the title of his most significant book on the subject, Not In Our Genes, published in 1984. What on earth was he referring to as “not in our genes,” if not innate behavior? As it happens, that book is an excellent reference for anyone who cares to examine the idiosyncratic fashion in which the Blank Slaters were in the habit of doing “science.” Here are some examples, beginning with the “genetic determinist” bogeyman:
Biological determinism (biologism) has been a powerful mode of explaining the observed inequalities of status, wealth, and power in contemporary industrial capitalist societies, and of defining human “universals” of behavior as natural characteristics of these societies. As such, it has been gratefully seized upon as a political legitimator by the New Right, which finds its social nostrums so neatly mirrored in nature; for if these inequalities are biologically determined, they are therefore inevitable and immutable.
Biological determinist ideas are part of the attempt to preserve the inequalities of our society and to shape human nature in their own image. The exposure of the fallacies and political content of those ideas is part of the struggle to eliminate those inequalities and to transform our society.
All of these recent political manifestations of biological determinism have in common that they are directly opposed to the political and social demands of those without power.
The Nobel Prize laureate Konrad Lorenz, in a scientific paper on animal behavior in 1940 in Germany during the Nazi extermination campaign said: “The selection of toughness, heroism, social utility… must be accomplished by some human institutions if mankind in default of selective factors, is not to be ruined by domestication induced degeneracy. The racial idea as the basis of the state has already accomplished much in this respect.” He was only applying the view of the founder of eugenics, Sir Francis Galton, who sixty years before wondered that “there exists a sentiment, for the most part quite unreasonable, against the gradual extinction of an inferior race.” What for Galton was a gradual process became rather more rapid in the hands of Lorenz’s efficient friends. As we shall see, Galton and Lorenz are not atypical.
Of course, Lewontin is a Marxist. Apparently, by applying the “dialectic,” he has determined that the fact that the process was even more rapid and efficient in the hands of his Communist friends doesn’t have quite the same “ideological” significance. As far as eugenics is concerned, it was primarily promoted by leftists and “progressives” in its heyday. Apparently Lewontin “forgot” that as well, for, continuing in the same vein, he writes:
The sorry history of this century of insistence on the iron nature of biological determination of criminality and degeneracy, leading to the growth of the eugenics movement, sterilization laws, and the race science of Nazi Germany has frequently been told.
The claim that “human nature” guarantees that inherited differences between individuals and groups will be translated into a hierarchy of status, wealth, and power completes the total ideology of biological determinism. To justify their original ascent to power, the new middle class had to demand a society in which “intrinsic merit” could be rewarded. To maintain their position they now claim that intrinsic merit, once free to assert itself, will be rewarded, for it is “human nature” to form hierarchies of power and reward.
Biological determinism, as we have been describing it, draws its human nature ideology largely from Hobbes and the Social Darwinists, since these are the principles on which bourgeois political economy are founded.
Everyone had to be stretched or squeezed to fit on the Procrustean bed of Lewontin’s Marxist dogma. In the process, E. O. Wilson became a “bourgeois” like all the rest:
More, by emphasizing that even altruism is the consequence of selection for reproductive selfishness, the general validity of individual selfishness in behaviors is supported. E. O. Wilson has identified himself with American neoconservative liberalism, which holds that society is best served by each individual acting in a self-serving manner, limited only in the case of extreme harm to others. Sociobiology is yet another attempt to put a natural scientific foundation under Adam Smith. It combines vulgar Mendelism, vulgar Darwinism, and vulgar reductionism in the service of the status quo.
This, then, was the type of “scientific” criticism favored by the ideologues of the Blank Slate. They had an ideological agenda, and so assumed that everything that anyone else thought, wrote, or said must be part of an ideological agenda as well. There could be no such thing as “mere disagreement.” Disagreement implied a rival agenda, one opposed to clearing the path to the Brave New World favored by the Blank Slaters; it therefore sought to institutionalize inequality, racism, and the evil status quo, and was criminal.
It’s hard to imagine anything more important than getting the historical record of the Blank Slate affair straight. We possess the means of committing suicide as a species. Self-knowledge is critical if we are to avoid that fate. The Blank Slate orthodoxy planted itself firmly in the path of any advance in human self-knowledge for a great many more years than we could afford to squander. In spite of that, the bowdlerization of history continues. Lewontin and the other high priests of the Blank Slate are being reinvented as paragons of reason, who were anything but “blank slaters” themselves, but merely applied some salutary adult supervision to the worst excesses of evolutionary psychology. They often left themselves an “out” for this eventual rehabilitation by protesting that they weren’t “blank slaters” at all. For example, again quoting from Lewontin:
Yet, at the same time, we deny that human beings are born tabulae rasae, which they evidently are not, and that individual human beings are simple mirrors of social circumstances. If that were the case, there could be no social evolution.
One can easily see through this threadbare charade by merely taking the trouble to actually read Lewontin. What Pinker has to say as noted above about the degree to which he was “not a blank slater” is entirely accurate. I know of not a single instance in which he has ever agreed that anything commonly referred to in the vernacular as “human nature,” as opposed to urinating, defecating, being taller than a squirrel, etc., is real. Throughout his career he has rejected the behavioral hypotheses of ethology (yes, I am referring to the behavior of animals other than man, as well as our own species), sociobiology, and evolutionary psychology root and branch.
It has been said that those who do not learn from history are doomed to repeat it. However, it’s not out of the question that we don’t have enough time left to enjoy the luxury of making the same mistake twice. Under the circumstances, we would be well-advised to take a very dim view of any future saviors of the world who show signs of adopting political vilification as their way of “doing science.”
Posted on December 3rd, 2014
The human types afflicted with the messianic itch have never been too choosy about the ideology they pick to scratch it. For example, the Nazis turned up some of their most delirious converts among the ranks of former Communists, and vice versa. The “true believer” can usually make do with whatever is available. The main thing is that whatever “ism” he chooses enables him to maintain the illusion that he is saving the world and clearing the path to some heavenly or terrestrial paradise, and at the same time supplies him with an ingroup of like-minded zealots. In the 20th century both Communism and Nazism/fascism, which had served admirably in their time, collapsed, leaving an ideological vacuum behind. As we all know, nature abhors a vacuum, and something had to fill it. Paradoxically, that “something” turned out to be radical Islam. For the true believers, it is now pretty much the only game in town. The result of this ideological sea change has been quite spectacular. The “human types” one would normally have expected to find in the ranks of the atheist Communists 50 or 75 years ago are now powerfully attracted to the latest manifestation of industrial strength religious fanaticism.
So far the ideological gap between the secular left that supplied the Communists of yesteryear and the jihadis of today has been a bit too wide for most western “progressives” to hop across. Instead, they’ve been forced to settle for casting longing gazes at the antics of the less inhibited zealots on the other side of the chasm. They can’t quite manage the ideological double back flip between the culture they come from and obscurantist Islam. Instead, they seize on surrogates, defending the “oppressed” Palestinians against the “apartheid” Israelis, meanwhile furiously denouncing anyone who dares to criticize the new inamorata they are forced to love from afar as “islamophobic.”
An interesting manifestation of this phenomenon recently turned up on the website of The Jacobin Magazine, which styles itself, “The leading voice of the American left.” In an article entitled “Old Atheism, New Empire,” one Luke Savage, described as “a student of political theory and formerly the editor of Canada’s largest student newspaper,” demonstrates that the New Atheists are not really the paladins of Enlightenment they claim to be, but are actually conducting a clever underground campaign to defend imperialism and provide a “smokescreen for the injustice of global capitalism!” Similar attacks on such New Atheist stalwarts as Richard Dawkins, Sam Harris, and the late Christopher Hitchens are becoming increasingly common as the Left’s love affair with radical Islam continues to blossom. The New Atheists, in turn, are finding that the firm ground on the left of the ideological spectrum they thought they were standing on has turned to quicksand.
It isn’t hard to detect the Islamist pheromones in the article in question. We notice, for example, that Savage isn’t particularly concerned about New Atheist attacks on religion in general. He hardly mentions Christianity. When it comes to Islam, however, it’s a different story. As Savage puts it,
It is against the backdrop of the war on terror, with its violent and destructive adventurism, that the notion of a monolithic evil called “Islam” has found a sizable constituency in the circles of liberal respectability.
As one might expect, this is followed by the de rigueur charge of racism:
The excessive focus on Islam as something at once monolithic and exceptionally bad, whose backwards followers need to have their rights in democratic societies suppressed and their home countries subjected to a Western-led civilizing process, cannot be called anything other than racist.
Moslem zealots, we find, aren’t really the enemy of, but actually belong in the pantheon of, officially endorsed and certified victim groups:
Criticisms of the violence carried out by fundamentalists of any kind – honor killings, suicide bombings, systemic persecution of women or gay people, or otherwise – are neither coherent nor even likely to be effective when they falsely attribute such phenomena to some monolithic orthodoxy.
The cognoscenti will have immediately noticed some amusing similarities between this rhetoric and that used to defend Communism in a bygone era. Notice, for example, the repeated insistence that Islam is not “monolithic.” Back in the day, one of the most hackneyed defenses of Communism was also that it was not “monolithic.” No doubt it was a great comfort to the millions slowly starving to death in the Gulag, or on their way to get a bullet in the back of the neck, that they at least weren’t the victims of a “monolithic” assassin. In case that’s too subtle for you, Savage spells it out, quoting from a book by Richard Seymour:
The function of [Hitchens’] antitheism was structurally analogous to what Irving Howe characterized as Stalinophobia…the Bogey-Scapegoat of Stalinism justified a new alliance with the right, obliviousness towards the permanent injustices of capitalist society, and a tolerance for repressive practices conducted in the name of the “Free World.” In roughly isomorphic fashion Hitchens’ preoccupation with religion…authorized not just a blind eye to the injustices of capitalism and empire but a vigorous advocacy of the same.
One would think that defending “the opiate of the masses” would be a bitter pill for any dedicated fighter against “capitalism and empire” to swallow, but Savage manages it with aplomb. Channeling the likes of Karen Armstrong, David Bentley Hart, and the rest of the “sophisticated Christians,” he writes,
Whether directed at Catholicism, Paganism, or Islam, the methodology employed to expose the inherent “irrationality” of all religions betrays a fundamental misunderstanding (or perhaps misrepresentation) of the nature of religious discourses, beliefs, and practices.
If that’s not quite rarified enough for you, how about this:
Moreover, the core assertion that forms the discursive nucleus of books like The God Delusion, God is Not Great, and The End of Faith – namely, that religious texts can be read as literal documents containing static ideas, and that the ensuing practices are uniform – is born out by neither real, existing religion or by its historical reality as a socially and ideologically heterogeneous phenomenon.
This is particularly significant in relation to the New Atheists’ denunciations of what they call “the doctrine of Islam” because it renders bare their false ontology of religion – one which more or less assumes that fundamentalism is the product of bad ideas rather than particular social and material conditions.
So Stalin wasn’t a bad boy. He just had a bad environment. See how that works? At this point Marx must be spinning in his grave, so we’ll leave these eloquent defenses of religion at that, and let the old man get some rest. In point of fact Marxism was itself a religion for all practical purposes. It just happened to be a secular one, with an earthly rather than a heavenly paradise. In its heyday, Communism had to damn the older, spiritual versions because messianic religions are never tolerant. Now that it’s defunct as an effective vehicle for militant zealotry, it’s pointless to continue trying to defend it from its spiritual competition.
In any case, the “progressive” flirtation with medieval obscurantism continues unabated. Will it ever become a full-fledged embrace? I suppose it’s not completely out of the question, but a lot of ideological baggage will have to be ditched along the way to that consummation. As for the New Atheists, one might say that they’ve just had a religious experience in spite of themselves. They’ve all been excommunicated.
Thanks to Tom at Happyjar.com for the cartoon. Check out his store!
Posted on December 2nd, 2014
If an article that just appeared in the LA Times is any indication, the agitation for jump-starting the nuclear weapons program at the Department of Energy (DOE) and the three nuclear weapons laboratories (Lawrence Livermore, Los Alamos, and Sandia National Laboratories) continues unabated. Entitled “New nuclear weapons needed, many experts say, pointing to aged arsenal,” it cites all the usual talking points of the weaponeers. For example,
Warheads in the nation’s stockpile are an average of 27 years old, which raises serious concerns about their reliability, they say. Provocative nuclear threats by Russian President Vladimir Putin have added to the pressure to not only design new weapons but conduct underground tests for the first time since 1992.
“It seems like common sense to me if you’re trying to keep an aging machine alive that’s well past its design life, then you’re treading on thin ice,” said Rep. Mac Thornberry (R-Texas), chairman-elect of the House Armed Services Committee. “Not to mention, we’re spending more and more to keep these things going.”
Thornberry also offered support for renewed testing, saying, “You don’t know how a car performs unless you turn the key over. Why would we accept anything less from a weapon that provides the foundation for which all our national security is based on?”
Such comments are entirely typical. They would make a lot of sense if the U.S. nuclear weapons program existed in a vacuum. However, it doesn’t. It exists in a world with several other major nuclear powers, and they all have the same problems. Under the circumstances, the fact that such problems exist and are shared by all the nuclear powers is less significant than the question of which nuclear power is best equipped to deal with them. The question of who will benefit by the building of new weapons and a resumption of nuclear testing depends on the answer to that question. If one country has a significant advantage over its rivals in dealing with a common problem as long as the status quo is maintained, then it would be very ill-advised to initiate a change to the status quo that would allow them to catch up. At the moment, the United States is the country with an advantage. As noted in the article,
The U.S. has by far the greatest archive of test data, having conducted 1,032 nuclear tests. Russia conducted 715 and China only 45.
Beyond that, we have the ability to conduct tests with conventional explosives that mimic what goes on in the initial stages of a nuclear explosion, and superb diagnostics to extract a maximum of data from those tests. Perhaps more importantly, we have an unrivaled above ground experimental, or AGEX, capability. I refer to machines like Z at Sandia National Laboratories, or the NIF at Livermore, which are far more capable and powerful than similar facilities anywhere else in the world. Those who say these facilities can’t access physical conditions relevant to those that occur in exploding nuclear weapons, or that they are useless for weapon effects or weapon physics experiments, either don’t know what they’re talking about or are attempting to deceive.
As far as the NIF is concerned, it is quite true that it has so far failed to achieve its fusion ignition milestone, but that by no means rules out the possibility that it eventually will. More importantly, the NIF will remain a highly useful AGEX facility whether it achieves ignition or not. Indeed, before it was built, many of the weapons designers showed little interest in ignition. It would merely “muddy the waters,” making it more difficult for the diagnostics to precisely record the results of an experiment. The NIF could access weapons-relevant conditions without it. In fact, in spite of its failure to achieve ignition to date, the NIF has been a spectacular success as far as achieving its specifications is concerned. It is more than an order of magnitude more powerful than any previously existing laser system, its 192 laser beams are highly accurate, and its diagnostic suite is superb.
Another problem with the resumption of testing is that it will lead to the development of weapons that are much more likely to be used. Once the nuclear genie is out of the bottle, it will likely prove very difficult to put it back in. For example, again quoting the article,
John S. Foster Jr., former director of Lawrence Livermore National Laboratory and chief of Pentagon research during the Cold War, said the labs should design, develop and build prototype weapons that may be needed by the military in the future, including a very low-yield nuclear weapon that could be used with precision delivery systems, an electromagnetic pulse weapon that could destroy an enemy’s communications systems and a penetrating weapon to destroy deeply buried targets.
The commonly heard narrative at DOE goes something like this: “We need to develop small, precise, penetrating nuclear weapons because they will be a much better deterrent than the existing ones. Potential enemies are unlikely to believe that we would ever use one of the high yield weapons that are all that remain in the current arsenal. They would be far more likely to believe that we might use a small bunker buster that would minimize the possibility of significant collateral damage.” The problem with that narrative is that it’s true. We would be far more likely to use such a weapon than the ones in the current arsenal, and there would be no lack of voices within DOE and DoD calling for its use if an appropriate opportunity ever arose.
I can understand the agitation for a resumption of testing. It’s a lot sexier to make things that go boom than to serve as custodians for an aging pile of existing nukes. Unfortunately, the latter course is the wiser one. By resuming nuclear testing we would really be unilaterally surrendering a huge advantage, playing into the hands of our enemies and destabilizing the nuclear landscape at the same time.
Posted on November 19th, 2014
An article entitled “The Evolution of War – A User’s Guide” recently turned up at “This View of Life,” a website hosted by David Sloan Wilson. Written by Anthony Lopez, it is one of the more interesting artifacts of the ongoing “correction” of the history of the debate over human nature I’ve seen in a while. One of the reasons it’s so remarkable is that Wilson himself is one of the foremost proponents of the theory of group selection. Lopez claims in his article that one of the four “major theoretical positions” in the debate over the evolution of war is occupied by the “group selectionists,” and yet he conforms to the prevailing academic conceit of studiously ignoring the role of Robert Ardrey, who was not only the most influential player in the “origins of war” debate, but overwhelmingly so in the whole “Blank Slate” affair as well. Why should that be so remarkable? Because at the moment the academics’ main rationalization for pretending they never heard of a man named Ardrey is (you guessed it) his support for group selection!
When it comes to the significance of Ardrey, you don’t have to take my word for it. His was the most influential voice in a growing chorus that finally smashed the Blank Slate orthodoxy. The historical source material is all still there for anyone who cares to trouble themselves to check it. One invaluable piece thereof is “Man and Aggression,” a collection of essays edited by arch-Blank Slater Ashley Montagu and aimed mainly at Ardrey, with occasional swipes at Konrad Lorenz, and with William Golding, author of “Lord of the Flies,” thrown in for comic effect. The last I looked you could still pick it up for a penny at Amazon. For example, from one of the essays by psychologist Geoffrey Gorer,
Almost without question, Robert Ardrey is today the most influential writer in English dealing with the innate or instinctive attributes of human nature, and the most skilled populariser of the findings of paleo-anthropologists, ethologists, and biological experimenters… He is a skilled writer, with a lively command of English prose, a pretty turn of wit, and a dramatist’s skill in exposition; he is also a good reporter, with the reporter’s eye for the significant detail, the striking visual impression. He has taken a look at nearly all the current work in Africa of paleo-anthropologists and ethologists; time and again, a couple of his paragraphs can make vivid a site, such as the Olduvai Gorge, which has been merely a name in a hundred articles.
In case you’ve been asleep for the last half a century, the Blank Slate affair was probably the greatest debacle in the history of science. The travails of Galileo and the antics of Lysenko are child’s play in comparison. For decades, whole legions of “men of science” in the behavioral sciences pretended to believe there was no such thing as human nature. As was obvious to any ten year old, that position was not only not “science,” it was absurd on the face of it. However, it was required as a prop for a false political ideology, and so it stood for half a century and more. Anyone who challenged it was quickly slapped down as a “fascist,” a “racist,” or a denizen of the “extreme right wing.” Then Ardrey appeared on the scene. He came from the left of the ideological spectrum himself, but also happened to be an honest man. The main theme of all his work in general, and the four popular books he wrote between 1961 and 1976 in particular, was that there is such a thing as human nature, and that it is important. He insisted on that point in spite of a storm of abuse from the Blank Slate zealots. On that point, on that key theme, he has been triumphantly vindicated. Almost all the “men of science,” in psychology, sociology, and anthropology were wrong, and he was right.
Alas, the “men of science” could not bear the shame. After all, Ardrey was not one of them. Indeed, he was a mere playwright! How could men like Shakespeare, Ibsen, and Moliere possibly know anything about human nature? Somehow, they had to find an excuse for dropping Ardrey down the memory hole, and find one they did! There was actually more than one, but the main one was group selection. Writing in “The Selfish Gene” back in 1976, Richard Dawkins claimed that Ardrey, Lorenz, and Irenäus Eibl-Eibesfeldt were “totally and utterly wrong,” not because they insisted there was such a thing as human nature, but because of their support for group selection! Fast forward to 2002, and Steven Pinker managed the absurd feat of writing a whole tome about the Blank Slate that mentioned Ardrey in only a single paragraph, and then only to assert that he had been “totally and utterly wrong,” period, on Richard Dawkins’ authority, and with no mention of group selection as the reason. That has been the default position of the “men of science” ever since.
Which brings us back to Lopez’ paper. He informs us that one of the “four positions” in the debate over the evolution of war is “The Killer Ape Hypothesis.” In fact, there never was a “Killer Ape Hypothesis” as described by Lopez. It was a strawman, pure and simple, concocted by Ardrey’s enemies. Note that, in spite of alluding to this imaginary “hypothesis,” Lopez can’t bring himself to mention Ardrey. Indeed, so effective has been the “adjustment” of history that, depending on his age, it’s quite possible that he’s never even heard of him. Instead, Konrad Lorenz is dragged in as an unlikely surrogate, even though he never came close to supporting anything even remotely resembling the “Killer Ape Hypothesis.” His main work relevant to the origins of war was “On Aggression,” and he hardly mentioned apes in it at all, focusing instead mainly on the behavior of fish, birds and rats.
And what of Ardrey? As it happens, he did write a great deal about our ape-like ancestors. For example, he claimed that Raymond Dart had presented convincing statistical evidence that one of them, Australopithecus africanus, had used weapons and hunted. That statistical evidence has never been challenged, and continues to be ignored by the “men of science” to this day. Without bothering to even mention it, C. K. Brain presented an alternative hypothesis that the only acts of “aggression” in the caves explored by Dart had been perpetrated by leopards. In recent years, as the absurdities of his hypothesis have been gradually exposed, Brain has been in serious row back mode, and Dart has been vindicated to the point that he is now celebrated as the “father of cave taphonomy.”
Ardrey also claimed that our apelike ancestors had hunted, most notably in his last book, “The Hunting Hypothesis.” When Jane Goodall published her observation of chimpanzees hunting, she was furiously vilified by the Blank Slaters. She, too, has been vindicated. Eventually, even PBS aired a program about hunting behavior in early hominids, and, miraculously, just this year even the impeccably politically correct “Scientific American” published an article confirming the same in the April edition! In a word, we have seen the vindication of these two main hypotheses of Ardrey concerning the behavior of our apelike and hominid ancestors. Furthermore, as I have demonstrated with many quotes from his work in previous posts, he was anything but a “genetic determinist,” and, while he strongly supported the view that innate predispositions, or “human nature,” if you will, have played a significant role in the genesis of human warfare, he clearly did not believe that it was unavoidable or inevitable. In fact, that belief is one of the main reasons he wrote his books. In spite of that, the “Killer Ape” zombie marches on, and turns up as one of the “four positions” that are supposed to “illuminate” the debate over the origins of war, while another of the “positions” is supposedly occupied, of all things, by “group selectionists!” History is nothing if not ironical.
Lopez’ other two “positions” include “The Strategic Ape Hypothesis,” and “The Inventionists.” I leave the value of these remaining “positions” to those who want to “examine the layout of this academic ‘battlefield’”, as he puts it, to the imagination of my readers. Other than that, I can only suggest that those interested in learning the truth, as opposed to the prevailing academic narrative, concerning the Blank Slate debacle would do better to look at the abundant historical source material themselves than to let someone else “interpret” it for them.
Posted on November 4th, 2014 No comments
Modern philosophers are a touchy bunch. They resent their own irrelevance. The question is, why have they become so marginalized? After all, it wasn’t always so. Consider, for example, the immediate and enduring impact of the French philosophes of the 18th century. I can’t presume to give a complete answer in this blog post, but an article by Uri Bram that recently turned up at Café.com entitled, This Philosopher Wants to Change How You Think About Doing Good might at least contain a few hints.
It’s an account of the author’s encounter with a young philosopher named Will MacAskill who, not uncharacteristically, has a job in the Academy, in his case at Cambridge. Bram assures us that, “he’s already a superstar among his generation of philosophers.” We learn he also has a “fondness for mild ales, a rollicking laugh, a warm Scottish accent and a manner that reminds you of the kid everyone likes in senior year of high school—not the popular kid, mind, but the kid everyone actually likes.” If you get the sinking feeling that you’re about to read a hagiography, you won’t be mistaken. It reminded me of what Lenin was talking about when he referred to “the silly lives of the saints.”
According to Bram, MacAskill had already sensed the malaise in modern philosophy by the time he began his graduate studies:
“I kept going to academics and actively trying to find people who were taking these ideas seriously and trying to make a difference, trying to put them into practice,” he says. But (for better or worse), academic philosophy as a whole is not generally focused on having a direct, practical impact. “Someone studying the philosophy of linguistics or logic is probably doing it as a pure intellectual enterprise,” says MacAskill, “but what surprised me was the extent to which even applied philosophers weren’t having any impact on the world. I spoke to a giant in the field of practical ethics, one of the most successful applied philosophers out there, and asked him what impact he thought he’d had with his work; he replied that someone had once sent him an email saying they’d signed up for the organ donor register based on something he’d written. And that made me sad.”
Then he had an epiphany, inspired by a conversation with fellow graduate student Toby Ord:
One of the things that most impressed MacAskill about Ord was the extent to which the latter was walking the talk on his philosophical beliefs, manifested by his pledge to give away everything he earned above a certain modest threshold to organizations working effectively towards reducing global suffering (a story about Ord in the BBC’s news magazine became a surprise hit in 2010).
Ord, it turns out, was a modern incarnation of Good King Wenceslas, who had pledged to give away a million pounds to charity in the course of his career. To make a long story short, MacAskill decided to make a similar pledge, and founded an organization with Ord to get other people to do the same. He has since been going about doing similar good works, while at the same time publishing the requisite number of papers in all the right philosophical journals.
As it happens, I ran across this article thanks to a reference at 3quarksdaily, and my thoughts about it were the same as some of the commenters there. For example, from one who goes by the name of Lemoncookies,
I see nothing particularly original or profound in this young man’s suggestion, which basically amounts to: give more to charity. Lots of people have made this their clarion call, and lots of people already and will give to charity.
Another named Paul chimes in,
I find the suggestion humorous that a 27-year-old is going “to revolutionize the way you think about doing good.” What effort the philosophers will go to in order to maintain their hegemony on moral reasoning. Unfortunately, I think they missed the boat 150 years ago by ignoring evolution and biology. They have been treading water ever since yet still manage to attract followers.
He really hits the nail on the head with that one. It’s ludicrous to write hagiographies about people who are doing “good” unless you understand what “good” is, and there has been no excuse for not understanding what “good” is since Darwin published “On the Origin of Species.” Darwin himself saw the light immediately. Morality is a manifestation of evolved “human nature.” It exists purely because the features in the brain that are responsible for that nature happened to improve the odds that the genes responsible for those features would survive and reproduce. “Good” exists as a subjective perception in the mind of individuals, and there is no way in which it can climb out of the skull of individuals and magically acquire a life of its own. Philosophers, with a few notable exceptions, have rejected that truth. That’s one of the reasons, and a big one at that, why they’re marginalized.
It was a truth they couldn’t bear to face. It’s not really that the truth made philosophy itself irrelevant. The way to the truth had been pointed out long before Darwin by philosophers of the likes of Shaftesbury, Hutcheson, and Hume. At least a part of the problem was that this truth smashed the illusion that philosophers, or anyone else for that matter, could be genuinely better, more virtuous, or more righteous in some objective sense than anyone else. They’ve been fighting the truth ever since. The futility of that fight is demonstrated by the threadbare nature of the ideas that have been used to justify it.
For example, there’s “moral realism.” It goes like this: Everyone knows that two plus two equals four. However, numbers do not exist in the material world, yet mathematical truths are real all the same. Moral truths don’t exist in the material world either. Therefore, moral truths are also real. QED. Then there’s utilitarianism, which was demolished by Westermarck with the aid of the light provided courtesy of Darwin. Its greatest proponent, John Stuart Mill, had the misfortune to write his book about it before the significance of Darwin’s great theory had time to sink in. If it had, I doubt he would ever have written it. He was too smart for that. Sam Harris’ “scientific morality” is justified mainly by bullying anyone who doesn’t go along with charges of being “immoral.”
With the aid of such stuff, modern philosophy has wandered off into the swamp. Commenter Paul was right. They need to stop concocting fancy new moral systems once and for all, and do a radical rewind, if not to Darwin, then at least to Westermarck. They’ll never regain their relevance by continuing to ignore the obvious.
Posted on November 1st, 2014 3 comments
Sometimes the best metric for public intellectuals is the short articles they write for magazines. There are page limits, so they have to get to the point. It isn’t as easy to camouflage vacuous ideas behind a smoke screen of verbiage. Take, for example, the case of Oswald Spengler. His “Decline of the West” was hailed as the inspired work of a prophet in the years following its publication in 1918. Read Spengler’s Wiki entry and you’ll see what I mean. He should have quit while he was ahead.
Fast forward to 1932, and the Great Depression was at its peak. The Decline of the West appeared to be a fait accompli. Spengler would have been well-advised to rest on his laurels. Instead, he wrote an article for The American Mercury, still edited at the time by the Sage of Baltimore, H. L. Mencken, with the reassuring title, “Our Backs are to the Wall!” It was a fine synopsis of the themes Spengler had been harping on for years, and a prophecy of doom worthy of Jeremiah himself. It was also wrong.
According to Spengler, high technology carried within itself the seeds of its own collapse. Man had dared to “revolt against nature.” Now the very machines he had created in the process were revolting against man. At the time he wrote the article he summed up the existing situation as follows:
A group of nations of Nordic blood under the leadership of British, German, French, and Americans command the situation. Their political power depends on their wealth, and their wealth consists in their industrial strength. But this in turn is bound up with the existence of coal. The Germanic peoples, in particular, are secured by what is almost a monopoly of the known coalfields…
Spengler went on to explain that,
Countries industrially poor are poor all around; they cannot support an army or wage a war; therefore they are politically impotent; and the workers in them, leaders and led alike, are objects in the economic policy of their opponents.
No doubt he would have altered this passage somewhat had he been around to witness the subsequent history of places like Vietnam, Algeria, and Cambodia. Willpower, ideology, and military genius have trumped political and economic power throughout history. Spengler simply assumed they would be ineffective against modern technology because the “Nordic” powers had not been seriously challenged in the 50 years before he wrote his book. It was a rash assumption. Even more rash were his assumptions about the early demise of modern technology. He “saw” things happening in his own times that weren’t really happening at all. For example,
The machine, by its multiplication and its refinement, is in the end defeating its own purpose. In the great cities the motor-car has by its numbers destroyed its own value, and one gets on quicker on foot. In Argentina, Java, and elsewhere the simple horse-plough of the small cultivator has shown itself economically superior to the big motor implement, and is driving the latter out. Already, in many tropical regions, the black or brown man with his primitive ways of working is a dangerous competitor to the modern plantation-technic of the white.
Unfortunately, motor cars and tractors can’t read, so they went right on multiplying without paying any attention to Spengler’s book. At least he wasn’t naïve enough to believe that modern technology would end because of the exhaustion of the coalfields. He knew that we were quite clever enough to come up with alternatives. However, in making that very assertion, he stumbled into what was perhaps the most fundamental of all his false predictions; the imminence of the “collapse of the West.”
It is, of course, nonsense to talk, as it was fashionable to do in the Nineteenth Century, of the imminent exhaustion of the coal-fields within a few centuries and of the consequences thereof – here, too, the materialistic age could not but think materially. Quite apart from the actual saving of coal by the substitution of petroleum and water-power, technical thought would not fail ere long to discover and open up still other and quite different sources of power. It is not worth while thinking ahead so far in time. For the west-European-American technology will itself have ended by then. No stupid trifle like the absence of material would be able to hold up this gigantic evolution.
Alas, “so far in time” came embarrassingly fast, with the discovery of nuclear fission a mere six years later. Be that as it may, among the reasons that this “gigantic evolution” was unstoppable was what Spengler referred to as “treason to technics.” As he put it,
Today more or less everywhere – in the Far East, India, South America, South Africa – industrial regions are in being, or coming into being, which, owing to their low scales of wages, will face us with a deadly competition. The unassailable privileges of the white races have been thrown away, squandered, betrayed.
In other words, the “treason” consisted of the white race failing to keep its secrets to itself, but bestowing them on the brown and black races. They, however, were only interested in using this technology against the original creators of the “Faustian” civilization of the West. Once the whites were defeated, they would have no further interest in it:
For the colored races, on the contrary, it is but a weapon in their fight against the Faustian civilization, a weapon like a tree from the woods that one uses as scaffolding, but discards as soon as it has served its purpose. This machine-technic will end with the Faustian civilization and one day will lie in fragments, forgotten – our railways and steamships as dead as the Roman roads and the Chinese wall, our giant cities and skyscrapers in ruins, like old Memphis and Babylon. The history of this technic is fast drawing to its inevitable close. It will be eaten up from within. When, and in what fashion, we so far know not.
Spengler was wise to include the Biblical caveat that, “…about that day or hour no one knows, not even the angels in heaven, nor the Son, but only the Father” (Matthew 24:36). However, he had too much the spirit of the “end time” Millennialists who have cropped up like clockwork every few decades for the last 2000 years, predicting the imminent end of the world, to leave it at that. Like so many other would-be prophets, his predictions were distorted by a grossly exaggerated estimate of the significance of the events of his own time. Christians, for example, have commonly assumed that reports of war, famine and pestilence in their own time are somehow qualitatively different from the war, famine and pestilence that have been a fixture of our history for the last 2000 years, and conclude that they are witnessing the signs of the end times, when, “…nation shall rise against nation, and kingdom against kingdom: and there shall be famines, and pestilences, and earthquakes, in divers places” (Matthew 24:7). In Spengler’s case, the “sign” was the Great Depression, which was at its climax when he wrote the article:
The center of gravity of production is steadily shifting away from them, especially since even the respect of the colored races for the white has been ended by the World War. This is the real and final basis of the unemployment that prevails in the white countries. It is no mere crisis, but the beginning of a catastrophe.
Of course, Marxism was in high fashion in 1932 as well. Spengler tosses it in for good measure, agreeing with Marx on the inevitability of revolution, but not on its outcome:
This world-wide mutiny threatens to put an end to the possibility of technical economic work. The leaders (bourgeoisie, ed.) may take to flight, but the led (proletariat, ed.) are lost. Their numbers are their death.
Spengler concludes with some advice, not for us, or our parents, or our grandparents, but for our great-grandparents’ generation:
Only dreamers believe that there is a way out. Optimism is cowardice… Our duty is to hold on to the lost position, without hope, without rescue, like that Roman soldier whose bones were found in front of a door in Pompeii, who, during the eruption of Vesuvius, died at his post because they forgot to relieve him. That is greatness. That is what it means to be a thoroughbred. The honorable end is the one thing that can not be taken from a man.
One must be grateful that later generations of cowardly optimists donned their rose-colored glasses in spite of Spengler, went right on using cars, tractors, and other mechanical abominations, and created a world in which yet later generations of Jeremiahs could regale us with updated predictions of the end of the world. And who can blame them? After all, eventually, at some “day or hour no one knows, not even the angels in heaven,” they are bound to get it right, if only because our sun decides to supernova. When that happens, those who are still around are bound to dust off their ancient history books, smile knowingly, and say, “See, Spengler was right after all!”