The world as I see it
  • No, All Things are Not Permissible, and All Things are Not Not Permissible

    Posted on July 9th, 2018 Helian No comments

    IMHO it is a fact that good and evil do not exist as independent, objective things.  If they do not exist, then the moral properties that depend on them, such as “permissible,” have no objective existence, either.  It follows that it is not even rational to ask the question whether something is permissible or not as an independent fact.  In other words, if there is no such thing as objective morality, then it does not follow that “everything is permissible.”  It also does not follow that “everything is not permissible.”  As far as the universe is concerned, the term “permissible” does not exist.  In other words, there is no objective reason to obey a given set of moral rules, nor is there an objective reason not to obey those rules.

    I note in passing that if the above were not true, and the conclusion that good and evil do not exist as objective things actually did imply that “everything is permissible,” as some insist, it would not alter the facts one bit.  The universe would shrug its shoulders and ask, “So what?”  If the absence of good and evil as objective things leads to conclusions that some find unpleasant, will that alter reality and magically cause them to pop into existence?  That hasn’t worked with a God, and it won’t work with objective good and evil, either.

    I just read an article by Matt McManus on the Quillette website that nicely, if unintentionally, demonstrates what kind of an intellectual morass one wades into if one insists that good and evil are real, objective things.  It’s entitled Why Should We Be Good?  The first two paragraphs include the following:

    Today we are witnessing an irrepressible and admirable pushback against the specters of ‘cultural relativism’ and moral ‘nihilism.’ …Indeed, relativism and the moral nihilism with which it is often affiliated, seems to be in retreat everywhere.  For many observers and critics, this is a wholly positive development since both have the corrosive effect of undermining ethical certainty.

    The author goes on to cite what he considers two motivations for the above, one “negative,” and one “positive.”  As he puts it,

    The negative motivation arises from moral dogmatism.  There are those who wish to dogmatically assert their own values without worrying that they may not be as universal as one might suppose… Ethical dogmatists do not want to be confronted with the possibility that it is possible to challenge their values because they often cannot provide good reasons to back them up.

    He adds that,

    The positive motivation was best expressed by Allan Bloom in his 1987 classic The Closing of the American Mind.

    Well, I wouldn’t exactly describe Bloom’s book as “positive.”  It struck me as a curmudgeonly rant about how “today’s youth” didn’t measure up to how he thought they “ought” to be.  Be that as it may, the author finally gets to the point:

    The issue I wish to explore is this:  even if we know which values are universal, why should we feel compelled to adhere to them?

    To this I would reply that there are no universal values, and since they don’t exist, they can’t be known.  This reduces the question of why we should feel compelled to adhere to them to nonsense.  In fact, what the author is doing here is outing himself as a dogmatist.  He just thinks he’s better than other dogmatists because he imagines he can “provide good reasons to back up” his personal dogmas.  It turns out his “good reasons” amount to an appeal to authority, as follows:

    Kant argued, very powerfully, that a human being’s innate practical reason begets a universal set of “moral laws” which any rational person knows they must follow.

    Good dogma, no?  After all, who can argue with Kant?  “Obscurely” would probably be a better word than “powerfully.”   Some of his sentences ran on for a page and a half, larded with turgid German philosophical jargon from start to finish.  Philosophers pique themselves on “understanding” him, but seldom manage to get much further than the categorical imperative in practice.  I suspect they’re wasting their time.  McManus assures us that Kant read Hume.  If so, he must not have comprehended what he was reading in passages such as,

    We speak not strictly and philosophically when we talk of the combat of passion and of reason.  Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

    If morality had naturally no influence on human passions and actions, ’twere in vain to take such pains to inculcate it: and nothing wou’d be more fruitless than that multitude of rules and precepts, with which all moralists abound.

    Since morals, therefore, have an influence on the actions and affections, it follows, that they cannot be deriv’d from reason; and that because reason alone, as we have already prov’d, can never have any such influence.  Morals excite passions, and produce or prevent actions.  Reason of itself is utterly impotent in this particular.  The rules of morality, therefore, are not conclusions of our reason…

    What Hume wrote above isn’t just the expression of some personal ideological idiosyncrasy, but the logical conclusion of the thought of a long line of British and Scottish philosophers.  I find his thought on morality “very powerful,” and have seen no evidence that Kant ever seriously addressed his arguments.  We learned where the emotions Hume referred to actually came from in 1859 with the publication of The Origin of Species, more than half a century after Kant’s death.  It’s beyond me how Kant could have “argued powerfully” about a “universal set of moral laws” in spite of his ignorance of the real manner in which they are “begotten.”  No matter, McManus apparently still believes, “because Kant,” that we can “know” some “universal moral law.”  He continues,

    While we might know that these “moral laws” apply universally, why should we feel compelled to obey them?

    According to McManus, the 19th century philosopher Henry Sidgwick made some “profound contributions” to answering this question, which he considered “the profoundest problem in ethics.” Not everyone thought Sidgwick was all that profound.  Westermarck dealt rather harshly with his “profound” thoughts in his The Origin and Development of the Moral Ideas.  In the rest of his article, McManus reviews the thought of several other philosophers on the subject, and finds none of them entirely to his liking.  He finally peters out with nary an answer to the question, “Why should we be good?”  In fact there is no objective answer to the question, because there is no objective good.  McManus’ “dogma with good reasons” is just as imaginary as all the “dogmas without good reasons” at which he turns up his nose.

    The philosophers are in no hurry to wade back out of this intellectual morass.  Indeed, their jobs depend on expanding it.  For those of us who prefer staying out of swamps, however, the solution to McManus’ enigma is simple enough.  Stop believing in the ghosts of objective good and evil.  Accept the fact that what we call morality exists because the innate mental traits that give rise to it themselves exist by virtue of evolution by natural selection.  Then follow that fundamental fact to its logical conclusions.  One of those conclusions is that there is nothing whatsoever objective about morality.  It is a purely subjective phenomenon.  That is simply a fact of nature.  As such, it is quite incapable of rendering “everything permissible,” or “everything not permissible.”  Furthermore, realization of that fact will not change how the questions of what is permissible and what is not permissible are answered.  Those questions will continue to be answered just as they always have been, in the subjective minds of individuals.

    Acceptance of these truths about morality will not result in “moral nihilism,” or “cultural relativity,” or the hegemony of postmodernism.  All of these things can result from our attempts to reason about what our emotions are trying to tell us, but so can moral absolutism.  On the other hand, acceptance of the truth may enable us to avoid some of the real dangers posed by our current “system” of blindly responding to moral emotions, and just as blindly imagining that the result will be “moral progress.”  For example, if morality is a manifestation of evolved behavioral traits, those traits must have been selected in times that were very different from the present.  It is highly unlikely that blindly following where our emotions seem to be leading us will have the same effect now as it did then.  In fact, those emotions might just as well be leading us over the edge of a cliff.

    If morality is a manifestation of evolved behavioral traits, then arbitrarily isolating moral behavior from the rest of our innate behavioral repertoire, sometimes referred to as human nature, can also be misleading.  For example, we have a powerful innate tendency to distinguish others in terms of ingroup and outgroup, applying different versions of morality to each.  This can delude us into seriously believing that vast numbers of the people we live with are “bad.”  In the past, we have often imagined that we must “resist” and “fight back” against these “bad” people, resulting in mayhem that has caused the death of countless millions, and misery for countless millions more.  From my own subjective point of view, it would be better to understand the innate emotional sources of such subjective fantasies, and at least attempt to find a way to avoid the danger they pose.  Perhaps one day enough people will agree with me to make a difference.  The universe doesn’t care one way or the other.

    Nihilism and chaos will not result from acceptance of the truth.  When it comes to morality, nihilism and chaos are what we have now.  I happen to be among those who would prefer some form of “moral absolutism,” even though I realize that its legitimacy must be based on the subjective desires of individuals rather than on some mirage of “objective truth.”  I would prefer living under a simple moral code, in harmony with human nature, designed to enable us to live together with a minimum of friction and a maximum of personal liberty.  No rule would be accepted without examining its innate emotional basis, what the emotions in question accomplished at the time they evolved, and whether they would still accomplish the same thing in the different environment we live in now.  Generalities about “moral progress” and “human flourishing” would be studiously ignored.

    I see no reason why the subjective nature of morality would prevent us from adopting such an “absolute morality.”  There would, of course, be no objective reason why we “should be good” according to the rules of such a system.  The reasons would be the same subjective ones that have always been the real basis for all the versions of morality our species has ever come up with.  In the first place, if the system really was in harmony with human nature, then for many of us, our “conscience” would prompt us to “do good.”  Those with a “weak conscience” who ignored the moral law, free riders if you will, would be dealt with much the same way they have always been dealt with.  They would be shamed, punished, and, if necessary, isolated from the rest of society.

    I know, we are very far from realizing this utopia, or even from accepting the simplest truths about morality and what they imply.  I’ve always been one for daydreaming, though.

  • Please, Leave Me Out of Your Philosophical Pigeonholes

    Posted on June 27th, 2018 Helian 2 comments

    Yes, I know it is human nature to categorize virtually everything. As I noted in my last post, it reduces complexity to manageable levels. When it comes to worldviews and philosophies, we categorize them into schools of thought. I hope my readers will resist the tendency to stuff me into one of these pigeonholes. For better or worse, it seems to me I don’t belong in any of them.

    The fundamental truth I defend is the non-existence of objective morality. That does not mean, however, that I belong in the postmodernist category. Postmodernists may claim that moral truths are social constructs, but that doesn’t prevent them from furiously defending their own preferred version as their “truth,” or defending the alternative preferred versions of certain fashionable identity groups as “true” for those groups. I am not a postmodernist because I reject claims by any individual or group whatsoever that they have a legitimate right to apply their moral rules to me, whether they are socially constructed or not. Postmodernists act as if they had this right to dictate to others, regardless of what they say about “moral relativity.”

    Neither does the fact that I deny the existence of objective morality mean I am a “moral nihilist.” In fact, we actually live in a state of moral nihilism and chaos today for the very reason that we insist on believing the illusion that there are objective moral truths. Human beings have an overwhelming innate tendency to believe that their idiosyncratic versions of “good” and “evil” represent “truths.” For the most part, they will continue to believe that regardless of what anyone happens to write on the subject. My personal preference would be to live in a world where such an “absolute” morality prevails. However, this “absolute” system would be constructed in full knowledge of the fact that it represented a necessary and useful expedient, and most decidedly not that it reflected objective moral truths. It would be possible to alter and amend this “absolute” system when necessary, but by a means more rational than the current method of allowing those bullies who throw the most flamboyant moralistic temper tantrums to set it up as they please. I propose such a system not because I think we “ought” to do it as a matter of objective fact, but merely because I would personally find it expedient as a means of pursuing the goals I happen to have in life, and believe that others may agree it would be expedient as far as they’re concerned as well.

    Finally, the fact that I deny the existence of objective morality most decidedly does not mean that I belong in the “error theory” category with the likes of J. L. Mackie. Mackie claimed he denied the objective existence of moral properties. However, he also claimed that we “ought” to do some things, and had a “duty” to do others. I consider this nonsense, and a complete contradiction of his claims about the non-existence of objective good and evil. I recently ran across a paper that illustrates very nicely why I would prefer to stay out of this particular pigeonhole. The paper in question was written by Prof. Bart Streumer of the University of Groningen in the Netherlands, and is entitled The Unbelievable Truth about Morality. The opening paragraph of the paper reads as follows:

    Have you ever suspected that even though we call some actions right and other actions wrong, nothing is really right or wrong? If so, there is a philosophical theory that agrees with you: the error theory. According to the error theory, moral judgments are beliefs that ascribe moral properties to actions or to people, but these properties do not exist. The error theory therefore entails that all moral judgments are false. Just as atheism says that God does not exist and that all religious beliefs are false, the error theory says that moral properties do not exist and that all moral judgments are false.

    That may seem to be a concise statement of my own beliefs regarding objective moral claims, but hold onto your hat. In what follows the author comes up with a number of highly dubious conclusions about the supposed implications of “error theory.” In the end he runs completely off the track into the same swamp we were in before, and something indistinguishable from objective morality still prevails. In closing, he triumphantly informs us of his amazing discovery that “error theory” doesn’t “undermine morality!”

    I’m not going to review the entire paper in detail. Interested readers are welcome to do that on their own. Instead I will focus on some of the things the author imagines follow from error theory. These include the notion that a “part” of error theory is “cognitivism.” A “cognitivist” is one who claims that moral judgments are “beliefs.” According to the author, there is a whole “school” of “cognitivists,” countered by another whole “school” of “non-cognitivists.” In his words,

    Opponents of cognitivism, who are known as non-cognitivists, deny that these judgments are beliefs. They instead take moral judgments to be non-cognitive attitudes, such as feelings of approval or disapproval.

    Really? Have philosophers now become that ignorant of philosophy? Whatever happened to the likes of Shaftesbury, Hutcheson, and Hume? They claimed that moral beliefs and moral “feelings of approval or disapproval” were inextricably bound together, that the former were the result of reasoning about the latter, and that moral beliefs are, in fact, impossible without these “feelings.” The very idea that human beings are capable of blindly responding to emotions without forming beliefs about what they imply is referred to by behavioral scientists as “genetic determinism,” and the term “genetic determinist” itself is used merely as a pejorative to describe someone who believes in an impossible fantasy. If we are to credit the author, such specimens actually exist somewhere in the dank halls of academia.

    It would seem, then, that I can’t be an “error theorist,” because I find this false dichotomy between “cognitivism” and “non-cognitivism” absurd, regardless of the author’s claims about how fashionable it is among the philosophers. Not only does the author fail to mention the work of important philosophers who would have deemed this dichotomy nonsense, but he fails to mention any connection between morality and evolution by natural selection. Is he ignorant of a discipline known as evolutionary psychology? Is he completely oblivious to what the neuroscientists have been telling us lately? If “error theory” rejects the objective existence of moral properties, shouldn’t a paper on the subject at least discuss in passing what reasons there might be for the nearly universal belief in such imaginary objects?  Natural selection is certainly among the more plausible explanations.

    In what follows, we finally discover the connection between this remarkable dichotomy and the “unbelievable truth” mentioned in the article’s title. According to the paper, an objection to error theory is as follows:

    1. If the error theory is true, all moral judgments are false.
    2. It is wrong to torture babies for fun.
    3. So the judgment that it is wrong to torture babies for fun is true.
    4. So at least one moral judgment is true.
    5. So the error theory is false.

    The author allows that this is a tough one for error theorists. In his words,

    …this objection is hard to answer for error theorists. It is overwhelmingly plausible that it is wrong to torture babies for fun. Error theorists could deny that this entails that the judgment that it is wrong to torture babies for fun is true. But they can only deny this if they endorse non-cognitivism about this judgment, and non-cognitivism conflicts with the error theory. It therefore seems that error theorists must answer this objection by denying that it is wrong to torture babies for fun. But then we should ask what is more plausible: that the error theory is true, or that it is wrong to torture babies for fun. This objection therefore seems to show that we should reject the error theory.

    Now do you see where the false dichotomy comes in? Why on earth should it be “overwhelmingly plausible” that it is wrong to torture babies for fun, not merely in the opinion of this or that individual, but as a matter of objective fact? Where is the basis for this “fact?” How did that basis acquire an independent and legitimate authority to dictate to human beings what they ought and ought not to do? How did it come into existence to begin with? Unless one can answer these questions, there is no reason to believe in the existence of objective moral truths, and therefore no rational explanation for the conclusion that any moral claim whatsoever is “overwhelmingly plausible.” It makes as much sense as the claim that there must be unicorns because one really, really believes deep down that it is “overwhelmingly plausible” that there are unicorns. It is only “overwhelmingly plausible” that it is wrong to torture babies because most of us have a very powerful “feeling” that it is wrong. But (aha, oho!) “error theorists” are prohibited from referencing that feeling in denying this “truth” because that would be “non-cognitivism” and they can’t be “non-cognitivists!”

    The rest of the paper goes something like this: Error theory is true. However, if error theory is true, then the claim that it is wrong to torture babies is false, and that is unbelievable. Therefore, error theory is both true and unbelievable. The conclusion:  “Our inability to believe this general error theory therefore prevents it from undermining morality.”  Whatever. One thing that the paper very definitely shows is that I am not an “error theorist.”

    What the “tortured babies” argument really amounts to is the claim that truth can be manufactured out of the vacuum by effective manipulation of moral emotions. It’s just another version of the similar arguments Sam Harris uses to prop up his equally bogus claim that there are objective moral truths. I note in passing the author’s claim that J. L. Mackie was the first philosopher to defend the error theory. That may be true as far as the description of error theory presented in the paper is concerned. However, a far more coherent argument to the effect that objective moral properties do not exist was published by Edvard Westermarck more than 70 years earlier. Perhaps it would be helpful if philosophers would at least reference his work in future discussions of error theory and related topics instead of continuing to ignore him.

    But to return to the moral of the story, not only am I not a postmodernist, a moral nihilist, or a moral relativist, I am not an “error theorist” either. I certainly believe that there are facts about the universe, and that they will stubbornly remain facts regardless of whether any conscious being chooses to believe they are facts or not. I simply don’t believe that these facts include objective moral truths. Apparently, at the risk of overdramatizing myself, I must conclude that I represent a church of one. I hope not but, in any case, when it comes to pigeonholing, please don’t round me up as one of the “usual suspects.”

  • On the “Immorality” of Einstein

    Posted on June 24th, 2018 Helian 2 comments

    Morality exists because of “human nature.”  In other words, it is a manifestation of innate behavioral traits that themselves exist by virtue of evolution by natural selection.  It follows that morality has no goal, no purpose, and no function, because in order to have those qualities it must necessarily have been created by some entity capable of intending a goal, purpose, or function for it.  There was no such entity.  In human beings, the traits in question spawn the whimsical illusion that purely imaginary things that exist only in the subjective minds of individuals, such as good, evil, rights, values, etc., actually exist as independent objects.  The belief in these mirages is extremely powerful.  Occasionally a philosopher here and there will assert a belief in “moral relativity,” but in the end one always finds them, to quote a pithy Biblical phrase, returning like dogs to their own vomit.  After all their fine phrases, one finds them picking sides, sagely informing us that some individual or group is “good,” and some other ones “evil,” and that we “ought” to do one thing and “ought not” to do another.

    What does all this have to do with Einstein?  Well, recently he was accused of expressing impure thoughts in some correspondence he imagined would be private.  The nutshell version can be found in a recent article in the Guardian entitled, Einstein’s travel diaries reveal ‘shocking’ xenophobia.  Among other things, Einstein wrote that the Chinese he saw were “industrious, filthy, obtuse people,” and “even the children are spiritless and look obtuse.”  He, “…noticed how little difference there is between men and women,” adding, “I don’t understand what kind of fatal attraction Chinese women possess which enthralls the corresponding men to such an extent that they are incapable of defending themselves against the formidable blessing of offspring.”  He was more approving of the Japanese, noting that they were “unostentatious, decent, altogether very appealing,” and that he found “Pure souls as nowhere else among people.  One has to love and admire this country.”

    It goes without saying that only a moron could seriously find such comments “shocking” in the context of their time.  In the first place, Einstein was categorizing people into groups, as all human beings do, because we lack the mental capacity to store individual vignettes of all the billions of people on the planet.  He then pointed out certain things about these groups that he honestly imagined to be true.  He nowhere expressed hatred of any of the people he described, nor did he anywhere claim that the traits he described were “innate” or had a “biological origin,” as falsely claimed by the author of the article.  He associated them with the Chinese “race,” but might just as easily have been describing cultural characteristics at a given time as anything innate.  Furthermore, “race” at the time that Einstein wrote could be understood quite differently from the way it is now.  In the 19th century, for example, the British and Afrikaners in South Africa were commonly described as different “races.”  Today we have learned some hard lessons about the potential harm of broadly attributing negative qualities to entire populations, but in the context of the time they were written, ideas similar to the ones expressed by Einstein were entirely commonplace.

    In light of the above, consider the public response to the recent revelations about the content of Einstein’s private papers.  It is a testimony to the gross absurdity of human moral behavior in the context of an environment radically different from the one in which it evolved.  Einstein is actually accused by some of being a “racist,” a “xenophobe,” a “misogynist,” or, in short, a “bad” man.  Admirers of Einstein have responded by citing all the good-sounding reasons for the claim that Einstein was actually a “good” man.  These responses are equivalent to debating whether Einstein was “really a green unicorn,” or “really a blue unicorn.”  The problem with that is, of course, that there are no unicorns to begin with.  The same is true of objective morality.  It doesn’t exist.  Einstein wasn’t “good,” nor was he “bad,” because these categories do not exist as independent objects.  They are subjective, and exist only in our imaginations.  They are imagined to be real because there was a selective advantage to imagining them to be real in a given environment.  That environment no longer exists.  These are simple statements of fact.

    As so often happens in such cases, one side accuses the other of “moral relativity.”  In his response to this story at the Instapundit website, for example, Ed Driscoll wrote, “A century later, is the age of moral relativity about to devour the legacy of the man who invented the real theory of relativity?”  The problem here is most definitely not moral relativity.  In fact, it is the opposite – the illusion of objective morality.  The people attacking Einstein are moral absolutists.  If that were not true, what could possibly be the point of attacking him?  A genuine moral relativist would simply conclude that Einstein’s personal version of morality was different from theirs, and leave it at that.  That is not what is happening here.  Instead, Einstein is accused of violating “moral laws,” the most fashionable and up-to-date versions of which were concocted long after he was in his grave.  In spite of that, these “moral laws” are treated as if they represented objective facts.  Einstein was “bad” for violating them even though he had no way of knowing that these “moral laws” would exist nearly a century after he wrote his journals.  Is it not obvious that judging Einstein in this way would be utterly irrational unless these newly minted “moral laws” were deemed to be absolute, with a magical existence of their own, independent of what goes on in the subjective minds of individuals?

    Consider what is actually meant by this accusation of “racism.”  Normally a racist is defined as one who considers others to be innately evil or inferior by virtue of their race, and who hates and despises them by virtue of that fact.  It is simply one manifestation of the universal human tendency to perceive others in terms of ingroups and outgroups.  When this type of behavior evolved, there was no ambiguity about the identity of the outgroup.  It was simply the next tribe over.  The perception of “outgroup” could therefore be determined by very subtle differences without affecting the selective value of the behavior.  Now, however, with our vastly increased ability to travel long distances and communicate with others all over the world, we are quite capable of identifying others as “outgroup” whom we never would have heard of or come in contact with in our hunter-gatherer days.  As a result, the behavior has become “dysfunctional.”  It no longer accomplishes the same thing it did when it evolved.  Racism is merely one of the many manifestations of this now dysfunctional trait that has been determined by hard experience to be harmful in the environment we live in today.  As a result, it has been deemed “bad.”  Without understanding the underlying innate traits that give rise to the behavior, however, this attempt to patch up human moral behavior is of very limited value.

    The above becomes obvious if we examine the behavior of those who are in the habit of accusing others of racism.  They are hardly immune to similar manifestations of bigotry.  They simply define their outgroups based on criteria other than race.  The outgroup is always there, and it is hated and despised just the same.  Indeed, they may hate and despise their outgroups a great deal more violently and irrationally than those they accuse of racism ever did, but are oblivious to the possibility that their behavior may be similarly “bad” merely because they perceive their outgroup in terms of ideology, for example, rather than race.  Extreme examples of hatred of outgroups defined by ideology are easy to find on the Internet.  For example,

    • Actor Jim Carrey is quoted as saying, “40 percent of the U.S. doesn’t care if Trump deports people and kidnaps their babies as political hostages.”
    • Actor Peter Fonda suggested to his followers on Twitter that they should “rip Barron Trump from his mother’s arms and put him in a cage with pedophiles.”  The brother of Jane Fonda also called for violence against Secretary of Homeland Security Kirstjen Nielsen and called White House Press Secretary Sarah Sanders a “c**t.”
    • An unidentified FBI agent is quoted as saying in a government report that, “Trump’s supporters are all poor to middle class, uneducated, lazy POS.”
    • According to New York Times editorialist Roxanne Gay, “Having a major character on a prominent television show as a Trump supporter normalizes racism and misogyny and xenophobia.”

    Such alternative forms of bigotry are often more harmful than garden variety racism itself merely by virtue of the fact that they have not yet been included in one of the forms of outgroup identification that has already been generally recognized as “bad.”  The underlying behavior responsible for the extreme hatred typified by the above statements won’t change, and if we whack the racism mole, or the anti-Semitism mole, or the homophobia mole, other moles will pop up to take their places.  The Carreys and Fondas and Roxanne Gays of the world will continue to hate their ideological outgroup as furiously as ever, until it occurs to someone to assign an “ism” to their idiosyncratic version of outgroup hatred, and people finally realize that they are no less bigoted than the “racists” they delight in hating.  Then a new “mole” will pop up with a new, improved version of outgroup hatred.  We will never control the underlying behavior and minimize the harm it does until we understand the innate reasons it exists to begin with.  In other words, it won’t go away until we learn to understand ourselves.

    And what of Einstein, not to mention the likes of Columbus, Washington, Madison, and Jefferson?  True, these men did more for the welfare of all mankind than any combination of their Social Justice Warrior accusers you could come up with, but for the time being, admiring them is forbidden.  After all, these men were “bad.”

  • How a “Study” Repaired History and the Evolutionary Psychologists Lived Happily Ever After

    Posted on June 12th, 2018 Helian No comments

    It’s a bit of a stretch to claim that those who have asserted the existence and importance of human nature have never experienced ideological bias. If that claim is true, then the Blank Slate debacle could never have happened. However, we know that it happened, based not only on the testimony of those who saw it for the ideologically motivated debasement of science that it was, such as Steven Pinker and Carl Degler, but of the ideological zealots responsible for it themselves, such as Hamilton Cravens, who portrayed it as The Triumph of Evolution. The idea that the Blank Slaters were “unbiased” is absurd on the face of it, and can be immediately debunked by simply counting the number of times they accused their opponents of being “racists,” “fascists,” etc., in books such as Richard Lewontin’s Not in Our Genes, and Ashley Montagu’s Man and Aggression. More recently, the discipline of evolutionary psychology has experienced many similar attacks, as detailed, for example, by Robert Kurzban in an article entitled, Alas poor evolutionary psychology.

    The reasons for this bias have never been a mystery, either to the Blank Slaters and their latter-day leftist descendants, or to evolutionary psychologists and other proponents of the importance of human nature. Leftist ideology requires not only that human beings be equal before the law, but that the menagerie of human identity groups they have become obsessed with over the years actually be equal, in intelligence, creativity, degree of “civilization,” and every other conceivable measure of human achievement. On top of that, they must be “malleable,” and “plastic,” and therefore perfectly adaptable to whatever revolutionary rearrangement in society happened to be in fashion. The existence and importance of human nature has always been perceived as a threat to all these romantic mirages, as indeed it is. Hence the obvious and seemingly indisputable bias.

    Enter Jeffrey Winking of the Department of Anthropology at Texas A&M, who assures us that it’s all a big mistake, and there’s really no bias at all! Not only that, but he “proves” it with a “study” in a paper entitled, Exploring the Great Schism in the Social Sciences, that recently appeared in the journal Evolutionary Psychology. We must assume that, in spite of his background in anthropology, Winking has never heard of a man named Napoleon Chagnon, or run across an article entitled Darkness’s Descent on the American Anthropological Association, by Alice Dreger.

    Winking begins his article by noting that “The nature-nurture debate is one that biologists often dismiss as a false dichotomy,” but adds, “However, such dismissiveness belies the long-standing debate that is unmistakable throughout the biological and social sciences concerning the role of biological influences in the development of psychological and behavioral traits in humans.” I agree entirely. One can’t simply hand-wave away the Blank Slate affair and a century of bitter ideological debate by turning up one’s nose and asserting the term isn’t helpful from a purely scientific point of view.

    We also find that Winking isn’t completely oblivious to examples of bias on the “nature” side of the debate. He cites the Harvard study group which “evaluated the merits of sociobiology, and which included intellectual giants like Stephen Jay Gould and Richard Lewontin.” I am content to let history judge whether Gould and Lewontin were really “intellectual giants.” Regardless, if Winking actually read these “evaluations,” he cannot have failed to notice that they contained vicious ad hominem attacks on E. O. Wilson and others that it is extremely difficult to construe as anything but biased. Winking goes on to note similar instances of bias by other authors in various disciplines, such as,

    Many researchers use [evolutionary approaches to the study of international relations] to justify the status quo in the guise of science.

    The totality [of sociobiology and evolutionary psychology] is a myth of origin that is compelling precisely because it resonates strongly with Euro American presuppositions about the nature of the world.

    …in the social sciences (with the exception of primatology and psychology) sociobiology appeals most to right-wing social scientists.

    These are certainly compelling examples of bias. Now, however, Winking attempts to demonstrate that those who point out the bias, and correctly interpret the reasons for it, are just as biased themselves. As he puts it,

    Conversely, those who favor biological approaches have argued that those on the other side are rendered incapable of objective assessment by their ideological promotion of equality. They are alleged to erroneously reject evidence of biological influences because such evidence suggests that social outcomes are partially explained by biology, and this might inhibit the realization of equality. Their critiques of biological approaches are therefore often blithely dismissed as examples of the moralistic/naturalistic fallacy. This line of reasoning is exemplified in the following quote by biologist Jerry Coyne:

    If you can read the [major Evolutionary Psychology review paper] and still dismiss the entire field as worthless, or as a mere attempt to justify scientists’ social prejudices, then I’d suggest your opinions are based more on ideology than judicious scientific inquiry.

    I can’t imagine what Winking finds “blithe” about that statement! Is it really “blithe” to so much as suggest that people who dismiss entire fields of science as worthless may be ideologically motivated? I note in passing that Coyne must have thought long and hard about that statement, because his Ph.D. advisor was none other than Richard Lewontin, whom he still honors and admires!  Add to that the fact that Coyne is about as far as you can imagine from “right wing,” as anyone can see by simply visiting his Why Evolution is True website, and the notion that he is being “blithe” here is ludicrous. Winking’s other examples of “blitheness” are similarly dubious, including,

    For critics, the heart of the intellectual problem remains an ideological adherence to the increasingly implausible view that human behavior is strictly determined by socialization… Should [social] hierarchies result strictly from culture, then the possibilities for an egalitarian future were seen to be as open and boundless as our ever-malleable brains might imagine.

    Like the Church, a number of contemporary thinkers have also grounded their moral and political views in scientific assumptions about… human nature, specifically that there isn’t one.

    Unlike the “comparable” statements by the Blank Slaters, these statements neither accuse those who deny the existence of human nature of being Nazis, nor is evidence lacking to back them up.  On the contrary, one could cite a mountain of evidence to back them up supplied by the Blank Slaters themselves.  Winking soon supplies us with the reason for this strained attempt to establish “moral equivalence” between “nature” and “nurture.”  It appears in his “hypothesis,” as follows:

    It is entirely possible that confirmation bias plays no role in driving disagreement and that the overarching debate in academia is driven by sincere disagreements concerning the inferential value of the research designs informing the debate.

    Wait a minute!  Don’t roll your eyes like that!  Winking has a “study” to back up this hypothesis.  Let me explain it to you.  He invented some “mock results” of studies which purported to establish, for example, the increased prevalence of an allele associated with “appetitive aggression” in populations with African ancestry.  Subtle, no?  Then he used Mechanical Turk and social media to come up with a sample of 365 people with Masters degrees or Ph.D.’s for a survey on what they thought of the “inferential power” of the fake data.  Another sample of 71 were scraped together for another survey on “research design.”  In the larger sample, 307 described themselves as either only “somewhat” on the “nature” side, or “somewhat” on the “nurture” side.  Only 57 claimed they leaned strongly one way or the other.  The triumphant results of the study included, for example, that,

    Participants’ perceptions of inferential value did not vary by the degree to which results supported a particular ideology, suggesting that ideological confirmation bias is not affecting participant perceptions of inferential value.

    Seriously?  Even the author admits that the statistical power of his “study” is low because of the small sample sizes.  However, statistical power only applies where the samples are truly random, meaning, in this case, where the participants are either unequivocally on the “nature” or the “nurture” side.  That is hardly the case.  Mechanical Turk samples, for example, are biased towards a younger and more liberal demographic.  Most of the participants were on the fence between nature and nurture.  In other words, there’s no telling what their true opinions were, even if they were honest about them.  Even the most extreme Blank Slaters admitted that nature plays a significant role in such bodily functions as urinating, defecating, and breathing, and so could have easily described themselves as “somewhat bioist.”  Perhaps most importantly, any high school student could have easily seen what this “study” was about.  There is no doubt whatsoever that holders of Masters and Doctors degrees in related disciplines a) had no trouble inferring what the study was about, and b) had an interest in making sure that the results demonstrated that they were “unbiased.”  In other words, we’re not exactly talking “double blind” here.

    I think the author was well aware that most readers would have no trouble detecting the blatant shortcomings of his “study.”  Apparently to ward off ridicule he wrote,

    Regardless of one’s position, it is important to remind scholars that if they believe a group of intelligent and informed academics could be so unknowingly blinded by ideology that they wholeheartedly subscribe to an unquestionably erroneous interpretation of an entire body of research, then they must acknowledge they themselves are equally as capable of being so misguided.

    Kind of reminds you of the curse over King Tut’s tomb, doesn’t it?  “May those who question my study be damned to dwell among the misguided forever!”  Sorry, my dear Winking, but “a group of intelligent and informed academics” not only could, but were “so unknowingly blinded by ideology that they wholeheartedly subscribed to an unquestionably erroneous interpretation of an entire body of research.”  It was called the Blank Slate, and it derailed the behavioral sciences for more than half a century.  That’s what Pinker’s book was about.  That’s what Degler’s book was about, and yes, that’s even what Cravens’ book was about.  They all did an excellent job of documenting the debacle.  I suggest you read them.

    Or not.  You could decide to believe your study instead.  I have to admit, it would have its advantages.  History would be “fixed,” the lions would lie down with the lambs, and the evolutionary psychologists would live happily ever after.

  • On the Gleichschaltung of Evolutionary Psychology

    Posted on June 11th, 2018 Helian No comments

    When Robert Ardrey began his debunking of the ideologically motivated dogmas that passed for the “science” of human behavior in 1961 with the publication of his first book, African Genesis, he knew perfectly well what was at stake.  By that time what we now know as the Blank Slate orthodoxy had derailed any serious attempt by our species to achieve self-understanding for upwards of three decades.  This debacle in the behavioral sciences paralyzed any serious attempt to understand the roots of human warfare and aggression, the sources of racism, anti-Semitism, religious bigotry, and the myriad other manifestations of our innate tendency to perceive others in terms of ingroups and outgroups, the nature of human territorialism and status-seeking behavior, and the wellsprings of human morality itself.  A bit later, E. O. Wilson summed up our predicament as follows:

    Humanity today is like a waking dreamer, caught between the fantasies of sleep and the chaos of the real world.  The mind seeks but cannot find the precise place and hour.  We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology.  We thrash about.  We are terribly confused about the mere fact of our existence, and a danger to ourselves and the rest of life.

    In the end, the Blank Slate collapsed under the weight of its own absurdity, in spite of the now-familiar attempts to silence its opponents by vilification rather than logical argument.  The science of evolutionary psychology emerged based explicitly on acceptance of the reality and importance of innate human behavioral traits.  However, the ideological trends that resulted in the Blank Slate disaster to begin with haven’t disappeared.  On the contrary, they have achieved nearly unchallenged control of the social means of communication, including the entertainment industry, the “mainstream” news media, Internet monopolies such as Facebook, Google and Twitter, and, perhaps most importantly, academia.  There an ingroup defined by ideology has emerged that has always viewed the new science with a jaundiced eye.  By its very nature it challenges their assumptions of moral superiority, their cherished myths about the nature of human beings, and the viability of the various utopias they have always enjoyed concocting for the rest of us.  As Marx might have put it, this clash of thesis and antithesis has led to a synthesis in evolutionary psychology that might be described as creeping Gleichschaltung.  In other words, it is undergoing a slow process of getting “in step” with the controlling ideology.  It no longer seriously challenges the dogmas of that ideology, and the “studies” emerging from the field are increasingly, if not yet exclusively, limited to subjects that are deemed ideologically “benign.”  As a result, when it comes to addressing issues that are of real importance in terms of the survival and welfare of our species, the science of evolutionary psychology has become largely irrelevant.

    Consider, for example, the sort of articles that one typically finds in the relevant journals.  In the last four issues of Evolutionary Behavioral Sciences they have addressed such subjects as “Committed romantic relationships,” “Long-term romantic relationships,” “The effect of predictable early childhood environments on sociosexuality in early adulthood,” “Daily relationship quality in same-sex couples,” “Modern-day female preferences for resources and provisioning by long-term mates,” “Behavioral reactions to emotional and sexual infidelity: mate abandonment versus mate retention,” and “An evolutionary perspective on orgasm.”  Peering through the last four issues of Evolutionary Psychology Journal we find, “Mating goals moderate power’s effect on conspicuous consumption among women,” “In-law preferences in China: What parents look for in the parents of their children’s mates,” “Endorsement of social and personal values predicts the desirability of men and women as long-term partners,” “Adaptive memory: remembering potential mates,” “Passion, relational mobility, and proof of commitment,” “Do men produce high quality ejaculates when primed with thoughts of partner infidelity?” and “Displaying red and black on a first date: A field study using the ‘First Dates’ television series.”

    All very interesting stuff, I’m sure, but the last time I checked humanity wasn’t faced with an existential threat due to cluelessness about the mechanics of reproduction.  Articles that might actually bear on our chances of avoiding self-destruction, on the other hand, are few and far between.  In short, evolutionary psychology has been effectively neutered.  Ostensibly, its only remaining purpose is to pad the curricula vitae of the professoriat in the publish or perish world of academia.

    Does it really matter?  Probably not much.  The claims of any branch of psychology to be a genuine science have always been rather tenuous, and must remain so as long as our knowledge of how the mind works and how consciousness can exist remains so limited.  Real knowledge of how the brain gives rise to innate behavioral predispositions, and how they are perceived and interpreted by our “rational” consciousness is far more likely to be forthcoming from fields like neuroscience, genetics, and evolutionary biology than evolutionary psychology.  Meanwhile, we are free of the Blank Slate straitjacket, at least temporarily.  We must no longer endure the sight of the court jesters of the Blank Slate striking heroic poses as paragons of “science,” and uttering cringeworthy imbecilities that are taken perfectly seriously by a fawning mass media.  Consider, for example, the following gems from clown-in-chief Ashley Montagu:

    All the field observers agree that these creatures (chimpanzees and other great apes) are amiable and quite unaggressive, and there is not the least reason to suppose that man’s pre-human primate ancestors were in any way different.

    The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.

    …man is man because he has no instincts, because everything he is and has become he has learned, acquired, from his culture, from the man-made part of the environment, from other human beings.

    In fact, I also think it very doubtful that any of the great apes have any instincts.  On the contrary, it seems that as social animals they must learn from others everything they come to know and do.  Their capacities for learning are simply more limited than those of Homo sapiens.

    In his heyday Montagu could rave on like that nonstop, and be taken perfectly seriously, not only by the media, but by the vast majority of the “scientists” in the behavioral disciplines.  Anyone who begged to differ was shouted down as a racist and a fascist.  We can take heart in the fact that we’ve made at least some progress since then.  Today one finds articles about human “instincts” in the popular media, and even academic journals, as if the subject had never been the least bit controversial.  True, the same “progressives” who brought us the Blank Slate now have evolutionary psychology firmly in hand, and are keeping it on a very short leash.  For all that, one can now at least study the subject of innate human behavior without fear that undue interest in the subject is likely to bring one’s career to an abrupt end.  Who knows?  With concurrent advances in our knowledge of the actual physics of the mind and consciousness, we may eventually begin to understand ourselves.

  • Morality and the Floundering Philosophers

    Posted on May 26th, 2018 Helian No comments

    In my last post I noted the similarities between belief in objective morality, or the existence of “moral truths,” and traditional religious beliefs. Both posit the existence of things without evidence, with no account of what these things are made of (assuming that they are not things that are made of nothing), and with no plausible explanation of how these things themselves came into existence or why their existence is necessary. In both cases one can cite many reasons why the believers in these nonexistent things want to believe in them. In both cases, for example, the livelihood of myriads of “experts” depends on maintaining the charade. Philosophers are no different from priests and theologians in this respect, but their problem is even bigger. If Darwin gave the theologians a cold, he gave the philosophers pneumonia. Not long after he published his great theory it became clear, not only to him, but to thousands of others, that morality exists because the behavioral traits which give rise to it evolved. The Finnish philosopher Edvard Westermarck formalized these rather obvious conclusions in his The Origin and Development of the Moral Ideas (1906) and Ethical Relativity (1932). At that point, belief in the imaginary entities known as “moral truths” became entirely superfluous. Philosophers have been floundering behind their curtains ever since, trying desperately to maintain the illusion.

    An excellent example of the futility of their efforts may be found online in the Stanford Encyclopedia of Philosophy in an entry entitled Morality and Evolutionary Biology. The most recent version was published in 2014.  It’s rather long, but to better understand what follows it would be best if you endured the pain of wading through it.  However, in a nutshell, it seeks to demonstrate that, even if there is some connection between evolution and morality, it’s no challenge to the existence of “moral truths,” which we are to believe can be detected by well-trained philosophers via “reason” and “intuition.”  Quaintly enough, the earliest source given for a biological explanation of morality is E. O. Wilson.  Apparently the Blank Slate catastrophe is as much a bugaboo for philosophers as for scientists.  Evidently it’s too indelicate for either of them to mention that the behavioral sciences were completely derailed for upwards of 50 years by an ideologically driven orthodoxy.  In fact, a great many highly intelligent scientists and philosophers wrote a great deal more than Wilson about the connection between biology and morality before they were silenced by the high priests of the Blank Slate.  Even during the Blank Slate men like Sir Arthur Keith had important things to say about the biological roots of morality.  Robert Ardrey, by far the single most influential individual in smashing the Blank Slate hegemony, addressed the subject at length long before Wilson, as did thinkers like Konrad Lorenz and Niko Tinbergen.  Perhaps if its authors expect to be taken seriously, this “Encyclopedia” should at least set the historical record straight.

    It’s already evident in the Overview section that the author will be running with some dubious assumptions.  For example, he speaks of “morality understood as a set of empirical phenomena to be explained,” and the “very different sets of questions and projects pursued by philosophers when they inquire into the nature and source of morality,” as if they were examples of the non-overlapping magisteria once invoked by Stephen Jay Gould. In fact, if one “understands the empirical phenomena” of morality, then the problem of the “nature and source of morality” is hardly “non-overlapping.”  Indeed, it solves itself.  The suggestion that they are non-overlapping depends on the assumption that “moral truth” exists in a realm of its own.  A bit later the author confirms he is making that assumption as follows:

    Moral philosophers tend to focus on questions about the justification of moral claims, the existence and grounds of moral truths, and what morality requires of us.  These are very different from the empirical questions pursued by the sciences, but how we answer each set of questions may have implications for how we should answer the other.

    He allows that philosophy and the sciences must inform each other on these “distinct” issues.  In fact, neither philosophy nor the sciences can have anything useful to say about these questions, other than to point out that they relate to imaginary things.  “Objects” in the guise of “justification of moral claims,” “grounds of moral truths,” and the “requirements of morality” exist only in fantasy.  The whole burden of the article is to maintain that fantasy, and insist that the mirage is real.  We are supposed to be able to detect that the mirages are real by thinking really hard until we “grasp moral truths,” and “gain moral knowledge.”  It is never explained what kind of a reasoning process leads to “truths” and “knowledge” about things that don’t exist.  Consider, for example, the following from the article:

    …a significant amount of moral judgment and behavior may be the result of gaining moral knowledge, rather than just reflecting the causal conditioning of evolution.  This might apply even to universally held moral beliefs or distinctions, which are often cited as evidence of an evolved “universal moral grammar.”  For example, people everywhere and from a very young age distinguish between violations of merely conventional norms and violations of norms involving harm, and they are strongly disposed to respond to suffering with concern.  But even if this partly reflects evolved psychological mechanisms or “modules” governing social sentiments and responses, much of it may also be the result of human intelligence grasping (under varying cultural conditions) genuine morally relevant distinctions or facts – such as the difference between the normative force that attends harm and that which attends mere violations of convention.

    It’s amusing to occasionally substitute “the flying spaghetti monster” or “the great green grasshopper god” for the author’s “moral truths.”  The “proofs” of their existence work just as well.  In the above, he is simply assuming the existence of “morally relevant distinctions,” and further assuming that they can be grasped and understood logically.  Such assumptions fly in the face of the work of many philosophers who demonstrated that moral judgments are always grounded in emotions, sometimes referred to by earlier authors as “sentiments,” or “passions,” and that it is therefore impossible to arrive at moral truths through reason alone.  Assuming some undergraduate didn’t write the article, one must assume the author had at least a passing familiarity with some of these people.  The Earl of Shaftesbury, for example, demonstrated the decisive role of “natural affections” as the origins of moral judgment in his Inquiry Concerning Virtue or Merit (1699), even noting in that early work the similarities between humans and the higher animals in that regard.  Francis Hutcheson very convincingly demonstrated the impotence of reason alone in detecting moral truths, and the essential role of “instincts and affections” as the origin of all moral judgment in his An Essay on the Nature and Conduct of the Passions and Affections (1728).  Hutcheson thought that God was the source of these passions and affections.  It remained for David Hume to present similar arguments on a secular basis in his A Treatise of Human Nature (1740).

    The author prefers to ignore these earlier philosophers, focusing instead on the work of Jonathan Haidt, who has also insisted on the role of emotions in shaping moral judgment.  Here I must impose on the reader’s patience with a long quote to demonstrate the type of “logic” we’re dealing with.  According to the author,

    There are also important philosophical worries about the methodologies by which Haidt comes to his deflationary conclusions about the role played by reasoning in ordinary people’s moral judgments.

    To take just one example, Haidt cites a study where people made negative moral judgments in response to “actions that were offensive yet harmless, such as…cleaning one’s toilet with the national flag.” People had negative emotional reactions to these things and judged them to be wrong, despite the fact that they did not cause any harms to anyone; that is, “affective reactions were good predictors of judgment, whereas perceptions of harmfulness were not” (Haidt 2001, 817). He takes this to support the conclusion that people’s moral judgments in these cases are based on gut feelings and merely rationalized, since the actions, being harmless, don’t actually warrant such negative moral judgments. But such a conclusion would be supported only if all the subjects in the experiment were consequentialists, specifically believing that only harmful consequences are relevant to moral wrongness. If they are not, and believe—perhaps quite rightly (though it doesn’t matter for the present point what the truth is here)—that there are other factors that can make an action wrong, then their judgments may be perfectly appropriate despite the lack of harmful consequences.

    This is in fact entirely plausible in the cases studied: most people think that it is inherently disrespectful, and hence wrong, to clean a toilet with their nation’s flag, quite apart from the fact that it doesn’t hurt anyone; so the fact that their moral judgment lines up with their emotions but not with a belief that there will be harmful consequences does not show (or even suggest) that the moral judgment is merely caused by emotions or gut reactions. Nor is it surprising that people have trouble articulating their reasons when they find an action intrinsically inappropriate, as by being disrespectful (as opposed to being instrumentally bad, which is much easier to explain).

    Here one can but roll one’s eyes.  It doesn’t matter a bit whether the subjects are consequentialists or not.  Haidt’s point is that logical arguments will always break down at some point, whether they are based on harm or not, because moral judgments are grounded in emotions.  Harm plays a purely ancillary role.  One could just as easily ask why the action in question is considered disrespectful, and the chain of logical reasons would break down just as surely.  Whoever wrote the article must know what Haidt is really saying, because he refers explicitly to the ideas of Hume in the same book.  Absent the alternative that the author simply doesn’t know what he’s talking about, we must conclude that he is deliberately misrepresenting what Haidt was trying to say.

    One of the author’s favorite conceits is that one can apply “autonomous applications of human intelligence,” meaning applications free of emotional bias, to the discovery of “moral truths” in the same way those logical faculties are applied in such fields as algebraic topology, quantum field theory, population biology, etc.  In his words,

    We assume in general that people are capable of significant autonomy in their thinking, in the following sense:

    Autonomy Assumption: people have, to greater or lesser degrees, a capacity for reasoning that follows autonomous standards appropriate to the subjects in question, rather than in slavish service to evolutionarily given instincts merely filtered through cultural forms or applied in novel environments. Such reflection, reasoning, judgment and resulting behavior seem to be autonomous in the sense that they involve exercises of thought that are not themselves significantly shaped by specific evolutionarily given tendencies, but instead follow independent norms appropriate to the pursuits in question (Nagel 1979).

    This assumption seems hard to deny in the face of such abstract pursuits as algebraic topology, quantum field theory, population biology, modal metaphysics, or twelve-tone musical composition, all of which seem transparently to involve precisely such autonomous applications of human intelligence.

    This, of course, leads up to the argument that one can apply this “autonomy assumption” to moral judgment as well.  The problem is that, in the other fields mentioned, one actually has something to reason about.  In mathematics, for example, one starts with a collection of axioms that are simply accepted as true, without worrying about whether they are “really” true or not.  In physics, there are observables that one can measure and record as a check on whether one’s “autonomous application of intelligence” was warranted or not.  In other words, one has physical evidence.  The same goes for the other subjects mentioned.  In each case, one is reasoning about something that actually exists.  In the case of morality, however, “autonomous intelligence” is being applied to a phantom.  Again, the same arguments are just as strong if one applies them to grasshopper gods.  “Autonomous intelligence” is useless if it is “applied” to something that doesn’t exist.  You can “reflect” all you want about the grasshopper god, but he will still stubbornly refuse to pop into existence.  The exact nature of the recondite logical gymnastics required to apply “autonomous intelligence” in this way is never explained.  Perhaps a Ph.D. in philosophy at Stanford is a prerequisite before one can even dare to venture forth on such a daunting logical quest.  Perhaps then, in addition to the sheepskin, they fork over a philosopher’s stone that enables one to transmute lead into gold, create the elixir of life, and extract “moral truths” right out of the vacuum.

    In short, the philosophers continue to flounder.  Their logical demonstrations of nonexistent “moral truths” are similar in kind to logical demonstrations of the existence of imaginary super-beings, and just as threadbare.  Why does it matter?  I can’t supply you with any objective “oughts” here, but at least I can tell you my personal prejudices on the matter, and my reasons for them.  We are living in a time of moral chaos, and will continue to do so until we accept the truth about the evolutionary origin of human morality and the implications of that truth.  There are no objective moral truths, and it will be extremely dangerous for us to continue to ignore that fact.  Competing morally loaded ideologies are already demonstrably disrupting our political systems.  It is hardly unlikely that we will once again experience what happens when fanatics stuff their “moral truths” down our throats, as they did in the last century with the morally loaded ideologies of Communism and Nazism.  Do you dislike being bullied by Social Justice Warriors?  I’m sorry to inform you that the bullying will continue unabated until we explode the myth that they are bearers of “moral truths” that they are justified, according to “autonomous logic,” in imposing on the rest of us.  I could go on and on, but do I really need to?  Isn’t it obvious that a world full of fanatical zealots, all utterly convinced that they have a monopoly on “moral truth,” and a perfect right to impose these “truths” on everyone else, isn’t exactly a utopia?  Allow me to suggest that, instead, it might be preferable to live according to a simple and mutually acceptable “absolute” morality, in which “moral relativism” is excluded, and which doesn’t change from day to day in willy-nilly fashion according to the whims of those who happen to control the social means of communication.  As counter-intuitive as it seems, the only practicable way to such an outcome is acceptance of the fact that morality is a manifestation of evolved human nature, and of the truth that there are no such things as “moral truths.”


  • Morality and the Spiritualism of the Atheists

    Posted on May 11th, 2018 Helian No comments

    I’m an atheist.  I concluded there was no God when I was 12 years old, and never looked back.  Apparently many others have come to the same conclusion in western democratic societies where there is access to diverse opinions on the subject, and where social sanctions and threats of force against atheists are no longer as intimidating as they once were.  Belief in traditional religions is gradually diminishing in such societies.  However, those religions have hardly been replaced by “pure reason.”  They have merely been replaced by a new form of “spiritualism.”  Indeed, I would maintain that most atheists today have as strong a belief in imaginary things as the religious believers they so often despise.  They believe in the “ghosts” of good and evil.

    Most atheists today may be found on the left of the ideological spectrum.  A characteristic trait of leftists today is the assumption that they occupy the moral high ground. That assumption can only be maintained by belief in a delusion, a form of spiritualism, if you will – that there actually is a moral high ground.  Ironically, while atheists are typically blind to the fact that they are delusional in this way, it is often perfectly obvious to religious believers.  Indeed, this insight has led some of them to draw conclusions about the current moral state of society similar to my own.  Perhaps the most obvious conclusion is that atheists have no objective basis for claiming that one thing is “good” and another thing is “evil.”  For example, as noted by Tom Trinko at American Thinker in an article entitled “Imagine a World with No Religion,”

    Take the Golden Rule, for example. It says, “Do onto others what you’d have them do onto you.” Faithless people often point out that one doesn’t need to believe in God to believe in that rule. That’s true. The problem is that without God, there can’t be any objective moral code.

    My reply would be, that’s quite true, and since there is no God, there isn’t any objective moral code, either.  However, most atheists, far from being “moral relativists,” are highly moralistic.  As a consequence, they are dumbfounded by anything like Trinko’s remark.  It pulls the moral rug right out from under their feet.  Typically, they try to get around the problem by appealing to moral emotions.  For example, they might say something like, “What?  Don’t you think it’s really bad to torture puppies to death?”, or, “What?  Don’t you believe that Hitler was really evil?”  I certainly have a powerful emotional response to Hitler and tortured puppies.  However, no matter how powerful those emotions are, I realize that they can’t magically conjure objects into being that exist independently of my subjective mind.  Most leftists, and hence, most so-called atheists, actually do believe in the existence of such objects, which they call “good” and “evil,” whether they admit it explicitly or not.  Regardless, they speak and act as if the objects were real.

    The kinds of speech and actions I’m talking about are ubiquitous and obvious.  For example, many of these “atheists” assume a dictatorial right to demand that others conform to novel versions of “good” and “evil” they may have concocted yesterday or the day before.  If those others refuse to conform, they exhibit all the now familiar symptoms of outrage and virtuous indignation.  Do rational people imagine that they are gods with the right to demand that others obey whatever their latest whims happen to be?  Do they assume that their subjective, emotional whims somehow immediately endow them with a legitimate authority to demand that others behave in certain ways and not in others?  I certainly hope that no rational person would act that way.  However, that is exactly the way that many so-called atheists act.  To the extent that we may consider them rational at all, then, we must assume that they actually believe that whatever versions of “good” or “evil” they happen to favor at the moment are “things” that somehow exist on their own, independently of their subjective minds.  In other words, they believe in ghosts.

    Does this make any difference?  I suggest that it makes a huge difference.  I personally don’t enjoy being constantly subjected to moralistic bullying.  I doubt that many people enjoy jumping through hoops to conform to the whims of others.  I submit that it may behoove those of us who don’t like being bullied to finally call out this type of irrational, quasi-religious behavior for what it really is.

    It also makes a huge difference because this form of belief in imaginary objects has led us directly into the moral chaos we find ourselves in today.  New versions of “absolute morality” are now popping up on an almost daily basis.  Obviously, we can’t conform to all of them at once, and must therefore put up with the inconvenience of either keeping our mouths shut or risking furious condemnation as “evil” by whatever faction we happen to offend.  Again, traditional theists are a great deal more clear-sighted than “atheists” about this sort of thing.  For example, in an article entitled, “Moral relativism can lead to ethical anarchy,” Christian believer Phil Schurrer, a professor at Bowling Green State University, writes,

    …the lack of a uniform standard of what constitutes right and wrong based on Natural Law leads to the moral anarchy we see today.

    Prof. Schurrer is right about the fact that we live in a world of moral anarchy.  I also happen to agree with him that most of us would find it useful and beneficial if we could come up with a “uniform standard of what constitutes right and wrong.”  Where I differ with him is on the rationality of attempting to base that standard on “Natural Law,” because there is no such thing.  For religious believers, “Natural Law” is law passed down by God, and since there is no God, there can be no “Natural Law,” either.  How, then, can we come up with such a uniform moral code?

    I certainly can’t suggest a standard based on what is “really good” or “really bad” because I don’t believe in the existence of such objects.  I can only tell you what I would personally consider expedient.  It would be a standard that takes into account what I consider to be some essential facts.  These are as follows.

    • What we refer to as morality is an artifact of “human nature,” or, in other words, innate predispositions that affect our behavior.
    • These predispositions exist because they evolved by natural selection.
    • They evolved by natural selection because they happened to improve the odds that the genes responsible for their existence would survive and reproduce at the time and in the environment in which they evolved.
    • We are now living at a different time, and in a different environment, and it cannot be assumed that blindly responding to the predispositions in question will have the same outcome now as it did when those predispositions evolved.  Indeed, it has been repeatedly demonstrated that such behavior can be extremely dangerous.
    • Outcomes of these predispositions include a tendency to judge the behavior of others as “good” or “evil.”  These categories are typically deemed to be absolute, and to exist independently of the conscious minds that imagine them.
    • Human morality is dual in nature.  Others are perceived in terms of ingroups and outgroups, with different standards applying to what is deemed “good” or “evil” behavior towards those others depending on the category to which they are imagined to belong.

    I could certainly expand on this list, but the above are certainly some of the most salient and essential facts about human morality.  If they are true, then it is possible to make at least some preliminary suggestions about how a “uniform standard” might look.  It would be as simple as possible.  It would be designed to minimize the dangers referred to above, with particular attention to the dangers arising from ingroup/outgroup behavior.  It would be limited in scope to interactions between individuals and small groups in cases where the rational analysis of alternatives is impractical due to time constraints, etc.  It would be in harmony with innate human behavioral traits, or “human nature.”  It is our nature to perceive good and evil as real objective things, even though they are not.  This implies there would be no “moral relativism.”  Once in place, the moral code would be treated as an absolute standard, in conformity with the way in which moral standards are usually perceived.  One might think of it as a “moral constitution.”  As with political constitutions, there would necessarily be some means of amending it if necessary.  However, it would not be open to arbitrary innovations spawned by the emotional whims of noisy minorities.

    How would such a system be implemented?  It’s certainly unlikely that any state will attempt it any time in the foreseeable future.  Perhaps it might happen gradually, just as changes to the “moral landscape” have usually happened in the past.  For that to happen, however, it would be necessary for significant numbers of people to finally understand what morality is, and why it exists.  And that is where, as an atheist, I must part company with Mr. Trinko, Prof. Schurrer, and the rest of the religious right.  Progress towards a uniform morality that most of us would find a great deal more useful and beneficial than the versions currently on tap, regardless of what goals or purposes we happen to be pursuing in life, cannot be based on the illusion that a “natural law” exists that has been handed down by an imaginary God, any more than it can be based on the emotional whims of leftist bullies.  It must be based on a realistic understanding of what kind of animals we are, and how we came to be.  However, such self-knowledge will remain inaccessible until we shed the shackles of religion.  Perhaps, as they witness many of the traditional churches increasingly becoming leftist political clubs before their eyes, people on the right of the political spectrum will begin to find it less difficult to free themselves from those shackles.  I hope so.  I think that an Ansatz based on simple, traditional moral rules, such as the Ten Commandments, is more likely to lead to a rational morality than one based on furious rants over who should be allowed to use what bathrooms.  In other words, I am more optimistic that a useful reform of morality will come from the right rather than the left of the ideological spectrum, as it now stands.  Most leftists today are much too heavily invested in indulging their moral emotions to escape from the world of illusion they live in.  To all appearances they seriously believe that blindly responding to these emotions will somehow magically result in “moral progress” and “human flourishing.”  Conservatives, on the other hand, are unlikely to accomplish anything useful in terms of a rational morality until they free themselves of the “God delusion.”  It would seem, then, that for such a moral “revolution” to happen, it will be necessary for those on both the left and the right to shed their belief in “spirits.”


  • On the Illusion of Moral Relativism

    Posted on April 8th, 2018 Helian No comments

    As recently as 2009 the eminent historian Paul Johnson informed his readers that he made “…the triumph of moral relativism the central theme of my history of the 20th century, Modern Times, first published in 1983.”  More recently, however, obituaries of moral relativism have turned up here and there.  For example, one appeared in The American Spectator back in 2012, fittingly entitled Moral Relativism, R.I.P.  It was echoed a few years later by a piece in The Atlantic that announced The Death of Moral Relativism.  There’s just one problem with these hopeful announcements.  Genuine moral relativists are as rare as unicorns.

    True, many have proclaimed their moral relativism.  To that I can only reply, watch their behavior.  You will soon find each and every one of these “relativists” making morally loaded pronouncements about this or that social evil, wrong-headed political faction, or less than virtuous individual.  In other words, their “moral relativism” is of a rather odd variety that occasionally makes it hard to distinguish their behavior from that of the more zealous moral bigots.  Scratch the surface of any so-called “moral relativist,” and you will often find a moralistic bully.  We are not moral relativists because it is not in the nature of our species to be moral relativists.  The wellsprings of human morality are innate.  One cannot arbitrarily turn them on or off by embracing this or that philosophy, or reading this or that book.

    I am, perhaps, the closest thing to a moral relativist you will ever find, but when my moral emotions kick in, I’m not much different from anyone else.  Just ask my dog.  When she’s riding with me she’ll often glance my way with a concerned look as I curse the lax morals of other drivers.  No doubt she’s often wondered whether the canine’s symbiotic relationship with our species was such a good idea after all.  I know perfectly well the kind of people Paul Johnson was thinking of when he spoke of “moral relativists.”  However, I’ve watched the behavior of the same types my whole life.  If there’s one thing they all have in common, it’s a pronounced tendency to dictate morality to everyone else.  They are anything but “amoral,” or “moral relativists.”  The difference between them and Johnson is mainly a difference in their choice of outgroups.

    Edvard Westermarck may have chosen the title Ethical Relativity for his brilliant analysis of human morality, yet he was well aware of the human tendency to perceive good and evil as real, independent things.  The title of his book did not imply that moral (or ethical) relativism was practical for our species.  Rather, he pointed out that morality is a manifestation of our package of innate behavioral predispositions, and that it follows that objective moral truths do not exist.  In doing so he was pointing out a fundamental truth.  Recognition of that truth will not result in an orgy of amoral behavior.  On the contrary, it is the only way out of the extremely dangerous moral chaos we find ourselves in today.

    The moral conundrum we find ourselves in is a result of the inability of natural selection to keep up with the rapidly increasing complexity and size of human societies.  For example, a key aspect of human moral behavior is its dual nature – our tendency to perceive others in terms of ingroups and outgroups.  We commonly associate “good” traits with our ingroup, and “evil” ones with our outgroup.  That aspect of our behavior enhanced the odds that we would survive and reproduce at a time when there was no ambiguity about who belonged in these categories.  The ingroup was our own tribe, and the outgroup was the next tribe over.  Our mutual antagonism tended to make us spread out and avoid starvation due to over-exploitation of a small territory.  We became adept at detecting subtle differences between “us” and “them” at a time when it was unlikely that neighboring tribes differed by anything as pronounced as race or even language.  Today we have given bad names to all sorts of destructive manifestations of outgroup identification without ever grasping the fundamental truth that the relevant behavior is innate, and no one is immune to it.  Racism, anti-Semitism, bigotry, you name it.  They’re all fundamentally the same.  Those who condemn others for one manifestation of the behavior will almost invariably be found doing the same thing themselves, the only difference being who they have identified as the outgroup.

    Not unexpectedly, behavior that accomplished one thing in the Pleistocene does not necessarily accomplish the same thing today.  The disconnect is often absurd, leading in some cases to what I’ve referred to as morality inversions – moral behavior that promotes suicide rather than survival.  That has not prevented those who are happily tripping down the path to their own extinction from proclaiming their moral superiority and raining down pious anathemas on anyone who doesn’t agree.  Meanwhile, new versions of morality are concocted on an almost daily basis, each one pretending to objective validity, complete with a built-in right to dictate “goods” and “bads” that never occurred to anyone just a few years ago.

    There don’t appear to be any easy solutions to the moral mess we find ourselves in.  It would certainly help if more of us could accept the fact that morality is an artifact of natural selection, and that, as a consequence, objective good and evil are figments of our imaginations.  Perhaps then we could come up with some version of “absolute” morality that would be in tune with our moral emotions and at the same time allow us to interact in a manner that minimizes both the harm we do to each other and our exposure to the tiresome innovations of moralistic bullies.  That doesn’t appear likely to happen anytime soon, though.  The careers of too many moral pontificators and “experts on ethics” depend on maintaining the illusion.  Meanwhile, we find evolutionary biologists, evolutionary psychologists, and neuroscientists who should know better openly proclaiming the innate sources of moral behavior in one breath, and extolling some idiosyncratic version of “moral progress” and “human flourishing” in the next.  As one of Evelyn Waugh’s “bright young things” might have said, it’s just too shy-making.

    There is a silver lining to the picture, though.  At least you don’t have to worry about “moral relativism” anymore.


  • More Egg on Pinker’s Face: E. O. Wilson’s “The Origins of Creativity”

    Posted on March 12th, 2018 Helian No comments

    If you’re expecting a philosophical epiphany, E. O. Wilson’s The Origins of Creativity isn’t for you. His theme is that science and the humanities can form a grandiose union leading to a “third enlightenment” if only scholars in the humanities would come up to speed with advances in the sciences via “thorough application of five disciplines – paleontology, anthropology, psychology, evolutionary biology, and neurobiology.”  Good luck with that.  We can smile and nod as the old man rambles on about his latest grand, intellectual scheme, though.  He isn’t great because of such brainstorms.  He’s great because he combines courage and common sense with an ability to identify questions that are really worth asking.  That’s what you’ll discover if you read his books, and that’s why they’re well worth reading.  You might even say he’s succeeded in realizing his own dream to some extent, because reading Wilson is like reading a good novel.  You constantly run across anecdotes about interesting people, tips about unfamiliar authors who had important things to say, and thought-provoking comments about the human condition.  For example, in “The Origins of Creativity” you’ll find a portrayal of the status games played by Harvard professors, his take on why he thinks Vladimir Nabokov is a better novelist than Jonathan Franzen, his reasons for asserting that, when it comes to the important questions facing humanity, “the grail to be sought is the nature of consciousness, and how it originated,” and some interesting autobiographical comments to boot.

    Those who love to explore the little ironies of history will also find some interesting nuggets in Wilson’s latest. The history I’m referring to is, of course, that of the Blank Slate.  For those who haven’t heard of it, it was probably the greatest perversion of science of all time.  For more than half a century, a rigid orthodoxy was imposed on the behavioral sciences according to which there is no such thing as human nature, that at birth our minds are “blank slates,” and that all human behavior is learned.  This dogma, transparently ludicrous to any reasonably intelligent child, has always been attractive to those whose tastes run to utopian schemes that require human behavior to be a great deal more “malleable” than it actually is.  Communism, fashionable during the heyday of the Blank Slate, is a case in point.

    Where does Wilson fit in?  Well, in 1975, he published Sociobiology, in a couple of chapters of which he suggested that there may actually be such a thing as human nature, and it may actually be important.  In doing so he became the first important member of the academic tribe to break ranks with the prevailing orthodoxy.  By that time, however, the Blank Slate had already long been brilliantly debunked and rendered a laughing stock among intelligent lay people by an outsider: a man named Robert Ardrey.  Ardrey wrote a series of books on the subject beginning with African Genesis in 1961.  He had been seconded by other authors, such as Konrad Lorenz, Niko Tinbergen, Lionel Tiger and Robin Fox, long before the appearance of Sociobiology.  Eventually, the behavioral “scientists” were forced to throw in the towel and jettison the Blank Slate orthodoxy.  However, it was much too humiliating for them to admit the truth – that they had all been exposed as charlatans by Ardrey, a man who had spent much of his life as a “mere playwright.”  Instead, they anointed Wilson, a member of their own tribe, as the great hero who had demolished the Blank Slate.  This grotesque imposture was enshrined in Steven Pinker’s The Blank Slate, which now passes as the official “history” of the affair.

    Where does the irony come in?  Well, Pinker needed some plausible reason to ignore Ardrey.  The deed was done crudely enough.  He simply declared that Ardrey had been “totally and utterly wrong,” based on the authority of a comment to that effect in Richard Dawkins’ The Selfish Gene.  In the process, he didn’t mention exactly what it was that Ardrey was supposed to have been “totally and utterly wrong” about.  After all, to all appearances the man had been “totally and utterly” vindicated.  As it happens, Dawkins never took issue with the main theme of all of Ardrey’s books; that there is such a thing as human nature, and it is important and essential to understanding the human condition.  He merely asserted in a single paragraph of the book that Ardrey, along with Konrad Lorenz and Irenäus Eibl-Eibesfeldt, had been wrong in endorsing group selection, the notion that natural selection can operate at the level of the group as well as of the individual or gene.  In other words, Pinker’s whole, shabby rationale for dismissing Ardrey was based on his support for group selection, an issue that was entirely peripheral to the overall theme of all Ardrey’s work.  Now for the irony – in his last three books, including his latest, Wilson has come out unabashedly and whole heartedly in favor of (you guessed it) group selection!

    In The Origins of Creativity Wilson seems to be doing his very best to rub salt in the wound.  In his last book, The Hunting Hypothesis, Ardrey had elaborated on the theory, also set forth in all his previous books, that the transition from ape to man had been catalyzed by increased dependence on hunting and meat eating.  The Blank Slaters long insisted that early man had never been guilty of such “aggressive” behavior, and that if he had touched meat at all, it must have been acquired by scavenging.  They furiously attacked Ardrey for daring to suggest that he had hunted.  If you watch the PBS documentary on the recent discovery of the remains of Homo naledi, you’ll see that the ancient diehards among them have never given up this dogma.  They insist that Homo naledi was a vegetarian even though, to the best of my knowledge, no one had ever contended that he wasn’t, and they go so far as to actually call out the “unperson” Ardrey by name.  The realization that they were still so bitter after all these years brought a smile to my face.  What really set them off was Ardrey’s support for a theory first proposed by Raymond Dart that hunting had actually begun very early, in the pre-human species Australopithecus africanus. Well, if they were still mad at Ardrey, they’ll be livid when they read what Wilson has to say on the subject in his latest, such as,

    By a widespread consensus, the scenario drawn by scientists thus far begins with the shift by one of the African australopiths away from a vegetarian diet to one rich in cooked meat.  The event was not a casual change as in choosing from a menu, nor was it a mere re-wiring of the palate.  Rather the change was a full hereditary makeover in anatomy, physiology, and behavior.

    and

    This theoretical reconstruction has gained traction from fossil remains and the lifestyles of contemporary hunter-gatherers.  Meat from larger prey was shared, as it is by wolves, African wild dogs, and lions.  Given, in addition, the relatively high degree of intelligence possessed by large, ground-dwelling primates in general, the stage was then set in prehuman evolution for an unprecedented degree of cooperation and division of labor.

    Here, Wilson almost seems to be channeling Ardrey.  But wait, there’s more.  This one is for the real historical connoisseurs out there.  As noted above, in the bit from The Selfish Gene Pinker used for his clumsy attempt to airbrush Ardrey out of history, Dawkins condemned two others for the sin of supporting group selection as well; Konrad Lorenz and Austrian ethologist Irenäus Eibl-Eibesfeldt.  I suspect Lorenz was a bit too close to Ardrey for comfort, as the two were often condemned by the Blank Slaters in the same breath, but, sure enough, Eibl-Eibesfeldt makes a couple of cameo appearances in Wilson’s latest book!  For example, in chapter 12,

    During his classic field research in the 1960s, the German anthropologist Irenäus Eibl-Eibesfeldt demonstrated in minute detail that people in all societies, from primitive and preliterate to modern and urbanized, use the same wide range of paralinguistic signals.  These entail mostly facial expressions, denoting variously fear, pleasure, surprise, horror, and disgust.  Eibl-Eibesfeldt lived with his subjects and further, to avoid self-conscious behavior, filmed them in their daily lives with a right-angle lens, by which the subject is made to think that the camera is pointed elsewhere.  His general conclusion was that paralinguistic signals are hereditary traits shared by the whole of humanity.

    Brilliant, but according to Pinker this, too, must be “totally and utterly wrong,” since Eibl-Eibesfeldt is mentioned in the very same sentence in Dawkins’ book that he used to redact Ardrey from history!  At least it’s nice to see this bit of vindication for at least one of Pinker’s “totally and utterly wrong” trio.  I suspect Wilson is perfectly well aware of the dubious nature of Pinker’s “history,” but I doubt if he will ever have anything to say about Lorenz, not to mention Ardrey.  He has too much interest in preserving his own legacy for that.  I can’t really blame a man his age for wanting to go down in history as the heroic knight in shining armor who slew the Blank Slate dragon. He actually tries to push the envelope a bit in his latest with comments like,

    At first thought, this concept of kin selection, extended beyond nepotism to cooperation and altruism within an entire group, appears to have considerable merit.  I said so when I first synthesized the discipline of sociobiology in the 1960s and early 1970s.  Yet it is deeply flawed.

    During Ardrey’s day, the scientific discipline most often associated in the lay vernacular with resistance to the Blank Slate was ethology.  A few years after Wilson published his book with that title in 1975, it became sociobiology.  Now evolutionary psychology has displaced both of them.  I’m not sure what Wilson means by “sociobiology” here, but I’ve never seen anything he published prior to 1975 that comes close to being a forthright defense of the existence and importance of human nature.  Ardrey and others had published pretty much everything of real significance he had to say on the subject more than a decade earlier.

    Be that as it may, I have no reservations about recommending “The Origins of Creativity” to my readers.  True, I’m a bit skeptical about his latest project for a grand unification of science and the humanities, and the book is really little more than a pamphlet.  For all that, reading him is like having a pleasant conversation with someone who is very wise about the ways of the world, knows about the questions that are important for us to ask, and can tell you a lot of things that are worth knowing.

  • On the Purpose of Life

    Posted on January 29th, 2018 Helian 3 comments

    There is no purpose to your life other than the purpose you choose to give it.

    Is your goal the brotherhood of all mankind?  Is your goal human flourishing?  Is your goal a just and democratic society?  Is your goal to serve some God or gods?  The first cause of all of these goals, and any others you can think of, may be found in innate emotions and predispositions that exist because they evolved.  They did not evolve for a purpose.  They exist because at some time that was likely quite different from the present, they happened to increase the odds that the responsible genes would survive and reproduce.  They are the foundation that gives rise to every single human aspiration, no matter how noble or sublime that aspiration is imagined to be.

    There is no objective reason why the goals and aspirations of a Plato or a Kant are more worthy, more legitimate, or more morally good than the goals and purposes of a thief or a murderer.  In the end, every human being on the planet is merely seeking to satisfy emotional whims that he has interpreted or tried to make sense of in one way or another.  Any individual’s assumption that his goals are intrinsically superior to or more right and proper in themselves than the goals of others is a delusion.  The universe doesn’t care.

    What does that imply concerning what our goals should be, or what we really ought to do?  Nothing!  Nothing, that is, unless we are speaking of what some individual should do or ought to do to satisfy some idiosyncratic whim that cannot possibly be objectively more legitimate or praiseworthy than the whim of any other individual.

    How, then, do we choose what our goals and purposes will be?  After all, we will have them regardless, because it is our nature to have them.  In the end, all of us must decide for ourselves.  However, in choosing them I personally think it is useful to be aware of the above fundamental facts.  The alternative is to stumble blindly through life, chasing mirages, clueless as to what is really motivating us and why.  Again, purely from my personal point of view, that does not seem an attractive alternative.  Blind stumbling tends to be self-destructive, not to mention inconvenient to others.  I personally find it incongruous and disturbing to witness the spectacle of emotions and passions inspiring people to pursue ends that are the precise opposite of the ends that account for the existence of those emotions and passions to begin with.

    I personally pursue goals and purposes that seem to me in harmony with the fundamental reason that my goals and purposes exist to begin with.  In other words, my basic goal in life has been to survive and reproduce.  Beyond that, I seek first to promote the survival of my species, and beyond that the survival of biological life in general.  These goals seem noble and sublime enough to me personally.  Our very existence seems to me improbable and awe-inspiring.  Think of how complex and intelligent we are, and of all our highly developed senses and abilities.  Look in a mirror and consider the fact that a creature like you could have evolved from inanimate matter.  Think of the mind-boggling length of time it took for that to happen, and the conditions that were necessary for it to occur in the first place.  Stunning!  We are all final links in an unbroken chain of life that began with direct ancestors that existed billions of years ago.  There are millions of links in the chain, and all of those links succeeded in generating new links, so that the chain would remain unbroken through all that incredible gulf of time.  Under the circumstances, my personal purpose seems obvious to me.  Don’t break the chain!

    There is no objective reason why these purposes of mine are any more good, legitimate, or worthy than any alternatives whatsoever.  They are not intrinsically better than the purposes of an anti-natalist, a suicide bomber, or a celibate priest.  However, for personal reasons, I would prefer that, as others pursue their purposes, they at least be aware of what is actually motivating them.  It might lead them to consider whether blindly breaking the chain, destroying themselves and harming others in the process, is really a goal worth pursuing after all.