The world as I see it
  • Joshua Greene’s “Moral Tribes”: The Minting of a New Morality

    Posted on January 24th, 2014 Helian No comments

    Joshua Greene is a professor of psychology at Harvard.  Despite the title of this post, he’s not proposing an entirely new morality, but an updated version of John Stuart Mill’s utilitarianism.  Greene refers to it as “Deep Pragmatism.” He describes his goal in writing Moral Tribes as follows:

    This book is an attempt to understand morality from the ground up.  It’s about understanding what morality is, how it got here, and how it’s implemented in our brains.  It’s about understanding the deep structure of moral problems as well as the differences between the problems that our brains were designed to solve and the distinctively modern problems we face today.  Finally, it’s about taking this new understanding of morality and turning it into a universal moral philosophy that members of all human tribes can share.

    I won’t go into too much detail about Greene’s version of utilitarianism, or his rationale for proposing it.  Suffice it to say that Greene is familiar with Darwin.  He knows that our moral emotions exist because they promoted our survival and procreation.  In other words, they evolved, as he puts it, as a solution to the Tragedy of the Commons, familiar to students of philosophy.  However, while they solved that problem by promoting cooperation within groups, they did nothing to solve the problem of hostility between groups.  In Greene’s words,

    Our moral brains did not evolve for cooperation between groups (at least not all groups).  How do we know this?  Why couldn’t morality have evolved to promote cooperation in a more general way?  Because universal cooperation is inconsistent with the principles governing evolution by natural selection.

    In other words, Greene knows about ingroups and outgroups.   He refers to this lack of universal cooperation as the “Tragedy of Commonsense Morality.”  As he puts it,

    Morality did not evolve to promote universal cooperation.  On the contrary, it evolved as a device for successful intergroup competition.  In other words, morality evolved to avert the Tragedy of the Commons, but it did not evolve to avert the Tragedy of Commonsense Morality.

    In proposing a solution to this problem, Greene introduces us to a metaphor that appears repeatedly throughout the rest of the book.  He compares the human moral machinery to a camera that has both an automatic, point-and-shoot mode and a manual mode.  It’s basically just a revamped version of the old reason-versus-untamed-emotion dichotomy that has busied philosophers since Plato’s allegory of the chariot.  In general, the automatic mode is fine for dealing with problems within groups.  However, as Greene puts it,

    …the Tragedy of Commonsense Morality is a tragedy of moral inflexibility.  There is strife on the new pastures not because herders are hopelessly selfish, immoral, or amoral, but because they cannot step outside their respective moral perspectives.  How should they think?  The answer is now obvious:  They should shift into manual mode.

    In other words, we need to stop and think.  However, as he points out, “reasoning has no end of its own.”  In other words, he explicitly agrees with Hume, who wrote that reason is a “slave of the passions,” noting that “reason cannot produce good decisions without some kind of emotional input, however indirect.”  And what is that emotional input to be?  Basically, the desire for “happiness,” that sine qua non of utilitarians everywhere, combined with impartiality, which Greene claims is the “essence of morality.”  Now, of these two, impartiality is the only one that really has anything to do with human moral emotions per se.  Assuming for the sake of argument that happiness, and particularly the esoteric version in which utilitarians take such delight, is something we all want, it can hardly be said that people who are unhappy are also evil, and vice versa.  Focusing on impartiality, Greene writes,

    First, the human manual mode is, by nature, a cost-benefit reasoning system that aims for optimal consequences.  Second, the human manual mode is susceptible to the ideal of impartiality.  And, I submit, this susceptibility is not tribe-specific.  Members of any tribe can get the idea behind the Golden Rule.  Put these two things together and we get manual modes that aspire, however imperfectly, to produce consequences that are optimal from an impartial perspective, giving equal weight to all people.

    Here I can but wonder what species Greene is talking about.  It certainly isn’t ours.  I could cite dozens of passages in his own book that demonstrate that he himself has anything but an “impartial perspective.”  In any case, the result of brewing together happiness and impartiality to create what Greene refers to as a new “metamorality” is predictable.  It stands human morality completely on its head.  Divorced completely from the reasons it evolved to begin with, this new utilitarian morality, which Greene likes to refer to as “Deep Pragmatism,” insists that we reject the “inflexible, automatic mode, moral gizmos” that belong to the normal human complement of moral emotions whenever they don’t promote “happiness.”  We are not referring to our own happiness here.  Rather, we are to become servants of the happiness of all mankind.  As Greene puts it,

    Utilitarianism is a very egalitarian philosophy, asking the haves to do a lot for the have-nots.  Were you to wake up tomorrow as a born-again utilitarian, the biggest change in your life would be your newfound devotion to helping unfortunate others.

    We can excuse Mill for promoting such a philosophy.  He wrote before his philosophy could be informed by the work of Darwin.  As a result, even though he was aware of contemporary theories claiming an innate basis to moral behavior, he rejected them.  In other words, he was a Blank Slater, though certainly not in the same sense as the ideologically motivated Blank Slaters who came after him, or the religiously motivated Blank Slaters, like Locke, who came before him.  As a result, he believed that the human mind could adopt virtually any morality, and concluded that the best one would be that which was also most useful.  Clearly, he realized that, if morality were innate, it would have profound implications for his theories.  As I have written elsewhere, I think it highly probable that, if he had lived in our times, he would have put two and two together and rejected utilitarianism.

    Not so Greene.   As he puts it,

    We can, for example, donate money to faraway strangers without expecting anything in return.  From a biological point of view, this is just a backfiring glitch, much like the invention of birth control.  But from our point of view as moral beings who can kick away the evolutionary ladder, it may be exactly what we want.  Morality is more than what it evolved to be.

    Kick away the evolutionary ladder?  Turn morality on its head?  Such notions are delusional unless you believe in some kind of objective “moral truth.”  Greene claims that he’s “agnostic” when it comes to the idea of moral truth, and that it doesn’t really matter as far as utilitarianism is concerned, but that’s nonsense.  There has to be some reason for rejecting normal human “automatic mode” moral emotions in favor of some “metamorality” that serves purposes diametrically opposed to the reasons that moral emotions evolved to begin with, and I can think of no other reason than an irrational faith in some kind of objective moral truth.  And in spite of his disclaimers, one can cite dozens of passages in his book that demonstrate that he does embrace what Mill referred to as “transcendental morality.”  For example,

    (referring to someone in a fine Italian suit that will be ruined if he wades into a pond to save a drowning child) Is it morally acceptable to let this child drown in order to save your suit?  Clearly not, we say.  That would be morally monstrous.

    Utilitarianism says that we should do whatever really works best, in the long run, and not just for the moment.  (Implies that there is a universal standard of what is “best.”)

    Happiness is the ur-value, the Higgs boson of normativity, the value that gives other values their value.

    We’ll dispense with the not especially moral goal of spreading genes and focus instead on the more proximate goal of cooperation.

    In other words, dangling before Greene’s imagination is a Morality that has nothing to do with the reasons that led to the evolution of moral behavior to begin with.  I have different goals.  I don’t hide them behind a smokescreen of “meta-morality.”  They are, first, to promote the survival of my own genes, second, to promote the survival of my species, and third, to promote the survival of terrestrial life.  I do not consider my conscious mind anything but a transitory, evolved aspect of my phenotype, but to that mind there is something sublime and majestic in being the link in a chain of life that has existed for billions of years.  The idea that I will be the last link in that chain is repugnant to me.  Serving as a “happiness pump” for a huge colony of happy ants that has no perceptible reason for existing except to “flourish” and be “happy” is completely repugnant to me.

    Greene, of course, is of a different opinion.  I agree that it may be possible to sort out such differences in “manual mode,” but only one that is based as much as possible on reason and that takes as little account of morality as possible.  As far as I’m concerned, nothing could be more selfish than attempting to tart up my own whims as a “metamorality.”  The result of such attempts in the past should serve as a sufficient deterrent from trying it again, even with a philosophy as transparently impractical to implement as utilitarianism.  Greene is well aware of these potential drawbacks.  He writes,

    History offers no shortage of grand utopian visions gone bad, including the rise and (nearly complete) fall of communism during the twentieth century.  Communists such as Stalin and Mao justified thousands of murders, millions more deaths from starvation, and repressive totalitarian governments in the name of the “greater good.”  Shouldn’t we be very wary of people with big plans who say that it’s all for the greater good?  Yes, we should.  Especially when those big plans call for big sacrifices.  And especially, especially when the people making the sacrifices (or being sacrificed!) are not the ones making the big plans.  But this wariness is perfectly pragmatic, utilitarian wariness.  What we’re talking about here is avoiding bad consequences.  Aiming for the greater good does not mean blindly following any charismatic leader who says that it’s all for the greater good.  That’s a recipe for disaster.

    So Greene thinks that the whole Communist debacle, with its gestation period of well over a century, during which time its development was carried forward by a host of convinced theorists, many of whom were neither charismatic themselves nor particularly attracted to charismatic leaders, could easily have been avoided if its adepts had just been “pragmatic,” and had been more circumspect in their choice of leaders?  Sorry, but I think a better way to avoid such catastrophes in the future would be to stop cobbling together new “metamoralities” altogether.

    We cannot dispense with morality, at least at the level of individual interactions.  We’re not smart enough to do without it.  That said, we can at least attempt to understand its evolutionary roots and the reasons for its existence, and, in the realization that the traits we associate with moral behavior evolved at times utterly unlike the present, do our best to keep our moral emotions from blowing up in our faces.  Greene’s utilitarianism will never be a miraculous solution to the “Tragedy of Commonsense Morality.”  There will always be ingroups and outgroups, and they will always be hostile to each other, manual mode or no manual mode.  What could possibly be more manifest than the furious hostility of Greene’s own liberal tribe to their conservative outgroup?  If we are to survive, we must learn to manage this hostility, and creating yet another new moral system seems to me an extremely unpromising approach to the problem.

