Even More Fun with Free Will

That inimitable and irascible physicist Lubos Motl, who blogs at The Reference Frame, sought to vindicate the existence of free will in a recent post entitled Free Will of Particles and People.  To begin, he insisted that he must have free will because he feels like he has it:

The actual reason why I am sure about the existence of free will (and I mean my free will) is that I feel it.

Well, I feel it, too, but human beings have been known to feel any number of things that aren’t true, so I don’t find that argument convincing.  Lubos’ second argument is based on the fact that the universe is not deterministic in the classical sense.  We live in a quantum universe, and quantum phenomena appear to be random.  Because free will, at least as defined by Lubos, exists at the level of atomic and sub-atomic particles, and because single particles can change the state of cells, and single cells the state of the human brain, we, too, must have free will.  I’m not so sure about that one either.  True, the outcome of a measurement at the quantum scale is unpredictable, and therefore appears to be random, but we don’t really know that it is.  We can never measure exactly the same thing twice.  We can repeat experiments, but we can never measure exactly the same particle at exactly the same time in exactly the same place twice.

Then there’s the problem of what all this stuff we’re measuring really is.  We know how matter behaves at the atomic scale in great detail.  The fact that the atomic bomb worked demonstrated that convincingly enough.  We can use Maxwell’s equations and the Schrödinger Equation to make particles of matter and energy jump through hoops, but that doesn’t alter the fact that we don’t really know what they are at the most fundamental level, or even why they exist at all.  In short, I have a problem with making positive claims about things we don’t understand.  Positive claims about free will assume a level of knowledge that we just don’t have.

On the other hand, I have no problem at all with assuming that we do have free will.  As Lubos says, it certainly feels like we do, and if we actually do, then we are merely assuming something that is true.  On the other hand, if we don’t have free will, then assuming that we do couldn’t change things for the worse, for the very good reason that, lacking free will, we would be incapable of changing anything.

Arguments against the existence of free will are absurd, because they imply the assumption of free choice.  If there is no free will, then there is no point in arguing about it, because it can’t possibly change anything in a way that wasn’t pre-programmed before the argument started.  True, if there is no free will, then the one making the argument couldn’t decide not to make it, but the fundamental absurdity remains.  What could possibly be the point of arguing with me about my assumptions regarding free will if I have no choice in the matter?  The future will be different depending on whether a robot tightens or loosens a screw.  However, if the robot is pre-programmed, and has no choice in the matter, it won’t alter a thing.  Nothing will shake the future out of its predestined rut.  In spite of that, I suspect that the most insistent deniers of free will don’t really believe their arguments are pointless.  And yet their arguments would be completely pointless unless they believed in their heart of hearts, either that they could make a free choice to argue one way or the other, or that the person listening could make such a choice.

If there is no free will, then my assumption that there is won’t change a thing.  If, on the other hand, we do have free will, and my assumption that we do despite my lack of any proof to that effect actually represents a free choice, then it seems to me that it’s a choice that is likely to make life a great deal more pleasant.  Where’s the fun in being a robot?  As far as I’m concerned, the assumption is justified if I can relieve even a single person of the despair and sense of futility that are predictable responses to the opposite assumption.

We can certainly debate the question of free will as stubbornly as we please.  However, I would contend that we lack the knowledge necessary to decide the matter one way or the other.  Perhaps one day that knowledge will be ours.  If it turns out we actually don’t have free will, then it will be illogical to blame me for my assumption that we do.  If, on the other hand, we discover that we actually do have free will, then it seems that those who argued furiously that we don’t will look rather foolish.  Why take the risk?

The Alternate Reality Fallacy

The alternate reality fallacy is ubiquitous.  Typically, it involves the existence of a deity, and goes something like this:  “God must exist because otherwise there would be no absolute good, no absolute evil, no unquestionable rights, life would have no purpose, life would have no meaning,” and so on and so forth.  In other words, one need only demonstrate that a God is necessary.  If so, he will automatically pop into existence.  The video of a talk by Christian apologist Ravi Zacharias included below is provided as an illustrative data point for the reader.

The talk, entitled, “The End of Reason:  A Response to the New Atheists,” was Zacharias’ contribution to the 2012 Contending with Christianity’s Critics Conference in Dallas.  I ran across it at Jerry Coyne’s Why Evolution is True website in the context of a discussion of rights.  We find out where Zacharias is coming from at minute 4:15 in the talk when he informs us that the ideas,

…that steadied this part of the world, rooted in the notion of the ineradicable difference between good and evil, facts on which we built our legal system, our notions of justice, the very value of human life, how intrinsic worth was given to every human being,

all have a Biblical mooring.  Elaborating on this theme, he quotes Chesterton to the effect that “we are standing with our feet firmly planted in mid-air.”  We have,

…no grounding anymore to define so many essential values which we assumed for many years.

Here Zacharias is actually stating a simple truth that has eluded many atheists.  Christianity and other religions do, indeed, provide some grounding for such things as objective rights, objective good, and objective evil.  After all, it’s not hard to accept the reality of these things if the alternative is to burn in hell forever.  The problem is that the “grounding” is an illusion.  The legions of atheists who believe in these things, however, actually are “standing with their feet firmly planted in mid-air.”  They have dispensed even with the illusion, sawing off the limb they were sitting on, and yet they counterintuitively persist in lecturing others about the nature of these chimeras as they float about in the vacuum, to the point of becoming quite furious if anyone dares to disagree with them.  Zacharias’ problem, on the other hand, isn’t that he doesn’t bother to provide a grounding.  His problem is his apparent belief in the non sequitur that, if he can supply a grounding, then that grounding must necessarily be real.

Touching on this disconcerting tendency of many atheists to hurl down anathemas on those they consider morally impure, in spite of the fact that they lack any coherent justification for the novel values they concoct on the fly, Zacharias remarks at 5:45 in the video,

The sacred meaning of marriage (and others) have been desacralized, and the only one who’s considered obnoxious is the one who wants to posit the sacredness of these issues.

Here, again, I must agree with him.  Assuming he’s alluding to the issue of gay marriage, it makes no sense to simply dismiss anyone who objects to it as a bigot and a “hater.”  That claim is based on the obviously false assumption that no one actually takes their religious beliefs seriously.  Unfortunately, they do, and there is ample justification in the Bible, not to mention the Quran, for the conclusion that gay marriage is immoral.  Marriage has a legal definition, but it is also a religious sacrament.  There is no rational basis for the claim that anyone who objects to gay marriage is objectively immoral.  Support for gay marriage represents, not a championing of objective good, but the statement of a cultural preference.  The problem with the faithful isn’t that they are all haters and bigots.  The problem is that they construct their categories of moral good and evil based on an illusion.

Beginning at about 6:45 in his talk, Zacharias continues with the claim that we are passing through a cultural revolution, which he defines as a,

decisive break with the shared meanings of the past, particularly those which relate to the deepest questions of the nature and purpose of life.

noting that culture is,

an effort to provide a coherent set of answers to the existential questions that confront all human beings in the passage of their lives.

In his opinion, it can be defined in three different ways. First, there are theonomous cultures.  As he puts it,

These are based on the belief that God has put his law into our hearts, so that we act intuitively from that kind of reasoning.  Divine imperatives are implanted in the heart of every human being.

Christianity is, according to Zacharias, a theonomous belief.  Next, there are heteronomous cultures, which derive their laws from some external source.  In such cultures, we are “dictated to from the outside.”  He cites Marxism as a heteronomous world view.  More to the point, he claims that Islam also belongs in that category.  Apparently we are to believe that this “cultural” difference supplies us with a sharp distinction between the two religions.  Here we discover that Zacharias’ zeal for his new faith (he was raised a Hindu) has outstripped his theological expertise.  Fully theonomous versions of Christianity really only came into their own among Christian divines of the 18th century.  The notion, supported by the likes of Francis Hutcheson and the Earl of Shaftesbury, that “God has put his law into our hearts,” was furiously denounced by other theologians as not only wrong, but incompatible with Christianity.  John Locke was one of the more prominent Christian thinkers among the many who denied that “divine imperatives are implanted in the heart of every human being.”

But I digress.  According to Zacharias, the final element of the triad is autonomous culture, or “self law”, in which everyone is a law unto him or herself.  He notes that America is commonly supposed to be such a culture.  However, at about the 11:00 mark he observes that,

…if I assert sacred values, suddenly a heteronomous culture takes over, and tells me I have no right to believe that.  This amounts to a “bait and switch.”  That’s the new world view under which the word “tolerance” really operates.

This regrettable state of affairs is the result of yet another triad, in the form of the three philosophical evils which Zacharias identifies as secularization, pluralism, and privatization.  They are the defining characteristics of the modern cultural revolution.  The first supposedly results in an ideology without shame, the second in one without reason, and the third in one without meaning.  Together, they result in an existence without purpose.

One might, of course, quibble with some of the underlying assumptions of Zacharias’ world view.  One might argue, for example, that the results of Christian belief have not been entirely benign, or that the secular societies of Europe have not collapsed into a state of moral anarchy.  That, however, is really beside the point.  Let us assume, for the sake of argument, that everything Zacharias says about the baleful effects of the absence of Christian belief is true.  It still begs the question, “So what?”

Baleful effects do not spawn alternate realities.  If the doctrines of Christianity are false, then the illusion that they supply meaning, or purpose, or a grounding for morality will not transmute them into the truth.  I personally consider the probability that they are true to be vanishingly small.  I do not propose to believe in lies, whether their influence is portrayed as benign or not.  The illusion of meaning and purpose based on a belief in nonsense is a paltry substitute for the real thing.  Delusional beliefs will not magically become true, even if those beliefs result in an earthly paradise.  As noted above, the idea that they will is what I refer to in my title as the alternate reality fallacy.

In the final part of his talk, Zacharias describes his own conversion to Christianity, noting that it supplied what was missing in his life.  In his words, “Without God, reason is dead, hope is dead, morality is dead, and meaning is gone, but in Christ we recover all these.”  To this I can but reply that the man suffers from a serious lack of imagination.  We are wildly improbable creatures sitting at the end of an unbroken chain of life that has existed for upwards of three billion years.  We live in a spectacular universe that cannot but fill one with wonder.  Under the circumstances, is it really impossible to relish life, and to discover a reason for cherishing and preserving it, without resort to imaginary super beings?  Instead of embracing the awe-inspiring reality of the world as it is, does it really make sense to supply the illusion of “meaning” and “purpose” by embracing the shabby unreality of religious dogmas?  My personal and admittedly emotional reaction to such a choice is that it is sadly paltry and abject.  The fact that so many of my fellow humans have made that choice strikes me, not as cause for rejoicing, but for shame.

Notes on “A Clergyman’s Daughter” – George Orwell’s Search for the Meaning of Life

A synopsis of George Orwell’s A Clergyman’s Daughter may be found in the Wiki entry on the same.  In short, it relates the experiences of Dorothy Hare, only daughter of the Reverend Charles Hare, a “gentleman” clergyman with a chronic habit of living beyond his means.  Dorothy’s life is consumed by a frantic struggle to maintain respectability in spite of a mountain of debt owed to the local tradesmen, a dwindling congregation, and a church gradually decaying to ruin for lack of maintenance.  There’s also a problem so repressed in Dorothy’s mind that she’s hardly conscious of it; she is losing her Christian faith.

Eventually the pressure becomes unbearable.  At the end of Chapter 1 we leave Dorothy exhausted, working herself beyond endurance late at night to prepare costumes for a children’s play.  At the start of Chapter 2 we find her teleported to the Old Kent Road, south of London, where she wakes up with a bad case of amnesia and only half a crown in her pocket.  A good German might describe this rather remarkable turn of events as an den Haaren herbeigezogen (dragged in by the hair).  In other words, it’s far-fetched, but we can forgive it because Orwell refrains from boring us with explanatory psychobabble, it’s one of his earliest books, and he needs some such device in order to dish up a fictional version of the autobiographical events described in his Down and Out in Paris and London, published a couple of years earlier.

Eventually Dorothy is rescued from starvation and squalor by a much older cousin, who sets her up as a school teacher at Ringwood House, which Orwell describes as a fourth-rate private school with only 21 female inmates.  At this point the astute reader will discover something that might come as a revelation to those who are only familiar with Animal Farm and 1984.  Orwell was a convinced socialist when he wrote the book, and remained one until the end of his life.  Mrs. Creevy, the woman who runs the school, is a grasping capitalist, interested only in squeezing as much profit out of the enterprise as possible.  The girls’ “education” consists mainly of a mind-numbing routine of rote memorization and handwriting drills.  Dorothy’s attempts at education reform are nipped in the bud, and she is eventually sacked.  In Mrs. Creevy’s words,

It’s the fees I’m after, not developing the children’s minds.  It’s not to be supposed as anyone’s to go to all the trouble of keeping a school and having the house turned upside down by a pack of brats, if it wasn’t that there’s a bit of money to be made out of it.  The fee comes first, and everything else comes afterwards.

Orwell later elaborates,

There are, by the way, vast numbers of private schools in England.  Second-rate, third-rate, and fourth-rate (Ringwood House was a specimen of the fourth-rate school), they exist by the dozen and the score in every London suburb and every provincial town.  At any given moment there are somewhere in the neighborhood of ten thousand of them, of which less than a thousand are subject to Government inspection.  And though some of them are better than others, and a certain number, probably, are better than the council schools with which they compete, there is the same fundamental evil in all of them; that is, that they have ultimately no purpose except to make money.

So long as schools are run primarily for money, things like this will happen.  The expensive private schools to which the rich send their children are not, on the surface, so bad as the others, because they can afford a proper staff, and the Public School examination system keeps them up to the mark; but they have the same essential taint.

Recall that the book was published in 1935.  The Spanish Civil War, in which Orwell fought with a socialist unit not affiliated with the Communists, began in 1936.  In that conflict he had his nose rubbed in the reality of totalitarianism, socialism that had dropped the democratic mask.  The experience is described in his Homage to Catalonia, which is essential reading for anyone interested in learning what inspired his later work.  There he tells how the Communist legions attacked and destroyed his own division, despite the fact that it was fighting on the same side.  Totalitarianism has never recognized more than two sides: the side that it controls, and the side that it doesn’t.  He saw that its real reason for existence was nothing like a worker’s paradise, or any other version of “human flourishing,” but absolute, unconditional power.  The nature of the system and the power it aimed at was what he described in 1984.  When A Clergyman’s Daughter was published, that revelation still lay in the future.  It may be that in 1935 Orwell still thought of the socialists as one big, happy, if occasionally quarrelsome, family.

Be that as it may, the real interest of the book, at least as far as I’m concerned, lies at the end.  There, more explicitly than in any other of his novels or essays, Orwell takes up the question of the Meaning of Life.  While down and out, Dorothy had lost her faith once and for all.  In spite of that, after Mrs. Creevy sacks her, she finds her way back to the family parsonage, and takes up again where she left off.  She suffers from no illusions.  As Orwell puts it,

It was not that she was in any doubt about the external facts of her future.  She could see it all quite clearly before her… Whatever happened, at the very best, she had got to face the destiny that is common to all lonely and penniless women.  “The Old Maids of Old England,” as somebody called them.  She was twenty-eight – just old enough to enter their ranks.

She was not the same woman as before.  She had lost her faith, and yet, she meditated,

Faith vanishes, but the need for faith remains the same as before.  And given only faith, how can anything else matter?  How can anything dismay you if only there is some purpose in the world which you can serve, and which, while serving it, you can understand?  Your whole life is illumined by that sense of purpose.

Life, if the grave really ends it, is monstrous and dreadful.  No use trying to argue it away.  Think of life as it really is, think of the details of life; and then think that there is no meaning in it, no purpose, no goal except the grave.  Surely only fools or self-deceivers, or those whose lives are exceptionally fortunate, can face that thought without flinching?

Her mind struggled with the problem, while perceiving that there was no solution.  There was, she saw clearly, no possible substitute for faith; no pagan acceptance of life as sufficient unto itself, no pantheistic cheer-up stuff, no pseudo-religion of “progress” with visions of glittering Utopias and ant-heaps of steel and concrete.  It is all or nothing.  Either life on earth is a preparation for something greater and more lasting, or it is meaningless, dark and dreadful.

Here we see that, even in 1935, Orwell wasn’t quite convinced that the Soviet version of a Brave New World really represented “progress.”  And while democratic socialism may have later given him something of a sense of purpose, it wasn’t yet filling the void.  Dorothy considers,

Where had she got to?  She had been saying that if death ends all, then there is no hope and no meaning in anything.  Well, what then?

At this point, the true believers chime in.  They know the answer.  Bring back faith, and, voila, the void is filled!  So many of them honestly seem to believe that, because they feel a need, the thing needed will automatically pop into existence.  They need absolute moral standards.  Therefore their faith must be true.  They need a purpose in life.  Therefore their faith must be true.  They need human existence to have meaning.  Therefore their faith must be true.  They must have unquestionable rights.  Therefore their faith must be true.  And so on, and so on.  Orwell is having none of it.  Dorothy muses on,

And how cowardly, after all, to regret a superstition that you had got rid of – to want to believe something that you knew in your bones to be untrue.

Orwell provides us with no magic solution to this thorny problem.  Indeed, in the end his answer is singularly unsatisfying.  He suggests that we just get on with it and leave it at that.  As Dorothy glues together strips of paper, forming the boots, armor, and other accoutrements required for the next church play, she has stumbled into the solution without realizing it:

The smell of glue was the answer to her prayer.  She did not know this.  She did not reflect, consciously, that the solution to her difficulty lay in accepting the fact that there was no solution; that if one gets on with the job that lies to hand, the ultimate purpose of the job fades into insignificance; that faith and no faith are very much the same provided that one is doing what is customary, useful and acceptable.  She could not formulate these thoughts as yet, she could only live them.  Much later, perhaps, she would formulate them and draw comfort from them.

and, finally,

Dorothy sliced two more sheets of brown paper into strips, and took up the breastplate to give it its final coating.  The problem of faith and no faith had vanished utterly from her mind.  It was beginning to get dark, but, too busy to stop and light the lamp, she worked on, pasting strip after strip of paper into place, with absorbed, with pious concentration, in the penetrating smell of the gluepot.

Orwell didn’t want A Clergyman’s Daughter to be republished, unless, perhaps, in a cheap version to scare up a few pounds for his heirs.  No doubt he considered it too immature.  We can be grateful that his literary executors thought otherwise, else we might never have known of his struggles with the Meaning of Life problem so early in his career.  He didn’t spill much ink over the problem later on, but we must assume that he had found some more inspiring purpose to strive for than just “getting on with it.”  Weak and in pain, he fought to complete 1984 on his death bed with incredible tenacity and dedication.  It was a gift to all of us that didn’t follow him to the grave, but lived long after he was gone as the single most effective literary weapon against a threat that had materialized as Communism in his own day, but will likely always lurk among us in one form or another.

And what of the Meaning of Life?  That’s a question we must all provide an answer for on our own.  None of the imaginary super-beings we have dreamed up over the years is likely to materialize to trivialize the search.  And just as Orwell wrote, whether we care to deal with the problem or not, there is no objective solution.  It must be subjective and individual.  It need not be any less compelling for all that.

The Regrettable Overreach of “Faith versus Fact”

The fact that the various gods that mankind has invented over the years, including the currently popular ones, don’t exist has been sufficiently obvious to any reasonably intelligent pre-adolescent who has taken the trouble to think about it since at least the days of Jean Meslier.  That unfortunate French priest left us with a Testament that exposed the folly of belief in imaginary super-beings long before the days of Darwin.  It included most of the “modern” arguments, including the dubious logic of inventing gods to explain everything we don’t understand, the many blatant contradictions in the holy scriptures, the absurdity of the notion that an infinitely wise and perfect being could be moved to fury or even offended by the pathetic sins of creatures as abject as ourselves, the lack of any need for a supernatural “grounding” for human morality, and many more.  Over the years these arguments have been elaborated and expanded by a host of thinkers, culminating in the work of today’s New Atheists.  These include Jerry Coyne, whose Faith versus Fact represents their latest effort to talk some sense into the true believers.

Coyne has the usual human tendency, shared by his religious opponents, of “othering” those who disagree with him.  However, besides sharing a “sin” that few if any of us are entirely free of, he has some admirable traits as well.  For example, he has rejected the Blank Slate ideology of his graduate school professor/advisor, Richard Lewontin, and even goes so far as to directly contradict him in FvF.  In spite of the fact that he is an old “New Leftist” himself, he has taken a principled stand against the recent attempts of the ideological Left to dismantle freedom of speech and otherwise sink back to its Stalinist ground state.  Perhaps best of all as far as a major theme of this blog is concerned, he rejects the notion of objective morality that has been so counter-intuitively embraced by Sam Harris, another prominent New Atheist.

For the most part, Faith versus Fact is a worthy addition to the New Atheist arsenal.  It effectively dismantles the “sophisticated Christian” gambit that has encouraged meek and humble Christians of all stripes to imagine themselves on an infinitely higher intellectual plane than such “undergraduate atheists” as Richard Dawkins and Christopher Hitchens.  It refutes the rapidly shrinking residue of “God of the gaps” arguments, and clearly illustrates the difference between scientific evidence and religious “evidence.”  It destroys the comfortable myth that religion is an “other way of knowing,” and exposes the folly of seeking to accommodate religion within a scientific worldview.  It was all the more disappointing, after nodding approvingly through most of the book, to suffer one of those “Oh, No!” moments in the final chapter.  Coyne ended by wandering off into an ideological swamp with a fumbling attempt to link obscurantist religion with “global warming denialism!”

As it happens, I am a scientist myself.  I am perfectly well aware that when external radiation such as that emanating from the sun passes through an ideal earthlike atmosphere mixed with a dose of greenhouse gases such as carbon dioxide, impinges on an ideal earthlike surface, and is re-radiated back into space, the resulting equilibrium temperature of the atmosphere will be higher than if no greenhouse gases were present.  I am also aware that we are rapidly adding such greenhouse gases to our atmosphere, and that it is therefore reasonable to be concerned about the potential effects of global warming.  In spite of that, it is not altogether irrational to take a close look at whether all the nostrums proposed as solutions to the problem will actually do any good.
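The equilibrium argument sketched above can be captured in a zero-dimensional energy-balance toy model.  This is a back-of-the-envelope illustration of my own, not anything drawn from the climate literature or from Coyne’s book; the effective longwave emissivity used below is simply chosen by hand so that the result lands near the observed mean surface temperature.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant at Earth's orbit, W m^-2
ALBEDO = 0.30      # planetary albedo (fraction of sunlight reflected)

def equilibrium_temp(epsilon=0.0):
    """Surface temperature of an idealized planet with a single absorbing
    atmospheric layer of longwave emissivity `epsilon`.  epsilon = 0
    recovers the bare, no-greenhouse radiative balance."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0      # mean absorbed shortwave flux
    t_bare = (absorbed / SIGMA) ** 0.25       # no-greenhouse equilibrium
    # A single absorbing layer re-radiates half of what it absorbs back
    # downward, warming the surface by the factor below.
    return t_bare * (2.0 / (2.0 - epsilon)) ** 0.25

print(equilibrium_temp(0.0))   # ~255 K without greenhouse gases
print(equilibrium_temp(0.78))  # ~288 K, near the observed mean
```

The point of the toy is exactly the one conceded in the text: adding greenhouse absorption raises the equilibrium temperature.  What it cannot do, of course, is say anything about the magnitude or timing of changes in a real, non-ideal atmosphere.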

In fact, the earth does not have an ideal static atmosphere over an ideal static and uniform surface.  Our planet’s climate is affected by a great number of complex, interacting phenomena.  A deterministic computer model capable of reliably predicting climate change decades into the future is far beyond the current state of the art.  It would need to deal with literally millions of degrees of freedom in three dimensions, in many cases using potentially unreliable or missing data.  The codes currently used to address the problem are probabilistic, reduced-basis models that can give significantly different answers depending on the choice of initial conditions.
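The sensitivity to initial conditions mentioned above is easy to demonstrate with a toy example.  The Lorenz system, originally derived as a drastically simplified model of atmospheric convection, is not one of the climate codes in question, but it shows why single deterministic long-range forecasts fail and why such codes are instead run as ensembles: two integrations that begin one part in a billion apart soon bear no resemblance to one another.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (a crude but
    adequate integrator for this demonstration)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # identical except for one part in a billion
max_sep = 0.0
for _ in range(10000):       # integrate out to t = 50
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

# The two trajectories end up macroscopically far apart: the tiny
# initial perturbation is amplified by many orders of magnitude.
print(max_sep)
```

Running many such integrations from slightly perturbed starting points, and treating the spread of outcomes as the answer, is the basic idea behind ensemble forecasting.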

In a recently concluded physics campaign at Lawrence Livermore National Laboratory, scientists attempted to achieve thermonuclear fusion ignition by hitting tiny targets containing heavy isotopes of hydrogen with the most powerful laser system ever built.  The codes they used to model the process should have been far more accurate than any current model of the earth’s climate.  These computer models included all the known relevant physical phenomena, and had been carefully benchmarked against similar experiments carried out on less powerful laser systems.  In spite of that, the best experimental results didn’t come close to the computer predictions.  The actual number of fusion reactions barely came within two orders of magnitude of the predicted values.  The number of physical approximations that must be used in climate models is far greater than were necessary in the Livermore fusion codes, and their value as predictive tools must be judged accordingly.

In a word, we have no way of accurately predicting the magnitude of the climate change we will experience in coming decades.  If we had unlimited resources, the best policy would obviously be to avoid rocking the only boat we have at the moment.  However, this is not an ideal world, and we must wisely allocate what resources we do have among competing priorities.  Resources devoted to fighting climate change will not be available for medical research and health care, education, building the infrastructure we need to maintain a healthy economy, and many other worthy purposes that could potentially not only improve human well-being but save many lives.  Before we succumb to frantic appeals to “do something,” and spend a huge amount of money to stop global warming, we should at least be reasonably confident that our actions will measurably reduce the danger.  To what degree can we expect “science” to inform our decisions, whatever they may be?

For starters, we might look at the track record of the environmental scientists who are now sounding the alarm.  The Danish scientist Bjorn Lomborg examined that record in his book, The Skeptical Environmentalist, in areas as diverse as soil erosion, storm frequency, deforestation, and declining energy resources.  Time after time he discovered that they had been crying “wolf,” distorting and cherry-picking the data to support dire predictions that never materialized.  Lomborg’s book did not start a serious discussion of potential shortcomings of the scientific method as applied in these areas.  Instead he was bullied and vilified.  A kangaroo court was organized in Denmark made up of some of the more abject examples of so-called “scientists” in that country, and quickly found Lomborg guilty of “scientific dishonesty,” a verdict which the Danish science ministry later had the decency to overturn.  In short, the same methods were used against Lomborg as were used decades earlier to silence critics of the Blank Slate orthodoxy in the behavioral sciences, resulting in what was possibly the greatest scientific debacle of all time.  At the very least we can conclude that all the scientific checks and balances that Coyne refers to in such glowing terms in Faith versus Fact have not always functioned with ideal efficiency in promoting the cause of truth.  There is reason to believe that the environmental sciences are one area in which this has been particularly true.

Under the circumstances it is regrettable that Coyne chose to equate “global warming denialism,” a pejorative term used in ideological squabbles that is by its very nature unscientific, with some of the worst forms of religious obscurantism.  Instead of sticking to the message, in the end he let his political prejudices obscure it.  Objections to the prevailing climate change orthodoxy are hardly coming exclusively from the religious fanatics who sought to enlighten us with “creation science” and “intelligent design.”  I invite anyone suffering from that delusion to have a look at some of the articles the physicist and mathematician Lubos Motl has written about the subject on his blog, The Reference Frame.  Examples may be found here, here and, for an example with a “religious” twist, here.  There he will find documented more instances of the type of “scientific” behavior Lomborg cited in The Skeptical Environmentalist.  No doubt many readers will find Motl irritating and tendentious, but he knows his stuff.  Anyone who thinks he can refute Motl’s take on the “science” had better be equipped with more knowledge of the subject than is typically included in the bromides that appear in the New York Times.

Alas, I fear that I am once again crying over spilt milk.  I can only hope that Coyne has an arrow or two left in his New Atheist quiver, and that next time he chooses a publisher who will insist on ruthlessly chopping out all the political Nebensächlichkeiten (irrelevancies).  Meanwhile, have a look at his Why Evolution is True website.  In addition to presenting a convincing case for evolution by natural selection and a universe free of wrathful super beings, Professor Ceiling Cat, as he is known to regular visitors for reasons that will soon become apparent to newbies, also posts some fantastic wildlife pictures.  And if it’s any consolation, I see his book has been panned by John Horgan.  Anyone with enemies like that can’t be all bad.  Apparently Horgan’s review was actually solicited by the editors of the Wall Street Journal.  Go figure!  One wonders what rock they’ve been sleeping under lately.

Was There a Time Before the Blank Slate?

Yes, dear reader, there was.  It’s quite true that, for half a century and more, the “Men of Science” imposed on the credulity of mankind by insisting that something perfectly obvious and long familiar to the rest of us didn’t exist.  I refer, of course, to human nature.  It was a herculean effort in self-deception that confirmed yet again George Orwell’s observation that, “There are some ideas so absurd that only an intellectual could believe them.”  In the heyday of the Blank Slate orthodoxy, such “Men of Science” as Ashley Montagu could say things like,

…man is man because he has no instincts, because everything he is and has become he has learned, acquired from his culture, from the man-made part of the environment, from other human beings.

and

The fact is, that with the exception of the instinctoid reactions in infants to sudden withdrawals of support and to sudden loud noises, the human being is entirely instinctless.

and do it with a perfectly straight face.  It was an episode in our history that must never be forgotten, and one that should be recalled whenever we hear someone claim that “science says” this or that, or that “the science is settled.”  The scientific method is the best butterfly net our species has come up with so far to occasionally capture a fluttering bit of truth.  However, it can never be separated from the ideological context in which it functions.  As the Blank Slate episode demonstrated, that context is quite capable of subverting and adulterating the truth when the truth stands in the way of ideological imperatives.

In the case of the Blank Slate, as it happens, those imperatives did not derail our search for truth for some time after Darwin first grasped the behavioral implications of his revolutionary theory.  And just as those implications were obvious to Darwin, they were obvious to many others.  The existence and selective significance of human nature were immediately apparent to anyone with an open mind and rudimentary powers of self-observation.  Indeed, they were treated almost as commonplaces in the behavioral sciences for decades after Darwin until they finally succumbed to the ideological fog.

For example, at about the same time that J. B. Watson and Franz Boas began fabricating the first serious “scientific” rationalizations of the Blank Slate, there was no evidence in the popular media of the rigid ideological orthodoxy that became such a remarkable feature of their coverage of anything dealing with human behavior in the 60’s and 70’s.  The later vilification of heretics as “racists” and “fascists” was nowhere to be seen.  Indeed, one Dr. Grace Adams, who held a Ph.D. in psychology from Cornell, was actually guileless enough to contribute an article entitled Human Instincts to H. L. Mencken’s The American Mercury as late as 1928!  Apparently without the faintest inkling of the hijacking of the behavioral sciences that was then already in the works, she wrote,

The recognition of the full scope and function of the human instincts will appear to those who come after us as the most important advance made by psychology in our time. (!)

How ironic those words seem now!  The very term “instinct” became toxic during the ascendancy of the Blank Slate, when the high priests of the prevailing orthodoxy insisted on their own rigid definition of the term, and then proceeded to exploit it as a handy tool for “smarter than thou” posturing and scientific one-upmanship.  Adams’ article includes some interesting remarks on the origin of the word “instinct” in the biological sciences and the later, gradual redefinitions that occurred when it was taken up by the psychologists.  In particular, she notes that, while the biologists of the time still used the term to describe behaviors that were unaffected by either “experience or volition,” and were “purely mechanical processes lying completely outside the province of consciousness,” psychologists preferred a much more flexible definition.  Referring to the great American ur-psychologist William James, Adams wrote,

So it was obvious, to him at least, “that every instinctive act in an animal with memory must cease to be ‘blind’ after being once repeated.”  In this way, according to James, an instinct could become not only conscious but capable of modification and conscious direction and change.

Or, as we would say today, the expression of “instincts” could be modified by “culture.”  Adams notes that, as early as 1890,

James was able to state complacently that there was agreement among his contemporaries that the human instincts were: sucking, biting, chewing, grinding the teeth, licking, making grimaces, spitting, clasping, grasping, pointing, making sounds of expressive desire, carrying to the mouth, the function of alimentation, crying, smiling, protrusion of the lips, turning the head aside, holding the head erect, sitting up, standing, locomotion, vocalization, imitation, emulation or rivalry, pugnacity, anger, resentment, sympathy, the hunting instinct, fear, appropriation or acquisitiveness, constructiveness, play, curiosity, sociability and shyness, secretiveness, cleanliness, modesty and shame, love, the anti-sexual instincts, jealousy, and parental love.  (Italics are mine)

Turn the page to the 20th century, and we already find two of the prominent psychologists of the day, James Angell and Edward Thorndike, squabbling over the definition of “instinct.”  According to Adams,

Angell, accepting James’ argument that instincts once yielded to are thereafter felt in connection with the foresight of their ends, expands this idea into the statement that “instincts, in the higher animals, at all events, appear always to involve consciousness.” And he makes consciousness the essential element of instincts. Thorndike, on the other hand, remembers James’ admission that instincts are originally blind and maintains that “all original tendencies are aimless in the sense that foresight of the consequences does not affect the response.”  For him the only necessary components of an instinct are “the ability to be sensitive to a certain situation, the ability to make a certain response, and the existence of a bond or connection whereby that response is made to that situation.” While the ideas of neither Angell nor Thorndike are actually inconsistent with James’ two-fold definition of an instinct, they lead to very different lists of instincts.

To cut to the chase, here are the lists of Angell,

Angell, by making consciousness the mark that distinguishes an instinct from a reflex, has to narrow the number of instincts to fear, anger, shyness, curiosity, sympathy, modesty (?), affection, sexual love, jealousy and envy, rivalry, sociability, play, imitation, constructiveness, secretiveness and acquisitiveness.

…and Thorndike,

But Thorndike admits no gap between reflexes and instincts, so he must both expand and subdivide James’ list. He does this in a two hundred page inventory (!) which he regrets is incomplete. He adds such activities as teasing, tormenting, bullying, sulkiness, grieving, the horse-play of youths, the cooing and gurgling of infants and their satisfaction at being held, cuddled and carried, attention-getting, responses to approving behavior, responses to scornful behavior, responses by approving behavior, responses by scornful behavior, the instinct of multiform physical activity, and the instinct of multiform mental activity. The “so-called instinct of fear” he analyzes into the instinct of escape from restraint, the instinct of overcoming a moving obstacle, the instinct of counterattack, the instinct of irrational response to pain, the instinct to combat in rivalry, and the threatening or attacking movements with which the human male tends to react to the mere presence of a male of the same species during acts of courtship.

In a word, the psychologists of the 20’s were still quite uninhibited when it came to compiling lists of instincts.  It is noteworthy that Thorndike’s The Elements of Psychology, which originally included extensive discussions of human “instincts” in Chapters 12 and 13, continued in use as a textbook for many years.  Indeed, Thorndike was one of the many psychologists of his day who seem surprisingly “modern” in the context of the early 21st century.  For example, again quoting Adams,

And Thorndike points out that a complete inventory of man’s original nature is needed not only as a basis of education but for economic, political, ethical and religious theories.

And, in a passage that, in light of recent developments in the field of evolutionary psychology, can only be described as stunning, Adams continues,

For Colvin and Bagly the chief essential of instincts is that “they are directed toward some end that is useful.” But they do not mean useful in a selfish or materialistic sense, for they are able to describe an altruistic instinct which is as real to them as the predatory instinct. And Kirkpatrick conceives of man being by native endowment even more noble. Indeed he credits to the human being a regulative instinct “which exists in the moral tendency to conform to law and to act for the good of others as well as self, and in the religious tendency to regard a Higher Power.”

Writing in the June and August, 1928 editions of the Mercury, H. M. Parshley elaborates on the connection, noticed decades earlier by Darwin himself, between “instincts” and morality:

Ethics certainly involves the consideration of motives, values, and ideals; and a scientific ethics requires genuine knowledge about these elusive matters.

As if anticipating Stephen Jay Gould’s delusional theory of “non-overlapping magisteria,” he continues,

…in my opinion, the chief support of obscurantism at this moment is the notion that motives, values, and ideals, unlike material things, are beyond the range of scientific study, and thus afford a free and exclusive field in which religion and philosophy may disport themselves authoritatively without challenge.

Parshley continues with a comment that we now recognize was sadly mistaken:

The biological needs are clear enough to see and we know a great deal about them – quite sufficient to establish the futility of asceticism and give rise to a complete distrust of any ethics that involves us in serious conflict with them.  Science has done this, and, I think, it will never be undone.

Parshley’s naïve faith in the integrity and disinterestedness of science was to be shattered all too soon.  Indeed, without recognizing the danger, Adams was already quite familiar with its source:

For many years the iconoclastic Watson strove to explain instincts in suitably behavioristic terms. But neither his definition nor his classification need concern us now, for in 1924 Watson repudiated everything he had previously said about them by declaring that “there are no instincts,” and furthermore, that “there is no such thing as an inheritance of capacity, talent, temperament, mental constitution and characteristics.” With these two statements Watson cast aside the biological as well as the psychological notion of mental inheritance.

For Adams, the behaviorist creed of Watson and Boas was just a curiosity.  She didn’t realize they were already riding on the crest of an ideological wave that would submerge the behavioral sciences in a sea of obscurantism for decades to come.  Marxism was hardly the only dogma that required their theories to be “true.”  The same could be said of many other pet utopias that could generally be included in the scope of E. O. Wilson’s epigram, “Great theory, wrong species.”  The ideological imperative was described in a nutshell by psychologist Geoffrey Gorer in an essay entitled The Remaking of Man, published in 1956:

One of the most urgent problems – perhaps the most urgent problem – facing the world today is how to change the character and behavior of adult human beings within a single generation.  This problem of rapid transformation has underlaid every revolution (as opposed to coups d’etat) at least from the time of the English Revolution in the seventeenth century, which sought to establish the Rule of the Saints by some modifications in the governing institutions and the laws they promulgated; and from this point of view every revolution has failed… the character of the mass of the population, their attitudes and expectations, change apparently very little.

Up till the present century revolutions were typically concerned with the internal arrangements of one political unit, one country; but the nearly simultaneous development of world-wide communications and world-wide ideologies – democracy, socialism, communism – has posed the problem not merely of how to transform ourselves – whoever ‘ourselves’ may be – but how to transform others.

This imperative shattered the naïve faith of Adams and Parshley in the inevitability of scientific progress with astonishing rapidity.  Later, during the heyday of the Blank Slate, Margaret Mead described the triumph of the “new ideas,” just a few short years after their articles appeared in the Mercury:

In the central concept of culture as it was developed by Boas and his students, human beings were viewed as dependent neither on instinct nor on genetically transmitted specific capabilities but on learned ways of life that accumulated slowly through endless borrowing, readaptation, and innovation… The vast panorama which Boas sketched out in 1932 in his discussion of the aims of anthropological research is still the heritage of American anthropology.

And so the darkness fell, and remained for more than half a century.  The victory of the Blank Slate was, perhaps, the greatest debacle in the history of scientific thought.  Even today the “men of science” are incapable of discussing that history without abundant obfuscation and revision.  Still, the salient facts aren’t that hard to ferret out for anyone curious enough to dig for them a little.  It would behoove anyone with an exaggerated tendency to believe in the “integrity of science” to grab a shovel.


Interstellar Travel: Which Species Gets to Go?

Popular Mechanics just published an article entitled How Many People Does It Take to Colonize Another Star System?  Apparently the number needed to maintain sufficient genetic diversity is very large indeed – 40,000 would be ideal!  Unfortunately, if you do the math, the amount of energy it would take to transport that many people to another star system, even allowing a couple of thousand years for the voyage, is enormous.  As several commenters pointed out, by the time our technology advances to the point that such missions are feasible, it will also be feasible to send the necessary “genetic diversity” along in the form of frozen eggs and sperm with carefully chosen DNA sequences, complete libraries of human alleles that can be fabricated and inserted into DNA sequences as needed, etc.  It might not even be necessary to send anything as bulky as fully formed humans on the voyage.  Self-replicating robots could be sent in advance to create housing, farms, and birthing facilities prepared to receive fertilized eggs.  The first humans born would have robotic “parents.”
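That energy arithmetic can be sketched in a few lines.  The figures below are purely illustrative assumptions (a 2,000-year voyage to the Alpha Centauri system and roughly 100 tonnes of ship mass per colonist), not numbers from the article, but they convey the scale of the problem:

```python
# Back-of-the-envelope kinetic energy for a 40,000-person colony ship.
# All specific figures below are illustrative assumptions.

LIGHT_YEAR_M = 9.461e15                 # meters in one light year
distance_m = 4.25 * LIGHT_YEAR_M        # Alpha Centauri, ~4.25 ly
voyage_s = 2000 * 365.25 * 24 * 3600    # a 2,000-year voyage, in seconds

v = distance_m / voyage_s               # required average speed, m/s

# Assume ~100 tonnes of ship mass (habitat, shielding, supplies) per colonist
mass_kg = 40_000 * 100_000

kinetic_energy_j = 0.5 * mass_kg * v**2
print(f"average speed: {v:.2e} m/s ({v / 3.0e8:.4f} c)")
print(f"kinetic energy: {kinetic_energy_j:.2e} J")
```

The kinetic energy alone comes out on the order of 10^20 joules, comparable to a year or more of the entire world’s current energy production, and that is before any allowance for slowing down at the destination.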

It’s always fun to speculate on what we might be able to do assuming our technology becomes sufficiently advanced.  The question is, what can we do now, or at least in the foreseeable future with existing technologies, or ones that seem accessible in the near future?  “Existing technologies” means travel times of 25,000 years, give or take.  In other words, we must rule out our own species, at least for the time being.  It will be necessary for us to send some of our relatives.  For some of them – other species – such lengthy interstellar voyages are feasible now.  As I wrote in an earlier post,

The 32,000 year old seed of a complex, flowering plant recovered from the ice was recently germinated by a team of scientists in Siberia. Ancient bacteria, as much as 250 million years old have been recovered from sea salt in New Mexico, and also brought back to life. Tiny animals known as tardigrades have survived when exposed to the harsh environment of outer space. We might choose the species from among such candidates most likely to survive the 50,000 to 100,000 years required to journey to nearby stars with conventional rocket propulsion, and most likely to evolve into complex, land-dwelling life forms in the shortest time, and send them now, instead of waiting 100’s or 1000’s of years for the emergence of the advanced technologies necessary to send humans. Slowing down at the destination star would not pose nearly the problem that it does for objects traveling at significant fractions of the speed of light. The necessary maneuvers to enter orbit around and seed promising planets could be performed by on-board computers with plenty of time to spare. Oceans might be seeded with algae in advance of the arrival of organisms that feed on it (and breathe the oxygen it would release).

Why would we want to do such a thing?  Survival!  Morality exists only because animals equipped with it were more likely to survive.  We are one such animal.  There is no such thing as an objective “ought.”  However, given the reason that morality exists to begin with, the conclusion that nothing can be more immoral than failing to survive does not seem unreasonable.  If one accepts that logic, it follows that our first priority “should” be the survival of our own species, and our second should be the preservation of biological life.  It’s really just a whim, but I hope that many others will share it.  The alternative is to accept the fact that one is a defective biological unit, resigned to extinction, which I personally don’t find an entirely pleasant thought.

Let’s assume that a canonical voyage will last 25,000 years.  Conventional rockets are capable of reaching the nearest star systems in that time.  By using nuclear propulsion of the type that was successfully tested 50 years ago, we should be able to reach stars within a distance of a dozen light years or so within the same period.  As noted above, there are life forms that could survive the voyage.  The particular ones chosen would be those most compatible with the conditions existing on candidate planets.  Needless to say, the conditions of our own atmosphere, oceans, etc., have been drastically altered by the long existence of life on our planet.  Finding such conditions on reachable planets is most unlikely, and our biological voyagers must be chosen accordingly.
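The speeds implied by the 25,000-year figure are easy to check.  Taking round numbers for illustration (about 4.25 light years to the nearest star system, and a dozen for the nuclear option):

```python
LY_M = 9.461e15              # meters per light year
YEAR_S = 365.25 * 24 * 3600  # seconds per year
voyage_s = 25_000 * YEAR_S   # canonical mission duration

speeds = {}
for name, ly in [("nearest stars (~4.25 ly)", 4.25), ("a dozen light years", 12.0)]:
    speeds[name] = ly * LY_M / voyage_s / 1000   # average speed in km/s
    print(f"{name}: about {speeds[name]:.0f} km/s average")
```

That works out to roughly 51 km/s for the nearest stars and about 144 km/s for the dozen-light-year case.  For comparison, Voyager 1 is leaving the solar system at roughly 17 km/s, which gives a feel for how much improvement even the “conventional” case demands.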

It will be necessary to develop certain technologies that we do not as yet possess.  Fortunately, they are all within reach, and nowhere near as demanding as, say, fusion or anti-matter propulsion systems.  For example, we will need a timing device that can keep “ticking” for 25,000 years, and, when necessary, signal the rest of the interstellar package to “wake up.”  The Long Now Foundation has made some interesting starts in this direction, in the form of giant mechanical clocks that are designed to run for 10,000 years.  Of course, those designs aren’t exactly what we’re looking for, but if one can conceive of a 10,000 year mechanical clock, then a 25,000 year digital clock must be feasible as well.  A similar problem was solved by John Harrison more than two centuries ago, in the form of a clock that kept time accurately enough to keep track of a ship’s longitude.  If he succeeded in solving the British Navy’s problem with the technology that existed then, we should be able to succeed in solving our own clock problem with a technology that is now far more advanced.
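The digital bookkeeping, at least, is trivial; the hard engineering is an oscillator and power source that survive the trip.  Counting off every second of the whole mission needs only a 40-bit register:

```python
import math

# How wide a counter is needed to tick off 25,000 years at one tick per second?
MISSION_YEARS = 25_000
ticks = MISSION_YEARS * 365.25 * 24 * 3600   # total seconds in the mission
bits = math.ceil(math.log2(ticks))
print(f"{ticks:.3e} ticks -> a {bits}-bit counter suffices")   # -> 40-bit
```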

It will be necessary to develop systems that will perform reliably over extremely long times.  As it happens, that, too, is a problem that has already been taken in hand by earth-bound scientists.  The relevant acronym is ULLS (Ultra Long Life Systems), and some of the required technologies are discussed in a NASA presentation entitled, Technology Needs for the Development of the Ultra Long Life Missions.  Some of the ideas being considered include,

Generic Redundant Blocks – redundant components that are generic and can be programmed to replace any type of failed component.  An example might be field-programmable gate arrays (FPGA’s).

Adaptive Fault Tolerance – Working around failures instead of replacing failed components with spares.

Self-repair components – Including self-repair with nano-technologies and self-healing with biologically inspired technologies.

Regenerative systems – Modular regrowth with biologically inspired technologies.
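The flavor of these ideas can be conveyed with a toy example.  Triple modular redundancy, a classic fault-tolerance scheme in the same family as the “redundant blocks” above, runs three copies of a module and takes a majority vote over their outputs, so a single failed copy is masked.  This is a hypothetical sketch, not something taken from the NASA presentation:

```python
from collections import Counter

def tmr_vote(results):
    """Majority vote over redundant module outputs (triple modular redundancy).

    As long as at least two of the three modules agree, the voter returns
    their common value, masking one faulty module.
    """
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one module has failed")
    return value

# One module returns a corrupted result; the voter masks the fault.
print(tmr_vote([42, 42, 41]))   # -> 42
```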

An interesting presentation on the subject by NASA scientist Henry Garrett, who happens to prefer Project Orion-type interstellar missions with propulsion by few-kiloton nuclear devices, may be found here (sorry about the long-winded introduction).  Dave Reneke recently posted an interesting if somewhat speculative article on various types of self-replicating interstellar probes entitled How Self-Replicating Spacecraft Could Take Over the Galaxy.

Of course, none of this fine technology will work without a reliable power supply that needs to last, potentially, for upwards of 25,000 years.  It so happens that we have just the isotope – plutonium 239.  You might call it the ultimate dual use material – life or death.  It is ideal for making nuclear bombs or carrying life across interstellar distances.  Of course, another isotope of plutonium, plutonium 238, has already been used to power many spacecraft, including the Voyagers and New Horizons.  Unfortunately, with a radioactive half-life of only 87.7 years, virtually none of it would be left after 25,000 years.  Pu-239, on the other hand, has a half-life of 24,100 years – just about what’s needed.  Of course, it could only provide a tiny fraction of the power of Pu-238 via radioactive decay.  Not much is required, though – only enough to keep the clock going.  At key points in the mission, of course, a great deal more power will be necessary.  And that’s what brings us to the reason that Pu-239 is ideal – it’s fissile.  In other words, it’s an ideal fuel for a nuclear reactor.  When high power is needed, the plutonium can be assembled into a critical mass, serving as either a conventional reactor or a space propulsion system.
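The half-life arithmetic behind that comparison is simple: the surviving fraction of an isotope after time t is 0.5 raised to the power t divided by the half-life.  Over 25,000 years, Pu-238 goes through some 285 half-lives and is effectively gone, while roughly half of the Pu-239 remains:

```python
def surviving_fraction(t_years, half_life_years):
    """Fraction of an isotope remaining after t years: 0.5 ** (t / half-life)."""
    return 0.5 ** (t_years / half_life_years)

MISSION_YEARS = 25_000
for name, half_life in [("Pu-238", 87.7), ("Pu-239", 24_100)]:
    frac = surviving_fraction(MISSION_YEARS, half_life)
    print(f"{name}: fraction remaining after {MISSION_YEARS:,} years = {frac:.3g}")
```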

I am convinced that all of the above can be accomplished in a matter of a decade or two instead of centuries if we can somehow again achieve the level of collective willpower we reached during the Apollo Program.  Of course, this old planet of ours could easily go on supporting high tech human civilizations until we master the art of interstellar travel on our own.  It might – but why take chances?


What is the “Atheist Agenda?”

There is none.  An atheist is someone who doesn’t believe in a God or gods, period!  Somehow, that simple definition just never seems to register in the minds of large cohorts of atheists and believers alike.  Take, for example, Theo Hobson, who supplies us with his own, idiosyncratic definition in a piece entitled “Atheism is an Offshoot of Deism” that recently turned up in the Guardian:

Atheism is less distinct from deism than it thinks. It inherits the semi-Christian assumptions of this creed.

Atheism derives from religion? Surely it just says that no gods exist, that rationalism, or ‘scientific naturalism’, is to be preferred to any form of supernaturalism. Actually, no: in reality what we call atheism is a form of secular humanism; it presupposes a moral vision, of progressive humanitarianism, of trust that universal moral values will triumph. (Of course there is also the atheism of Nietzsche, which rejects humanism, but this is not what is normally meant by ‘atheism’).

So what we know as atheism should really be understood as an offshoot of deism. For it sees rationalism as a benign force that can liberate our natural goodness. It has a vision of rationalism saving us, uniting us.

Sorry, but I beg to differ with you, Theo.  There certainly are many delusional atheists who embrace such a “moral vision,” but the notion that all of them do is nonsense.  For example, I reject any such “moral vision,” which Michael Rosen accurately described as “Religion Lite.”  If you’ll trouble yourself to read the comments after your article, you’ll see I’m not alone.  For example, from commenter Whiterthanwhite,

So is my Afairyism an offshoot of my five-year-old’s belief in fairies?  Is my Afatherchristmasism an offshoot of her belief in Father Christmas?

Topher chimes in,

Indeed.  Certain (rather more arrogant) religious people insist on seeing atheism as a reflection of theism rather than a rejection of it. It makes them feel better I guess, but of course is absolutely misguided.

Dogfondler adds,

Yes what a bag of bollocks this is. Atheism is an ‘offshoot’ of deism the way that absence is an offshoot of presence.  It seems that what theists can’t stand about atheism is the sheer absence of belief. Get over it.

Ituae concurs,

Can you, and others like you, please stop talking about atheism as if it were a belief system? I don’t believe in God. Doesn’t mean a subscribe to whatever incoherent, ill-thought-out Humanism you’re passing off as philosophy.

There are many similar comments, but as noted above, it never seems to register, even among some atheists.  Follow an atheist website long enough, and you’re sure to run across commenters who insist on associating atheism with veganism, progressivism, schemes for gladdening us with assorted visions of “human flourishing,” and miscellaneous secular Puritans of all stripes.  No.  I don’t think so!  Atheism doesn’t even come pre-packaged with “scientific rationalism.”  It is merely the absence of a belief in a God or gods – Period! Aus!  Schluss!  Basta!

If any word is long overdue for a re-definition, it’s “religion,” not “atheism.”  Instead of being rigidly associated with theism, it should embrace all forms of belief in imaginary, supernatural entities, or at least those with normative powers.  In particular, in addition to a God or gods, it should include belief in such things as Rights, Good, and Evil as things-in-themselves, independent of the subjective impressions of them that exist in the minds of individuals.

Among other things, such a re-definition would add a certain coherence to theories according to which the predisposition to embrace “religion” is an evolved behavior.  I rather doubt that we’ll eventually find something quite so specific as “You shall believe in supernatural beings!” hard-wired in our brains.  On the other hand, there may be predispositions that make it substantially more likely that belief in such beings will follow once a certain level of intelligence is reached.  I suspect that the origins of secular religions such as Communism will eventually be found by rummaging about in the very same behavioral baggage.  I’m not the only one who’s seen the affinity.  Many others have spoken of the “popes,” “bishops,” and “priesthood” of Communism and its antecedents, for almost as long as they’ve been around.

In any case, not all atheists are secular Puritans who embrace these various versions of “religion lite.”  I personally hope our species will eventually grow up enough to jettison them along with the older editions.  Darwin immediately grasped the truth, as did many others since him.  It follows directly from his theory.  Evolved behavioral traits are the ultimate cause for the existence of morality and the perception of such subjective entities as Good and Evil that go with it.  That is the simple truth, and it follows that belief in the existence of Good and Evil as objective things with some kind of a legitimate, independent normative power, whether one’s tastes run to the versions preferred by the “heavy” or “lite” versions of religion, is a chimera.

Does that mean it’s time to jettison morality?  No, sorry, our species doesn’t have that option.  We will continue to act morally in spite of the vociferous objections of legions of philosophers, because it is our nature to act morally.  It’s a “good” thing, too, because even if morality isn’t “real,” we would have a very hard time getting along without it.  On the other hand, we do have the option of recognizing the pathologically self-righteous among us for the charlatans they are.


China Bets on Thorium Reactors

According to the South China Morning Post (hattip Next Big Future),

The deadline to develop a new design of nuclear power plant has been brought forward by 15 years as the central government tries to reduce the nation’s reliance on smog-producing coal-fired power stations.  A team of scientists in Shanghai had originally been given 25 years to try to develop the world’s first nuclear plant using the radioactive element thorium as fuel rather than uranium, but they have now been told they have 10, the researchers said.

I have to admit, I feel a little envious when I read things like that.  The Chinese government is showing exactly the kind of leadership that’s necessary to guide the development of nuclear power along rational channels, and it’s a style of leadership of which our own government no longer seems capable.

What do I mean by “rational channels?”  Among other things, I mean acting as a responsible steward of our nuclear resources, instead of blindly wasting them, as we are doing now.  How are we wasting them?  By simply throwing away the lion’s share of the energy content of every pound of uranium we mine.

Contrary to the Morning Post article, thorium is not a nuclear fuel.  The only naturally occurring nuclear fuel is uranium 235 (U235).  It is the only naturally occurring isotope that can be used directly to fuel a nuclear reactor.  It makes up only a tiny share – about 0.7% – of mined uranium.  The other 99.3% is mostly uranium 238 (U238).  What’s the difference?  When a neutron happens along and hits the nucleus of an atom of U235, it usually fissions.  When a neutron happens along and hits the nucleus of an atom of U238, unless it’s going very fast, it commonly just gets absorbed.  There’s more to the story than that, though.  When it gets absorbed, the result is an atom of U239, which eventually decays to an isotope of plutonium – plutonium 239 (Pu239).  Like U235, Pu239 actually is a nuclear fuel.  When a neutron hits its nucleus, it too will usually fission.  The term “fissile” is used to describe such isotopes.

In other words, while only 0.7% of naturally occurring uranium can be used directly to produce energy, the rest could potentially be transmuted into Pu239 and burned as well.  All that’s necessary for this to happen is to supply enough extra neutrons to convert the U238.  As it happens, that’s quite possible, using so-called breeder reactors.  And that’s where thorium comes in.  Like U238, the naturally occurring isotope thorium 232 (Th232) absorbs neutrons, yielding the isotope Th233, which eventually decays to U233, which is also fissile.  In other words, useful fuel can be “bred” from Th232 just as it can from U238.  Thorium is about three times as abundant as uranium, and China happens to have large reserves of the element.  According to current estimates, reserves in the U.S. are much larger, and India’s are the biggest on earth.
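The arithmetic behind that claim is easy to check.  Here’s a rough back-of-the-envelope sketch (my own illustration, not from the Morning Post article), assuming fission of U235, Pu239, and U233 each release roughly the same energy per atom:

```python
# Back-of-the-envelope sketch (an illustration, not a reactor model):
# how much more energy breeding could in principle extract from mined
# uranium, assuming each fission releases roughly the same energy
# (~200 MeV) whether the atom is U235, Pu239, or U233.

U235_FRACTION = 0.0072             # fissile share of natural uranium (~0.7%)
U238_FRACTION = 1 - U235_FRACTION  # fertile share, breedable to Pu239

# Once-through reactors burn roughly only the U235 share; a breeder
# could in principle convert and burn the U238 as well.
multiplier = 1 / U235_FRACTION
print(f"Potential energy multiplier from breeding: ~{multiplier:.0f}x")

# The neutron-capture chains described above, with the short-lived
# intermediate beta decays included for completeness:
chains = {
    "U238": ["U238 + n", "U239", "Np239", "Pu239 (fissile)"],
    "Th232": ["Th232 + n", "Th233", "Pa233", "U233 (fissile)"],
}
for fertile, chain in chains.items():
    print(fertile + ": " + " -> ".join(chain))
```

The real gain depends on reactor design, losses, and reprocessing efficiency, but the two orders of magnitude are the point: once-through operation leaves over 99% of the potential energy in the ground or in the waste drum.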

What actually happens in almost all of our currently operational nuclear reactors is a bit different.  They just burn up that 0.7% of U235 in naturally occurring uranium, and a fraction of the Pu239 that gets bred in the process, and then throw what’s left away.  “What’s left” includes large amounts of U238 and various isotopes of plutonium as well as a brew of highly radioactive reaction products left over from the split atoms of uranium and plutonium.  Perhaps worst of all, “what’s left” also includes transuranic actinides such as americium and curium as well as plutonium.  These can remain highly radioactive and dangerous for thousands of years, and account for much of the long-term radioactive hazard of spent nuclear fuel.  As it happens, these actinides, as well as some of the more dangerous and long lived fission products, could potentially be destroyed during the normal operation of just the sort of molten salt reactors the Chinese crash program seeks to develop.  As a result, the residual radioactivity from operating such a plant for, say, 40 years, could potentially be less than that of the original uranium ore after a few hundred years instead of many thousands.  The radioactive hazard of such plants would actually be much less than that of burning coal, because coal contains small amounts of both uranium and thorium.  Coal plants spew tons of these radioactive elements, potentially deadly if inhaled, into the atmosphere every year.

Why on earth are we blindly wasting our potential nuclear energy resources in such a dangerous fashion?  Because it’s profitable.  For the time being, at least, uranium is still cheap.  Breeder reactors would be more expensive to build than current generation light water reactors (LWRs).  To even start one, you’d have to spend about a decade, give or take, negotiating the highly costly and byzantine Nuclear Regulatory Commission licensing process.  You could count on years of even more costly litigation after that.  No reprocessing is necessary in LWRs.  Just quick and dirty storage of the highly radioactive leftovers, leaving them to future generations to deal with.  You can’t blame the power companies.  They’re in the business to make a profit, and can’t continue to operate otherwise.  In other words, to develop nuclear power rationally, you need something else in the mix – government leadership.

We lack that leadership.  Apparently the Chinese don’t.

 

Thorium metal

The Consequences of Natural Morality

Good and Evil are not objective things.  They exist as subjective impressions, creating a powerful illusion that they are objective things.  This illusion that Good and Evil are objects independent of the conscious minds that imagine them exists for a good reason.  It “works.”  In other words, its existence has enhanced the probability that the genes responsible for its existence will survive and reproduce.  At least this was true at the time that the mental machinery we lump together under the rubric of morality evolved.  Unfortunately, it is no longer necessarily true today.  Times have changed rather drastically, making it all the more important that, when we speak of Good and Evil, we actually know what we’re talking about.

Philosophers, of course, have been “explaining” morality to the rest of us for millennia, erecting all sorts of complicated systems based on the false fundamental assumption that the illusion is real.  Now that the cat is out of the bag and the rest of us are finally showing signs of catching up with Darwin and Hume, it’s no wonder they’re feeling a little defensive.  Wouldn’t you be upset if you’d devoted a lot of time to struggling through Kant’s incredibly obscure and convoluted German prose, only to discover that his categorical imperative is based on assumptions about reality that are fundamentally flawed?

A typical reaction has been to assert that the truth can’t be the truth because they would be unhappy with it.  For example, they tell us that, if the enhanced probability that certain genes would survive is the ultimate reason for the very existence of morality, then it follows that,

•  We must all become moral relativists.

•  Punishment of criminals will be unjustified if Good and Evil are mere subjective impressions, and thus ultimately matters of opinion.

•  We cannot object to being robbed if some individuals have genes that predispose them to steal.

•  We cannot object to racism, anti-Semitism, religious bigotry, etc., if they are “in our genes.”

…and so on, and so on.  It’s as if we’re forbidden to act morally without the permission of philosophers and theologians.  I’ve got news for them.  We’ll continue to act morally, continue to be moral absolutists, and continue to punish criminals.  Why?  Because Mother Nature wants it that way.  It is our nature to act morally, to perceive Good and Evil as absolutes, and to punish free riders.  If you need evidence, look at Richard Dawkins’ tweets.  He’s a New Atheist, yet at the same time the most moralistic and self-righteous of men.  If asked to provide a rational basis for his moralizations, he would go wading off into an intellectual swamp.  That hardly keeps him from moralizing.  In other words, morality works whether you can come up with a “rational” basis for the existence of Good and Evil or not.  Furthermore, morality is the only game in town for regulating our social interactions with a minimum of mayhem.  As a species, we’re much too stupid to begin analyzing all our actions rationally with respect to their potential effects on our genetic destiny.

Other than that, of course, the truth about morality is what it is whether the theologians and philosophers approve of the truth or not.  They can like it or lump it.  My personal preference would be to keep it simple, and limit its sphere to the bare necessities.  We should also understand it.  In an environment radically different than the one in which it evolved, it can easily become pathological, prompting us to do things that are self-destructive, and potentially suicidal.  It would be useful to recognize such situations as they arise.  It would also be useful to promote instant recognition of the pathologically pious among us.  Their self-righteous posing can quickly become a social irritant.  In such cases, it can’t hurt to point out that they lack any logical basis for applying their subjective versions of Good and Evil to the rest of us.

Ingroups and Outgroups Ain’t Goin’ Nowhere!

Every day in every way things are getting better and better.  Well, all right, maybe that’s a stretch, but now and then, things actually do take a turn for the better, at least from my point of view.  Take this interview of Oliver Scott Curry at the This View of Life website, for example.  Here’s a young guy who gets human nature, and gets morality, and isn’t in the least bit afraid to talk about them as matter-of-factly as if he were discussing the weather.  Have a look at some of the things this guy says:

MICHAEL PRICE (Interviewer): What can evolutionary approaches tell us about human moral systems that other approaches cannot tell us? That is, what unique and novel insights about morality does an evolutionary approach provide?

OLIVER SCOTT CURRY: Well, everything. It can tell us what morality is, where it comes from, and how it works. No other approach can do that.  The evolutionary approach tells us that morality is a set of biological and cultural strategies for solving problems of cooperation and conflict. We have a range of moral instincts that are natural selection’s attempts to solve these problems. They are sophisticated versions of the kind of social instincts seen in other species…Above all, the evolutionary approach demystifies morality and brings it down to earth. It tells us that morality is just another adaptation that can be studied in the same way as any other aspect of our biology or psychology.

PRICE: The ordinary view in biology is that adaptations evolve primarily to promote individual fitness (survival and reproduction of self/kin). Do you believe that this view is correct with regard to the human biological adaptations that generate moral rules? Does this view imply that individuals moralize primarily to promote their own fitness interests (as opposed to promoting, e.g., group welfare)?  (TVOL editor David Sloan Wilson is one of the foremost advocates of group selection, ed.)

CURRY: No. Adaptations evolve to promote the replication of genes; natural selection cannot work any other way. Genes replicate by means of the effects that they have on the world; these effects include the formation of things like chromosomes, multicellular individuals, and groups. (My understanding is that everyone agrees about this, although there is some debate about whether groups are sufficiently coherent to constitute vehicles [1].)

PRICE: What work by others on the evolution of morality (or just on morality in general) have you found most enlightening?

CURRY: David Hume’s work has been particularly inspiring. In many ways he is the great-great-great granddaddy of evolutionary psychology. He almost stumbled upon the theory of evolution. He undertook a comparative “anatomy of the mind” that showed “the correspondence of passions in men and animals.” His “bundle theory of the self” hints at massive modularity. His A Treatise of Human Nature [2] introduced “the experimental method of reasoning into moral subjects,” and discusses relatedness, certainty of paternity, coordination and convention, reciprocal exchange, costly signals, dominance and submission, and the origins of property. He even anticipated by-product theories of religion, describing religious ideas as “the playsome whimsies of monkies in human shape” [3]. Remarkable.

Remarkable, indeed!  Curry just rattles off stuff that’s been hidden in plain sight for the last 100 years, but that would have brought his career to a screeching halt not that long ago.  Beginning in the 1920s, the obscurantists of the Blank Slate controlled the message about human nature in both the scientific and popular media for more than 50 years.  They imposed a stifling orthodoxy on the behavioral sciences that rendered much of the work in those fields as useless and irrelevant as the thousands of tomes about Marxism that were published during the heyday of the Soviet Union.  Their grip was only broken when a few brave authors stood up to them, and it became obvious to any 10-year-old that their “science” was absurd.  This should never, ever be forgotten in our hubris over the triumphs of science.  When the “men of science” start declaring that they have a monopoly on the truth, and that anyone who disagrees with them is not only wrong, but evil, it’s reasonable to suspect that what they’re promoting isn’t the truth, but an ideological narrative.

It’s refreshing, indeed, to hear from someone who, in spite of the fact that he clearly understands where morality comes from, doesn’t immediately contradict that knowledge by spouting nonsense about moral “truths.”  At least in this interview, I find nothing like Sam Harris’ delusions about “scientific moral truths,” or Jonathan Haidt’s delusions about “anthropocentric moral truths,” or Joshua Greene’s delusions about “utilitarian moral truths.”  I can but hope that Curry will never join them in their wild goose chase after the will-o’-the-wisp of “human flourishing.”

At the end of the interview, Curry reveals that he’s also aware of another aspect of human morality that makes many otherwise sober evolutionary psychologists squirm: our tendency to see the world in terms of ingroups and outgroups.  When Price questions him about the most important unsolved scientific puzzles in evolutionary moral psychology he replies that one of the questions that keeps him up at night is, “Why are people so quick to divide the world into ‘us and them’? Why not just have a bigger us? (I’d like to see an answer rooted in three-player game theory.)”

Hey, three-player game theory is fine with me, as long as we finally realize that the ingroup-outgroup thing is a fundamental aspect of human moral behavior, and one that it would behoove us to deal with rationally assuming we entertain hopes for the survival of our species.  As it happens, that’s easier said than done.  The academic milieu that is home to so many of the moral theorists and philosophers of our day has long been steeped in an extremely moralistic culture; basically a secular version of the Puritanism of the 16th and 17th centuries, accompanied by all the manifestations of self-righteous piety familiar to historians of that era.  It is arguably more difficult for such people to give up any rational basis for their addiction to virtuous indignation and the striking of highly ostentatious pious poses than it is for them to give up sex.  For them, the “real” Good must prevail.  As a result we have such gaudy and delusional “solutions” to the problem as Joshua Greene’s proposal that we simply stifle our moral emotions in favor of his “real” utilitarian morality, Sam Harris’ more practical approach of simply dumping everyone who doesn’t accept his “scientific” morality into a brand new outgroup, and various schemes for “expanding” our ingroup to include all mankind.

Sorry, it won’t work.  Ingroups and outgroups ain’t goin’ nowhere.  Stifle racism, and religious bigotry will take its place.  Stifle religious bigotry, and homophobia will jump in to take over.  Stifle all those things, and there will always be a few deluded souls around who dare to disagree with you.  They, in their turn, will become your new outgroup.  The outgroup have ye always with you.  Better to understand the problem than to pretend it’s not there.

As for Curry’s suggestion that we declare Hume the father of evolutionary psychology, nothing could please me more.