The world as I see it
  • German “Greens” and the Poisoning of Eastern Europe

    Posted on April 22nd, 2013 Helian No comments

    A while back in an online discussion with a German “Green,” I pointed out that, if Germany shut down its nuclear plants, coal plants would have to remain in operation to take up the slack.  He was stunned that I could be so obtuse.  Didn’t I realize that the lost nuclear capacity would all be replaced by benign “green” energy technology?  Well, it turns out things didn’t quite work out that way.  In fact, the lost generating capacity is being replaced by – coal.

    Germany is building new coal-fired power plants hand over fist, with 26 of them planned for the immediate future.  According to Der Spiegel, the German news magazine that never misses a trick when it comes to bashing nuclear, that’s a feature, not a bug.  A recent triumphant headline reads, “Export Boom:  German Coal Electricity Floods Europe.”  Expect more of the same from the home of Europe’s most pious environmentalists.  Germany has also been rapidly expanding its solar and wind capacity recently thanks to heavy state subsidies, but the wind doesn’t always blow and the sun doesn’t always shine, especially in Germany.  Coal plants are required to fill in the gaps – lots of them.  Of course, it would be unprofitable to let them sit idle when wind and solar are available, so they are kept going full blast.  When the power isn’t needed in Germany, it is sold abroad, serving as a useful prop to Germany’s export fueled economy.

    Remember the grotesque self-righteousness of Der Spiegel and the German “Greens” during the Kyoto Treaty debates at the end of the Clinton administration?  Complying with the Kyoto provisions cost the Germans nothing.  They had just shut down the heavily polluting and grossly unprofitable industries in the former East Germany, had brought large numbers of new gas-fired plants on line thanks to increasing gas supplies from the North Sea fields, and had topped it off with a lame economy in the ’90s compared to the booming U.S.  Their greenhouse gas emissions had dropped accordingly.  Achieving similar reductions in the U.S. wouldn’t have been a comparable “freebie.”  It would have cost tens of thousands of jobs.  The German “Greens” didn’t have the slightest problem with this.  They weren’t interested in achieving a fair agreement that would benefit all.  They were only interested in striking pious poses.

    Well, guess what?  Times have changed.  Last year U.S. carbon emissions were at their lowest level since 1994, and down 3.7% from 2011.  Our emissions are down 7.7% since 2006, the largest drop among major industrial states on the planet.  German emissions were up at least 1.5% last year, and probably more like 2%.  Mention this to a German “Green,” and he’s likely to mumble something about Germany still being within the Kyoto limits.  That’s quite true.  Germany is still riding the shutdown of what news magazine Focus calls “dilapidated, filthy, communist East German industry after the fall of the Berlin Wall,” to maintain the facade of environmental “purity.”

    That’s small comfort to her eastern European neighbors.  Downwind from Germany’s coal-fired plants, their “benefit” from her “green” policies is acid rain, nitrogen oxide-laced smog, deadly particulates that kill and sicken thousands and, last but not least, a rich harvest of radioactive fallout.  That’s right, Germany didn’t decrease the radioactive hazard to her neighbors by shutting down her nuclear plants.  She vastly increased it.  Coal contains several parts per million each of radioactive uranium and thorium.  These elements are harmless enough – if kept outside the body.  The energetic alpha particles they emit are easily stopped by a normal layer of skin.  When that happens, they dump the energy they carry within a very short distance, but since the outer layer of skin is dead, it doesn’t matter.  It’s an entirely different matter when they dump those several million electron volts of energy into a living cell – such as a lung cell.  Among other things, that can easily derange the reproductive equipment of the cell, causing cancer.  How can they reach the lungs?  Very easily if the uranium and thorium that emit them are carried in the ash from a coal-fired plant.  A typical coal-fired plant releases about 5 tons of uranium and 12 tons of thorium every year.  The German “Greens” have no problem with this, even though they’re constantly bitching about the relatively minuscule release of uranium from U.S. depleted uranium munitions.  Think scrubber technology helps?  Guess again!  The uranium and thorium are concentrated in the ash, whether it ends up in the air or not.  They can easily leach into surrounding cropland and water supplies.
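
    The numbers are easy to check.  Here is a back-of-envelope sketch in Python; the coal consumption and trace concentrations are illustrative assumptions on my part (roughly typical values for a large plant), not figures for any particular facility:

    ```python
    # Back-of-envelope estimate of the uranium and thorium carried in the ash
    # stream of a large coal-fired plant.  All inputs are illustrative assumptions:
    # a ~1 GWe plant burning about 4 million tonnes of coal per year, with trace
    # concentrations of roughly 1.3 ppm uranium and 3.2 ppm thorium in the coal.
    coal_burned_tonnes = 4.0e6     # assumed coal burned per year, tonnes
    uranium_ppm = 1.3              # assumed uranium concentration, grams per tonne
    thorium_ppm = 3.2              # assumed thorium concentration, grams per tonne

    uranium_tonnes = coal_burned_tonnes * uranium_ppm * 1e-6
    thorium_tonnes = coal_burned_tonnes * thorium_ppm * 1e-6

    print(f"Uranium in the ash stream: ~{uranium_tonnes:.1f} tonnes per year")
    print(f"Thorium in the ash stream: ~{thorium_tonnes:.1f} tonnes per year")
    # With these assumptions the totals land near the 5 tons of uranium and
    # 12 tons of thorium per year cited above.
    ```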

    The last time there was an attempt to move radioactive waste to the Gorleben storage facility within Germany, the “Greens” could be found striking heroic poses as saviors of the environment all along the line, demonstrating, tearing up tracks, and setting police vehicles on fire.  Their “heroic” actions forced the shutdown of Germany’s nuclear plants.  The “gift” (German for “poison”) of their “heroic” actions to Germany’s neighbors came in the form of acid rain, smog, and airborne radiation.  By any reasonable standard, coal-fired plants are vastly more dangerous and damaging to the environment than the nuclear facilities they replaced.

    It doesn’t matter to Germany’s “Greens.”  The acid rain, the radiation, the danger of global warming they always pretend to be so concerned about?  It doesn’t matter.  For them, as for the vast majority of other environmental zealots worldwide, the pose is everything.  The reality is nothing.


  • More on “Where have all the Babies Gone?”

    Posted on February 25th, 2013 Helian 1 comment

    Apropos the baby bust discussed in my last post, an interesting article on the subject by Joel Kotkin and Harry Siegel recently turned up at The Daily Beast via Newsweek entitled, “Where have all the Babies Gone?”  According to the authors, “More and more Americans are childless by choice.  But what makes sense for the individual may spell disaster for the country as a whole.”  Their forebodings of doom are based on a subset of the reasons cited in Jonathan Last’s “What to Expect when No One’s Expecting,” and are the same as those that usually turn up in similar articles.  The lack of babies, “….is likely to propel us into a spiral of soaring entitlement costs and diminished economic vigor, and create a culture marked by hyperindividualism and dependence on the state as the family unit erodes.”  As I pointed out in my earlier post on the subject, if all these things really will result absent a constantly increasing population, they are not avoidable outcomes, for the simple reason that, at some point, the population of the planet must stop growing.  The only question is how many people will be around to experience those outcomes when they happen, and what fraction of the planet’s resources will remain at that point to deal with the situation.  Many of the commenters on the article do an excellent job of pointing that out.  For example, from Si 1979,

    At some point the population has to stop growing, space on the earth is finite.  As such there is going to come a generation that needs to ‘take the hit’ and the earlier that hit is taken the easier it will be.  The larger the population gets the worse an ageing population each generation will experience.  We can take the hit now or leave future generations a much worse problem to deal with.

    To deal with the “problem,” the authors propose some of the usual “solutions”:

    These include such things as reforming the tax code to encourage marriage and children; allowing continued single-family home construction on the urban periphery and renovation of more child-friendly and moderate-density urban neighborhoods; creating extended-leave policies that encourage fathers to take more time with family, as has been modestly successful in Scandinavia; and other actions to make having children as economically viable, and pleasant, as possible. Men, in particular, will also have to embrace a greater role in sharing child-related chores with women who, increasingly, have careers and interests of their own.

    As a father of children who has strongly encouraged his own children to have children as well, I am fully in favor of all such measures, as long as they remain ineffective.

    The baby promoters have remarkably short historical memories.  Are they unaware of the other side of this coin?  One need go no further back than the 20th century.  What spawned Hitler’s grandiose dreams of “Lebensraum in the east” for Germany, at the expense of Russia?  Hint:  It wasn’t a declining German population.  Why did the Japanese come up with the “Greater East Asian Co-prosperity Sphere” idea and start invading their neighbors?  It happened at a time when her population was expanding at a rapid pace, and was, by the way, only about half of what it is now.  In spite of that, in the days before miracle strains of rice and other grains, it seemed impossible that she would be able to continue to feed her population.  Have we really put such fears of famine behind us for all times?  Such a claim must be based on the reckless assumption that the planet will never again suffer any serious disruptions in food production.  These are hardly isolated examples.  A book by Henry Cox entitled The Problem of Population, which appeared in 1923 and is still available on Amazon, cites numerous other examples.

    Needless to say, I don’t share the fears of the Kotkins and Siegels of the world.  My fear is that we will take foolhardy risks with the health of our planet by spawning unsustainably large populations in the name of maintaining entitlements which mankind has somehow incomprehensibly managed to live without for tens of thousands of years.  In truth, we live in wonderful times.  I and those genetically close to me can procreate without limit on a planet where the population may soon begin to decline to within sustainable limits because increasing numbers of people have decided not to have children.  It’s a win-win.  I’m happy with their choice, and presumably, they’re happy with mine, assuming they want to have some remnant of a working population available to exploit (or at least try to) once they’ve retired.  My only hope is that people like Kotkin and Siegel don’t succeed in rocking the boat.

  • Jonathan Last and the Un-Problem of Shrinking Populations

    Posted on February 24th, 2013 Helian 3 comments

    In his latest book, What to Expect When No One’s Expecting, Jonathan Last warns us of the dire consequences of shrinking populations.  He’s got it backwards.  It’s the best thing that could happen to us.

    Before proceeding with my own take on this issue, I would like to assure the reader that I am not a rabid environmentalist or a liberal of the sort who considers people with children morally suspect.  I have children and have encouraged my own children to have as many children as possible themselves.  It seems to me that the fact that those among us who are supposedly the most intelligent are also the most infertile is a convincing proof of the stupidity of our species.

    Why did I decide to have children?  In the end, it’s a subjective whim, just like every other “purpose of life” one might imagine.  However, as such I think it’s justifiable enough.  The explanation lies in the way in which I perceive my “self.”  As I see it, “we” are not our conscious minds, although that is what most of us perceive as “we.”  Our conscious minds are evanescent manifestations of the physical bodies whose development is guided by our genes.  They pop into the world for a moment and are then annihilated in death.  They exist for that brief moment for one reason only – because they happened to promote our genetic survival.  Is it not more reasonable to speak of “we” as that about us which has existed for billions of years and is potentially immortal, namely, our genes, than to assign that term to an ancillary manifestation of those genes that exists for a vanishingly small instant of time by comparison?  We have a choice.  We can choose that this “we” continue to survive, or we can choose other goals, and allow this “we” to be snuffed out, so that the physical bodies that bear our “we” become the last link in an unbroken chain stretching back over billions of years.  There is no objective reason why we should prefer one choice or the other.  The choice is purely subjective.  The rest of the universe cares not a bit whether our genes survive or not.  I, however, care.  If countless links in a chain have each created new links in turn and passed on the life they carried over the eons, only to come to a link possessed of qualities that cause it to fail to continue the chain, it seems reasonable to consider that link dysfunctional, or, in the most real sense imaginable, a failure.  I personally would not find the realization comforting that I am a sick and dysfunctional biological unit, a failure at carrying out that one essential function that a process of natural selection has cultivated for an almost inconceivable length of time.  Therefore, I have children.  As far as I am concerned, they, and not wealth, or property, or fame, are the only reasonable metric of success in the life of any individual.  The very desires for wealth, property, or fame exist only because at some point in our evolutionary history they have promoted our survival and procreation.  As ends in themselves, divorced from the reason they came into existence in the first place, they lead only to death.

    Am I concerned if others don’t agree with me?  Far from it!  And that brings us back to the main point of this post.  I do not agree with Jonathan Last that a constantly increasing population, or even a stable one at current levels, is at all desirable.  As far as I am concerned, it is a wonderful stroke of luck that in modern societies the conscious minds of so many other humans have become dysfunctional, resulting in their genetic death.  I am interested in keeping other genes around only to the extent that they promote the survival of my own.  That is also the only reason that I would prefer one level of population on the planet to one that is larger or smaller.  That, of course, is a very personal reason, but it seems to me that it is a conclusion that must follow for anyone else to the extent that they prefer survival to the alternative.

    Survival, then, is my sine qua non.  Given that this planet is, for practical purposes, the only one we can depend on to support our survival, I consider it foolhardy to prefer a population that is potentially unsustainable, or that will diminish everyone’s chances of long term survival.  I am hardly a fanatical environmentalist.  I would just prefer that we refrain from rocking the boat.  I have read Bjorn Lomborg’s The Skeptical Environmentalist, and am well aware of how frequently the environmentalists have been crying “wolf” lo now these many years.  However, like Lomborg, I agree that there is still reason for concern.  Pollution and environmental degradation are real problems, as is the rapid exploitation of limited sources of cheap energy and other raw materials.  Obviously, Paul Ehrlich’s dire predictions that we would run out of everything in short order were far off the mark.  However, eventually, they will run out, and it seems reasonable to me to postpone the date as long as possible.  Let us consider the reasons Jonathan Last believes all these risks are worth taking.  In all honesty, assuming we are agreed that survival is a worthwhile goal, they seem trivial to me.

    To begin, while paying lip service to the old chestnut that a correlation does not necessarily indicate causation, Last suggests exactly that.  On page 7 of the hardcover version of his book he writes, “Declining populations have always followed or been followed by Very Bad Things.  Disease.  War.  Economic stagnation or collapse.”  To see whether this suggestion holds water, let’s look at one of Last’s own examples of “declining populations.”  On p. 36 he writes, “World population also declined steeply between 1340 and 1400, shrinking from 443 million to 374 million.  This was not a period of environmental and social harmony; it was the reign of the Black Death.”  I leave it as an exercise for the reader to determine whether declining populations were the cause of the Black Death, or the Black Death was the cause of declining populations.  To anyone who has read a little history, it is abundantly clear that, while disease, war, and economic collapse may cause depopulation, the instances where the reverse was clearly the case are few and far between.  In a similar vein, referring to the Roman Empire, Last writes on p. 35, “Then, between A.D. 200 and 600, population shrank from 257 million to 208 million, because of falling fertility.  We commonly refer to that period as the descent into the Dark Ages.”  Where is the evidence that the population fell because of “falling fertility”?  Last cites none.  On the other hand, there is abundant source material from the period to demonstrate that, as in the case of the Black Death, declining populations were a result, and not a cause.  In Procopius’ history of the Great Italian War in the 6th century, for example, he notes that Italy has become depopulated.  The great historian was actually there, and witnessed the cause first hand.  It was not “declining fertility,” but starvation resulting from the destruction of food sources by marauding armies.

    However, this allusion to “Very Bad Things” is really just a red herring.  Reading a little further in Last’s book, it doesn’t take us long to discover the real burrs under his saddle.  Most of them may be found by glancing through the 50 pages between chapters 5 and 7 of his book.  They include: 1) the difficulty of caring for the elderly; 2) a decrease in inventiveness and entrepreneurship (because of a disproportionate share of elderly); 3) a decline in military strength, accompanied by an unwillingness to accept casualties; and 4) lower economic growth.  The idea that anyone could seriously suggest that any of these transient phenomena could justify playing risky games with the ability of our planet to sustain life for millennia into the future boggles the mind.  The population of the planet cannot keep increasing indefinitely in any case.  At some point, it must stabilize, and these consequences will follow regardless.  The only question is how many people will be affected.

    Consider Japan, a country Last considers an almost hopeless demographic basket case.  Its population was only 42 million as recently as 1900.  At the time it won wars against both China and Russia, which had much greater populations of 415 million and 132 million, respectively.  Will it really be an unmitigated disaster if its population declines to that level again?  It may well be that Japan’s elderly will have to make do with less during the next century or two.  I hereby make the bold prediction that, in spite of that, they will not all starve to death or be left without health care to die in the streets.  Demographically, Japan is the most fortunate of nations, not the least favored.  At least to date, she does not enjoy the “great advantage” of mass immigration by culturally alien populations, an “advantage” that is likely to wreak havoc in the United States and Europe.

    As for military strength, I doubt that we will need to fear enslavement by some foreign power as long as we maintain a strong and reliable nuclear arsenal, and, with a smaller population, the need to project our power overseas, for example to protect sources of oil and other resources, will decline because our needs will be smaller.  As for inventiveness, entrepreneurship, and economic growth, it would be better to promote them by restraining the cancerous growth of modern tax-devouring welfare states than by artificially stimulating population growth.  Again, all of Last’s “Very Bad Things” are also inevitable things.  What he is proposing will not enable us to avoid them.  It will merely postpone them for a relatively short time, at which point they will be even more difficult to manage because of depleted resources and a degraded environment than they are now.  It seems a very meager excuse for risking the future of the planet.

    In a word, I favor a double standard.  Unrestricted population growth for my own family and those closely related to me genetically, balanced by an overall decline in population.  There is nothing incongruous about this.  It is the inherent nature of our species to apply one standard to our ingroup, and an entirely different one to outgroups.  We all do the same, regardless of whether we are prepared to admit it or not.  I leave you, dear reader, in the hope that you will not become confused by the distinction between the two.

  • Nuclear Energy and the “Too Cheap to Meter” Lie

    Posted on February 4th, 2013 Helian No comments

    According to a German proverb, “Lügen haben kurze Beine” – Lies have short legs.  That’s not always true.  Some lies have very long ones.  One of the most notorious is the assertion, long a staple of anti-nuclear propaganda, that the nuclear industry ever claimed that nuclear power would be “Too cheap to meter.”  In fact, according to the New York Times, the phrase did occur in a speech delivered to the National Association of Science Writers by Lewis L. Strauss, then Chairman of the Atomic Energy Commission, in September 1954.  Here is the quote, as reported in the NYT on September 17, 1954:

    “Our children will enjoy in their homes electrical energy too cheap to meter,” he declared.   …    “It is not too much to expect that our children will know of great periodic regional famines in the world only as matters of history, will travel effortlessly over the seas and under them and through the air with a minimum of danger and at great speeds, and will experience a lifespan far longer than ours, as disease yields and man comes to understand what causes him to age.”

    Note that nowhere in the quote is there any direct reference to nuclear power, or for that matter, to fusion power, although the anti-nuclear Luddites have often attributed it to proponents of that technology as well.  According to Wikipedia, Strauss was “really” referring to the latter, but I know of no evidence to that effect.  In any case, Strauss had no academic or professional background that would qualify him as an expert in nuclear energy.  He was addressing the science writers as a government official, and hardly as a “spokesman” for the nuclear industry.  The sort of utopian hyperbole reflected in the above quote is just what one would expect in a talk delivered to such an audience in the era of scientific and technological hubris that followed World War II.  There is an excellent and detailed deconstruction of the infamous “Too cheap to meter” lie on the website of the Canadian Nuclear Society.  Some lies, however, are just too good to ignore, and anti-nuclear zealots continue to use this one on a regular basis, as, for example, here, here and here.  The last link points to a paper by long-time anti-nukers Arjun Makhijani and Scott Saleska.  They obviously knew very well the provenance of the quote and the context in which it was given.  For example, quoting from the paper:

    In 1954, Lewis Strauss, Chairman of the U.S. Atomic Energy Commission, proclaimed that the development of nuclear energy would herald a new age. “It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter,” he declared to a science writers’ convention.  The speech gave the nuclear power industry a memorable phrase to be identified with, but it also saddled it with a promise that was essentially impossible to fulfill.

    In other words, it didn’t matter that they knew very well that Strauss had no intention of “giving the nuclear power industry a memorable phrase to be identified with.”  They used the quote in spite of the fact that they knew that claim was a lie.  In all fairness, it can be safely assumed that most of those who pass along the “too cheap to meter” lie are not similarly culpable.  They are merely ignorant.

  • Second Thoughts about Green Energy in Germany

    Posted on January 7th, 2013 Helian No comments

    Der Spiegel, Germany’s top news magazine, has been second to none in promoting green energy, striking pious poses over the U.S. failure to jump on the Kyoto bandwagon, and trashing nuclear energy.  All this propaganda has succeeded brilliantly.  Germany has a powerful Green Party and is a world leader in the production of wind and solar energy, the latter in a cloudy country, the lion’s share of which lies above the 50th parallel of latitude.  Now the bill has come due.  In 2012 German consumers paid more than 20 billion Euros for green energy that was worth a mere 2.9 billion on the open market.  True to form, Der Spiegel has been churning out shrill condemnations of the high prices, as if it never had the slightest thing to do with promoting them in the first place.  In an article entitled “Green Energy Costs Consumers More Than Ever Before,” we find, among other things, that,

    The cost of renewable energy continues climbing year after year.  At the beginning of the year it increased from 3.59 to 5.27 (Euro) cents per kilowatt hour.  One of the reasons for the increase is solar energy:  more new solar facilities were installed in Germany in 2012 than ever before.  The drawback of the solar boom is that it drives up the production costs paid by consumers.  The reason – green energy producers will receive guaranteed compensation for every kilowatt hour for the next 20 years.

    As a result, German consumers saw their bills for electricity increase by an average of 12% at the beginning of 2013.  The comments following the article are at least as revealing as its content.  The environmental hubris of the population shows distinct signs of fading when translated into terms of cold, hard cash.  Examples:

    What a laugh!  The consumers pay 17 billion Euros, and the producers receive 2.9 billion Euros.  Conclusion:  End the subsidies for solar facilities immediately!!  It’s too bad that the pain of consumers – if the Green Party joins the government after the Bundestag election – won’t end, but will only get worse.  Other than that, solar facilities belong in countries with significantly more hours of sunlight than Germany.

    Those were the days, when (Green politician) Trittin told shameless lies to the public, claiming that the switch to green energy would only cost 1.5 Euros per household.

    In ten years we’ll learn what the green energy lies are really going to cost us.

    The real costs are even higher.  When there’s no wind, or clouds cut off the sunlight, then the conventional energy sources held in reserve must make up the deficit; the oil, coal and brown coal energy plants.  If production costs are calculated correctly, then their expense should be included in the price of green energy.  All at once there is a jump from 17 billion to 25 billion Euros in the price we have to pay for the “favors” the Green-Red parties have done us.

    Specious arguments about the supposedly comparable costs of the nuclear power plants Germany is in the process of shutting down are no longer swallowed with alacrity.  For example, in response to the familiar old chestnut of citing exaggerated costs for decommissioning nuclear plants and storing the waste, a commenter replies:

    Hmmm, if nuclear energy is so expensive, why are so many countries in central Europe – for example, the Czech Republic – interested in nuclear power?  Certainly not to breed actinides to build nuclear weapons in order to become “nuclear powers.”  The cost of long-term waste storage in terms of the energy produced only amounts to about 0.01 Euros per kWh.  Even decommissioning expenses don’t add significantly to the overall cost… Let us split atoms, not hairs.

    A “green” commenter suggests that the cleanup costs for the Fukushima reactors be automatically added to the cost of all reactors:

    According to the latest figures for November 2012 for Fukushima:  100 billion Euros.  Distributing this over the total energy production of 880,000 GWh (according to Wikipedia) that’s 11 cents per kilowatt hour.  That amounts to twice the “prettified” cost of nuclear power (without insurance and without subsidies) of 5 cents per kilowatt hour.  And even then the Japanese were lucky that the wind didn’t shift in the direction of Tokyo.  But the 100 billion won’t be the last word.

    Drawing the response from another reader:

    Let’s see.  Japanese nuclear power plants have produced 7,656,400 GWh of energy.  In comparison to economic costs in the high tens of billions, 100 billion suddenly doesn’t seem so unreasonable.  It only adds 1.3 cents per kWh to the cost of nuclear energy.  Peanuts.  In Germany, renewables are currently costing an average of 18 cents per kWh.  That translates to 100 billion in under four years.  In other words, thanks to renewables, we have a Fukushima in Germany every four years.
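
    Both commenters’ figures are easy to reproduce.  A minimal Python sketch using only the numbers quoted in the comments above:

    ```python
    # Reproduce the per-kilowatt-hour figures from the two comments above,
    # using only the numbers they quote.
    cleanup_cost_eur = 100e9        # claimed Fukushima cleanup estimate, Euros

    def cents_per_kwh(cost_eur, output_gwh):
        """Spread a cost over a cumulative energy output, in Euro cents per kWh."""
        return 100.0 * cost_eur / (output_gwh * 1e6)    # 1 GWh = 1,000,000 kWh

    # First commenter: cost spread over Fukushima's lifetime output of 880,000 GWh.
    print(f"{cents_per_kwh(cleanup_cost_eur, 880_000):.0f} ct/kWh")     # ~11 ct/kWh

    # Second commenter: cost spread over the 7,656,400 GWh produced by all
    # Japanese nuclear plants.
    print(f"{cents_per_kwh(cleanup_cost_eur, 7_656_400):.1f} ct/kWh")   # ~1.3 ct/kWh
    ```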

    In response to a remark about all the wonderful green jobs created, another commenter responds,

    Jobs created?  Every job is subsidized to the tune of 40,000 Euros; how, exactly, is that supposed to result in a net gain for the economy overall??  According to your logic, all we have to do to eliminate any level of unemployment is just subsidize it away.  That’s Green politics for you.

    Another unhappy power customer has noticed that, in addition to the hefty subsidy he’s paying for his own power, he has to finance his well-heeled “green” neighbor’s rooftop solar array as well:

    Whoever is surprised about the increases in the cost of electricity hasn’t been paying attention.  There’s no such thing as a free lunch.  At the moment the consumer is paying for the solar cells on his neighbor’s roof right along with his own electricity bill.  Surprising?  Who’s surprised?

    It’s amazing how effective a substantial and increasing yearly hit to income can be in focusing the mind when it comes to assessing the real cost of green energy.

  • Nuclear Fusion Update

    Posted on June 10th, 2012 Helian 2 comments

    As I mentioned in a previous post about fusion progress, signs of life have finally been appearing in scientific journals from the team working to achieve fusion ignition at the National Ignition Facility, or NIF, located at Lawrence Livermore National Laboratory (LLNL) in California.  At the moment they are “under the gun,” because the National Ignition Campaign (NIC) is scheduled to end with the end of the current fiscal year on September 30.  At that point, presumably, work at the facility will be devoted mainly to investigations of nuclear weapon effects and physics, which do not necessarily require fusion ignition.  Based on a paper that recently appeared in Physical Review Letters, chances of reaching the ignition goal before that happens are growing dimmer.

    The problem has to do with a seeming contradiction in the physical requirements for fusion to occur in the inertial confinement approach pursued at LLNL.  In the first place, it is necessary for the NIF’s 192 powerful laser beams to compress, or implode, a target containing fusion fuel in the form of two heavy isotopes of hydrogen to extremely high densities.  It is much easier to compress materials that are cold than those that are hot.  Therefore, it is essential to keep the fuel material as cold as possible during the implosion process.  In the business, this is referred to as keeping the implosion on a “low adiabat.”  However, for fusion ignition to occur, the nuclei of the fuel atoms must come extremely close to each other.  Unfortunately, they’re not inclined to do that, because they’re all positively charged, and like charges repel.  How to overcome the repulsion?  By making the fuel material extremely hot, causing the nuclei to bang into each other at high speed.  The whole trick of inertial confinement fusion, then, is to keep the fuel material very cold, and then, in a tiny fraction of a second, while its inertia holds it in place (hence the name, “inertial” confinement fusion), raise it, or at least a small bit of it, to the extreme temperatures necessary for the fusion process to begin.

    The proposed technique for creating the necessary hot spot was always somewhat speculative, and more than one fusion expert at the national laboratories was dubious that it would succeed.  It consisted of creating a train of four shocks during the implosion process, which were to overtake one another all at the same time precisely at the moment of maximum compression, thereby creating the necessary hot spot.  Four shocks are needed because of well-known theoretical limits on the compression a single shock can achieve without driving the fuel onto too high an adiabat.  Which brings us back to the paper in Physical Review Letters.
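
    A standard way to state that single-shock limit is via the Rankine-Hugoniot jump conditions for an ideal gas: as the shock strength (Mach number M1) grows, the density jump saturates while the post-shock temperature does not, so a single shock buys at most a factor-of-four compression at the price of a very high adiabat:

    ```latex
    \frac{\rho_2}{\rho_1} \;=\; \frac{(\gamma+1)\,M_1^2}{(\gamma-1)\,M_1^2 + 2}
    \;\longrightarrow\; \frac{\gamma+1}{\gamma-1} \;=\; 4
    \quad \text{as } M_1 \to \infty,\ \gamma = 5/3,
    \qquad
    \frac{T_2}{T_1} \;\sim\; M_1^2 \ \text{for a strong shock.}
    ```

    A sequence of weaker shocks, each launched into already-compressed material, approximates an adiabatic compression and keeps the main fuel cold until the final convergence forms the hot spot.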

    The paper, entitled Precision Shock Tuning on the National Ignition Facility, describes the status of efforts to get the four shocks to jump through the hoops described above.  One cannot help but be impressed by the elegant diagnostic tools used to observe and measure the shocks.  They are capable of peering through materials under the extreme conditions in the NIF target chamber, focusing on the tiny, imploded target core, and measuring the progress of a train of shocks over a period that only lasts for a few billionths of a second!  These diagnostics, developed with the help of another team of brilliant scientists at the OMEGA laser facility at the University of Rochester’s Laboratory for Laser Energetics, are a triumph of human ingenuity.  They reveal that the NIF is close to achieving the ignition goal, but not quite close enough.  As noted in the paper, “The experiments also clearly reveal an issue with the 4th shock velocity, which is observed to be 20% slower than predictions from numerical simulation.”

    It will be a neat trick indeed if the NIF team can overcome this problem before the end of the National Ignition Campaign.  In the event that they don’t, one must hope that the current administration is not so short-sighted as to conclude that the facility is a failure, and severely reduce its funding.  There is too much at stake.  I have always been dubious about the possibility that either the inertial or magnetic approach to fusion will become a viable source of energy any time in the foreseeable future.  However, I may be wrong, and even if I’m not, achieving inertial fusion ignition in the laboratory may well point the way to as yet undiscovered paths to the fusion energy goal.  Ignition in the laboratory will also give us a significant advantage over other nuclear weapons states in maintaining our arsenal without nuclear testing.

    Based on the progress reported to date, there is no basis for the conclusion that ignition is unachievable on the NIF.  Even if the central hot spot approach currently being pursued proves too difficult, there are alternatives, such as polar direct drive and fast ignition.  However, pursuing these alternatives will take time and resources.  They will become a great deal more difficult to realize if funding for NIF operations is severely cut.  It will also be important to maintain the ancillary capability provided by the OMEGA laser.  OMEGA is much less powerful but also a good deal more flexible and nimble than the gigantic NIF, and has already proved its value in testing and developing diagnostics, investigating novel experimental approaches to fusion, developing advanced target technology, etc.

    We have built world-class facilities.  Let us persevere in the quest for fusion.  We cannot afford to let this chance slip.

  • Evolutionary Psychology in the Dark Ages: The Legacy of Theodosius Dobzhansky

    Posted on May 8th, 2012 Helian 1 comment

    Theodosius Dobzhansky was an important early proponent of what is now generally referred to as evolutionary psychology.  Although his last book appeared as recently as 1983, he is generally forgotten today, at least in the fanciful and largely imaginary “histories” of the field that appear in college textbooks.  Unfortunately, he was indelicate enough to jump the gun, joining contemporaries like Robert Ardrey and Konrad Lorenz in writing down the essential ideas of evolutionary psychology, particularly as applied to humans, long before the publication of E. O. Wilson’s Sociobiology in 1975.

    That event was subsequently arbitrarily anointed by the gatekeepers of the chronicles of the science as the official “beginning” of evolutionary psychology.  In fact, the reason Sociobiology gained such wide notoriety was Wilson’s insistence that what is commonly referred to as human nature actually does exist.  As I have noted elsewhere, neither that claim nor the controversy surrounding it began with Wilson.  Far from it.  The “Blank Slate” opponents of Wilson’s ideas had long recognized Robert Ardrey as their most significant and effective opponent, with Konrad Lorenz a close second.  Dobzhansky’s Mankind Evolving also presented similar hypotheses, well-documented with copious experimental evidence which, if textbooks such as David Buss’ Evolutionary Psychology are to be believed, didn’t exist at the time.  Anyone who reads Mankind Evolving, published in 1962, a year after Ardrey’s African Genesis, will quickly realize from the many counter-examples noted in the book that Buss’ claim that the early ethologists and their collaborators, “…did not develop rigorous criteria for discovering adaptations,” is a myth.  Alas, Dobzhansky was premature.  He wrote too early to fit neatly into the “history” of evolutionary psychology concocted later.

    It’s unfortunate that Dobzhansky has been swept under the rug with the rest, because he had some interesting ideas that don’t appear in many other works.  He also wrote from the point of view of a geneticist, which enabled him to explain the mechanics of evolution with unusual clarity.

    Latter-day critics of evolutionary psychology commonly claim that it minimizes the significance of culture.  Not only is that not true today, but it has never been true.  Thinkers like Ardrey, Lorenz and Eibl-Eibesfeldt never denied the importance of culture.  They merely insisted that the extreme cultural determinism of the Blank Slate orthodoxy that prevailed in their day was wrong, and that innate, evolved traits also had a significant effect on human behavior.  Dobzhansky was very explicit about it, citing numerous instances in which culture and learning played a dominant role, and others more reliant on innate predispositions.  As he put it,

    In principle any trait is modifiable by changes in the genes and by manipulation of the environment.

    He went so far as to propose a theory of superorganisms:

    In producing the genetic basis for culture, biological evolution has transcended itself – it has produced the superorganic.

    …and constantly stressed the interdependence of innate predispositions and culture. For example,

    Why do so many people insist that biological and cultural evolution are absolutely independent?  I suggest that this is due in large part to a widespread misunderstanding of the nature of heredity… Biological heredity, which is the basis of biological evolution, does not transmit cultural, or for that matter physical, traits ready-made; what it does is determine the response of the developing organism to the environment in which the development takes place.

    The dichotomy of hereditary and environmental traits is untenable:  in principle, any trait is modifiable by changes in the genes and by manipulation of the environment.

    In higher animals and most of all in man instinctual behavior is intertwined with, overlaid by, and serves merely as a backdrop to learned behavior. Yet it would be rash to treat this backdrop as unimportant.

    …the old fashioned nature-nurture debates were meaningless.  The dichotomy of environment vs. genetic traits is invalid; what we really want to know are the relative magnitudes of the genetic and environmental components in the variance observed in a given trait, a certain population, at a particular time.

    It has a surprisingly modern ring to it for something written in 1962, doesn’t it?  Dobzhansky was as well aware as Ardrey of the reasons for the Blank Slate orthodoxy that prevailed in the behavioral sciences when he wrote Mankind Evolving, reasons that are now being so assiduously ignored, as if the decades-long ideological derailment of such “sciences” as anthropology, sociology and psychology, and their insistence on doctrines so bogus a child could have immediately recognized them as such, were a matter of no concern.  Citing Ashley Montagu, editor of that invaluable little document of the times, Man and Aggression, as a modern proponent of such ideas, he writes,

    Some philosophes who were perhaps bothered by questions of this sort (whether human nature was really good or not) concluded that human nature is, to begin with, actually a void, an untenanted territory.  The “tabula rasa” theory was apparently first stated clearly by John Locke (1632-1704).  The mind of a newborn infant is, Locke thought, a blank page.

    Pastore (1949) compared the sociopolitical views of twenty-four psychologists, biologists, and sociologists with their opinions concerning the nature-nurture problem.  Among the twelve classified as “liberals or radicals,” eleven were environmentalists and one an hereditarian; among the twelve “conservatives,” eleven were hereditarians and one an environmentalist.  This is disconcerting!  If the solution of a scientific problem can be twisted to fit one’s biases and predilections, the field of science concerned must be in a most unsatisfactory state.

    That is certainly the greatest understatement in Dobzhansky’s book.  In fact, for a period of decades in the United States, major branches of the behavioral sciences functioned, not as sciences, but as ideological faiths posing as such.  The modern tendency to sweep that inconvenient truth under the rug is dangerous in the extreme.  It is based on the apparent assumption that such a thing can never happen again.  It not only will happen again, but is happening even as I write this.  It will happen a great deal more frequently as long as we continue to refuse to learn from our mistakes.

  • Nuclear Power, Thorium, and the Role of Government

    Posted on May 6th, 2012 Helian 9 comments

    Nuclear power is an attractive candidate for meeting our future energy needs.  Nuclear plants do not release greenhouse gases.  They release significantly less radiation into the environment than coal plants, because coal contains several parts per million of radioactive thorium and uranium.  They require far less space and are far more reliable than alternative energy sources such as wind and solar.  In spite of some of the worst accidents imaginable due to human error and natural disasters, we have not lost any cities or suffered any mass casualties, and the horrific “China Syndrome” scenarios invented by the self-appointed saviors of mankind have proven to be fantasies.  That is not to say nuclear power is benign.  It is just more benign than any of the currently available alternatives.  The main problem with nuclear is not that it is unsafe, but that it is being ill-used.  In this case, government could actually be helpful.  Leadership and political will could put nuclear on a better track.

    To understand why, it is necessary to know a few things about nuclear fuel, and how it “burns.”  Bear with me while I present a brief tutorial in nuclear engineering.  Nuclear energy is released by nuclear fission, or the splitting of heavy elements into two or more lighter ones.  This doesn’t usually happen spontaneously.  Before a heavy element can undergo fission, an amount of energy above a certain threshold must first be delivered to its nucleus.  How does this happen?  Imagine a deep well.  If you drop a bowling ball into the well, it will cause a large splash when it hits the water.  It does so because it has been accelerated by the force of gravity.  A heavy nucleus is something like a well, but things don’t fall into it because of gravity.  Instead, it relies on the strong force, which is very short range, but vastly more powerful than gravity.  The role of “bowling ball” can be played by a neutron.  If one happens along and gets close enough to fall into the strong force “well,” it will also cause a “splash,” releasing energy as it is bound to the heavy element’s nucleus, just as the real bowling ball is “bound” in the water well until someone fishes it out.  This “splash,” or release of energy, causes the heavy nucleus to “jiggle,” much like an unstable drop of water.  In one naturally occurring isotope – uranium with an atomic weight of 235 – this “jiggle” is so violent that it can cause the “drop of water” to split apart, or fission.

    There are other isotopes of uranium.  All of them have 92 protons in their nucleus, but can have varying numbers of neutrons.  The nucleus of uranium 235, or U235, has 92 protons and 143 neutrons, adding up to a total of 235.  Unfortunately, U235 is only 0.7% of natural uranium.  Almost all the rest is U238, which has 92 protons and 146 neutrons.  When a neutron falls into the U238 “well,” the “splash” isn’t big enough to cause fission, or at least not unless the neutron had a lot of energy to begin with, as if the “bowling ball” had been shot from a cannon.  As a result, U238 can’t act as the fuel in a nuclear reactor.  Almost all the nuclear reactors in operation today simply burn that 0.7% of U235 and store what’s left over as radioactive waste.  Unfortunately, that’s an extremely inefficient and wasteful use of the available fuel resources.

    To understand why, it’s necessary to understand something about what happens to the neutrons in a reactor that keep the nuclear chain reaction going.  First of all, where do they come from?  Well, each fission releases more neutrons.  The exact number depends on how fast the neutron that caused the fission was going, and what isotope underwent fission.  If enough are released to cause, on average, one more fission, then the resulting chain reaction will continue until the fuel is used up.  Actually, two neutrons, give or take, are released in each fission.  However, not all of them cause another fission.  Some escape the fuel region and are lost.  Others are absorbed in the fuel material.  That’s where things get interesting.

    Recall that, normally, most of the fuel in a reactor isn’t U235, but the more common isotope, U238.  When U238 absorbs a neutron, it forms U239, which quickly decays to neptunium 239 and then plutonium 239.  Now it just so happens that plutonium 239, or Pu239, will also fission if a neutron “falls into its well,” just like U235.  In other words, if enough neutrons were available, the reactor could actually produce more fuel, in the form of Pu239, than it consumes, potentially burning up most of the U238 as well as the U235.  This is referred to as the “breeding” of nuclear fuel.  Instead of just lighting the U235 “match” and letting it burn out, it would be used to light and burn the entire U238 “log.”  Unfortunately, there are not enough neutrons in normal nuclear reactors to breed more fuel than is consumed.  Such reactors have, however, been built, both in the United States and other countries, and have been safely operated for periods of many years.
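
    For reference, the capture-and-decay chain just described can be written out explicitly (the half-lives are the standard values):

    ```latex
    {}^{238}\mathrm{U}\,(n,\gamma)\;{}^{239}\mathrm{U}
    \;\xrightarrow{\ \beta^-,\ \sim 23\ \mathrm{min}\ }\;
    {}^{239}\mathrm{Np}
    \;\xrightarrow{\ \beta^-,\ \sim 2.4\ \mathrm{d}\ }\;
    {}^{239}\mathrm{Pu}
    ```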

    Plutonium breeders aren’t the only feasible type.  In addition to U235 and Pu239, another isotope will also fission if a neutron falls into its “well” - uranium 233.  Like Pu239, U233 doesn’t occur in nature.  However, it can be “bred,” just like Pu239, from another element that does occur in nature, and is actually more common than uranium – thorium.  I’ve had a few critical things to say about some of the popular science articles I’ve seen on thorium lately, but my criticisms were directed at inaccuracies in the articles, not at thorium technology itself.  Thorium breeders actually have some important advantages over plutonium.  When U233 fissions, it produces more neutrons than Pu239, and it does so in a “cooler” neutron spectrum, where the average neutron energy is much lower, making the reactor significantly easier to control.  These extra neutrons could not only breed more fuel.  They could also be used to burn up the transuranic elements – those beyond uranium on the table of the elements – that are produced in conventional nuclear reactors, and account for the lion’s share of the long-lived radioactive waste.  This would be a huge advantage.  Destroy the transuranics, and the residual radioactivity from a reactor would be less than that of the original ore, potentially in a few hundred years, rather than many thousands.
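
    A toy neutron balance makes the “extra neutrons” argument concrete.  The sketch below uses approximate textbook values of eta, the number of neutrons released per neutron absorbed in the fissile isotope, for a thermal (slow) neutron spectrum; the loss term is an arbitrary illustrative assumption, not a design number:

    ```python
    # Toy neutron balance for a thermal-spectrum breeder.  Of the eta neutrons
    # released per neutron absorbed in the fissile fuel, one must sustain the chain
    # reaction and some are lost to leakage and parasitic capture; whatever remains
    # can be captured in Th-232 (or U-238) to breed new fissile material.
    # The eta values are approximate textbook thermal-spectrum numbers; the loss
    # fraction is an illustrative assumption.
    ETA_THERMAL = {
        "U-233":  2.29,   # bred from thorium: Th-232 + n -> Th-233 -> Pa-233 -> U-233
        "Pu-239": 2.11,   # bred from U-238, as described above
        "U-235":  2.07,   # the naturally occurring fissile isotope
    }
    LOSSES = 0.15         # assumed neutrons lost per absorption (leakage, structure, etc.)

    for isotope, eta in ETA_THERMAL.items():
        conversion_ratio = eta - 1.0 - LOSSES   # neutrons left over for breeding
        verdict = "breeds more fuel than it burns" if conversion_ratio > 1.0 else "falls short"
        print(f"{isotope}: conversion ratio ~{conversion_ratio:.2f} -> {verdict}")

    # With these numbers only U-233 clears the breeding threshold in a thermal
    # spectrum, which is the advantage of the thorium cycle described above.
    # (Plutonium breeding is instead done in a fast spectrum, where its eta is higher.)
    ```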

    Thorium breeders have other potentially important advantages.  The fuel material could be circulated through the core in the form of a liquid, dissolved in a special molten “salt.”  Of course, this would eliminate the danger of a fuel meltdown.  In the event of an accident like the one at Fukushima, the fuel would simply be allowed to run into a holding basin, where it would be sub-critical and cool quickly.  Perhaps more importantly, the United States has the biggest proven reserves of thorium on the planet.

    Breeders aren’t the only reactor types that hold great promise for meeting our future energy needs.  High temperature gas cooled reactors would produce gas heated to high temperature in addition to electricity.  This could be used to produce hydrogen gas via electrolysis, which is much more efficient at such high temperatures.  When hydrogen burns, it produces only water.  Such reactors could also be built over the massive oil shale deposits in the western United States.  The hot gas could then be used to efficiently extract oil from the shale “in situ” without the need to mine it.  It is estimated that the amount of oil that could be economically recovered in this way from the Green River Basin deposits in Utah, Wyoming and Colorado alone is three times greater than the oil reserves of Saudi Arabia.

    Will any of this happen without government support and leadership?  Not any time soon.  The people who build nuclear reactors expect to make a profit, and the easiest way to make a profit is to build more conventional reactors of the type we already have.  Raise the points I’ve mentioned above, and they’ll simply tell you that there’s plenty of cheap uranium around and therefore no need to breed more fuel, the radioactive danger of transuranics has been much exaggerated, etc., etc.  All these meretricious arguments make sense if your goal is to make a profit in the short run.  They make no sense at all if you have any concern for the energy security and welfare of future generations.

    Unless the proponents of controlled fusion or solar and other forms of alternative energy manage to pull a rabbit out of their collective hats, I suspect we will eventually adopt breeder technology.  The question is when.  After we have finally burnt our last reserves of fossil fuel?  After we have used up all our precious reserves of U238 by scattering it hither and yon in the form of “depleted uranium” munitions?  The longer we wait, the harder and more expensive it will become to develop a breeder economy.  It would be well if, in this unusual case, government stepped in and did what it is theoretically supposed to do: lead.

  • All Quiet on the Fusion Front: Notes on ITER and the National Ignition Facility

    Posted on February 29th, 2012 Helian 1 comment

    It’s quiet out there – too quiet.  The National Ignition Facility, or NIF, at Lawrence Livermore National Laboratory, a giant, 192 beam laser facility, has been up and running for well over a year now.  In spite of that, there is a remarkable lack of the type of glowing journal articles with scores of authors one would expect to see if the facility had achieved any notable progress towards its goal of setting off fusion ignition in a tiny target with a mix of fuel in the form of tritium and deuterium, two heavy isotopes of hydrogen.  Perhaps they will turn things around, but at the moment it doesn’t look good.

    The NIF was built primarily to study various aspects of nuclear weapons science, but it is potentially also of great significance to the energy future of mankind.  Fusion is the source of the sun’s energy.  Just as energy is released when big atoms, such as uranium, are split, it is also released when the central cores, or nuclei, of light atoms are “fused” together.  This “fusion” happens when the nuclei are moved close enough together for the attraction of the “strong force,” a very powerful force but one with a range limited to the very short distances characteristic of atomic nuclei, to overwhelm the “Coulomb” repulsion, or electric force that tends to prevent two like charges, such as positively charged atomic nuclei, from approaching each other.  When that happens with deuterium, whose nucleus contains a neutron and a proton, and tritium, whose nucleus contains two neutrons and a proton, the result is a helium nucleus, containing two neutrons and two protons, and a free neutron that carries off a very large quantity of energy.
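
    Written out, the reaction described above is (the energies are the standard values):

    ```latex
    \mathrm{D} + \mathrm{T} \;\longrightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) \;+\; n\,(14.1\ \mathrm{MeV}),
    \qquad Q \approx 17.6\ \mathrm{MeV}
    ```

    with roughly four fifths of the energy carried off by the neutron.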

    The problem is that overcoming the Coulomb force is no easy matter.  It can only be done if you pump in a lot of energy to “light” the fusion fire.  On the sun, this is accomplished by the massive force of gravity.  Here on earth the necessary energy can be supplied by a fission explosion, the source of energy that “lights” thermonuclear bombs.  Mother Nature decided, no doubt very wisely, to make it very difficult to accomplish the same thing in a controlled manner on a laboratory scale.  Otherwise we probably would have committed suicide with pure fusion weapons by now.  At the moment, two major approaches are being pursued to reach this goal.  One is inertial confinement fusion, or ICF, as used on the NIF.  In inertial confinement fusion, the necessary energy is supplied in such a short period of time by massive lasers or other “drivers” that the fuel is held in place by its own inertia long enough for significant fusion to occur.  In the other approach, magnetic fusion, the fusion fuel is confined by powerful magnetic fields as it is heated to fusion temperatures.  This is the approach being pursued with ITER, the International Thermonuclear Experimental Reactor, currently under construction in France.

    Based on computer models and the results of experiments on much smaller facilities, such as NOVA at Livermore, and OMEGA at the University of Rochester, it was expected that fusion could be accomplished with the nominal 1.8 megajoules of energy available from the 192 NIF laser beams.  It was to happen like this – carefully shaped laser pulses would implode the fusion fuel to extremely high densities.  Such implosions have already been demonstrated many times in the laboratory.  The problem is that, to achieve the necessary densities, one must compress the fuel while it is in a relatively “cold” state (it is much more difficult to “squeeze” something that is “hot” in that way).  Unfortunately, fusion doesn’t happen in cold material.  Once the necessary high densities have been achieved, it is somehow necessary to heat at least a small portion of the material to the extreme temperatures necessary for fusion to occur.  If that can be done, a “burn wave” will move out from this “hot spot,” igniting the rest of the cold fuel material.  Of course, this raises the question of how one is to produce the “hot spot” to begin with.

    On the NIF, the trick was to be accomplished by setting off a series of converging shocks in the fuel material during the implosion process.  Once the material had reached the necessary high density, these shocks would converge at a point in the center of the imploded target, creating a spot hot enough to set off the burn wave referred to above.  It would be a neat trick if it could be done.  Unfortunately, it was never demonstrated on a laboratory scale before the NIF was built.  Obviously, the “trick” is turning out to be harder than the scientists at Livermore expected.  There could be many reasons for this.  If the implosion isn’t almost perfectly symmetric, the hot and cold fuel materials will mix, quenching the fusion reaction.  If the timing of the shocks isn’t just right, or the velocity of the implosion is too slow, the resulting number of fusion reactions will not be enough to achieve ignition.  All kinds of complicated physical processes, such as the generation of huge magnetic and electric fields, so-called laser-plasma instabilities, and anomalies in the absorption of laser light, can happen that are extremely difficult to include in computer models.

    The game isn’t up yet, though.  There are some very bright folks at Livermore, and they may yet pull a rabbit out of the hat.  Even if the current “mainline” approach using central hot spot ignition doesn’t work, it may be possible to create a hot spot on the outer surface of the imploded target using a technique known as fast ignition.  Currently, “indirect drive” is being used on the NIF.  In other words, the laser beams are shot into a cylindrical can, or “hohlraum,” where their energy is converted to x-rays.  These x-rays then “indirectly” illuminate the target.  The NIF can also accommodate a “direct drive” approach, in which the laser beams are aimed directly at the target.  Perhaps it will work better.  One hopes so.  Some of the best old knights of science have been riding towards that El Dorado for a long time.  It would be great to see them finally reach it.  Alas, to judge by the deafening silence coming out of Livermore, it seems they are still a long way off.

    And what of ITER?  Let me put it this way.  Along with the International Space Station, the project is one of the two greatest scientific white elephants ever concocted by the mind of man.  The NIF is justified because it cost only a fraction of what ITER will cost, and it was never conceived as an energy project.  It was always intended as an above ground experimental facility that would enable us to maintain our nuclear arsenal in the absence of testing.  As such, it is part of an experimental capability unequalled in the rest of the world, and one which will give us a very significant advantage over any potential enemy as long as the ban on testing continues.  ITER, on the other hand, can only be justified as an energy project.  The problem is that, while it may work scientifically, it will be an engineering nightmare.  As a result, it is virtually inconceivable that magnetic fusion reactors similar to ITER will ever produce energy economically any time in the next few hundred years.

    A big part of the problem is that such reactors will require a tritium economy.  Each of them will burn on the order of 50 kilograms of tritium per year.  Tritium is highly radioactive, with a half-life of 12.3 years, is as difficult to contain as any other form of hydrogen, and does not occur naturally in significant quantities.  In other words, failing some outside source, each reactor will have to produce as much tritium as it consumes.  Each fusion reaction produces a single neutron, and neutrons can interact with an isotope of lithium to produce tritium.  However, some of the neutrons will inevitably be lost, so it will be necessary to multiply their number.  This trick can be accomplished with the element beryllium.  In other words, in order to build a workable reactor, it will be necessary to have a layer of some extremely durable material containing the plasma, thick enough to resist radiation embrittlement and corrosion for some reasonable period of time, followed by a layer of highly toxic beryllium thick enough to multiply the neutrons, followed by a layer of highly reactive lithium thick enough to breed enough tritium to keep the reaction going.  But wait, there’s more!  It will then be necessary to somehow quickly extract the tritium bred in the lithium and return it to the reaction chamber without losing any of it.  Tritium?  Lithium?  Beryllium?  Forget about it!  I’m sure there are any number of reactor design studies that all “prove” that all of the above can be done economically.  I’m also sure none of them are worth the paper they are printed on.  We have other options that don’t suffer from the drawbacks of a tritium economy and are far more likely to produce the energy we need at a fraction of the cost.
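
    As a sanity check on that 50 kilogram figure, it is easy to estimate how much fusion power burning that much tritium corresponds to.  The sketch below uses only standard constants (17.6 MeV released per D-T reaction, Avogadro’s number); the 50 kilograms per year is simply the consumption figure quoted above.

        # How much fusion power corresponds to burning 50 kg of tritium per year?
        AVOGADRO = 6.022e23
        TRITIUM_MOLAR_MASS_G = 3.016        # grams per mole
        DT_ENERGY_J = 17.6e6 * 1.602e-19    # 17.6 MeV per D-T reaction, in joules
        SECONDS_PER_YEAR = 3.156e7

        tritium_kg_per_year = 50.0
        reactions_per_year = tritium_kg_per_year * 1000 / TRITIUM_MOLAR_MASS_G * AVOGADRO
        average_power_gw = reactions_per_year * DT_ENERGY_J / SECONDS_PER_YEAR / 1e9

        print(f"D-T reactions per year: ~{reactions_per_year:.1e}")
        print(f"Average fusion power: ~{average_power_gw:.1f} GW (thermal)")

        # Roughly a gigawatt of thermal fusion power, i.e. the quoted consumption
        # is what a power-plant-scale machine implies.  Every reactor of that class
        # would have to breed and recycle tens of kilograms of tritium per year,
        # year after year, without losing it.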

    Meanwhile, ITER crawls ahead, sucking enormous amounts of research money from a host of more worthy projects.  A classic welfare project for smart guys in white coats, it isn’t even scheduled to be fueled with tritium before the year 2028!  I’m sure that at this point many European scientists are asking a simple question:  Can’t we please stop this thing?

    Fusion is immensely promising as a potential future source of energy.  However, we should not be seduced by that promise into throwing good money after bad, funding a white elephant that has virtually no chance of ever fulfilling that promise.  I suspect that one of these days we will “finesse” Mother Nature, and devise a clever way to overcome the Coulomb barrier without gigantic superconducting magnets or massive arrays of lasers.  Scientists around the world are currently working on many novel and speculative approaches to fusion.  Few of them are likely to succeed, but it just takes one.  We would be much better off funding some of the more promising of these approaches with a fraction of the money currently being wasted on ITER, and devoting the rest to developing other technologies that have at least a fighting chance of eventually producing energy economically.

    Meanwhile, I’m keeping my fingers crossed for the NIF crew at Livermore.  It ain’t over until the fat lady sings, and she’s still a long way off.

  • The Theology of Rick Santorum

    Posted on February 20th, 2012 Helian No comments

    Rick Santorum threw the Left a meaty pitch right down the middle with his comments about “theology” to an audience in Columbus.  Here’s what he said:

    It’s not about you.  It’s not about your quality of life. It’s not about your job. It’s about some phony ideal, some phony theology. Oh, not a theology based on the Bible. A different theology.  But no less a theology.

    The quote seems to lend credence to the “Santorum is a scary theocrat” meme, and the Left lost no time in flooding the media and the blogosphere with articles to that effect.  The Right quickly fired back with the usual claims that the remarks were taken out of context.  This time the Right has it right.  For example, from Foxnews,

    Rick Santorum said Sunday he wasn’t questioning whether President Obama is a Christian when he referred to his “phony theology” over the weekend, but was in fact challenging policies that he says place the stewardship of the Earth above the welfare of people living on it.

    “I wasn’t suggesting the president’s not a Christian. I accept the fact that the president is a Christian,” Santorum said.

    “I was talking about the radical environmentalist,” he said. “I was talking about energy, this idea that man is here to serve the Earth as opposed to husband its resources and be good stewards of the Earth. And I think that is a phony ideal.

    I note in passing a surprising thing about almost all the articles about this story, whether they come from the Left or the Right. The part of Santorum’s speech that actually does put things in context is absent. Here it is:

    I think that a lot of radical environmentalists have it backwards. This idea that man is here to serve the earth, as opposed to husband its resources and be good stewards of the earth. Man is here to use the resources and use them wisely. But man is not here to serve the earth.

    I can understand its absence on the Left, but on the Right? Could it be that contrived controversies are good for the bottom line? Well, be that as it may, I’m not adding my two cents worth to this kerfuffle because I’m particularly fond of Santorum. However, he did touch on a matter that deserves serious consideration: the existence of secular religions.

    In fact, there are secular religions, and they have dogmas, just like the more traditional kind. It’s inaccurate to call those dogmas “theologies,” because they don’t have a Theos, but otherwise they’re entirely similar. In both cases they describe elaborate systems of belief in things that either have not been or cannot be demonstrated and proved. The reason for this is obvious in the case of traditional religions. They are based on claims of the existence of spiritual realms inaccessible to the human senses. Secular dogmas, on the other hand, commonly deal with events that can’t be fact-checked because they are to occur in the future.

    Socialism in its heyday was probably the best example of a secular religion to date.  While it lasted, millions were completely convinced that the complex social developments it predicted were the inevitable fate of mankind, absent any experimental demonstration or proof whatsoever.  Not only did they believe it, they considered themselves superior in intellect and wisdom to other mere mortals by virtue of that knowledge.  They were elitists in the truest sense of the word.  Thousands and thousands of dreary tomes were written elaborating on the ramifications and details of the dogma, all based on the fundamental assumption that it was true.  They were similar in every respect to the other thousands and thousands of dreary tomes of theology written to elaborate on conventional religious dogmas, except for the one very important distinction referred to above.  Instead of describing an entirely different world, they described the future of this world.

    That was their Achilles heel.  The future eventually becomes the present.  The imaginary workers’ paradise was eventually exchanged for the very real Gulag, mass executions, and exploitation by a New Class beyond anything ever imagined by the bourgeoisie.  Few of the genuine zealots of the religion ever saw the light.  They simply refused to believe what was happening before their very eyes, on the testimony of thousands of witnesses and victims.  Eventually, they died, though, and their religion died with them.  Socialism survives as an idea, but no longer as the mass delusion of cocksure intellectuals.  For that we can all be grateful.

    In a word, then, the kind of secular “theologies” Santorum was referring to really do exist.  The question remains whether the specific one he referred to, radical environmentalism, rises to the level of such a religion.  I think not.  True, some of the telltale symptoms of a secular religion are certainly there.  For example, like the socialists before them, environmental ideologues are characterized by a faith, free of any doubt, that a theoretically predicted future, e.g., global warming, will certainly happen, or at least will certainly happen unless they are allowed to “rescue” us.  The physics justifies the surmise that severe global warming is possible.  It does not, however, justify fanatical certainty.  Probabilistic computer models that must deal with billions of ill-defined degrees of freedom cannot provide certainty about anything.

    An additional indicator is the fact that radical environmentalists do not admit the possibility of honest differences of opinion.  They have a term for those who disagree with them: “denialists.”  Like the heretics of religions gone before, denialists are an outgroup.  It cannot be admitted that members of an outgroup have honest and reasonable differences of opinion.  Rather, they must be the dupes of dark political forces, or the evil corporations they serve, just as, in an earlier day, anyone who happened not to want to live under a socialist government was automatically perceived as a minion of the evil bourgeoisie.

    However, to date, at least, environmentalism possesses nothing like the all-encompassing world view, or “Theory of Everything,” if you will, that, in my opinion at least, would raise it to the level of a secular religion.  For example, Christianity has its millennium, and the socialists had their workers’ paradise.  The environmental movement has nothing of the sort.  So far, at least, it also falls short of the pitch of zealotry that results in the spawning of warring internal sects, such as the Arians and the Athanasians within Christianity, or the Mensheviks and Bolsheviks within socialism.

    In short, then, Santorum was right about the existence of secular religions.  He was merely sloppy in according that honor to a sect that really doesn’t deserve it.